seaborn-0.11.2/.coveragerc

[run]
omit =
    seaborn/widgets.py
    seaborn/external/*
    seaborn/colors/*
    seaborn/cm.py
    seaborn/conftest.py
    seaborn/tests/*

seaborn-0.11.2/.github/CONTRIBUTING.md

Contributing to seaborn
=======================

General support
---------------

General support questions ("how do I do `<x>`?") are most at home on [StackOverflow](https://stackoverflow.com/), where they will be seen by more people and are more easily searchable. StackOverflow has a `[seaborn]` tag, which will bring the question to the attention of people who might be able to answer.

Reporting bugs
--------------

If you have encountered a bug in seaborn, please report it on the [Github issue tracker](https://github.com/mwaskom/seaborn/issues/new). It is only really possible to address bug reports if they include a reproducible script using randomly-generated data or one of the example datasets (accessed through `seaborn.load_dataset()`). Please also specify your versions of seaborn and matplotlib, as well as which matplotlib backend you are using.

New features
------------

If you think there is a new feature that should be added to seaborn, you can open an issue to discuss it. But please be aware that current development efforts are mostly focused on standardizing the API and internals, and there may be relatively low enthusiasm for novel features that do not fit well into short- and medium-term development plans.
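The bug-reporting guidance above can be sketched as a minimal reproducible script. This is an illustrative template, not taken from an actual issue: the data is randomly generated, and the plotting call (`scatterplot`) stands in for whatever function the report concerns.

```python
# Hypothetical minimal reproducible example for a seaborn bug report.
# Randomly generated data keeps the script self-contained, per the
# contributing guidelines above.
import numpy as np
import pandas as pd
import matplotlib
import seaborn as sns

rng = np.random.RandomState(0)
df = pd.DataFrame({"x": rng.normal(size=50), "y": rng.normal(size=50)})

# The call that (hypothetically) misbehaves
ax = sns.scatterplot(data=df, x="x", y="y")

# Always include the versions and backend in the report
print("seaborn:", sns.__version__)
print("matplotlib:", matplotlib.__version__)
print("backend:", matplotlib.get_backend())
```

Attaching output like the printed version lines makes triage much faster than a prose description alone.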
seaborn-0.11.2/.github/workflows/ci.yaml

name: CI

on:
  push:
    branches: [master, v0.11]
  pull_request:
    branches: master

env:
  NB_KERNEL: python
  MPLBACKEND: Agg

jobs:
  build-docs:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
      - name: Install seaborn
        run: |
          python -m pip install --upgrade pip
          pip install `cat ci/deps_latest.txt ci/utils.txt`
          pip install .
      - name: Install doc tools
        run: |
          pip install -r doc/requirements.txt
          sudo apt-get install pandoc
      - name: Build docs
        run: |
          make -C doc -j `nproc` notebooks
          make -C doc html

  run-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python: [3.6.x, 3.7.x, 3.8.x, 3.9.x]
        target: [test]
        deps: [latest]
        backend: [agg]
        include:
          - python: 3.6.x
            target: unittests
            deps: pinned
            backend: agg
          - python: 3.9.x
            target: unittests
            deps: minimal
            backend: agg
          - python: 3.9.x
            target: test
            deps: latest
            backend: tkagg
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python }}
      - name: Install seaborn
        run: |
          python -m pip install --upgrade pip
          pip install `cat ci/deps_${{ matrix.deps }}.txt ci/utils.txt`
          pip install .
      - name: Cache datasets
        run: python ci/cache_test_datasets.py
      - name: Run tests
        env:
          MPLBACKEND: ${{ matrix.backend }}
        run: |
          if [[ ${{ matrix.backend }} == 'tkagg' ]]; then PREFIX='xvfb-run -a'; fi
          $PREFIX make ${{ matrix.target }}
      - name: Upload coverage
        uses: codecov/codecov-action@v1
        if: ${{ success() }}

seaborn-0.11.2/.gitignore

*.pyc
*.sw*
build/
.ipynb_checkpoints/
dist/
seaborn.egg-info/
.cache/
.coverage
cover/
htmlcov/
.idea/
.vscode/
.pytest_cache/
notes/

seaborn-0.11.2/LICENSE

Copyright (c) 2012-2020, Michael L. Waskom
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of the project nor the names of its contributors may be
  used to endorse or promote products derived from this software without
  specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

seaborn-0.11.2/MANIFEST.in

include README.md
include CONTRIBUTING.md
include LICENSE
recursive-include licences *

seaborn-0.11.2/Makefile

export SHELL := /bin/bash

test:
	pytest -n auto --doctest-modules --cov=seaborn --cov-config=.coveragerc seaborn

unittests:
	pytest -n auto --cov=seaborn --cov-config=.coveragerc seaborn

lint:
	flake8 seaborn

seaborn-0.11.2/README.md
seaborn: statistical data visualization
=======================================

[![PyPI Version](https://img.shields.io/pypi/v/seaborn.svg)](https://pypi.org/project/seaborn/)
[![License](https://img.shields.io/pypi/l/seaborn.svg)](https://github.com/mwaskom/seaborn/blob/master/LICENSE)
[![DOI](https://joss.theoj.org/papers/10.21105/joss.03021/status.svg)](https://doi.org/10.21105/joss.03021)
![Tests](https://github.com/mwaskom/seaborn/workflows/CI/badge.svg)
[![Code Coverage](https://codecov.io/gh/mwaskom/seaborn/branch/master/graph/badge.svg)](https://codecov.io/gh/mwaskom/seaborn)

Seaborn is a Python visualization library based on matplotlib. It provides a
high-level interface for drawing attractive statistical graphics.

Documentation
-------------

Online documentation is available at [seaborn.pydata.org](https://seaborn.pydata.org).

The docs include a [tutorial](https://seaborn.pydata.org/tutorial.html),
[example gallery](https://seaborn.pydata.org/examples/index.html),
[API reference](https://seaborn.pydata.org/api.html), and other useful
information.

To build the documentation locally, please refer to
[`doc/README.md`](doc/README.md).

Dependencies
------------

Seaborn supports Python 3.6+ and no longer supports Python 2.

Installation requires [numpy](https://numpy.org/), [scipy](https://www.scipy.org/),
[pandas](https://pandas.pydata.org/), and [matplotlib](https://matplotlib.org/).
Some functions will optionally use [statsmodels](https://www.statsmodels.org/)
if it is installed.

Installation
------------

The latest stable release (and older versions) can be installed from PyPI:

    pip install seaborn

You may instead want to use the development version from Github:

    pip install git+https://github.com/mwaskom/seaborn.git#egg=seaborn

Citing
------

A paper describing seaborn has been published in the
[Journal of Open Source Software](https://joss.theoj.org/papers/10.21105/joss.03021).
The paper provides an introduction to the key features of the library, and it
can be used as a citation if seaborn proves integral to a scientific
publication.

Testing
-------

Testing seaborn requires installing additional packages listed in
`ci/utils.txt`.

To test the code, run `make test` in the source directory. This will exercise
both the unit tests and docstring examples (using
[pytest](https://docs.pytest.org/)) and generate a coverage report.

The doctests require a network connection (unless all example datasets are
cached), but the unit tests can be run offline with `make unittests`.

Code style is enforced with `flake8` using the settings in the
[`setup.cfg`](./setup.cfg) file. Run `make lint` to check.

Development
-----------

Seaborn development takes place on Github: https://github.com/mwaskom/seaborn

Please submit bugs that you encounter to the
[issue tracker](https://github.com/mwaskom/seaborn/issues) with a reproducible
example demonstrating the problem. Questions about usage are more at home on
StackOverflow, where there is a
[seaborn tag](https://stackoverflow.com/questions/tagged/seaborn).
seaborn-0.11.2/ci/cache_test_datasets.py

"""
Cache test datasets before running test suites to avoid
race conditions due to test parallelization.
"""
import seaborn as sns

datasets = (
    "anscombe",
    "attention",
    "dots",
    "exercise",
    "flights",
    "fmri",
    "iris",
    "planets",
    "tips",
    "titanic",
)
list(map(sns.load_dataset, datasets))

seaborn-0.11.2/ci/check_gallery.py

"""Execute the scripts that comprise the example gallery in the online docs."""
from glob import glob
import matplotlib.pyplot as plt

if __name__ == "__main__":

    fnames = sorted(glob("examples/*.py"))

    for fname in fnames:

        print(f"- {fname}")
        with open(fname) as fid:
            exec(fid.read())
        plt.close("all")

seaborn-0.11.2/ci/deps_latest.txt

numpy
scipy
matplotlib
pandas
statsmodels

seaborn-0.11.2/ci/deps_minimal.txt

numpy
scipy
matplotlib
pandas

seaborn-0.11.2/ci/deps_pinned.txt

numpy==1.15
scipy==1.0
pandas==0.23
matplotlib==2.2
statsmodels==0.8

seaborn-0.11.2/ci/getmsfonts.sh

echo ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true | debconf-set-selections
apt-get install msttcorefonts -qq

seaborn-0.11.2/ci/utils.txt

pytest!=5.3.4
pytest-cov
pytest-xdist
flake8
seaborn-0.11.2/doc/.gitignore

*_files/
_build/
generated/
examples/
example_thumbs/
introduction.rst
tutorial/*.rst
docstrings/*.rst

seaborn-0.11.2/doc/Makefile

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  clean      to remove generated output"
	@echo "  html       to make standalone HTML files"
	@echo "  notebooks  to make the Jupyter notebook-based tutorials"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	-rm -rf $(BUILDDIR)/*
	-rm -rf examples/*
	-rm -rf example_thumbs/*
	-rm -rf generated/*
	-rm -rf introduction_files/*
	-rm introduction.rst
	-make -C docstrings clean
	-make -C tutorial clean

.PHONY: tutorials
tutorials:
	make -C tutorial

.PHONY: docstrings
docstrings:
	make -C docstrings

introduction.rst: introduction.ipynb
	tools/nb_to_doc.py ./introduction.ipynb

notebooks: tutorials docstrings introduction.rst

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run \"qcollectiongenerator\" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/lyman.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/lyman.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/lyman"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/lyman"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

seaborn-0.11.2/doc/README.md

Building the seaborn docs
=========================

Building the docs requires additional dependencies listed in
[`./requirements.txt`](./requirements.txt).

The build process involves conversion of Jupyter notebooks to `rst` files. To
facilitate this, you may need to set the `NB_KERNEL` environment variable to
the name of a kernel on your machine (e.g. `export NB_KERNEL="python3"`). To
get a list of available Python kernels, run `jupyter kernelspec list`.

After you're set up, run `make notebooks html` from the `doc` directory to
convert all notebooks, generate all gallery examples, and build the
documentation itself. The site will live in `_build/html`.

Run `make clean` to delete the built site and all intermediate files. Run
`make -C docstrings clean` or `make -C tutorial clean` to remove intermediate
files for the API or tutorial components.

seaborn-0.11.2/doc/_static/copybutton.js

// originally taken from scikit-learn's Sphinx theme
$(document).ready(function() {
    /* Add a [>>>] button on the top-right corner of code samples to hide
     * the >>> and ... prompts and the output and thus make the code
     * copyable.
     * Note: This JS snippet was taken from the official python.org
     * documentation site.*/
    var div = $('.highlight-python .highlight,' +
                '.highlight-python3 .highlight,' +
                '.highlight-pycon .highlight')
    var pre = div.find('pre');

    // get the styles from the current theme
    pre.parent().parent().css('position', 'relative');
    var hide_text = 'Hide the prompts and output';
    var show_text = 'Show the prompts and output';
    var border_width = pre.css('border-top-width');
    var border_style = pre.css('border-top-style');
    var border_color = pre.css('border-top-color');
    var button_styles = {
        'cursor': 'pointer',
        'position': 'absolute',
        'top': '0',
        'right': '0',
        'border-color': border_color,
        'border-style': border_style,
        'border-width': border_width,
        'color': border_color,
        'text-size': '75%',
        'font-family': 'monospace',
        'padding-left': '0.2em',
        'padding-right': '0.2em'
    }

    // create and add the button to all the code blocks that contain >>>
    div.each(function(index) {
        var jthis = $(this);
        if (jthis.find('.gp').length > 0) {
            var button = $('<span class="copybutton">&gt;&gt;&gt;</span>');
            button.css(button_styles)
            button.attr('title', hide_text);
            jthis.prepend(button);
        }
        // tracebacks (.gt) contain bare text elements that need to be
        // wrapped in a span to work with .nextUntil() (see later)
        jthis.find('pre:has(.gt)').contents().filter(function() {
            return ((this.nodeType == 3) && (this.data.trim().length > 0));
        }).wrap('<span>');
    });

    // define the behavior of the button when it's clicked
    $('.copybutton').toggle(
        function() {
            var button = $(this);
            button.parent().find('.go, .gp, .gt').hide();
            button.next('pre').find('.gt').nextUntil('.gp, .go').css(
                'visibility', 'hidden');
            button.css('text-decoration', 'line-through');
            button.attr('title', show_text);
        },
        function() {
            var button = $(this);
            button.parent().find('.go, .gp, .gt').show();
            button.next('pre').find('.gt').nextUntil('.gp, .go').css(
                'visibility', 'visible');
            button.css('text-decoration', 'none');
            button.attr('title', hide_text);
        });
});
seaborn-0.11.2/doc/_static/favicon.ico    (binary file; contents omitted)

seaborn-0.11.2/doc/_static/favicon_old.ico    (binary file; contents omitted)
rLrLrrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLgrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLurLfrL!rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLwrL۰rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLɰrLrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLUrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLf
rLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLİrLTrLrLhrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL8rLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL3rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrL̰rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL!rLsrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL 
rLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL,rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL]rLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLErLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLerLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL"rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL
frLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL0rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLqrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLWrLrLOrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLǰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL)rLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL(rLݰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLNrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfr
LfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL@rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL[rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLܰrL&rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLҰrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLgrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLŰrLrL°rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLirLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLvrLذrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLxrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfr
LfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL5rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLhrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLNrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLװrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLprLհrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLذrL(rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLjrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL}rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLprLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfr
LfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL.rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL@rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLڰrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLΰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLkrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLirLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL~rLfrLfrL'rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfr
LfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLjrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLϰrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL|rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLװrLurLfrLfrLfrLfrLfrLfrLqrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL{rLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLgrLfrLfrLfrLfrLfrLfrLfrL7rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL
frLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLirLذrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL~rLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLðrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLurLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLorLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL3rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLOrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLްrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL°rL 
rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLİrL rL۰rLrLrLrLrLrLrLrLrLrLrLrLrLrLհrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLprLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLѰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL/rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLgrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL{rLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL 
rL԰rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL)rL/rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLFrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLFrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLqrLŰrLrLrLrLrLrLrLrLrLrLrLrLrLrLڰrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLgrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLҰrL 
rL\rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLErLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL'rL'rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLYrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLްrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL4rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLܰrL 
rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLirLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLΰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL8rLTrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLqrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLΰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL/rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLްrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL;rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLhrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL
frLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLͰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLѰrLrLQrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLerLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL7rL rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLذrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLVrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL[rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLyrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfr
LfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL)rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL-rL*rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL1rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLǰrLrLrLrLrLrLrLrLrLrLrLrLrLrLҰrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLlrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL۰rLrLerLrLrLrLrLrLrLrLrLrLrLrLrLrLrLsrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLӰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLUrL3rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLf
rLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLɰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLҰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL>rLҰrLrLrLrLrLrLrLrLrLrLrLrLrLrLɰrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLʰrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL7rLrrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLkrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL{rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLCrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrL°rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrL3rLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLfrLf
seaborn-0.11.2/doc/_static/logo-mark-darkbg.png
[binary PNG image data omitted: seaborn logo mark, dark-background variant]
Q+!"JXv#[}' !'Ʋ4L(Lvr<.s,gS[sn?߰iD)&<NriTTv{0mAL).X;a!ӶP(.qL=:XvB}.55qmYFS8֕~0b?mLr *gKLxO+nsfY v1ԙڗΫ]#I%nϪnڿ\ Fq%4k.QvRX9/췬 tY0 ~PQGI<(|nIœ?.3 ~jcOf$9|Al{j :kQs`W*(L0ޮ;xzdl=lk2rwS3jÃ>Ln7oܘ_8EBPlvz*xckTwMU>Iⱡ1\^SEcU;{c5 2 *|}ywy7?^hg8:PCZFMU!S Ϗov>JhTCv>`on3o2U]{'Mxhv-6F@bо^_ū[˺|EN7O(2r>t_8މi_[ެס]%)XLm%Bؙ[gՍ]shz0ak;l*"̲iksTΰ_*ݝ |6z /'(k&ȃyn-0kC"4dv:c([Njñ7cJN.w͇.<9vl΁SDnP"u8Dn ?6ug;`Ësxjx2gv& Dgs$<:4 ZPD1v#ЋVrt`c0drmDD)6GߓCe爄D) _굮$ cH hT}`@VR6Ådž`׭)(iVARBCLծF7pR07Wnc$y|S}xa|+2cޤ~s0]^L).K2>Iv,g[ZHbFTp lqH+(0t$k^zG=~}uu^av:spYFR^hEʳ1=FC`kްMϑ)\ ܳPBsy C pC}*iư;@*骗> ](F[ݏ4Wo:[&_?$y}{}"Obe1'1trVu TdMhĜ& cxecI=`ĵ6f{]+L\6᳃=$I=s0tGPQHV+`äxϠhUn 2ƵJV*c oTۮ^$a|0S:Sa}cccme{$8?T@r[̃C: T1n$A/pa.Ab&6n< <[ tI}Np$ SdžPRj%\ Fiz Er?KŐw mnOGuHQSU|vMŻ27 D֚#S-)]7fBdPR qlrhO&dPUpuv{x{*'pv!yY-&l3vj?[L1S@7BP 6ka8)c 0 nrΑtCQM1HZdsK82)uy= vU!(Llv0 vA:&DO몧d$d3"O˄eRLCH'ˠv"BKn)R&x0õ02aO!Daءyd+VH()uK4jzEe ۅ 6قi_P븾 ]gA Y("/!/bPD2_DT15xQ bbkJ\.Dv<wWz?3 $maxL8K$KXʦMu1 hwv_Cr~/͵e˩^Ga lB!g"zT=GM. lW7 $ 5*cH䱙nsWkWkXOC7F{^ vy>9iUTW#_&Yr81 cc6^LC4 !(b"nF,r0: *i5V9,vmcHWʸNbT8%fZj 5d*aJ5B:F{A{xJfL/j{<\޻8S`5F:L¹7/c+V6jTR y59a2Guq}N_J1lsdes9%?Ebu9F.kh BP+ -È]  Y?5*IJ'17wլ6ق݉J3z~(LŝL~8VSig [ߘ_` chSka*dȏPO*8; 7Fc8j.b_j7UTZkag7ߦ-@p3AH³RCRVtU6a(?Rec6 ؝xnt}]UzN7MĤ2%Z>trzsXOc%j*B՜f'Y0d؏!kc۝~băaom9D"Ϗkz9xusYz .!&/Z#N:0 &-*uGBe cJXW]1 a:4x%vقFg.?р݉gF[OisѤ`k&%RlNSD i(*;ATq{R<@&sjl'ɶ=wXdD0 \==!C.E " a1 hon+`h5B{kt15m"Bi:^ҳ\ klPww7mxq|~`댚r UU][]a p"Ó9\"*c3ϴ4Ur&\T݉@aiz y3kƏL^ /3W3Wa̭2,)Ka7WӴIp0?\$ɐtoL9Jo >ᏖnR bz䶑ߘ_xAi"8*mke|eҹ̢å=w⎺}xtht` c8ÅÓMJy3D<3:ݵT3U+Pyryt8iχ{[l<9<$C ca)VNSSOz≗6lv tFe|g[}ٔ\o%S saA9mJsq3yIv ?j[)#":hq%JX#S,O=Vf!Lxrb)KkUgqk"c4D\z ;\uL;7p5%;S&Tg\ds`%recC`ԦP[d[aU 1ffδyn_OlN(!q4$&e|ib.6IƵ0>cj@s<N1αv$)nlA$4+m:B~Y^Aur sPB14KQq3;}<:4hKSnTE^q&ioBlon1$uU{=CoᅱYȔbT| .-k;B ӣSB$BteuUŝIⳝ}P#(Jo,o 8,2. 
qu8hVYSs|!$[`Ũ[:\xgg]WW]xlE[AP81ww7vN\ FS⛻lgrulX$KCa\¥avydɗfǔ/7{c~ADJsƻm$bD1N'AHRH6_`.l{, 1 =Iov-q_7!2)O1ץ'|e 5UAʄ&Na|v'~d Y,gH@o<~S )8TƱM!_s3g B8Rm$*us' PW>[{GpmtJ9Xm]bt9ZNWi8av7BPb"wl; z ЁI99ӎJ@F=~ ԡ2R8dˡPd L%@嚁^9A]|' wUE;x}ND0:ɐosݷ)0#e faבwqc~@JmdO0  +4.bN8ǨǏ\S c`citJ>Vf|0ݓ\F/ʌ3Dn$K;}c.nrHa 銾[4:iQ[b=3⛻b̡஦TM ݆Gxx,nq$aD)*zζ{g:`%{ZA]e,&n\ 9;!?7:(u|w-aX7fzBO;6E&__Qs!P95S_.qlG#8+֊ xsg'ێ'0lYRΚ7G yxdHnaBd2"D.-6/~ci"j"̭< Tpk7ֶM\!m ǷO`.#øtcd˜ *=V!wbLJOC=/~U&"B#!Ä7`yg #z75ww7:ɔb•# 2`ݍ"ØJAl 㳰Ir'UUnNق'`P7>IN7;V|u\IV&׷Ʈ8,2щaej|QKGF1 b\W\1p{K'y6A8rtwRBrL;7$J |ad 5UA"R5U%NS<>ppcVtIMU9[ nb|1_XƗ';nG M9J{̈Ӎ'c8s\a4A)l9Q6[Sk Z[+xtbN ekJfaH XΦ]os8$LxNI?3U{e,&>W?53VlrxC},CLN\SO߹qJұNg=pΡ4fgaC =U*L{91jy^. 9=-uBAD( _Epԙ Sԝ'QC

;E"ݽ1S-㕍%Lzx(2QIm+6rQZl0b.ϙɺÍ]! 9>G{q}j.D=eDhk{b"3ӈd~ hfOB=nbԣ*cxgw[㳥#N709 lE,Yӵ)ާ>~:Ĥb8엚=K@Vf?oZO O@i4;%.X[a qf> 3PBsy0kwpZelI>Zqs Ť}70ΑH'q5;W{v:_AM7^ ]d|;gxh,/̌c{6b#n/ :q_ݍPva1<2Q 31VwU>McgЉ9{)piwDZ[ q\~m20-mƷx퀈PvO@t{Ou7 Jl3*$Bv0ĪEGbbc cXΦ:Ig!Ȉh%b抻M1 Jr&I@kR tཱུm!&9i,p:p}z On5jǧ{z@r ewᮦPN1ۧ9|7kᾣVYL({T.0IVsԀ,~h5n4Әr~n2S*Kxwu봡@ I |%<:1gtkG Ŝ?+@,&\ﶝU0 u.1fzۦ9 >k8:jCؤvR8Llac=6$og/)yH+dފ׿:^&Ĥ@ hKMպ_oV{shǤ/DISO~!"E=_FVd]\ nO#S V rj X(EO!6CI53*g+u5=ft;xv,j3RCLT%cݐ'9.o9>kwְ6/*o8Ow>F|<=;ky)l ]="? alb"ݶ Ŵ/׹A>+\ u^$c3lJN !HB30>K>نK\F2ʚ-6Ae z,dOM ٝ ڕPL슅J ԓh?xci9f+ǿc|۸>='FvhB 'S߼1 Fyh;2 }Sfch,j0|^ϝpxlh U8(q,6B KYk#ƌB%l52K  z8 ?Z8+U)!,㙑cҏ2um4È ҥs+V{]] $='@ H ,ۼ^{e{Ѩ9?${]H3#iest3#}tC4>M$.# GxO]e_g |dAV?\Ew Cf`dJeF/.xfj$뺫jq4f;(\fN;[jpZ[5AGQU"8O_J[z:+k>/7vR^RD&2 &;:\Ff4#Ӟǃ[)љM4ι'0[GhL$;d+a"0 \2jNVF4!15Qsf`f2soiG.3``{}5 _\Q88s 5YW2$КvbwXʜIEUxnz4ۄuoUFvַpZirav×_N!X 87:jmM+%4U=-dF{\\$P^i%˵VYf2Qέlk;MF#q%Y `0h0`39lǠ{ :*kVׄmSefavu[ZyBcl0r5K(UUe1\iB1EkUf:B[WRƎ 8(nz4e%6y[J!Ud Vƒ=V7յm2NGE53cSjUg991:*X\go߿b@oM=gg@U ؀i3 9;`KM>74ԶnBLQxqvE1&&Kʸ Ѵ0Cr ZLl4a&Y3TQU +DJBU쪏$HlOhޝZB:*kØF+܎7f6 ')TJh._]t̥q|m!DYtXUsTC;V$$P^뭀Wg4'4 MchP0Nrylz-)[Ux1tT認QYwMBQX /3^`:#'if[]%f˫ 0D;e+-Vr`x;˅_g#ȅᅑI^gۊ̴UTYJ@|owM>QOuv,^`b2qoW?>7C5b4UUє~WSTxS#$2-<;=}TZm^@syefKu,ef Me9[vJ E,Ci%w0rkBјed4MhYEQUuם悪WM$(Km6驮#*ˠp}Ci9vgu#6B*qEűiڳbƲ l&34G/;MD%p?ЬըfMBQ\QXG\Z`*}U12-5TקC\Q& ~e=kl?1WgC+y=LF#}5 TYms㋆WjJZ۸09 pxh99F!Vx@i0訬ֳlAzWx$Pݽq¨wӋ3+Y8,asGJLf]A#( 4n8gнȰŮVzjU6[AN^VaYzJ̯24WTVI aн7"vUYzZ**QV9"=41./eoBb 1䥳z67V סrӪLM=R\QW c<9qR50Ӎ%gt#PUN-LǰrWJjJrzl$&$ś k?*MĔӬUgn_f4`Vʾ6ѐ<]Ad0F?^BQK;7\!6Sie - v)o948aQXrhSJMȎ5& /_8{W qE0yA5 ԗfj[ 5,ƭWBQE  W[O[y E%yUNmĤ{ d|M(G'(Y!XSs<{C8 U5zW.(utw6]e"QTPPO y5[=.6ntUpm㉄xqvӦYnkjQ h,+_R0sC<[!Dzh%6<;Q+PB2Sl@ihjW_RFŚ6`*  qf3񺨵R^}Ji-&J*OM &Iyrr4`ֶ6Lfno%gE#_a[/{##&"GNN,Xӷ|C9Tڴx;:f;jrH(ݷs놝P<7b=bKsh\u.5[kMOW^IEU9<55%eѫkʾjޮ~&sNtBQ859Wq]]w儚ʫI Dw+̑ ^BNNΦ V f@p紃`SJ^ړ@k2+,VUhx(z6m}5 40 S#UK騬A%}+ ^utdf_S;eFɀgcQBWA~܏ #5ش>\])m!#n%+"(M$37sؽY ܘL]I٪ϹRS@=%&󕑷᮳* @%9u˾3I hVױ}}:TUEQU(>7HP, tz:& 
7!3݉8W,0]l]U-g8B4gнȄ}ğڒRT7ZQ¹y^bI߄Bį,'~=SV14BztLtZ2 l%oFQ+F$U%(\r/0s_\Sm+jk;w7*cOꬬaOc[NGJF#f] jha9j4Qbѷʽ/OB'YtTb*=@2{|-[)P>l0Ҡhi1ל5g2YQ"8ONjh $~v &=mlgнt{%ضWamj[iwxyj% u4ҼKDBMb@PZ`$]_l@)S[灴ӴWqK[t(co髩gG}sFkJt%P^F¼4>ËcxCB~Ӷ92=LP,;>lm]hIXrA@UU\e0 0s~s9<qa2(\s旤Blr0^?-+炖J=MR>h{4j,Xn]ʶZ+ kRpǦ91>?!#g$P{-Y,r{406DalV'4nY).-r϶?_nRf|v-/}<48`j;P"8_` }k]5'm&8z-x,ԍB޴O8IJkaCi~ټ{ }eJk4OyfrUk.&nb]dNх1/ "QNNrbb9_`#(ʘö#DCY@&XG=z&P*1%A(#*X&*ʱىke1E᥹ hi הRe-M۹>o…EO08 6B!VmN(g{@YjPff~RQ0\%"Wmh2訬^HٲfuG"c+o4qCDqʌ+&X^^mUlkڐզ=>005r0s !o#GdE(u`tg&WJN鬬a_SP\{r$O Vz] {\W4TVaa sjrSs2-"fuekWE)uE(IEU c<9qY h"ͭ]*ڧGe#FV׳&_i MgyJ(<''gwyf]!Dѵ\O,u(HoTUS#es~κf3>M&f Rܑu?\^ɨ? 2 1H ..rjry EbB1u"`ݣ@IEU i;Wuj nnWb@s&Ʈ{LGQUFݜM7hd !`O 2$Pnȁ֊**UZۈZ.ι8’BL{|i?_j`5jo=W/5PjɆˌ} OL\ĜZj,l%I%\L{|B,7ZGQ Qʵؐs`b4roW?M/oulm,0 qi#nqٕ-BlRtV>]Fj^EnD)5[詮cKurNp,C K\_½,k!Bhm̩7XU~(@ 9d00 TJ MD hj2]%܌,rdGBU@4Řنt̩3 K_`3w,`bȢQI"R!Xio@Yf`1ۘ߹W>(@jPSBK$rYt3r3#Haq!"AV麟|bBHs垜wncܪ |D_2aaH!bׯY\G[yЯծj[ޢ?%./`]B!-9:3F(E(n@sqd%e(ccce#_R!|Ū6VDKyt7@U{ 3^?SnSn/KB!̢`YЭH[j4a5m?;PG!dֻ{+h6]C"83^?3ڏB!DAGi`L;yXYBe QUr9_9_yYoW`YB" ,VS+,6 nG(h24W~MF-!B(e @)Si᭪*P@E2 eh.+BؘLI6XJj~OȈB!Kj2LM rw ]&B"JqTllLOgBR HY@7h#D2:)BqH6:HCZ 5v]i a !B9vzkfBQLMALw !BW0VH.vyj5(zC釯(4&Ɉhd0$5^ ($WBlzPj?6}R3l@V3C @Jui U6ʬJ-JԯJ-ԯɿOw4j)J8'E_c,_s4F0]K!Dkԥ)N\EZ4 ) hjJmTPSZBuY թ?kU)XF22wb1d S{CBB VJ 0m6ZJ]kJ4FkdRDce94TXYFcE9 RBi~> ,_f!dd)BQ%j !vڐ@74  ,FS6ZBdh0TYN[MXQNUiv#L[MիM( K ?>YBl0Ƭh€U{(Qn@i31QT`4Nh0PQF{Mm5TR]ŔMz'f x͇)wFhne9;@8”ϴw׀z&!ȥ`$+, JfBql!2hNyk =_B=5tVF +>< n2Kea(P`8J0@(r8F4'(dm4&w۬f*Jm(/RQfMԚ%USU^U*JllkBa=~<>ƖWU^BkCմ7ZGwk6kO]F*k UUs18be/BheZ(/?"r31U7co*=@).F\,kNu4|0` cWh4PMo{=mRՒVpegz -,qiťy%Ф5L@V;6+zk˨0+Eq.M.pibT\YAQT&=L{x&zQ֮&Vvx]mMjk``p>0\lB\+G( vGAO]|*H85B)E͋S}y;ZHGmUFjp \<#.8o$Y7n|e6v7=tbड *H,་s3 \[=B )'SKʫaud]8.E[M%;ZHceOMOȉ97" 9‹&x6 Fw7- Mbfw{3ۛ+ # n.p~fA_ ie ˭D(ӯMh0S_ÎF6jGv<<.M3piywuE~ Ebj1G{m5V뺏h\7Ƅǹy,5Bl&@iн[Wg(+42ҿK)bzjq7iU!1Ź9"QL.Ms4>4Ԕ ][Z)iO 骫]vf~8=5{9 !FjdJ]Y&Cɔw4!tVHM=+&{=h#yiEӢ'y &coVۖJZ+ypg?K^835_Ņ(JZ2)V R3՛ [U8ڛqt4S[Vѵ`.L NhC"pal cc'*cNnfL:^vUYWû.ShMy 
2B4SIUU5eUSVގ4Ues p΍38 iD֖|<{Vm84 4ֱ7ƥ%f97 g QdѨgTF(@=-cYM&v5JϺS;zua/f E9|r'ZLk.nAQ&m- lki sfz3/yשB\RIJy+$@2:a.omMi(_irsF8rz9Y)6@4t~fnupnKi-9΁v\eNLprbOH6QHx@ARd;hzuWW^ʾVuP%? (R3ómtpJ4Wq>۾E7'&f8; (x"mԹ1GF(fbj-a16kt_-dr]ȑxBI_j1qö:zkoÜf$pzsvzccLȔy+G;e2hzVQs\S-UΞJ,傡(GΌܩQ.ͣJqQGNqe%V=mh\yl憮6njc4''fdy&i92Bޔ#Esj2hvkt]( .!_ ,<}|QWU{p }6iu< bYN !Ե7@e(j V)N[u%7hoƦs4rb3'y]'6%2y4y4 yCIb2-, :\q5rm2WJHX6&{:}o;53'^Z .M.rir1o}}8[1]i,vǹ^QK!6@Bq-4E_6ji(uZ(*x'.LB\#yeMe)=i_yJl4&OjofINN̋DըdAR$P恬G(ˌZ:]׸A|O˻=8x!3|w7svnم%. ޸w;ω^b\^ hJeб5WSNVar{ک.-l( /Lċ4- J F87:GE;n7Vxb-ܲ%NqnfAyB iU!9BT٪ކZ\eK:Z#^C "Γ zx?ea6߾ccSco9:D("BlI^:?IGS ޚpjn<K!!5CL@Z0:v62%>fxsڑBlb#s;ܼ5haOG c.s~fAYMK+C<)Gs!PjE+郿֚|WW^m}]GFc =5̣ϝgr޳>B@(>s={wtvٸbk,9>1CLJMfN⪮/@YB+Pj}3䫮jnf[K洽7.ebe^÷ঝ]W( {sߎ>^䅑I9QlZ9qE~UIZ:F(1m4Bi4m}]GN̹sx!2sirקi)[sρ~JV=z<;4΂?=b}ieA>QSVݭEmyiڶ|ٳ] !٢'ȷ}zwkn&#་×YDqjMy 2BG(5/h}3lJ[trSO;#,Kp0 ub3 GbNy7Vs{_{:[4wa{ #CQTq;x;qG,[+y.sC86E$^𵜅МLȦcSN'-шh okm.Mڅg|s1ٔS0tP6fSh0hlnd={'XD!Dޘw?|{oЭ۩,n;ƙkde,SޅCH%Yܔ&PZͦuɍ6mIMY6 gsL֩B`(<}?{[=t6^hd_W+3=bue^MCԕ㊒vXjZuW6SjI&YH !D'913'qwo[{Sl#G;AF(@*;p=7Us΍6n2?x<d9,m`hZjym;у$xDaQQs@t k}Qײe&m&|Y%.G !aw 4K 2tưs-s2hsfh>˩K9B%2'l੫.n[#Uo.W<N}kpSYnB\?ā6hsf>,BBl>H>ˣϟݼ]tqd@+_)! >P:4%]p<>eUʉ6%79V^sϬϳC LΑ QnYN{OUUI9ΏeA!k?:y4U^hތ# ,H=KCf Jݳ2BG(By0>wEy+.K ыk`[O/,922KcӄHC@'#;hHމӵƖz <~lr$ږz>wucZPex3(@yQA EUkKZ-&Xs9!IUU^<7&n5wpѳb=8yGF&4'IMY 4pUU%(5KT2*qJ͖۴WIBMjl׳^⾛rM[,n[67\$gW1NI,2Y b|08<}[v\%859ˑI|uXOZF'Ae^@cniW[b24} P$&O(>9lj䡃;qG'M΍=팺̼Vd6 ~h6nMtt$P\(_{Fwc}_긊"pq| Wqߍ[F;U+驯`$K½,g)or.:펢yQ42E;Pj|[$P !X˻̿<-{xli_y:fN{wwsi~\sL{RWɠ i5˱MRcL!bqgN ̉a:xmܲ zxCaN4!h4}Qs#9P|2EQU+2 4U05yBE&_6VŪ.-}ܳ󳋼86*u+PTe}E%P1/ ,ǢTXW,Z_%Rx}[/mt70`o?u} wrM[ٹeb2Į&ۏNpnq+*G8>1Ëc,1y_s(Т )H mu`sdl^!Dzx&Zi+Ķa%+;G]OsvzhB6nR+ct7g<ܑ7]^s}e%Vò.W&Bll}mO/'I<ކZzjy8g9>1ø#yQL)9r!'#(k5E# 'qvd6B!xE?z:9{z)-xlfwX 811É:z((UV_Qƽ۷p-L}9K\6 9 E5XtrxfWo4L]6dcB|6 5PTUs^ B!6p4"u6v4*~YUi {;[ۙ P7#.7c.OfO>䳢W<*mR!X%?>G?lG_V[t.-a_W+Z#/ywy_2DQBSnO;D)@R:JG_ϞY !O,pnds#scǩ,{K+pR[U*KljkbW[xI72jV^7{ݣB!09arG/P[U֮F]lj o315: @l@.  
5t621qBKn2GNq%V3[QO_{} $.cS4iw /- ,&ӊmn.RV_-"!hJjh;WD;(*&P|@s~:*kVl{'~tn{&Dx([++އ^˖g*,iANzj5^OMei$BbNnX+B'H0tNLN:U6uL|GPUf鿡nݍ$B!D!0ܲ;mEzM=: x U?`]7$jW|y}:8vv|Ux?\յ?G6B!uPaTk[1lJ1yhp t1J̖w}Ձ o,.!Wִc,znc1N0MFevRnp!Blֆ*vm8jQf]IL hN{`ྛB!rbHS_ZUUMwGHfMO%;x\I!9-_[- W8ZʄߝMO[=[s5!B@_G=uiLzkOcE@ZOQR9(BW9} UU3֕6 Wq'\mnCEYB!Xe6nݓ+?Y'/_B}YO+wi"B!k?s*,Wѕ6 _V逗P<7s%BNt(c&,LFM[B!0GKE ]E6 r Y=m/\P)BJo[=7J&k\峩zYC:~HW.ضb'Blx}ƚ*K !x7`\X'ʈrѯF*8ءk#%j(XWls׭;Z]^o x}Aybִm(#NH䝸.^sig@.$Bl kk;{*y֦%rvGM=m'P6޶\tM!(z7즷>mo$Ĥߣzm 2; EQ_O\JB!0}+.mz(Qkz-Y _SoW{lEׄBm;hJf1dnٯ&u'#2KNzڞ]E8m+oB!62|#mUU98wOdۯNenFKef-TַݝN !HՒLЇ;~Ut$P8κUg](?mEׄBkK NFQUWI8m\@sw?aҼٯ|٠#BLF~7kG#zn'-r@e8K(tp~ip<}ؿ5]B! wXM8ɯ8펡;& VpfqVfc_'"Bڪ2qN/Wt$-rDe9Y zN=ejk*!O=tP0"_p26 ǀ#pj~Zs 559BQxvp'mEU95?3x 9;|EÌؠ_xdBM&&v>}q>z9$rm|xJOs:6ܲrB[TMqDg? k u|GV[t~މ$B!_K}oof37$kCq/i;ؠSYQ>AτBf0Oj1m 2#Ο^@:7ԱA{o` RR!Dq{vv5m܈3e/2 k(EÌx7oL} !(Z-Uv#hXme#@<9˱h62-X^E9#S$7ˊ5$reAL} !؄LuL6|T6= iw r1[L} !(&z|nzoEو>$P?8#SB!6LO/,^$PE :qEL} !tOuMWuMulYW(7i 0[l'SB! ީQ 5d#κ@R??0KP'>9[!D1 |-iNucQ]ʥ8+WP?48h '&as\vU!zOG "hL;ӶYT/:펉;'2"rc|xxHB(Ȩwޚ~o΍..٨({'߼=X_lt7 ֞6^Nv%OuZ6#S઩o],jO}FՒ !ko>Td{cH _6*Vx!(?rU7^N̺bU$Pn״ 2ў~7ql%B vh2l%#k(7P |8Ti?8K}i%^t?w1On:*~/8U|iQv4gf;wx9]^2ս$Pn01yhpS$7ꤥrtv{;L+XIB\-@)WYnÏ܁ј~d,8*Lu_'9\i9ļ):}=ͼ}E߄B oڪ3lr|~xLﭿ ]9 2?Hr^Tw8ogmYN!ko{fQݧ%Tw@'R~`ao$3*k5g !k?pf;o$L&ZuDNI/Ф*Gg&+JlNS !Xw|wc2q%љ G;(rFeI } ff~?Y@V!'yIbO|4ܓ@gvGx;'kal=uN!ov>7~F^+EAyiwL'秨-)Қԁa8P!)כ(G6X AϬn{hGÜh/8S\ ևe>zʻ;0:{>gsM!(~Onn?ſ~m|pљqM)DB흷{B) >ݢ3HQ9)M pI,`N4n}0GfH(o~M6BQ$Ealz-m3caWw^D@Yvb(ȋsg~5?MB w5jSUc_巜vǷW9$P?Eo透 ӺB/Ɂ[we7!'OYfFW3(@^3]ҼݟW*{'P={iK$ H8è_gīo_ֵ !(4=q]mG<.H.Q$P1A2T^sr~WhKyG !(pN{FvL{9Q09#8I֨]Շf_GkϢB!Yg_;_‡1Mm8Q.L^D@Yv^QU4M|O>B{ok6]B{ʟ|DWFBW7~]tB6L37>paGrQ> xEӁXg&YY6w|J!XwR3vjp$_XMEq@9AcQѷf߮~HDȦBU/~ztE<59i k$P v3$K x'q]D0"hqBUik tw4j yzr͗)AuI8'IT) a&of]]FëB-m=?E}]3F)fsyQ 4:~$Tf.j_Vjñg[e/B; t.1~pddR\!R\iw[uO( f'2}yT c!^EX8df*幩1bJBWzo>MUR!6'^񷟦VW1Il= uIi9K$CL[8<93"x_cZt]N~]$L/Jiww?:O$ēCx!ǜ|qBq&O~m|#o};<*w^HKiwH֩L c<59Ĥߣ7=t#_܇),ϬBQJ+>v^t/pdFfʊR'wޏ$ߪB`2zG(uM, 
y>zB7Y&2;T;480| ޘ<51ĭm=[O[lw z3^P.ݟf35hgG3\T9펿B!@F(Evo2ZFxb ]=|R! ώ|뿒Q\XjdxI "+Nu1%S# {\U~uͦ;+yh6/~<[UU=.ѽ*nrw q5 "kNi&l&שiNO>Yv7'* !D1~mwFQNOsja:3!}S9\H9;dzog&GXEu_o}W{ӎLN!ξw2t_CqbyZB‡>コO %˫y?RcR9*5%z)R[jU߂ɨ#o<ȝ7~4 !:w=47&(q2q!vǷVsX';Lxzr`;kyeRV /~<0Fxjrhar S¤X+(ŚqHn9鵞HO\Fw?r_iB~wΌ{V{7$7[B!R)1Cc".B|QɄpbnc3xX3(Śsa/BLWF%0ON\Z.nsϦXS)Gq_848pPh'/Z%G+??S|@<ˬBJfw52NUUNOy+w9GWs!#b]OezmBUyinfƉ]+k+da2stvs O{%L&R;1 >d~tˏ/2ft嵕G((aBSIEGgV`ʟ|ndܖ|Niwd\]ClɔN#Z2>Hptv@5{ڰ+顛x}џXBw!`2r}93*PI95?$,)W{!%#bC9j xy|l0wV/}iGz}|>o2S~/?M!O¤h(ņssoL$03љqVv5ŷSZ)BJJ/'lH"љqΎMd_N= d[ݡIM3PMmUTgt?*~/?dEͰFog_O$yx!D18}~j:Z2VUZA&WV2B9ͩH!Ly*9{DqΎXZV*%]1O 2Kyg~U]95?B(M7<\67b-k'ہ_B(c88K\|Ͼ^aIDAT[wbYW!D1۬'⿾[ q%M !aR;y/Uʛ2ʠ{I UTa0t_o{|KͳODjg"&n{7QQ٬$^ggx+@rTl67b('HXhe(8N3w FFݲ_w>UI4̳S#&LFI.aR%5Z٫F+o\}BADm=0M]X_{㏾_\:;C$'ߒq-bJ K }8 W(D(EArg ~?@ƋL=8Zihs?3|f|hJ]}|]=L^!2@]s7B `x?sh'stv%v7RSR= N_K=.YMWk`0п}ܺzpӋ3,f{Q3H$Rhsl޾,'2*)JQRO|hp[灷^S//ulkĜj_}.%wƺJjj*V%!*,fn~u4W>xKyr_O;o%D~@)J ' 5GFKL4`mbl@}mU!y,U+Ȝ=#?vl_M$^`ȳH"S1x7"HEiw>d1 PU..-VQ*;n?j$H׾~%aK Vq +InXU3!JQROܗ?jSYeb{]]UZiZ} #sM̭[Bfo:[^w ,*ʄyBX.$rq3!JQRHMdaS\,Nܹh4rah9u"B/<}KkVRUs9H.wq.n&D@)6 xj$V5 FxafJ {m#5W{[_~p$?G.fTUPk^s?{(e.YQU& ^@TRKM_ুUG#4790rfyA{"ǹt~ 5{B Hn8x`k+ c%.s5LoHؔRO=4897fsP<Ʃ/W@ou=U <'W1~lRiewܵsPz+H0u1Y$ze~iw (4(Ŧzxӡ;gsh"9z髩_Uk*99yi}9.fƃw31q1uWr65Ln(D@);948p'3lWS뺫jmܒq׸wnb8<ãL ϠQ c0CO&,ܼ<cQ.Q#yiׁu9L).nl*#%FKWWSOmIYVX̼My>SObqv\!ĺ1@CKw߽wTzܫ y{sc1 ݑr!J!^!B0[@}6TII[)[ji^UT>ćS |ߟ#YWTQ[-՟J Ea*e 쾀 r9 .D@) R/zhpk_$RCy"!^ =UuVSjHƮF~oęQ;q0~/{ *v[x}zrzP,ƈŨo)m y9A" BhHΡ/O}f{h"EZʫ詮bU_i߮+/c  =zEYs)օhnO朎DBrQA_W{/;\I:9EwS>$g>f>z說jwպ;_|=`~(O>إ)Q,rlݷDEyI#1s3]b97#6 |kͅ(f(Ȑ?48$ *З{/cuq5O[E]U4UW(/]owNEg菎s(!0'!6y]|?weuJUea9tg?AH :(X ߤX>B\[Ee*e*f2^YMge 5ҜLC}=ݸ]fʚJ::ٱ[n{[纆GHnG#WBRxy2|KqY)J!QG 'YrM,9뚣b5.KsV(Ȟ}{|ANs̸p|,Qr{I2LUQPEKK==]lggWNΔBAfR!ry6\mNcz=Pq-b0!x<降F*iiMJ32>qg\`aM"  Ϟl`bPRZBye)Mtu4bҊcgWOF4`~LܲtyܗI!6P vGC]w;1r%@ J˩/-Ѐ޸bXR/cL,13fa˒ۏ$_&IшfFyeUVXCKS utw6XSzE \ Hx=,O7vz>be2B)DJK2XX-W0K˩ZXo7Ƿ?X^ -GD1W>UEMR>&#uܻ O0 M-PVVBue)eTQS]NMUK὏ # 
|8)k#?(sL=$G.!PZ~eRS&@Dִ6J||Ro(( J7|lVB4QRJSNϊ/$?6`=!vdž$Y!D$P Q o#9ry;밙g%ZBM!ZBfI( ޫc8?^ˢza;\!jHh^OC@誐i+j[)VVian6UpO$/e$kF~iwopY@)D948PBx7zX&*V*,6*6*,Vʭ6*,5;6X@,B0!F"QbJ-9& iwlؼ"$P QRrac{^ٜ V+f %f36yӍlJ$'b x|8/;[%RM@'M=oJ7GLقl$˿u9( 'b5#'?IeZ$?Ncb#X'(ؤ ؀H$ƞ RΘw&ф`d4`&\UUJSQQ E%*Ĕ E!~CM]PDldC()rF$P !+0oy070,c$@ !V RK*`xN@f9/^!RJ!Ī(#y+C͘%^'vJQ$P !r*u< Fkvk(% B5whpt[Y$T` Ȼ*B"RR'lA4#a2yupgI"o0 ~5S6$40s+<;h!$RQRS $fP TW+!cjNW~?Hz<ɠ(SBB'R!BdE !BHB!Y@)B!"R!BdEB!ȊJ!B B!"+(B!DV$P !BHB!Y@)B!"R!BdEB!ȊJ!B B!"+(B!DV$P !BHB!Y@)B!"R!BdEB!ȊJ!B B!"+(B!DV$P !BHB!Y@)B!"R!BdEB!ȊJ!B B!"+(B!DVlʨIENDB`seaborn-0.11.2/doc/_static/logo-mark-darkbg.svg000066400000000000000000010147341410631356500212760ustar00rootroot00000000000000 2020-09-07T14:13:59.975140 image/svg+xml Matplotlib v3.3.1, https://matplotlib.org/ seaborn-0.11.2/doc/_static/logo-mark-lightbg.png000066400000000000000000002167421410631356500214530ustar00rootroot00000000000000PNG  IHDRug`9tEXtSoftwareMatplotlib version3.3.1, https://matplotlib.org/ pHYs.#.#x?vIDATxwxy}EHU7!vdqOvqNO'9'թ_cmKVz% @μ1@,3[|Eno{4H$D"HEH$D"HoH$D"D"H$R"H$DRRPJ$D"H B JD"H$IAHA)H$D"))(%D"H$!D"H$ H$D"D"H$R"H$DRRPJ$D"H B JD"H$IAHA)H$D"))(%D"H$!D"H$ H$D"D"H$R"H$DRRPJ$D"H B JD"H$IAHA)H$D"))(%D"H$!D"H$ H$D"D"H$R"H$DRRPJ$D"H B JD"H$IAHA)H$D")n/@"HJMWw s\w/CvOX"pQԯQ"Hv4^D"ئGz.l]Yr&9<~؝J$sH${K l5[k! `bs8F;*/dO D"q{b!ťO,a sN?v4 H$RPJ$66W!+wqil2)HJ` &su\d3~ȔH$"D"qDZ<6xȨ$\/4ȔH$NR"lKWw8%+vsM3%,<~hrWW$H4RPJ$ # n$<%.S"G J"'- y <~/>UUPTUQPo57M&i"LSaH`e%EH$R"H_ tqh4Dyy02Ô^>ߋFӊ;}0DDZdZ? VV,'XYM0?3ygY`vvka ŅE@WwOxkvwWC}}ՕQ2**ʨYq. s]7-q9_`zzyKx,015cGwy=HA)\tuoBjJ}] 46TPEcc% UTQe0M'bhx)0dPp8%.cDRH.v>cE ߆X'ttXEc%+iFq1 8CCSbphWv{yyK\'H. Hc{4vDdˮ.(+ }QϾ}dq0M8N q0'O s0 {" }KX+cG]^D")(%t$JZK,Ⱦ} krgQ)8i211kBԩavsY# #yBWwO |?pN_Q:v.B4M&'gy0 /(txKXcG$3H0]=1]WtpÁ\dYXXWya:~h|O.H!Db{NW HI.vY`&c˯?vtuN,Hr#DG@d'Y__ ]UWuJ)q2//S122S?Ǐ}fN*HF Jd > \S¡-t!n MURRLdphǿ*8;ry;]EŌ.s9PVs|f?7x믻H$XI$.S'^gNĄy௱/dd#RPJ$;DWw %np㍇醃\yE;H%L‹g֢Ss>vTO&HHJNWwO qd<#+뎫K託]&,$$]n.7i/uV:O;:^I$;RPJ$%ZৰlJ"Fq}(EE! 
Ⓦ,cZ^1@S]X'w\J SXM<_M<IiR")")6w?yreuU\}x.}1!L<1r̭\_BEe]7xS|㯰(Y8%A9G")RPJ$EG 0M !P0] YL%GhG= SP3ˎϫ* 7uP.>"~rx8e;$x$XO cGKo$\(HA)ؠшusםy7P^Sm nkf҅@S -̒0,kOg@Lu5%VyxL֨V4\Uݰ&*W{[䒊6Jy.O<6cGgy`BD Jd{O`Hmv]m9o{MZ9?H mtģrW83Wbv|+5a {KY"J5hha E~nilmSP/,,,O_{ѱY?~U"A J$ f@ȯv?o\w-#Z F9;ϹU Fʷ&Gq{SCyl\[ۄK蝝Ѽ]/e10 ϼο1zd1=*V^K$%/15MMw]vZv)Y 4^]%&RAnnh (STy--` *V 9i*7\?O}(?Owu/R"y U"Ƀ'&j,&}^73}%FU"^m>0 eV5|2u\84^Œjng>.po-{'q@B@%=]=W*B!?o[D8&)X߉m_nojٷxB9w"\ZYOTPDpu#-;#ki G܌`nnG=bќ ǎ>[J$#RPJ.Z{:R.G vve L0M\0M۶6 ᥩ쵆 4!.^f57μJx}]30xuj|\U`'>33PǏ=]J$RPJ.:{X6?I*7]dz~S;;M ^@h%U+Jjw.cK6=r_˫{|9-s%(uap?yLղz@YN%>V{C5-Mƿ3s~ak@A2⿾ O+VgǏ-RH\4nPWZky#i;64NmR =Іg [|хA*ʞz\N%I~Z3&Ee\VUA0-؀c|nih[\Nqbz/7u슘Kbfea i }4 븮Pن㜞4r{9؎GvB& Cѡ>S[wD+8TQ&B ߿/Ɗ|cG_)$ 2HoFC|}ws[=-fGzF<>nkma ^adqnˈ[UiT f4p/b&K |}^?G7,̸3Sr_[۲7`xqצYSBœ^N%!z4Eݕ#a<0pU=uc[*v=ڝJ|ؓWla.ipE tz}oQ`z;wgm F_v~od]fVxjlvsI6.gQCLLy6e #9u*w4up֪B?7Sҩ[۷fc5bA0h.VRIXIz AaG]<7>w4uu)VV0/ut&Xi4BD JGWweX 9ۥnモh4TBf:7"?%wl}\܉WsL !@4C s`_ ˽{uB0 [n\F6 "N1Ŭqqie-lB0\T|Mb?@UT.)Sr]Ɛ2 qD!N;"_>NJw^pJ\I${)(% `n( o0`5ۏ+o&*a30VUi?d.i=oۛtSvB4MCY>\m&XrEOq*>z'>si,$b)D&E]p4x˔<36d+J* V)ϙJ6d|rc%rFŶ[iv<.RUm?ilW}oB _9~hDr 䂠mP`zs CN10M5I֛$Lf]n{}Nӕe 'GX^pO5+B.UC& 8tZ'j 膕WPPK8+6,<2tp) yKfc}[Zh9B}3v >Zp9oO:7?QǏ",K"Utuc㴶{W^Qmg6y^ŕT›DF O unMQ^2i;seo d'hŞZ!3f`~icVn@٦qUO񝳧H9Ce\SӔqwvWs5\U]o+O7zxk| /untcGZ.Y"{ Bb2c?#b0Mݶ#a<9:Q{j<s18_OnM;W\ec!(jvC;*/Nf}+zWƙOY,yjl0o1 08¦+ 2u(hڶH{+OɏG/iyHΡ'`!ǹ|PQ^V@/L0XT٩0ͼRH o4(kb}vvQXyi_Q^=OZ[ȡ>HKHvVrQKkx>}] !蟟9F9uѹYuM l p]mNN]vRm=ݠBAS>008!4|ܑb9å(ʟ; VG]L^+Xaiz?{_WwOxHJ=OIe'ş .g dvu~9*5S-*5EV((3qx!~Adtqq'蟛Y,& 4-RsP MUH ڲ6e0L.ifeS36=$cKSd;-\.wrtuSI$%CvyK,'/-cO}W"2 btɹ-geL,r:>9,R3V[yp0zp;m?n~((I{C>C<Բ(,6[5\nhULF +Xɪ42޴wx5w]S=y3;GǎeQIR'{5].n;v-"y.0xej9|٨dlR`10E{ X}'65Pq}]|Rj !Z+E9=퇘ZY qi4E!Z\ZX`0 ]瑡3LRUno%:L]7z?mxcG($ dOݣD.|0'w|brO?M;~ԅ$ǝ7vb< ]X_@RE9XlӅ@j21ӥ;?W]&IC灁kS{* 6\L(;ޑ||ѽ\6EKWwO%Vot4>.w?'$ah%.1P-e1ʼ]V4LL_]f>.ĆPė+L Q0j˼HnDM~Gb>Gw:w?Ç>Ѕu ve1A")2B)tuАͨz,$|oVwu u-iIBO`{?vC&}lhjʤpŪ% Sn;*ێtMTB{:j|2u@Kʫ)ۥ!KzzmB( VULL4e$c&+,(3IJj5 :KK<0?i;9nU-m7a)[c)ʺ<:|&) 7ԷR 
ldg_/xm6|"?<} Ǐv&D JɎJoy.45U?^:;*!265ICg!@"^?-+yuCwD+9XQSTeA|uG]WDMlÚVz R!B`/s6UEHՅU47lnǽ|~|v7/rOHr dHw$pG>Z>y Ӵ\sLScLSYsie-AǑI 'F\u~q8cƮ**+$)!p*/ؠ#QyEU=eW^|<8ݝ!rGs3'98/p[cnM{gZtߎhHI+׾|;*ob ,nb2K?~~?(dt!8[bc}L,b8)8q<;?iZz) 1a<2|Ƒ0LLJI ( .Uý瓘0Ŷ︦tDSȣv)3ll1A?z%Lb63C<:ܷlxn~3?~&'Y"):RPJNWwON=?ß#WaL, <5:@#AUGuq =eKeedmg. ^Ϋ^@7-!"`jy)NL00͘JUQjv5m۲ru<=6[L%xuzlkׅ3<;>X?ßd|v7p-)(%Efi}#|/5LgLJJM 4uIG'R=f^L,]Y* !蛛;'yd /O8O pL2 DI}aP$Â\]ȁXy6pc*{gH,oڷUcGjZ")RPJFWwG*^~ o DC-Sb;Ź`' > SP*ѱqvT+rskC{s%$pfnd+BY^kH cSY[U W"j 5Ue_y%퇸{mG WU7Ësה3SƗm` .C'?v;NtuH &HwoyAڬro}}\Suqԅˆzg.TUՍlL´=<3+ˎDrЅ[S<^5' z,qm7i~Wi'z4A_t߫w ?~{F\dF7YbFFm <#s$ EQk.Z#D>ۚ2[$3XA&;esr~Z-݌5U#VɾJF^%%hZv甆`sL.Sm+t#mmf &RIb\E!sie-L!!/@m]-<=Hb!;s0ӫvѪ?_[{񤨟Z{wFMMWwO ˬV[Esi2M/O2u'̬.sS}7LqIe ,r{W^Ic(StTJhKUr8܃nU2u"Ҳg1if}u!XHrbz|-r\F*ԅtUNn.U-R[xyjtۚPW4٪̇|go%=棚@8 bX|7>Ÿ|ۿ?tǎ}q)(%y|~.O|~YXYSۊL,k7{tLyNjύ3̕%OjJS8JsY%(B?Hۡ* j.&&^UQ&, 5a>3"4}ۦSy+撫g9.U1)eha3s,zC:cUCikߋZL9|TB4'9"ihkuGG{rё-Qr#1]=/~.m-ɺ. ^pTT?8Uun4yt8\|r6P9CFٕ "k`K 72/BlbuoI̬.30%s|b4hTa uzhhaeAE1rSznNh>ceJniT d}Iy47V92xǏ}D˓\.o#{nt$&;y!&D)z:+٩KMgkA[ӘϿ+є(6cvX [LRXZpfnbhs@䊐ZoLQ !HOp ^eha Ɩxmfo=ɣ},~oQE"^nMU[-QUzmm4y]^tѮF)v[/9oy ]mG.˿cTWJ0ǹiDEfNŧZhJK8gK.~ޏWTK,_kv(i2TP,򿪢P pu`٦K3|5*_}'G^B1ڼqUuh(lLZQ 3MNiT;)aZH931yirit!~]p)*e[O^ʌ̠gM a$/,Kr"$']=?jn5Ms{SS+&FW,sf>ZpZJ0ͼc&-uTyrf}a < ˋ<4E)*UU%q"?-#X HήX̲}.\x5-DL{ pxyjTaD~G[qyU]Ƞ ]yzlLJyrl3$ FϽٵuWwOhi bNaIG'by_]{ksX(α3'RK7h &G*Ѐ&Ӆ``>΋Κ25Eᖆ6"^ WƶMӫBwAGFQL PE 0蝝x\ղOw]YV54x nڼ9;=^hDSURNCm0uEφ<5|ckVmzST̟zu~7Մ淁ǎ^|O[$+]=ɲ?>^1 L <=6 Z:}4eP^xtvKr5oJ]oEᶦMb4/LIE4U5R-rb4tgK+kYVYEMQ8A[B6NI rL cMQ#bG蛛*&f.-uSV温g &dR CN(Mũa:CN'sNX+"tp|u ޾`ndqW>ʼm10L2qz=ZA[bi<>{2CenhDL,-_]!% 4U%SwmM>ZÝY]Ae͵fS taX 'V\nZibC%l8 Yс+CPKUԨKQC C"Bn/wtHodphŸ111t׿>zѽ1^gR\dZڥЅ^mt\nn_ЍA8/H'^no$α> xvĈr*\b%hyy]fTeC(U {ZPf_`qs)5s/ mEf ᷘ*0@Ԇ^EKYX%іw -x}i hC ,$kOSʼ~׾#`9[9^_LmlGLMs-Y46~_Ư}ÄB,,uS|Cw*Me1./OfRPjvj:[Zk ъC&h~~˽-\g%G| ZmY1@Q`ryim^:G,{*)R:B00+cY$ԅ"\]Ӹmd4MSJmb%AQ,76437]9l)[/y>'к{>(E${?d[ny/^oa5v[NKU m 3bM*I:i q:/A\4c[L|fWWH 
+bi)m]M:b496~Pqnx4IBOqjvetSVTA*m{3ѵ3Sdk:;'r=RiҊdk !893x`*5nG:K ** #)LN8B~KGʣd`?vC%%2-Ɉɯt߽㟾ߩ Eޘi'l[]]Ӹ)q!_o* -İsTۛ2Mir2>/4xmնBg4Jx5G dYS !H:O nHK?5MxTm[!xd̆vD>nihíjRT@8/L8=ggEOy.6~ꥼ-;Zf0?7'/{nuinGb{w옘+W Ѡr{iFi2cKL-/SaEik?vSd.U1&[kI =x=$ DN1 V냃-i3[SU玦NQ"^WVz(bR&zNo+&Vx`zjXC|o׶K Vw2xbŹTOيe7N(!e7p4M~w;X^2y##1ndH7~%YS.2QG LɥjۊCRT|M]50J}VY"g蛝ޔ/VR "Uvn0P+)JѺOS'7Ÿn:w0xjtTCiQ0M"#.μj(G۩1RROPN,/JZ-0nU-JC!߳@x.po] 9DqbzmY qg*4\onUCǫjDʉl_1Gr7ƗhG(`iT`%") TEDVB'TB}3ovڑXSl㴖a-&F[DXqy6dKJnae?_yԗ;Bi3mת >{CE=Zy4" K{~4MONv{wu|DʋI~>ɷ̠kWqmMWTS.B0X#[ ,HQM5UEUT/1_׏ƥ[իuryNJT * Mؖfѹn8nwƏ}NwE~]= 83z;\Uh($'Jik]\(DKfR*R2ߪR)+gfvĦ|XB{)0 5!n[a>r=#gs\#-ew|Sowۏ_Nk$RP^||q';zssR^bmRjEsWLf_'ʪrq,,hJV#DtD+s+ &_SL,/fťԇ"\QUgkp+wv2ogwO=%XdQNO scV覠7=<ٿ6T+E S$hȰ[۽[ۺ \ RUb]hErؘ]hTa:ퟛ`.U;:e}M 7ַpUjK[q:K]=2aٔs6OOwSV(yu]ZJ%jű%ycĝ59۔ 9 AR׭%4!LtSR5l*߷n7hLnCPx)&y=DӤ.Xư* H4UE4pEu=qfVWXJ%0M1MeQ.ϖ߱l"Nƶ¥ZCiDT[YaN Ri/o& CNvJWwcGVuIRP^tu"ҟ}\O榹LPQ{}U-ZQ +4L%LWO"LL۝AM2jI937q#)69x}]R!3U֨\=^i /αL( 0R„ikOsDSs^K*HP6rs:Gi5@sqJϬ.k֗9^Y򪢲J4 wm 3Yo!iw Ǯ;X iO`{7u|nQENJn bUT.iQ]YgS!Ësd1c6>(4\VYkkWou. BLp, O84r{^iB[W=Zi)u-Z=3]34@] r|6g^ǗTPuܗ8;7tYOq2>$(W//:}u.g r_nxϷ>TnX?6o~U݊h4-)#YS+KO2rDSԂ;?a߰Ϯ#Ǐu4Qr~!/`{ǀF|]|}o*ݢ. #v U֮8u!893s]nZ;6i#'G2eҭ\]DU v]ѳ׽^ś.+jR*O۪at)*7wtyPӯ'% QMV)ap̉k 5E ,tpMLf qmmmqfMs}Vj4Y'Fڶ*i0t8:Z"^ aY; A|u[Nzy*k;4(\ZYGKY(渓];:X%{Y,) [8oQ)x|?蝛ZwKUVu޶Hy^M9|om1 JŹrɷ( 1)xjt`CDSTnilh6g JG˷҅b`tigƇl9Bp2>ipbzEޏPuQ:̳CNTEj:Z+iE-<FB?7ã} xwv *={ʖ*z>y[7)/=KNWwO+}>v?p4^YfMUh Gףj;Ic#+Zm_u q;8X͞߆dbC(\nl9*}\ҥj\VYG{R sIyͶؔ0Y,&3ۡ*Vm]NYU9GVXp{ybU[6t)X;:heMU _4q]m"X9^} Njf;fŏ N߯gxqF `WO}mvr'J Y') 7v~>dzKjuai1WV !xd4ʑva xv|Z%rgs' s-xQ#MyPsÌѠreU!vO:̬. 
KUUn]o4AHvة%쟛9pu#-K <6okm`?ױLgCzgyu:h%TU(0")ðMkܪDAQ(b;\SfUO**u0um !f ާ~(ZC:l"wf;nml(] _ ѵǏ-N Bd?W~C{^Lucg( (  .n*(UR[({Yo'g lL0L?H-R(u.p]]3jf yZ/-$V9ޥ( .Ezvt&rhuț̮08\3p*`4]t|iP*c_2Cwjލ5FMLRȹ' .̮NT]*FJAy|*p͝eӥx9TQv nY ddq./#}4 'L兂$ɦ_Jx<.~>OǎʨW'|ݍj`vx\K,BO6sɷ'i\VUO4;}JBn/iI8gfz/푊 7Υd** X&㵘6=2e^e{FTfp14M^v;;3 a6>C %+[7_?X4^ H.@kF-_͏ ]Vy}fbCSXu*jqkږ)|S3.UPJڣȘi[Ӭd*cZH&xarFKm2j:l*7շ|:jqmm`Ai73훙O]Ws94[5+z'qMM^M%[;g"ZSUQxbNk<.7=&̴k#gl[e1܃jAOSX*ਆ<#sޢ (/?~g9׿9qɮs~fK6 lN~_P_s>Bpfvԡm&g|wzj[y4VLhj*.U[:\OR<4|2Ax[i,R[)l~BsYT(&3n*oS8Uvp28ϓckm@/@Wn)mZb\QUo+\De{"e!i:X (,;[xOpK"D2 ?w'} -BkHAyS`+ri/}} ]:Ld.:SMQ:ː;>n/UE>7RU*!noNVP,5R5~+pkZQ"Cv^RUզ3`li!Q/-pv>^4*W~뚹{ryUUS.X-`5&ٶצ' M3X)ta cȔB 0 8 ׎g_#K]VRG^&9[u>\*8ҷ]B08A=˳C-#LQ4H]THm(wH%w; 6}OHAy~8]YdƗlo77eY8\XP)( 9 /`U HnE)nO)F5fp*Jۂ=RA{i24uRgMUK6y9y|n}x5mEa! xG{Id,[>4:q้aƖZYbt*{V//%R)*˄:G8E,=EPf+Gq}f5@ t]kSG-UUrqgs' H"w/VFe6|.# sy%әT*븦qz2kktK;BsֈMUj.WKx[祼2no)=R5.sTbQ)HIA28WpPӳS4IwyrtyW&휋OsQ m@a  )LP\EmԪq'q7]dyo#)-<:n`3?wvć\*k8˩bv0m'b_]5g'TpPdbRUjeiNűV3XcYCQrc] /N006puMEU= +L{][8.;óCt-75`(\_b5X?79ޮnU:f_ǻ]1Lp>!)}AQ~ghh /{N?v-LRV%]=uڝ*#|w׸ܥjme:#Ctv_m-# <h@yu|Cbΐ(`E ʫ#m동1Po˽6r/V[rMU}RB.ayUx*nS&}s3J喆v[Bb|y0:/Ll`Ës<8xg&ھXUQԍhx|>Heo{m?<]-עӫL1VUn(`jOw_ o)ݪ$D >ew.};b,}+ v柱a)xH9nM!% 9TE@y}0L0ͬѥyQgUQ1MhkaVVC./Loxir˫qkfdd~GӸן c;ZhD;vwtWwǏN,B%;CWwO;`_ 2a)=˻&:ZP6<>WЈJ[ıel2L4V35E!jJ^iWlFʹ6 "#_r2BC8U h[Lх /90fkۊ7'Tda |ޠ~@^ Sh Q 9RI !pU LlM0xirԧl5;2CƗxyrmf+X5WT0B0X$KB:c\|?q.sǎ}RɎ <\gg֖~}@`oD S` 'R0MƗxblǸf4M{#gBp t.EB083ヶx9ud /İ@yqrnu2m)|5ۣH|[ %\QcXBsjf٬ަ!h5CqDQZ4ɕbrSP Zۚڹh͕SLBD1 pfnW  _0풔/}BɎ {VU~O]njhBG5\YUB4MRܗR 4yejqKT|ucBp:>UPkՉqHc;>KUHc{SIJEf`aM)$ պ0Ѕ`fe3SVn>b20 ASUjaZ 0뿦 Y5yҊ&Gyjl&,#zEVߏ- mY "iv|y)$7 U{ CVmMUqփzV͎6MEM>[{WK?~T?c{2dM9{F/ntq}*W4Rp:>mR3VEKzNzz* .źmR 6NH-&L|p MUlugD\bS)Ɩ7Xf.Y k2k3/-dK^EkY}U(auE)wvkjQaHs؏,4m5yW>NLrv>!\E+H<':JiO: s{sYNp+pkˋ|UJh%1_՝0!813lֲ`Kʫ y|[B'ybd`F M㪪3३Qm6py56mSKpTB\]ӸV֑2 8D6b%]Բ.M#;im8\ϑv*ANǧxAsv(:/+j.}.7] 5 xׯ;Z!#{_FC|shĤZ4MV+K'<k L01qeKa-{^/e72ԷK i>$x{tV^0xrl+hoB|̡mCt:V AOKY DO'`^01MFRfy@ZIӤSD~ 
a涶_Nɧ2n)MSϾO~wevK\RTctut?og[EQ޽pv]&ϰLXvFu Dm00)ؿHߟOffLRTˢE5S(7/`y 3{('#= Ϻޭ9Yq6Z>tDUO ΐ4M{4*h9#z^vS?3>ֳÄ>>FB|)4T8;gb>b2fdzX6Iq5240Mʼ{V*J୚'[UiB(/s~ ]=;~ >h\aZo\k_E2,$<7>7Μ1 /?7Ó|uNLE v~6ʧbADUaUPE-Bs3#'+HR!L0)Noxuzdžl /1&F62w&K(& ==6wLSUܚtkZVJm(WBAK"LBTL`H&#8L3P% Pۡ:5&wBw\&}ϔ#tu(^zet!^Y⁁S /mAK:'|oHB $d0xb,#{FTThG۩2:vڣnH.E)O$R!YJ`pߚVL.İy=c^8^8 yGڜ[5&ŋ[*j >˶J2`|iLJ/_;;E* UQU:Z˾XU34]\z5={,?`gԍ}B2Cg=5 5][0KDQRY\_LwAnklvr}]3+֣( <};kNMlQt:3|h^Y]:CP l4z:onR22qfv:owu'G]UQVLPEMhWojGybt`CYòqT*RQeMhۡi?:P" )(]= s}Uђg/(؀b*+ScĄ!u<2xjt籗ҡk;. ^elلXy4pl-* Qgюs 9B08{kgLJK pdbgfg` S$S 4uRɰݬx;ԇlMţis.^)r*m "厏V5nmhwBώ1M%a*MUXՆPzj|r]F ]& `M7*v4\^bUbpaVDQd|2sg1`|ih9]$t8S /Xiu|TԷ(жTE6,ڪp¾!O<f|i~[QP.$W@,t!@\MR5.p Vإ4UQFp(Ñ ŚrYeF/]=[8][% CCPZXeP5o%-7xw-*S߻}mg0MN t!Z,kzey[;'MSmlM&ޚrYe肮*WU7\9v>۲nil976t{lm2/g7=7gƇxK-fN)֠M4-[(7&! u-<1u9KgHzXcT fVg*(u2N'zUR.Bpfhn%Uk"RU XcMQQWH7/S<*%]d[Ep~ζ>_pw#K 뢖b)l--Pnqc79 3˸lFZB2#gt]"7w1aMU}*zg蟟Эlґj׭( nEN곕&u)*׷䜐<;T&}stF+4M"^#h&q:lHP FE§ɠ* on):R /VEuz?V9Rv 3T %iq=u{1 {m9׳J䌎*s"Xs9v ﲺjww{?vh#IQrXUPC}6#NN2a |(r@)q9`Ekά0M [.)adp_HҲZY\RQaZ.bQv1h`jySSYkn Z?dنYYFDLݚFgv7(䈞"^ߦтvTӤ5RR]`E}##fP 2cWs0ݨ( 0+X7շֵوz\[ۜy> +kn+v9p+яtv6_{"B G+;J Sc:=5ˊ**1NJ(rgvus(zxbTC߸Tz&nޙ5脢Xg (Ѕ`6B0p*A')4o8E|>b>?ִ| ʫyrԞ|ptmPs'x'f>zBnoHoCbd<:gW:>ǥCGDVB{No[qy^{{yujTrd:}.72bԇ"y|h{4d F=mQf?NVBu r] WTo8iB{$Du Xn]j.R>q*!7^55D#pGS'7ԵR;R]:KYJTUl7]N&adrqGvg/۷y~bxۦ7Mb\Ś&ESSU^C2\#pb*DE^T'Fukz:J:\%LJ52Ϛf* ˫ybx4pdT%xUnkn3I^?Je#r VOs MhV+MUV2]6.76/)q[ [\9"RjXN03˼>v/jr^EO Ֆѿټ +}L},RP<lgë]Huk0h GRRU:cy ʈh(üZ!UiR@#rF349XS+K<0p#=v&NЅ#s͕QqcBQ!WĦI!ӈ=Z Gpڶ[*e1ZZ4˩iuh EhO7 m%4UV=k,pkC;.ŊiB}(¬c4AۍrwfdU/AwauĥT\_ #Z#1%",aYY&)r׏[lKs?Ͽkgl[.F'pm~;5;^ZL&ٓwBP^nkYY0^Мsgs <2tYݼr."`.j qw~6l!Lp*>iۢ3Zۑ0]V%VQPFRd´*YBIhe+BSM)aL k%yIvCr_ZZVlB'993p*X?e]!s;轻"&[0 vp*W4'm5Jd:f\Ც:+dÄ VXst+xf~:sK#ב4 ^P}S\BZӳS&V֍,SvyUݚhUQM0ù0N#}\c.iZu2BaHu)*75bOXHK Qz)YtVL,1Dc8碮~~tQ5dr m_/讥+<0p>7շ-B ?a<=6\TE:Zb.z49x^khoj_]XgO2tf \%S$TEqB06OQ*㚚]Eta򭩛#{̼uC\a0eTkB00sw5gM}uJ&;@Wwxٹ{,55k+ !fkƾ]R^;XUI.6 2 -n ܴE+L=cܪmM\[DCmD>4o:S|;iL`ryG>F뚩 wF&r2* ++)=gՔ^HsQKq[7 
ۍ;aG[N08v㖆6*]R++R߯W?v.I^ȔvULcb=ZQ9ҙWTד4 ƀ53Oǿ3Ӝu*Z/VU殖}6=`z~w5RξXն<,gK2 j0/힍'GiU)aɲb>I!=Ѕ`vyUV]I0{(8GӴlJ-ȍ[S)y}D>OXKq)b>?s.5|G_l~;]DF(KLu k۫~c0tszʎh+j>1gх@ ϒ<'Bʒguf+k]G{fsYUnYY!)FU&,_ !] 6~HqGZwDJgrqɅe&Z\brajQ     [?]{[l)v XNxK4ٟ"pyʕde> 1i*̥̍il瑡mSM(*j/NfZ.#i,īyS)5- 9_24(@P4I(,o<1_ C5`EO8 sؾ;!.0>2S K̯;0fV8y`CeZ\֔Q w$o1 \p7]K(|So]ֽ}%[D Rs';۾#4הxEk,ޛZsfn ߪ!Ū(_k"0=}άrs7|G& s i23qNхQ[TsUHS;Z 4L.,125'^= $ $}So4iBM8D}4L}he{bg+ <;-cU4 &&^M?@M0\2yggvu|E6)(KDWwC;VV]9GUTTͲjMfj S`N'X^\4-e1cVj|v>R5E+韛kM(h l·U=5cvM0@M$]^t<ó/z}ł!LF[V6եԔY"1"JypLq8}a!K:5577(4\VUW+nsLOjî٠S",?U O~>3u\] ! U4`y=>$ qsQNu)Y]vo{qEЅANǧ 6 9;jYV 7]g-qTeSZ"JkEp*!LJuڱt8cK XBWTQx7*$kȦSՈ`/5!ӫ|vZ%լ]B*s*>+c59c6|Oόf]t.Ձ6;95>ގCw9n& KO韞, Ezݴ[ⲥ2JMa 4MƖybthsie H?Sk‘ʼ^٠7{PVO8`= u-{TB A3}.m "kvPͭ <:gO4 psC4"{MJw{6)iJ(zy݀TEkn] NOpzb2;wH$ZsE{U9.+ٹE&r/VO:`?9z[Sj{;_H,:]=Gl?t;=%^Π gLJc6MD}3O,0t^?>{R/ж 'p}] H!)a  m=GI4g㛢|P oEX2r*n{xS~Be!(u[$;B003ǩiNOL3>gB:d_MUTD)LTd1i.0/몸֊uP4Qs݃gMQ afeۯ|n{~Q"(RPa&HT>cit/-0i^#nEa<56mT0xuz,"C "~ζt@KKq !sbfbzezp+ws? |Qd+) `E s^w<U۱J$/J>S4yar*#Ks4I{rgmu46ˣ^4VTh-}.)&u擧 i^%BXI04 Cc55\P v[[Qkd)ZZj-Ǐ)8|hʵ-|".㓼6=1]H&nfp* -?YSnL,&=1=N>Y{݊\cv2{SA1w) L_3]&4e?#s_8JQA!)1EQ(jxP Cf!iBG]0LAZ&g&gxyxWG'YMw2DR)CLuiⲆ: +I-Jڻ%Fxߛ8gXM|m>4;/)( 'm# >r*\2'g&_^]RQ**e1^u\1LKgLJ Z #Rߚb^YUT-'|DL蒭H[X@UPQBeʊMUkB5.UåXii203K:2! % 1#qsjh:]3MJ`ARUy؀9<;?wl ]=~")(6*+ w^tapʁeC|77& (NL2ԅ൙q.]K­Nqv2x}+K䙦%^) \v[ []ENB_״b,N`t{6BSU4}08Kr̡f9a҆jl>j[ܭiGI -Mx~3Y|үEP@WwOp)4W"Fe YS|wcAؔ( {m&Y-#~ S+K$rX!LdbyG&(|0xrl /75Sx;R"ɋCVTgtx$HMYj^0i rv~QҭjtdzC{t8~Xt"#KQzE90|[ոG MUYJ%"&tSLl_Kq>zpaSMM5/LpyU.EE&( U !Ŕ q/iH-"ŭ !89>s#λ|C"9_[tVsUSVm;ǣUdzeFr\]Xx}7#LNf..Ld2O{:]Wsc} p`Ayv|h)).U-R%;fyʁw.=ZYcB(9.75u\F>i0t8mKD\nh]ntC^w6?;lt'# yvMb@Q.`+ˎ$$@:MEQUuXhː}IkY9g >X՞K$ rv{J$-0Ls2suK=W7ަ2rP 3b?^U]XC{>k/'첶CbG斆6|.-KLgfYJ%iTCt*q*n) !0L&G^۲yrsٳgEuo'DSpbz‘UXOu\O\wkvCsd۱/VŁ]Ime0M3q>;kcB^$R* j*rK/M@cGq7/> 1yLLr}m3K$̬ 0 =tD*dn d|U:F=\bӳSTB\U݀WsmO^Si[L%^>! 
[binary content omitted: this span of the archive contains image data, not text. Recoverable tar entries: an SVG logo fragment (metadata: "2020-09-07T14:13:57.855925", "image/svg+xml", "Matplotlib v3.3.1, https://matplotlib.org/") followed by the PNG files `seaborn-0.11.2/doc/_static/logo-mark-whitebg.png` and `seaborn-0.11.2/doc/_static/logo-tall-darkbg.png` (both generated with "Matplotlib version3.3.1, https://matplotlib.org/" per their tEXt chunks).]
c*,q5<qB,d2];"0BN:N v0ΏjOj39r dQ@EfCQبd*R+983&ta.k))\(ړΗK" Çf?77#TZ.[;]añC'~P?ojd6=I`.;&uPci봞`5 { yt6#ol [;zX)a1s1_Z5p^\jǙܲ /!˺#20 tsbI8p%t5yqk4VsiTu2&|!LF~>تA ZӅޞ }*o QK𡼇YH&z4'к/b^(kCN9)0αK=cg͚_!WC5Ӂ0i"6qS71lxwwC>O c?:^\FJ&Q϶0eg>7<+I'+n9mx!Dj%6X }xjxB9GxymQXK^!ɺŰ^Xjk J46s)[a){Ot]d1u{]ULx :ԗ߽ӈv y|x|h}*n17VNjs3>N,\ `A,$)&K-&ƒ.񐮔S;%PW{V̅j ^ =>:JnKk^itBDd;8|k=qNax :dh=Ǚwlxsk,+uGa*ۺwKy򺬪Tn1 5UĀ%Ϗ讅S /w $}f ui3ާ.2wvְ9cc){7W-RJ6wvuI)`ҠLJnjbL<Ӊ ,g!ԭcۚq|)BD0tހÉa92<\;|c4uV̅bRqsrUUn@ c3lw6܈ %2q5l}4!:x,rڊ|z4jFRƋ]*7V l6==<f1\> [;U?{Hɡ \ 8ܲdsSRv p˔]Y+TƐ*񁱇&zolX;谑]|~qc{dbw辻CQ̅[Epq'cNfWڸ>(u]x~tqlC6i7Fi參>ķAnDϚ6o2 a&SmDF̰M!6nڸx(+(J36*!"nNVMs\ e|?P֑OWtHUJ]HN Jਡ R]n<T5K-Pm6xl{B[!QP W1(k%H1+& ҸqjdY8`ܤJ 4&s(*jiM[B2nO'\*XH&51.4 j)RԎD)nDrq;\(RLCDQGrZ g L0|"^ǩ3,|AM Ms21}c#l`[ިۇ{$&RZM[= F1 bZ.;3Cݡm ^=[6U> g# e)MEy9])!U.TͦHU_*@m7C?cٝ*R\ˀ[OsY|'sp`kВQR{ X3!ufd+7/K }><:8f8 |\\z  y|:]@?6)1%J1 `GVF>:Ӣ,!f<[Zƭv7,nv91x7tlI4 =~$ :n hKcfhpl-y=dkdkτ?b#1TUh] u#dBFeJu:LKiy.%İq4n2M8ֱAx/F!uBLF9>E*+0|,BCMU[*iL7/-0 gnrƷPçXͥN&-rNc65cMM_hR!G~mzB8Ø uI~wR\ 9Ψ˃'&`zc<0Ѻ~qGwZ.r/.Ip6SOD~kwC~AFaQ9`XL6͌43T+Ҙp7JJHO%,5U+ΖXn^w+!@{}KF|a~.˓f!)S^ώLk Je ooaF^Ï7*vD2κvhbz(0@1`1r6Vs) y:SOtdKb7b6t8SM'dJ1w&1aFGTGKqj20ᩡ M۝xjxBs]tw.Yqo<6\3lHL6*xӢfz^aЍ\b!SmiV: w,[ʻdNYsS(sl-uYǷwJ ՌɄb4, a1$ހ^D)$P\ *R* Ée*q )˺oe®& c3U3 h(-׋GC^?=fV cTT0E4;!MvEħalTFO)J &M͠09IjSqLXTF%zc{L+6PL<}3>'"ByL5*QFSh:LV19r!f`ϵȀaE29 h/f#yMZ_*Et0\ 0)Lgޣ@36p΍td5vIll#ب/OowfÓ!D5!$B09zC>;;Sh*cHWJxowf,$K"BjF G?VpRCKmv<721{r ?&kw) Ʉ ׳=$7~0lUi3m{#_ 0R0Ώe;Y+AKMĬ>!Qӭ'tu 3R/_Xg8$_ŋc3tv1WHV(Ы&LY?v!oI i2lG:cXͥ`Q)S PP爩Ãvg߮~O O ^6]x^&ob:'\XM7Ot2xnd  dϤE>1.S3 Z.q_Z1_w^h>|c '7`;E;rP*Upy|& gJQQ\gL !80aގR/db^`5벻EM)7=\ QkrZ6#jj*cp6nms˶m =}^FQ| ߀d*.d!~QS0-Jh:N,c JǡjN nzgd {O94x;nD8 Sz5o&"#Qv'rutVb3a% a;YjmfǕp cjaF;v#CJIw ~)jL܇'ZRs2:WC^{)Z>:%SQ.S0>=N%C>fVmwK Df|w,A.c!x׻}C1C,IUoi*`i;zt=!Ջ˜4M =5*GvHVlAg}@ : ܰơ\l ML{mճ댁 rfxBP2zRB QLǛ+]~ϏNC>2wNꪨ >Oi6kO Oߘ>`1s C1KJAC׫8 ĭ76AyLt=fe0#&ѼJnruHL): $1p0_"߈l#@.34\У)Dǣ 6*Y" 7͇ӭ=d GP_-D$Bz2,.0sS(g>8et0"romZ{YKxvdaUӿCL0b(xjh/t˅d≛L !(Ԯh+p .u*بF@zuJ4 
JxqlolJCQ\ ૌX32;x~t]ױ5H"}GO@t\K3˔b\(uN8%)֠é/VnLw;$JaeӟvgOM9 :S-ϒ@qyp8A@~ם)Qg;0dM^K!Zx@wqܲ ϞJY3#޺AB2՛gM ū ^[/1,0 R\ owvzD׏CK26HbƧ=(! h4^_Kw4؝xdpFzK/sTU?ڸtKq(T 9dMQP 1BPíýM!.D(=fT7z$#P9C=D,=H&|d*5J?zTZYD&9 *sdg]KSx(lCbwܥƼM(kPJGz.<>4%\r L|w3nw-%?5sZ1m=a>6l*g)5F%Dn̄iy|xg{/"t7ˠ0`޿gwJ(o**cXL`%wAcc6?ܳ=燻F7n dbЙ}фO2^^[4/ɸf ӂ~,?3w)PG[XͥuC&/c\ჽMSܲ /MıK#qmx&Wbrnq{7`OsRon#{*cXg>1FdLII]U>5uӴX%HUJ8,QS|DKuCEU𣍥&A ϏMTR9;;k*|킾!dBϠSLN<68&Ĥxl<20jjGMF&I5kqf!>*cXetI@~}s?vV5=vԱT lh*q11Uu੡ ܲ FĐ,qιI2P/:? \RҝP{`oN1W_.h*Ջ8JLL7Wx :!L:1u[<68qb)dB|/1͹_ӳJxzxR@-Bc 5r*gm Sh8YjX<=2e3tso:Ñhc;ӶCiT| KR)!plx|pF44VS$mŏ>G*:%BTWQ9 QSm*cpoxkkȨstܬV뵎mJ4| }knorJo^2xfdHKHy_R/x:EÓaά5 c(+5NQ`@<<lQ'sľ|lxmsv<>RU)t0 s\t! 0A)[`){b5q?8U)A wXͥtK`D<m[ h ZAW?ðӍbڝGSAv6tw}NuQϚF(b8C\B2Alb5"tF1V?J5PBp+g#?3_*gjc Xϵۅ $7tRA_ꗐd" "Hc6bbx(6FK;1ئ߰0kyjUT.j^)߳BC?l 6V+328\p{10nݎfk/ cHUJhoetJ 0Pa >l(rmZ.IFq3#K[ldJsy񍩫pL[#zll0@䨑P⍭ɷR{pH2&D(.BN$B:OE弧5bè7б(!=·1nHLAh1t5zo@~f||˼/AyXH&/pP_tYhI-i U)Ps$cØ/٧i~R6IBήט۱5@Mꭦpٴ VPS#_ftlˆ7p5=AV#-R cXɦ\5UCG㜣xeΙf73$J!x :G E!@81Za*>O鲹T k}P Cnte[A[UqƂ >GZ1LH^"|"?gCR<9|pMmTRZ*QH)z$5k>F>" TAOw9(ץŮ7F8ZV^Orr5!궪0|w7D>O!1ws\3썙YΈl]vj![+]nL6bn/".O'>V[:RCϾjNkGZFKR`ު6Jq2T,PB8^^_;!~ ,V"w SǦk ]KӮ˷6QU5mbjj2z -I *^X'ۖ86 !5L!SrGH "3}Fa J ?Xj /"j:iTam98!мSʝ61Jswu^ en"S)4Q, &5@y닶v`:ydT!])!U.ԡvܰp9Ɯ5B2񤥋dBgx*ƨ/5]4*cpmvM+%v稳R9zmsp`9{FTJ"p`tcnFk:\gnH2 DL;Ek>KjF/bi֍-fūMc3ylz>Fe@k0W3sg'`:L{^dGi`Jsf0TR4xҕ2g96#+:ګ.]ͥ8k/o iRpP.Nuf-6]m  IJYH;!D:a8UYcVXͥDEhv2pꎢ><0ҷrذkKdYg[P<9<~lQT>>ǧ0Dev8YhnBÅstע.{uf=K5< <̼.fwö7^ӥ΁\58%ޚKSQC~П*FHb؂M,(EƓ#4DontqlHѶţj!=N`d♾,F`{[4Rݿ٠FڎW`aS^vy]VD)fۅ>ĻHoTYO)A A£똼6>s۝=fKdlõȀ9^Y[f!{BVT.İ(nkƍN-cw<.41&2kplBS!Dوil|dG8 dҾ65_"Ⴣw,Y(ŵ3s#9\8x86Q_^>믧к]}\@p2,e{qhVq)Leb-i$sxbHW֯L)FA8%>jMGX>:8 F'QPgٴ7O=h64SQxq0])zB ne8Ԥ%xxsеQ'&t/EyhdJ,ҕ23HUJG:܃fPyTR׵\0@DКK/K#\utD d!xMPWܲ yEi4[m-я9ҭ;Ta*{;NVɾIa 4>jM^Ï6 ʨ˃NƑq's|TLJ`䨫xzx`%*BN7]t߸(Xe^P k37b*g)sS0dBgwԔhnq|/ǣ!22)Tza&pde=\ 9ףh!@uy;!Qm;Ƶ3t/S{VUEu5z<<0rQH_ZH&.Ak  7 
G-qeTα9J6ue\(mߊn/ax2D'XcӤeXG1y.lۦnlC:>G.ن&Hoc;qy؝D=V*]ȡTH"t7"RZ;e}qh_sd!xшDiVdef,vȔeb9';W&tyu*L;;58q[i?^u^a*}Jh_;8VM3omV:\xfdvIiF?wy%Hb[]xqTƎ"m0RG2s\0!<[.kG lTQ)qqK-#jc"ٝAtL)&|!܈mP+$Jʡ8go <' ӳ c3<]TR~q>~/X$2OvtwJ)ݝs:z'_x(EhcI׶Qb|*ct<ΰǏLJqW ld|ybyNv~08#^?BѮc̥>M3I A`Wr;3bKa*8լ6slt $C[ɕ+ϗ*-U-W~r*Tv9 c(Tk(T \"3r"q! ^}Ҵ>2Ý| oB)ţcxM_P_*}шsz/4ݤ9rl=e(xlOC =fkeE~_\J6t|EĠǏ@'u(AQ&gՌu ^ +˝b.ӊ0BciǸjho 왴>.592Eϗ/PDUTΑ)U)UZ]z5~n<&$B77})W;,^q.[H&^}шwlk$\y;$J4Zmు1 yUOaEnXFN\ңZiIzZh&֒E;ۡ0{[_*`tި{uNGr//Kc3jq/;պl[2y H皎ۨ*2yleN:/H qcH#A?Vp{_Z fk{82ퟟ 6Y=]H& :ܝ<2 Od*ّ)kU$HUJP9SaT خ]=VPWUPB`R#x9SGu3獖 [myx"[t< on"07ͩk&v<639lgJ??Pע">)Dž#A?F> } 6KRJ ZUse.< \{'쒌!| o[h =ۆ.LuX;:]xtphs}cHqr1bp(t LNi?&hQJ+7rPW *V)L—>J*za %˺eHlj@X(HS "^Di)i8>j9:^SUc㉡qN˳YVsZ~c!݌Bסo8QaM܏;4+%l c3p@7 JQ_@ܢ9VG)+uv ӎ!ag/"Vr \{Upp %|] };&ALEB1\uqa}] ~sTT?XU^S<T b$بޱah?cGA`'hdDa }|ӵ}IE41'6R٬5*+u 3*aDL%<20l|M1ukqO]4w8`Cޞ)ktɝl+MD2(V{\j ov&ALF0x(fJPN\?\JoV||$a4t0cE_Οj4cc8] Z#`uhd:S>uO+;HH`vKy$gS6Z'c<C1tŋc2::gQ܈ 9jó1])#S)5"N|v&RDnSv~t bAe8RŽC,pgy8Rvَfq0 >@o<=oOdL#6 &mc<+u#2- :O4XЊr#2tos2(&:vy z}xYKjӍ ;=>jArs(|h6}i|<g)!O[a jȔbFI&Hl[RGLC>s$ u5\8F:ŽC$wɉb6vs 1 Xz(3#r4$BPtOi>=CaKhce I9VL8Mm‹cvR/Z[mzӍgGUA Vj'Ͷmk0wp'6zd0k̷ Ney5m,oL)bȫYH)LEbX˘/GFL|̼΀|at/b[;Hbi?r\GpeBFpe0 w|m_CpH28%k ˿_sp 5Wo筵7c? 
b;q2%}+m90l^TpX."U)A 2Ѻϣ.d*u3#U)Pg ol[W{ZQL5,r>?L b\M&uO㵍%]Mu< b:I c8vI1sbЄ\(f F-.>>aFD!( >ǧ B\ptx<rgYk.Yާ<>FC~k>D@XG!D쮤+%%MX rB27ޡ$8@$舍JaQ#Jmc6A9WG+jCe vMEpc S0cgAKsV"%B˘w%œEsO2b|;9u2w86qHR<5BJ XH&mc4⨌3Z<<6;=GZBIA~nR()5<=K≡#!-Q0a,gQU.#R c'ꉌ4BadDw9d*c茐Ugt!6r:)VkHlÍlf{ B Z+1!<<6+"K2V){3xsV931Lv/wE6BPB2?g!jX`ȶqrCF(ŕP ż֊bլk KxDLa*Ʊ9@RNن@aMe*a.3%(gCѾn-.WL։^4*c{׶=*z(;vާw07#ø6aD)aLVXɦn D!gl!nUгQ鶬FLtǡi[T>˜/ң6VˈFl&M?v7Bnp6<>4@xlv {.zMC֮~YLo㣍]jbfqۻ{MƃxtbcVj 06 YfSHuxzx\m<>:ovt"&d!*?гfqUi' -U?,,g{<@>_4۰l-D1YrReM@蔮jT]OyÓC}{gVj|{:~f-TpcH1L a],g@_3 K.l;Q"^XһWo_zU4B2A3ݶteV`$|k~[Be "~`dQ'6K|mp5<гTOv7mEߙ뢩2۩=,RgSNIL0`SD"~$:,s$w&nZE(!Pg*sp{k14oltmn yqa0\^bnD,=L%C1݂r>PBp˃ x MLZ׮M#QLaa8J8{/3c5\=Ą?K7_!/4O&exH#4l~ŽK*=5(ג:~/õ paĆfȿ!( #A|v\hidJq54yj`#S c 3{s(FpmÉ22Yo媔Lj׏GNtZl,# ]_O'U5Qm F9@BY*Qs QQok!'Rj cAB?{=mga㿐vw_ x:^_\Ï0 Q\<6;mtO!r L\)$! oL]8sF72fd Dhy>>7!,S&p7'יj٠bߘzbJY8X˧4WzAmF uw4hRA2~s$B`tйD ]OV6-[k.w:OiyQ[z897{4Ɵq_$`>U2w I`o>!@V5%&Mf! >#ږ<1o펂4ӁH_k \8G2b/FHxrh>OK`lT:W1 |c2Te ^6;|Aw$GJ,$Cm2f*դ}.ҕf>>8Q_PotIe8 ,&0`vȄ36~h0c㥉9](hUUi۩Әļhcol`'+|#OM!8l2p FD(zxwT9(2uzp'soh~NCxwwCWJxtpnBN#U)upS}mVwMI(kxwu}ѺwcG3*~~7b o-mmTޛ/~t t q| 0;_ݏP`!X+s:*cK4)d|rcɚ~zƉH84 aq,c:yQͭ2\)ny%؉?T+ ŽCh1XΎ#>9בJ ѳi }P㿄1s{/ IMUz QIn=kF y|M" F0ju B $7qڡ2\ KC_X8$S0ؙ= T,r,cF yVp;G/횢my"@ppg?DžgCN7b./]Kg4}Qw9"Bمd-:ӈY43i0N~*g P{:Ksp{%kjٞ gɱ虗;YبWa*:Dxsio-#>ų3\Q! 
 ?oe!BPv`!r%4sGe mb픝8vKyx~tv* lCGq;mQV֫Le\-ej)HVTU.ԅx:_ {ۻ{8,Yx 5=B26\.0m#݁S*cxwwo~5SӖDD(; z6b=E'9)Uja(+uLJL^%J1=S# |A0p.e[iة@s9sJ_~/SE#vF.I%IJyJx>XF]aH ܟu|k?J⡱!<7;o} C1j,t{!(۰L<n۹d|87ܹS,*!x : e<L%0ѕ2h}˜/+e:t-t׌ei?YCrPKP{k[xom 0@|0bxc >;SՅdOv#e{~^F3H 2US|`9\L)aTqJ246{E(Őǧ$% LrMmr+pZQC^MUΰKc5##ΈJJ(!Kd u[)UƐŏa''Ѵpz2g>ս0 aLd.[WRV7E,PH,JFciUX,+˱YE4 erCsůL(O&<=:mIGgm  ]S\*Gfb1OkՉ1).}]%>k~U]К8 ]D$A={'m:+7((if;oԴxF,t7hzz-I`Ql] y$v6!'\fX|tkQ"(ӌ/.zifRS\BWU--eXથuW62?f پ { ͈b yyaj4{4VpcKǫ|8=ֱ۪Sלȼ83>",z^BgmuҶ_0sSp6Za4k,Tt誮[c6FX,"%BV^q=񱳶]M@"bىNᚶVP[\ʜSƧ@j3hku;^b&M_\V;t!$cĸe`ʼn3U׿8y4|^,Fҍ+WZ130fZ鬪p`,BgoG5#d4\^qMj*.gY~ڭVZʫh,# _ZŞ& md(ᅡq]f֗ KI%ğs~} ʗQBO_o=F m6Z+VJ;mv7h"O$] svnY4XZюKgFtJ (+,ed\FcF=#a,X(UUK]I銡Խds%Wj_v6pmC+?hJmI)UE%`Pr% l`v|Y3~.NϱaC=Z*)̬ޟw9MCʗ`XyKeaٚT٬Vnhh {U1BiDs=,*_(QUץ&56 1K&(wJ骪5(!(y*o0=};4eo3N.Z,/1uk|Mhlgo}ū&!Qg~6/g_ќ/)z|_o80)"r3ɏn.4A )P=5ꨬYյZfX4+esV6۪j)}a!f6;jVjvϷ勏sڵ-"ֳ#I?oX訬6sz,tiS{FիډX,F8!a^#HK]Jl6Ϭhe:-iYH+]LМ?{?ծmfL/$y3Tw~ eO_ofvTei+EQ\肇`4b^XgM{CN4}Xܒ2,TQS\JՆSb7+d3Qb,ΆiۺyzlT K_ zv|FD*kw*  k zz+\BV;l@ 0HsY76wd|j0eJ3jJ6f53anO(?ۼZKZkE9|i##BeGPniXM{E@ ,@Y/)4˵'#(G'G e1b<71mm*>mGzJ&| -XRYtEQ, 񅖉b h*+^*TUY*Hww@[E5-U`XZg9]!NLH*NL$ ԗ10\_O_oLD<%NLlLZ8cmǦFoףO&/ , i-d4ž]MsY鳶WK$%q=àgWZ,Wi #{z=GMF%ӎ.Nq ҾfwnbPₕP{E@i%)8{X6.obVNw.;U UwjGQ=x o~T|ɮڦ5)H4b0#8770 >ѡ>sĈUƈtky媞Dy~p?0_y¤H(Ǧi-2;`ꀔ|ieO_.zveڲ>o_p]eSe nbZUGyaQFD4Qɑ#vng;O xM֫\I}I7uJur]⋏3LzM ""&I>K]h\VaV7m/H laKU-j05ˡ\(YI 812k[VlTZAj>m2Pvk+Z)̂ b%Fe2 e~e3?%eoS3I%ghp|B6-e8j(dwFO~Nk2yv5aX[G>kbW]Vo}"NL̥aMi#IjAѭ=}].s 6edQ*Ԟ**Ψ4OmI)X 땃YʺܷEoi"01'r l6Ýˑh3\t|30g;OwU-Էdl+SCY(a1L59@tyv`Flmu1]dһHSem+Jg?V6(6jPhQWR*/n@Kyee.^]a՚&#E/$Y[WTRV$(gg'_&_3baw]k0- Pz##4̩)"IVq|xn_u%eXmf ۔[eԮbNx[m8jӺ Z,g1|bjn| ݳ/;;h;KOKO<ljII;:qՒ|Y-UNdMeJum.\NTYY5VW,^[RL"lG\4&/0?"h%YO Cs'OdO`l`^>m$1PNw[-V^G 6} i,niBEagtWf7GM}GgclҟZ+Y,Vե}}ci96k1|4xiJDDD 寪ìo6U-`Ԯ>Nb٬V4s]S;Wmc!~4=JʒN3ۭVv6f4VW\J[yu6@pBɄѴOZ,WTQӀ-)p4 GG862Q!35|뒶ytil.sS4B 0 jkd.g&a /-X,/XX,R$Bp +jKS8sÆ7AWYtTVYUʹ%,PJky%զל{8?‰IխS)""cpM `x 
v`'p6Y)Ptj1ڒ2(XѸ$-]^8sK/Z,WrmC+vTj,kZi)J{C\ˍT'jxlFcC'G820r?""@4,۴Ur~@w0jPl8bj3?F xzlUc# f~NvJZ,xE#XmUTV^E &dX(Ysd` >U.""L@Y]TBrl-Wm5=}E<4-vUֲmm:DQpxKY%74ň$FKILgTQ:‘(ǧ8:8FZED$H䈙@eF $ď[#fJMDbR-]MEr|xB""R(ଛ +\Vi&P ؽYfe,.23f0LH(srtcZ)""rnb&iL!3܍e޸ۨAUQ192bMC3vU4S({8:8Iک-""Ww~r;wBb<ˆ~']Iy('o3jW_gwp4ssS,& W_,(+,6+ qՍSf}%*KV~՗ :ʼ\?Ytw$Z<ha<ǧ^ό rSK'M+.8[ml璉SfJ *έ@F971ñq.LIY}+/)7sۦXGf~2`Ȣ`h,j;,.yqzgW>8M]7!촕W1InO]"ݘ˱qzG'CfJ|f5Hud8e9# MQVPȶ:T+['ݡT͉ .MkJ[DDnX*uv0un ]?F/pd<ht:Xck6RYS\zfU,*LnYTlhrnZ,k>*F05lj NLbL̳eϧ,v-{L#k "(ϥ&/C⦖N억)MdXhbdjY9n^]*'dK,chÉ NMi]@:J nfOQ.gҞZ/D,٭64Ej5G#NNqbtp*ADDdUϸ~>z^垼 =}vVvfOZ-0˨O3k^56;X_b@[yv-'wbfdLU4/Q]uvÙgmv?Hf' y\Sj[حl_5kʻɱ)NN2)""9h`fYYG 8Ů|כiTWRjfPCHk\W >NNrrli?řiMgȆb\NpWկ\euLd,9k-204gǧ.m-snAʋ WlS]\TKy(rK%P6'hht] `8…YNLsnb@(/&4iE%W=U\gf2rP^J^ bu@iXh.|U T8jt7YOpqzNED$/  3ʩ,u+gk45\W[mlO;P +)]3-12 'f.wDDDVݘ;?AQ03jPj/0!ʫ]3+n< 75fuZl:cOacu?d_"_/Y\N8eҟ2 p]SťfCN4c}BH>.-kcNy({zmvlȹj妖NM%]y`ea16P]\q<%.LrajKsP#"" Yژs 7]v_TrGm#;k[p3?$bڨ+)Q@yak #sajs )""b Ksʀ,u+'c4!')+]qlł X1V[F϶P$Ьygs{h[DDĴquUmQqrJؐłRx̕@HṗK$)""1O"{s'|-[(5d;W= ͺ韙gdK8NI'`bcNQ@;(w5Jds?f=Lx[DDdm̩**6S0l4y({z-F* 7VbL/u34ap΍ۿt 6TMWO_o\ZY^J`&Jf,.k@DrѬѬ(saYXmI%Rh\UN[u%ՕTS[^ݚOڗ?bzǘ˘{1E?*%"hʞX1>9/QKl5jPl0:FCYl6+͕VW$de:MX _  D #ߋDX,6+v͊f{_(*S^RHYIoҲ-ul~a &˜/)"k" FW(X,  3F/2ދ*誯J+W=<Cf=>f=><>~,,kVFYIыe%TPWUJmeueT`fno)fepyvȪKAKWlSl !Ɓ`-W;ee%tUjjJPY3 ̸g | E{klVj*J*{񣶪ƚrZꫨ.Òa0/. "(gu3،b_7`(s/}BCMUliek[ݭuT<*BKu-ܶh,ȼY.L1j f({$h2~1Ki{UloȎ*lril9f#M^Ƙ213/~}uݭpٝe%}bj۽ rs\LU~dN3MP=}@Q;âJec+T퍵(½`i Nqax9|vyl`&:SZYVTȵ\L,c³H,&ghRD^h5=}.sC?H6|*4*T M]- nn&@s$skZR35"PVRȎFvnidWW]-LǴ\1=~.9;1Ùi.{@DQB90g*C4<0 ZJ!ʴv /830فIL12 3r΍P\hή&vu5"njㆮ6CaOrf|Y)IOa C24#z lT-Uii`wK_L4ox}^g`|N2O-^8En¹}6j}q5lk"riz3ӜƧ"F0{Te(2BiM!z]yiz|^D.MצMii9gGxl|k8BgۭVv4ճ7vqizIΌMH[67†GCrL>JSf1WSZ5\ڛH$فIroi*R6˻ɿY 6vniNs]VuloMF雚wts3:G$Pd+h2~VZ:P榊B&BdGfY9qagxvbKJB'/s85Tqֶ:6+K1g'99:IԬ6 `X[co((s0;h,уRh}tפtNcF8zf FAct贇o>qR;صpxb9@() 34r,Œ>lV+a㌡)`j2N? 
U_Á6Rh7]jY q0熦jGV٬#sg;.K ~Koicv )J"Q08ɿ{ʲqe04uS[Vw4tg1ᓃ<;ői)[b S/\.Q\h]¹հ$Q]y)ƽ?3 V L4A#90R UdqM[;ZRWm:_`#xw3 sD?NS^R {:f֕, [jPC׆S θu:HJ|ѠVF(s7*n޶F l榴.Oq3PDrb G/ Tp5[. I+9ʁV}qlp% @2ޔAnj5h}cf>IIsǁmrMe%+^cT_1 ,aEj< F Pj|v뷴Qo2Ocǘ GfBY`[uPQf~ƽ#/S8rj#\Ãk)_}m:ekgƧyCs5H3\6gnR2da;K=1E΍]m\3|Y?;L4s '#siVmmdd956C" $2Ӛg[ulo2<[;to?a@i͐V+׶7s۶+ CyԐvkȫ\ {Kڶn疭Ι)^bd޻=Y_FRF$0|F ±_"k(-,Ʈ6nn?/Fc<n:ù5ꡈld /_Oxݭij:&618!ΎOoz"&e0亍^mSP֖p۶Nw{i9.g25:"* q>xZ#EdMy֓'6^nꮾnr6l,O]dpֽYY oʱZ,جΧ;&K "xΏ!hڷ?;7[vQ]Qrն%ܵ۷ur|dCGrQ Jݠ)`*nY lVtr۶Í6h#S\]~_ ȷ8 Pc(7(Nj۷mE}to>~{op[wQ]q5%ܹ۶w;2 CL-ָDzU_˖ GxV:)PJ`)$P7m\ʭ:.mh#")?u>}[xWcox}im1-䱋#29""-rx?wk3mvؾ~/3F8:ur&A2b,Cb[c53n{ ?eS""N]ԥ xm{n V8"nݼ0<3Y@(ʆȓ@'.A{Eu82wCD%"YTsߍ;T'm蛚K\C?e%w:ʧ,ynb6 .D _F(!nh43|3훈qInnC[WX,hgGS= >4 "!/g4D%`J3_rU!r< Gx~Zymqno]}CEoڷwoGְǒj *l_rԨ/_Ŀ""*8qaj^{,K xc m\Q!,y\4 eȧ@g`1[V\H0ZDD$ Orq;T\baok{[,p#߄V+Ufne]6| BDc1U?oZh`lړΉH,.񟏝࿟PB|8ikhdIyI߾XwaoݺHZyyt7ͻ8k 6njㆮ6F=ؤ6:)o#y3 (_zJYY,ewN~;+HT^SqplxFuf(Va_PU%qG??-.y[/),mܶKs<70ʙi"Za)/J~Jf 0P!(vA|cΥ٬vLDDW0/qt4-;q축5[jP2dž'x~pY_` {-0ڐF@L}q|e^(e|#_w7VMUeOw7DU4}Tq ;.[}yqw8pc 3Z"A)Va\(s{"x}x<(E%|$zwu51 &Oȉ,y#\[.+d_DDDRx(/{wpW<k-o-[;ptpޑIյ\o-I?`.Pμ*W2<ҳxVlی57۟WD͸}_?? 
;N^Ǧ864[`2FB^MwC~/z*/9J:dj2$r OZ_];cV*ʊWn@g+:[86جV5ֲ1rf|3,ger6`ϭ@Ɯ/ynrDDDr׷ģGT_]h.- 5UT,879YfDbqdLiDtCqLly7Y;orjʙU""G+E[-:j設XXZg85/FݳF'ArC{XUlrDDD6&R{xw[[~w꠲,eEl@g XqB"`14&͝K)<䪼.s*)M{(Q^8?8:8};hJ.X,VWZ]ɝ;# sqj sL/_-ڲJ~2+uLxlnfjB&""b燦8?4??r2hcvt7SXev;T23?fַvO^?XV:6C|L#3(3nߋ; lnvh}*p7l/?韙g`62|NIKaIl3#deQ#85Ts{ lka&ӣ%Ehf_G<`zK38fp?k;ȍvxkK>P^d팾)jh`bv!t'OSX`cgg#loŹ˪JCn< ιw/FSc]Y Y?ypS2q umLm{5 E8Nq=(m-/~T'`|v4p$ʨМY#&]^ùbUe6S4dvjq/xK<%,hkfwwDEiQJ۬lfK]5w87 02atȼqϫG1w6'Z?r%PZG9[H(wu5QZ\)GErKR"*)7#Sn9|%rwK0֖P[Vµ>hI"#^FL/RW>'aJ('|^b+٬\hށUWQ^'߼=Y_a%DE,C MgbXlaOw;4 կdZ_I~ĘS)O& `("aj_P캝 """9 18>9tJ lHK3r/X4}$/6S&{F&| I嵎6lVKN %"""q3^g>&_馻aoi4 FGS M菈&iۙfSJ4j7!&ms۵UHn3qD-u9F)e5E#Lڮk.DDD$wY,n6y/2`JF6vRΕ Q7+1*cזF ?b"fưU[RLQm@r8On·@0f[mS %E~""":nߗ|Z019y.+ڀ67x' Mܴw W)""mܴ3iE$eiG'ABQ"""m&mcr;J`6FDDD$nP 2cP:.s"+ڠ6+| xGmÊh@๒} xi7z?};ڒ1G}eܡ N2^>诀dInkypzq=^O {.""Vgv[Z~L"wzp.e\³}7VDDDdwΤ,?p.fSئ ީ5V)UkLCJ'7t7(P^}`ƨѰwްt7iRDD$NFc1n3B6<Jp6j 3&msN+.uR]Q ;yY )PTA\mVn{V:$"""ulqW̯@g>FSE|5)q6%[,Y-s#i_p)=6}l+(P&%3m޹Vr`N%vuP[݀_'@JV&%?h EQfn |%} Wp93 F"-&ߜwk3-ꚈdZy F"fno Wf]r&bQsq~F+ =6j4ǽH(,e_"""B;wߖ{)ܒNѯ|@ m%Ow2Ņq 7;o0ig*(Q7A§ݾW%DDD։jIf96{ԢxFWP j.1I[]阈g7I zVNJLKYo7q  R)tޤm槻MeHr.Q@8d8LXB"""kvYp:QiiwRLO4ꛟ&f0J;JDDDgo,o~LeJ2oFLlKDDDm}^fn7w6(p9aO̴=??QJ;MNΙLdYFWJWKm%"""WRs{k63(Id}LҀ\䮮&aQ"""ht?KdIBҜ? 
L cqOa=,IOK}%7Lf~t`\)Pr8/i{~~*V ܼ3Wx5{P\LƝ(`XjlBp)iۮfz;#1ۚBpqBğb};Qើޯ4j77u+2٬4h[d)X%EkߑuXu"ۏ͖|,ɯΑ;I(PI҇UHiAmw40:6/"UOyz>{‚T=?UZM{􇂆%Āg[Sr8a.'_raX"YꙈ`ڔg09g@kѠw_(Mw['N dO"""ޭ\%i_(Ȁg-M=% )r9GGEc1No9"""X\g$1sw9gܱMFk( /^SOUQɊmnܷgfDd}#_M~rFVUY~S nܳmuIxfNF'Ӣ@ǁg[ֶmW"x>< "f;=cq6)o3L;Մ:(""bƃ좺u`4[`noq61 c7̴==kp}X,NDDd(/5gv81gj"Ur8lm :-M5<蚈Ȇ4FR8r8d1y+pst>ymymbǧGrYV*q93Ɲ_ 0dA7=XmVZ\Ovy f.Ӱ^Gru}8j)t[[xED$Ͻu7P[YMq6~(Wt ݠɏh[DD򗙩n3ڈS(Wvj[DDNiq~q,+Hrm2`H869B8Mf{WED$uNuQ1{KmY# k :PӳkB4-""T J,Lr 𼙆ܳMTKbrədNr$'7CO_X/w76n˵Vvf&CfoG.`Fh{&=sLϾL%"" } ŧM0?׍:I ϓoV+,,w~X*%$""l$ͩ@(Pİi8>5fnxσvMDDdU|u+eO2Ey(IFrF`Q7 tV$mw͑9{1K]IC UUeOw76n ; y^p9tK@r@O_01b||j* y?{^dou^on!q̣4ս4\ʹĢ&bpNiIb՗XDDWqQ?;(0X&q1}p>q%cJsć yKΌۿL%""|4W!\2{۳ğ(sDb@L&֗|u:'""έvYp35]"(pN(s<|lcSLkǿ!*URDD؎M|e^0n3gGiF92>D4|=eII_PPT* uÄ \῁?ΰ{e 9pƀ0lBtDDdög'p/ޤ=B|WNQ1*\lO_Oc"_tR_RNKyi;o>kd""+/)Z2f6`-Rkn|Eٗr9/@\=}~L'Gh;ɧ?97ʱçMMUOyz>{‚9i7k7CA7"\S\ kGiF8<>hj }47f7~w?v>vhg E#fog(r8#K MiY^abV?RRw"""hoco\!p*&'Nӻvd)P890yl˹)vuo~;#E»隸)for8M  pw_3EŴW%mwͮN>3oLUCo_^UY~SlV ?;i0l;ᬉA+a*P8ppG'F(((yɆ7F_t("yx>< f 7gka;réI7헬-Myo.3 &SFbag]iw=;v[yflCqK:X֟r8G-pg'Owo1Ƣ<;1D 2{ ÙY(741g>NLjm vMDD6vl.qbj@JK>x@ο#~)9MDPP`gM~⎈l^;:݁Q9s.+iwN֍ifcڿhخ㟡""4U}{zNrIo(7Bw}fǀg.mj拿Q (""ǜϽ.La]^T*ҝީM8r8灇SbC(8~*|."", #V94OQ aΠ(78yH7K0FF޸w`1NFDDR?ƅ˗#g̒gLBxysQ֟Bp9!Ҕ2O 6*""y(h1nT^Ӊglp W6_ pd|"zd7ـ?toc.qd|@*Mrep91#f/plrP7 z(""ɽg.qlrIDph%y@2%t.fx Sm?S7&މFk_g ܩ"3Keq9k)pa~T77ݖfDD$׽M }M??&&UG(X0{əq7>rKDD6p æ{99cn+ax}%yF2Oc 0.:>s3xBH {IAx6IRc.QམQy4 ?CA }m1Spb{$S yp #O 2jK?F 3ό 1Q ?r85Ɇ@ ?~lH,3̚ coRI! ـGSr8<Ɇ@y|[ñ(O 07?oy=:QGDd#[q?7j> |.ƣ@I$~ k(F7*?~aՙ_shqW/U|Гq9aSf Gg.`.Tw~'V-^jmɻrs?GSOJk"ό >Q_<^HM-Sg4pLPo/ Ιr4{wEy _31Yҽk Ta|:a09N%(P .sCe 8:9…ysoH 47?4^'EDĴ7Kq _H*)M^J*TVHD$S%|?ƛ_whc /S}äL(Pʫ$v u^"1w85:ʗ>ES{cʗ>5;;MFĤpW8vs (%c<1| (h}Mu9׿K[ Ԙ,ybK)d0:rU iL2O\Ļdj~X-)"bb<]^,S} 8ޒT<uK0O\4}}? 
p$ Ͼ~Lyr"K&$]obDR 0S.rhKYee7q""WQYW_'ݦb1.g94O(s~ h r8c.ρڷ1ǧƈ<ͧغ{KzC[wok):LƢXcϺϹ/JRRr8 x'`nM€w,ͽ-))⯿q~aTD6/ʏ>50GΥrA.SP67=%e._ \7<|(Uv0䓲r?Gk<?՗[^r85 E(%-.QT6%=ٿݧy:)"v_OookF<<1|@8Mw%~LRr8n.r]$!NެSZRğGx^ծ#E$Y6gSܱX"b.c^(rdp^$*zٹ)L >︛?ǩLDDr^Em%~nׄQL qv.ӹX2Jɘ$>T[E'v4ް+՗Yn7j7[ܝ.J ùE bg ]`<|>O [=Ί[iu^7hT+DOHɘĒ5ze_= |0=/F8<>ȶ:7c{o[fg븈:ow}tw6&rjvKt9MbhRp~KڋY/h&ozXmڰ#"j_J)L<9r)0yYaRV yx~׺<6|wJ}oO%EDL}K=g?PJ׍.ylj^{-.\:QUr8灇/zm|0/LIackF+E$']9*uh&G921RU+|xcgȪJYU.3 |$W@a*x[scKŦO=ﻎ״RD]}K=$KNg ďQt.IF(eMο!~CJ%77$w w+LZQ5r87Ozm$N FRVk+Ed=V2ptb'G9?cw9OsH:(eM^ශR/ytibJ׽lRG7*/`)nJLr8GӹHR\b]g{z| 0_3Cح'~!m)[D|AKSMJׅNLK'w{LhR֍A:{xtG+[j/4Q^SKLyMQ #>IYO gȾUR<⣕Os|j,rw߶گ("iy[߿+}ޔ G#h?p(NqzST֝$u-{{f/p]c妯ZGȻR}iل"([|o:g" ]4ɳC|>r GG%د0)DRr| H%, 1|흷߾waߍX-tADjs{;EjM9>5S#v,p#ÙVqJդ!Y.HO_7+O PgE{RYb>6U.L\<3FLٰ,}wsoe[WJb z9=3A0 {զ@)9-zz{`GF"05ʠw} mTt& ?/S킈l0]-|o:gwZ׻e~) x|&Myˆ8Bl?0|/?sA$U÷߅nKhKYMlz;1k.ӟɍD֛lx?O>hS3\rϲ4W6~''^[L"뭺w.~}X,ƐwsS?沣ϸÙH$(PJp9{zo> _&!Mr=Þf*RoM:uvX4ž]tGD2umbL8=;Bp9.|pf4)K(%$~@uO_K<\5 \ ťo,>ɳCt69yIEV:Kkyf>NL0t 2+#fE~4 knS#h*`O}3UEMa_?G9}~?wjX4nD,V+]̇bώYpzfIB6gq3\@)yp>{ ҮH>_`rhjv5QVP}h?QgWS,tHJ'~4Uft/_(șIFjz[lN /YUU4eC]%??cuIMa&yҫy%(H 9b#C|z{6Ήl ids|M۲)PʦDʟ~>/>/ %m,rC/wLJ}?|Q  9%nñ}ܸ[b|f:0 .Sdɦ@)ZG=}bL.RSTŽƴX^vӁt`;2>$[JX,li{7BIIQbM3 p>ά slT "sGrTik.O$O=5I7ErBMc =?~uԾ,1o~:/  ME6:J+ gzz-s!#-U52PW'?C\G=r/2a'~]Y`sr$̠g~l6HׁϺ΋ټH>P yxWO_L8=;ٹI+ZUGuqI}̯;xsL%B-mEvn]s;zoRKYFD${/cټH>QI"y]O_=ߚ=CyJlVkzN(?xG=Ӄ,-f|lHJ*Jٹ{ {pbehqKlJOr8E .󱞾ہ|`6;gnOƖZj>JVǃw!#pyOePP~\3\+1'zxp_ "&%,^oԝ˂}OXZNWU-em5;;_|Oxa9|I"جom[7ܜWŘy1Zٟbr8f"NR$ zzwK=M/RlQYM{E5UEA> Cǹt~eyp/\dx#On"l>|<훋l "H<~ׁO?Eu,/[雟o~b:*iJ),,-&. 
L9v"Y+"y-tNflvkCAF<,W华E9/ (PdA=} |9:[85; KhB-[/woK[.ǏsEF&07’":ؿܹ֖U}`$آ7U+9Z/"(PdQ?>J7J3^ ,x}KA"X@i+SX\HiY TTP_WIflc׶6Z#f_'r%}G#"OR$$F-o _׮** W,38:4LNv/   EDa(U,PZVv+6{WP]]N}}M մ6Vϖ:JJl g|x״  "9xxw\ -`Gx}{x~,, G"Ģ1?bQHՂbj`ZlTVpN,6 VҒ"K,˩ts5Z1J x|p?y^0K)iv> F #G"{T7R$)Pl0=}ħ\^(P]\BuQ⣸D!s F¸Ku|xXώHj(E6]Otoo2sѓ^Sr8ϭs_D$M "y x0p79rpA(,"pT@8e1/Bpaq\FDBR$%x|P=z{e'fya!Eё"Y -2e¹QNoSq@)zz ?ؾ=J&F2 )Pb/f^@͎j]%FYY X C,C,B?s&so/8IWDCRdIԹI@zvHD֏芀yyy0%e ""JYQO_o/T<!^ ׳C"(EĴ] _c$s~gFDRڸz,[n|1y^ u핈lh "uoWu^y<("٦@)"k8+?ֱk$ʏK.si=;&">WM$>*֯wjO|}tꌈJzzy)\+ubܼV.sq:("bD-GV^1`ʇ-qQįW~oQ5E$(PHF(EDDD$# """"JɈdDRDDDD2@)""""Q(PHF(EDDD$# """"JɈdDRDDDD2@)""""Q(PHF(EDDD$#Jzz3ѴpnoDDd%(PHF(EDDD$# """"JɈdDRDDDD2@)""""Q(PHF(EDDD$# """"JɈdDRDDDD2@)""""Q(PHF(EDDD$# """"JɈdDRDDDD2@)""""Q(PHF(EDDD$# """"JɈdDRDDDD2@)""""Q(PHF(EDDD$# """"JɈdľWm-@P`XƁs.stzz-@+#kyX"L}ጬOoWO_o.aCn`.z7%:5@P)0r8l%Y%~}(1}OZO_opp69p8K=}~F&`7Pw$sO97 /g6\grlSJ x hU)bv9r$B=^)"K?Llp"3W}  ׏6嬑ޛ?6aўޞޭkzz{zڄI =}5ʗޢ;m?RB Q,SP HA*A7#Jo(M⛦_aP-cB%̑}^{_~>ik{^>+?b_` o9Xc-B+uؗIcbNĜڜӶ1K4mU1]wgbN+b7i'fg,9ma}`?. (;ss:ڨamU=kއf]Of9mCiјn٬z/cN0bNĜ>|vS*zp(puiQ[ŜNN6Sm}1Ĝ^`9y,۬嗮PR1cNGbfmy zC1serY1yiX2}a>~s:ZM#s8lw {(YQs#dNV&[bU? m㙏MbN_Ǚߘ&.um2 lSjׅ~G}9SVL߈9:NcNalf5 \sZcNovZOV~5.AeigQ,`z9A;;E'@5֟b8m[p֚<\_KQf޷`nJ). 
sj}\ژyfjx ӘӞ}JfƊߏ].cT2C wnƂӃ׼xX_{c{xmc1ta ?85f2൵)sz6;W"lީ]9},ȍbN`5\';3m{4Pͭ3ur@ebN|{&ycNʒK*1a\nwNTHڗ;ateAǜx,;6t9(8_gbhU8(&V5Ցu (77jJLŜF[R,m4߿de=O-lTPVXiRsi4Gt*9AZ]_aGVp6tiyr_kݴ0uk\RkH߃ X~Ae,lRdYDZ2PTp\,n1) z/h:ȉ?]M뤦i\/gѼ Dڷ6>}0T1(r VKq}م1qR@e{Jw!֦7bP'j(;rXImVz5X%돋S9 `|:mX<=ŶNcy佫K1JӤ&69SmŎuu N9og^UqqYa;ĥQu̘NVZRkE7ՅNݠ^Tu>K7:\8l}ՆF<&;G}jc0)lIpʊ@϶6cXn^1[.+h6%HgP9gA:9L.דcDvK3`0V0(U:+XXƶs@gǜpNӪ y\خUv/L]![yVRcv28s,8_:C1ŜΧlq *3e l kWWJjEegǜNuê/ ەh4Ꝏ*`|讏V?+0]UhE/K߻gƜ.^[|{9q\ (ƜڹS@پq5 xeCV,||n.us ?sz[cNsZ;v:cw=}Fqa4N R1 `b^z0gre尋3pcC(ʖs8vfVVs'y%zEJ?O9H1 ;/0l`g3uPƜ֋9 F{-+=>~'PwgZ=:C)=R@ٱjͦ ?yI5WƜ6kuky9;HKc͖>u.o>`?Ly Uma ː(7| .pA16N PmbNO9Tr+}n*{)"P`0D;Ɯx 9 jlsF1ًQQ'QVd2wbWc9d6k23Ĝ}jT1/}yv#< UhgT:= ~(4Z̀Ĝ Z`W-9-5w"p.v؜RT (KjnTuRD桀gة8 "vN ii]ܘ&լi]J9.`Q׃Ne{-[]1ҒD+`>ES@9bc3N'T3/cN 8̚Vn N}VŜvZCJ;Y1sz%lž ;s2k ڣfvp" #yCUh3N4["OJq] ;GR _y/4;=Srاt^ެ]"i ?ep`%cN+ռl㺃YH|XF[LcXS=g r]{)"2c.8yp~ol/H5js%z6C)-@KJ3ݗTG賂E'8yj^W̩NÕ͈nsZXW+Ap.S憾*4j o| -mK z/Ed. (ָly`}<\\tQjGH =Ojӆ?`7ja>F)~^\PNQ/߯qIly+=};@C;ɀݪ'6y>FDa\&2%)\sz}]kX:4D#)O8=8UVzTm݅H)?NId1bN9ŃQK&L>rbNYϧ1􉖻z]Ōw٘SbN#[=.`36/ >8t$iwGt8ʖU Ŗ8&JvlOd_!T71gO?/|.-uW,0(Z곶ͱ=gkgxXT|;P7V#%a`а쒈̇ʖĜ9}+2ғbNaL+Q~R'87%soT<xMv~sj]-\|jM ^4Gnr9mՂ,_wQAp |6.2^a5.ptGY)lA5+Ki&pz9HO09a__^/Ĝ96 Vmɏm-t[zJXPOC]דK^\8 obN[c:{hH`u^1}Ljf[Ϥ?)lʑDsحMMUgP &}">lfg/iBPLFbN%Fqr5bN5Ӿؿ?hH+ {ԼИ1Ĝ9|ziHwP6P|9wRo\TmxL[|s& ;WƜic 9s: _3v\pή؉ _YĜ9Kw+xswbNU Csznl`ZK^Ro~1M:|Ŝ6VB;TP6xW˞-vӓbNG?|^m!8pHV6Osud1GukI GݲӀcNG9 1-eAu Y1 @,v1C̭9-s v^]muJM51³>ﳳߘ1c ~*"5)l(82?X&bNo^X5>5/d㔗ێ1"ōŜV6K݈ekGO !66Ĝ^s:8XgMΟ[xx3Ēn9}UӚ1@w/quupTR϶(s>{ q;_yjCEJ1E=uk9 ےcWbKQ 8 pgUjoy4Kmטs/- `k}` um@V_[ o%lS1S Lvai!*sPJ-hә?bN;~˖67t-S,MV"om;`s/t+*}dWEOꓰwǜҀ.)&J 9^b,Wc۵L#2Ϫ؇lxSW;VPFܞ,P)c3ceݛ5m61|#) }{$wLX K+`.kS 6Kڹ4LpjϰUU?bجX 1.v&8_ZZcNa'ϼe'ǫь ?{KcbruNP$8?#yc3Z9]y-]H2 ;(vOK}p!"X F ÚA`yfJfFT1Mb Z_Eժa8x[p!@GbNołh5 xn3 G*OȐ(EkQu|i=ƂagV  &gj njxCpy{ɲaYoV GYۍJ09[p~Vp@,Mbxcp &EOe˪|7mnh5KQ?x~p |6vZ&`JVmw8 vY|P/ 6AUfGQ6ΏüȔՓ!Kv~ؗu} "8 ~:eehFlV^fPқa_}9xYp~3Kf:*r y1Xdnl\ m 3m>]lߡz8(PpزX.R6 8_f+w?bZ 
ΟQ}aopVn6gm}\Zk3o?kOƂ_f?ǡ\mz3"2dڔӱjc1]{`AC +VYݼL tnJ~W'W}ŀm-5M K#8iU} ;y;44(] ({V `K`3RkbY\ vlzɒX9l ݗcei .z~WoȻs]&{$l6n,?ҾeӪXau`ipFsVFl?sz&v޷kb߻1ߘs/.+YDS@)""""hS4RDDDDQ@)""""(FPH# (EDDD""""҈JiD4RDDDDQ@)""""(FPH# (EDDD""""҈JiD4RDDDDQ@)""""(FPH# (EDDD""""҈JiD4RDDDDQ@)""""(FPH# (EDDD""""҈JiD4RDDDDQ@)""""(FPH# (EDDD""""҈JiD4RDDDDQ@)""""(FPH# (EDDD""""҈JiD4RDDDDQ@)""""(FPH# (EDDD""""҈JiD4RDDDDQ@)""""(FPH# (EDDD""""҈JiD4RDDDDQ@)""""(FPH#X_k? IENDB`seaborn-0.11.2/doc/_static/logo-tall-darkbg.svg000066400000000000000000010346231410631356500212770ustar00rootroot00000000000000 2020-09-07T14:14:01.511527 image/svg+xml Matplotlib v3.3.1, https://matplotlib.org/ seaborn-0.11.2/doc/_static/logo-tall-lightbg.png000066400000000000000000002427001410631356500214460ustar00rootroot00000000000000PNG  IHDR8 9tEXtSoftwareMatplotlib version3.3.1, https://matplotlib.org/ pHYs.#.#x?vIDATxwx$]snFi6z앳`|p1 l1c` pIƲ770YQΩ:Sխff\ٝC]p0H$D"H EH$D"H.lH$D"D"H$R"H$DRRPJ$D"HB JD"H$IQHA)H$D") )(%D"H$E!D"H$(H$D"D"H$R"H$DRRPJ$D"HB JD"H$IQHA)H$D") )(%D"H$E!D"H$(H$D"D"H$R"H$DRRPJ$D"HB JD"H$IQHA)H$D") )(%D"H$E!D"H$(H$D"D"H$R"H$DRRPJ$D"HB JD"H$IQHA)H$D") n/@"HJIwO sẇ~D83nw$YW@"Hv0^D"䥻7oiZMX;۵J$EH${t!e7(p89wda&H$&RPJ$tY8fjvqi2S!07;ͅI$K)(%IՀu嘢PvqipS\<<;$Ň({zAL$Y^YyHz7%H.lH$ɈC/,I"S"XE JD^7p#-Hx<7^׽wnUSQEQP7074 .H$R%%Y]KOXx:{|ZK$.RPJ$= fͻn.P񸨬 Q RY"DEʊ ^Lk_K[ev.s?gbf? u8|G`$K)(%^'p|'ET+il0uXQs2jb]d.13,3ϢbScG"R" aȷ]]TWij*ij6i=LL3224#3430wEJ$‘R"Hmnj@ pLjktu5S<6Tv{i%k$cc8'O295K˒;2H$%@ J {+vw5PWWAWg#4B!n/K,.p覟^WߑcDRRPJ$=?9pW&w5H((\H,V9uj'G9yj'G]?;2 H$R" ;ڲ( _嗷Ix[+N)._ߑN/@"C Jd~ |7;hsv ]HRxknߑN\"XC Jd.@NW HUbU^ݽ Ϙ'ϥDwR"tV~ډsVTCxALaK [Wyc7&ZI$ɧēzO:WKtdgR"!{z/>|(9c(r͗qݵ/1((KJ?OgZYߑy"Db"DRF{zk> U^tǕk64M/t!P^]afmEs(C{c,u]C/ȣ[^+s*'. )(%2YrtpMk\{bҢd  .O>+ZMGe4<{?}Tlo௤DRzHJHwOo5)EUs5~DTGIh5*j̲ƣx_,f;\")RPJ$% Ӕ{<(0 )tIu810,uQBpbnSpySsNU[хS,%׽on$kofw? 
+>wF5H,{7Rx;oۮ]Ţ w n/7uX0 Ɩ90b"oU9+ZKsL`$6ϋSc~Ûވ&.&xp蔭clES0N&.w4wIb9yj<Ћ$%.}G^(A% )(%-ČHTǬwB[o *۠KyyjBm#R6<2rzSTr;nihF7(xc s8u[Gi'Z( #Ň% )(% tiStkK ]uadSFJ)(a04ϋ6`|'Gp 3xO~.7gjryxE\wO?;RӃDr#SޒK^ yJ0k[Qnr8Ad:zX2n\kbAdF$u8ynͱ؟[[呑Ӷ_sc}z0 3xt[ZЅ``qf ϲNmۿ+>{'FyZF<cߑpE J%GwO{,MSkyߏIKsM񋳉.1N0wN^̻+ZKs\Mх 917Hl!pkBQ:ha!C.j}Ak^i(c˅eD.L&#msb+i Ug!-tV)0|\鵹[{thrg;t)&\(HA)d´ENq:4s7QS- SgZ]v;UQ*_$άLZw/RBGST 7[1z1%tni쯨$<·".:hz{ŸLz/E?EQk覮$+]?t1Enml+-2Qec ,$ֈSB" t*]?:/N2YT:6Bj2lGԳG,-k+<=~1ڦ]lV .hLLx9쿑ipŎޛ?o<ć?V_X3xɵKӸ`AsLd4Tx|\_׼ D7nM"`9S @G˫4g'Xryf݀t1)  XVx|Ծzio;4EEaw_W߻7շPk3ZN^yuoѡbe5\HA)(ez}>nОJڍmv"/-96gZno i!XN&x}vb&t%UA4ī3$GC\[ׄCQB:'o),~M9X0 Npfq.*=>*,00 t0 <5ά$=akѝ0 x(RcLK.&\4d\[q½w_& ~uClksfqh;;/6X2΃CO8z`[q cdh[+Gʅ.+1Ii4E%¡+KĒ a4a?OiZ<71SHK?u8T%gɃ0 4L Q \WL>TA#wٞux2=]sO`iɮ#䂧.)}Żq .Œ,qzaOc0BW ùtQ^v+bXiZnݏ.j" cI(hi z(LB0gmiLi8{2H75wr-Eu!xmvYk h+Kg# =j*E G1i!mfNIVB0y^ϙfVE۹o0xv9ԱdcsSޔRV RB$(UE-(NB(S|/9]=_>wdtHv [+lVE(PLC~_I0͵ 0[`>) DeWRmZZKC[o;$4k+99?i.V>w!h1Usot*5@WPh޸BI Wgƙ~,&7,_͝D<Ye$0Qj$.EuTrYes[Ra6J\o0[^[@EMeM\* MWԜl 5B*}Wr"?wX\2𛲶RRgG@!tv>LA9ز}{[oԒy5v **>vnmGFN^h:.օȈc0!ޙYTUz:13E6lDA]Wl;]^șZYUJ* i!:CNZI%y}v"g<VP*7]T7q}G)$!dӫb6>:4WdόKxb~vOna*#6|IVL\U]BH o~$i?\|ϖxE[̬`H%u0<|fb,pCtFd˯ /'gچWțdO!dOw ={}}x=KwD:MQ[4\]ӸLL>XSm;wT @njS|gXIuo # #r{SL쿥ؕTiޥ5\"k$>?(0>w4ZI {6%KwO7 ~ػѥΦUR`6%YWTڥȤa3&f8>7ʼnifV iMZj U'w 9,;g7xz|ȶxef؛c?sKG~BL{K4(dRtjiSC{u+PϞIi:e`E ]3^v펺fb9I bbcLonEG2 [_I](ow xQ]BхXOQ+J\&PeݞRfY'|h&}GVC Jɮ[xso!U__Qڅ!iCٜߚpp垨Ӆ9F/éj\]H?kbLݶ֒3-:q+$;sF*=>nml/{:a00r*0 ܚFK(JAQEou!xtd༎p;8UO>^z4_ʿ2Qxme?fd %B Jɮ{/?놊x;o#0b0XN%85?||0pkZ.I:g[^<ϬNK&& nwޗ@"))(%;JwOo𿁂ɻnO~xGJ&i!P0/&J@a`[j'4\S[|'^7hZ8%4"aS:T77ws83ׅa[pmm%`vI[M0khFrޱd@P^[DK(w+^2BϲO?sQpYɶHA)1{zoR_}u_EƦ0{ƙ<=?CrWڊvγL6΅;Y3_HZ((Ēqf$uRq{10<62`9jT5nkl#\OՙMf[Q RBkE]}sSic^k+nmh\̮0B<Y;TyD<bwOOd? 
H9tRPJv .xh糿4|)c*+1qt!pj@Ӆ:RWg\ܾ:ÕoiKxA뺯vdIb[ΓV@+xmf"uZXUa.åiymY[a|y ]G.7QP(P҅GsNqC]3ЖRO^iqh6ݭ9h W?Zߏs _ )hg#WUI$#d%smϼ{2ff&Xi$Z4f1ƃç,m~ڰi!][az4(TJCU]^$10uuPɾŎ&s f]SařŹMOa x`$ =( w0'<{˒H)(%eSLv{;G?G}}aR"- 3[Nik9ʁ(F3c1EH0xnb빉a+cYK867ɳ<31ī,Wkʢ* >g(sҡ\]cOY3i8|YYv1" .ti˺@ 1PUNN,f @E Xmӫ, R|1g9$\.wr>GBlD0_wdo=H.XdRR{z=Mu_bR)!v]g. og0h=p;K4QZj-uTj҂(m0 <8tjK1%LȀ9iu7#TUaKڡTxνi E =.. w4upwk 0?$,158[`8@"FSn/]*| /iO@Qa\k`܆v;*JN䵚qkˢc#٨VԖ5A}*K`dڞ Z ɱچ8C8XQz5ѪpXOi[ c;NSHtmm?_g}UW@wOpIHA))}݃vt?vOIg ],qlvtO:::zD$ҫRbI{zæ G"&u! B|%㫙Ko.Xo̎b;;ӫ9SA""U;ETG?s' <Ʃ=ᡸ8Tj  pC] [+b;tTsL]i*KmӪH65 CΙV@k"Vd6{DsL!*%wGw瀿$mR-=^_Ue|c\}x_פ pl9Cçxr,gsfq$7BYsբG:7 ESU+\]nTԜWOxxySaR+3;>|PUno-&|NW}k|yn#%kRV0C,y2}ufc-/1..J[[r}]sId @7T|ԫ|ԫW K%}5Ww?U!~̽@"ɉtF1=&egZW5e|^=#.N8Ό}q.1^=#\t7v~ۛ:igHZH^h75fƟi} m]i FKɜ5%2(rkc[YuCXёPSwF^}yeAnzN+~fb c!V:lA"ûf ɖHA)ْJv;:W1jjs݉Sy660h El7 7bٮ "ksOKjĩjE -'˜酙K6J*k~yU.L۶|@Y2#lG?CV=MUZl;]\Y]MekJ |$u=3'NTJS0l WV,+ Щ c#,$_9h3:WT6\[tލW7TTb5v*I#)h4UE"qi E~OzFf` atzЅ`54,*A`{ai! Z6}vm%k)4ҏxmSe>cFo+MitZy w|~%ܒ[>dgțȇۃc[O6|w~L0|O`3RESqry6&)]ǡ4)]GQFtI1ug#NU#2gs[bR7f]n4ŜObg6iKiN0dZ.A?Dg4.xTЅFsn3 8؁) LyɞX_oaіstDv&phG4_a;^<{Oߑ2-Or!#{z0;}g,RB祩QFbofYjAnk.}LXKQ]'Nٓ+<>nojeՅ`>ƩifVΫi;]4tDMQj%2g3k+<>:sJ[KTϗ],A}JQKl쥩Ѽ5kh=Wх3 ʛw|0J:/L0T5%w^ a 3ZRJ۷vX$RBwOo=>{lڊ ʻTz#P)RFryY[b]B!f-¡:"U+K85z^B0㹉3BGy^S\mrkc;!G<鱂Rn B ?Jۺ]X~_u[B8fHAy|ino:[̒ KS m3!!"!; f9`$HJhPIt!S<=1b_I0 06mՅ`>g1Wyfw-/*-5,`byg'-sc} 5yׅ`9S_N|NgY,q.z -ؖPeYggfuǶۡ3ZšZihZȦa|NF23I$ob:RT^HAy ~ 8/8|UG]NMqzqvSOSy˗U濡B0Ԗϩj-Y0+$'秙XY"% q8i FV)ʖzaMB.س:#*p @Z|9nK[wՅ,!9H ɹ)X[ڶn wo](oi;4^N%\c-BS!>Gsʇ4싾eOL%1 36P5 |^)ZL6;Q #K {vi~S~1ݴ99?x%LZ%*+q>ϡ\[۔W 3 b-yPj?}l+h|^[~9Qd?%3V?嗵kY;1 5&28v E< (B\[&hUQ/AhlaO{j&(>%!UQͶ{sC[I>iky X`l\[۔&l'6~w?gKT^|];\Iufu|9t| !a0[(̖ѕ"LOn~Hlaע-3GLñMӋ(8U7tRiq(䎦n6|3SLJ⎦}LRP9COH.'WR6{z/wY'>`Dg 4ό `ֆbĮ{ލRbYN5U+ OݶYUUܚ[۹65s[$&Z[;.ˋ"om;55ϱhB{NLf9x{$wXu/|L @dfo!*k^gVbìf0(= d!KbMΤ2gVH:NMGѭ~M.2=h'5Y*RRݳӂtCDw37" PrV55m5U))a5bvmdIO= LQH,RP^t~=_tͤa̒iv{h F,u{4ig癏$ݚKC LE(n6"-ds*@?ā.&u8,U^&lzk#^?!pO) 
cPBsR"YVRITU8UH92fΛhxTgfl[;ThUQ-5#_.MQA1>NzF/Lfg0ܹ"QUP4T\?f%|Y׋dٵUzU1U+a\.w;B;E\瑶A9=ìt%q?>Ѕ yizyPUZQ.tJ c# qk0xiz¶tquM#Qo]TGK4M$;]^pd$-CKs<=w 4뿏a./uS,7շR0_^[ڨrFNñ&V&z`ĖGg62[7#\Sh<6:׏pr{Snr7-/ vEV.!?sRSLdE_c~n;>d"ELwO=@`\T~?+p" TGO睍xc0Z)cg 5օ`)牱3j0\QU~RNSU%-ˋ<79by뚩ϧ 㣃Y-Lم!1۳ҳlg=~<摑5FyNޔgV"~I=C'6XXb1 ype4j­9,a=@rg+TEᎦ=1RY^wu8n۟=z.TPWՕwnDj.AkFS s;;c@Oߑ:dϲ7+%E{-(&~b̛c#||'G֟UE9k/ѪjϖS s6zxizl=JZW FWxBdr#LqrKcQރ_ ?€˫lg#WVoېc-EZ:œg-5+㖺'WcL,YU3C$@Ryr-UQr-KxS>{@LWRFmJBfm,O-EWZ,İe1 0Ӧ;B;/>;齦K,-}ywߵ0-a;3) d*kl7PٞcO=_:<+5AS0Rq3Z]P*NK =fPTno`Emz5e؞С4#)Dk7DN"Llu onpmUM.Wyiz Ѕ`nmWg=eYN%xbLI|u>3ﰳKܣ$)".`;ʷa{^@9ϑR)K ؎uf!p/ũQk?CոbYֳH! .Q fn䊢) ]*WT1 )T5}}M]M@STL') G8:=?С\eq޼||ph빂:sռQb'F[Rq Qc0-4Etwu]nͅtF_Q $Q {̪0L윑brrviPb˿>luZ?{zo;2Y%{ )(/"{z=5^G?SE0 v\̮m1keuD=>NMEƩCSYbiCPT.,rTFRBgty`A5cy}G_%{)(/2 huo8!۩b0* /ͳJ* u х_QSUa!b#RŒD=Ol 5M1fɌEN5̍s>Vpȹ pj+ vkhڷkXn$䖣' `1g1iZ𴄢\U@K(BK(JRtEgƠNW6@?CQsFjНgZʖ^b_q#a S8>3>m,1DK0U5huE)x0s/,\/ =2uPRUYX\g[&௻{z eKl#çX>?ñs^>3٥Mu'秷9BS3NM۲i" x ܙ&eݯjWU+,I_[5UeKֻ;~ki~rFNٽ,xtZyfVxSS.Mp4z:0vU* /2`-&fvAb ,$d.\RUJX +-хJtCVJ$ͱm>Q :{'!)޷`ujX(<=>@T2#R"oV4#y/JBچF^uPɦT#[SU!/Rs-rC]34hmU#RYоJ["oq`5ёChEІ)?!7@y;#!1yt#l౑A[b2R2Γ%j JQZ1~򱔈z\?Dssw$%GF(/p{zU!~ !wa#k ak?;TH1wɸm1q*}4  :ƹzMUP9TYUuĒ ViTBnQP}~<%,> o[*fWǸzэ8T@Zp'ۼauWT3שXhnnx{])*>_-T^UjSqjCDe4oOdRFB!?%+?uwXiW#Id7لc&{jjJSm;ض[YzZ+kl-;g|)4w.W}3v{iTrBx΂na͖C pmmv[|Y+ 8+\׆vpwtryrnsS}%(Y_[ppeJ}(٩1@*OS'_8D< uC(ް!.XjUjj| !3y=Mr"JwOg[mBZ#MŜ7MQJ rp(0z2>X"n团MRqC}K^A* 7ַ, Y+S 3%+ٲ75uq{mV$+[i5XU)-7k|9Vrdӌ-/GSBpvi=3%U+j+hU6"ryvho>T$ p?wq x.CUfKtUTq{ 80W10Z9)xvxlݺ3 7oNU9+ZKlΥ6^].:gVWbo"\'gO8f#\IG) 3br5F?C)\`eEMP˪yN~ۯE%<5\LYH $s:7y9Efzm>~şٿ[&S =~xͼqEc7gO`tx0t5'QPVT[S oJN`Żv3-U]/$"*K6"۱8  GƬ;ptn2gbE5MjF;cȇ,0ZN,eC׈* 7շIˢppscۦh~ZVR N1Gs+Z^km ^-ܥj|8 oqrwO+}G˒)(/0{zoW]?2(?վS[Z~'r #*i! 
[binary image data omitted]
seaborn-0.11.2/doc/_static/logo-tall-lightbg.svg — SVG logo (Matplotlib v3.3.1 export, 2020-09-07); inline vector data omitted
seaborn-0.11.2/doc/_static/logo-tall-whitebg.png — PNG logo; binary image data omitted
7.(9Lo7ހoP[Vn,dJ~ShNx֢@M.Ԅ Ftk RlNW ' P'3W5z<wVQ0`8wm>=^?2rS((c4KjS脮}~Gz(ne;p < 9sp0ߣ^5.Dr?EтV?ီ .[Ҹ6h:w;0Km\PPMVK8xZ(`2p$"C$f 5A5Sy"A4e FQDݥv?)E s#yrl_I yK(cH2N(:&_ѤsD`mHJڍ4]o%ni8 J)O2t/~'*w8]\$h",BT&=%ê7LY^9[v+WchiZmlFSY(rB(L(a:]1Yhd*$S0M髟T+[aU#)ECCyEHhhW(It-01_Or~ Uz Y{:<|W7&o1|xa-C("!gX?rSӃs+p$XTzS6M`>:ӕa%\߱9od,N"v`T~ `2]utնjSJ.hpMxIKQ Nb (ņ]횄4eGѽΛyond"8~xǽzmjo-Yc6/Yd2;~|/~[6]x%6tRGG6g Eڦ%BZjJN'20|ŤL<ƴmJFhmQ(ų#(~&bMD7c N7:NX`%)eB賉2T$("-(ӆւaŻ*pw_Y_2a2m,SW,Ao~Dby5TsF5uH4`2uA4QkM)E(<isxa4CW>P(SEbf1<>zrŐB)ÁMZ8J~FQb^l.éQoԌo~x{^ ,c x;G+@g sʐJe3+9M-/ۅۿ~xEEGU"c.ӵOopv\j>dCGbIBJZ=to\`T  JbȂ#E\ ÁzE[G%I?E"7^~*btlHp֊SW9#9rto}7>^rTr|ĿUkR+ҒQ,~ɄLli>I]MN*&2"p~] 4a7o(&0-|)E,0-Dd.A2Y4G$.M$JU(EFQZtX/|wldï8oѡ é)I2Oo~8(-=nK;#( Ob|^m5m=pJ1$X 4u$Pp<0%XKZ6"RP9FàcbdPZP(Exn|rW@VRR[e 5=?6{kKnx`RX4;8WIF'sYS ȔB Fte]2 ,~p=GG@^׽Vr64\Pr6,Ǐ_FFwk߽] UPa|^ݵmVmjjp[QcWguS @6ѓ׺ԹMSZ4+ !@'/ s+&#[fCb^ѳ})dzՁZ˘Ze2i&b) Sb4MY[Yl$݋?Uz7xKzkolPl8(~x7tVclqzc<MR5&`e JPFP/eZ[o8ea)TOȱn؊@2'*W1W&b`mC:E6Nk[}uvlȚ !I"t x.?%@87\Pr6p _/3J>+_q)n{ l@,ivpnmӚ 2RLƢ;Y c\۾ `RMF3gbm {a!& .kX;7uI%ޝj_M.R dn).ڊotqX[dC_[|CI@mti҈#KI)25?4Ng0AatԛJVU VsuЋYnzİn1 g's)$x+/ ޒ癣x;C*WW PrE/o^Kpۭ7nX!">Mu[5"gxR`b5)E8)̬nNU#ΓtR˺883QVbnSCWw(. _44 V_RH$w*-Z)yɺ6r8u&_9+iz/tVxeՃR Q<5>[$[}up6D-B)1z.AĹM9MLE883Qt[Վ] m_LzԷ@D2i<0xb|f+.ig#S ca>e &QD BEoJhp=7tlbp$_cvx-<ҧhq8%gط_ Η+noԈJ.13)gLơ0(Fˋ,SiEA_pf mN]^u+b"٪9pa},˚;QcQdJq"8\ڶMH֒*"803XtU0[zh0ȸ__j\D"x7J[+p%gQ?_<\F}׿ƎsG T#ӊLDXyQӍ޵M (8^p2j\UJNf0궞%QL [4 :=("Nɱ~M#E7yT˲bίkA]q֒_8~W&*(nQΙ5ef&}g8tx|Ţ} gc"S u^y8@Xhw2=1Lţ3^κ;BݠjRd<<܇N+ ^ dXQPFt䋄༺fUXLjGC`v6~G)nf* KƑVH ,~QS Ҧv8Y:XbfuVǂX,V(sKzʤ,ZqIc;jm,\"D ldf0bl.!wsw~d 3=;ǶmUX.(9k=|w<:;{|&s8bQDI( . F`];BJqhv gԣ|FI'KZ [ G0 :OZ Mv7Nb4VO YzE}޳&#H) ɂ6QPR(]GB\X߂sUQPGGNYy[o&QBˋM>3<2|00A+\UX.(9UEQo~DI|.o3lkF9<;Y0b2qnmFEccV$IӶk%(eJ1ṩH"! !16FZFl3~uM:Ի( ad$sS%[c6Vy1D?=V]^xjPߊ(x>#.oh**s%QmF#]*]{Jկ x]%pAɩhOܾ^Z&=+/? 
+;P(ų#i&F)x~z Qm̵4pPMLüqRLǣxfB\ĕRAnZ qL(6WL=ptiWC 6um6(pxnЂXEϪA(a.G{_ot{y=Ge8s0<2wwJzoI (b.1 ǃ(f]*"t'u.pJYN"&RͶN"J 5 BEQia?ՂAm)M?Jx R<<҇p`I2h "Ƞ%+v?n4}FFKp 1bӊgûmկ P08)EhvI0Z J+]#m[m0kL/]|%e M~=790#*$ NB!J/E1K$:pn>ÞiR<;5*SϤV<=1j}WގzeŻm<2s9pAɩ1׏cCwJӟ|Ϸ`/D(c8Zb1ǾѢJBQҵ>ƀshwn,nCˤK3ETR U08,QL1 ⑑xdTbՈ]Dvg.~Ip㲦>65cO hmZZ1MQp*\F o{-'ݢkX,o+Ngt_:޵J 1dRTYhXJ1+[2i c m| j,uFl_עRs#!]QjCR<<܇x~+h:GiNJ[ FK$Zl׶mB ьF7w4]i5q28$o jECUpd6ߍ]Q߿_/N'9<)`0O|G8v\m嗝5z|bkp$H4,C\& z<5JF^d,L@!2&&t[b<dYoQ78[j$JEbrQ-ɌUk6F5hq5 29BĆ mg|wl(! "xk1>U[+..ф `X<[ )EF8Urf1.F%F_S6\Prb||11YSp5wMEg WR 867z$Jb8l 0bheQ &cQd0A|"[L(XԗHjlSy#8w4P.CxV\P2L"H)2$Alh B͑3_VΎf.LLD챔ˉSGFCg;IG+YTmTRLX;Mf:5&`1g{}{gބF^)n)޾1|#7l6C-.t{VVdJq<0`H(n7(")gwrMh"!yŶB)tRZ{ԗRq9csS,I_^aPNhiW]B)NfY2ۛfPk.fߔ,JxiN y[%6yk5GKa,qcVD5Bſ"pSuD?!|?=]6tw pȾ}#$ j=j~&1 qEs׊B))<>_4e͇MpyK' dhL%6UuNAhκ&[ה%XU̡ˉS`3+"( 8L)z38JU ǣ#'qM[bm` )"#PBI &jlGZ|&LV^`^dJ30eJ!ddd10r_f sSE&Msv|S?ԴPh?z<+䜮pAͣ/|uQ<{{>m*Nh9©$]ɱ]WtO Uo 0Da4/']&}k@ hHqScASQjX]`,tH|eA BMJ\N2xvrdՇ˹D Gf'_NjCVI*f1[P a-N7n 0,t@ tvϽ?λѼ_FV/ P |%U\!tԸs}ҵ x^[n]URҁȢ0 h$T:z𚭐 .#e J8= Vh: M. S(j/hS źνHS"MlOdnJ~>ʵW %kDdF8&cQ\X Q0faMO5 ә6LA»Wlw6c ?  {nlEQ|[=յn'?:k @4BR$Jp`:j !ҁ c Ql˻0L)Nь4l',O-1Ή]g ?tpa}+VBҐP KG1t;t{jRs*&3⹩_IP8Xv`Duev`1 @OLH.SZBwُO1?~(WA\Y, JNAR >=GtWs}O-G#N̒.o7Rje@ZR#NԷ% фURё+ɦ5 gdRjh_ X㼺fX ƼjI ;ܾ5d 5=f+lGrC%c|>F{ 6'-Z-A.d /˂2P2 ّ]4{s/0;{"#{Lywf8>-^1y~zlEURqpvf'ʞDRj]aݵpaLr>2/ ˚:u;˚;W*88;Qp݃%ޝY+`5h346]?'ƀ뚗L,bCSN)En4WZa˞*s`f]Zkg0!)FRd<2܇G1h&Z'wr C(xJ^o~hi/O?}D1眞pAYH$|pni׿Vj)}=mtڇth؜K&5=uU0hcXurNf~ۨg=?<ɍ~}R fG ^҅:ۋSK̒:nlxKoĮ6,n3q7wVqK S߹bĩp:t}ev)cH+ +: 2(xbtTc9kk=Wކ-[tw >oH| 76 LJ>}KA]||jcēMvגnRH<8tbt!\>l_Q0 鱒7tlY5Rq b.[n(jGfM'Pos5؜83J)EA(@ZQ`UkdeL0aAaD$!(`HF؍Ƽޑ,{tiJ!dA520TH& 5-<)b&>3E-f+Ϋk9)^. xt䤮[IpCǖ.'L3?<:;Ͻ-\Pr F ?wT L)E,SL$J~zP(E4“cEEe݅WxJqwђsV;.nh|əAK4U@)[1l$ (~iszpaMn|6j-ۙۥ^)rSz3VYzVtgRC٩O9?cb>P*!dAd(8J/^ԗ5Rk5p<\Trr8 Q|ᑕ: ڿy t ZWmIί tMvנ98\TnwͪAQH"#( _. 
y, ,VTL*L+SH$zm/FҊ,&cjQ`sSI+*Joj`.,IVɐW1{+EJQV|_r[Y+0&m]i&Sdp-I">q~T~ _?u߁sF#\cdTE|3^˫d0->-=r1f2E"QV8')gPnssٚwR܃B3MV4 F4]TKd1D O埮pIrkpMrh,"g/LRڱ fQ:#-JAO격+Z`3W}0( Ob|^{q!Ėh1;\g̥N]4/3j|*s*%sج}DQ\s*l%d,6(R0i%#Wzfyp,UB)TØ`K$Ztj'SXOHY"icRLǣxnrTPA#Xyϫ8*x4ui4sQYR/)5m%\۶iz[Q>|v5iSm~{}mTh_%Eni+_| Tr< xM&>7WqeP0\`BL݉Z4L 02T`sVNI9g&N/'.k^5%P`*tbذ]kχN%! Zn*԰ SѼE :V_4Htj0Tt!ȸhV[}uy[D=5 S(l2p#%NG2S'H 71-_[xMg0<>Հc6O݊tVqeQ(ʼn4NDD[+~CQ(d,UA`bvQ02ig0 C)d@ÃN"!)S9Yg?+*pc sy3i?x\@K7*⑑,p,G) S41,IxiMr oxIsŗEAsL r@PosȃL<=1tV\^8p!*/3\S3.(Pr>z,#>p*LT,IQEVnRG*wͪ T4 .Se--$jJ.B]Ӏ<3*84;YRR[|uK^j7biX|e +[]؎ZʴB)s84;YպRd(ŞU h,bB_ xj|P=~lZeZs>ۑLjtv6˟'8\TG3h4>IńMBLQpbu11)S9Mb"$-s.r$AY,iPٲ$,{q9FB%; ./v1م+RdT$zL&Z"7Wm|lIp^]sQ(35 b>BP6@gE{-g>>QLe5%#׼j>fz [#lNX/s9e @'r#\[h@X4=5V{Jߐ@v7`}w5#Sg'(bpc %:25 [KՎMZʾħ~szq( >Alf|s[Z52h$+ѕ"N!)Wn~h4nQr/dpV" ) 4B`>#D"pys'<&hf()W|'"n0.L" `6|sKT:<~P*1lo.j0oO9y\{ΙJ4f1B4Zrϻ8SIQ̫WGĊ͖uȂ $J;k`/\0'm\ĤZk`b> Zft67-Y2+ is '&slܢ;Ԟ#✵O9C{jb1s픈LR@ yH E"!g`1oAjTHN._CZ+bM"gS2(b[}MU/vËW2='5Y)2RAEiA Heڦۋ\w&QBU(hvp#.g0a>E2gi2oY)WH(dJ!  gVz0@yVm[܆Hh}^x=Wl gÓҼl>}ۆdB#Ɍf[SZ#S c L$MɂF +xDRDIHWHۜ쭅hZ"-n"Wb[bxQHESRjOK-Tg3DRI !IC:S bN]1c)ݞeӣF5]">| ƅ%:zfQQځ*(Sr(f|9׋H:D)E@l$%=L,moG>͖Bv;ʗ_Rֹ9.(Os}|y{ɀ|Vl ܫP"ㅙqLƢ+ZUSDzS)c,sQ(e /̌c,Z! c48s8 uATh顜4L)#Y)E0`-NϒMstkDBn`3g4 a265؜hvuyt"O "^:/4f;ktErv8O &;9 de͝0I$h ·AUF2!"c dԔ_sN G$IhuxL9 PjV>c`~h;Oߊ~⇚o?)ܜ,T}/Bs ( _Š!!gpߪbPNxO/A{"S Nt6&IɱX gs!dJ11+& !WGF'k%% !h;a,m$tRN98d,X *xdo\h4g&5w3p CZu-G@.iˌnm-wBqW03P*"#88;TdJ1 6 ZqN>AH~9uY_9*SǐeuC|kecxbi PdF.!hqxڋFڳLJ]Άp/̌/܌>|vGѽoWʇ2hI)'w`Vy]&3Nݓrcxz|H|Ñ &cѬ5C]qV/)G|D_LJ1[*$"NlM[$pISG$hr٫sT!Zgs1O|G|2ƀ űڻ]/p WN喃:`$|& p-@?RS4 XEA@݅t 2Tx̪OlTӘEWq<0Þq i|DCh9aE4݈e28s|$˛<뺙W=xZu$N'NBHɃu6_鵾H%H)29> x{p {=icG->ל R|+!$?ó܇Ύͫ(g 8%3?߭y?>7b.:F/ xfbUTb426@"#2ZCs5LQ e BP$zbS*2TYґ- N\׾]n ^d-$Ar:>]4ѯq4GS-N.osєьZU7&Kyŝ(1yf0i<ԈdH:=jY*QZ1I%+yXF|ӷůy~{bk'7鬬a!(cc*^֋;9: U3&-nq7N9LbE݅hc,gEA[}8t DIE RR:jY4Jx-#n9dJq<0]R,>E# N ? $oL|SAվ,HT,-"j`P&GJ%,l^ 2UAs*m*nĒ_>=*yx"ztϼ ~w1;)= Oo-89ԚJހs(J0. 
n84]Mșė2H$m.f3I}(ѹ1@d}Ag&0NURa  ]jփ}M۵)|S?B4ZR!)(1m7]QɏmL))푦h8M@$.nl49fǏOw20^T# 3mh) k f+.lh-*BpQCff0H"B!0l͝h6Y5[Y׬+o- U51H 9g2IχJ1 ⁡8hT9HTrz<5eGLDJi4Wz^O~u4~Gf/ J n(󃻰o֗]U\Q~y.4cgmӪ:5wbVwI@Ra8DK[k ف6[>CͨuRvBcJ zz-6[ۄK;[}?kIpg6:]>t ̖`xTua9b%0NT"PSTaej`.,NUt BXXl2[LVk˹;\>wBW?gck(OAo/~'4oMpͻMi-Nn$ YCTԜՈekS(JZnS ,t͗BDɊUC VB!eͺF36o Fll/ިʠkc1솴C לӡkD mxxWsǷY}I4_L 1c%|,2TYPGc,[^ 5m>z\U^pAyq [Ӽ9gwo+*!jujwi+؍:]GH"SZD57|*MNi@9(c}OU7EI H0GjTzuR2AFuO ଚ"rKZAcьK: -zU(ѹɢcfV83;>f.35W/4д7hicֶ*SI6VS> #ky }nꝨ"uCŦ3Ƣ!<9ڏ;#xrcPvtέP"CF VRjd^ş!v5'k% c/U0".il) ZfV8fH+a[un exrl`h`*Fqhv6uf0j::e,DDAX|jR# Gb~1$huxQ;  .")Vɬ/$7]-*%5iĊMre5(%fRQ,> .${يR-&!\Ӷ Վ-Z+abU2`Zv",n _9XNu9Oa~AEgGCI[X& hpuk7,AsW`5vt 6Lc7pM&E/.WtaN1CJɂs~oS.SiEƉ 8?<;}g|yPF"+4?3KĤz@3>pӯ+~5jm&pTFX+v<%D,-%IX1Lvg* S٩LRvHb1|:Պe ǹqZnԘ tdRT _y{sr<ط?ÚenJC49h0I#Iegg[!l BFcᒏx"Jvq{M=c4\"L 1D nM[T+̸ &n{A} U"w 8A_pfմl"Dl!aVv,]&bxhW6wɲn`qw%ʝHD@˧k@>BUB*tB(1Z0 0?@<( يNO DBV)hi8"$h&vWe b0Xpj3-˞Kx~k?݉vvt>AX%8U#T?M$Jᾒ%n#g\n„V,c+LgJ%+6"AMǐ+Z`3+*鱒&Y%jp$ϱ[Y.PFp_:\>USW$c S(iڶM%7)b$B_p D́m%%EGJ:rZnm\J81Y-1` E:QJTd<>_m掲J*(ǟ8i{ǎ{T+S:JQ/|dWg#p1Ylέm*iߝuͰHIH6aE-766*{R GB$vc?961N9+ƢaFCFhqQ L㹩 3-(Zt65u <0x;&.^TT6;QTCB)^è#B(u2&+7R0In @uޚsb-f#>7q<}/)뜜k(7( {?޿A]>ɉJŊ5J!h](_K!%dĥDL8E ѓ吡a49ІHG& l֢]&i!J? (Ht|9̧SPA,ʊ$ǰc4Z% l"sL8 neCI@Ҝoģ#}Eo"!8߸bLQ{%{ˎ"O*u[껷jlEX`~5MTE"31DR +"&3 f0fgNǧ;=b2D,ixk!e -&kb00ز 6d^"hǕRdLQosrP(՜P* U`7aŲ%@]U 3ЌIPZ {DP$Nɱ2f˔"JYUL.e>8<;QǥB`%Mk$A%I9IY VDRJ]hȔ"<CZC3M}%R2V:R jJs7u/ 9"L5QTY~;A+iFJLHĐ3i}[V,ч՚w񑊮S\PnR _@Q oڅ+QU/ Ù+F &xdBgsdA<16;z0 #*EAd [VËVkqCM7$I*o/Dkkx5 0W鉡b'xfb)5]*Sc>%g(CX67 %4]Tk-]ث[)JXԲZ6yy"1g&pWQ<:r㞁cxz|d"& "z<~Ѐ+؁tmeY ӕr܇QmV+ x?\/xQ<5of6Z|D2 a<[k.mۄwo˻ڶMp =/-k5 i` ҔjKWI &j\(c4MԇB*K"APFK7Ϥ1^p$XrE22ĮETBE !GBު4;z^ E+3=X$t̚;jV~mpA8rt㚶5=Lu nD#%dFqhvBL,xEسP/9#E!8Fۅ|1n~mƯF;jtg1wf= O-Df}+**)/syoKrR$U^̢xy%I*.0hO:&8L6 fI%["n6q1,x-c{=ɻ:Yۄfj&?>#GOLT.(יT*/坯BkKmWPF1SVbQ]#(dM+̙MOG cP-k:E1B6YN K:V FyzHlja)>͇Ф"$u% `R5 ^dE!(9ŢK+[~ v5XBˇŗ"u״e*. 
4͝Hh:VdAlέ꫃dhl#n܊&^-wJӶ2|kB*S| ߋ1mi=]s^W0I+8O"%ep$TvzfĊzH6Էi4WP40MGXkʍNW׌ U̦`%\yjA*-gޕb#kћŰJP0USS@卵9(265.ol]_t|%ϓ@# l#hB0@hd|T'7@z#SuFC^ {vנS*hͥ=+ x?ޥ`.(בC7sz/WTyEuDjRvDXwL̕1H:YɯL)FA0=wX&GqYS'\ W LdhtJ6:ݾ|pxv@Q5[qQC%}9USoۜy"@i./$"@h*)PşO &QKZJ&0 )g3v$ 8 RhsBDXlpu;ԇQ/S{ȦYu;5C*enYګ0ΪpAN$i|Ҝ~5,#,Ղ2 10,iv:lib5^AdB NO|g[M0 ` 4$(f=?,Y/Q=) ibYՐݞQq(((zEd, D4؜ F]%AlH #WKɚZs\܉FN^;0:Xj%u] DAc0XLx߿ >.-cj{yicQrbLcƎs("UCQLţhuzcoP4Z Z*_WU@mZ8G͡kj@>$H:h0/2Yt|r0%IA',g1;Ox^^$t|^Wzxnkp,L%͙ZRՀ%-xnjA[[+H:TGåझjr.rnOvtl?x+O} \P ෿RӶ^yF+*alUn0bd.,`)n9BtG#,bOtApMrנ^h D}͔fR'n 'bG (B?߈FK{e3 {4[@Ǐe !u9`PGV̕>nO͆"V [U+,"%.]4]%䖋l`Zu^rSCLngcrxo.d;^q@z!8I&x;qm鵯|-qNgWRGع ߁)|رP9c7pUkw7?و U-…/Rꨪ NF˒ i4gY~B$##}yEd-j\Jˤd, QD 5>z^|&vٵE}u#wRND@{j,6nl{gᢆ֊MC:)H9z mJ)r,9߈Vg9c cٲȗ N*|\^iGgUa,&_K\L@Sb'm%EA@ӣ6,yX(r! vWeQr 1$`WN aW׬9(cq"8d,E =?]^V۱wrxEw|ņ4Ɍsj3{CbyWc;Ҋr1iv٢nGJ͞ZHD`sEQaфO zv 6 dՠTlUC\گi[-1X| gQi9q~}sE>G|dO8fQ•0isuN?V* G3,PkC,b^9P(sS#/ŰLA\/M4[_'D^#b򕯸t$PǜQQGQpVM=F#f 6LfW fIQ߈>Ml֖}v-[lal>\TJ`S$v?56+[5 !MOrIDk)&ӲD&D:,CQ:)ېa j$@ToH64`1`?-{ S3$gRxbtWtk9x/X<;}7ߴk Vr chڶxCWTwmywjP)hwy1?4t 0"|z<~5H V/:'bx\ҔD"vۼv[|uykV0C 䕩p@8BD)gh;W:|5tcO'I&0J!VEc"#JԊ$0rS5+4-Ff#\s v鴍`k!ȺM#$Fa8vxfqO^{pg岭l\~p݈#u\S= ,CvE.2"zV{{PmD$hsypQ"!km]l1G_pvAX4ڝX\Ҩ|!ٕ,z<KJV4OcHt0 ~*D388‰ŸH2n*)ŖҘO bZ"2]36 \5ا*9R8EõMH׿>X<O;k3.(cøq*8c#ѐ&CM`U _@EfLHɄz0a0@0XH׊DdF͉T'u(uVj,pB$jDQ8ITⳜE.2H+2.YQZLcl>""8 ;k``<Հ2`,h 31D㘉0;CJ綑QC(D(z(qXwdحVov:hiӗ"$rNm[s-/ۅ?ݹw߻7plnYpAYE[m]Nn}]DAe3>7+jQosV{--vxL֬FdF$=֏Tujb-\@sgu>d`:>_1G C =X6ёg1<<҇˛;untR(C#Ki:T#(E1Z ^uN]N4ht;a6:RѲ>x&տ~710 |ǣ@$RA1o}7TSWp rϠW[Ǜo+҇$؎h-CafрV.ov? N#(`AƍZ$,?ʮ1֨qWY-jCٵ $WBÞAMSDdJ^ھij/iYD8P!s5(Qh ƦvM^n'N4hp;6{_ u8< Ip,hq@b8Vo'zGqϽ{.8/A>p P[#Ζ-xuWyE! 
f v5-~9M4@.2g2 ñR^d?QJZ|&k`ZQ0Q@, B F0;X>G \,XTygBύv>-){b /L::6VM0 f08gu38v|?\~v8ApAY%n͍8WmxQ~@*g{@CO0aX%.k ҆qJhr0s>8ThNsJ dRp2L>shw{K8cu0!Dţcv>GkdD[>|n:lFE͉}-N 1$k*ext$ԣ嫨z+wАh4ޠS ǎ{7t7UyEkL)38ִ}\>\ 3 /E(!g0G*0 "jmK<6"J .8:\^Z cjܓy0+vL&L-G s=ˤUu G18REd X~Mqhlz!Un5uƍ.LI$(;\^0FN"Yx5N h+Dӌx1|WًyNႲ( ŷ{M7?ӆ7nFrPG}4~\۾J+[MՈ`B)QgL.hϨ{<~Xl."pEs26N.w ol &%ѴºI֒":h9H  eMϡo:sj᜚I^f2COݵ>LճsM z.$JpGu'PguT<+ucOhkޠS0<'zG5mns5 e uq9Xu6gU^nr8"cNɡP3 g1&bv`kC.oT,:B4] ˏ-SP*csSyo< VFI&4;88;Qz>8e QDεp.a4Az03X*80: @wݵ^x\F]mxltLAqiSDB0Nk 827s VtX[o׿AoޅS>˻_LT–-~ڤyc:Q>:J@t(x`'KͶ K:0ML)MdHۜrCۧ"Lӊt2S`$ͪy墒/LZl \L=c5n[XͧP ]YD,YNϡ&DH,I{Sæ8@.Fy|f+ίoI* M*Z7~?X^"ExOi| ^Gse=8߅߼)鉡iխ=0I`wa (j0,K"$ߒ ]2"mkw1)eZI%+ Г"յ#©$F!N^./ TkQA$D8e 6 ^u"LEq`tƦVTT ♁Q,flo9Mhp~ShdEVj[vwxh_C2Az,YuwZ x^ZvGc 3b2GJqhv;J !u6;LTrF 0uͥ8Cٲ ./5ƶΧ>$V'K+;սAFQ3 `tQ/c 2tvBU -Ma:Z~cVI<7'Qcb{SiCCBlAi ]m4_n{]Oկ9/e(kAp}>1%76ƎƲ[ TQX$:>P0Hh` Է,iQpvMڝ^ RdMt}%VeJiREs c*W(GH: d_k!05vH,)0:P{Fav>Gwٱ;a`E$f=FQ`8Pr^UE|7ah}IQ(}ȇ :Ⴒ zȣ/h7PW~@me m 6yEu_dv6A Ɓd,@QPԆ%?"&S2e"1d@ & /IPvyjfp\bzPSs?<ީ9(ܪs2d9Z/miz?*QE8`8hHshB˷&(7ހO~E}W_dzewkm}֊4i5#K:؀fQ7$AXiT5֕2C 澩1"zðJ_W F4؜iz:=%]+RYfQ C x~d{Er8{ǰwp ~6`Gs)q)Ts ֬%E2Tur!x7^m}$ѵ+;cմm;7ev'FRFQDzv ! A=c#C":yOQXeSJUT)MQW'pcG(؎i DH++_YA)"nae Ssxnh '*pN'fI?\P:9;}N$oxuUY$쩅Ip,0b p&͵*2H+2z3+:\&35h '279j+1Dpv'v6/鴶eyY.F"6Š!_9I뾈S(JO3 S !+Dc Z p$I`9;t|E0sC?dP<;9IsDBNϚDSi<74gNT)ᔆlm  ktKEB~ZkS-*&vr$l4R$ĤB)N "ĂS%I@L -t 849ƊBlwhpZWN@>8GFJf>]Xߊ] mpmAÍk7ep _?w_ ׵o@xU;͒u& lnRLiE$=EBpysW%dupS.iE\Y۬Ϫi#簧SɆ9MH N{=55\UTPwqT3ۻwr@􄄒@B^zH#PBHh! B0]-꽯JmYƀ43ZI[;w,Y:{=`k q S (Ex=ۅ=ϝ4enׄ;HXb :짏PG ^2 QȊslɍz2ӥq84{QoE Dt91_U @0FM_6=R덈Vk&kCU_7%\mˍvoA!ݽҨ(3sR8_a]3NuP_mBœf5#'.K2S;95gg!?/y q+'eJnoGkZ=C7ǡM_<:\eI  TRl Ntrt*SݟE=> fs9bwuJQ㭝Wیq. 
[binary data omitted: PNG image — seaborn-0.11.2/doc/_static/logo-wide-darkbg.png (generated with Matplotlib 3.3.1)]
M6Ru I%[٤(` [<*^Әs)gwXgjHʦHJTY$/0 $HZDl@r+bǁ]@`s*p,ٶ~~=M;m}{9}DjXaȫ6nkޅaE39G1[q;ES&~̱ź<(wJdVdW:@3SHzQ8L-;EI:EJʔ &>OLgtl-<;/ٻ;EbXhْCl&KQZHDiCIΫɜe5FЁ}1k5jTDDzxqO fL'q 7ѭi#1m0ūk7ں+ 9 ۛó>lΫ)H2;>Oo(|5xT'ڒK bu5$n$Y23=M"*"<]ĕ4dDayA:"3v{uX $ҚђbwoXi$qt=QPlf j~ɣXI'EDDZjhl%yaj|91wXNeeE^3w/MüchjjꭼnIJ m(n=Ol T|=D/Xg1P< lΌ# wpX34b6)$]|Dy[Hqen3:@35쨘M:- #&z:H. &YZHV)_/ @j$T̖*f4Q1{$)PNl9m1jtQZS,[e7^G*Oɬ)}(&BeeEsX ߗy{jjYYi 6oտ;}Tb ueʧg$GfuA>A\v?qF٤p[h>tF[|{&эm)JZ}oEWDR`7,$\}\$4WZt(bEՕY痭پ5k1یI#ӫ;viZHk*Z3׻g۶&"V[@*W 1}z`Ƥ̘4'b¨TE=9z83G[bVnŠM[Xe=U1q蠬ϵz|>Nc$y(|/(0:Ou< 4Qmh4SQhCDcui]m_DxEO`@@V42Ruc$ž$oAҾ"XLْ4%KH6H[ʢw^Mc[hjJ1<.킔"׿_NA[=*Є-Bڱ^^S/_L4q#1b@tokhzg Um&Voߚϧx(mzwފ !.oTs2Qvn{w^dO"ɢ҄ pq 5SԩMJE *q 48{${+;C4cduoIT|)->I,2 G1ۮz6dpeq*f1mg-O'_Z@>=q3mMΔC֭۽a6f'5 ymڸ$+yQxp3:O y DunhOZg@D ‘cӌHq4X)y(\ xW̖Ct@Ҋ0]tmPRduI$@9k[qcQ3EDDbێ~eOwnޭc2m܄ȭ_Œ=~3{Hλ#k(cYK#DZ[(|-򝧖vtW$YY(WAVN%-"MyC2U̖Ʊ~I;WZf@#:$Rn LͶm[9tdۻ&8J]}#cuTTa6a8n1v *+?mbFF+@wCNy&oDݙ"yy/a@D:YO[",i7E:kufھi\:H%g7G"bguIy AZǐF o̶nK.vҧ{ۓMb6<޲:F8/]ru ɡ Vfպj}UzƤ1C2v(S cʸa 7쨫eKm^o6]wAW^a ]kuKQ8"iODT&guxޑI+i?}(;:f"1eS̖q#9 5:e6?>o8v%RltD$v5~ jmF&(|w dy\Dc50MVvFie p2 #@aȋH)Xn@V"w?"4ǂ(<=AƙJInЈ$))AģOEl"VX)7e^m@#2'6*i"\Dcʪ-r߹֭۱k>PDDD$r54nG^&; $/"4X\ 2S tW$YTV`Ak,"`@6[hauHZ服Dlu-"ɥ")UVl9~5۲7}|y3ݱ&y*5;o-p(puK:OQxP 3:HA| BWDF":@ iD̛tf-$[DKl")Ul95:u0SL"""bh>6nX5 2 Iw^SXgG(mu= w\ MD$GRj0:@$\)iSH "*fI+fW ܗkݚ[ijjJEEx MDDDM͏765z|̱w0 :DA_*OO"?:K!Q/'}H3ZHYΕ6[ZHkP1HJE֜jذs{5b6R_ ;SؐSihB[;x$`ui ĝ>h&tWDJB/g݌HH$E3ef@$"ib6*bYbk*GfA#"""FY׬غ9ߧCwޕK[ZMDAzaLcYE$Ղ(Dρ;yD$jXnp$i:%i$I~Hʲw֭VMCcc5͙RX"""b WՆFVmkrh;oi!2Ia[v$yJ <DA@+"D95*)HIڍ4nIZ}OE#i?I^!"U ϵ5;wo$*** JDDD}d/f[c+u93Bd;N`.Y4z4mQ^0 "JA< aEDJ٤$p$h9iSHo~EDDY0w[غ1n1d`_fLKK+*_y/t1iCɺ#F<|5 7 zۦv A^ ;ouP1J݁_""%ElRjT8yI;WJT=$i-frO{e[zVjQUܮb~{rWO{t>.۳wixw+ %E;o3pN?$.j8 8${:'Q5YDDDDDDDDD$y(AU۪Y8k"ݻRDD$]wbYƦ|N]Rw:y_&wj[gI7x0oQXaPvG*MD:HIZW4v?LڹRҾ"푴4&H;mg GQߊ4pHӫscы ODDDc+۾U&""">btG]-wUs %|p7X`(|7YҺ ? 
xx#`L$ԙMJM7X8s}OE#i?I+b+Qrk5n6?>׍_lٱsS?u[J1!""@>=cZڤ[oND EܥD@bi(|=`xSIDDNǀgg3}5DFlRjV8e딤^DDDT-Zusbnݪwdx޲-f_DDD$9&ѭ*|G >;'O le6y]A;aDpE$(.F^8H9o@3.HI;W!AI{MJ﷈XD ImEu.Lp1 gxuNn𝷭 $|5wwQy`?⢶YfN<&?Z)""-Quʫmp1]4#RX)AZHyG2 b6WlW -{|cA̘4jsHR͜4f]gW6ЈѲ`qvcw{U]%pi}|:L{&H1Qx&#Ep#pGÈX ŷ:@$\)iSHf"""ҵTX ˶hM:ʊ6pDDD,WWƦ&oٜS'!灟Q888 8d\U?ZB%[B;ou&fuxޑ̃t -$[DDD LlZ >mݮzVo˜~mo@k; SDDD:iP̟9>۷>Z;opUA2YV wޏt;ouDIost5+|hFD4:H B; )i6:@ IS1lK7f-fVU!2!{9*YZu: yˈfD `[6e ( ~o $i7Xe"R|A#.*BjH* R`|%-("Da0:G IS1= D˶hmlom9t4nzy QDDD:Cg}g{m kwl"B;o3pKA=m?4*. p묃X.ઘM Q8Q ksul2{Nf@$sx u-ywe}A |{wO} fZ+_3݃JI.H ;p /"\8yQ!H*M R`I;NyG9iS|%gw:'>uf_~v xcFfAeE۵//-/l:#LFؒuυ$ҖL (y@UƔD|yoZ)Mds{SZ5}wJ0"):R3:@ kt@Ε yS|%g7G"""N֌;mh`ն-Y̚2*t˜akܾچ|1HQ[;Jy_'08!p/2_Da&iMY"RR(<uN;TA!R@I+\ft+i?i|=vR1]Ϣ%7dxEE#"""qɹۛXA;6yw(08`! $I7ht$R&(\b^^y/Z[) ís|%- Z逤'w[DDDA ̍vyάks7zt/"""ԳG7[5wd㮼Zy#_ Ay4[VM!2v[D)!ک0wmaDK̲ R I;n$I:WzKҾ"HϭDDDʀZ>Kwpӫ湈tF^=3 ";yw*>\Tۦ3W 9~A:w^ 7$¡!DkQ88:G;= ;ﻙM)M*fR‘5CWΕޒH>s1"""e@l =ע[7SPu1ϢPDDD*+8ݳig ~dd'! I "]b -m+:rO$=T8R8I:W}oE ;0:G I.bV 5׺Ʀ&ިޔuǛT`""".x6({ӥ77ؔW+NVRB|λwޗ|Nܵ3ɊAγI[hfO"u(􁽬sipo˾Q%n$$g:!IJH.,KHS1[~Ϣ%rPs6c>e1ױHZewޞT+Ʊ]i>b:@3sHȼN~:G{:t٤Q8:H$mM&\ %i?@dBDDD;\vM6n >@DDD${NYu3;y}X\"i;5y?7%q-N Iz}Rg6哎sw[gD:uI}tFC 9ZHF{%- Xi;/"""n*f,6)Gwc]@"""\MMMDǙ1H)cO#uQ:D% ZJIWF3;:t٤3 II+doW'{,Җ&wZDDDٲ XkVoߚuÙ1idrH3'dY׼} [kkyeͅ%fvλwB`/ vNNAI)ܨ!"w u,jλ:Dj::@ bs̷ s e;U>koZl""" qAyteۘW1d{wG݀;;AVYh/y5`@D 4Ywu)$uN8:@ s%i:D *f)*f\6۳0iL=o꘬kΦ<&ci&? 
EmSCtQ1H pqg[CHA3tFwP"Da`t{-tI,,$ɃsM>ko&""ҥrueXWdD$ y}} dl_A4w}CH| f ?wu)A$ A{CuHҹ9!DNQ1HP1[~~l߲n66ʾl6HEDDa?sB5v`m<7."1y}} pFDOCtc:S6!K wuJZ$_M}}!b 8y9׽a MewS&"DYdTYhyoZR1[ kًZfM)c MDDl6ٻ6i$޼ %""Rgɺflܵ#ߧZCH[ :G<OPw6IZw{HQLT,]a ^"(:D+v>QHI>bB[݁sy!DDDT 7OUn&""Rűκ7ݕt;:D+Rw+ 9Z\㎈$\}9Yzd]l&t"bw*y2ulIZN!"yH=;/@)Yv`p N=I; I7zAjBZu;U|־a Y|w&*f8wJ5 qmO{yu:ʥZ1!Œx"1Zx:!}:H3g[hC# -ƳHy ppuV;oubvzk*++9 ~b N~B**ה,&~Xl"R0۬Ci|=lÂ(:4:@ uzAv!% x^L3Ifg3c8KZBϕ pu)KI-:Q1[[ \W6!T҃5DDDJa FɺgE$!|Xh8'u)ilV eD;p%OZQiBKsbX;*f+ۮu aM'WN{BI}{\556߻E$ynЊa:/@^/Etlu*il+HQc@A:L9ZQ :D%\ v,&@uVw^ubW 8K7yάkF[h"""wۻg5v`閍>|mt0 OZh ;MWX:dI?-Da7(RV~DaoKy&:D+n:D$\ '5R(b  """TVXY캕455e]O~"HjM7-뚦&[*ߧ\D˼ltBG7}CHFXhAl@`u)+OXDd9P#IWhER(iS[̖[ .QH)e$d9 fBg(\`B$~L ;~C2ϕ\I yC$<ǁKYzF6ʾx MDD$5>x`#ZdQ1[˴4Wod9ոQ)9}zkyzcO{$6.$Hro#)^1Vd$-٤sd@gQxu)_X{$sYDǬCH:Q8u,^"|'"""*f+Y7f4u(9^֬)7_l.11:@ ߀׬C ?iB#/fY?(=T$e*A eC_(lD-$X)y7ZH+Cdk0:D?يwz򼙾7ιNFED;^QwgރQ8$Qxu)QZ(גvg0™!$(<~'Qx\G*%0:N$}Hd0pmI{/)Da $ݭd}I)I3#lEDisd2q!Aox*K6oиQ)yݱ%}'s̷WN~D*KGFʕRg6[hC,Q8Ge{ Qx p 0:#_ۥB p̶l|߶ΑLA濯 \"mKZ7n|yYA5iC-HZ᫾GQ2גa#QxuI oIgXdR1[{ e>kwzҖx7^{ >ʇO /n"sDo<;GhgXdtU#p`'xⱣ $% HR_r$(<:C XVyYH:yYȢu{7wHy0:Dg+El g ߱-cTuwH2 ۋEov&$S(<e([X5«B@K9ryYD!<\ uk(9ANJҦSYd p8p-:K 9!R,cdpe1$oDW[ywYR1[Bƍf6yH>qڻ ND$Y=ߧ}$ߟAsm5+}&{ $u =:t 7:K%?/"""7U:%ǷG崥v7s;k?O7#ȹ.\&[j2{PC('O`H?A&a(<8:G+:i)Da֒hA o׬b`QT9yHr۟Q8: 㭳&ZH>w>7´9I;QXDEo}:Dw%wyD;HaeK&GW 2v-"""bȌH(PYusu>(̜عp"""t^SwVl-}:c|]جF}/³>'m:y $݁ǂ(\`D:&3CY<DqAw^ u,Y q!۞Y uI7Q8:NXgi/Y(a[S O"D1N#|`BDDDGl ;o񅏼faukkxf"F8M.xOFA@EZesɂFWi*?\_]*DΌ*v-=p\޵ ;;wir^ޅrq ,Z(˻Z5yQ7iRMA(q;Tg $IfψP|aww}3ЇD]%Ia(:FYo^AJ&4 ١8MK.549Lyx=x6ZP\8VP&4)p06zOF(੼8 d 482RT4yq*I]&' W??ͻr஼(SIw!*M&;w6*a$IÐɦa( 5<~ +64tL?O?FB/cV\oņfY =HU8M~ypc&d"q ]p x_2!y+>7w-eZ:BGvU孁G]j?\8(ZS7irTޅoqL'`tkpVjEAXw-e;N]^Z&#{o޵TQ_EHVmoF(ǀOK9jLңD?XJݏ D4"L4?~wC%s%9ھ}vurcIMH7X(g'N#ȺtTk;4yM .i2cpPqsýѐ8M>w-e =NOWEA؝wA#Y&ud \w!9pE`49mT&.@%NkDAؒwAOq |.Z*Λ^K49=n5u||7NcK \wAzNqyR[:K$p!}}fm;I<wurEtȞU=n'VM$;ٮ $Nydmr./_˻a*J|_2̌>#N bF8Mv$7/(ͻDA0pOu5H4y'sB&U] .伋s47?SA6K gETQB<ɉyL&G d[J6Y.$ cQK( 
/^}}=_E59Iv4èciP%4wF/TbyRYq|,N_'8Mvɻ>,n̻x> Xw-:x(NiR-#^&uyS+ >"o]@'{%Nq\\Sac3y"~/46bFb7w +nȻ m !N4w1#U&S4: }ɺDV]I4L!0JKx|C3O}6\GWSkIp4iXs֑;u\,74pV%5DA4@=IOqvO w-% Y/x.䀼uqD6^Z`bT{Q~<"MyQ w!"NC7 !,~5D|!٪PqӗOi8MxiM1{CdM, YSj{>yqTk*ir6$Z Qޝw!$vxWz8M> |X,SƎcIS_rv}EsP4Ҝr^{`ӛ:k"z %_o,>EA x3P]<َxiqir d{^ET]|x*N Lq )Q]pS t;:*t>G]H5d8M~F6*xlE3lH1v.P-]X~Ev:#NW71dkNpQޅEA((jWOv}*N4wA.N-4x 8|fy!IjS-X1 \ 5{V.g_y[v$ #ݙ>-rŴu| 8- eOK>w4cqw1Q&'_^=eCE ! I5]&Gir YCg lλan-qBq| Nj:h4e3o Q^?365ck_ѝ{% \nWiɹ 947DAxsޅ4Q^OvP{^D=8MӤQ #\&ir0nb% n]$IR0V R_شޘ1Wɔx*Iza̝;_J~ i- זk uzy_&4,{% ^(w!*Oޝwlqk4wQE&&7]:߬M\*N.h(iO&& }[/ha<~`Q~~uC[nh&\Sn47<|ő#(w!L_>w즶__*NZ ,N4ůck%/fpl.D$>RQvirp78TYWnSƎ{ugϜ5wtutLKxǏsvY9 OgDA]yQ>o {uqDAXVsdم3Ө[w_ʻZ8Mw=`pJqY/ +߲^1w9t.u(]Hq*ZTMfM5p4x-Vw-|1y^rc8*Bi'*8M~* '-kpiONKU, iYmCZs/cq@6Le93R ۲ $I#a*8MN! 9;/mwb˜ly|g*$U tw0k>me -uM)Y 1N*5c%b(ZU#׎N^ʷ/tH499zd"mQYX4G`{ =Edʽxw!hމ8M~ ]i2,p)/N4aW&7]yYqT&ϐu=3Uzwq#;/ 8mE6Z (6_vn{!I1d^7 i2x%ϝȷ!p\+.D$Bm7Aܱl!o# ^?.{#_)J$ՐNֵ-\'{i)Z/H{\/, /i2/;l 74y?ϽQYܿ+#jeQ>w& %vͻ!=_FlsVaů/Zz,1P ^|5Zd.4, r+p i2 8,v`cԦ(|S&Sɞ'l9JO(]$IYj# ?i;/mvdtKg:Y~Fʙ'I~=uyh7vrl*&DA TՋCq$yU^^\>ti cSCp+KAd!7(w* qIIwO^]\4Hd4 CUPڮd ^?9 Tټ 'knGɣdxH \3TC> ޗltTsy3prfM]L0[)ڎ' ѻe C6 Nd?-.j Y\*~ݗ,@:mNּ @ۏY7L$}\Ͻq<z{:|)'{4$6fW>׵q׽3_J5o)I=XĺrvCjsĤw!08%Bh~%ş4Y<< 7[֐ݡtN〱e<ٿ,QdaB,/=o=A;A8M'FxH_רqd]g Ee)YHbfK{RsͿ'{!{.~?1? ߒuV1Y8M# lȹ4اq#{O4l96}ض5(ժxc~$] lw#*֦e)udM[]<MCq<66,k_߱ ޗw!\Q"Nyדz n+Nd6V{M祍dIQYntKed K+(˺+I4P ՈE =m[[xq)ޖ>mw"]= @,)V(xq)E%aO&o 2zrTGvu.h0 f(:a2Q'iy3Ll8кxPwy ssdcȻa(Xywf56 Ge &~4GjDAxs&\p2&Mz8f%KZ֗gFfWPjTdμkQ^cy2~kшU \bm":T+enwg9_, I7Z4bxA`k A6I7l5& F%I׭uKZo?N>$I݉'{.:u^珢Fsc_uPIc_}0]ҼGVٳHC5?ti\uy1EA%s.E#R(Pp;7]=^Z5%9WZ$)'~]Ӷ;dT_8( OJk+ |EoQ~8x,RTwyDApwR>׳*+GQvFAn5qR쵊H, (? ϻՌU`(̹Idm(.B/w._ֿ֒8GA]ILƣWڶV\BY7~X%I. 8Xo%Fd|>BTbe? 
\IQ`A?EAx?pИw-R?N J}[ о*1!Ni9_@102ּkQU!(yC)=ޛ:K |cE%IR506BDAXm{{c֕hy_+Iypy'VX޲lQ|D pkѰ좬#2PQp2zTQv]^\+#õY,Ȼ#9_@qq8&]}{ xe+ Q^ ȻUa]$IRL Qvg(u^n_m|+gP?zTUJKQ\38JZm[+/+;s4 \  ||;thxI 4X =;Yz!=Fuݟw1*MOUm%tCK ({ rѰ|xYw\jL7_P:XOGAxN.F$F(ۀSJݦ;/`Mƒ?Uoc؆ $ 1G]k6VґBm(" (]r?dc{]F]̹ /Ou8Pvĩ2ŎFϻ Q~il]ms>(';] lȹ 7u>U4X l!9᫓F0 ›.F$F(בm{{sBVh;`\̸I*R4PMWN{_ݶ;-,7وu(+ ;}[r1 ,J{]̹ v8zQEAxQ&R+UKy}1̦P5`Oyף\Ύ(Ӽ=^/K; + lϻIJf \C6R.rj-`ǭ7Ŕ-VX$l9|]Q"B4 (vb߼ѐ+?v QjQ~yע~!( m5EA8 WgKGC  Ih䉂' ko9|= DQy#I_F(WMO]Ҹ}fϜ&NTaJv{=sZI7nl )9c1QDi@DAx Ux[DAxcEG(dGȹ EAxnqCfSI㔯"d]FU[~G%mQ.ϻisQ>)QyףA,s_Л;%IRM1& \Ip_R^WZ#1cF3g,XXY=/_̘1KZuqiWٟ.>HFDAxp]jCQE7 ҹXsIDA(˚wa! ou%zDAxjȡ6?cnn*( ܚw=Y(_awAKQLvyף|ᇣ lλ I`M@vllO#GYGWPb'w=FwwOEJ^Zp|o.iB#WhG,p@\j 7 .FOQ^ D89:=Cio(k7+ …Q 8Zv4Jo(cw8Mfx|U(1+Ge+7//ywAR9 c~MUQ^ai&$Ia6KΑ#l[K-1ʃS?jTEJ^PQץ㲋O-iB谹GȂlvdӰa{_# GToɸLK1Զq_9<  i( \ t\H7`(\GAC||U(oxyף>(O # › <8ˠﳫRrǢ \wA$IC0' gF>\vKZsEt>}>v9<*%I?yWz%e}z,lJy8, cWhu eeQ^a{ë_' Dy`8~I6:x(`M/$ –(/#{<?pj)(39]QlGW /ˁ+Q5 G.HHQdF8rx8! EAؔwA$IC0Cm]ck /OgOiG\~c/Rmg~{:;{}|[7{clajaWv% E%1CWGAXI\Qy`? LGetyQ*Uq9ˁ_am=HnQ:Z^90LFA:a,_wgVTڸT {mK9Vi'{{`FAx7IjthxpM&G~Eۭko%9tyL[L>txQ$<'lK˦vurDzl*3^" p pK&[Z /(w1]Qɻ>|wU]qXd7C1VUS|6 ]H;||ՠ(J %@i-_#ߏ Fd_![ESs-ld>([$I`M/* uq86tupg8t2v\Of~[(8TFzc8cKަ;- pfmn(U(WWi8#`ruըɂ-?t\RkyCȺ lkaNpMŨir9pp!UUU uQޝw1%]r>jXdd!3ms-]>(-EAx'pgFȂmGt (K$ ; ltd]KJߗ>Á[m Jes)]޸/I/d̸>7qͪ ܳb]凅.F({?d^ ׋I]}}qlSV @ 4 DD&& ʱjq?ϫpw5BDAP& g]U =p(7\4of)8Mfz-p(eY; J$Bw qW-g: 5;LN]]]I,^|:4_0ݶ3u=g;Jj3\Sʗ?sYBMkIV-g1J#V&Sm'}P^p*$xlq&:p:p2vzK+٨n,\i2d#J12o9$Il*I1pE&W)pxxr;y٬+mfǻ??|QꖤZOǖ~o]\PEQ6\y&dCvg-dʹ,..i}`|p80>^6DA؞oIEA|r&ېuW=?Ⱥ~QzaEr1W\(Vt!xW$RK@Q6If:]Cvks6]#tN$IRF]) k4yeAYؼ3?~,߾b?s/Bn˒FQKަ{V,bM{kΎn(wip( 0>|KV W;,v0p ?櫋ltm߁GA>׊2|=N dգ z7s(~,FmZN҇/98cyCa!ϽpSrU+7l_Cx|t([$IRm0̦EA8MV(ȚuL[Zsx>~ՏhmPYT&N'8W6Mmܵ|m] * u q4uk;x;p[< '! --.F"I4<fSDAL11;%{b4w]_ o~|?emsK5y>ɷ[Mwo/7.aΉwFAXƒ80-.jPe>V4_#7faY$82"I$B\5xHGn|C3-y֮Β=ؖl=$ [{;? 
vurےg* 8 $I$I$I$i0̦a;f2m;YQFa W_.xGĨ16TFҕ0 Vlh/C+%I$I$I$ILiDAX1Riږ9cu,p!ώ|?dՕ.I9 >sa쒷5+~M%lΈl,I$I$I$I`3\r}fn[2%oNQR\Qw_Vmcg_: SA$I$I$I$IÕa6 ( mw%Oe}Y]zI\KՌr)ICfV3r'ݲeӬhJ6$I$I$I$i(fӠpppMvr%<ظޒ v.mͻz򊖤6c\s ts%4wWrN(_ƒ$I$I$I$ 5;iHDA=(m;Y\^S!I[7㯋4iM$I$I$ITM iDAx?r)q)\BwoOY^zI\KՌr+I ~\zIem}+pRz Jw`(dcI$I$I$IbMC* ¥SKZs뢔Uy]FۥM]y7UuQʒ\Jv I$I$I$IRF]F(+4c`v9۷vwq0u:{Θi^p?Pye-I}yV(k]Mk+=t#pVTI$I$I$Ifg6& ›+~AZn].m[ނo^>rŹLbr%癴d>rŹ|Ko/;ȶ[?l7{d$I$I$I$U;lU Groг-o#ݓ_ÜQclR(|ƌ5ίapi̞9WnƥgFAJw I$I$I$Ipcg6 Q]%۷vue x.mgN_+>~OFg/'sUo);C.wc $I$I$I$fӰjd}@EIMkmwW?SN?ѣGX~uFn]iM%tNH$I$I$I4\fӰa! kÀ'+ƮNt>=Ξ_͏A>`w*)ARc۲i{=uvtгvuVZaQ~9 B;$I$I$I$i8(d݈_.hZ˲ Muufn9/]y,l_g_FHv}]Zv7 tTZE/p5(*݉$I$I$I$I0bxq|إ}tXԼgnôq~yda l=o+uPxh2ֵ+$p^wg'$I$I$I$U nj*DAx'YkoK Fn.9l1{z%%HϞ=|:{yeu //7&I$I$I$II̦QvYKK¦,oib88G?36TRadɜsq~Am?@#E"vgv"I$I$I$IR53Nwui ufGo[ +tIqO?>UI;ag}<7ֵrےg d'1&I$I$I$I̦TͺgySǖiUC:s_O^I) 8yy0}ttF6o9)(o$I$I$I$IfTբ =N}OʛZi-ZeLv%kZ0w]x|㷿5" GNS7ѣ*GOo/Utoh ([#I$I$I$Ija6Ubq\|دt_ݦb-+/7z(y뛎͆LR)ofm_{!P(yO}t($I$I$I$IT3 ;N[T.xvO_[ae^{A< =Jʑś>xY( ckVߒaںI$I$I$ITk !ߎgȂmm>n{̘Ì+iݶ'4=2?.(ipܝ·A궍;?{ή'ժ c{]9yޜ~ﯥ֮biʑ8 {fw$I$I$I$4l6zlJ`n*[ֱe[M.gŸ ׶9|cŏ~y=_Ibixihko婵Xy`11ǎ$I$I$I$24"&?zG-Y'?ʦ64mg:{xVmFF#GQo:٩+ nSkWmT7 lJ$I$I$I4fӈV |9N*ojebxv>9'Wj8;sw_x7r?+7R( PDicp/9{BI׮b]GT FKQX{7I$I$I$IF2lapE&_.umܽbl1m'Or5'v]ʔ-fmaW)B-I׭c@ tqv*I$I$I$I IJq| 8lƥ<nS`=fn9w]x$^owloҐ0e"+gv7g͢u,hZC[w׀pE N%I$I$I$IR0 |83N/N>ۺxlJXȶ-6n|ky>7p?WǓtut{@3y]tˀ{}{ְe=;OQ>0;$I$I$I$IgMz ʉq|8-Xܼ>n;Nے&NaT}}=d]9d]'t<"7{ROoqGc_R?bc3ׯam |$ ¿ %I$I$I$I2& ¿irpp9p@wm{+kW0jOμә8##'?˃[iob4yy):v;,j^GgO@OdoG$I$I$I$IC0Tb_4p:P}w[En&Lb̙8~v@Q&]w?ōvxH#Pݨzqk9xN{ArʀP`f4eU놁}5(K$I$I$If*P >N]0@g[7lƍvSiL;~ v Sw<_n{O-}}> u{a̠%-Yڼ}# Q>9;$I$I$I$I3&C18M>0n&]tj4cfT& MpګW Wr_⁇a٢tu رTƏepxjn=uPҖ&sPBDAd0 I$I$I$IcM0̻4p)p 0mΣkV蚕?m'OcISi5jyst ׇxgXp0~,̝>qMVzΞohbIzִmì|5 ՃuI$I$I$IT>l*c>Bms6m#ZΜv4fǪ< [=/˹嶇x,[HWZ2f\1sփ~̞^[[XҼz :b۷ lH$I$I$IfA1,8M,v@P`fohfL}=[Oʶ e~^iUqSP1zl[ڂyf8]!9vo,mY Mtnn{0$I$I$I$I044/\`@EXԼ13q2s&NaքIQ,-o\Ú5hnieʤj41cSpmmʍͬBWo`n#Q>6$I$I$I$I0x#$8Mo&C`:fNpۄ1 
yۣ/bլ]LkK+݃hm;^w~3iR?z&O`Sz6idfVllauF ͣFA4$I$I$I$I08MÀ=i6as&MfS6vKqMiUrn0[]&c)̚sEVov[o98,QP`}G+66rC ͝CqWۣ _I$I$I$I I@&3x[[0;j:m9~" 8\=<2{r df֯k֍mtw՝wFMø&L)dfl9nn;mUgOk6k|hGϐwIɺ CuPI$I$I$I4x IH[AY:Ա1~"3OvekXl+VqzVnb lhiBoRazG3jhF͘1hhh`L632{43lcƁ)m61$6Y  p]$I$I$I$I-٤a*N1q@W[=45od]s+-on muC@殮Koov>Z_G]}Gb\FQ_WDŽc4a,SLV*sx 7EA5H$I$I$IaMqL&  u Ma.AC5myz:QQ$I$I$I$IZ٤*gUDŽc6n<q UΞnַڝk~< $I$I$I$IC0T4 xY׶0j gךGpmlO |2Z$I$I$I$IR I5"Ny)H=ic3u8&7ebX&!:"ttv6چGp +[ \k5$I$I$I$i0&ՠ8M'^ loEi1LbmRCƌe˜.oX-hdCW:7}`CWyB' ) ¦$I$I$I$IÐa6i28,v s:`☆bƍc;j4FaܨьFoB^{i.ڻ6vuRgknpX$I$I$I$IaMA4v% GrkQUW`۸c;z4Fa ]_zF3.[PKOo/ť.ڻi颣; m Ty ,dU$I$I$I$I0&`q4G bt]1ez#_+ d+PԺ t=şըMᵿ{ ̳ I$I$I$IT ImGFm*5I$I$I$I4 IzQq m 2020-09-07T14:14:00.795540 image/svg+xml Matplotlib v3.3.1, https://matplotlib.org/ seaborn-0.11.2/doc/_static/logo-wide-lightbg.png000066400000000000000000003631051410631356500214450ustar00rootroot00000000000000PNG  IHDR uM9tEXtSoftwareMatplotlib version3.3.1, https://matplotlib.org/ pHYs.#.#x?vIDATxwdY9rwuwuzr$6 %edc `Ye`B-YdI%+K =st{f:T u5<:۱ tf8lG va6#q;0`f8lG va6#q;0`f8lG va6#q;0`f8lG va6#q;0`f8lG va6#q;0`f8lG va6#q.J+$).?]DK~$l4+inϵw=G \kN[KjZӼ+wA @utvyIMҭR`m$gKXIl{Y3t0ttv$RpmfI, m+W{dw(lr\Fע;XoV~^!.&l6t7.-kWEIK7zv pe#`rxvIoRxI EI_׷mOnD utvtn{tv^sGHHhHhXXXHXhxwpPu9#dZ+R-,\Lka1dJ -&ӺB?g}Hz' 7lUˑtXһ$u2 E"!T:WMu\^:xLRX- -\wGj5f)vK.43YMLhbY/w2yYf%=&G?Iz_0 gـ+\GgWPҽZ Shs窹F--jiQmmj+Vj;]ZXHܖش?q2r\KKzHv8lR۴`{-huujmSkKZ[jRZ56$yNwI}_CÓ:{vL:?svTc;]z&%uk)#d`gf]{y] )-iOkoR[[RpVHhK"-&X DN"+-\"ήÒ~PwI~gu`hсK?e;]$MOs~&v,IzQtyu;0pj%[wj8Т[u@oQEl%+(-Z }:߫vz\B‹'w2ܶ(%/t\̓0p r$%%}ڋq^k^ܹmcFK۞>Wl%VҏHQI.9ktu{gl( 0;^j#ǑF589b*r)z;]|׋/_4;{HK#<(a6`ttv%GI?%m; tם7ݢ_wHP`;NKoq%fIM:S<Q4tVO}3z*ٶI'tݮfJNKK*n}^hOKo3i=R)?DU˻Dm+^ЗᤶiJK>}dl;N C (ή6I?'?H*y¬}ox-znR]]UEfUvyDZ+q k5I+gNȷ&sѵ5|mʳzg{rp;N I#}ql@::KCJ:糺:noY{6rBrWf9(fQrm|qA'ƴX穱,5r)z⩣9wd;\^)w)Ew=9/?-}g41Q I)HO0Pή$'ɠvPy뇚1-f3zubXs32*j't^ו\P[ Ma-d3:+uS}<='tfṲ5_p жvlO'3J~ӥl0.Gb(Uez[ovjJU7F}}:ma/7S$̫;VZsEAY _k::9ʃa]Aד\o;t*ƶ|E7007o_|JS~S#=G"!䠣͒>!R{ݵ{wޭ^`I^|c􅓯*c I=dZk507ScN%W 
U:S l84kNّKe8׷~tjQ9iW馺f+_:գ_Q˧J>}Ro .D DGg׭ZR 7ެ|=ڿT^:;;gny(lzVYcXIM7<Α[XVn-FnkhUKJm2m9u;ޯWU:-_{|sfٵ_KؾT{W][o]e2#ꛝ^U11tc{T-;JWovg;>9 gX v̩3jff^>Ϗkdd[N/t)Ea6`ήFI"?H*ݻ}y\yݰ. kuffRώ|X |;$m2>{`ACaзg#^vf;_6G:}fdJ#I}dTu]GgWb6}+ ֥s5s 5>nՉοhrqA529Zl:nM'`HWK9zM7'__zha]n+C~Ӓ>}dba6\::"Z Ry-&tcj u\5W(yJDb 3M̤*#Qg7&}K+#A-d3EF,\麮:}׵z^&;:>)z\*ǘQ\::SOKtGw4Rm4ѦZȑ##+qdvld[}Vh&O*fFTT놺-{幎iM&r( +s\9'5\y͝MԐ=^xOU/rح-= Rʑ{v7}?6}5|;U9̲^\ZەĶ &e\:LJ4<:,ZHhkpnF/ *g=B6*ป'1_MovrP[mLum}kdձ1ذHLT-54ZlF 2֪6ZPD0Os{^ Z=+?33ѣѣ0#E@ҭSWW~[omoX}l[a/I:5=gG :zrj6gt.IzpMIZM&XuGQ [w).1rih~Vgf&WUW*ປ^e/Gg4NX穥R!/ q6 eDrAOmb;_CY\5):5*ꩡ3tRºMP8`k#!߱ڼ^Ӌ3FGiI?}0.{]1I!駵 xL?ߤwnBշX3:15䢬%^UMꥱ]nBFDPN7 A+}5NmG* XZk9wY+qT ]|Um`nZ ͻX 7Wz rmU ԩ7F/wjXJtu,wcQjOwR)kR~VWfMV7׷>j >},4|VC3gB& :s2On7FWcC}E2NՉ;g8ꛝr2):yʃu|xL?z~A-ήߕ#Gҳ~V*+n}hA6Ig 蛝drᜮO먡@ہĹ!dT*Y93ɨҲ`hPDnNmd[klq^_;{BkSZf9ޓs=_U$]x8tUnUTёI.p!̆FGgWw$=NJ:w~]?9uylej|q>5&oN㶋މm[#g50>xܰoNNO襱!Ym>ºež'7ǎWZщk]*q##NXD$:A.l1KcCEF4j"O[^ k507om@NYc40#J-8jW麚F"sJzoOJXlttv~VoI GsS~#߫[ڊ&Gv.6 qYked95}1-9rWEB75]Bd>rI{& zrtI1H.eUU$U5j,(8VnO ꚚzO MEF=G\qU+鳒=;mFB%TR٬Oךxnoa0 ꩡ39sH@X+G*8ܗd6/|${=Y~$IRU8Z{OYk[X֘~KPۤݕ՗\6IZL'ڭ׋%HOpWή%=lM{%d$uXḱDݎt5\We1IfU-f3:15#zmrTo̶uJj pDd㱻s\V|cLA6IzalP㗼RFBN}TScuۼCҳ]4vٰ::<- X~׻yɆJY}q-d3[{^u;a*k8Hpu{Ԣ^:uCjўmX9|~ǾkHzatPg&/NқwPE8R:qwZi}sU蚚K;ۊdZOTp6_үH#ۗ"!̆(/$=Xj}gW7^¶1FYkiM&=s]SӠ5;;oN薝[X߱ 0:urմuf_YcY\klS4IKԩlHLݖk5|kurj\s {vU$TqrZTjZxlOSw(tI>2\²attv=(/%5;y>ˣ2j.1M&[P[J+%/}ܴ~㡰vW$Z hp~&^eq޸ΥS׊vF$-^k:}P@PN G(g耆g CaWcYŖR~Vݽ]׽-{U+/zaq1?n}/taI?}K%, 0.ή"%坨il5$G$+cA\e/qWLH x[kWq#F_s\s^sS}U޳锾\0ۭ U8w1Kiyi1?KO$}O#̆IƊF"!O|:]|cdּ= QS֞+zAmD^]2740w=cߵyw_ͦO.( #1UJfzlo₮{[(lXogؠMkWzeFuz&.b!u֯' '3zO՗W/k|q^厂ejeD Pn/[hlޭ:XI}k){*u QOFblܥPx0ʘܹLJRYh)[R_(tߕz70MGgWB$%ߵXX]a|c͡> P^YHNtDV_Oih>1;ڷ35F# ϱiq$c:u~-eщ:1R`Hou`Q'_Y].}@/ }?{[*]6c|cLV:]WۨugB<ZXH}Ofp]^ \6::KzRhOdˁo8WMN. 
sܼECyq55JdQԘF ZlrTөE؝qycX"9%]@Uݖ!zz>mo,ZelFN C}zj^TrA1>gq >7nZ\Wn/^Y=1pW:k9w55ܯLzõiד]oO}H+dAIO.Fp?lˑQI-A$1GWwuv%2Gri+W5:$::"LUɨ}S|X5FY+EwUϤ5+SkvW$r>~Oe\')/n^ eqжe7Fg&sc3gfN]_7ȶb6jrKZUy;j.)p]UGt_^=vW$T,Re( ڮ7>J}fөkZωɱsƭ쟝uۛ]J.vpI3 wݠ{J_Xd/Gf7;T6+uT@V@Hi+k|y!-qdvP +;Db`vZU]jѱ1v@W xZkWe8#x(}UjWYMٕmTj1뼝z|TN]zb޺5pu榕#w>HPX75JrH}#bS~Vөd;onlq^ln>_~Ok;54$~C}]H:=G<B E'鋒]~r.qѩ :>yTRgf&tg.1r$'5*Q<[M4ʽ+ozu QwN/tw}ԖEIjʚ}7Jdt5clhE(|FF#!}P'wJz=GNlCy0 unIDB}^=HѱML.3](詡 DVK#zmrD啺Ul3֖la^@纪]fkI5^5 jB\Owi\NJKIԘM* Xr0/$ 5zNMO`u9\WU޸kkcV -YkVUG[v~s]WgO~~NT^}#ڦa6I W/N}YO Ie{*1y95{ T,gau_Ufg>ߛҸx.:lKGgOJT;uTUU^KaĈ& []'tdUM,=S-S`:Z]I<՞쵷ZuB][훙*zi9/k ҩi *gy||q^Ϗ ov@[,\-T" 0p]UTښU+D޶߶1*בO~PyGKC>m( M1f9r$_gK|S6)dʺ9:;3Up&T)p޸Ѡi_Uf [Z1V ldVI?tRU"JcY+xx(h{JEK]HjWif|8u5 k/NkZgG44?ws]֨5^S%iJ׵}U[Y#i{ߥb[+ ߩY|^&j+=Gvu#si;.I]IKyˣ:nK7F5J.jh~F{BeC]j%cU씲e܆I㡈jζV:oĨo3)=R=ꉁzpU-k| nkh+8Hq{qTÈ`$[Y̍Oۆ0oNLd[keo G,R}|𞷦].A+lٍX][Pyy4Gl;lTGgWT'~tMJ^ofS+}i=wB_iƒה5FG'FHsKfzalP/  )h)\ eJ,wsZOt_K{ށ`H_L˰ک sy%ܥqoݧX0Ӛ{T^uVuîb2'p_'z:\]- GuO˞{-=?2PtG8KUu5"rjh~vyZI=5tF_8RݻQ֖{qwJwۆ0U#!Iû1oO!M99=reeZQGR{U`MuN[s]Ca=뀪#kWM+zt vilVNL]ذ[*j,of)=߻eR6^Qw+º-ugӮ-ZeQu6Y Sy+c͖$} PtQB7_2ʪ4f1ZftlrTSIDb_UCmTsQ M>\ /.hpnFSE%9`HUvU$ 9w 5\#}'^w}mVlkxqE6g]zlCYX` QKEg}d!)o}P ܴyx(UjW>YWzjzW"}\WI?qZ }l< &SA6JvʑkJ1 (m|9B^@MeڟUJ\Zꟛ䂞:H {[ 7 fY4bTRrX\};:NɃcY7;=33}gz6 љ ήf-~w߿=E'c|=7үiV4u{c[WK#Y-f2\WՑ𖁫d6NSکt_ h2S[WqeZ+՞X ґZy=rcjeyOI@+5TԸ2⡰Uը-ȩJǰF58rTߢ(1z|ԖZ,Z7}}kfJRRGy V\:Z↑%eG%6 f::$}E\x{oٶ3\Woy}-{U-˹oXRv\:nEt[|c69xFɅ-rGT׫6V ّ ƫ6⺽iWIFuf¬:ԙmT-K=>pzӽ i٫P@сƁ}PAo{61VY-]ͻ}\lp셂>CV/?-O|?QIoC]::$=tc61t:W{*uma ɑueA4*+FJ󙴎Mjh~FcHO=Q#qjl:GN[X(|-d9u4p8z˞C||cM6XpTӝmYctlbDG'G #-{OUS$-{) mz==IԄyƲ W$Ě5<х.J]X{>+d~%6 fJutvHzHҍFC_7\w JY-f3 A|cOmEtk[ѣY[8%n@o:sLtQ羡I+Ku._Yű!{Zi8;QTM>x(RW*cEiޣq8Vܔʆno|1Fc)M/x<躺YE?Y^{(Q;|œ_%y}<'遞#T G+PGgWBҗG- 7?K"&IדjWj_V*ye-k|)&I3?)S\OΚold;15VtM^-ɇ#Ggg 
Z{zfRk#YXVSEs%$]&Z/nԳ@v_qQutvE%s][vvu8rH^kR)xg)8;]fGZfD4\Pv5XXq587StMS[!V)}1!qtw_+kjޣc9KcCZn=:Iwby<::кڱRpu{bynr9]GgW@$ݛ뚲~A޵}]$ZN)[@غ].tJlݱVڱl"|_`J繥p]Qu]Ug4J9t3stvv*}}kuff{u\CuBAIG^Ц"Ǜnk٭#rvi9MGg#%+5+Ej۾."cNOOv23c(_/鞳Uyv*e o)XuUX=-{uK}ʃ y&)&M&qeJ繮⡰޺noQ vU$թ -f3r2QkJV$R_:==f:a/pTJDrta 1M'ĨgΩБXVC* EAUJ-}hkF6ZIHTZ.Z&c1&c3j&ԉqgr%%"1K*zuԘ旈DNMqIw 'j H,uWrr윰j/}4fѷF.eȹ :8*6+A,k|9r4Nj|qA)?+Y<p=_d*`::{~Q'\l+ZGg{$}N9F&~M7ooaEQ ^xu+u_ѳ$-{)5NmK+ ts}jXc#zeb7}*kL-YmcUܴ9ԗylڭƲ55jpnZOZw7Q}|î~ZYk7;IRSYZUr t{uiW1FchfөM]@ vYcꟛιAUǞC .zM9vgX I% OhweBRim7F5ze|X}S.-ϗDu(QXyI= үRڬ>>9F nI9$poemZ=œ?׹7S ͻli==|65klSSY|cXIM$rޣ6Z\PF_}ua#@@oshNe%Y} %vzr0,Ngi?`5⺣iWNZ9B7:9/A'[Utf!|c;=Ɔ<6c od'o:TTmyYc4_s;Ξ#ubmg6]J1&I﹤lRpѳ9&zfj"qO(c7'jk5I-՛GMNNOсՑHL^ᓵc.u:֖﹮n٣D8HLwd$cjgjYZ鉁=1xz+ [$ixaV 57[d뉁S':mmW}+5w=D]kbbVw交Aҿutv}d8ٮ ]IԞo߹}EmZ'LY/ \W4*鵉ͬӝ*U55G5ɩqͦSZ*ZrZ+c|M5^o3 ts}sqØ,f3놄VC,hjHu㝥V¼fIkHET+ӾZWiW<33:15~AǴPxxK/cRy5 j㠙#'.a-CKh2e/F/_Ʋ S<:R˹y|ctrz҉Pm¤Jѝr#G'F2Q>M3.';:}$u0ˑGuG~Y%9ꛙ\&-qXWU$*̓mOs]5WBN)cK%"Q5WVR::1u,::1Ʋ Srhg2(΍NBASKy3{:;*Z_ÕNmk_V:tRNLkWEB75kWEvU$62/ǑquGHn,n/ສoOx[ lq\Q̷f)ᛥSg4Z8+ip~F3ҍ-V!qtI][Xd KK!g50Ʋr]W\)::~ȥ#]9>*齹|]/6c:=3<rɱQCEHlE z#9"QCaYYi#8Q^tj:єJ.>wuu0Qx0F1uܼ\֪,Y.ʂἺZYk1F_;{Bt[ϙI- y`޵Sms>~]mcNϫ,.qreVGb[:wm.ңg{s$씦RIڮK8E N'L|c5IùFT-/zԨ$~~H?^=ڗJzN]p}C::.s=NQ4ƪo Q9A󙴾R7kXX-*z KalA063j,(*4VAoT^=FM[U8twE\WU5mٲZZ=z7 ۊLZ_;+S^OS[EBw4RE(PDw4R[<Ȗʜϻv{X55,٠z|kٓyV̤zbTw-A{a+3dIG4o|}jkgowtv]YK.s]%rlS[STQQaUܴf7=JzjOo{xUu|s8Uyɼl+^Tu[L-ꛝ*h_ꚚF]WۨtJ JYrT(4Vs:Gu2E%ٜAUG4"kNh&,LZ/膺會f +P76zI sJvNnpPaVV/u-)顳E5O.̄vUT京5RIk|kVTWwh.~Vq먨(o}grYJ\Ggם=G^-m5 .c] ITQo|@-O~*/溤B?.b.S]I#@.{_{gw1Navz縺yjrLioeMݙϩІf U9̱p4@89w?+纪t{Ӯ-\莦ݪrMD![>upGa/7ڧ׷RUᨪ#1[- yu𲲺6zdpڽfK27FO #}'4075/cNLK_Ӌc% \Oy<ǁDmѝ7 A"ٶϋ{{僒z p/_-#׃ޥoڷll%`G={tvvJ'/E@uѼD1*+33jWi*X^*%T>W:693\몭":\/`\I[Te"M9]5^d6W'FrZ^Yڜ8[ϻ /̪\po1xfnq''HQ4Tm,"<+ڦSIMh D$X0(ku5 bNJXٶ~|>ďK!%otDG(%0e}~6wܽm.ao[Kƫ(q\Gqj.{z3iʋ#TV p]ܐmuM5FU<-纚ɣffɢamW:U"zebX`HZ^_J% PItOk& 9=흞Dn󺎣;v3rEղ.Yc4I ϬxсDǑ8jOe^.yHRTVw_uG::^>m, p"vGz7C?mhk#s:nAHPX n.WYcr mֆւke񭕻)XURl6b 
ZuX^fIP2R0jwet^\bLZck[cy9698<wOmu"ѽ-{X+C:15ڤի:=3;v"j1}!Iu%q?.93^ټms8q乮nk퍻tG.X׼t/9nA}KsbTh$Qs3%y*z]/uN<X\wS]=9xz Zٌ׫yeCr]Qu- _!5'r]Ŵutv$=,\DB̧?M[XȦݔ7S,[@Y|k;5Ɔ&B&N5@V h>c'eʂ8<]kp)̤z4I>p]'?Q844DbeOqW+cҦn`+Z+us}KcV3}95/|_e/QYcؐNLd[s=|ku|rTNO,vW.zh~#d2K&=G@\q3zUdK&&-W׫"kMVs䨹B6*v,&-uSQdž:vW&p]z`A86332H+CТH\Ykl2r7:0̒5:15Sje:S]LK}ctt|DyYc;=qFtG.yxIleqEUGc%&TsדY[%vir$xkҩlҨo--u]H)сu[Yy ]p.c8|&1 svؾIO_~I Й2/Jr~xp{*JLZ9i/=zC>EDu$u_GfXZ,SE:~MQ1zbs^" >odԢz'I˷FASM$D_vP$RήFIl?EuqT2rp=+8H:X]wAewW$]u:76|c81z~t@g&c!#gNmT ӱ7Fώ+kg{]tmM^kZE 8betU,Gy$.|O7F}S:>9̅lq^G'GX׵5V% IRԸn+z uS}& _WuU")X¸YW:co󙴞ԙ)ۺWA0~uԐ Hήz |Re%R!7>:`IY0[ Z{KCІ;ݛ$n51:33SmE=RpM7MRHԨTV.ZVU&ey5QW]E mq5T#}Zڽp+KrxW)[#zv ZCJq.)fTjsg" 缫lPumҵ5]YPXn9^Hzt ZSE=1p?< A_vtv+0|iϒrjk(8D\W*\ߒsxǑtkC+/jZ[h%G:65>)?S#Ywj\tgZto-;;a50fh,fyNFVGN¬^}zj茞PөR( 궆k鞖rKY՗NQl:yNl#=[ZUo+2h`nZ-*մ[g&Ц!'qt^jtWJY+M|&-kP[E*d-1щ1TϞ8jW@V@hZ+czt|jl )AUGbIHKOۭg*cx2ѝͻ& z^:3333:T]-cuuwM,.F$I5W@NdjtqNg 0h%}c޺$Z|{Aw8҉iOpk[%!{g.ήty.i.A]Z9vwߧv{,<4r)ƺfT߬3M5FUE(]UTTXezat05啫_gfs/ztJi?lI;sj,P" ¬SIe2U}\mKaR8wy䂞XwD¬B:ٰ1꟝ҳ#Ot]S] Hƺ 3(ӚIoYk"Z]S up]Eձ^R̔WU5rGh.orQ hwEb9jeJDWZ#ߚޓ:>9V58?JE qqJdd|AֱyN{VCN4W_wtv}$@H\?xUx_tX bS!Y732)YkjT t>Ǒ\ǽ$Y=Ou2.t|]L5!!ky}ER}YZ"{Z+#+DX+k)oBݕ R(dܴ>qi!ͦyqYctjzB/V<=3umYkJ߉ mԘR~v>uIQ6i"y%Y9T &ZpX~;`5^4ȶĈTmq|}VPi?O9>9 UGbEVe85 z[ϾS/tJUg;:}-;0ۥg%}G.ba}~@}i# s\:%Z]W52H)A7FZ=?:StJilq^/< - G/ ԝꆺf=w|`8Yy]*e伽.8*U'kfI}keN.VZ ^ ʶV3E=Em)Cڟ[ k5\+ȶbp~FM@uRDzNd[[_u4]owK[ U-FsntNKY,̩z3w*\1ZT*uG櫡,.ĝقҎuÁ|cJ 1 ?-,^p3~.)LJCґ\Oj)}Q=1pZ# sZfˤuzfR_:?LJYfh:Gշn\s=۫ :vpGeiٻZOuuO^CtsG`q#Pe8Pp9m5FS}l7GJcT}kp޹ScĔze|8]-=Sc9̋C7):nؽn1kTrQ#keefJWj~][8rrqR;M&48d67}lޡ]%x>k57#?,9u{I8lή?WߴE02>gG]9\WΜTj1oq=2I''H%h\WU޶mT,Z}, FmaUaƒu.ux]$$]rHzrA7l*tDeZ#h">kH' goO)[=]HPKU+5=JytTɁCL雝%h V]׃joea%*tU7]Gj*/n4dukk& N-Xu129^6NJQyzfReLZc=ߙɂFA;w$ɫͬttG7 GupڵJY=tNL^=otq^_;{BC3*ÑT߼m#Gߩ{s=;Wlή%}4c#~~@pq#!/u F/ *k5^ڲV(YZepV&r>u:9?[uQrG"YU.V:ZT>k'ds|U1JY}k#g'wz|@NNo *vDe2#GJ3ƂJR hl!.qIٜ_}nj֚vFg.Y9uZsWDctff$ٲX=;ү[Sk qTꖆהúeᜊPD- u>簌or %^ҋc^GNd6s.oˁ%Q<* 5NZstA6*9nk1hQ[YEU},vPex렞Ѣ+锦s>ZMz5lF4(* 
&Muz{5jWv>\w%}vp t'$oSns9;J/nԸl0oLʹFhpnZ/ *=̤ʂ!MȖ] p]W)3뇖I啺E:!uukC2\{ݷEzubxӱ''VezmrG@!X (#+Z1)f?u}zuCvAw*۶Q9Ҿ06XrGV\p `HoۧNNk":uU"j(k_U\>_f&r씮ߤJtU+k8<ǽףgr!I.綷*v#̶::XwnsEHiߗVAϓs9ɑSt'&5Xы3Xc7;F6꾖vU7V-ӛwTrQ'4N-]{ScY\U5rgiΦzilHf& @UGbiBshu\ɡq]W۸AѪ.Z=XV!o:TV땉JҞjW⡰& XWڑ:^@oܵ_SEP2YRVyv߹ܥxzfB3|>vGFTs3v)([#G[$ktS~6mu=Ds?]zœΩG::׷.!̶C::bT9Ew.֪wj\'V;IRC,ZUGb%bf>n6:VcҾHT֘MlkzC'W+״&ZPD4HY]SӠS3:95~N5qҁD\+ L83 ÍDmC;uXlN{zj*8 8UtpD$٩YH*7~sspD+[=vڊ8]_;ŞjH> .gpr9q$tvGG* ?GHӎή{,nsimBm|Bҁ\|;M7r[LFUj? ^ծnk.imWجc؊^,|Ǭ1:15I?95sd8R@1:r*<,ʵhѾ_'z2t㮆6tMc!/c6op=][ӨFr^Sps) Tj*UX6sUz~q竎Ķ|'u9mpߖĠPuEK+{WZ+0niOx.wG*v9nWή$L.66Vނam33zy|hNhٻ07ߪBt3Cr2'(H%kɒ$9Hrlsm۲wukZ:%&<&9Pu(4C4ntx?σUP:[\秈?=e@*mPiku~fTRR*+|eK|eK| 3m;me5̨Z7\8[!+s[ݹir]5Ys*J_Ζyw;Xl0 6uk]]PT֛ix-ۚw񶽇9o=pKS;o{=d[ ׹`MCrhnwdFxllE߻ESͩќAƋ \q`_Fs\;H}@0 XыI4 S}W,R9Ҷ͋cLO\2O 4+Umc0ˋliR .q; 1-510j?H]0eoh@3 ϒrl0#xL3e+xc~dlq*>6Z6 P%6pj"} no2MlhC+_=m\.PAmbGV5qC}3``楱ٺo:vXC+UBv(aCfE ϼOnmnvpp9.mۜ WLOOj 0ޑ>O28tnWh,]w]""""""""""""""R: mh,nJE?3!oQHk1ʌ,i/Rb*ɹncw!fX//ZuDe>4cZi;f`7\jH{v9891­%8Igٍ-E~ ouEKE5M|2͂sc}3G[^e1vƊJ,[zKS{Ufoκ.m<9הmsnz9бi5 ϼO,'XDwW""""""""""""""R6'(pK> pƏ]YE2MnoXC ~+{2Byw2 xCk5FA64+Z{ RjSTmǡn 컲zd[Np1 ,1-,Ӥ2H=k討-xK;5M*pX\QMc000萓4.XC3 CR&wata3cBm10X*ڊѕX͍m<k9blvѹyFg/{zqd.-3LGkz-p0@}e *hrnJ08l+fK,T%j;q'wwu>uHq;P4?gm\Q,u{ydªc/Mv+/uOLCX8.e)ސaɄV{# ϲ. 
Rи@nSvg+(06\f|)gK4'2RiY&g蟚a`j9֓L/21Àp u%zm8,̌0.#`mcSWK&&靝|w?zzX]%-BDDDDDDDDDDDDDDJFa[@MEa|Ǧty̫;]Kڶ9?= c\Z񡋄<^^׾?- Lڪ]#`t+u=0 f%4'r/¹qDj HNLs~|SO5:RJÁKJ2 "쮋]a=ڎMKE5skmv)w.=ˍ ו4f&wKA,jq?\DDDDDDDDDDDDDDf[X9}Op@:W1R͙QNN~!kpl@zqXL%Y#iU,v˗ǴEfGk1T58n7Vq(j?UL`|q~ͷO&H69~').Oq~| S N͒.lpqb|Là9\ɞE]!+ZxL5 kq7޳,Sk cCx-HIlgDc?|dH(VbXrD WW08wmEN@9޲ujx<0KAZf9=9k`kV)a&ohC=L./\?RuMWqҎL.-2ϲ TP༦s4 @QpDz@C6Ƨ8=2N$CӳM؎,ScI|~!<Hߚl+)TUn|瘙9R4Yy""""""""""""""R4JYCzUu.gc؎+k XH%z]e)f9t@]5P1i.LflqyD9rj kYc󳜙c|!0h s mlENz&U62M à:c%fk ݰeמY)(|-pzd#$Нgsx-u4q2Ml7{1AkZӶ0K$)xia5v|uBc|>ۀg Px|>Ѽ2o[r*RΎ8:39FC 5ҶͩNM^\/qO>| lLJ.wfj e?Ri4UT"N1HrL@tc{vRMSFse5:"y}ٴUE 9:@KEgt@pݻ-RNrzd3#L.'_2msfd3#|D4r} yض 7tّ~fqVWqks~˃8+zSoh-z;Wz[o q|Z4C)VZ?tZdwczH6');}U7`Uڶ_gbic1L" k;uKԥ㸝ֹla1(z;KS|bLJY[v khȔcs&K7tYZ*}yJ6ggzu"m ^!Tah 7j`pG;yMH.s+}~+1|GgyypGYHn36I$-*4pʂgoøka'\Z(y u=vGY__vwuN)ls@mEUU!>KômJrzrɥEB^uԇ*aúVʶmsr|8km<;ǞZ6\20O,gLbfِe\*'8PPڂ2Ҷ Em穑>޺â<2pen^䎖BE_qhatq.kmr: colSi1 Jtm ׂHQgWU @R} k9?=񚵖apuwQ[+aoϟ8Eb*I2y^i1M6b;0 !ؘ%ĶBiFgyQfy\/&h[Gۛh\ Z*=^ByOS>??yv]f+ޯq&r-E,8{}4. 
H6'LJKҷankL-쩮gzt?ȾH]A,SsYm%c^`'iS⑨if&Qb3%.yp˿g}+\f+B4x>k?aYw2 [{8-\Q]Lj]sxcabngeDAnN`);Mv875x6j _I.RvhTFxw cSt8HaFfs|9vՆyW 76f?,L=Zk^^S[.e_z%""""""""""""""S8g GJBFS2(iӶ{XL{Ui2X׷ECWw0rqҎS \#p?7M`YGKE53yװ1J.YѶqz&ywFHv O$'81?őnha_C-famPB4'XZ}49Ѳ;.pםsy ,$u.IDDDDDDDDDDDDDDX(T4xk>ko+ٛnm+`mLf0 0i UHI<ݙm98ʼnҌ.f |kyW Gۚs!8LqqfP%j1sƋ^{<|f4SEDDDDDDDDDDDDD@a¼ 3"0vbǴ8XӐwmOu-l\#9rl4*qpmX}伖Uxg'^m.L;\ctemP]#Cs,Αm?{#uxM몑5`u.lǡnTWgWuM^!•Cse5Cusp]#O ir R_喒)  Nϖ-)y O_[wrכv`&էY (&"""""""""""""R Q4ZX\VirIݥ *Mmy&H8=9Jk[azZ+8pUmL-/P5{--[(؎iAa90ގ8,/vsĚa`tmcfBp\ ${`&m.L18_p{z'yB?/\sfxf?p}K#ieO]M|ϑݑX'7.ylkWWNTnEXp-{µ 088T쮮4UCgsP/3\3ē}<7:-M4*vڶyebӓٻŭaliK׏e؎N幍lګ#8'-l蝝bda%ReACvC&x]a&'Zv;&\u5 TTR2ų?50XȚn }?a`bpKc; ^(h{ZJ0h5b7prl?Bpzr^׾ih3 aq{.F8915ךA[eֆRK#TɱVX6Z.i:BD^=v([Cέ98X曽gwׁH4i!Tr:Sˋ؎ϴ VPNI~旓o_ {%in{ک|+O89GQ4ξKDDDDDDDDDDDDDDf[uw (8);ɱo?8ONm&ݵk-NZcaJ'XNrkZFcd]VxLZ3;ZvgS6RvLJd[1Lp?77^eL!jVWy5q8;:c=}DMb!3yE4qǞv6m]Ms < zqfP-/X|־}s5a`?;U6f*0PuE^ 'yL`=gj{uabڑ-m`b2I"cT|8yuq ׷ɡ^g&rkS;@h]m&iNi4 CFH<;#z=RUD6/8=k]Rv01 qH;6af \EA6p%t q%J8gy]W6$m<t|?up\K#]BDDDDDDDDDDDDDDזib9}-Lڶyv/ )R,gHSض]Ծ|E-jU>۸-fZ;Lb_Uj[e_:ç IDW/>^8B ߕwGՏf DDDDDDDDDDDDDDd)̖;\vjGQڶyr9We{S\q`o[AOsEUQuyxK,mۜ[mLtlj.㐶m5|-Խuⷪt|6'"R*Cg/_~|yz'K<],|[Iw.""""""""""""""WQ>Ϣ}׽EtfZJp?6<2Mxy|T+ګ"8DABoѣ/kqd*qI,63%;ەm0`bqyeTUqpL0&#Fmᕡ1:{ S_lᅁ^ƢGn&~SHV "?7/j_pfrm]Ćg4 ͼ06<nocZm|bu,"XvۮAv`gzk*u5 =^<~?B5<^ h͖Hyeb~q'"[ DBNݭ##Sů|HV #,z׷݅S\j)brym!虚@!\sr<'.\ЈYuAϿ:0h s=׌vLۚ;Vy4Sa]NK96sN18O;}=n_׾HݚrZkKvR {6kJό pqfm!B*.5M Bf385?=" T_~xy&׼ͯDcw """"""""""""""yQgk k}o6iZgYH&H;6^Ӣ1Tp-lWLZpzrW&Fs*L.pUcpUM,-pyiۛ5ee_Sٿ@M D/sZ9}=;+<7:@&¸,!6~7/t!fo(a""""""""""""""lc,z;* @M=O }j%m88Uk=ZG/uq|t)3w't '8i+\îRIviR84&W>_9TxUӤ2LcƇ蛝"}oiV )*6>g/t GB"~9c|l+(&"""""""""""""Rb ]&ɵ.XAKE5- \<Ԗ#Y)l9#Nn)gƹPklkaח%IDDDDDDDDDDDDDDf[},z7k㐲$%mS88ɥƒ\24_M6]{1 m6(A-NxOǟwrmlUǦ~nwsyCHn3p UZoo {K i&蟝&a0 ˾H2̂BMc8N1\bj@iaƥm/S%4$!Qt˶2*<"%wڶyr/gmCm/ЪdQ6=5:fh~G.ru7NPiqJ6 '奩R,P1L0dޭ۪"T3#X4LCi楁<{)Ms}CYl dzϝ˵DcΜ3IEDDDDDDDDDDDDDdu;:9q=FN-/2078#yU؎=k 
Yc>5v-U)[ڱm3Sc\`)zaV`M!w!ja^[L9RQى^c)"]p{o'fS~yr͵{t);͙ɱnL0\+mL3\x(j)EpmSQ#3m /+#W NՋ83쩮[Z+(vk5_<ͧ _xᴂl""Je:[f,1dclX0pkuw8BeeiGeڎL?35FʱaPpHnEpfk/jŰmTo%NuF静^׾ǛԱR蛜xC٩vzK\${õXRTH2LF }N3X&^#S uQ`3s T])0TDDDDDDDDDDDDDDvf{W!naϺ0n RHN4 z(J#r;xL-ՙW6)UA+LO`G i bomB0e`zv'""{wp`oaUU!fgrm](&"""""""""""""RHxxcu'n?eϷcZ/趕^?aϛA}. V2MDD0ӋMd]籁54;1̖sĨcqֵۭ49J{U~[,ΖTv -5Fں /ޭ0 }c,ןΌ}1,8'""ho:u:צjp9QDDDDDDDDDDDDDDfk]fA$τ LXs40 z g>v;[-q lU&MGUlnooTSyݦ6xCc\WLyG{*|<ɡQڿk:q?oϦޅl""""""""""""""kcƌFcqžUלp>a 1ȃ%,ODDDDDDDDDDDDDd)a7Z`ǎۀRm34?Zf}ZKX4Wa@ (Ejb~/Af]lb,.Q\}DI نVO3m9[/rhڱIv>6Dʶ静KA6 \`daڶ i?R׾B^ -'S/Ҵ*60sY]_Nf8䙋48B2zhODDd5'9ѼGo؃aAz'x]>il\ʀ]k,֦&8;5Lb Ɗ*k1 Ol$/L 'wYEW~?/0Xթ9""""""""""""""Yl0p;ǏfK6# <>ԛwe{{w0 VaN0ָkMj -L([7d:+Cc<7qvn"""䦎U?vl?6nr.ɶ{M浱ml+RI-֚o_0h 7{ne{µu61_8N\/ J9aad֚&fjLJo2sC)XѢٌ,̑L7,1-nnl'm; ϼ溎5Z Ll/ 3(w9""Me*}-}JDDDDDDDDDDDDDd{ڶah,ʵرF(3SckƳA!24t *xLkSنgya`G_(w9""M.,2D$hf;{:S%-PDDDDDDDDDDDDDdٶa6HmyqfWUׂʶM-^bzq%lyӋ-'V]s@[>a#KDDDDDDDDDDDDDd؎ar-ؿe#023ʼni.OqqbZ5u205uM^`+>bEDDdc-d 5̳gsmFa6+(V٥Kyg'K}ٜ]Y5f¶ EcИk]njN/.16|S4wwuG*"""""""""""""Sl0p0E801F[`9.Iq"""=g!8xzDDDDDDDDDDDDDDf9˲LjqEDDDv\ٚj,tס0%f (a֖:,ZzqXX,iQ"""L/b;Ϊ{<-͵l*籋Nl9nje1:D^im3uM{~F."""""""""""""Sl0[[uYxQ|c+JR6)w%ԒkA}}89ޘ[kuuُ92r]Kp٥\* {\, $2%`2 \z ]H^xoxw^v| k\տ/0L^pwWڎlB.Mռ\0L>CKp ̿미qS;{cX=fʿ^*EdU@GҎ{SsK{˟C%qVKO]ϯSDs=fcfv..lseh$qCξdguu9f>\j5.ְ20{e&q]ʶluYYfٙ 4 Ҏ^R"uuُ92f8 }y؏$N4gg纻:Idۉ&GqךJ@Rh,~7vxo$~"J(den uXorW``)NW\Nvwu&K]G4c-L>+Ya)e<܅bq0[Ē1`]-IdcEc뀻3ql5C/=<<CE&sfa@llgGp89W֊dSr_)s e[qW7ZTpz xxByK\A;p_V}twWbYe: _2-:/x8!s񚥃{"d^w= -lJVe~Vd0 p0@8' d  PSa4fIJ,-12SK,.1t˩tI%"f3{tyn>Bn37z͙K /| rwWX)L׵p}(.G]sODcӸvwu-Sm"Np_7̏5XD2hHe:B=5sy'I /<!uRMYZܓߓ\*c'||y|d^ ={+Yn q@X.Lj)7n2˸ݸ=L<)fƲl BWVXUA}UPdarX&!j+V/&L-2:;ܼqvɅEm9RD\cFkk/06>z;H" .G}wwu>WD śp_|e[J3⧀]7:dYv̙7XW\G$xp'V+=syLD6F4=^yKmy+p_;2__\ٳ"$7@/n繛p%DcszUu) Ty< |;VnL^~a şΧYlf |BS[ƴH!|EKƪ *+ PUA8F륭K[kæ)fbn!p[`tv9f5TD.W>{YZNT^||2/ wWAf4GZHX|/{[pe.? 
[binary data omitted: remainder of a Matplotlib-generated PNG logo image]
seaborn-0.11.2/doc/_static/logo-wide-lightbg.svg
    [binary data omitted: SVG logo image — image/svg+xml, Matplotlib v3.3.1, created 2020-09-07]
seaborn-0.11.2/doc/_static/logo-wide-whitebg.png
    [binary data omitted: PNG logo image — Matplotlib v3.3.1]
:.ʨKGҩ{A3rF3<~ƖZ=gsuVl w 0`fl8l G 6a6#p0`fl8l G 6a6#p0`fl8l G 6a6#p0`fl8l G 6a6#p0`fl8l G 6a6#p0`fl8l G 6a6 ltPI\^TFTVs¿g4*,,̱eyYke=Ok$q9#=P(X4H4X4h4h4g-~A~"M07Ɠ Nk|<ƖlzK-HUUDq5רFc]__k7TS±m`̤?#:?1Ni|b>Je7 T__D\hoPgG:;ѨFP2lNcBk# QMNltygjuv4.CnjokP( F @ё##zf_סG?Isj8:ut4jm֡:$s7<fP7{w=_(mtiXF$֞vmޡ[;}{6u5. <fP0;g ?0 3Z8֞6m֩;mL R(k3}zzz^kpPwmj.B. <fpL63ړOV6p:kp۹=A Ta69,kϞ>=؁cgc5q,\#q?SOke|L&TFTVsRRsc?Kz6~͞n۹K 6P:lsV}úC{Z$RCCq5U__?㪉·բ|Pu7Vco[:trNcIOk|ϱ;nH:ͺ,q6,p!ިH$ѥ=+Y ̇ቍ.M|嗝.ݭ. &g޾arC'uFzm֡;}?55U]$MMj貴eKl y0pm?߰:Z}{oԎڶC5؆ՃM'o_ۯh~ n`m]wz5竡f0p?~d9[qmk;GۣBpYj:9'8G;G=}tp]G^M]{bSz}1|>Ie2Svkӭ)>N33)= A]~YEv(NٵE 8Xkӽ).5*ԹJ\sB޵I㜒S0p};\>/ݭ_Kӭ8cCQdrN=~P)'Ou;:7=_?sT[[uJ a6`Xkoޣ;|LӮ.ݭ/;K۷uu&;1ڻo@wI80 <]}չz^sB6`ـSlv6A}ox]xKw9:ztBI=9yS~ŗgHUULC 8EnNg:551]z.]~Yv->|kd#GvfgzgtϽOܺ]+ .^zln]"#u=OzvxuT/\] u[y}.H#sM*g|yX \G FyumGw(a6G&_*~~Ow{хTWqjYk_*9u5fZ2w|kֺz L->+~~$7=_/P3)N ـ2 Kq~xCEk.u^\7\ǕYk7|Ԧ`Kir9kԖ7F3SzzwsGյ:]:3֪ozB \CM(k1Nާoݣ[ѳUWkc{gE!ZG=/|V=Pe4uUK⛞\~iddUWWZY gVY?d6#ᨬsb"MgӺAY45][ SsёE_)ZmVЛύV ׭joLѡϊ~s+̶=|_u_A׿"y/ ̆眧״wogy_t^ hSWs+o%obT;]W]'#ݸ|czf|DG[Ж%9Κc_')ַkz^kױ0[zhfJ L:n٭7D3g|}/+ĹMmRSaBrh.1ːqŰoX_[EOvlڵB SSO[O xz͗UjnNTYu8z~&5ƪ+2^Z1rX(*0Jt_7<Ǖ|(̷r׷<%Ss3ѤDž&s.o^]Z:uDzNwTOp{u.nڐ o&'=/Thk]:k74p9<ѣ0՞zWwU=VJgU>RԾ]rC5`=T>HLkR Vڊ5I޽?"cfrY޷$6eh1Jy;jBՅM4j28ږhT[Uٌ͆Τ# rWny%.;o~GXa6<+-w/ڬ_{: TV>o>mZj57FOվтmRcѦZȑ##+qd6ldNё5Ca]ӵ "MgqA喌\ɖzԶkQY×$åFqmgXU2֭J$E]y$c&sಣ(Ƶ944T2~^.նF\wu촒ٌ y:kgP\Pߊ!TuqKYc2=Z<-]c.)g*o[g>=Lu~vT8f/_?]}R9s</k\P(X OkpvZ'G5NJ :u^רH lphĈ*/~nAJ&o^1)ͬx w*.rk/8NƗo y\W:l ̔}ΌOқ }kwd~_) Io8zd ۢ&sǽNuRUZm{Jh2.˙XeTiUpPD&w -5G_r}cthj\|O0Lά2DuhAwizN}/[oWZ}wMo+=0LNJ+8^S|nrOʬXj={'F۟<ՎDS׮ G_;rr_5V^Zu^u֟q#F㣃-~KT& 9t`j"ukuhj\f30vvWZIU\H M533R;ҟQ9??d/p&9xNzzӛJ_%wc۵Kw KT,>tt./qՆRa/۷ʵ DWt)8y#y%6bK503B0ofsJ:ypd}ASm8Zu9 a6G?%)i8zɋ/^{tb;|_3SE~U/3"4o:05CSJ:%׶D1IŇ|SsёEz"q➳ s$Y\oc~pt]ܳ[3ӺY193=1:8ӖX\oTWce5UEUHҊ_Jؐf+ĴY Ѫ.8ӧ?]}[YIWkg8fi|N˿~[_mspu3#Ew;s`t|c82Wy[Nc~pxo&`Z)7?Ynޭܸɀ%>1OiĈFS֪.ӶVՔd+bGze jwC5Z c ?ߤW3r.k!̆Ԭ>_ֽ=]/+ݤh$T*#*jOgV4w0of3<[v*Z9) 9}`Ro)GV?LEΓpTWuv/LYk[swyc6d|o&ƫԹm\[uhT:O__qg\zjj*X?xzzӛ 
[[kSo~N $y֪xQ{'6jYsڠlZQ*Q3#KʷF)_!\_V`H^LDV8q[uOym}E$A_z*! ͿsGSm%q}OMo``cљ )q|ˮ^ib[*u[>skU߬m 07FwXj}MU߼!h*cG5LUzj~]3_ןzI DIzѦ G*rr|cuƴ֪fJ?+u zZvg[Jg/J溮^_BPٰǓǿWz?•cÚH-snhQwmÆN=;1fV7,vxz\ )V6Wμ+>ky] fsYО:-銎u7y˷V'4XiSMB5u1MfR%t]ԽG?544^.ܮwJ$06̃G>EML̔^z3ۉ2j&ѾQM[PWNk% /}̔~CamIhKm9"tQZ⺤uSfϔuU=jVI=31gGJ>u]y QёM.Y֪߬5e}ੲ뺲[M9R>'z;.p{+!̆S}}~/|Vty[סS/o|IIVvÂ`_ 09"[8Nq2{[5][Jr|ctWA}o-U^y%E|]֮dQ2]3f;MQ߼j J-]ꪩ+SG/hh]/k_^wa(a6R##Gǟ8TX?h4\p卑#/9LJhL52VE::=Vt oqYkw2EvmS,{i`fK]#G/zV]Ɨ#GlZc9e}_U}$pTVV|^w9Ppʎ-"+٤T*[XXPAȠOE,zg\gzs1WoQsUXñԬRJ/t R]$RRr}kJgVI$:ku A ww_M|ݛtKռ={qԜ&5M7FASm8ZUCr}:UםتچFyG}4MTMݻŰScG՛P~㨣V6Iґ|(Nq^]Z5u{45iMfRIա6$ѷ\`H/ڴ}Q>Բ] qzbtu]ѭptڌ5cC:2=y,(؉Vݵ;=W47)zm׻Kc J98c w~oK ީd+o8TTML̇J sܢECE㪻^]#sZe~2W5T&%G) 0RTcV:gY/VE-%ݑhZ=phFڷOƏfU_rRU˔0zu-׼|_U۷wwowduP)G*~FE}+_oPsIHj{'F4Iɜ挎/"@VNg7Ū)p,RiC}e_ i}#`ų*o|9==~T?Cz|dP9,]Q,XH {p]5⺠VOmzt1Ѿռ~+_U#wP#̆?v}.z ^<Z7^`0sVO8qуGّ @E@QY+^TM5\7j<=W'tt6yJ1(g|7ث|Z&Fut.m}u[>MeRv*먮7<ޖĚxV"x̪`HtjgC?|h&womA2iͬx+W(oKVhQ{gg?+?Sll6?/[E$wuegSuaU_c zCQ&ҩ U6k6$ ^^SהWd7.nZ3Tױɢӭkr}Yq]2GhbKۉ:uE_;j[@.ສV骎n]y6$T*Rm(]٣mޮZ´+?1zܸգɂu-^qYyzzڊg?}_|Yly?*X(DR}g^}չzJHh*Q_rR}IeyypTBr`382Zfj 1R]8@.kމQv@W xZjWm8=M& kk]uFVeve[4I|דVwRfZ3pX7v$p=oԲV~HPX6JrH } #b3~^St'n45+wɠb^V Ғ_oc~Ȉ>*S0200w/zk_s~arԸVT& %"1]ڶI![FwC;1٤rƗ#)jSU"ZieHLP+챕Rܪ/+oIc9卯pTa/ Zٌ&3wс1mO4 {]޾E??f@Qzj]ProF%lN5H{kBޯD#!]-}O>F7SƱ2łg?&'WQH$?zuՕSe7F{Ƈwbd͵a/k7mSh!tt"9?Էjȑ^] [:9ΊA?' 
~- t3"szzfkI3^] - ^E;YkKN)Y!~n9;1ܒ.` H XTzhg'6gV h45X w+ErR!r57V؛}kCk?=7w=?Um[a#zXtfCI|h|FT}- }O_u2Od=pH/zXAd3c )%]գ|J6|63纊ºCth.*dLWr:kU U$ edП-oNhI{soۼ]!mG&JF!-uU -yL.#c^@j$GZ1 뭬77FӻjKt̨Y zꁹMWY(xQ_Յl[ Xa6=|K{N_Q]]:UV9z ۢLZlFuq?_w\3ȶt6e:VJ"Y ՆkX(uuNUÓ?#Y\vq%64+uGQ iWCqk;&, O8?zOk"R;6UGu6$p\1c@ 蹮ڪktiף#~n'qC-Ӯ[{`RO>?zǫu _ Xa6.?N/To~ \ϗ\;1 [:ʚ|7Ԭ'u3֪!Zd瞨1Z:tz6\W[j ԷeG'\+뛞Y e01;+F ]#kh:O jXjVcY=9:sZO A\j|c.I\W]5uԘFSzp -G?kۿo}/|AzˮX 8ٙ,†ӟ>[纮[/K_r:U>JY~ʑ#ӓ%ױwbT' 508I,*̦p\%"1YT:Ӆc)x(h{^KeK]HizhQ"'^)1:09' )olRnQgN+RJŰֺFu6Hw:ؖ zzzuC~Z&'zk~f9'"̆5;߽}Q=o|MHds {Ն#V Mi#}k0֪/9 c 1271ˎ"jV5rk]t¨DzhƗtM]hT.n)#MZn֦8-`Dfu0$ߘu]۠ǎ5`v[Q:|ctxz RCI=pOt)zڑhᩉGVCjq)uU׼p=k:usG/{l>oI߿p'z[^!乸Ta6*Cɢ56>Fm\Hƨ/9gƇE I[jD}4B'c%ב5%wx[j:V$JsoIgFil{Nvh .k۬Zs$.jT,Z1V :A=-TD$xŻt-v{|t3f^=v54qK=k>Y5j2މ X+GֺFU+]Ʌl_~Mz{EcwԬGp!*鹗FA9ݟ,:ȶY^ [$)A=>:8߹ *?յDz/֪ىr_wAU=E˪!]sR0)o|=6:j݇ƕ-`$泑Ώ[}AVł.oߢU-Ǎ]1ZM\O꛵&ںpTWtlصѣEOȀg>pQͱ:UPDn2enݧ;l봒sW9&3|_{K>񗿭'wRd\VC ˚;zCE۽kߤ2HszfNR5Mu+Κ8zuWZ ֔+̈E* MUt^WNnڦ%k?9ӕ纪YxnlUty|&]e|8rgs]GtMVTŏ<hwC.oR]꼦v]־Y Ѫ~_ ܦ6]SRycutngn-Y̴\WmVg39:M=u evnݧd6ڬ#5I]KssBҮ]E{C?gMO>8֖:3w}R7һֱXy뱑AXsmGu.lP` Z~x lmc!cz'p5HMݻ3z"rTc{Ī-ѤX0bP'M뇇y]ҩWl#Ǒ2tJYWsU*X[t(Z+9rdd9B@k_*Jf2B_/L61G@HAo>$8{EյX7F#s3ztdN'tQK"`EB1oQڀ]e}o*wwEnG?k++!̆LL$:thQnbT ~!1wfs5׆nYXD%`@[Gu.j<)oxJy[ZX.mTpZ1 Ɨ1a*ueYt>\;ܲ&sNITyOCUZYU[so9QrR $"1mkT[u\꡶bt}c9 Lk2R:8 TjSMB@P4zT9޷}4a]Ëy_WmҪ?_W@]NI޾^7@*zGxh=5:#3S2'w⡰53^ldq\顩jV(o@^p]r/ǑNh~,ӮZv,zJȤV]A6Jv=Ǖ#)עcY?}Vr$ڪj-ѨU4\QԊk&sW@@Wv(̲v~fd|eMj%"1YYiu5!Ըz*ZJOW)y{;љ ѱi;I}G `7^Wux9~IYR%] \͏TfJr\W𚁫t>[UکtUg|c4NiFSʟ=*RguzB:Yk5]W\[*ZwzB&4]<ֺuuT[pFSmUE)l.p\kBeC9CG403];2**'OL•_YԞ&G~C 5Tๆ046>?Hh{<;*] ױMSoߚVC[1RVcofpKM(z\&^=u\ICJo^ssJ{+XMa~xC_~Y_xIoҡ Pٞ&&z7Z4}]ruly1zd_״W▮5JZYk5HrJ9s%"1U T&:GR{u.l\v o&2)P~h.kRk_ Li*S8XmhfZ?9WmRs,~}c4G}~OvmS,\w2VV+O|NL=?M%ױ2saC' ?g+[o:y-:a0ofzfbxpY&] -ˆJal.#V9"6I4seǣkft{~?M B>ZkZ\V?p8-;O 趾}Jf3+\]]8;{ζ1;>=ō ^*g\60[ܣ/LIl٩X0gz&q9y֪uTV$Ě7L{G.^*<_y*]Dm˖V}#ڪu lG9jzzVx?D!}oйtceYvt6T>p M.[7FOվɵǪք"G׭Ѽ-U9%n@onݫ\kئ͵:WMw[]v5WDzN/6oW<)g+c{fWoQsɣ(}ctpjLu~KU\>1g&2~t]Ԯڲ>Z CEݙhҎ 
W>AO>t6}~]55sP29?OdEIR乮ښhTgNd_ It6ʔ<`qyד8kOd'Ɔd+zёɒHa[FǰE{'F7~<_i锖N{Յ%p]]ҹf-o-d1z''zQSJ"G:zˏc==[7* Aѻ>drn+}}#O*xOUUD_]֯SX%M.UTIf3JHrrúeKo*3ϷV~g;ϭx݀]!8nڦH3Q"-#ɂUDYτ븪tuVՃ=Kme7]gެ~׋ =!}#PcXkW5}ϓY-sg:Vvkuxj锲̰j,5[3ٌ Uuvd o)Xuu豱q]Wa/+:uas׆;߽hHΈlZkȑ8+f\G+`JR ,tsZk8ΊAZde_~^:(0>r45+yjVueGw)Vu+9q[tUdԩXB7mh+1jsMB&GK).hꬩSgNsRfde RM8"kmA!E9S\5_b]5 u[1\WAו*uZ+#{RڽYR}};>~Ɗxv#_}>[ ^Age*]R\9 GH0ƗoOMh"=B@'TVhTN^Q#AW7FlZόhhv IU5Y߬P^4T]8Lk6F\2JJDHRM*(k/VVVәOi6+)ikQA+C]8wxvF&G5zB7D$muMjK*C^u-XVmKMb]o縒3|deDžU|E7#^a=49'\ǑG\>fț/GirHL+)xڢ} z{ELa-/k=Gw?_s^Ǫ##MF)ٍDž_d|I׍JXGFԟ<)6j<=Sc tAsر@8kQȼ1ף#JojWWM׳-Ѩ }ݭuuPsGu g--Ѵj7cKNjhvZVUx"BB?lߍorT:]QT.crJf3ˮHtPbQ8(]Q]kZt]mM4}qtͫ G *;UC Pzp M6&$UJ|c[Ǝ/9BDHT;jUs^~*S`J$uum_fC}Iqy}^:WV:cRnݷlMZӝ/yhBXu%|c4M2A沺<u5DJzeUP{hpfj ۉC^8jQ}$VuUjU7FuU(TF`()VxZcxlRCI=4ܯ~F|N.":QoIA$'u`k-Zm}V -5鶾}gk .jT"-h"qtEGwEZ@pg7FO [##̤JyMezltP:=Z},V NA6i>xa?ԫ)}y^aZʺ.oX{J\s*Vw9YuMSzc!qO(PBf[}Z2I:85GFaXES-jU8{h6{7{K.ީwU}>}˺.m1O>]oD/u|ZIN*]ĘЁ:YI;nm84Z{xYMfYuJ:g9M%?4jhf0R?LJ:{e1@Pg5芎G>\W:/J!UVlFӫtRʤ5+V̘}ٲ~^#ق]4Im8 .m۬tnU{uͲqx( uc.CUGujMU߬wZ 7mW,Z67Z+rFfӊԬ-.Ph&&I׽B7^RTFz400VNMimb"w)xϫ5W\UUV'A\WL|⡰薻9.:βA1zf|XsEtZ{Z՟R+ו42UK{pj\j !"q9'Qs3U bjUZ{AVʑڑhR$#Cx^]fj"=kHTF|kOϭ=+o|)х-E=U}42V/q U +u-ѤMMj.xjUsyYTA:">+遣G:c<=\(k+_qǓ^{EDb.yl"lN}g48Tx/֯ujycbuZXP:UHLόkzTAզv7s7V'ǔfFU<QO]\9.:*9TꟙRgn>ܦ>/C5(UeCB%WSXHu|h+oOjdnVٴr$Մ"jUik]:)PO1&_ z &SYI[V9rvdIG93|kpoZjIPжXKs0oN8W?9z,T[02Et\ɞamM}$7ib" SP"5=KXkW<]z[_Yr/QfrY֪"Qɮl+u,oN|몽VmU5Jf3:TwJDj=Z̤g|XGgYgZjI tQ+KNFASGufs9.՝[:*+NmK-=tIJiMe?9M5 ԮM5uTPG 'q!bG9V9eZ%ƽrG23ߚ`ZoCkj"Zq48;imy38:\F@jh-QkUMgԬɞ`S*=9zTFC;1`H4w(عqxAr98q$Q]$x(,++-tsG1:05'FV=ih6m/h4e&өcs]H4) 둑~La#GUz^&< ׉ NxV`VZG+LNOh45t(ydeuvcXg7ZrPf}$fJF‘2ȁ=IMNj2՝= +) cU}:05v\ml.xjV !cZ  3^fXs]\Ow>/0vFuxz\U5eƖ!hZ۪ktÖZנ*.%]}"@]5il)c7j+)cgtt.:+>Խk\O449q\udE=62:k՗,XjwCnlU2xjNi?/Wj%"ђjZ)VH t>_X ڝўaMg%9걑^pl5UkuZ_\O4idnFi9V}j\8>TZ'=0tc9צܗW4ioͱ])ZBz)U2jjQo}itlzX}#_[8tf;%sz?ºTWGgz\?9Zڜ50;bD4hA] u5+D3>RԞ-ىB'\ՆkfoR[u"<˱V #颖Ns7Q3 ,B^@!/P{t=]ijBtTҶQ[U%<V3̚B<56,o&szhg[c#fVXuQ{uM۫k~+\eߨhA2z?drg&lg(? 
x5ڲu+;YMe ՟Z19.mߢm;Mm(;VCkwZ̴…9VSqg\W.i۴fu=m"CS5:4=^tq9&GO&\Q M[uTת.U}$u qN]YT/+\oVI.=kp&Y1WԱy9ctxzB?8H-zڞ(||扶'WJ5Ⱥ}_toi>GG~A_s3 a3?˷C{ ^7Lu+ZbPk+ڷe;\⺪G;ꛋeǨz'+$iSMp]5a6T_НyÖjUU1V#s3stnVz s\GtAs[U`{p=uimPO]*q4-Kp"7ػSzbt"h ":C.JOy_cY3ꛞL6#cI!*\cUr[zY3O}{]ݯ|΂׿ŗ/|+Z]#ڏI3^T>t>'qT sBt26u-bejYnknHg5V%QueeKD幮䷚lnX)Uj[I5?dvg*Ҏ&uV4VJsU2[k5+x適qmK4)ʺ8mnݫߏH :0ofs==6e/- ڞh8rGZmדcGzr=5ee:ppPֽWPVxֹ20' ^޹=7t+ZHMꂻmm/[8:VuG%kw+Tޘy}Zk-h|k.b*m1,V)$9evZd VZuZ]iW:|0jsmdURJ#G)jމVV'2q乎^i~HkjB]ѭpoɱU?z~&Մ":UqMq7Lz:t\tsz.ĪFF&~N|a!֖]`pc3jG`HƅUlbRmPcZ }'uבON7FBAb-^7FY?1*olV9ϭ8q乮mj%M:}YrܢlV8eŨ\phg+:\UtuWxšX\Wvt녛*ys;xx R|N?;Ԭ ! K . GSpʺݿDAsy_>- pڡ3}}_DaH$ﵪ+oe%8D$퉦U)]OWoc26uՄ#Ra WU(zzAV=p1URT7F??w8zDʎnUC#qu8\XǷBa5G:09\M%GXՆ#kY(o;jtT*(wr n`jBY[z7b=:2Y;l3kڥ2|cg|D&F9hjXX`.o1WCM޽s6z|d#8^q]P\8T(kl{Z2-KodLJ5ʷFASC$DG>łGs˿x:WU:q$Qm8pdYV0#iG}I\Pgv#ο/儮|c  & n'0e+"˛я}ӓOi&V!/X\ZsubeHjM*'t*V J2Yzbfgk٦ګk=ZYyZYk5̔|cZxX􄲾v;ZLm}O궖}cy+Oٍ:M9f8Ru0hjaYӀazz|x ۢ=R[XIqp?YReF 8Ttmi}ј6-m1HVCJI+G3S]/5<7֪-oOީpd&U )yˎt-VKU\n;Tb0лK|Bs;WNW}m2PO'?׿T[[:V|cԗ=5<7|N?\V'e&o2)}-d;L.;%]o=8`HWtt B-'຺[Uq]Q4X(H`>TT&\y\hmdzNw9͑ڻOY_so;Ztoq1%c;Z)?jL_тF!+uGSuW옕A;NV`dm\E,‘t5V?{ X:2o$'Ց4+ Luc%ȷF&F//1Y;? 
yg}^yAM+v4*Ƞu)޷_?:r@98tjp=mO4\#k^p~ŗ6ѿʄl|{;2Rڞ67^m1:45}*VbZI_rR3SU-zmwe˶7Pwmb%*tmuFoV]hȀ3^[׼}M4IXS}1HrRDzNsKA4S_HH_R8\Hӝ#+㣃ʛ卯GGG*iV9>+൜s]s?kuQrGg7"Y.USXhSc,)-]5(cnݧܣSckڌ->|vpjlPP#*9VΔ7T"^@sw[IZU)㡰jBS.uVDEzHLm*sa]8+;W YkuUˬ}⹮kj׶W6w3PBaA]HW':.N=l,c5.ݷB׹eQ2݅ ut6)Q_QxE卑F3zx_c#oMI!\Q]8 [: S ʎ95:kK#jWCsaZֻO3S~^OU#Y:/8lWqKUJf3eL6[ҾjW}ՃJ7th"=nTo)k(ju s4?Zw#hࣕ]uLЬ{vkwC⡰"1ԮVGnpYW(a(<6wuQ7\nuhYI?9+[44;]`mmӓwZI LAe'NO*ZTeTUGu"^P h6|hɑ^] [:.\Wt*g| |U B䍯Ǐ:Ը5*^vbx\Q,Pz*JG*yJ9s]k~pِ]uMmeAAݑچauU ][5Ns#PD-Uqmk븫>yxuINU:0.H5AsSz?nb=>}Ȉ>7=~ mǟ8~΂ֶw~ֹqXȹ)d:j5lZn1KN沺]ѣUzLrTlf޻Zk8vO\WmC:4=f>6+yf\UoMVMꢞ֪y+oq^O?&W+e5+EZʎ,֝*U!?ÁAhYTI5Fi9媊CQUddMy?R9Ed%hH) DA5B!m]]v79ܐUz4EUz݊8O+J)$ :l΢c[뚡nE#Jn'+i$&wuYCoD(,zkIFee0KQJk (#813V0Elp5rbш V;dU))0WK;wç>y/^|P>|9n0[ ??g;Pڦ&~+/rE7e ۵F#AhE܍en¢!VA4ސ?%FrDA(kp$\0hSH$A:Nu:YPFpV\Gf?gE oTh20]77s!Q@ޱ[bu]:iᩑm=z9AWLD^Ѯ7`;T,TVX8g}oƿ8v >xྭغs #"""""""""""""cmRƋ~aNjo ۜ\ U;]:.G]@F40Zыm_#Qw]Iry; ) od89hlCɺ6YQ @ 7C8DNQEtԛ,pMP@Q&w#fcn`;Iݎ:GS% ڬvsA#E_G(m4p;_2~u\ hoEA@ɂzuF35AE(m-xw=n62WUm~[ꚠ*q~lgAtཱcw׷@@WUUȪzul2h/Po<?P8\""""""""""""""Zz -uLNZ'Î]\uǡpdŒDklN\MA.aD&R bǙ sjX[pF(杭)+ OX̆NƶBUU(HA/MkQ0H8iw͎-"LcpoK'M^y tnP(w$ Ђ]W<= U҉`ȏD.EU:n0ή]sF 6mnx,:=zu .wlڸf *#"""""""""""""jamȲo~빢Nj?<5&+]!Y-QBÍd.pq$AN4]o0;VD#*F/:&@Z{Z*ljJ'&UUU;nlI;ڏ醾@uld|$Q,;Ehtzlk¶f$HPTI ͍R*h7[lyɊ3c-z) ~x1| ZQih UUo$i'Qu%kG?Q|~l߶E6:IBy[D[ꚰz)Ү7C2%t;[ʮѡ7b`D\n MQUT0J#K5T컱C%M fXRI#JD-;9G)SN ]+McPn*c{>}E]䊈ؙm q|ٍ~\Q~Fflc؜Dkl.t]G1#ICVhE n=1zkЈ"Z|S%4q_ڪ$AN/_p]ͅkEX.H8uκ>"c(@8*{d zJ9BQD xU@QU$3Y$3Y$gW_2 YQ +\?0;Uūo${ZIQQojg8o_kdF"dRns<,կ<Ǯ`rp~5&@@ņngl:CP2;͙]YS1?C*{j`/H& UUahV@s3'S'W_}K?,b.*,=F=FlFz8L؍5]"9EFWֱ!ڬvTutfǍ~7%ᅥ?-AeDDDDDDDDDDDDDDT)٥ˣxcE'vcֵ\Qa(X4PpoGC+T,9Q פ**4^;zK1 JΏȔf]( "MVgC{@DȪRthүD5(2JuH9Ed,6젙v:D,|S7Eь] 0hW:/77w8BV.odr҈zVa7Pg1bFՌz ڕRDFK6'I!ªWmօO=/|W_??~6nh_ʈ+eH|+jfWTI@ZYp( Mf|ۙ(~F z$Q @ڥ,*K2Rh{ $͇ʡ4$be+D ME^R M/Η㝱~<V"mlӑ(8lp-G$^8b b w.Qo1] d֦ T9b"eMV(:s*TUFᗿ{,"Duz?$k"""""""""""""aEG7QZM\Qi4{[:ͤ"J@V$-贻u8 ( +FZQemysZIPVj͎$#NGQFUQo%N܊UA99E!<ڹ̳-\S(&CQL"F'V}hTTNV:lh[_]*scNQSd fPT6F3YM>g k]p8~ﵢnЎGs+*$p 
aSe2nP0b$DF~ܤ`B+LEq)k\`ASY)AVMYEPf-Mß߹ZYH8NΖL F0`2/bRII0ps[Lhq2psX֘$WGafoEg<nZȈf`pWS;:Cջx>]xգ|e|5֨¦CQU,jq<{cZׄN6Q? Ov>hh4~5j'"""""""""""""bm\2W_?VO~b7z[S>~<h}o( UUeIĐUdE f+t0XQΆID3GkYkwdDS~o`M Xu*" xųȊIZߓl#AC E!Y 1&n':t0*X (Y_VmU @JΕty46=mxx׾1<=ذj'"""""""""""""adY7\Q͌/%j) 2^tmNV|\HL:OĪw+D}A/:y:=z%?$Ać9x9TUPTTT T$ nV*:iDnp qLe_Dew,tȜ`BǏ!_(Gb2d(4,s:Ǣ%3+!詗4XpNyK9BZn_cط,"#UU|97pلU U[oDoxQkjZ䊖RbmN"LrgODDDDDDDDDDDDDD0[e2Y_(j x쑻eHOЇ#. h7Ao;hӸ V~Im9E%t ۵C>h%  lCɊC,AN5p p~sx&0VZNQ0`,V4YlЈ"VGџ_>VG!G@يwKr@0Q|YYA'~O .t7UA[_ے Cp3hlj^@QU |{"Ž v\|x;tK8╈ c^xpu k5dE`8PT9E\JVqR TA`D (A TUE .6'(849nRee]vw @#` U*b<霜Q(a[}3QG:IŽ;8iDnG#IpyڇKS^ L%S8>2#n65ccs=&C `GC ׷`(L<@E8t;롗4󷯲4t!"IingRQG3ڟmQO/|ZUa*Ǔ)j;7?BQԵUE2xT..99EA<4<؂nMohȜ?O7L(825wf<{cxxM i;Fx+IV;? O"ilr7͏Ո"9&e:h2NUmp8:>؎2C~R )ȪIa2ewA+in9.R(D݉H:p: EUa2JdN"871LTX:8><A- ֈ6geeU]g@1Z/7w]*ŏ_7Yz8 Uhb҇W^=R/*F*X ۜyv4@Si f i9Wv|%੨kN ft][+ vtTtRiFziq~dUx赒, z5(bB#"*!TBwLei8M q-J i2b[k#5f)y/-lF&ufNGo<]p˯> ZZ܋Z0[7 JuwT|>YU~"Ǝ[ n(Pq?Se3}>'"5P؏CoD]8˜@&6**h+]YUHTYQTX$B\9 @+,^زBML #DQ0ľaF|5X.:Ip2Cm]7TXɳx}{I|_~}q """"""""""""""f@_}LQkC*:% //z%h2*>w>(b3h 0x76>D[ù Brb0C 0h贻2Ԉe9e'd¡R-LNƅINMbZaƱ ^4cKK# w $QD݅N tƢ!䊼費 #H~ }>E0[WZes'ݽ*,g4e!J[PсuMh:Eɲ>Iԙn -c䪪U'g&0yD, F;a Z6L#Eװ,biKsbQUC NMc4h WbSs=v7@Hވ R׌X#GjE;bY ٌ͛;pHkow*""""""""""""""Zle:u'N=y?L()e,4ۺa(GJrmь 8 W(U1Tn*׽?>7e|ve0<$"jjǡ:5Q߲h݌$QMo()\WUg^-OΌO#!"++8;>3ބ]ͨEt؜9I0c,BN HmpoK'm\5 ?ыgsǺ%na2g#n][s+ q$ Nj/DHnGzu&☌UdF. ZII ju`USd\mN#S#x|mN|(ޖN<n i: & (awB,0SE݀Xd @#h4[`@UA9&\:. 
,@D0#8?1tcD F0}ڈ;;Ze{Vpᆬ(Kr~8vR/AlDDDDDDDDDDDDDD0[~wZчwa檟߬couN\ %r Q!+ U$0+f4Sd0B$Rr"Xtz &u[MnCxPU0rRx5~9 ]Mf:-UL6prfmnG]+Jes8;># Gk]m)++85:SShYpgG v7=n9tm޷O\ӟ0Q0V޾qX$h80 ]ǎX #0W"*L$W͋܀;;[vֺۏ}g+Б>c}OUFDDDDDDDDDDDDDDsf+ϊwѵhuH vq N ,:U'0rM8‰qNvԛ,7-+ < 7g2/qE$PT6'z#6*[)rX4O"p:@Fԙ͆L:GEnD3kYGVNLlgƦp|d3X Z 383>z wvbg{3LjɅ'ߍ_.lD5%>4-gmZZu/~CxÐ8q/ꗨ2""""""""""""""ڵO }}]hhXqk]$hD hQAkL N·4.< `6wtjU;*tm4%\HTUENUp`b|ӷ ]KQUxwYEoEtwiCo\p(h: hA-(8;>z8ݣ812 QeeƦ;SS.ˆ'~hWuOuؙHoo}:A_T\3:o]Kp?=-NTTG8d,V(dNFWZ,Th90^#J BQU,9ͤol(9hdAZ!J NBQUD nN Z.x'S864Kzn"*D(_/aךfyR/|o=Y0ۧ;KS1VWp,'iDntnP؏  r!BX#($YddQE.n u81=xk-ll`ZqY@_0zpJ됴cD qzl %DT=*?fpgmXvi*fW^=a*""""""""""""""bϽp"Fޱ{ږe ;[IPlsձsns!Ȉ*a4$AUռ!+YQ0 `0G<^ҠB @!Hhw6#J'!* :m.ԙE5wj zqGc뒏-V0đqD*W;^$^;߇w.bg{3tm1> 6oqTut|;.I]DDDDDDDDDDDDDDl)<⡢z B YU bU)* ʲ\jfe[5Iԙn-:&Ko,-p%d,A+~5(bWh6<1OL/T3($a;2 LAD+_:'8=]ﻅ*f_lj+{ܥ$XkwŇYPa0¤V<UKW)H&]1L @22@XtzZP% DA()i G*+>ň?d%۟8?Iڜ6ܷn 64T(Tt>""""""""""""""ʏa Lx;*:El:8$5o  5A#J[qlz: ]:H  T{ۍ3&؆B~ H"w36kUvTL-%h͖89:Ãcē~>"Zƃy8L^ێ;;Z`Жw桏 x1 xCE{B,`sK BN_e҈"]X/m滇I&f[u~SC6UU! 
8x&qtjg<'P* lzCIuXu2z&+ E-llPm$ě#VDtڜUBR8o+zd# %foc~zDy1O=qoQkJDDDDDDDDDDDDDDab${:N3˦z-IՀa\' ڬv<F{h7IqWS;Gc$oŜ`x4o2hblW]Rg];)D.}cH˹;h@TYSE(^^%BQ<}̓8?TϛT霌CcoϏx0\?v7Etw{sI%"""""""""""""Z43z o=TݳGvf+=hD-;6D2) Hd3UZQBɊvn$"mD_Ћ+oY34NSV^pf aߺ6oK_:CUUBnr7:띄Co`pMxPf*-+*zg880!_jKQU\¤k\v݁ MuEf-&\O:-{UdgƧpXR 1z,f#]4C+'+f_=O?yu!c""""""""""""""np0FF=m܉֪ iJ?LX1)+ A@VɤUdhE V:IDo\jd @'Jȩ $A(ܺ"c((pV~nqF#A ׅ  X稃$U&=fQQ#Xpàgo `k0^48#.oL.bǓx}y{]tb\8w\0[V\"""""""""""""Uarq]ٞԽU; vX4m+(Pb p8dIN u]ܮzmpJ+(YZA?tkN9eQdhDf]BViD1o.I#I/xyC~lp5TFldE4ƣ!ȷBDA@ŎF$MEA6,C856\T"ZHd婋OW0/ra6"""""""""""""E058:݌Z󊂀f f⑂ ;$82ດ!?v5d.V Aa(6' " fN@0 c<FFAFN ni@\(*"Uŭs7c{} " e47P+~ q("?HDb xXWZpmnF(hjط_ v9:""""""""""""""* lxmyѻVDw5gЂ,Z=o[{QՔS 8."(6/Fewc$,^@PT3UnB6 äѢY56MoV)T-}i07BlNѩF1rqlhlou_܇]"ѪVFUUѢ>=R$ЊG:֣N (Ivn% E1*+ |fRY_{IawA[AnRG*ӾҔjv'"={F#cQpp$^T7""""""""""""""*lz}h֡*r<&+*TuR0U~V!ėެ _poZe62 T!̖e7ώX0\DDa_h>SNDDDDDDDDDDDDDD-H"ơ ZیkYQ$Qr&]74I 5)UrTLz8 &hE sU2 UUZl"StkzZFȀFWzT38:4Cg "Z%km3v6ahx:>]@"ɤvDDDDDDDDDDDDDDΪ8xtËԕMV \GoЋMДnC}:Flp5d$mWMFΑnE8F|c=^F"&h2۰UPQ(NM#"ZMΎO[{>.|{' x䣅ǒQ~>V̈QAGvVܲ(Ѽ<ԎF`MQdGFH% 825&w7a0yҘ̥Л*:^5ڛ>."$Q&w60 #J (fvDA0\)|AE %"d6?65/v ޷O2FDDDDDDDDDDDDDT:DpLu;wQsˊ3lsT8>=Ze0-IfvTELǣ829- @Rʡn8\@2x6 EUh4[`>@f7`f:O悈V-#XQ!'=800PqODD4۝-_ށۻp`}NG evDDDDDDDDDDDDDDʪ(Jb K:Fp3v,FQU*)6Ǔ!͠bDJيZ7eU?H8rQjYV@Ísީ/(e椳91""ZL`.|wv )w;ʃ.hU?rNZ޿)2Md3&P՛Cx`(G8]~/WAjhdfdEA,#Wp%)̆Ƣ!=ڇH*N Xl0q\B^ЇqcL-۠]UhZa1& ۳{,TTH|ȩM3 AUU8V.=*( ,@F:w c Tu}:w6/8ƃa9[q<""*͙鼗[,Fٽ>}V,"""""""""""""UiՆ.GUsG)nP* #*C-6du|ɂ-uM,bѩQ䔛CN"ЉZӍׁ$XkwޖE ) }q~]<þ ":\>\}oJDDDDDDDDDDDDDD ԺZ9pB5VwߵΖQNU 0TQU˦J%pwMc(/[ r I!TU*UIͤɔ7~?Fw#ڑS }Gs ѢD@z-$#HVuo"" Xp=wobD, | ,hXaC/\cuK碜_f pYw;n)"m5XtzW[hDmVͶa7ẐW>9EeMA ME}#AU䲎S) O&E?-4ҽ[tj5!MБ 3oxg \Źz4g]YZzƛ>. 
[binary PNG image data omitted]
seaborn-0.11.2/doc/_static/style.css000066400000000000000000000055761410631356500173020ustar00rootroot00000000000000body { color: #444444 !important; }
h1 { font-size: 40px !important; }
h2 { font-size: 32px !important; }
h3 { font-size: 24px !important; }
h4 { font-size: 18px !important; }
h5 { font-size: 14px !important; }
h6 { font-size: 10px !important; }
footer a { color: #4c72b0 !important; }
a.reference { color: #4c72b0 !important; }
blockquote p { font-size: 14px !important; }
blockquote { padding-top: 4px !important; padding-bottom: 4px !important; margin: 0 0 0px !important; }
pre { margin-top: 11.5px !important; background-color: #f6f6f9 !important; }
code { color: #49759c !important; background-color: transparent !important; }
code.descclassname { padding-right: 0px !important; }
code.descname { padding-left: 0px !important; }
dt:target, span.highlighted { background-color: #ffffff !important; }
ul { padding-left: 20px !important; }
ul.dropdown-menu { padding-left: 0px !important; }
.alert-info { background-color: #bbd2ea !important; border-color: #bbd2ea !important; color: #2c3e50 !important; }
.alert-warning { background-color: #e09572 !important; border-color: #e09572 !important; color: #222222 !important; }
img { margin-bottom: 10px !important; }

/* From https://github.com/twbs/bootstrap/issues/1768 */
*[id]:before {
    display: block;
    content: " ";
    margin-top: -60px;
    height: 60px;
    visibility: hidden;
}

.dataframe table {
    /*Uncomment to center tables horizontally*/
    /* margin-left: auto; */
    /* margin-right: auto; */
    border: none;
    border-collapse: collapse;
    border-spacing: 0;
    font-size: 12px;
    table-layout: fixed;
}

.dataframe thead {
    border-bottom: 1px solid;
    vertical-align: bottom;
}

.dataframe tr, th, td {
    text-align: left;
    vertical-align: middle;
    padding: 0.5em 0.5em;
    line-height: normal;
    white-space: normal;
    max-width: none;
    border: none;
}

.dataframe th { font-weight: bold; }

table { margin-bottom: 20px; }

tbody tr:nth-child(odd) { background: #f5f5f5; }

tbody tr:hover { background: rgba(66, 165, 245, 0.2); }

.label, .badge {
    display: inline-block;
    padding: 2px 4px;
    font-size: 11.844px;
    /* font-weight: bold; */
    line-height: 13px;
    color: #ffffff;
    vertical-align: baseline;
    white-space: nowrap;
    /* text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); */
    background-color: #999999;
}

.badge {
    padding-left: 9px;
    padding-right: 9px;
    -webkit-border-radius: 9px;
    -moz-border-radius: 9px;
    border-radius: 9px;
    opacity: 70%;
}

.badge-api { background-color: #c44e52; }
.badge-defaults { background-color: #dd8452; }
.badge-docs { background-color: #8172b3; }
.badge-feature { background-color: #55a868; }
.badge-enhancement { background-color: #4c72b0; }
.badge-fix { background-color: #ccb974; }

.navbar-brand {
    padding-top: 16px;
    padding-bottom: 16px;
}seaborn-0.11.2/doc/_templates/000077500000000000000000000000001410631356500161325ustar00rootroot00000000000000seaborn-0.11.2/doc/_templates/autosummary/000077500000000000000000000000001410631356500205205ustar00rootroot00000000000000seaborn-0.11.2/doc/_templates/autosummary/base.rst000066400000000000000000000002471410631356500221670ustar00rootroot00000000000000.. raw:: html

{{ fullname | escape | underline}}

.. currentmodule:: {{ module }}

.. auto{{ objtype }}:: {{ objname }}
seaborn-0.11.2/doc/_templates/autosummary/class.rst000066400000000000000000000011421410631356500223550ustar00rootroot00000000000000.. raw:: html
{{ fullname | escape | underline}}

.. currentmodule:: {{ module }}

.. autoclass:: {{ objname }}

   {% block methods %}
   .. automethod:: __init__

   {% if methods %}
   .. rubric:: Methods

   .. autosummary::
      :toctree: ./

   {% for item in methods %}
      ~{{ name }}.{{ item }}
   {%- endfor %}
   {% endif %}
   {% endblock %}

   {% block attributes %}
   {% if attributes %}
   .. rubric:: Attributes

   .. autosummary::
   {% for item in attributes %}
      ~{{ name }}.{{ item }}
   {%- endfor %}
   {% endif %}
   {% endblock %}
seaborn-0.11.2/doc/_templates/layout.html000066400000000000000000000014751410631356500203430ustar00rootroot00000000000000{% extends "!layout.html" %}

{%- block footer %}

Back to top {% if theme_source_link_position == "footer" %}
{% include "sourcelink.html" %} {% endif %}

{% trans copyright=copyright|e %}© Copyright {{ copyright }}, Michael Waskom.{% endtrans %} {%- if last_updated %} {% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}
{%- endif %} {%- if show_sphinx %} {% trans sphinx_version=sphinx_version|e %}Created using Sphinx {{ sphinx_version }}.{% endtrans %}
{%- endif %}

{%- endblock %}
seaborn-0.11.2/doc/api.rst000066400000000000000000000045121410631356500153020ustar00rootroot00000000000000.. _api_ref:

.. currentmodule:: seaborn

API reference
=============

.. _relational_api:

Relational plots
----------------

.. autosummary::
    :toctree: generated/
    :nosignatures:

    relplot
    scatterplot
    lineplot

.. _distribution_api:

Distribution plots
------------------

.. autosummary::
    :toctree: generated/
    :nosignatures:

    displot
    histplot
    kdeplot
    ecdfplot
    rugplot
    distplot

.. _categorical_api:

Categorical plots
-----------------

.. autosummary::
    :toctree: generated/
    :nosignatures:

    catplot
    stripplot
    swarmplot
    boxplot
    violinplot
    boxenplot
    pointplot
    barplot
    countplot

.. _regression_api:

Regression plots
----------------

.. autosummary::
    :toctree: generated/
    :nosignatures:

    lmplot
    regplot
    residplot

.. _matrix_api:

Matrix plots
------------

.. autosummary::
    :toctree: generated/
    :nosignatures:

    heatmap
    clustermap

.. _grid_api:

Multi-plot grids
----------------

Facet grids
~~~~~~~~~~~

.. autosummary::
    :toctree: generated/
    :nosignatures:

    FacetGrid

Pair grids
~~~~~~~~~~

.. autosummary::
    :toctree: generated/
    :nosignatures:

    pairplot
    PairGrid

Joint grids
~~~~~~~~~~~

.. autosummary::
    :toctree: generated/
    :nosignatures:

    jointplot
    JointGrid

.. _style_api:

Themeing
--------

.. autosummary::
    :toctree: generated/
    :nosignatures:

    set_theme
    axes_style
    set_style
    plotting_context
    set_context
    set_color_codes
    reset_defaults
    reset_orig
    set

.. _palette_api:

Color palettes
--------------

.. autosummary::
    :toctree: generated/
    :nosignatures:

    set_palette
    color_palette
    husl_palette
    hls_palette
    cubehelix_palette
    dark_palette
    light_palette
    diverging_palette
    blend_palette
    xkcd_palette
    crayon_palette
    mpl_palette

Palette widgets
---------------

.. autosummary::
    :toctree: generated/
    :nosignatures:

    choose_colorbrewer_palette
    choose_cubehelix_palette
    choose_light_palette
    choose_dark_palette
    choose_diverging_palette

Utility functions
-----------------

.. autosummary::
    :toctree: generated/
    :nosignatures:

    despine
    move_legend
    saturate
    desaturate
    set_hls_values
    load_dataset
    get_dataset_names
    get_data_home
seaborn-0.11.2/doc/archive.rst000066400000000000000000000002241410631356500161500ustar00rootroot00000000000000.. _archive:

Documentation archive
=====================

- `Version 0.10 <./archive/0.10/index.html>`_
- `Version 0.9 <./archive/0.9/index.html>`_
seaborn-0.11.2/doc/citing.rst000066400000000000000000000026361410631356500160130ustar00rootroot00000000000000.. _citing:

Citing and logo
===============

Citing seaborn
--------------

If seaborn is integral to a scientific publication, please cite it.
A paper describing seaborn has been published in the
`Journal of Open Source Software `_. Here is a ready-made BibTeX entry:

.. highlight:: none

::

    @article{Waskom2021,
        doi = {10.21105/joss.03021},
        url = {https://doi.org/10.21105/joss.03021},
        year = {2021},
        publisher = {The Open Journal},
        volume = {6},
        number = {60},
        pages = {3021},
        author = {Michael L. Waskom},
        title = {seaborn: statistical data visualization},
        journal = {Journal of Open Source Software}
    }

In most situations where seaborn is cited, a citation to
`matplotlib `_ would also be appropriate.

Logo files
----------

Additional logo files, including hi-res PNGs and images suitable for use
over a dark background, are available
`on GitHub `_.

Wide logo
~~~~~~~~~

.. image:: _static/logo-wide-lightbg.svg
   :width: 400px

Tall logo
~~~~~~~~~

.. image:: _static/logo-tall-lightbg.svg
   :width: 150px

Logo mark
~~~~~~~~~

.. image:: _static/logo-mark-lightbg.svg
   :width: 150px

Credit to `Matthias Bussonnier `_ for the initial design
and implementation of the logo.
seaborn-0.11.2/doc/conf.py000066400000000000000000000226531410631356500153030ustar00rootroot00000000000000# -*- coding: utf-8 -*-
#
# seaborn documentation build configuration file, created by
# sphinx-quickstart on Mon Jul 29 23:25:46 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys, os
import sphinx_bootstrap_theme

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
sys.path.insert(0, os.path.abspath('sphinxext'))
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.doctest',
    'sphinx.ext.coverage',
    'sphinx.ext.mathjax',
    'sphinx.ext.autosummary',
    'sphinx.ext.intersphinx',
    'matplotlib.sphinxext.plot_directive',
    'gallery_generator',
    'numpydoc',
    'sphinx_issues',
]

# Sphinx-issues configuration
issues_github_path = 'mwaskom/seaborn'

# Generate the API documentation when building
autosummary_generate = True
numpydoc_show_class_members = False

# Include the example source for plots in API docs
plot_include_source = True
plot_formats = [("png", 90)]
plot_html_show_formats = False
plot_html_show_source_link = False

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'seaborn'
import time
copyright = u'2012-{}'.format(time.strftime("%Y"))

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
sys.path.insert(0, os.path.abspath(os.path.pardir))
import seaborn
version = seaborn.__version__
# The full version, including alpha/beta/rc tags.
release = seaborn.__version__

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build', 'docstrings']

# The reST default role (used for this markup: `text`) to use for all documents.
default_role = 'literal'

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []

# -- Options for HTML output ---------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'bootstrap'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
    'source_link_position': "footer",
    'bootswatch_theme': "paper",
    'navbar_title': " ",
    'navbar_sidebarrel': False,
    'bootstrap_version': "3",
    'nosidebar': True,
    'body_max_width': '100%',
    'navbar_links': [
        ("Gallery", "examples/index"),
        ("Tutorial", "tutorial"),
        ("API", "api"),
    ],
}

# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = "_static/logo-wide-lightbg.svg"

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = "_static/favicon.ico"

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static', 'example_thumbs']

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
html_show_sourcelink = False

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'seaborndoc'

# -- Options for LaTeX output --------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
    ('index', 'seaborn.tex', u'seaborn Documentation',
     u'Michael Waskom', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True

# -- Options for manual page output --------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'seaborn', u'seaborn Documentation',
     [u'Michael Waskom'], 1)
]

# If true, show URL addresses after external links.
#man_show_urls = False

# -- Options for Texinfo output ------------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    ('index', 'seaborn', u'seaborn Documentation',
     u'Michael Waskom', 'seaborn', 'One line description of project.',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
#texinfo_appendices = []

# If false, no module index is generated.
#texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'


# Add the 'copybutton' javascript, to hide/show the prompt in code
# examples, originally taken from scikit-learn's doc/conf.py
def setup(app):
    app.add_js_file('copybutton.js')
    app.add_css_file('style.css')


# -- Intersphinx ------------------------------------------------

intersphinx_mapping = {
    'numpy': ('https://numpy.org/doc/stable/', None),
    'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
    'matplotlib': ('https://matplotlib.org/stable', None),
    'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None),
    'statsmodels': ('https://www.statsmodels.org/stable/', None)
}
seaborn-0.11.2/doc/docstrings/FacetGrid.ipynb
{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns\n", "sns.set_theme(style=\"ticks\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Calling the constructor requires a
long-form data object. This initializes the grid, but doesn't plot anything on it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "sns.FacetGrid(tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assign column and/or row variables to add more subplots to the figure:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.FacetGrid(tips, col=\"time\", row=\"sex\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To draw a plot on every facet, pass a function and the name of one or more columns in the dataframe to :meth:`FacetGrid.map`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"time\", row=\"sex\")\n", "g.map(sns.scatterplot, \"total_bill\", \"tip\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The variable specification in :meth:`FacetGrid.map` requires a positional argument mapping, but if the function has a ``data`` parameter and accepts named variable assignments, you can also use :meth:`FacetGrid.map_dataframe`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"time\", row=\"sex\")\n", "g.map_dataframe(sns.histplot, x=\"total_bill\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Notice how the bins have different widths in each facet. A separate plot is drawn on each facet, so if the plotting function derives any parameters from the data, they may not be shared across facets. You can pass additional keyword arguments to synchronize them. 
But when possible, using a figure-level function like :func:`displot` will take care of this bookkeeping for you:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"time\", row=\"sex\")\n", "g.map_dataframe(sns.histplot, x=\"total_bill\", binwidth=2, binrange=(0, 60))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The :class:`FacetGrid` constructor accepts a ``hue`` parameter. Setting this will condition the data on another variable and make multiple plots in different colors. Where possible, label information is tracked so that a single legend can be drawn:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"time\", hue=\"sex\")\n", "g.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\")\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When ``hue`` is set on the :class:`FacetGrid`, however, a separate plot is drawn for each level of the variable. If the plotting function understands ``hue``, it is better to let it handle that logic. But it is important to ensure that each facet will use the same hue mapping. In the sample ``tips`` data, the ``sex`` column has a categorical datatype, which ensures this. 
Otherwise, you may want to use the `hue_order` or similar parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"time\")\n", "g.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\", hue=\"sex\")\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The size and shape of the plot is specified at the level of each subplot using the ``height`` and ``aspect`` parameters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"day\", height=3.5, aspect=.65)\n", "g.map(sns.histplot, \"total_bill\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If the variable assigned to ``col`` has many levels, it is possible to \"wrap\" it so that it spans multiple rows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"size\", height=2.5, col_wrap=3)\n", "g.map(sns.histplot, \"total_bill\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To add horizontal or vertical reference lines on every facet, use :meth:`FacetGrid.refline`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"time\", margin_titles=True)\n", "g.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\")\n", "g.refline(y=tips[\"tip\"].median())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can pass custom functions to plot with, or to annotate each facet. 
Your custom function must use the matplotlib state-machine interface to plot on the \"current\" axes, and it should catch additional keyword arguments:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "def annotate(data, **kws):\n", " n = len(data)\n", " ax = plt.gca()\n", " ax.text(.1, .6, f\"N = {n}\", transform=ax.transAxes)\n", "\n", "g = sns.FacetGrid(tips, col=\"time\")\n", "g.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\")\n", "g.map_dataframe(annotate)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The :class:`FacetGrid` object has some other useful parameters and methods for tweaking the plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"sex\", row=\"time\", margin_titles=True)\n", "g.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\")\n", "g.set_axis_labels(\"Total bill ($)\", \"Tip ($)\")\n", "g.set_titles(col_template=\"{col_name} patrons\", row_template=\"{row_name}\")\n", "g.set(xlim=(0, 60), ylim=(0, 12), xticks=[10, 30, 50], yticks=[2, 6, 10])\n", "g.tight_layout()\n", "g.savefig(\"facet_plot.png\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import os\n", "if os.path.exists(\"facet_plot.png\"):\n", " os.remove(\"facet_plot.png\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You also have access to the underlying matplotlib objects for additional tweaking:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"sex\", row=\"time\", margin_titles=True, despine=False)\n", "g.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\")\n", "g.figure.subplots_adjust(wspace=0, hspace=0)\n", "for (row_val, col_val), ax in g.axes_dict.items():\n", " if row_val == \"Lunch\" and col_val == \"Female\":\n", " 
ax.set_facecolor(\".95\")\n", " else:\n", " ax.set_facecolor((0, 0, 0, 0))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 }
seaborn-0.11.2/doc/docstrings/JointGrid.ipynb
{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns\n", "sns.set_theme()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Calling the constructor initializes the figure, but it does not plot anything:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "sns.JointGrid(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The simplest plotting method, :meth:`JointGrid.plot` accepts a pair of functions (one for the joint axes and one for both marginal axes):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.JointGrid(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")\n", "g.plot(sns.scatterplot, sns.histplot)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The :meth:`JointGrid.plot` function also accepts additional keyword arguments, but it passes them to both functions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.JointGrid(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")\n",
"g.plot(sns.scatterplot, sns.histplot, alpha=.7, edgecolor=\".2\", linewidth=.5)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If you need to pass different keyword arguments to each function, you'll have to invoke :meth:`JointGrid.plot_joint` and :meth:`JointGrid.plot_marginals`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.JointGrid(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")\n", "g.plot_joint(sns.scatterplot, s=100, alpha=.5)\n", "g.plot_marginals(sns.histplot, kde=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can also set up the grid without assigning any data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.JointGrid()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can then plot by accessing the ``ax_joint``, ``ax_marg_x``, and ``ax_marg_y`` attributes, which are :class:`matplotlib.axes.Axes` objects:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.JointGrid()\n", "x, y = penguins[\"bill_length_mm\"], penguins[\"bill_depth_mm\"]\n", "sns.scatterplot(x=x, y=y, ec=\"b\", fc=\"none\", s=100, linewidth=1.5, ax=g.ax_joint)\n", "sns.histplot(x=x, fill=False, linewidth=2, ax=g.ax_marg_x)\n", "sns.kdeplot(y=y, linewidth=2, ax=g.ax_marg_y)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The plotting methods can use any seaborn functions that accept ``x`` and ``y`` variables:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.JointGrid(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")\n", "g.plot(sns.regplot, sns.boxplot)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If the functions accept a ``hue`` variable, you can use it by assigning ``hue`` when you call the constructor:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": 
[], "source": [ "g = sns.JointGrid(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", hue=\"species\")\n", "g.plot(sns.scatterplot, sns.histplot)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Horizontal and/or vertical reference lines can be added to the joint and/or marginal axes using :meth:`JointGrid.refline`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.JointGrid(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")\n", "g.plot(sns.scatterplot, sns.histplot)\n", "g.refline(x=45, y=16)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The figure will always be square (unless you resize it at the matplotlib layer), but its overall size and layout are configurable. The size is controlled by the ``height`` parameter. The relative ratio between the joint and marginal axes is controlled by ``ratio``, and the amount of space between the plots is controlled by ``space``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.JointGrid(height=4, ratio=2, space=.05)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "By default, the ticks on the density axis of the marginal plots are turned off, but this is configurable:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.JointGrid(marginal_ticks=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Limits on the two data axes (which are shared across plots) can also be defined when setting up the figure:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.JointGrid(xlim=(-2, 5), ylim=(0, 10))" ] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", 
"nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 }
seaborn-0.11.2/doc/docstrings/Makefile
rst_files := $(patsubst %.ipynb,%.rst,$(wildcard *.ipynb))

docstrings: ${rst_files}

%.rst: %.ipynb
	@../tools/nb_to_doc.py $*.ipynb
	@cp -r $*_files ../generated/
	@if [ -f ../generated/seaborn.$*.rst ]; then \
		touch ../generated/seaborn.$*.rst; \
	fi

clean:
	rm -rf *.rst
	rm -rf *_files/
	rm -rf .ipynb_checkpoints/
seaborn-0.11.2/doc/docstrings/PairGrid.ipynb
{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns; sns.set_theme()\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Calling the constructor sets up a blank grid of subplots with each row and column corresponding to a numeric variable in the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "g = sns.PairGrid(penguins)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Passing a bivariate function to :meth:`PairGrid.map` will draw a bivariate plot on every axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins)\n", "g.map(sns.scatterplot)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Passing separate functions to :meth:`PairGrid.map_diag` and :meth:`PairGrid.map_offdiag` will show each variable's marginal distribution on the diagonal:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins)\n", "g.map_diag(sns.histplot)\n", "g.map_offdiag(sns.scatterplot)" ] }, { "cell_type": "raw", "metadata": {},
"source": [ "It's also possible to use different functions on the upper and lower triangles of the plot (which are otherwise redundant):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins, diag_sharey=False)\n", "g.map_upper(sns.scatterplot)\n", "g.map_lower(sns.kdeplot)\n", "g.map_diag(sns.kdeplot)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Or to avoid the redundancy altogether:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins, diag_sharey=False, corner=True)\n", "g.map_lower(sns.scatterplot)\n", "g.map_diag(sns.kdeplot)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The :class:`PairGrid` constructor accepts a ``hue`` variable. This variable is passed directly to functions that understand it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins, hue=\"species\")\n", "g.map_diag(sns.histplot)\n", "g.map_offdiag(sns.scatterplot)\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "But you can also pass matplotlib functions, in which case a groupby is performed internally and a separate plot is drawn for each level:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins, hue=\"species\")\n", "g.map_diag(plt.hist)\n", "g.map_offdiag(plt.scatter)\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Additional semantic variables can be assigned by passing data vectors directly while mapping the function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins, hue=\"species\")\n", "g.map_diag(sns.histplot)\n", "g.map_offdiag(sns.scatterplot, size=penguins[\"sex\"])\n", "g.add_legend(title=\"\", adjust_subtitles=True)" ] }, { "cell_type": "raw", 
"metadata": {}, "source": [ "When using seaborn functions that can implement a numeric hue mapping, you will want to disable mapping of the variable on the diagonal axes. Note that the ``hue`` variable is excluded from the list of variables shown by default:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins, hue=\"body_mass_g\")\n", "g.map_diag(sns.histplot, hue=None, color=\".3\")\n", "g.map_offdiag(sns.scatterplot)\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The ``vars`` parameter can be used to control exactly which variables are used:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "variables = [\"body_mass_g\", \"bill_length_mm\", \"flipper_length_mm\"]\n", "g = sns.PairGrid(penguins, hue=\"body_mass_g\", vars=variables)\n", "g.map_diag(sns.histplot, hue=None, color=\".3\")\n", "g.map_offdiag(sns.scatterplot)\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The plot need not be square: separate variables can be used to define the rows and columns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x_vars = [\"body_mass_g\", \"bill_length_mm\", \"bill_depth_mm\", \"flipper_length_mm\"]\n", "y_vars = [\"body_mass_g\"]\n", "g = sns.PairGrid(penguins, hue=\"species\", x_vars=x_vars, y_vars=y_vars)\n", "g.map_diag(sns.histplot, color=\".3\")\n", "g.map_offdiag(sns.scatterplot)\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It can be useful to explore different approaches to resolving multiple distributions on the diagonal axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins, hue=\"species\")\n", "g.map_diag(sns.histplot, multiple=\"stack\", element=\"step\")\n", "g.map_offdiag(sns.scatterplot)\n", "g.add_legend()" ] }, { "cell_type": 
"code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 }
seaborn-0.11.2/doc/docstrings/axes_style.ipynb
{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "dated-mother", "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns" ] }, { "cell_type": "markdown", "id": "prospective-sellers", "metadata": {}, "source": [ "Calling with no arguments will return the current defaults for the style parameters:" ] }, { "cell_type": "code", "execution_count": null, "id": "recognized-rehabilitation", "metadata": { "tags": [ "show-output" ] }, "outputs": [], "source": [ "sns.axes_style()" ] }, { "cell_type": "markdown", "id": "furnished-irrigation", "metadata": {}, "source": [ "Calling with the name of a predefined style will show those parameter values:" ] }, { "cell_type": "code", "execution_count": null, "id": "coordinate-reward", "metadata": { "tags": [ "show-output" ] }, "outputs": [], "source": [ "sns.axes_style(\"darkgrid\")" ] }, { "cell_type": "markdown", "id": "mediterranean-picking", "metadata": {}, "source": [ "Use the function as a context manager to temporarily change the style of your plots:" ] }, { "cell_type": "code", "execution_count": null, "id": "missing-essence", "metadata": {}, "outputs": [], "source": [ "with sns.axes_style(\"whitegrid\"):\n", " sns.barplot(x=[1, 2, 3], y=[2, 5, 3])" ] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" },
"language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 5 }
seaborn-0.11.2/doc/docstrings/color_palette.ipynb
{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns; sns.set_theme()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "# Add colormap display methods to matplotlib colormaps.\n", "# These are forthcoming in matplotlib 3.4, but the matplotlib display\n", "# method includes the colormap name, which is redundant.\n", "def _repr_png_(self):\n", " \"\"\"Generate a PNG representation of the Colormap.\"\"\"\n", " import io\n", " from PIL import Image\n", " import numpy as np\n", " IMAGE_SIZE = (400, 50)\n", " X = np.tile(np.linspace(0, 1, IMAGE_SIZE[0]), (IMAGE_SIZE[1], 1))\n", " pixels = self(X, bytes=True)\n", " png_bytes = io.BytesIO()\n", " Image.fromarray(pixels).save(png_bytes, format='png')\n", " return png_bytes.getvalue()\n", " \n", "def _repr_html_(self):\n", " \"\"\"Generate an HTML representation of the Colormap.\"\"\"\n", " import base64\n", " png_bytes = self._repr_png_()\n", " png_base64 = base64.b64encode(png_bytes).decode('ascii')\n", " return ('<img src=\"data:image/png;base64,' + png_base64 + '\"/>')\n", " \n", "import matplotlib as mpl\n", "mpl.colors.Colormap._repr_png_ = _repr_png_\n", "mpl.colors.Colormap._repr_html_ = _repr_html_" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Calling with no arguments returns all colors from the current default\n", "color cycle:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Other
variants on the seaborn categorical color palette can be referenced by name:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"pastel\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Return a specified number of evenly spaced hues in the \"HUSL\" system:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"husl\", 9)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Return all unique colors in a categorical Color Brewer palette:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"Set2\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Return one of the perceptually-uniform colormaps included in seaborn:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"flare\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Return a customized cubehelix color palette:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"ch:s=.25,rot=-.25\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Return a light-themed sequential colormap to a seed color:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"light:#5A9\", as_cmap=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } 
seaborn-0.11.2/doc/docstrings/displot.ipynb000066400000000000000000000126461410631356500207060ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns; sns.set_theme(style=\"ticks\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The default plot kind is a histogram:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "sns.displot(data=penguins, x=\"flipper_length_mm\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Use the ``kind`` parameter to select a different representation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", kind=\"kde\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are three main plot kinds; in addition to histograms and kernel density estimates (KDEs), you can also draw empirical cumulative distribution functions (ECDFs):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", kind=\"ecdf\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While in histogram mode, it is also possible to add a KDE curve:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", kde=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To draw a bivariate plot, assign both ``x`` and ``y``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", y=\"bill_length_mm\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Currently, bivariate plots are available only for histograms and KDEs:" ] }, { 
"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", y=\"bill_length_mm\", kind=\"kde\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For each kind of plot, you can also show individual observations with a marginal \"rug\":" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.displot(data=penguins, x=\"flipper_length_mm\", y=\"bill_length_mm\", kind=\"kde\", rug=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Each kind of plot can be drawn separately for subsets of data using ``hue`` mapping:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", hue=\"species\", kind=\"kde\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Additional keyword arguments are passed to the appropriate underlying plotting function, allowing for further customization:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", hue=\"species\", multiple=\"stack\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The figure is constructed using a :class:`FacetGrid`, meaning that you can also show subsets on distinct subplots, or \"facets\":" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", hue=\"species\", col=\"sex\", kind=\"kde\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Because the figure is drawn with a :class:`FacetGrid`, you control its size and shape with the ``height`` and ``aspect`` parameters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(\n", " data=penguins, y=\"flipper_length_mm\", hue=\"sex\", col=\"species\",\n", " 
kind=\"ecdf\", height=4, aspect=.7,\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The function returns the :class:`FacetGrid` object with the plot, and you can use the methods on this object to customize it further:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.displot(\n", " data=penguins, y=\"flipper_length_mm\", hue=\"sex\", col=\"species\",\n", " kind=\"kde\", height=4, aspect=.7,\n", ")\n", "g.set_axis_labels(\"Density (a.u.)\", \"Flipper length (mm)\")\n", "g.set_titles(\"{col_name} penguins\")" ] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/docstrings/ecdfplot.ipynb000066400000000000000000000056341410631356500210270ustar00rootroot00000000000000{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Plot a univariate distribution along the x axis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns; sns.set_theme()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "sns.ecdfplot(data=penguins, x=\"flipper_length_mm\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Flip the plot by assigning the data variable to the y axis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.ecdfplot(data=penguins, y=\"flipper_length_mm\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If neither `x` nor `y` is assigned, the dataset is treated as wide-form, and a 
separate ECDF is drawn for each numeric column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.ecdfplot(data=penguins.filter(like=\"bill_\", axis=\"columns\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also draw multiple ECDFs from a long-form dataset with hue mapping:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.ecdfplot(data=penguins, x=\"bill_length_mm\", hue=\"species\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The default distribution statistic is normalized to show a proportion, but you can show absolute counts instead:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.ecdfplot(data=penguins, x=\"bill_length_mm\", hue=\"species\", stat=\"count\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's also possible to plot the empirical complementary CDF (1 - CDF):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.ecdfplot(data=penguins, x=\"bill_length_mm\", hue=\"species\", complementary=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/docstrings/histplot.ipynb000066400000000000000000000260521410631356500210720ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns\n", "sns.set_theme(style=\"white\")" ] 
}, { "cell_type": "raw", "metadata": {}, "source": [ "Assign a variable to ``x`` to plot a univariate distribution along the x axis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "sns.histplot(data=penguins, x=\"flipper_length_mm\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Flip the plot by assigning the data variable to the y axis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=penguins, y=\"flipper_length_mm\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check how well the histogram represents the data by specifying a different bin width:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=penguins, x=\"flipper_length_mm\", binwidth=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also define the total number of bins to use:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=penguins, x=\"flipper_length_mm\", bins=30)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Add a kernel density estimate to smooth the histogram, providing complementary information about the shape of the distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=penguins, x=\"flipper_length_mm\", kde=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If neither `x` nor `y` is assigned, the dataset is treated as wide-form, and a histogram is drawn for each numeric column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=penguins)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can otherwise draw multiple histograms from a long-form dataset with hue 
mapping:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=penguins, x=\"flipper_length_mm\", hue=\"species\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The default approach to plotting multiple distributions is to \"layer\" them, but you can also \"stack\" them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=penguins, x=\"flipper_length_mm\", hue=\"species\", multiple=\"stack\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Overlapping bars can be hard to visually resolve. A different approach would be to draw a step function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(penguins, x=\"flipper_length_mm\", hue=\"species\", element=\"step\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can move even farther away from bars by drawing a polygon with vertices in the center of each bin. 
This may make it easier to see the shape of the distribution, but use with caution: it will be less obvious to your audience that they are looking at a histogram:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(penguins, x=\"flipper_length_mm\", hue=\"species\", element=\"poly\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To compare the distribution of subsets that differ substantially in size, use independent density normalization:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(\n", " penguins, x=\"bill_length_mm\", hue=\"island\", element=\"step\",\n", " stat=\"density\", common_norm=False,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's also possible to normalize so that each bar's height shows a probability, proportion, or percent, which make more sense for discrete variables:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "sns.histplot(data=tips, x=\"size\", stat=\"percent\", discrete=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can even draw a histogram over categorical variables (although this is an experimental feature):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=tips, x=\"day\", shrink=.8)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When using a ``hue`` semantic with discrete data, it can make sense to \"dodge\" the levels:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=tips, x=\"day\", hue=\"sex\", multiple=\"dodge\", shrink=.8)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Real-world data is often skewed. For heavily skewed distributions, it's better to define the bins in log space. 
Compare:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "planets = sns.load_dataset(\"planets\")\n", "sns.histplot(data=planets, x=\"distance\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To the log-scale version:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=planets, x=\"distance\", log_scale=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are also a number of options for how the histogram appears. You can show unfilled bars:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=planets, x=\"distance\", log_scale=True, fill=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or an unfilled step function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(data=planets, x=\"distance\", log_scale=True, element=\"step\", fill=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Step functions, especially when unfilled, make it easy to compare cumulative histograms:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(\n", " data=planets, x=\"distance\", hue=\"method\",\n", " hue_order=[\"Radial Velocity\", \"Transit\"],\n", " log_scale=True, element=\"step\", fill=False,\n", " cumulative=True, stat=\"density\", common_norm=False,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When both ``x`` and ``y`` are assigned, a bivariate histogram is computed and shown as a heatmap:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(penguins, x=\"bill_depth_mm\", y=\"body_mass_g\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's possible to assign a ``hue`` variable too, although this will not work well if 
data from the different levels have substantial overlap:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(penguins, x=\"bill_depth_mm\", y=\"body_mass_g\", hue=\"species\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Multiple color maps can make sense when one of the variables is discrete:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(\n", " penguins, x=\"bill_depth_mm\", y=\"species\", hue=\"species\", legend=False\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The bivariate histogram accepts all of the same options for computation as its univariate counterpart, using tuples to parametrize ``x`` and ``y`` independently:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(\n", " planets, x=\"year\", y=\"distance\",\n", " bins=30, discrete=(True, False), log_scale=(False, True),\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The default behavior makes cells with no observations transparent, although this can be disabled: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(\n", " planets, x=\"year\", y=\"distance\",\n", " bins=30, discrete=(True, False), log_scale=(False, True),\n", " thresh=None,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's also possible to set the threshold and colormap saturation point in terms of the proportion of cumulative counts:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(\n", " planets, x=\"year\", y=\"distance\",\n", " bins=30, discrete=(True, False), log_scale=(False, True),\n", " pthresh=.05, pmax=.9,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To annotate the colormap, add a colorbar:" ] }, { "cell_type": "code", 
"execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.histplot(\n", " planets, x=\"year\", y=\"distance\",\n", " bins=30, discrete=(True, False), log_scale=(False, True),\n", " cbar=True, cbar_kws=dict(shrink=.75),\n", ")" ] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/docstrings/jointplot.ipynb000066400000000000000000000112541410631356500212440ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns\n", "sns.set_theme(style=\"white\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In the simplest invocation, assign ``x`` and ``y`` to create a scatterplot (using :func:`scatterplot`) with marginal histograms (using :func:`histplot`):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "sns.jointplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning a ``hue`` variable will add conditional colors to the scatterplot and draw separate density curves (using :func:`kdeplot`) on the marginal axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", hue=\"species\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Several different approaches to plotting are available through the ``kind`` parameter. 
Setting ``kind=\"kde\"`` will draw both bivariate and univariate KDEs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", hue=\"species\", kind=\"kde\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Set ``kind=\"reg\"`` to add a linear regression fit (using :func:`regplot`) and univariate KDE curves:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", kind=\"reg\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "There are also two options for bin-based visualization of the joint distribution. The first, with ``kind=\"hist\"``, uses :func:`histplot` on all of the axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", kind=\"hist\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Alternatively, setting ``kind=\"hex\"`` will use :meth:`matplotlib.axes.Axes.hexbin` to compute a bivariate histogram using hexagonal bins:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", kind=\"hex\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Additional keyword arguments can be passed down to the underlying plots:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(\n", " data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\",\n", " marker=\"+\", s=100, marginal_kws=dict(bins=25, fill=False),\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Use :class:`JointGrid` parameters to control the size and layout of the figure:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": 
[], "source": [ "sns.jointplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", height=5, ratio=2, marginal_ticks=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To add more layers onto the plot, use the methods on the :class:`JointGrid` object that :func:`jointplot` returns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.jointplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")\n", "g.plot_joint(sns.kdeplot, color=\"r\", zorder=0, levels=6)\n", "g.plot_marginals(sns.rugplot, color=\"r\", height=-.15, clip_on=False)" ] }, { "cell_type": "raw", "metadata": {}, "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/docstrings/kdeplot.ipynb000066400000000000000000000156331410631356500206710ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns; sns.set_theme()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plot a univariate distribution along the x axis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "sns.kdeplot(data=tips, x=\"total_bill\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Flip the plot by assigning the data variable to the y axis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(data=tips, y=\"total_bill\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plot distributions for each 
column of a wide-form dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris = sns.load_dataset(\"iris\")\n", "sns.kdeplot(data=iris)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use less smoothing:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(data=tips, x=\"total_bill\", bw_adjust=.2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use more smoothing, but don't smooth past the extreme data points:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(data=tips, x=\"total_bill\", bw_adjust=5, cut=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plot conditional distributions with hue mapping of a second variable:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(data=tips, x=\"total_bill\", hue=\"time\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\"Stack\" the conditional distributions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(data=tips, x=\"total_bill\", hue=\"time\", multiple=\"stack\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Normalize the stacked distribution at each value in the grid:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(data=tips, x=\"total_bill\", hue=\"time\", multiple=\"fill\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Estimate the cumulative distribution function(s), normalizing each subset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(\n", " data=tips, x=\"total_bill\", hue=\"time\",\n", " cumulative=True, common_norm=False, common_grid=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ 
"Estimate distribution from aggregated data, using weights:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips_agg = (tips\n", " .groupby(\"size\")\n", " .agg(total_bill=(\"total_bill\", \"mean\"), n=(\"total_bill\", \"count\"))\n", ")\n", "sns.kdeplot(data=tips_agg, x=\"total_bill\", weights=\"n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Map the data variable with log scaling:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "diamonds = sns.load_dataset(\"diamonds\")\n", "sns.kdeplot(data=diamonds, x=\"price\", log_scale=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use numeric hue mapping:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(data=tips, x=\"total_bill\", hue=\"size\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Modify the appearance of the plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(\n", " data=tips, x=\"total_bill\", hue=\"size\",\n", " fill=True, common_norm=False, palette=\"crest\",\n", " alpha=.5, linewidth=0,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plot a bivariate distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "geyser = sns.load_dataset(\"geyser\")\n", "sns.kdeplot(data=geyser, x=\"waiting\", y=\"duration\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Map a third variable with a hue semantic to show conditional distributions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(data=geyser, x=\"waiting\", y=\"duration\", hue=\"kind\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Show filled contours:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, 
"outputs": [], "source": [ "sns.kdeplot(\n", " data=geyser, x=\"waiting\", y=\"duration\", hue=\"kind\", fill=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Show fewer contour levels, covering less of the distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(\n", " data=geyser, x=\"waiting\", y=\"duration\", hue=\"kind\",\n", " levels=5, thresh=.2,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fill the axes extent with a smooth distribution, using a different colormap:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(\n", " data=geyser, x=\"waiting\", y=\"duration\",\n", " fill=True, thresh=0, levels=100, cmap=\"mako\",\n", ")" ] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/docstrings/lineplot.ipynb000066400000000000000000000234151410631356500210520ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", "import matplotlib as mpl\n", "import matplotlib.pyplot as plt\n", "sns.set_theme()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The ``flights`` dataset has 10 years of monthly airline passenger data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flights = sns.load_dataset(\"flights\")\n", "flights.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To draw a line plot using 
long-form data, assign the ``x`` and ``y`` variables:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "may_flights = flights.query(\"month == 'May'\")\n", "sns.lineplot(data=may_flights, x=\"year\", y=\"passengers\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Pivot the dataframe to a wide-form representation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flights_wide = flights.pivot(\"year\", \"month\", \"passengers\")\n", "flights_wide.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To plot a single vector, pass it to ``data``. If the vector is a :class:`pandas.Series`, it will be plotted against its index:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(data=flights_wide[\"May\"])" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Passing the entire wide-form dataset to ``data`` plots a separate line for each column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(data=flights_wide)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Passing the entire dataset in long-form mode will aggregate over repeated values (each year) to show the mean and 95% confidence interval:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(data=flights, x=\"year\", y=\"passengers\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assign a grouping semantic (``hue``, ``size``, or ``style``) to plot separate lines:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(data=flights, x=\"year\", y=\"passengers\", hue=\"month\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The same column can be assigned to multiple semantic variables, which can increase the accessibility of the plot:" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(data=flights, x=\"year\", y=\"passengers\", hue=\"month\", style=\"month\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Each semantic variable can also represent a different column. For that, we'll need a more complex dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fmri = sns.load_dataset(\"fmri\")\n", "fmri.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Repeated observations are aggregated even when semantic grouping is used:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(data=fmri, x=\"timepoint\", y=\"signal\", hue=\"event\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assign both ``hue`` and ``style`` to represent two different grouping variables:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(data=fmri, x=\"timepoint\", y=\"signal\", hue=\"region\", style=\"event\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When assigning a ``style`` variable, markers can be used instead of (or along with) dashes to distinguish the groups:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(\n", " data=fmri,\n", " x=\"timepoint\", y=\"signal\", hue=\"event\", style=\"event\",\n", " markers=True, dashes=False\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Show error bars instead of error bands and plot the 68% confidence interval (standard error):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(\n", " data=fmri, x=\"timepoint\", y=\"signal\", hue=\"event\", err_style=\"bars\", ci=68\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning the ``units`` variable will plot multiple 
lines without applying a semantic mapping:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(\n", " data=fmri.query(\"region == 'frontal'\"),\n", " x=\"timepoint\", y=\"signal\", hue=\"event\", units=\"subject\",\n", " estimator=None, lw=1,\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Load another dataset with a numeric grouping variable:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dots = sns.load_dataset(\"dots\").query(\"align == 'dots'\")\n", "dots.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning a numeric variable to ``hue`` maps it differently, using a different default palette and a quantitative color mapping:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(\n", " data=dots, x=\"time\", y=\"firing_rate\", hue=\"coherence\", style=\"choice\",\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Control the color mapping by setting the ``palette`` and passing a :class:`matplotlib.colors.Normalize` object:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(\n", " data=dots.query(\"coherence > 0\"),\n", " x=\"time\", y=\"firing_rate\", hue=\"coherence\", style=\"choice\",\n", " palette=\"flare\", hue_norm=mpl.colors.LogNorm(),\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Or pass specific colors, either as a Python list or dictionary:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "palette = sns.color_palette(\"mako_r\", 6)\n", "sns.lineplot(\n", " data=dots, x=\"time\", y=\"firing_rate\",\n", " hue=\"coherence\", style=\"choice\",\n", " palette=palette\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assign the ``size`` semantic to map the width of the lines with a numeric variable:" ] }, { 
"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(\n", " data=dots, x=\"time\", y=\"firing_rate\",\n", " size=\"coherence\", hue=\"choice\",\n", " legend=\"full\"\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Pass a tuple, ``sizes=(smallest, largest)``, to control the range of linewidths used to map the ``size`` semantic:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lineplot(\n", " data=dots, x=\"time\", y=\"firing_rate\",\n", " size=\"coherence\", hue=\"choice\",\n", " sizes=(.25, 2.5)\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "By default, the observations are sorted by ``x``. Disable this to plot a line with the order that observations appear in the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x, y = np.random.normal(size=(2, 5000)).cumsum(axis=1)\n", "sns.lineplot(x=x, y=y, sort=False, lw=1)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Use :func:`relplot` to combine :func:`lineplot` and :class:`FacetGrid`. This allows grouping within additional categorical variables. 
Using :func:`relplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of the semantic mappings across facets:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(\n", " data=fmri, x=\"timepoint\", y=\"signal\",\n", " col=\"region\", hue=\"event\", style=\"event\",\n", " kind=\"line\"\n", ")" ] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/docstrings/move_legend.ipynb000066400000000000000000000076141410631356500215130ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "8ec46ad8-bc4c-4ee0-9626-271088c702f9", "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns\n", "sns.set_theme()\n", "penguins = sns.load_dataset(\"penguins\")" ] }, { "cell_type": "raw", "id": "008bdd98-88cb-4a81-9f50-9b0e5a357305", "metadata": {}, "source": [ "For axes-level functions, pass the :class:`matplotlib.axes.Axes` object and provide a new location." 
] }, { "cell_type": "code", "execution_count": null, "id": "b82e58f9-b15d-4554-bee5-de6a689344a6", "metadata": {}, "outputs": [], "source": [ "ax = sns.histplot(penguins, x=\"bill_length_mm\", hue=\"species\")\n", "sns.move_legend(ax, \"center right\")" ] }, { "cell_type": "raw", "id": "4f2a7f5d-ab39-46c7-87f4-532e607adf0b", "metadata": {}, "source": [ "Use the `bbox_to_anchor` parameter for more fine-grained control, including moving the legend outside of the axes:" ] }, { "cell_type": "code", "execution_count": null, "id": "ed610a98-447a-4459-8342-48abc80330f0", "metadata": {}, "outputs": [], "source": [ "ax = sns.histplot(penguins, x=\"bill_length_mm\", hue=\"species\")\n", "sns.move_legend(ax, \"upper left\", bbox_to_anchor=(1, 1))" ] }, { "cell_type": "raw", "id": "9d2fd766-a806-45d9-949d-1572991cf512", "metadata": {}, "source": [ "Pass additional :meth:`matplotlib.axes.Axes.legend` parameters to update other properties:" ] }, { "cell_type": "code", "execution_count": null, "id": "5ad4342c-c46e-49e9-98a2-6c88c6fb4c54", "metadata": {}, "outputs": [], "source": [ "ax = sns.histplot(penguins, x=\"bill_length_mm\", hue=\"species\")\n", "sns.move_legend(\n", " ax, \"lower center\",\n", " bbox_to_anchor=(.5, 1), ncol=3, title=None, frameon=False,\n", ")" ] }, { "cell_type": "raw", "id": "0d573092-46fd-4a95-b7ed-7e6833823adc", "metadata": {}, "source": [ "It's also possible to move the legend created by a figure-level function. 
But when fine-tuning the position, you must bear in mind that the figure will have extra blank space on the right:" ] }, { "cell_type": "code", "execution_count": null, "id": "b258a9b8-69e5-4d4a-94cb-5b6baddc402b", "metadata": {}, "outputs": [], "source": [ "g = sns.displot(\n", " penguins,\n", " x=\"bill_length_mm\", hue=\"species\",\n", " col=\"island\", col_wrap=2, height=3,\n", ")\n", "sns.move_legend(g, \"upper left\", bbox_to_anchor=(.55, .45))" ] }, { "cell_type": "raw", "id": "c9dc54e2-2c66-412f-ab2a-4f2bc2cb5782", "metadata": {}, "source": [ "One way to avoid this would be to set `legend_out=False` on the :class:`FacetGrid`:" ] }, { "cell_type": "code", "execution_count": null, "id": "06cff408-4cdf-47af-8def-176f3e70ec5a", "metadata": {}, "outputs": [], "source": [ "g = sns.displot(\n", " penguins,\n", " x=\"bill_length_mm\", hue=\"species\",\n", " col=\"island\", col_wrap=2, height=3,\n", " facet_kws=dict(legend_out=False),\n", ")\n", "sns.move_legend(g, \"upper left\", bbox_to_anchor=(.55, .45), frameon=False)" ] }, { "cell_type": "code", "execution_count": null, "id": "b170f20d-22a9-4f7d-917a-d09e10b1f08c", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 5 } seaborn-0.11.2/doc/docstrings/pairplot.ipynb000066400000000000000000000114751410631356500210610ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns\n", "sns.set_theme(style=\"ticks\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The simplest invocation uses :func:`scatterplot` for 
each pairing of the variables and :func:`histplot` for the marginal plots along the diagonal:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "sns.pairplot(penguins)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning a ``hue`` variable adds a semantic mapping and changes the default marginal plot to a layered kernel density estimate (KDE):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(penguins, hue=\"species\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's possible to force marginal histograms:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(penguins, hue=\"species\", diag_kind=\"hist\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The ``kind`` parameter determines both the diagonal and off-diagonal plotting style. Several options are available, including using :func:`kdeplot` to draw KDEs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(penguins, kind=\"kde\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Or :func:`histplot` to draw both bivariate and univariate histograms:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(penguins, kind=\"hist\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The ``markers`` parameter applies a style mapping on the off-diagonal axes. 
Currently, it will be redundant with the ``hue`` variable:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(penguins, hue=\"species\", markers=[\"o\", \"s\", \"D\"])" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "As with other figure-level functions, the size of the figure is controlled by setting the ``height`` of each individual subplot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(penguins, height=1.5)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Use ``vars`` or ``x_vars`` and ``y_vars`` to select the variables to plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(\n", " penguins,\n", " x_vars=[\"bill_length_mm\", \"bill_depth_mm\", \"flipper_length_mm\"],\n", " y_vars=[\"bill_length_mm\", \"bill_depth_mm\"],\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Set ``corner=True`` to plot only the lower triangle:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(penguins, corner=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The ``plot_kws`` and ``diag_kws`` parameters accept dicts of keyword arguments to customize the off-diagonal and diagonal plots, respectively:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(\n", " penguins,\n", " plot_kws=dict(marker=\"+\", linewidth=1),\n", " diag_kws=dict(fill=False),\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The return object is the underlying :class:`PairGrid`, which can be used to further customize the plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.pairplot(penguins, diag_kind=\"kde\")\n", "g.map_lower(sns.kdeplot, levels=4, color=\".2\")" ] } ], "metadata": { "kernelspec": 
{ "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/docstrings/plotting_context.ipynb000066400000000000000000000040651410631356500226300ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "perceived-worry", "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns" ] }, { "cell_type": "markdown", "id": "seventh-volleyball", "metadata": {}, "source": [ "Calling with no arguments will return the current defaults for the parameters that get scaled:" ] }, { "cell_type": "code", "execution_count": null, "id": "roman-villa", "metadata": { "tags": [ "show-output" ] }, "outputs": [], "source": [ "sns.plotting_context()" ] }, { "cell_type": "markdown", "id": "handled-texas", "metadata": {}, "source": [ "Calling with the name of a predefined context will show those values:" ] }, { "cell_type": "code", "execution_count": null, "id": "distant-caribbean", "metadata": { "tags": [ "show-output" ] }, "outputs": [], "source": [ "sns.plotting_context(\"talk\")" ] }, { "cell_type": "markdown", "id": "lightweight-anime", "metadata": {}, "source": [ "Use the function as a context manager to temporarily change the parameter values:" ] }, { "cell_type": "code", "execution_count": null, "id": "contemporary-hampshire", "metadata": {}, "outputs": [], "source": [ "with sns.plotting_context(\"talk\"):\n", " sns.lineplot(x=[\"A\", \"B\", \"C\"], y=[1, 3, 2])" ] }, { "cell_type": "code", "execution_count": null, "id": "accompanied-brisbane", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": 
"seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 5 } seaborn-0.11.2/doc/docstrings/relplot.ipynb000066400000000000000000000150621410631356500207040ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ "These examples will illustrate only some of the functionality that :func:`relplot` is capable of. For more information, consult the examples for :func:`scatterplot` and :func:`lineplot`, which are used when ``kind=\"scatter\"`` or ``kind=\"line\"``, respectively." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns\n", "import matplotlib.pyplot as plt\n", "sns.set_theme(style=\"ticks\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To illustrate ``kind=\"scatter\"`` (the default style of plot), we will use the \"tips\" dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "tips.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning ``x`` and ``y`` and any semantic mapping variables will draw a single plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"day\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning a ``col`` variable creates a faceted figure with multiple subplots arranged across the columns of the grid:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"day\", col=\"time\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Different variables 
can be assigned to facet on both the columns and rows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"day\", col=\"time\", row=\"sex\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When the variable assigned to ``col`` has many levels, it can be \"wrapped\" across multiple rows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"time\", col=\"day\", col_wrap=2)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning multiple semantic variables can show multi-dimensional relationships, but be mindful to avoid making an overly-complicated plot." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(\n", " data=tips, x=\"total_bill\", y=\"tip\", col=\"time\",\n", " hue=\"time\", size=\"size\", style=\"sex\",\n", " palette=[\"b\", \"r\"], sizes=(10, 100)\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When there is a natural continuity to one of the variables, it makes more sense to show lines instead of points. To draw the figure using :func:`lineplot`, set ``kind=\"line\"``. We will illustrate this effect with the \"fmri\" dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fmri = sns.load_dataset(\"fmri\")\n", "fmri.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Using ``kind=\"line\"`` offers the same flexibility for semantic mappings as ``kind=\"scatter\"``, but :func:`lineplot` transforms the data more before plotting. Observations are sorted by their ``x`` value, and repeated observations are aggregated. 
By default, the resulting plot shows the mean and 95% CI for each unit:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(\n", " data=fmri, x=\"timepoint\", y=\"signal\", col=\"region\",\n", " hue=\"event\", style=\"event\", kind=\"line\",\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The size and shape of the figure are parametrized by the ``height`` and ``aspect`` ratio of each individual facet:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(\n", " data=fmri,\n", " x=\"timepoint\", y=\"signal\",\n", " hue=\"event\", style=\"event\", col=\"region\",\n", " height=4, aspect=.7, kind=\"line\"\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The object returned by :func:`relplot` is always a :class:`FacetGrid`, which has several methods that allow you to quickly tweak the title, labels, and other aspects of the plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.relplot(\n", " data=fmri,\n", " x=\"timepoint\", y=\"signal\",\n", " hue=\"event\", style=\"event\", col=\"region\",\n", " height=4, aspect=.7, kind=\"line\"\n", ")\n", "(g.map(plt.axhline, y=0, color=\".7\", dashes=(2, 1), zorder=0)\n", " .set_axis_labels(\"Timepoint\", \"Percent signal change\")\n", " .set_titles(\"Region: {col_name} cortex\")\n", " .tight_layout(w_pad=0))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It is also possible to use wide-form data with :func:`relplot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flights_wide = sns.load_dataset(\"flights\").pivot(\"year\", \"month\", \"passengers\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Faceting is not an option in this case, but the plot will still take advantage of the external legend offered by :class:`FacetGrid`:" ] }, { "cell_type": "code", 
"execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(data=flights_wide, kind=\"line\")" ] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/docstrings/rugplot.ipynb000066400000000000000000000056131410631356500207200ustar00rootroot00000000000000{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Add a rug along one of the axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import seaborn as sns; sns.set_theme()\n", "tips = sns.load_dataset(\"tips\")\n", "sns.kdeplot(data=tips, x=\"total_bill\")\n", "sns.rugplot(data=tips, x=\"total_bill\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Add a rug along both axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\")\n", "sns.rugplot(data=tips, x=\"total_bill\", y=\"tip\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Represent a third variable with hue mapping:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"time\")\n", "sns.rugplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"time\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Draw a taller rug:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\")\n", "sns.rugplot(data=tips, x=\"total_bill\", y=\"tip\", height=.1)" ] }, { "cell_type": "markdown", 
"metadata": {}, "source": [ "Put the rug outside the axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\")\n", "sns.rugplot(data=tips, x=\"total_bill\", y=\"tip\", height=-.02, clip_on=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Show the density of a larger dataset using thinner lines and alpha blending:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "diamonds = sns.load_dataset(\"diamonds\")\n", "sns.scatterplot(data=diamonds, x=\"carat\", y=\"price\", s=5)\n", "sns.rugplot(data=diamonds, x=\"carat\", y=\"price\", lw=1, alpha=.005)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/docstrings/scatterplot.ipynb000066400000000000000000000167431410631356500215760ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt\n", "sns.set_theme()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "These examples will use the \"tips\" dataset, which has a mixture of numeric and categorical variables:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "tips.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Passing long-form data and 
assigning ``x`` and ``y`` will draw a scatter plot between two variables:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning a variable to ``hue`` will map its levels to the color of the points:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"time\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning the same variable to ``style`` will also vary the markers and create a more accessible plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"time\", style=\"time\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning ``hue`` and ``style`` to different variables will vary colors and markers independently:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"day\", style=\"time\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If the variable assigned to ``hue`` is numeric, the semantic mapping will be quantitative and use a different default palette:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"size\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Pass the name of a categorical palette or explicit colors (as a Python list or dictionary) to force categorical mapping of the ``hue`` variable:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"size\", palette=\"deep\")" ] }, { "cell_type": "raw", "metadata": {}, 
"source": [ "If there are a large number of unique numeric values, the legend will show a representative, evenly-spaced set:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tip_rate = tips.eval(\"tip / total_bill\").rename(\"tip_rate\")\n", "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\", hue=tip_rate)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A numeric variable can also be assigned to ``size`` to apply a semantic mapping to the areas of the points:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\", hue=\"size\", size=\"size\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Control the range of marker areas with ``sizes``, and set ``legend=\"full\"`` to force every unique value to appear in the legend:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(\n", " data=tips, x=\"total_bill\", y=\"tip\", hue=\"size\", size=\"size\",\n", " sizes=(20, 200), legend=\"full\"\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Pass a tuple of values or a :class:`matplotlib.colors.Normalize` object to ``hue_norm`` to control the quantitative hue mapping:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(\n", " data=tips, x=\"total_bill\", y=\"tip\", hue=\"size\", size=\"size\",\n", " sizes=(20, 200), hue_norm=(0, 7), legend=\"full\"\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Control the specific markers used to map the ``style`` variable by passing a Python list or dictionary of marker codes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "markers = {\"Lunch\": \"s\", \"Dinner\": \"X\"}\n", "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\", style=\"time\", markers=markers)" ] }, { 
"cell_type": "raw", "metadata": {}, "source": [ "Additional keyword arguments are passed to :meth:`matplotlib.axes.Axes.scatter`, allowing you to directly set the attributes of the plot that are not semantically mapped:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.scatterplot(data=tips, x=\"total_bill\", y=\"tip\", s=100, color=\".2\", marker=\"+\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The previous examples used a long-form dataset. When working with wide-form data, each column will be plotted against its index using both ``hue`` and ``style`` mapping:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "index = pd.date_range(\"1 1 2000\", periods=100, freq=\"m\", name=\"date\")\n", "data = np.random.randn(100, 4).cumsum(axis=0)\n", "wide_df = pd.DataFrame(data, index, [\"a\", \"b\", \"c\", \"d\"])\n", "sns.scatterplot(data=wide_df)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Use :func:`relplot` to combine :func:`scatterplot` and :class:`FacetGrid`. This allows grouping within additional categorical variables, and plotting them across multiple subplots.\n", "\n", "Using :func:`relplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of the semantic mappings across facets." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(\n", " data=tips, x=\"total_bill\", y=\"tip\",\n", " col=\"time\", hue=\"day\", style=\"day\",\n", " kind=\"scatter\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/docstrings/set_context.ipynb000066400000000000000000000041201410631356500215530ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "thorough-equipment", "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns" ] }, { "cell_type": "markdown", "id": "canadian-protection", "metadata": {}, "source": [ "Call the function with the name of a context to set the default for all plots:" ] }, { "cell_type": "code", "execution_count": null, "id": "freelance-leonard", "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"notebook\")\n", "sns.lineplot(x=[0, 1, 2], y=[1, 3, 2])" ] }, { "cell_type": "markdown", "id": "studied-adventure", "metadata": {}, "source": [ "You can independently scale the font elements relative to the current context:" ] }, { "cell_type": "code", "execution_count": null, "id": "irish-digest", "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"notebook\", font_scale=1.25)\n", "sns.lineplot(x=[0, 1, 2], y=[1, 3, 2])" ] }, { "cell_type": "markdown", "id": "fourth-technical", "metadata": {}, "source": [ "It is also possible to override some of the parameters with specific values:" ] }, { "cell_type": "code", "execution_count": null, 
"id": "advance-request", "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"notebook\", rc={\"lines.linewidth\": 3})\n", "sns.lineplot(x=[0, 1, 2], y=[1, 3, 2])" ] }, { "cell_type": "code", "execution_count": null, "id": "compatible-string", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 5 } seaborn-0.11.2/doc/docstrings/set_style.ipynb000066400000000000000000000033141410631356500212330ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "practical-announcement", "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns" ] }, { "cell_type": "markdown", "id": "suffering-emerald", "metadata": {}, "source": [ "Call the function with the name of a seaborn style to set the default for all plots:" ] }, { "cell_type": "code", "execution_count": null, "id": "collaborative-struggle", "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"whitegrid\")\n", "sns.barplot(x=[\"A\", \"B\", \"C\"], y=[1, 3, 2])" ] }, { "cell_type": "markdown", "id": "defensive-surgery", "metadata": {}, "source": [ "You can also selectively override seaborn's default parameter values:" ] }, { "cell_type": "code", "execution_count": null, "id": "coastal-sydney", "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"darkgrid\", {\"grid.color\": \".6\", \"grid.linestyle\": \":\"})\n", "sns.lineplot(x=[\"A\", \"B\", \"C\"], y=[1, 3, 2])" ] }, { "cell_type": "code", "execution_count": null, "id": "bright-october", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": 
"seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 5 } seaborn-0.11.2/doc/docstrings/set_theme.ipynb000066400000000000000000000070471410631356500212040ustar00rootroot00000000000000{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "flush-block", "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "id": "remarkable-confirmation", "metadata": {}, "source": [ "By default, seaborn plots will be made with the current values of the matplotlib rcParams:" ] }, { "cell_type": "code", "execution_count": null, "id": "viral-highway", "metadata": {}, "outputs": [], "source": [ "sns.barplot(x=[\"A\", \"B\", \"C\"], y=[1, 3, 2])" ] }, { "cell_type": "markdown", "id": "hungarian-poster", "metadata": {}, "source": [ "Calling this function with no arguments will activate seaborn's \"default\" theme:" ] }, { "cell_type": "code", "execution_count": null, "id": "front-february", "metadata": {}, "outputs": [], "source": [ "sns.set_theme()\n", "sns.barplot(x=[\"A\", \"B\", \"C\"], y=[1, 3, 2])" ] }, { "cell_type": "markdown", "id": "daily-mills", "metadata": {}, "source": [ "Note that this will take effect for *all* matplotlib plots, including those not made using seaborn:" ] }, { "cell_type": "code", "execution_count": null, "id": "essential-replica", "metadata": {}, "outputs": [], "source": [ "plt.bar([\"A\", \"B\", \"C\"], [1, 3, 2])" ] }, { "cell_type": "markdown", "id": "naughty-edgar", "metadata": {}, "source": [ "The seaborn theme is decomposed into several distinct sets of parameters that you can control independently:" ] }, { "cell_type": "code", "execution_count": null, 
"id": "latin-conversion", "metadata": {}, "outputs": [], "source": [ "sns.set_theme(style=\"whitegrid\", palette=\"pastel\")\n", "sns.barplot(x=[\"A\", \"B\", \"C\"], y=[1, 3, 2])" ] }, { "cell_type": "markdown", "id": "durable-cycling", "metadata": {}, "source": [ "Pass `None` to preserve the current values for a given set of parameters:" ] }, { "cell_type": "code", "execution_count": null, "id": "blessed-chuck", "metadata": {}, "outputs": [], "source": [ "sns.set_theme(style=\"white\", palette=None)\n", "sns.barplot(x=[\"A\", \"B\", \"C\"], y=[1, 3, 2])" ] }, { "cell_type": "markdown", "id": "present-writing", "metadata": {}, "source": [ "You can also override any seaborn parameters or define additional parameters that are part of the matplotlib rc system but not included in the seaborn themes:" ] }, { "cell_type": "code", "execution_count": null, "id": "floppy-effectiveness", "metadata": {}, "outputs": [], "source": [ "custom_params = {\"axes.spines.right\": False, \"axes.spines.top\": False}\n", "sns.set_theme(style=\"ticks\", rc=custom_params)\n", "sns.barplot(x=[\"A\", \"B\", \"C\"], y=[1, 3, 2])" ] }, { "cell_type": "code", "execution_count": null, "id": "large-transfer", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 5 } seaborn-0.11.2/doc/index.rst000066400000000000000000000102161410631356500156360ustar00rootroot00000000000000.. raw:: html seaborn: statistical data visualization ======================================= .. raw:: html

Seaborn is a Python data visualization library based on `matplotlib `_. It provides a high-level interface for drawing attractive and informative statistical graphics. For a brief introduction to the ideas behind the library, you can read the :doc:`introductory notes ` or the `paper `_. Visit the :doc:`installation page ` to see how you can download the package and get started with it. You can browse the :doc:`example gallery ` to see some of the things that you can do with seaborn, and then check out the :doc:`tutorial ` or :doc:`API reference ` to find out how. To see the code or report a bug, please visit the `GitHub repository `_. General support questions are most at home on `stackoverflow `_ or `discourse `_, which have dedicated channels for seaborn. .. raw:: html

Contents

.. toctree:: :maxdepth: 1 Introduction Release notes Installing Example gallery Tutorial API reference .. toctree:: :hidden: Citing Archive .. raw:: html

Features

* Relational: :ref:`API ` | :doc:`Tutorial ` * Distribution: :ref:`API ` | :doc:`Tutorial ` * Categorical: :ref:`API ` | :doc:`Tutorial ` * Regression: :ref:`API ` | :doc:`Tutorial ` * Multiples: :ref:`API ` | :doc:`Tutorial ` * Style: :ref:`API ` | :doc:`Tutorial ` * Color: :ref:`API ` | :doc:`Tutorial ` .. raw:: html
seaborn-0.11.2/doc/installing.rst000066400000000000000000000117711410631356500167020ustar00rootroot00000000000000.. _installing: .. currentmodule:: seaborn Installing and getting started ------------------------------ .. raw:: html
Official releases of seaborn can be installed from `PyPI `_:: pip install seaborn The library is also included as part of the `Anaconda `_ distribution:: conda install seaborn Dependencies ~~~~~~~~~~~~ Supported Python versions ^^^^^^^^^^^^^^^^^^^^^^^^^ - Python 3.6+ Required dependencies ^^^^^^^^^^^^^^^^^^^^^^ If not already present, these libraries will be downloaded when you install seaborn. - `numpy `__ - `scipy `__ - `pandas `__ - `matplotlib `__ Optional dependencies ^^^^^^^^^^^^^^^^^^^^^ - `statsmodels `__, for advanced regression plots - `fastcluster `__, for clustering large matrices Quickstart ~~~~~~~~~~ Once you have seaborn installed, you're ready to get started. To test it out, you could load and plot one of the example datasets:: import seaborn as sns df = sns.load_dataset("penguins") sns.pairplot(df, hue="species") If you're working in a Jupyter notebook or an IPython terminal with `matplotlib mode `_ enabled, you should immediately see :ref:`the plot `. Otherwise, you may need to explicitly call :func:`matplotlib.pyplot.show`:: import matplotlib.pyplot as plt plt.show() While you can get pretty far with only seaborn imported, having access to matplotlib functions is often useful. The tutorials and API documentation typically assume the following imports:: import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt Debugging install issues ~~~~~~~~~~~~~~~~~~~~~~~~ The seaborn codebase is pure Python, and the library should generally install without issue. Occasionally, difficulties will arise because the dependencies include compiled code and link to system libraries. These difficulties typically manifest as errors on import with messages such as ``"DLL load failed"``. To debug such problems, read through the exception trace to figure out which specific library failed to import, and then consult the installation docs for that package to see if they have tips for your particular system. 
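When tracing such an import failure, it can help to check each dependency one at a time. Here is a small, illustrative sketch of how you might do that; the ``check_imports`` helper is ours for illustration and not part of seaborn::

```python
import importlib

# Illustrative helper (not part of seaborn): try importing each dependency
# in turn and record which one fails, instead of stopping at the first error.
def check_imports(modules=("numpy", "scipy", "pandas", "matplotlib", "seaborn")):
    results = {}
    for name in modules:
        try:
            importlib.import_module(name)
        except Exception as exc:  # record the failure rather than crashing
            results[name] = f"{type(exc).__name__}: {exc}"
        else:
            results[name] = "OK"
    return results

print(check_imports())
```

The name of the first module that does not report ``OK`` tells you whose installation docs to consult.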
In some cases, an installation of seaborn will appear to succeed, but trying to import it will raise an error with the message ``"No module named seaborn"``. This usually means that you have multiple Python installations on your system and that your ``pip`` or ``conda`` points towards a different installation than where your interpreter lives. Resolving this issue will involve sorting out the paths on your system, but it can sometimes be avoided by invoking ``pip`` with ``python -m pip install seaborn``. Getting help ~~~~~~~~~~~~ If you think you've encountered a bug in seaborn, please report it on the `GitHub issue tracker `_. To be useful, bug reports must include the following information: - A reproducible code example that demonstrates the problem - The output that you are seeing (an image of a plot, or the error message) - A clear explanation of why you think something is wrong - The specific versions of seaborn and matplotlib that you are working with Bug reports are easiest to address if they can be demonstrated using one of the example datasets from the seaborn docs (i.e. with :func:`load_dataset`). Otherwise, it is preferable that your example generate synthetic data to reproduce the problem. If you can only demonstrate the issue with your actual dataset, you will need to share it, ideally as a csv. If you've encountered an error, searching the specific text of the message before opening a new issue can often help you solve the problem quickly and avoid making a duplicate report. Because matplotlib handles the actual rendering, errors or incorrect outputs may be due to a problem in matplotlib rather than one in seaborn. It can save time if you try to reproduce the issue in an example that uses only matplotlib, so that you can report it in the right place. But it is alright to skip this step if it's not obvious how to do it. 
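A matplotlib-only reproduction of the kind described above might look like the following sketch, which uses randomly generated data so that anyone can run it unchanged (the output filename is arbitrary):

```python
# Minimal, self-contained reproduction using only matplotlib (no seaborn),
# with randomly generated data so that others can run it as-is.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; omit to use your default
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)
x, y = rng.normal(size=(2, 50))

fig, ax = plt.subplots()
ax.scatter(x, y)
ax.set(xlabel="x", ylabel="y", title="matplotlib-only reproduction")
fig.savefig("repro.png")  # attach this image (or the traceback) to the report
```

If the problem still appears with a script like this, it belongs on the matplotlib tracker; if it only appears once seaborn is involved, report it to seaborn.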
General support questions are more at home on either `stackoverflow `_ or `discourse `_, which have a larger audience of people who will see your post and may be able to offer assistance. StackOverflow is better for specific issues, while discourse is better for more open-ended discussion. Your chance of getting a quick answer will be higher if you include `runnable code `_, a precise statement of what you are hoping to achieve, and a clear explanation of the problems that you have encountered. .. raw:: html
seaborn-0.11.2/doc/introduction.ipynb000066400000000000000000000443111410631356500175640ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _introduction:\n", "\n", ".. currentmodule:: seaborn\n", "\n", "An introduction to seaborn\n", "==========================\n", "\n", ".. raw:: html\n", "\n", "
\n", "\n", "Seaborn is a library for making statistical graphics in Python. It builds on top of `matplotlib `_ and integrates closely with `pandas `_ data structures.\n", "\n", "Seaborn helps you explore and understand your data. Its plotting functions operate on dataframes and arrays containing whole datasets and internally perform the necessary semantic mapping and statistical aggregation to produce informative plots. Its dataset-oriented, declarative API lets you focus on what the different elements of your plots mean, rather than on the details of how to draw them.\n", "\n", "Our first seaborn plot\n", "----------------------\n", "\n", "Here's an example of what seaborn can do:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import seaborn\n", "import seaborn as sns\n", "\n", "# Apply the default theme\n", "sns.set_theme()\n", "\n", "# Load an example dataset\n", "tips = sns.load_dataset(\"tips\")\n", "\n", "# Create a visualization\n", "sns.relplot(\n", " data=tips,\n", " x=\"total_bill\", y=\"tip\", col=\"time\",\n", " hue=\"smoker\", style=\"smoker\", size=\"size\",\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A few things have happened here. Let's go through them one by one:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "# Import seaborn\n", "import seaborn as sns" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Seaborn is the only library we need to import for this simple example. By convention, it is imported with the shorthand ``sns``.\n", "\n", "Behind the scenes, seaborn uses matplotlib to draw its plots. For interactive work, it's recommended to use a Jupyter/IPython interface in `matplotlib mode `_, or else you'll have to call :func:`matplotlib.pyplot.show` when you want to see the plot." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "# Apply the default theme\n", "sns.set_theme()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This uses the :ref:`matplotlib rcParam system ` and will affect how all matplotlib plots look, even if you don't make them with seaborn. Beyond the default theme, there are :doc:`several other options `, and you can independently control the style and scaling of the plot to quickly translate your work between presentation contexts (e.g., making a version of your figure that will have readable fonts when projected during a talk). If you like the matplotlib defaults or prefer a different theme, you can skip this step and still use the seaborn plotting functions." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "# Load an example dataset\n", "tips = sns.load_dataset(\"tips\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Most code in the docs will use the :func:`load_dataset` function to get quick access to an example dataset. There's nothing special about these datasets: they are just pandas dataframes, and we could have loaded them with :func:`pandas.read_csv` or built them by hand. Most of the examples in the documentation will specify data using pandas dataframes, but seaborn is very flexible about the :doc:`data structures ` that it accepts." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "# Create a visualization\n", "sns.relplot(\n", " data=tips,\n", " x=\"total_bill\", y=\"tip\", col=\"time\",\n", " hue=\"smoker\", style=\"smoker\", size=\"size\",\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This plot shows the relationship between five variables in the tips dataset using a single call to the seaborn function :func:`relplot`. 
Notice how we provided only the names of the variables and their roles in the plot. Unlike when using matplotlib directly, it wasn't necessary to specify attributes of the plot elements in terms of the color values or marker codes. Behind the scenes, seaborn handled the translation from values in the dataframe to arguments that matplotlib understands. This declarative approach lets you stay focused on the questions that you want to answer, rather than on the details of how to control matplotlib.\n", "\n", ".. _intro_api_abstraction:\n", "\n", "API abstraction across visualizations\n", "-------------------------------------\n", "\n", "There is no universally best way to visualize data. Different questions are best answered by different plots. Seaborn makes it easy to switch between different visual representations by using a consistent dataset-oriented API.\n", "\n", "The function :func:`relplot` is named that way because it is designed to visualize many different statistical *relationships*. While scatter plots are often effective, relationships where one variable represents a measure of time are better represented by a line. The :func:`relplot` function has a convenient ``kind`` parameter that lets you easily switch to this alternate representation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dots = sns.load_dataset(\"dots\")\n", "sns.relplot(\n", " data=dots, kind=\"line\",\n", " x=\"time\", y=\"firing_rate\", col=\"align\",\n", " hue=\"choice\", size=\"coherence\", style=\"choice\",\n", " facet_kws=dict(sharex=False),\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Notice how the ``size`` and ``style`` parameters are used in both the scatter and line plots, but they affect the two visualizations differently: changing the marker area and symbol in the scatter plot vs the line width and dashing in the line plot. 
We did not need to keep those details in mind, letting us focus on the overall structure of the plot and the information we want it to convey.\n", "\n", ".. _intro_stat_estimation:\n", "\n", "Statistical estimation and error bars\n", "-------------------------------------\n", "\n", "Often, we are interested in the *average* value of one variable as a function of other variables. Many seaborn functions will automatically perform the statistical estimation that is necessary to answer these questions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fmri = sns.load_dataset(\"fmri\")\n", "sns.relplot(\n", " data=fmri, kind=\"line\",\n", " x=\"timepoint\", y=\"signal\", col=\"region\",\n", " hue=\"event\", style=\"event\",\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When statistical values are estimated, seaborn will use bootstrapping to compute confidence intervals and draw error bars representing the uncertainty of the estimate.\n", "\n", "Statistical estimation in seaborn goes beyond descriptive statistics. For example, it is possible to enhance a scatterplot by including a linear regression model (and its uncertainty) using :func:`lmplot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(data=tips, x=\"total_bill\", y=\"tip\", col=\"time\", hue=\"smoker\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _intro_distributions:\n", "\n", "\n", "Informative distributional summaries\n", "------------------------------------\n", "\n", "Statistical analyses require knowledge about the distribution of variables in your dataset. The seaborn function :func:`displot` supports several approaches to visualizing distributions. 
These include classic techniques like histograms and computationally-intensive approaches like kernel density estimation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=tips, x=\"total_bill\", col=\"time\", kde=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Seaborn also tries to promote techniques that are powerful but less familiar, such as calculating and plotting the empirical cumulative distribution function of the data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=tips, kind=\"ecdf\", x=\"total_bill\", col=\"time\", hue=\"smoker\", rug=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _intro_categorical:\n", "\n", "Specialized plots for categorical data\n", "--------------------------------------\n", "\n", "Several specialized plot types in seaborn are oriented towards visualizing categorical data. They can be accessed through :func:`catplot`. These plots offer different levels of granularity. 
At the finest level, you may wish to see every observation by drawing a \"swarm\" plot: a scatter plot that adjusts the positions of the points along the categorical axis so that they don't overlap:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(data=tips, kind=\"swarm\", x=\"day\", y=\"total_bill\", hue=\"smoker\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Alternately, you could use kernel density estimation to represent the underlying distribution that the points are sampled from:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(data=tips, kind=\"violin\", x=\"day\", y=\"total_bill\", hue=\"smoker\", split=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Or you could show only the mean value and its confidence interval within each nested category:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(data=tips, kind=\"bar\", x=\"day\", y=\"total_bill\", hue=\"smoker\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _intro_dataset_funcs:\n", "\n", "Composite views onto multivariate datasets\n", "------------------------------------------\n", "\n", "Some seaborn functions combine multiple kinds of plots to quickly give informative summaries of a dataset. One, :func:`jointplot`, focuses on a single relationship. 
It plots the joint distribution between two variables along with each variable's marginal distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "sns.jointplot(data=penguins, x=\"flipper_length_mm\", y=\"bill_length_mm\", hue=\"species\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The other, :func:`pairplot`, takes a broader view: it shows joint and marginal distributions for all pairwise relationships and for each variable, respectively:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(data=penguins, hue=\"species\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _intro_figure_classes:\n", "\n", "Classes and functions for making complex graphics\n", "-------------------------------------------------\n", "\n", "These tools work by combining :doc:`axes-level ` plotting functions with objects that manage the layout of the figure, linking the structure of a dataset to a :doc:`grid of axes `. Both elements are part of the public API, and you can use them directly to create complex figures with only a few more lines of code:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins, hue=\"species\", corner=True)\n", "g.map_lower(sns.kdeplot, hue=None, levels=5, color=\".2\")\n", "g.map_lower(sns.scatterplot, marker=\"+\")\n", "g.map_diag(sns.histplot, element=\"step\", linewidth=0, kde=True)\n", "g.add_legend(frameon=True)\n", "g.legend.set_bbox_to_anchor((.61, .6))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. 
_intro_defaults:\n", "\n", "Opinionated defaults and flexible customization\n", "-----------------------------------------------\n", "\n", "Seaborn creates complete graphics with a single function call: when possible, its functions will automatically add informative axis labels and legends that explain the semantic mappings in the plot.\n", "\n", "In many cases, seaborn will also choose default values for its parameters based on characteristics of the data. For example, the :doc:`color mappings ` that we have seen so far used distinct hues (blue, orange, and sometimes green) to represent different levels of the categorical variables assigned to ``hue``. When mapping a numeric variable, some functions will switch to a continuous gradient:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(\n", " data=penguins,\n", " x=\"bill_length_mm\", y=\"bill_depth_mm\", hue=\"body_mass_g\"\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When you're ready to share or publish your work, you'll probably want to polish the figure beyond what the defaults achieve. Seaborn allows for several levels of customization. It defines multiple built-in :doc:`themes ` that apply to all figures, its functions have standardized parameters that can modify the semantic mappings for each plot, and additional keyword arguments are passed down to the underlying matplotlib artists, allowing even more control. 
Once you've created a plot, its properties can be modified both through the seaborn API and by dropping down to the matplotlib layer for fine-grained tweaking:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_theme(style=\"ticks\", font_scale=1.25)\n", "g = sns.relplot(\n", " data=penguins,\n", " x=\"bill_length_mm\", y=\"bill_depth_mm\", hue=\"body_mass_g\",\n", " palette=\"crest\", marker=\"x\", s=100,\n", ")\n", "g.set_axis_labels(\"Bill length (mm)\", \"Bill depth (mm)\", labelpad=10)\n", "g.legend.set_title(\"Body mass (g)\")\n", "g.figure.set_size_inches(6.5, 4.5)\n", "g.ax.margins(.15)\n", "g.despine(trim=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _intro_matplotlib:\n", "\n", "Relationship to matplotlib\n", "--------------------------\n", "\n", "Seaborn's integration with matplotlib allows you to use it across the many environments that matplotlib supports, including exploratory analysis in notebooks, real-time interaction in GUI applications, and archival output in a number of raster and vector formats.\n", "\n", "While you can be productive using only seaborn functions, full customization of your graphics will require some knowledge of matplotlib's concepts and API. One aspect of the learning curve for new users of seaborn will be knowing when dropping down to the matplotlib layer is necessary to achieve a particular customization. On the other hand, users coming from matplotlib will find that much of their knowledge transfers.\n", "\n", "Matplotlib has a comprehensive and powerful API; just about any attribute of the figure can be changed to your liking. A combination of seaborn's high-level interface and matplotlib's deep customizability will allow you both to quickly explore your data and to create graphics that can be tailored into a `publication quality `_ final product." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. 
_intro_next_steps:\n", "\n", "Next steps\n", "----------\n", "\n", "You have a few options for where to go next. You might first want to learn how to :doc:`install seaborn `. Once that's done, you can browse the :doc:`example gallery ` to get a broader sense for what kind of graphics seaborn can produce. Or you can read through the :doc:`user guide and tutorial ` for a deeper discussion of the different tools and what they are designed to accomplish. If you have a specific plot in mind and want to know how to make it, you could check out the :doc:`API reference `, which documents each function's parameters and shows many examples to illustrate usage." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", " \n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/matplotlibrc000066400000000000000000000000251410631356500164110ustar00rootroot00000000000000savefig.bbox : tight seaborn-0.11.2/doc/releases/000077500000000000000000000000001410631356500156005ustar00rootroot00000000000000seaborn-0.11.2/doc/releases/v0.10.0.txt000066400000000000000000000043751410631356500172540ustar00rootroot00000000000000 v0.10.0 (January 2020) ---------------------- This is a major update that is being released simultaneously with version 0.9.1. It has all of the same features (and bugs!) as 0.9.1, but there are important changes to the dependencies. Most notably, all support for Python 2 has now been dropped. Support for Python 3.5 has also been dropped. Seaborn is now strictly compatible with Python 3.6+. Minimally supported versions of the dependent PyData libraries have also been increased, in some cases substantially. While seaborn has tended to be very conservative about maintaining compatibility with older dependencies, this was causing increasing pain during development. At the same time, these libraries are now much easier to install. Going forward, seaborn will likely stay close to the `Numpy community guidelines `_ for version support. This release also removes a few previously-deprecated features: - The ``tsplot`` function and ``seaborn.timeseries`` module have been removed. Recall that ``tsplot`` was replaced with :func:`lineplot`. - The ``seaborn.apionly`` entry-point has been removed. - The ``seaborn.linearmodels`` module (previously renamed to ``seaborn.regression``) has been removed. 
Looking forward ~~~~~~~~~~~~~~~ Now that seaborn is a Python 3 library, it can take advantage of `keyword-only arguments `_. It is likely that future versions will introduce this syntax, potentially in a breaking way. For guidance, most seaborn functions have a signature that looks like :: func(x, y, ..., data=None, **kwargs) where the ``**kwargs`` are specified in the function. Going forward, it will likely be necessary to specify ``data`` and all subsequent arguments with an explicit ``key=value`` mapping. This style has long been used throughout the documentation, and the formal requirement will not be introduced until at least the next major release. Adding this feature will make it possible to enhance some older functions with more modern capabilities (e.g., adding a native ``hue`` semantic within functions like :func:`jointplot` and :func:`regplot`) and will allow parameters that control new features to be situated near related parameters, making them more discoverable. seaborn-0.11.2/doc/releases/v0.10.1.txt000066400000000000000000000023221410631356500172430ustar00rootroot00000000000000 v0.10.1 (April 2020) -------------------- This is a minor release with bug fixes for issues identified since 0.10.0. - Fixed a bug that appeared within the bootstrapping algorithm on 32-bit systems. - Fixed a bug where :func:`regplot` would crash on singleton inputs. Now a crash is avoided and regression estimation/plotting is skipped. - Fixed a bug where :func:`heatmap` would ignore user-specified under/over/bad values when recentering a colormap. - Fixed a bug where :func:`heatmap` would use values from masked cells when computing default colormap limits. - Fixed a bug where :func:`despine` would cause an error when trying to trim spines on a matplotlib categorical axis. - Adapted to a change in matplotlib that caused problems with single swarm plots. 
- Added the ``showfliers`` parameter to :func:`boxenplot` to suppress plotting of outlier data points, matching the API of :func:`boxplot`. - Avoided an error from statsmodels when data with an IQR of 0 is passed to :func:`kdeplot`. - Added the ``legend.title_fontsize`` to the :func:`plotting_context` definition. - Deprecated several utility functions that are no longer used internally (``percentiles``, ``sig_stars``, ``pmf_hist``, and ``sort_df``). seaborn-0.11.2/doc/releases/v0.11.0.txt000066400000000000000000000411541410631356500172510ustar00rootroot00000000000000 v0.11.0 (September 2020) ------------------------ This is a major release with several important new features, enhancements to existing functions, and changes to the library. Highlights include an overhaul and modernization of the distributions plotting functions, more flexible data specification, new colormaps, and better narrative documentation. For an overview of the new features and a guide to updating, see `this Medium post `_. Required keyword arguments ~~~~~~~~~~~~~~~~~~~~~~~~~~ |API| Most plotting functions now require all of their parameters to be specified using keyword arguments. To ease adaptation, code without keyword arguments will trigger a ``FutureWarning`` in v0.11. In a future release (v0.12 or v0.13, depending on release cadence), this will become an error. Once keyword arguments are fully enforced, the signature of the plotting functions will be reorganized to accept ``data`` as the first and only positional argument (:pr:`2052,2081`). Modernization of distribution functions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The distribution module has been completely overhauled, modernizing the API and introducing several new functions and features within existing functions. Some new features are explained here; the :doc:`tutorial documentation ` has also been rewritten and serves as a good introduction to the functions. 
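The keyword-argument requirement described above can be illustrated with a short sketch. The data here is constructed by hand purely for illustration; it is not taken from the release notes:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, for illustration only
import seaborn as sns

# A tiny hand-made stand-in for a real dataset.
df = pd.DataFrame({"total_bill": [16.99, 10.34, 21.01], "tip": [1.01, 1.66, 3.50]})

# Positional style that v0.11 deprecates (would emit a FutureWarning):
#   sns.scatterplot(df["total_bill"], df["tip"])

# Keyword-only style that the reorganized signatures are built around:
ax = sns.scatterplot(data=df, x="total_bill", y="tip")
```

Writing calls this way now means they will keep working once the positional form becomes an error.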
New plotting functions
^^^^^^^^^^^^^^^^^^^^^^

|Feature| |Enhancement| First, three new functions, :func:`displot`, :func:`histplot`, and :func:`ecdfplot`, have been added (:pr:`2157`, :pr:`2125`, :pr:`2141`).

The figure-level :func:`displot` function is an interface to the various distribution plots (analogous to :func:`relplot` or :func:`catplot`). It can draw univariate or bivariate histograms, density curves, ECDFs, and rug plots on a :class:`FacetGrid`.

The axes-level :func:`histplot` function draws univariate or bivariate histograms with a number of features, including:

- mapping multiple distributions with a ``hue`` semantic
- normalization to show density, probability, or frequency statistics
- flexible parameterization of bin size, including proper bins for discrete variables
- adding a KDE fit to show a smoothed distribution over all bin statistics
- experimental support for histograms over categorical and datetime variables

The axes-level :func:`ecdfplot` function draws univariate empirical cumulative distribution functions, using a similar interface.

Changes to existing functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

|API| |Feature| |Enhancement| |Defaults| Second, the existing functions :func:`kdeplot` and :func:`rugplot` have been completely overhauled (:pr:`2060,2104`).

The overhauled functions now share a common API with the rest of seaborn; they can show conditional distributions by mapping a third variable with a ``hue`` semantic, and they have been improved in numerous other ways. The GitHub pull request (:pr:`2104`) has a longer explanation of the changes and the motivation behind them.

This is a necessarily API-breaking change. The parameter names for the positional variables are now ``x`` and ``y``, and the old names have been deprecated. Efforts were made to handle and warn when using the deprecated API, but it is strongly suggested to check your plots carefully.

Additionally, the statsmodels-based computation of the KDE has been removed.
Because there were some inconsistencies between the way that different parameters (specifically, ``bw``, ``clip``, and ``cut``) were implemented by each backend, this may cause plots to look different when non-default values are used. Support for using non-Gaussian kernels, which was available only in the statsmodels backend, has been removed.

Other new features include:

- several options for representing multiple densities (using the ``multiple`` and ``common_norm`` parameters)
- weighted density estimation (using the new ``weights`` parameter)
- better control over the smoothing bandwidth (using the new ``bw_adjust`` parameter)
- more meaningful parameterization of the contours that represent a bivariate density (using the ``thresh`` and ``levels`` parameters)
- log-space density estimation (using the new ``log_scale`` parameter, or by scaling the data axis before plotting)
- "bivariate" rug plots with a single function call (by assigning both ``x`` and ``y``)

Deprecations
^^^^^^^^^^^^

|API| Finally, the :func:`distplot` function is now formally deprecated. Its features have been subsumed by :func:`displot` and :func:`histplot`. Some effort was made to gradually transition :func:`distplot` by adding the features of :func:`displot` to it while handling backwards compatibility, but this proved to be too difficult. The similarity in the names will likely cause some confusion during the transition, which is regrettable.

Related enhancements and changes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

|API| |Feature| |Enhancement| |Defaults| These additions facilitated new features (and forced changes) in :func:`jointplot` and :class:`JointGrid` (:pr:`2210`) and in :func:`pairplot` and :class:`PairGrid` (:pr:`2234`).

- Added support for the ``hue`` semantic in :func:`jointplot`/:class:`JointGrid`. This support is lightweight and simply delegates the mapping to the underlying axes-level functions.
- Delegated the handling of ``hue`` in :class:`PairGrid`/:func:`pairplot` to the plotting function when it understands ``hue``, meaning that (1) the zorder of scatterplot points will be determined by their row in the dataframe, (2) additional options for resolving hue (e.g. the ``multiple`` parameter) can be used, and (3) numeric hue variables can be naturally mapped when using :func:`scatterplot`.

- Added ``kind="hist"`` to :func:`jointplot`, which draws a bivariate histogram on the joint axes and univariate histograms on the marginal axes, as well as both ``kind="hist"`` and ``kind="kde"`` to :func:`pairplot`, which behaves likewise.

- The various modes of :func:`jointplot` that plot marginal histograms now use :func:`histplot` rather than :func:`distplot`. This slightly changes the default appearance and affects the valid keyword arguments that can be passed to customize the plot. Likewise, the marginal histogram plots in :func:`pairplot` now use :func:`histplot`.

Standardization and enhancements of data ingest
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

|Feature| |Enhancement| |Docs| The code that processes input data has been refactored and enhanced. In v0.11, this new code takes effect for the relational and distribution modules; other modules will be refactored to use it in future releases (:pr:`2071`). These changes should be transparent for most use cases, although they allow a few new features:

- Named variables for long-form data can refer to the named index of a :class:`pandas.DataFrame` or to levels in the case of a multi-index. Previously, it was necessary to call :meth:`pandas.DataFrame.reset_index` before using index variables (e.g., after a groupby operation).

- :func:`relplot` now has the same flexibility as the axes-level functions to accept data in long- or wide-format and to accept data vectors (rather than named variables) in long-form mode.

- The ``data`` parameter can now be a Python ``dict`` or an object that implements that interface.
  This is a new feature for wide-form data. For long-form data, it was previously supported but not documented.

- A wide-form data object can have a mixture of types; the non-numeric types will be removed before plotting. Previously, this caused an error.

- There are better error messages for other instances of data mis-specification.

See the new user guide chapter on :doc:`data formats ` for more information about what is supported.

Other changes
~~~~~~~~~~~~~

Documentation improvements
^^^^^^^^^^^^^^^^^^^^^^^^^^

- |Docs| Added two new chapters to the user guide, one giving an overview of the :doc:`types of functions in seaborn `, and one discussing the different :doc:`data formats ` that seaborn understands.

- |Docs| Expanded the :doc:`color palette tutorial ` to give more background on color theory and better motivate the use of color in statistical graphics.

- |Docs| Added more information to the :doc:`installation guidelines ` and streamlined the :doc:`introduction ` page.

- |Docs| Improved cross-linking within the seaborn docs and between the seaborn and matplotlib docs.

Theming
^^^^^^^

- |API| The :func:`set` function has been renamed to :func:`set_theme` for more clarity about what it does. For the foreseeable future, :func:`set` will remain as an alias, but it is recommended to update your code.

Relational plots
^^^^^^^^^^^^^^^^

- |Enhancement| |Defaults| Reduced some of the surprising behavior of relational plot legends when using a numeric hue or size mapping (:pr:`2229`):

  - Added an "auto" mode (the new default) that chooses between "brief" and "full" legends based on the number of unique levels of each variable.

  - Modified the ticking algorithm for a "brief" legend to show up to 6 values and not to show values outside the limits of the data.
  - Changed the approach to the legend title: the normal matplotlib legend title is used when only one variable is assigned a semantic mapping, whereas the old approach of adding an invisible legend artist with a subtitle label is used only when multiple semantic variables are defined.

  - Modified legend subtitles to be left-aligned and to be drawn in the default legend title font size.

- |Enhancement| |Defaults| Changed how functions that use different representations for numeric and categorical data handle vectors with an ``object`` data type. Previously, data was considered numeric if it could be coerced to a float representation without error. Now, object-typed vectors are considered numeric only when their contents are themselves numeric. As a consequence, numbers that are encoded as strings will now be treated as categorical data (:pr:`2084`).

- |Enhancement| |Defaults| Plots with a ``style`` semantic can now generate an infinite number of unique dashes and/or markers by default. Previously, an error would be raised if the ``style`` variable had more levels than could be mapped using the default lists. The existing defaults were slightly modified as part of this change; if you need to exactly reproduce plots from earlier versions, refer to the `old defaults `_ (:pr:`2075`).

- |Defaults| Changed how :func:`scatterplot` sets the default linewidth for the edges of the scatter points. The new behavior is to scale with the point sizes themselves (on a plot-wise, not point-wise, basis). This change also slightly reduces the default width when point sizes are not varied. Set ``linewidth=0.75`` to reproduce the previous behavior (:pr:`2708`).

- |Enhancement| Improved support for datetime variables in :func:`scatterplot` and :func:`lineplot` (:pr:`2138`).

- |Fix| Fixed a bug where :func:`lineplot` did not pass the ``linestyle`` parameter down to matplotlib (:pr:`2095`).
- |Fix| Adapted to a change in matplotlib that prevented passing vectors of literal values to ``c`` and ``s`` in :func:`scatterplot` (:pr:`2079`).

Categorical plots
^^^^^^^^^^^^^^^^^

- |Enhancement| |Defaults| |Fix| Fixed a few computational issues in :func:`boxenplot` and improved its visual appearance (:pr:`2086`):

  - Changed the default method for computing the number of boxes to ``k_depth="tukey"``, as the previous default (``k_depth="proportion"``) is based on a heuristic that produces too many boxes for small datasets.

  - Added the option to specify the exact number of boxes (e.g. ``k_depth=6``) or to plot boxes that will cover most of the data points (``k_depth="full"``).

  - Added a new parameter, ``trust_alpha``, to control the number of boxes when ``k_depth="trustworthy"``.

  - Changed the visual appearance of :func:`boxenplot` to more closely resemble :func:`boxplot`. Notably, thin boxes will remain visible when the edges are white.

- |Enhancement| Allowed :func:`catplot` to use different values on the categorical axis of each facet when axis sharing is turned off (e.g. by specifying ``sharex=False``) (:pr:`2196`).

- |Enhancement| Improved the error messages produced when categorical plots process the orientation parameter.

- |Enhancement| Added an explicit warning in :func:`swarmplot` when more than 5% of the points overlap in the "gutters" of the swarm (:pr:`2045`).

Multi-plot grids
^^^^^^^^^^^^^^^^

- |Feature| |Enhancement| |Defaults| A few small changes to make life easier when using :class:`PairGrid` (:pr:`2234`):

  - Added public access to the legend object through the ``legend`` attribute (this also affects :class:`FacetGrid`).

  - The ``color`` and ``label`` parameters are no longer passed to the plotting functions when ``hue`` is not used.

  - The data is no longer converted to a numpy object before plotting on the marginal axes.

  - It is possible to specify only one of ``x_vars`` or ``y_vars``, using all variables for the unspecified dimension.
  - The ``layout_pad`` parameter is stored and used every time you call the :meth:`PairGrid.tight_layout` method.

- |Feature| Added a ``tight_layout`` method to :class:`FacetGrid` and :class:`PairGrid`, which runs the :func:`matplotlib.pyplot.tight_layout` algorithm without interference from the external legend (:pr:`2073`).

- |Feature| Added the ``axes_dict`` attribute to :class:`FacetGrid` for named access to the component axes (:pr:`2046`).

- |Enhancement| Made :meth:`FacetGrid.set_axis_labels` clear labels from "interior" axes (:pr:`2046`).

- |Feature| Added the ``marginal_ticks`` parameter to :class:`JointGrid` which, if set to ``True``, will show ticks on the count/density axis of the marginal plots (:pr:`2210`).

- |Enhancement| Improved :meth:`FacetGrid.set_titles` with ``margin_titles=True``, such that texts representing the original row titles are removed before adding new ones (:pr:`2083`).

- |Defaults| Changed the default value for ``dropna`` to ``False`` in :class:`FacetGrid`, :class:`PairGrid`, :class:`JointGrid`, and corresponding functions. As all or nearly all seaborn and matplotlib plotting functions handle missing data well, this option is no longer useful, but it causes problems in some edge cases. It may be deprecated in the future (:pr:`2204`).

- |Fix| Fixed a bug in :class:`PairGrid` that appeared when setting ``corner=True`` and ``despine=False`` (:pr:`2203`).

Color palettes
^^^^^^^^^^^^^^

- |Docs| Improved and modernized the :doc:`color palettes chapter ` of the seaborn tutorial.

- |Feature| Added two new perceptually-uniform colormaps: "flare" and "crest". The new colormaps are similar to "rocket" and "mako", but their luminance range is reduced. This makes them well suited to numeric mappings of line or scatter plots, which need contrast with the axes background at the extremes (:pr:`2237`).
- |Enhancement| |Defaults| Enhanced numeric colormap functionality in several ways (:pr:`2237`):

  - Added string-based access within the :func:`color_palette` interface to :func:`dark_palette`, :func:`light_palette`, and :func:`blend_palette`. This means that anywhere you specify a palette in seaborn, a name like ``"dark:blue"`` will use :func:`dark_palette` with the input ``"blue"``.

  - Added the ``as_cmap`` parameter to :func:`color_palette` and changed the internal code that uses a continuous colormap to take this route.

  - Tweaked the :func:`light_palette` and :func:`dark_palette` functions to use an endpoint that is a very desaturated version of the input color, rather than a pure gray. This produces smoother ramps. To exactly reproduce previous plots, use :func:`blend_palette` with ``".13"`` for dark or ``".95"`` for light.

  - Changed :func:`diverging_palette` to have a default value of ``sep=1``, which gives better results.

- |Enhancement| Added a rich HTML representation to the object returned by :func:`color_palette` (:pr:`2225`).

- |Fix| Fixed the ``"{palette}_d"`` logic to modify reversed colormaps and to use the correct direction of the luminance ramp in both cases.

Deprecations and removals
^^^^^^^^^^^^^^^^^^^^^^^^^

- |Enhancement| Removed an optional (and undocumented) dependency on BeautifulSoup in :func:`get_dataset_names` (:pr:`2190`).

- |API| Deprecated the ``axlabel`` function; use ``ax.set(xlabel=, ylabel=)`` instead.

- |API| Deprecated the ``iqr`` function; use :func:`scipy.stats.iqr` instead.

- |API| Final removal of the previously-deprecated ``annotate`` method on :class:`JointGrid`, along with related parameters.

- |API| Final removal of the ``lvplot`` function (the previously-deprecated name for :func:`boxenplot`).
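A brief, hedged sketch of the string-based palette access described under "Color palettes" above (the specific color and palette names are only illustrative):

```python
import seaborn as sns
from matplotlib.colors import Colormap

# "dark:blue" routes through dark_palette() with "blue" as the endpoint;
# the same string works anywhere a palette name is accepted
pal = sns.color_palette("dark:blue", n_colors=5)

# With as_cmap=True, a matplotlib Colormap is returned instead of a
# discrete list of RGB tuples
cmap = sns.color_palette("flare", as_cmap=True)
```

The ``"light:"`` and ``"blend:"`` prefixes work analogously, routing through :func:`light_palette` and :func:`blend_palette`.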
v0.11.1 (December 2020)
-----------------------

This is a bug fix release and is a recommended upgrade for all users on v0.11.0.

- |Enhancement| Reduced the use of matplotlib global state in the :ref:`multi-grid classes ` (:pr:`2388`).

- |Fix| Restored support for using tuples or numeric keys to reference fields in a long-form `data` object (:pr:`2386`).

- |Fix| Fixed a bug in :func:`lineplot` where NAs were propagating into the confidence interval, sometimes erasing it from the plot (:pr:`2273`).

- |Fix| Fixed a bug in :class:`PairGrid`/:func:`pairplot` where diagonal axes would be empty when the grid was not square and the diagonal axes did not contain the marginal plots (:pr:`2270`).

- |Fix| Fixed a bug in :class:`PairGrid`/:func:`pairplot` where off-diagonal plots would not appear when column names in `data` had non-string type (:pr:`2368`).

- |Fix| Fixed a bug where categorical dtype information was ignored when data consisted of boolean or boolean-like values (:pr:`2379`).

- |Fix| Fixed a bug in :class:`FacetGrid` where interior tick labels would be hidden when only the orthogonal axis was shared (:pr:`2347`).

- |Fix| Fixed a bug in :class:`FacetGrid` that caused an error when `legend_out=False` was set (:pr:`2304`).

- |Fix| Fixed a bug in :func:`kdeplot` where ``common_norm=True`` was ignored if ``hue`` was not assigned (:pr:`2378`).

- |Fix| Fixed a bug in :func:`displot` where the ``row_order`` and ``col_order`` parameters were not used (:pr:`2262`).

- |Fix| Fixed a bug in :class:`PairGrid`/:func:`pairplot` that caused an exception when using `corner=True` and `diag_kind=None` (:pr:`2382`).

- |Fix| Fixed a bug in :func:`clustermap` where `annot=False` was ignored (:pr:`2323`).

- |Fix| Fixed a bug in :func:`clustermap` where row/col color annotations could not have a categorical dtype (:pr:`2389`).
- |Fix| Fixed a bug in :func:`boxenplot` where the `linewidth` parameter was ignored (:pr:`2287`).

- |Fix| Raise a more informative error in :class:`PairGrid`/:func:`pairplot` when no variables can be found to define the rows/columns of the grid (:pr:`2382`).

- |Fix| Raise a more informative error from :func:`clustermap` if row/col color objects have a semantic index but the data object does not (:pr:`2313`).

v0.11.2 (August 2021)
---------------------

This is a minor release that addresses issues in the v0.11 series and adds a small number of targeted enhancements. It is a recommended upgrade for all users.

- |Docs| A paper describing seaborn has been published in the `Journal of Open Source Software `_. The paper serves as an introduction to the library and can be used to cite seaborn if it has been integral to a scientific publication.

- |API| |Feature| In :func:`lmplot`, added a new `facet_kws` parameter and deprecated the `sharex`, `sharey`, and `legend_out` parameters from the function signature; pass them in a `facet_kws` dictionary instead (:pr:`2576`).

- |Feature| Added a :func:`move_legend` convenience function for repositioning the legend on an existing axes or figure, along with updating its properties. This function should be preferred over calling `ax.legend` with no legend data, which does not reliably work across seaborn plot types (:pr:`2643`).

- |Feature| In :func:`histplot`, added `stat="percent"` as an option for normalization such that bar heights sum to 100, and `stat="proportion"` as an alias for the existing `stat="probability"` (:pr:`2461`, :pr:`2634`).

- |Feature| Added :meth:`FacetGrid.refline` and :meth:`JointGrid.refline` methods for plotting horizontal and/or vertical reference lines on every subplot in one step (:pr:`2620`).
- |Feature| In :func:`kdeplot`, added a `warn_singular` parameter to silence the warning about data with zero variance (:pr:`2566`).

- |Enhancement| In :func:`histplot`, improved performance with large datasets and many groupings/facets (:pr:`2559`, :pr:`2570`).

- |Enhancement| The :class:`FacetGrid`, :class:`PairGrid`, and :class:`JointGrid` objects now reference the underlying matplotlib figure with a `.figure` attribute. The existing `.fig` attribute still exists but is discouraged and may eventually be deprecated. The effect is that you can now call `obj.figure` on the return value from any seaborn function to access the matplotlib object (:pr:`2639`).

- |Enhancement| In :class:`FacetGrid` and functions that use it, visibility of the interior axis labels is now disabled, and exterior axis labels are no longer erased when adding additional layers. This produces the same results for plots made by seaborn functions, but it may produce different (better, in most cases) results for customized facet plots (:pr:`2583`).

- |Enhancement| In :class:`FacetGrid`, :class:`PairGrid`, and functions that use them, the matplotlib `figure.autolayout` parameter is disabled to avoid having the legend overlap the plot (:pr:`2571`).

- |Enhancement| The :func:`load_dataset` helper now produces a more informative error when fed a dataframe, easing a common beginner mistake (:pr:`2604`).

- |Fix| |Enhancement| Improved robustness to missing data, including some additional support for the `pd.NA` type (:pr:`2417`, :pr:`2435`).

- |Fix| In :func:`ecdfplot` and :func:`rugplot`, fixed a bug where results were incorrect if the data axis had a log scale before plotting (:pr:`2504`).

- |Fix| In :func:`histplot`, fixed a bug where using `shrink` with non-discrete bins shifted bar positions inaccurately (:pr:`2477`).

- |Fix| In :func:`displot`, fixed a bug where `common_norm=False` was ignored when faceting was used without assigning `hue` (:pr:`2468`).
- |Fix| In :func:`histplot`, fixed two bugs where automatically computed edge widths were too thick for log-scaled histograms and for categorical histograms on the y axis (:pr:`2522`).

- |Fix| In :func:`histplot` and :func:`kdeplot`, fixed a bug where the `alpha` parameter was ignored when `fill=False` (:pr:`2460`).

- |Fix| In :func:`histplot` and :func:`kdeplot`, fixed a bug where the `multiple` parameter was ignored when `hue` was provided as a vector without a name (:pr:`2462`).

- |Fix| In :func:`displot`, the default alpha value now adjusts to a provided `multiple` parameter even when `hue` is not assigned (:pr:`2462`).

- |Fix| In :func:`displot`, fixed a bug that caused faceted 2D histograms to error out with `common_bins=False` (:pr:`2640`).

- |Fix| In :func:`rugplot`, fixed a bug that prevented the use of datetime data (:pr:`2458`).

- |Fix| In :func:`relplot` and :func:`displot`, fixed a bug where the dataframe attached to the returned `FacetGrid` object dropped columns that were not used in the plot (:pr:`2623`).

- |Fix| In :func:`relplot`, fixed an error that would be raised when one of the column names in the dataframe shared a name with one of the plot variables (:pr:`2581`).

- |Fix| In the relational plots, fixed a bug where legend entries for the `size` semantic were incorrect when `size_norm` extrapolated beyond the range of the data (:pr:`2580`).

- |Fix| In :func:`lmplot` and :func:`regplot`, fixed a bug where the x axis was clamped to the data limits with `truncate=True` (:pr:`2576`).

- |Fix| In :func:`lmplot`, fixed a bug where `sharey=False` did not always work as expected (:pr:`2576`).

- |Fix| In :func:`heatmap`, fixed a bug where vertically-rotated y-axis tick labels would be misaligned with their rows (:pr:`2574`).

- |Fix| Fixed an issue that prevented Python from running in `-OO` mode while using seaborn (:pr:`2473`).

- |Docs| Improved the API documentation for theme-related functions (:pr:`2573`).
- |Docs| Added docstring pages for all methods on documented classes (:pr:`2644`).

v0.2.0 (December 2013)
----------------------

This is a major release from 0.1 with a number of API changes, enhancements, and bug fixes. Highlights include an overhaul of timeseries plotting to work intelligently with dataframes, the new function ``interactplot()`` for visualizing continuous interactions, bivariate kernel density estimates in ``kdeplot()``, and significant improvements to color palette handling. Version 0.2 also introduces experimental support for Python 3.

In addition to the library enhancements, the documentation has been substantially rewritten to reflect the new features and improve the presentation of the ideas behind the package.

API changes
~~~~~~~~~~~

- The ``tsplot()`` function was rewritten to accept data in a long-form ``DataFrame`` and to plot different traces by condition. This introduced a relatively minor but unavoidable API change, where instead of doing ``sns.tsplot(time, heights)``, you now must do ``sns.tsplot(heights, time=time)`` (the ``time`` parameter is now optional, for quicker specification of simple plots). Additionally, the ``"obs_traces"`` and ``"obs_points"`` error styles in ``tsplot()`` have been renamed to ``"unit_traces"`` and ``"unit_points"``, respectively.

- Functions that fit kernel density estimates (``kdeplot()`` and ``violinplot()``) now use ``statsmodels`` instead of ``scipy``, and the parameters that influence the density estimate have changed accordingly. This allows for increased flexibility in specifying the bandwidth and kernel, and smarter choices for defining the range of the support. Default options should produce plots that are very close to the old defaults.

- The ``kdeplot()`` function now takes a second positional argument of data for drawing bivariate densities.
- The ``violin()`` function has been changed to ``violinplot()``, for consistency. In 0.2, ``violin`` will still work, but it will fire a ``UserWarning``.

New plotting functions
~~~~~~~~~~~~~~~~~~~~~~

- The ``interactplot()`` function draws a contour plot for an interactive linear model (i.e., the contour shows ``y-hat`` from the model ``y ~ x1 * x2``) over a scatterplot between the two predictor variables. This plot should aid the understanding of an interaction between two continuous variables.

- The ``kdeplot()`` function can now draw a bivariate density estimate as a contour plot if provided with two-dimensional input data.

- The ``palplot()`` function provides a simple grid-based visualization of a color palette.

Other changes
~~~~~~~~~~~~~

Plotting functions
^^^^^^^^^^^^^^^^^^

- The ``corrplot()`` function can be drawn without the correlation coefficient annotation and with variable names on the side of the plot, to work with large datasets.

- Additionally, ``corrplot()`` sets the color palette intelligently based on the direction of the specified test.

- The ``distplot()`` histogram uses a reference rule to choose the bin size if it is not provided.

- Added the ``x_bins`` option in ``lmplot()`` for binning a continuous predictor variable, allowing for clearer trends with many datapoints.

- Enhanced support for labeling plot elements and axes based on ``name`` attributes in several distribution plot functions and ``tsplot()`` for smarter Pandas integration.

- Scatter points in ``lmplot()`` are slightly transparent so it is easy to see where observations overlap.

- Added the ``order`` parameter to ``boxplot()`` and ``violinplot()`` to control the order of the bins when using a Pandas object.

- When an ``ax`` argument is not provided to a plotting function, it grabs the currently active axis instead of drawing a new one.

Color palettes
^^^^^^^^^^^^^^

- Added the ``dark_palette()`` and ``blend_palette()`` functions for on-the-fly creation of blended color palettes.
- The color palette machinery is now intelligent about qualitative ColorBrewer palettes (``Set1``, ``Paired``, etc.), which are properly treated as discrete.

- Seaborn color palettes (``deep``, ``muted``, etc.) have been standardized in terms of basic hue sequence, and all palettes now have 6 colors.

- Introduced ``{mpl_palette}_d`` palettes, which make a palette with the basic color scheme of the source palette, but with a sequential blend from dark instead of light colors for use with line/scatter/contour plots.

- Added the ``palette_context()`` function for blockwise color palettes controlled by a ``with`` statement.

Plot styling
^^^^^^^^^^^^

- Added the ``despine()`` function for easily removing plot spines.

- A new plot style, ``"ticks"``, has been added.

- Tick labels are padded a bit farther from the axis in all styles, avoiding collisions at (0, 0).

General package issues
^^^^^^^^^^^^^^^^^^^^^^

- Reorganized the package by breaking up the monolithic ``plotobjs`` module into smaller modules grouped by the general objective of the constituent plots.

- Removed the ``scikits-learn`` dependency in ``moss``.

- Installing with ``pip`` should automatically install most missing dependencies.

- The example notebooks are now used as an automated test suite.

Bug fixes
~~~~~~~~~

- Fixed a bug where labels did not match data for ``boxplot()`` and ``violinplot()`` when using a groupby.

- Fixed a bug in the ``desaturate()`` function.

- Fixed a bug in the ``coefplot()`` figure size calculation.

- Fixed a bug where ``regplot()`` choked on list input.

- Fixed buggy behavior when drawing horizontal boxplots.

- Specifying bins for the ``distplot()`` histogram now works.

- Fixed a bug where ``kdeplot()`` would reset the axis height and cut off existing data.

- All axis styling has been moved out of the top-level ``seaborn.set()`` function, so the context or color palette can be cleanly changed.
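The ``despine()`` helper introduced in this release remains part of the seaborn API today; a minimal sketch (the headless backend is set only so the example is self-contained):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this self-contained sketch
import matplotlib.pyplot as plt
import seaborn as sns

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])

# By default, despine() removes the top and right spines,
# leaving the left and bottom spines intact
sns.despine(ax=ax)
```

Passing ``left=True`` or ``bottom=True`` removes additional spines, and the ``trim`` option mentioned in the v0.3.0 notes below limits the remaining spines to the tick range.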
v0.2.1 (December 2013)
----------------------

This is a bugfix release, with no new features.

Bug fixes
~~~~~~~~~

- Changed the mechanics of ``violinplot()`` and ``boxplot()`` when using a ``Series`` object as data and performing a ``groupby`` to assign data to bins, to address a problem that arises in Pandas 0.13.

- Additionally, fixed the ``groupby`` code to work with all styles of group specification (specifically, using a dictionary or a function now works).

- Fixed a bug where artifacts from the kde fitting could undershoot and create a plot where the density axis starts below 0.

- Ensured that data used for kde fitting is double-typed to avoid a low-level statsmodels error.

- Changed the implementation of the histogram bin-width reference rule to take a ceiling of the estimated number of bins.

v0.3.0 (March 2014)
-------------------

This is a major release from 0.2 with a number of enhancements to the plotting capabilities and styles. Highlights include :class:`FacetGrid`, ``factorplot``, :func:`jointplot`, and an overhaul to :ref:`style management `. There is also lots of new documentation, including an :ref:`example gallery ` and a reorganized :ref:`tutorial `.

New plotting functions
~~~~~~~~~~~~~~~~~~~~~~

- The :class:`FacetGrid` class adds a new form of functionality to seaborn, providing a way to abstractly structure a grid of plots corresponding to subsets of a dataset. It can be used with a wide variety of plotting functions (including most of the matplotlib and seaborn APIs). See the :ref:`tutorial ` for more information.

- Version 0.3 introduces the ``factorplot`` function, which is similar in spirit to :func:`lmplot` but intended for use when the main independent variable is categorical instead of quantitative.
  ``factorplot`` can draw a plot in either a point or bar representation, using the corresponding Axes-level functions :func:`pointplot` and :func:`barplot` (which are also new). Additionally, the ``factorplot`` function can be used to draw box plots on a faceted grid. For examples of how to use these functions, you can refer to the tutorial.

- Another new function is :func:`jointplot`, which is built using the new :class:`JointGrid` object. :func:`jointplot` generalizes the behavior of :func:`regplot` in previous versions of seaborn (:func:`regplot` has changed somewhat in 0.3; see below for details) by drawing a bivariate plot of the relationship between two variables with their marginal distributions drawn on the side of the plot. With :func:`jointplot`, you can draw a scatterplot or regression plot as before, but you can now also draw bivariate kernel densities or hexbin plots with appropriate univariate graphs for the marginal distributions. Additionally, it's easy to use :class:`JointGrid` directly to build up more complex plots when the default methods offered by :func:`jointplot` are not suitable for your visualization problem. The tutorial for :class:`JointGrid` has more examples of how this object can be useful.

- The :func:`residplot` function complements :func:`regplot` and can be quickly used to diagnose problems with a linear model by calculating and plotting the residuals of a simple regression. There is also a ``"resid"`` kind for :func:`jointplot`.

API changes
~~~~~~~~~~~

- The most noticeable change will be that :func:`regplot` no longer produces a multi-component plot with distributions in marginal axes. Instead, :func:`regplot` is now an "Axes-level" function that can be plotted into any existing figure on a specific set of axes. :func:`regplot` and :func:`lmplot` have also been unified (the latter uses the former behind the scenes), so all options for how to fit and represent the regression model can be used for both functions.
  To get the old behavior of :func:`regplot`, use :func:`jointplot` with ``kind="reg"``.

- As noted above, :func:`lmplot` has been rewritten to exploit the :class:`FacetGrid` machinery. This involves a few changes. The ``color`` keyword argument has been replaced with ``hue``, for better consistency across the package. The ``hue`` parameter will always take a variable *name*, while ``color`` will take a color name or (in some cases) a palette. The :func:`lmplot` function now returns the :class:`FacetGrid` instance used to draw the plot.

- The functions that interact with matplotlib rc parameters have been updated and standardized. There are now three pairs of functions, :func:`axes_style` and :func:`set_style`, :func:`plotting_context` and :func:`set_context`, and :func:`color_palette` and :func:`set_palette`. In each case, the pairs take the exact same arguments. The first function defines and returns the parameters, and the second sets the matplotlib defaults. Additionally, the first function in each pair can be used in a ``with`` statement to temporarily change the defaults. Both the style and context functions also now accept a dictionary of matplotlib rc parameters to override the seaborn defaults, and :func:`set` now also takes a dictionary to update any of the matplotlib defaults. See the :ref:`tutorial ` for more information.

- The ``nogrid`` style has been deprecated and changed to ``white`` for more uniformity (i.e. there are now ``darkgrid``, ``dark``, ``whitegrid``, and ``white`` styles).

Other changes
~~~~~~~~~~~~~

Using the package
^^^^^^^^^^^^^^^^^

- If you want to use plotting functions provided by the package without setting the matplotlib style to a seaborn theme, you can now do ``import seaborn.apionly as sns`` or ``from seaborn.apionly import lmplot``, etc. This is using the (also new) :func:`reset_orig` function, which returns the rc parameters to what they are at matplotlib import time — i.e.
  they will respect any custom ``matplotlibrc`` settings on top of the matplotlib defaults.

- The dependency load of the package has been reduced. It can now be installed and used with only ``numpy``, ``scipy``, ``matplotlib``, and ``pandas``. Although ``statsmodels`` is still recommended for full functionality, it is not required.

Plotting functions
^^^^^^^^^^^^^^^^^^

- :func:`lmplot` (and :func:`regplot`) have two new options for fitting regression models: ``lowess`` and ``robust``. The former fits a nonparametric smoother, while the latter fits a regression using methods that are less sensitive to outliers.

- The regression uncertainty in :func:`lmplot` and :func:`regplot` is now estimated with fewer bootstrap iterations, so plotting should be faster.

- The univariate :func:`kdeplot` can now be drawn as a *cumulative* density plot.

- Changed :func:`interactplot` to use a robust calculation of the data range when finding default limits for the contour colormap to work better when there are outliers in the data.

Style
^^^^^

- There is a new style, ``dark``, which shares most features with ``darkgrid`` but does not draw a grid by default.

- There is a new function, :func:`offset_spines`, and a corresponding option in :func:`despine` called ``trim``. Together, these can be used to make plots where the axis spines are offset from the main part of the figure and limited within the range of the ticks. This is recommended for use with the ``ticks`` style.

- Other aspects of the seaborn styles have been tweaked for more attractive plots.

v0.3.1 (April 2014)
-------------------

This is a minor release from 0.3 with fixes for several bugs.

Plotting functions
~~~~~~~~~~~~~~~~~~

- The size of the points in :func:`pointplot` and ``factorplot`` is now scaled with the linewidth for better aesthetics across different plotting contexts.
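As a quick sketch of the paired style functions described in the v0.3 notes above (the style names come from the release notes; the plotted data and the use of the Agg backend are illustrative assumptions):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; any backend works
import matplotlib.pyplot as plt
import seaborn as sns

# set_style() changes the matplotlib defaults globally ...
sns.set_style("whitegrid")

# ... while axes_style() returns the parameter dict for a style and can
# also be used as a context manager to change the defaults temporarily.
params = sns.axes_style("dark")

with sns.axes_style("dark"):
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
```

After the ``with`` block exits, the global ``whitegrid`` defaults are restored.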
- The :func:`pointplot` glyphs for different levels of the hue variable are drawn at different z-orders so that they appear uniform.

Bug Fixes
~~~~~~~~~

- Fixed a bug in :class:`FacetGrid` (and thus affecting lmplot and factorplot) that appeared when ``col_wrap`` was used with a number of facets that did not evenly divide into the column width.

- Fixed an issue where the support for kernel density estimates was sometimes computed incorrectly.

- Fixed a problem where ``hue`` variable levels that were not strings were missing in :class:`FacetGrid` legends.

- When passing a color palette list in a ``with`` statement, the entire palette is now used instead of the first six colors.

v0.4.0 (September 2014)
-----------------------

This is a major release from 0.3. Highlights include new approaches for :ref:`quick, high-level dataset exploration ` (along with a more :ref:`flexible interface `) and easy creation of :ref:`perceptually-appropriate color palettes ` using the cubehelix system. Along with these additions, there are a number of smaller changes that make visualizing data with seaborn easier and more powerful.

Plotting functions
~~~~~~~~~~~~~~~~~~

- A new object, :class:`PairGrid`, and a corresponding function :func:`pairplot`, for drawing grids of pairwise relationships in a dataset. This style of plot is sometimes called a "scatterplot matrix", but the representation of the data in :class:`PairGrid` is flexible and many styles other than scatterplots can be used. See the :ref:`docs ` for more information. **Note:** due to a bug in older versions of matplotlib, you will have best results if you use these functions with matplotlib 1.4 or later.

- The rules for choosing default color palettes when variables are mapped to different colors have been unified (and thus changed in some cases).
  Now, when no specific palette is requested, the current global color palette will be used, unless the number of variables to be mapped exceeds the number of unique colors in the palette, in which case the ``"husl"`` palette will be used to avoid cycling.

- Added a keyword argument ``hist_norm`` to :func:`distplot`. Now, when a :func:`distplot` is drawn without a KDE or parametric density, the histogram is drawn as counts instead of a density. This can be overridden by setting ``hist_norm`` to ``True``.

- When using :class:`FacetGrid` with a ``hue`` variable, the legend is no longer drawn by default when you call :meth:`FacetGrid.map`. Instead, you have to call :meth:`FacetGrid.add_legend` manually. This should make it easier to layer multiple plots onto the grid without having duplicated legends.

- Made some changes to ``factorplot`` so that it behaves better when not all levels of the ``x`` variable are represented in each facet.

- Added the ``logx`` option to :func:`regplot` for fitting the regression in log space.

- When :func:`violinplot` encounters a bin with only a single observation, it will now plot a horizontal line at that value instead of erroring out.

Style and color palettes
~~~~~~~~~~~~~~~~~~~~~~~~

- Added the :func:`cubehelix_palette` function for generating sequential palettes from the cubehelix system. See the palette docs for more information on how these palettes can be used. There is also the :func:`choose_cubehelix` function, which will launch an interactive app to select cubehelix parameters in the notebook.

- Added the :func:`xkcd_palette` and the ``xkcd_rgb`` dictionary so that colors can be specified with names from the `xkcd color survey `_.

- Added the ``font_scale`` option to :func:`plotting_context`, :func:`set_context`, and :func:`set`. ``font_scale`` can independently increase or decrease the size of the font elements in the plot.

- Font-handling should work better on systems without Arial installed.
  This is accomplished by adding the ``font.sans-serif`` field to the ``axes_style`` definition with Arial and Liberation Sans prepended to the matplotlib defaults. The font family can also be set through the ``font`` keyword argument in :func:`set`. Due to matplotlib bugs, this might not work as expected on matplotlib 1.3.

- The :func:`despine` function gets a new keyword argument ``offset``, which replaces the deprecated :func:`offset_spines` function. You no longer need to offset the spines before plotting data.

- Added a default value for ``pdf.fonttype`` so that text in PDFs is editable in Adobe Illustrator.

Other API Changes
~~~~~~~~~~~~~~~~~

- Removed the deprecated ``set_color_palette`` and ``palette_context`` functions. These were replaced in version 0.3 by the :func:`set_palette` function and the ability to use :func:`color_palette` directly in a ``with`` statement.

- Removed the ability to specify a ``nogrid`` style, which was renamed to ``white`` in 0.3.

v0.5.0 (November 2014)
----------------------

This is a major release from 0.4. Highlights include new functions for plotting heatmaps, possibly while applying clustering algorithms to discover structured relationships. These functions are complemented by new custom colormap functions and a full set of IPython widgets that allow interactive selection of colormap parameters. The palette tutorial has been rewritten to cover these new tools and more generally provide guidance on how to use color in visualizations. There are also a number of smaller changes and bugfixes.

Plotting functions
~~~~~~~~~~~~~~~~~~

- Added the :func:`heatmap` function for visualizing a matrix of data by color-encoding the values. See the docs for more information.

- Added the :func:`clustermap` function for clustering and visualizing a matrix of data, with options to label individual rows and columns by colors.
  See the docs for more information. This work was led by Olga Botvinnik.

- :func:`lmplot` and :func:`pairplot` get a new keyword argument, ``markers``. This can be a single kind of marker or a list of different markers for each level of the ``hue`` variable. Using different markers for different hues should let plots be more comprehensible when reproduced in black-and-white (i.e. when printed). See the `github pull request (#323) `_ for examples.

- More generally, there is a new keyword argument in :class:`FacetGrid` and :class:`PairGrid`, ``hue_kws``. This similarly lets plot aesthetics vary across the levels of the hue variable, but more flexibly. ``hue_kws`` should be a dictionary that maps the names of keyword arguments to lists of values that are as long as the number of levels of the hue variable.

- The argument ``subplot_kws`` has been added to ``FacetGrid``. This allows for faceted plots with custom projections, including `maps with Cartopy `_.

Color palettes
~~~~~~~~~~~~~~

- Added two new functions to create custom color palettes. For sequential palettes, you can use the :func:`light_palette` function, which takes a seed color and creates a ramp from a very light, desaturated variant of it. For diverging palettes, you can use the :func:`diverging_palette` function to create a balanced ramp between two endpoints to a light or dark midpoint. See the :ref:`palette tutorial ` for more information.

- Added the ability to specify the seed color for :func:`light_palette` and :func:`dark_palette` as a tuple of ``husl`` or ``hls`` space values or as a named ``xkcd`` color. The interpretation of the seed color is now provided by the new ``input`` parameter to these functions.

- Added several new interactive palette widgets: :func:`choose_colorbrewer_palette`, :func:`choose_light_palette`, :func:`choose_dark_palette`, and :func:`choose_diverging_palette`.
  For consistency, renamed the cubehelix widget to :func:`choose_cubehelix_palette` (and fixed a bug where the cubehelix palette was reversed). These functions also now return either a color palette list or a matplotlib colormap when called, and that object will be live-updated as you play with the widget. This should make it easy to iterate over a plot until you find a good representation for the data. See the `Github pull request `_ or `this notebook (download it to use the widgets) `_ for more information.

- Overhauled the color :ref:`palette tutorial ` to organize the discussion by class of color palette and provide more motivation behind the various choices one might make when choosing colors for their data.

Bug fixes
~~~~~~~~~

- Fixed a bug in :class:`PairGrid` that gave incorrect results (or a crash) when the input DataFrame has a non-default index.

- Fixed a bug in :class:`PairGrid` where passing columns with a date-like datatype raised an exception.

- Fixed a bug where :func:`lmplot` would show a legend when the hue variable was also used on either the rows or columns (making the legend redundant).

- Worked around a matplotlib bug that was forcing outliers in :func:`boxplot` to appear as blue.

- :func:`kdeplot` now accepts pandas Series for the ``data`` and ``data2`` arguments.

- Using a non-default correlation method in :func:`corrplot` now implies ``sig_stars=False``, as the permutation test used to compute significance values for the correlations uses a Pearson metric.

- Removed ``pdf.fonttype`` from the style definitions, as the value used in version 0.4 resulted in very large PDF files.

v0.5.1 (November 2014)
----------------------

This is a bugfix release that includes a workaround for an issue in matplotlib 1.4.2 and fixes for two bugs in functions that were new in 0.5.0.
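A minimal sketch of the v0.5.0 custom-palette functions described above; the seed color ``"seagreen"`` and the specific hue values are invented for illustration:

```python
import seaborn as sns

# light_palette(): sequential ramp from a light, desaturated variant
# of the seed color up to the seed color itself.
pal = sns.light_palette("seagreen", n_colors=5)

# diverging_palette(): balanced ramp between two hues (given as husl
# hue angles, here roughly blue and red) with a light midpoint.
div = sns.diverging_palette(240, 10, n=7)
```

Both functions return a list of RGB tuples by default (or a colormap with ``as_cmap=True``).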
- Implemented a workaround for a bug in matplotlib 1.4.2 that prevented point markers from being drawn when the seaborn styles had been set. See this `github issue `_ for more information.

- Fixed a bug in :func:`heatmap` where the mask was vertically reversed relative to the data.

- Fixed a bug in :func:`clustermap` when using nested lists of side colors.

v0.6.0 (June 2015)
------------------

This is a major release from 0.5. The main objective of this release was to unify the API for categorical plots, which means that there are some relatively large API changes in some of the older functions. See below for details of those changes, which may break code written for older versions of seaborn. There are also some new functions (:func:`stripplot` and :func:`countplot`), numerous enhancements to existing functions, and bug fixes.

Additionally, the documentation has been completely revamped and expanded for the 0.6 release. Now, the API docs page for each function has multiple examples with embedded plots showing how to use the various options. These pages should be considered the most comprehensive resource for examples, and the tutorial pages are now streamlined and oriented towards a higher-level overview of the various features.

Changes and updates to categorical plots
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In version 0.6, the "categorical" plots have been unified with a common API. This new category of functions groups together plots that show the relationship between one numeric variable and one or two categorical variables. This includes plots that show the distribution of the numeric variable in each bin (:func:`boxplot`, :func:`violinplot`, and :func:`stripplot`) and plots that apply a statistical estimation within each bin (:func:`pointplot`, :func:`barplot`, and :func:`countplot`).
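The unified categorical API described above can be sketched as follows; the DataFrame and column names here are invented for illustration (the Agg backend is an assumption for off-screen rendering):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import pandas as pd
import seaborn as sns

# Long-form data: one row per observation.
df = pd.DataFrame({
    "day": ["Thu", "Thu", "Thu", "Fri", "Fri", "Fri"],
    "bill": [10.0, 12.5, 11.0, 20.0, 18.5, 22.0],
})

# x and y name variables in the DataFrame passed to data=; the same
# call pattern works for violinplot, stripplot, pointplot, etc.
ax = sns.boxplot(x="day", y="bill", data=df)
```

Swapping ``x`` and ``y`` draws the same plot horizontally.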
There is a new :ref:`tutorial chapter ` that introduces these functions. The categorical functions now each accept the same formats of input data and can be invoked in the same way. They can plot using long- or wide-form data, and can be drawn vertically or horizontally. When long-form data is used, the orientation of the plots is inferred from the types of the input data. Additionally, all functions natively take a ``hue`` variable to add a second layer of categorization.

With the (in some cases new) API, these functions can all be drawn correctly by :class:`FacetGrid`. However, ``factorplot`` can also now create faceted versions of any of these kinds of plots, so in most cases it will be unnecessary to use :class:`FacetGrid` directly. By default, ``factorplot`` draws a point plot, but this is controlled by the ``kind`` parameter.

Here are details on what has changed in the process of unifying these APIs:

- Changes to :func:`boxplot` and :func:`violinplot` will probably be the most disruptive. Both functions maintain backwards-compatibility in terms of the kind of data they can accept, but the syntax has changed to be more similar to other seaborn functions. These functions are now invoked with ``x`` and/or ``y`` parameters that are either vectors of data or names of variables in a long-form DataFrame passed to the new ``data`` parameter. You can still pass wide-form DataFrames or arrays to ``data``, but it is no longer the first positional argument. See the `github pull request (#410) `_ for more information on these changes and the logic behind them.

- As :func:`pointplot` and :func:`barplot` can now plot with the major categorical variable on the y axis, the ``x_order`` parameter has been renamed to ``order``.

- Added a ``hue`` argument to :func:`boxplot` and :func:`violinplot`, which allows for nested grouping of the plot elements by a third categorical variable.
  For :func:`violinplot`, this nesting can also be accomplished by splitting the violins when there are two levels of the ``hue`` variable (using ``split=True``). To make this functionality feasible, the ability to specify where the plots will be drawn in data coordinates has been removed. These plots are now drawn at set positions, like (and identical to) :func:`barplot` and :func:`pointplot`.

- Added a ``palette`` parameter to :func:`boxplot`/:func:`violinplot`. The ``color`` parameter still exists, but no longer does double-duty in accepting the name of a seaborn palette. ``palette`` supersedes ``color`` so that it can be used with a :class:`FacetGrid`.

Along with these API changes, the following changes/enhancements were made to the plotting functions:

- The default rules for ordering the categories have changed. Instead of automatically sorting the category levels, the plots now show the levels in the order they appear in the input data (i.e., the order given by ``Series.unique()``). Order can be specified when plotting with the ``order`` and ``hue_order`` parameters. Additionally, when variables are pandas objects with a "categorical" dtype, the category order is inferred from the data object. This change also affects :class:`FacetGrid` and :class:`PairGrid`.

- Added the ``scale`` and ``scale_hue`` parameters to :func:`violinplot`. These control how the width of the violins is scaled. The default is ``area``, which is different from how the violins used to be drawn. Use ``scale='width'`` to get the old behavior.

- Used a different style for the ``box`` kind of interior plot in :func:`violinplot`, which shows the whisker range in addition to the quartiles. Use ``inner='quartile'`` to get the old style.

New plotting functions
~~~~~~~~~~~~~~~~~~~~~~

- Added the :func:`stripplot` function, which draws a scatterplot where one of the variables is categorical. This plot has the same API as :func:`boxplot` and :func:`violinplot`.
  It is useful both on its own and when composed with one of these other plot kinds to show both the observations and underlying distribution.

- Added the :func:`countplot` function, which uses a bar plot representation to show counts of variables in one or more categorical bins. This replaces the old approach of calling :func:`barplot` without a numeric variable.

Other additions and changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~

- The :func:`corrplot` and underlying :func:`symmatplot` functions have been deprecated in favor of :func:`heatmap`, which is much more flexible and robust. These two functions are still available in version 0.6, but they will be removed in a future version.

- Added the :func:`set_color_codes` function and the ``color_codes`` argument to :func:`set` and :func:`set_palette`. This changes the interpretation of shorthand color codes (i.e. "b", "g", "k", etc.) within matplotlib to use the values from one of the named seaborn palettes (i.e. "deep", "muted", etc.). That makes it easier to have a more uniform look when using matplotlib functions directly with seaborn imported. This could be disruptive to existing plots, so it does not happen by default. It is possible this could change in the future.

- The :func:`color_palette` function no longer trims palettes that are longer than 6 colors when passed into it.

- Added the ``as_hex`` method to color palette objects, to return a list of hex codes rather than rgb tuples.

- :func:`jointplot` now passes additional keyword arguments to the function used to draw the plot on the joint axes.

- Changed the default ``linewidths`` in :func:`heatmap` and :func:`clustermap` to 0 so that larger matrices plot correctly. This parameter still exists and can be used to get the old effect of lines demarcating each cell in the heatmap (the old default ``linewidths`` was 0.5).
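As a minimal sketch of the :func:`heatmap` ``linewidths`` behavior described in the entry above (the matrix data and the Agg backend are assumptions for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import numpy as np
import seaborn as sns

data = np.arange(16).reshape(4, 4)

# The default linewidths is now 0; passing linewidths=0.5 restores the
# old effect of lines demarcating each cell.
ax = sns.heatmap(data, linewidths=0.5)
```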
- :func:`heatmap` and :func:`clustermap` now automatically use a mask for missing values, which previously were shown with the "under" value of the colormap per the default ``plt.pcolormesh`` behavior.

- Added the ``seaborn.crayons`` dictionary and the :func:`crayon_palette` function to define colors from the 120 box (!) of `Crayola crayons `_.

- Added the ``line_kws`` parameter to :func:`residplot` to change the style of the lowess line, when used.

- Added open-ended ``**kwargs`` to the ``add_legend`` method on :class:`FacetGrid` and :class:`PairGrid`, which will pass additional keyword arguments through when calling the legend function on the ``Figure`` or ``Axes``.

- Added the ``gridspec_kws`` parameter to :class:`FacetGrid`, which allows for control over the size of individual facets in the grid to emphasize certain plots or account for differences in variable ranges.

- The interactive palette widgets now show a continuous colorbar, rather than a discrete palette, when ``as_cmap`` is True.

- The default Axes size for :func:`pairplot` and :class:`PairGrid` is now slightly smaller.

- Added the ``shade_lowest`` parameter to :func:`kdeplot`, which will set the alpha for the lowest contour level to 0, making it easier to plot multiple bivariate distributions on the same axes.

- The ``height`` parameter of :func:`rugplot` is now interpreted as a function of the axis size and is invariant to changes in the data scale on that axis. The rug lines are also slightly narrower by default.

- Added a catch in :func:`distplot` when calculating a default number of bins. For highly skewed data it will now use sqrt(n) bins, where previously the reference rule would return "infinite" bins and cause an exception in matplotlib.

- Added a ceiling (50) to the default number of bins used for :func:`distplot` histograms. This will help avoid confusing errors with certain kinds of datasets that heavily violate the assumptions of the reference rule used to get a default number of bins.
  The ceiling is not applied when passing a specific number of bins.

- The various property dictionaries that can be passed to ``plt.boxplot`` are now applied after the seaborn restyling to allow for full customizability.

- Added a ``savefig`` method to :class:`JointGrid` that defaults to a tight bounding box to make it easier to save figures using this class, and set a tight bbox as the default for the ``savefig`` method on other Grid objects.

- You can now pass an integer to the ``xticklabels`` and ``yticklabels`` parameters of :func:`heatmap` (and, by extension, :func:`clustermap`). This will make the plot use the ticklabels inferred from the data, but only plot every ``n`` label, where ``n`` is the number you pass. This can help when visualizing larger matrices with some sensible ordering to the rows or columns of the dataframe.

- Added ``"figure.facecolor"`` to the style parameters and set the default to white.

- The :func:`load_dataset` function now caches datasets locally after downloading them, and uses the local copy on subsequent calls.

Bug fixes
~~~~~~~~~

- Fixed bugs in :func:`clustermap` where the mask and specified ticklabels were not being reorganized using the dendrograms.

- Fixed a bug in :class:`FacetGrid` and :class:`PairGrid` that led to incorrect legend labels when levels of the ``hue`` variable appeared in ``hue_order`` but not in the data.

- Fixed a bug in :meth:`FacetGrid.set_xticklabels` and :meth:`FacetGrid.set_yticklabels` when ``col_wrap`` is being used.

- Fixed a bug in :class:`PairGrid` where the ``hue_order`` parameter was ignored.

- Fixed two bugs in :func:`despine` that caused errors when trying to trim the spines on plots that had inverted axes or no ticks.

- Improved support for the ``margin_titles`` option in :class:`FacetGrid`, which can now be used with a legend.
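The integer ``xticklabels``/``yticklabels`` behavior described in the v0.6.0 notes above can be sketched as follows (the matrix shape and the Agg backend are assumptions for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import numpy as np
import seaborn as sns

m = np.random.RandomState(0).rand(10, 12)

# Label only every 3rd column and every 2nd row of the matrix.
ax = sns.heatmap(m, xticklabels=3, yticklabels=2)
```

With 12 columns and ``xticklabels=3``, only 4 column labels are drawn; likewise 5 row labels for 10 rows with ``yticklabels=2``.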
v0.7.0 (January 2016)
---------------------

This is a major release from 0.6. The main new feature is :func:`swarmplot`, which implements the beeswarm approach for drawing categorical scatterplots. There are also some performance improvements, bug fixes, and updates for compatibility with new versions of dependencies.

- Added the :func:`swarmplot` function, which draws beeswarm plots. These are categorical scatterplots, similar to those produced by :func:`stripplot`, but the position of the points on the categorical axis is chosen to avoid overlapping points. See the :ref:`categorical plot tutorial ` for more information.

- Changed some of the :func:`stripplot` defaults to be closer to :func:`swarmplot`. Points are now somewhat smaller, have no outlines, and are not split by default when using ``hue``. These settings remain customizable through function parameters.

- Added an additional rule when determining category order in categorical plots. Now, when numeric variables are used in a categorical role, the default behavior is to sort the unique levels of the variable (i.e. they will be in proper numerical order). This can still be overridden by the appropriate ``{*_}order`` parameter, and variables with a ``category`` datatype will still follow the category order even if the levels are strictly numerical.

- Changed how :func:`stripplot` draws points when using ``hue`` nesting with ``split=False`` so that the different ``hue`` levels are not drawn strictly on top of each other.

- Improved performance for large dendrograms in :func:`clustermap`.

- Added ``font.size`` to the plotting context definition so that the default output from ``plt.text`` will be scaled appropriately.

- Fixed a bug in :func:`clustermap` when ``fastcluster`` is not installed.

- Fixed a bug in the zscore calculation in :func:`clustermap`.
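A minimal sketch of the :func:`swarmplot` API described above; the DataFrame, its values, and the Agg backend are invented for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import pandas as pd
import seaborn as sns

df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b"],
    "value": [1.0, 1.05, 1.1, 2.0, 2.1, 2.05],
})

# Like stripplot(), but point positions along the categorical axis
# are adjusted so the points do not overlap.
ax = sns.swarmplot(x="group", y="value", data=df)
```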
- Fixed a bug in :func:`distplot` where sometimes the default number of bins would not be an integer.

- Fixed a bug in :func:`stripplot` where a legend item would not appear for a ``hue`` level if there were no observations in the first group of points.

- Heatmap colorbars are now rasterized for better performance in vector plots.

- Added workarounds for some matplotlib boxplot issues, such as strange colors of outlier points.

- Added workarounds for an issue where violinplot edges would be missing or have random colors.

- Added a workaround for an issue where only one :func:`heatmap` cell would be annotated on some matplotlib backends.

- Fixed a bug on newer versions of matplotlib where a colormap would be erroneously applied to scatterplots with only three observations.

- Updated seaborn for compatibility with matplotlib 1.5.

- Added compatibility for various IPython (and Jupyter) versions in functions that use widgets.

v0.7.1 (June 2016)
------------------

- Added the ability to put "caps" on the error bars that are drawn by :func:`barplot` or :func:`pointplot` (and, by extension, ``factorplot``). Additionally, the line width of the error bars can now be controlled. These changes involve the new parameters ``capsize`` and ``errwidth``. See the `github pull request (#898) `_ for examples of usage.

- Improved the row and column colors display in :func:`clustermap`. It is now possible to pass Pandas objects for these elements and, when possible, the semantic information in the Pandas objects will be used to add labels to the plot. When Pandas objects are used, the color data is matched against the main heatmap based on the index, not on position. This is more accurate, but it may lead to different results if current code assumed positional matching.

- Improved the luminance calculation that determines the annotation color in :func:`heatmap`.
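The error-bar ``capsize`` parameter from the v0.7.1 notes above can be sketched like this (the DataFrame, the 0.2 cap width, and the Agg backend are illustrative assumptions; ``errwidth`` is omitted here since later seaborn versions renamed it):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import pandas as pd
import seaborn as sns

df = pd.DataFrame({
    "cond": ["a"] * 4 + ["b"] * 4,
    "score": [1.0, 2.0, 3.0, 4.0, 3.0, 4.0, 5.0, 6.0],
})

# capsize controls the width of the caps drawn at the ends of the
# error bars (in axes fraction of the bar width).
ax = sns.barplot(x="cond", y="score", data=df, capsize=0.2)
```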
- The ``annot`` parameter of :func:`heatmap` now accepts a rectangular dataset in addition to a boolean value. If a dataset is passed, its values will be used for the annotations, while the main dataset will be used for the heatmap cell colors.

- Fixed a bug in :class:`FacetGrid` that appeared when using ``col_wrap`` with missing ``col`` levels.

- Made it possible to pass a tick locator object to the :func:`heatmap` colorbar.

- Made it possible to use different styles (e.g., step) for :class:`PairGrid` histograms when there are multiple hue levels.

- Fixed a bug in the scipy-based univariate kernel density bandwidth calculation.

- The :func:`reset_orig` function (and, by extension, importing ``seaborn.apionly``) resets matplotlib rcParams to their values at the time seaborn itself was imported, which should work better with rcParams changed by the Jupyter notebook backend.

- Removed some objects from the top-level ``seaborn`` namespace.

- Improved unicode compatibility in :class:`FacetGrid`.

v0.8.0 (July 2017)
------------------

- The default style is no longer applied when seaborn is imported. It is now necessary to explicitly call :func:`set` or one or more of :func:`set_style`, :func:`set_context`, and :func:`set_palette`. Correspondingly, the ``seaborn.apionly`` module has been deprecated.

- Changed the behavior of :func:`heatmap` (and by extension :func:`clustermap`) when plotting divergent datasets (i.e. when the ``center`` parameter is used). Instead of extending the lower and upper limits of the colormap to be symmetrical around the ``center`` value, the colormap is modified so that its middle color corresponds to ``center``. This means that the full range of the colormap will not be used (unless the data or specified ``vmin`` and ``vmax`` are symmetric), but the upper and lower limits of the colorbar will correspond to the range of the data.
  See the Github pull request `(#1184) `_ for examples of the behavior.

- Removed automatic detection of diverging data in :func:`heatmap` (and by extension :func:`clustermap`). If you want the colormap to be treated as diverging (see above), it is now necessary to specify the ``center`` value. When no colormap is specified, specifying ``center`` will still change the default to be one that is more appropriate for displaying diverging data.

- Added four new colormaps, created using `viscm `_ for perceptual uniformity. The new colormaps include two sequential colormaps ("rocket" and "mako") and two diverging colormaps ("icefire" and "vlag"). These colormaps are registered with matplotlib on seaborn import and the colormap objects can be accessed in the ``seaborn.cm`` namespace.

- Changed the default :func:`heatmap` colormaps to be "rocket" (in the case of sequential data) or "icefire" (in the case of diverging data). Note that this change reverses the direction of the luminance ramp from the previous defaults. While potentially confusing and disruptive, this change better aligns the seaborn defaults with the new matplotlib default colormap ("viridis") and arguably better aligns the semantics of a "heat" map with the appearance of the colormap.

- Added ``"auto"`` as a (default) option for tick labels in :func:`heatmap` and :func:`clustermap`. This will try to estimate how many ticks can be labeled without the text objects overlapping, which should improve performance for larger matrices.

- Added the ``dodge`` parameter to :func:`boxplot`, :func:`violinplot`, and :func:`barplot` to allow use of ``hue`` without changing the position or width of the plot elements, as when the ``hue`` variable is not nested within the main categorical variable.
- Correspondingly, the ``split`` parameter for :func:`stripplot` and :func:`swarmplot` has been renamed to ``dodge`` for consistency with the other categorical functions (and for differentiation from the meaning of ``split`` in :func:`violinplot`).

- Added the ability to draw a colorbar for a bivariate :func:`kdeplot` with the ``cbar`` parameter (and related ``cbar_ax`` and ``cbar_kws`` parameters).

- Added the ability to use error bars to show standard deviations rather than bootstrap confidence intervals in most statistical functions by putting ``ci="sd"``.

- Allow side-specific offsets in :func:`despine`.

- Figure size is no longer part of the seaborn plotting context parameters.

- Put a cap on the number of bins used in :func:`jointplot` for ``kind=="hex"`` to avoid hanging when the reference rule prescribes too many.

- Changed the y axis in :func:`heatmap`. Instead of reversing the rows of the data internally, the y axis is now inverted. This may affect code that draws on top of the heatmap in data coordinates.

- Turn off dendrogram axes in :func:`clustermap` rather than setting the background color to white.

- New matplotlib qualitative palettes (e.g. "tab10") are now handled correctly.

- Some modules and functions have been internally reorganized; there should be no effect on code that uses the ``seaborn`` namespace.

- Added a deprecation warning to the ``tsplot`` function to indicate that it will be removed or replaced with a substantially altered version in a future release.

- The ``interactplot`` and ``coefplot`` functions are officially deprecated and will be removed in a future release.

seaborn-0.11.2/doc/releases/v0.8.1.txt

v0.8.1 (September 2017)
-----------------------

- Added a warning in :class:`FacetGrid` when passing a categorical plot function without specifying ``order`` (or ``hue_order`` when ``hue`` is used), which is likely to produce a plot that is incorrect.
- Improved compatibility between :class:`FacetGrid` or :class:`PairGrid` and interactive matplotlib backends so that the legend no longer remains inside the figure when using ``legend_out=True``.

- Changed categorical plot functions with small plot elements to use :func:`dark_palette` instead of :func:`light_palette` when generating a sequential palette from a specified color.

- Improved robustness of :func:`kdeplot` and :func:`distplot` to data with fewer than two observations.

- Fixed a bug in :func:`clustermap` when using ``yticklabels=False``.

- Fixed a bug in :func:`pointplot` where colors were wrong if exactly three points were being drawn.

- Fixed a bug in :func:`pointplot` where legend entries for missing data appeared with empty markers.

- Fixed a bug in :func:`clustermap` where an error was raised when annotating the main heatmap and showing category colors.

- Fixed a bug in :func:`clustermap` where row labels were not being properly rotated when they overlapped.

- Fixed a bug in :func:`kdeplot` where the maximum limit on the density axes was not being updated when multiple densities were drawn.

- Improved compatibility with future versions of pandas.

seaborn-0.11.2/doc/releases/v0.9.0.txt

v0.9.0 (July 2018)
------------------

This is a major release with several substantial and long-desired new features. There are also updates/modifications to the themes and color palettes that give better consistency with matplotlib 2.0 and some notable API changes.

New relational plots
~~~~~~~~~~~~~~~~~~~~

Three completely new plotting functions have been added: :func:`relplot`, :func:`scatterplot`, and :func:`lineplot`. The first is a figure-level interface to the latter two that combines them with a :class:`FacetGrid`. The functions bring the high-level, dataset-oriented API of the seaborn categorical plotting functions to more general plots (scatter plots and line plots).
These functions can visualize a relationship between two numeric variables while mapping up to three additional variables by modifying ``hue``, ``size``, and/or ``style`` semantics. The common high-level API is implemented differently in the two functions. For example, the size semantic in :func:`scatterplot` scales the area of scatter plot points, but in :func:`lineplot` it scales the width of the line plot lines. The API is dataset-oriented, meaning that in both cases you pass the variable in your dataset rather than directly specifying the matplotlib parameters to use for point area or line width.

Another way the relational functions differ from existing seaborn functionality is that they have better support for using numeric variables for ``hue`` and ``size`` semantics. This functionality may be propagated to other functions that can add a ``hue`` semantic in future versions; it has not been in this release.

The :func:`lineplot` function also has support for statistical estimation and is replacing the older ``tsplot`` function, which still exists but is marked for removal in a future release. :func:`lineplot` is better aligned with the API of the rest of the library and more flexible in showing relationships across additional variables by modifying the size and style semantics independently. It also has substantially improved support for date and time data, a major pain factor in ``tsplot``. The cost is that some of the more esoteric options in ``tsplot`` for representing uncertainty (e.g. a colormapped KDE of the bootstrap distribution) have not been implemented in the new function.

There is quite a bit of new documentation that explains these new functions in more detail, including detailed examples of the various options in the :ref:`API reference ` and a more verbose :ref:`tutorial `.

These functions should be considered in a "stable beta" state. They have been thoroughly tested, but some unknown corner cases may remain to be found.
The main features are in place, but not all planned functionality has been implemented. There are planned improvements to some elements, particularly the default legend, that are a little rough around the edges in this release. Finally, some of the default behavior (e.g. the default range of point/line sizes) may change somewhat in future releases.

Updates to themes and palettes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Several changes have been made to the seaborn style themes, context scaling, and color palettes. In general the aim of these changes was to make the seaborn styles more consistent with the `style updates in matplotlib 2.0 `_ and to leverage some of the new style parameters for better implementation of some aspects of the seaborn styles. Here is a list of the changes:

- Reorganized and updated some :func:`axes_style`/:func:`plotting_context` parameters to take advantage of improvements in the matplotlib 2.0 update. The biggest change involves using several new parameters in the "style" spec while moving parameters that used to implement the corresponding aesthetics to the "context" spec. For example, axes spines and ticks are now off instead of having their width/length zeroed out for the darkgrid style. That means the width/length of these elements can now be scaled in different contexts. The effect is a more cohesive appearance of the plots, especially in larger contexts. These changes include only minimal support for the 1.x matplotlib series. Users who are stuck on matplotlib 1.5 but wish to use seaborn styling may want to use the seaborn parameters that can be accessed through the `matplotlib stylesheet interface `_.

- Updated the seaborn palettes ("deep", "muted", "colorblind", etc.) to correspond with the new 10-color matplotlib default. The legacy palettes are now available at "deep6", "muted6", "colorblind6", etc. Additionally, a few individual colors were tweaked for better consistency, aesthetics, and accessibility.
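The updated palettes can be compared directly; a sketch assuming seaborn >= 0.9 is installed:

```python
import seaborn as sns

# Named qualitative palettes now return all of their colors when no
# color count is requested; the legacy six-color versions remain
# available under suffixed names.
deep = sns.color_palette("deep")    # matches matplotlib's 10-color cycle
deep6 = sns.color_palette("deep6")  # legacy six-color palette
print(len(deep), len(deep6))  # prints: 10 6
```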
- Calling :func:`color_palette` (or :func:`set_palette`) with a named qualitative palette (i.e. one of the seaborn palettes, the colorbrewer qualitative palettes, or the matplotlib tableau-derived palettes) and no specified number of colors will return all of the colors in the palette. This means that for some palettes, the returned list will have a different length than it did in previous versions.

- Enhanced :func:`color_palette` to accept a parameterized specification of a cubehelix palette in a string, prefixed with ``"ch:"`` (e.g. ``"ch:-.1,.2,l=.7"``). Note that keyword arguments can be spelled out or referenced using only their first letter. Reversing the palette is accomplished by appending ``"_r"``, as with other matplotlib colormaps. This specification will be accepted by any seaborn function with a ``palette=`` parameter.

- Slightly increased the base font sizes in :func:`plotting_context` and increased the scaling factors for ``"talk"`` and ``"poster"`` contexts.

- Calling :func:`set` will now call :func:`set_color_codes` to re-assign the single letter color codes by default.

API changes
~~~~~~~~~~~

A few functions have been renamed or have had changes to their default parameters.

- The ``factorplot`` function has been renamed to :func:`catplot`. The new name ditches the original R-inflected terminology to use a name that is more consistent with terminology in pandas and in seaborn itself. This change should hopefully make :func:`catplot` easier to discover, and it should make more clear what its role is. ``factorplot`` still exists and will pass its arguments through to :func:`catplot` with a warning. It may be removed eventually, but the transition will be as gradual as possible.

- The other reason the ``factorplot`` name was changed was to ease another alteration: the default ``kind`` in :func:`catplot` is now ``"strip"`` (corresponding to :func:`stripplot`).
This plots a categorical scatter plot, which is usually a much better place to start and is more consistent with the default in :func:`relplot`. The old default style in ``factorplot`` (``"point"``, corresponding to :func:`pointplot`) remains available if you want to show a statistical estimation.

- The ``lvplot`` function has been renamed to :func:`boxenplot`. The "letter-value" terminology that was used to name the original kind of plot is obscure, and the abbreviation to ``lv`` did not help anything. The new name should make the plot more discoverable by describing its format (it plots multiple boxes, also known as "boxen"). As with ``factorplot``, the ``lvplot`` function still exists to provide a relatively smooth transition.

- Renamed the ``size`` parameter to ``height`` in multi-plot grid objects (:class:`FacetGrid`, :class:`PairGrid`, and :class:`JointGrid`) along with functions that use them (``factorplot``, :func:`lmplot`, :func:`pairplot`, and :func:`jointplot`) to avoid conflicts with the ``size`` parameter that is used in ``scatterplot`` and ``lineplot`` (necessary to make :func:`relplot` work) and also makes the meaning of the parameter a bit more clear.

- Changed the default diagonal plots in :func:`pairplot` to use :func:`kdeplot` when a ``"hue"`` dimension is used.

- Deprecated the statistical annotation component of :class:`JointGrid`. The method is still available but will be removed in a future version.

- Two older functions that were deprecated in earlier versions, ``coefplot`` and ``interactplot``, have undergone final removal from the code base.

Documentation improvements
~~~~~~~~~~~~~~~~~~~~~~~~~~

There has been some effort put into improving the documentation. The biggest change is that the :ref:`introduction to the library ` has been completely rewritten to provide much more information and, critically, examples.
In addition to the high-level motivation, the introduction also covers some important topics that are often sources of confusion, like the distinction between figure-level and axes-level functions, how datasets should be formatted for use in seaborn, and how to customize the appearance of the plots.

Other improvements have been made throughout, most notably a thorough re-write of the :ref:`categorical tutorial `.

Other small enhancements and bug fixes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Changed :func:`rugplot` to plot a matplotlib ``LineCollection`` instead of many ``Line2D`` objects, providing a big speedup for large arrays.

- Changed the default off-diagonal plots in :func:`pairplot` to use :func:`scatterplot`. (Note that using ``hue`` currently draws three separate scatterplots instead of using the hue semantic of the scatterplot function.)

- Changed color handling when using :func:`kdeplot` with two variables. The default colormap for the 2D density now follows the color cycle, and the function can use ``color`` and ``label`` kwargs, adding more flexibility and avoiding a warning when using with multi-plot grids.

- Added the ``subplot_kws`` parameter to :class:`PairGrid` for more flexibility.

- Removed a special case in :class:`PairGrid` that defaulted to drawing stacked histograms on the diagonal axes.

- Fixed :func:`jointplot`/:class:`JointGrid` and :func:`regplot` so that they now accept list inputs.

- Fixed a bug in :class:`FacetGrid` when using a single row/column level or using ``col_wrap=1``.

- Fixed functions that set axis limits so that they preserve auto-scaling state on matplotlib 2.0.

- Avoided an error when using matplotlib backends that cannot render a canvas (e.g. PDF).

- Changed the install infrastructure to explicitly declare dependencies in a way that ``pip`` is aware of. This means that ``pip install seaborn`` will now work in an empty environment. Additionally, the dependencies are specified with strict minimal versions.
- Updated the testing infrastructure to execute tests with `pytest `_ (although many individual tests still use nose assertions).

seaborn-0.11.2/doc/releases/v0.9.1.txt

v0.9.1 (January 2020)
---------------------

This is a minor release with a number of bug fixes and adaptations to changes in seaborn's dependencies. There are also several new features. This is the final version of seaborn that will support Python 2.7 or 3.5.

New features
~~~~~~~~~~~~

- Added more control over the arrangement of the elements drawn by :func:`clustermap` with the ``{dendrogram,colors}_ratio`` and ``cbar_pos`` parameters. Additionally, the default organization and scaling with different figure sizes has been improved.

- Added the ``corner`` option to :class:`PairGrid` and :func:`pairplot` to make a grid without the upper triangle of bivariate axes.

- Added the ability to seed the random number generator for the bootstrap used to define error bars in several plots. Relevant functions now have a ``seed`` parameter, which can take either a fixed seed (typically an ``int``) or a numpy random number generator object (either the newer :class:`numpy.random.Generator` or the older :class:`numpy.random.mtrand.RandomState`).

- Generalized the idea of "diagonal" axes in :class:`PairGrid` to any axes that share an x and y variable.

- In :class:`PairGrid`, the ``hue`` variable is now excluded from the default list of variables that make up the rows and columns of the grid.

- Exposed the ``layout_pad`` parameter in :class:`PairGrid` and set a smaller default than what matplotlib sets for more efficient use of space in dense grids.

- It is now possible to force a categorical interpretation of the ``hue`` variable in a relational plot by passing the name of a categorical palette (e.g. ``"deep"`` or ``"Set2"``). This complements the (previously supported) option of passing a list/dict of colors.
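A sketch of forcing the categorical interpretation (made-up toy data; assumes seaborn >= 0.9.1 with matplotlib and pandas installed):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import pandas as pd
import seaborn as sns

df = pd.DataFrame({"x": range(6), "y": range(6), "cond": [0, 1, 2, 0, 1, 2]})

# cond is numeric, so it would normally get a continuous color mapping;
# naming a qualitative palette forces discrete colors and a categorical
# legend instead.
ax = sns.scatterplot(data=df, x="x", y="y", hue="cond", palette="deep")
```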
- Added the ``tree_kws`` parameter to :func:`clustermap` to control the properties of the lines in the dendrogram.

- Added the ability to pass hierarchical label names to the :class:`FacetGrid` legend, which also fixes a bug in :func:`relplot` when the same label appeared in different semantics.

- Improved support for grouping observations based on pandas index information in categorical plots.

Bug fixes and adaptations
~~~~~~~~~~~~~~~~~~~~~~~~~

- Avoided an error when singular data is passed to :func:`kdeplot`, issuing a warning instead. This makes :func:`pairplot` more robust.

- Fixed the behavior of ``dropna`` in :class:`PairGrid` to properly exclude null datapoints from each plot when set to ``True``.

- Fixed an issue where :func:`regplot` could interfere with other axes in a multi-plot matplotlib figure.

- Semantic variables with a ``category`` data type will always be treated as categorical in relational plots.

- Avoided a warning about color specifications that arose from :func:`boxenplot` on newer matplotlibs.

- Adapted to a change in how matplotlib scales axis margins, which caused multiple calls to :func:`regplot` with ``truncate=False`` to progressively expand the x axis limits. Because there are currently limitations on how autoscaling works in matplotlib, the default value for ``truncate`` in seaborn has also been changed to ``True``.

- Relational plots no longer error when hue/size data are inferred to be numeric but stored with a string datatype.

- Relational plots now consider semantics with only a single value that can be interpreted as boolean (0 or 1) to be categorical, not numeric.

- Relational plots now handle list or dict specifications for ``sizes`` correctly.

- Fixed an issue in :func:`pointplot` where missing levels of a hue variable would cause an exception after a recent update in matplotlib.

- Fixed a bug when setting the rotation of x tick labels on a :class:`FacetGrid`.
- Fixed a bug where values would be excluded from categorical plots when only one variable was a pandas ``Series`` with a non-default index.

- Fixed a bug when using ``Series`` objects as arguments for ``x_partial`` or ``y_partial`` in :func:`regplot`.

- Fixed a bug when passing a ``norm`` object and using color annotations in :func:`clustermap`.

- Fixed a bug where annotations were not rearranged to match the clustering in :func:`clustermap`.

- Fixed a bug when trying to call :func:`set` while specifying a list of colors for the palette.

- Fixed a bug when resetting the color code short-hands to the matplotlib default.

- Avoided errors from stricter type checking in upcoming ``numpy`` changes.

- Avoided error/warning in :func:`lineplot` when plotting categoricals with empty levels.

- Allowed ``colors`` to be passed through to a bivariate :func:`kdeplot`.

- Standardized the output format of custom color palette functions.

- Fixed a bug where legends for numerical variables in a relational plot could show a surprisingly large number of decimal places.

- Improved robustness to missing values in distribution plots.

- Made it possible to specify the location of the :class:`FacetGrid` legend using matplotlib keyword arguments.

seaborn-0.11.2/doc/requirements.txt

sphinx==3.3.1
sphinx_bootstrap_theme==0.7.1
numpydoc
nbconvert
ipykernel
sphinx-issues

seaborn-0.11.2/doc/sphinxext/gallery_generator.py

"""
Sphinx plugin to run example scripts and create a gallery page.

Lightly modified from the mpld3 project.
""" import os import os.path as op import re import glob import token import tokenize import shutil import warnings import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt # noqa: E402 # Python 3 has no execfile def execfile(filename, globals=None, locals=None): with open(filename, "rb") as fp: exec(compile(fp.read(), filename, 'exec'), globals, locals) RST_TEMPLATE = """ .. currentmodule:: seaborn .. _{sphinx_tag}: {docstring} .. image:: {img_file} **seaborn components used:** {components} .. raw:: html
.. literalinclude:: {fname} :lines: {end_line}- .. raw:: html
""" INDEX_TEMPLATE = """ .. raw:: html .. _{sphinx_tag}: Example gallery =============== {toctree} {contents} .. raw:: html
""" def create_thumbnail(infile, thumbfile, width=275, height=275, cx=0.5, cy=0.5, border=4): baseout, extout = op.splitext(thumbfile) im = matplotlib.image.imread(infile) rows, cols = im.shape[:2] x0 = int(cx * cols - .5 * width) y0 = int(cy * rows - .5 * height) xslice = slice(x0, x0 + width) yslice = slice(y0, y0 + height) thumb = im[yslice, xslice] thumb[:border, :, :3] = thumb[-border:, :, :3] = 0 thumb[:, :border, :3] = thumb[:, -border:, :3] = 0 dpi = 100 fig = plt.figure(figsize=(width / dpi, height / dpi), dpi=dpi) ax = fig.add_axes([0, 0, 1, 1], aspect='auto', frameon=False, xticks=[], yticks=[]) if all(thumb.shape): ax.imshow(thumb, aspect='auto', resample=True, interpolation='bilinear') else: warnings.warn( f"Bad thumbnail crop. {thumbfile} will be empty." ) fig.savefig(thumbfile, dpi=dpi) return fig def indent(s, N=4): """indent a string""" return s.replace('\n', '\n' + N * ' ') class ExampleGenerator(object): """Tools for generating an example page from a file""" def __init__(self, filename, target_dir): self.filename = filename self.target_dir = target_dir self.thumbloc = .5, .5 self.extract_docstring() with open(filename, "r") as fid: self.filetext = fid.read() outfilename = op.join(target_dir, self.rstfilename) # Only actually run it if the output RST file doesn't # exist or it was modified less recently than the example file_mtime = op.getmtime(filename) if not op.exists(outfilename) or op.getmtime(outfilename) < file_mtime: self.exec_file() else: print("skipping {0}".format(self.filename)) @property def dirname(self): return op.split(self.filename)[0] @property def fname(self): return op.split(self.filename)[1] @property def modulename(self): return op.splitext(self.fname)[0] @property def pyfilename(self): return self.modulename + '.py' @property def rstfilename(self): return self.modulename + ".rst" @property def htmlfilename(self): return self.modulename + '.html' @property def pngfilename(self): pngfile = self.modulename + '.png' return 
"_images/" + pngfile @property def thumbfilename(self): pngfile = self.modulename + '_thumb.png' return pngfile @property def sphinxtag(self): return self.modulename @property def pagetitle(self): return self.docstring.strip().split('\n')[0].strip() @property def plotfunc(self): match = re.search(r"sns\.(.+plot)\(", self.filetext) if match: return match.group(1) match = re.search(r"sns\.(.+map)\(", self.filetext) if match: return match.group(1) match = re.search(r"sns\.(.+Grid)\(", self.filetext) if match: return match.group(1) return "" @property def components(self): objects = re.findall(r"sns\.(\w+)\(", self.filetext) refs = [] for obj in objects: if obj[0].isupper(): refs.append(f":class:`{obj}`") else: refs.append(f":func:`{obj}`") return ", ".join(refs) def extract_docstring(self): """ Extract a module-level docstring """ lines = open(self.filename).readlines() start_row = 0 if lines[0].startswith('#!'): lines.pop(0) start_row = 1 docstring = '' first_par = '' line_iter = lines.__iter__() tokens = tokenize.generate_tokens(lambda: next(line_iter)) for tok_type, tok_content, _, (erow, _), _ in tokens: tok_type = token.tok_name[tok_type] if tok_type in ('NEWLINE', 'COMMENT', 'NL', 'INDENT', 'DEDENT'): continue elif tok_type == 'STRING': docstring = eval(tok_content) # If the docstring is formatted with several paragraphs, # extract the first one: paragraphs = '\n'.join(line.rstrip() for line in docstring.split('\n') ).split('\n\n') if len(paragraphs) > 0: first_par = paragraphs[0] break thumbloc = None for i, line in enumerate(docstring.split("\n")): m = re.match(r"^_thumb: (\.\d+),\s*(\.\d+)", line) if m: thumbloc = float(m.group(1)), float(m.group(2)) break if thumbloc is not None: self.thumbloc = thumbloc docstring = "\n".join([l for l in docstring.split("\n") if not l.startswith("_thumb")]) self.docstring = docstring self.short_desc = first_par self.end_line = erow + 1 + start_row def exec_file(self): print("running {0}".format(self.filename)) 
plt.close('all') my_globals = {'pl': plt, 'plt': plt} execfile(self.filename, my_globals) fig = plt.gcf() fig.canvas.draw() pngfile = op.join(self.target_dir, self.pngfilename) thumbfile = op.join("example_thumbs", self.thumbfilename) self.html = "" % self.pngfilename fig.savefig(pngfile, dpi=75, bbox_inches="tight") cx, cy = self.thumbloc create_thumbnail(pngfile, thumbfile, cx=cx, cy=cy) def toctree_entry(self): return " ./%s\n\n" % op.splitext(self.htmlfilename)[0] def contents_entry(self): return (".. raw:: html\n\n" " \n\n" "\n\n" "".format(self.htmlfilename, self.thumbfilename, self.plotfunc)) def main(app): static_dir = op.join(app.builder.srcdir, '_static') target_dir = op.join(app.builder.srcdir, 'examples') image_dir = op.join(app.builder.srcdir, 'examples/_images') thumb_dir = op.join(app.builder.srcdir, "example_thumbs") source_dir = op.abspath(op.join(app.builder.srcdir, '..', 'examples')) if not op.exists(static_dir): os.makedirs(static_dir) if not op.exists(target_dir): os.makedirs(target_dir) if not op.exists(image_dir): os.makedirs(image_dir) if not op.exists(thumb_dir): os.makedirs(thumb_dir) if not op.exists(source_dir): os.makedirs(source_dir) banner_data = [] toctree = ("\n\n" ".. 
toctree::\n" " :hidden:\n\n") contents = "\n\n" # Write individual example files for filename in sorted(glob.glob(op.join(source_dir, "*.py"))): ex = ExampleGenerator(filename, target_dir) banner_data.append({"title": ex.pagetitle, "url": op.join('examples', ex.htmlfilename), "thumb": op.join(ex.thumbfilename)}) shutil.copyfile(filename, op.join(target_dir, ex.pyfilename)) output = RST_TEMPLATE.format(sphinx_tag=ex.sphinxtag, docstring=ex.docstring, end_line=ex.end_line, components=ex.components, fname=ex.pyfilename, img_file=ex.pngfilename) with open(op.join(target_dir, ex.rstfilename), 'w') as f: f.write(output) toctree += ex.toctree_entry() contents += ex.contents_entry() if len(banner_data) < 10: banner_data = (4 * banner_data)[:10] # write index file index_file = op.join(target_dir, 'index.rst') with open(index_file, 'w') as index: index.write(INDEX_TEMPLATE.format(sphinx_tag="example_gallery", toctree=toctree, contents=contents)) def setup(app): app.connect('builder-inited', main) seaborn-0.11.2/doc/tools/000077500000000000000000000000001410631356500151355ustar00rootroot00000000000000seaborn-0.11.2/doc/tools/extract_examples.py000066400000000000000000000033571410631356500210670ustar00rootroot00000000000000"""Turn the examples section of a function docstring into a notebook.""" import re import sys import pydoc import seaborn from seaborn.external.docscrape import NumpyDocString import nbformat def line_type(line): if line.startswith(" "): return "code" else: return "markdown" def add_cell(nb, lines, cell_type): cell_objs = { "code": nbformat.v4.new_code_cell, "markdown": nbformat.v4.new_markdown_cell, } text = "\n".join(lines) cell = cell_objs[cell_type](text) nb["cells"].append(cell) if __name__ == "__main__": _, name = sys.argv # Parse the docstring and get the examples section obj = getattr(seaborn, name) if obj.__class__.__name__ != "function": obj = obj.__init__ lines = NumpyDocString(pydoc.getdoc(obj))["Examples"] # Remove code indentation, the prompt, 
and mpl return variable pat = re.compile(r"\s{4}[>\.]{3} (ax = ){0,1}(g = ){0,1}") nb = nbformat.v4.new_notebook() # We always start with at least one line of text cell_type = "markdown" cell = [] for line in lines: # Ignore matplotlib plot directive if ".. plot" in line or ":context:" in line: continue # Ignore blank lines if not line: continue if line_type(line) != cell_type: # We are on the first line of the next cell, # so package up the last cell add_cell(nb, cell, cell_type) cell_type = line_type(line) cell = [] if line_type(line) == "code": line = re.sub(pat, "", line) cell.append(line) # Package the final cell add_cell(nb, cell, cell_type) nbformat.write(nb, f"docstrings/{name}.ipynb") seaborn-0.11.2/doc/tools/generate_logos.py000066400000000000000000000155061410631356500205130ustar00rootroot00000000000000import numpy as np import seaborn as sns from matplotlib import patches import matplotlib.pyplot as plt from scipy.signal import gaussian from scipy.spatial import distance XY_CACHE = {} STATIC_DIR = "_static" plt.rcParams["savefig.dpi"] = 300 def poisson_disc_sample(array_radius, pad_radius, candidates=100, d=2, seed=None): """Find positions using poisson-disc sampling.""" # See http://bost.ocks.org/mike/algorithms/ rng = np.random.default_rng(seed) uniform = rng.uniform randint = rng.integers # Cache the results key = array_radius, pad_radius, seed if key in XY_CACHE: return XY_CACHE[key] # Start at a fixed point we know will work start = np.zeros(d) samples = [start] queue = [start] while queue: # Pick a sample to expand from s_idx = randint(len(queue)) s = queue[s_idx] for i in range(candidates): # Generate a candidate from this sample coords = uniform(s - 2 * pad_radius, s + 2 * pad_radius, d) # Check the three conditions to accept the candidate in_array = np.sqrt(np.sum(coords ** 2)) < array_radius in_ring = np.all(distance.cdist(samples, [coords]) > pad_radius) if in_array and in_ring: # Accept the candidate samples.append(coords) 
queue.append(coords) break if (i + 1) == candidates: # We've exhausted the particular sample queue.pop(s_idx) samples = np.array(samples) XY_CACHE[key] = samples return samples def logo( ax, color_kws, ring, ring_idx, edge, pdf_means, pdf_sigma, dy, y0, w, h, hist_mean, hist_sigma, hist_y0, lw, skip, scatter, pad, scale, ): # Square, invisible axes with specified limits to center the logo ax.set(xlim=(35 + w, 95 - w), ylim=(-3, 53)) ax.set_axis_off() ax.set_aspect('equal') # Magic numbers for the logo circle radius = 27 center = 65, 25 # Full x and y grids for a gaussian curve x = np.arange(101) y = gaussian(x.size, pdf_sigma) x0 = 30 # Magic number xx = x[x0:] # Vertical distances between the PDF curves n = len(pdf_means) dys = np.linspace(0, (n - 1) * dy, n) - (n * dy / 2) dys -= dys.mean() # Compute the PDF curves with vertical offsets pdfs = [h * (y[x0 - m:-m] + y0 + dy) for m, dy in zip(pdf_means, dys)] # Add in constants to fill from bottom and to top pdfs.insert(0, np.full(xx.shape, -h)) pdfs.append(np.full(xx.shape, 50 + h)) # Color gradient colors = sns.cubehelix_palette(n + 1 + bool(hist_mean), **color_kws) # White fill between curves and around edges bg = patches.Circle( center, radius=radius - 1 + ring, color="white", transform=ax.transData, zorder=0, ) ax.add_artist(bg) # Clipping artist (not shown) for the interior elements fg = patches.Circle(center, radius=radius - edge, transform=ax.transData) # Ring artist to surround the circle (optional) if ring: wedge = patches.Wedge( center, r=radius + edge / 2, theta1=0, theta2=360, width=edge / 2, transform=ax.transData, color=colors[ring_idx], alpha=1 ) ax.add_artist(wedge) # Add histogram bars if hist_mean: hist_color = colors.pop(0) hist_y = gaussian(x.size, hist_sigma) hist = 1.1 * h * (hist_y[x0 - hist_mean:-hist_mean] + hist_y0) dx = x[skip] - x[0] hist_x = xx[::skip] hist_h = h + hist[::skip] # Magic number to avoid tiny sliver of bar on edge use = hist_x < center[0] + radius * .5 bars = ax.bar( 
hist_x[use], hist_h[use], bottom=-h, width=dx, align="edge", color=hist_color, ec="w", lw=lw, zorder=3, ) for bar in bars: bar.set_clip_path(fg) # Add each smooth PDF "wave" for i, pdf in enumerate(pdfs[1:], 1): u = ax.fill_between(xx, pdfs[i - 1] + w, pdf, color=colors[i - 1], lw=0) u.set_clip_path(fg) # Add scatterplot in top wave area if scatter: seed = sum(map(ord, "seaborn logo")) xy = poisson_disc_sample(radius - edge - ring, pad, seed=seed) clearance = distance.cdist(xy + center, np.c_[xx, pdfs[-2]]) use = clearance.min(axis=1) > pad / 1.8 x, y = xy[use].T sizes = (x - y) % 9 points = ax.scatter( x + center[0], y + center[1], s=scale * (10 + sizes * 5), zorder=5, color=colors[-1], ec="w", lw=scale / 2, ) path = u.get_paths()[0] points.set_clip_path(path, transform=u.get_transform()) u.set_visible(False) def savefig(fig, shape, variant): fig.subplots_adjust(0, 0, 1, 1, 0, 0) facecolor = (1, 1, 1, 1) if bg == "white" else (1, 1, 1, 0) for ext in ["png", "svg"]: fig.savefig(f"{STATIC_DIR}/logo-{shape}-{variant}bg.{ext}", facecolor=facecolor) if __name__ == "__main__": for bg in ["white", "light", "dark"]: color_idx = -1 if bg == "dark" else 0 kwargs = dict( color_kws=dict(start=.3, rot=-.4, light=.8, dark=.3, reverse=True), ring=True, ring_idx=color_idx, edge=1, pdf_means=[8, 24], pdf_sigma=16, dy=1, y0=1.8, w=.5, h=12, hist_mean=2, hist_sigma=10, hist_y0=.6, lw=1, skip=6, scatter=True, pad=1.8, scale=.5, ) color = sns.cubehelix_palette(**kwargs["color_kws"])[color_idx] # ------------------------------------------------------------------------ # fig, ax = plt.subplots(figsize=(2, 2), facecolor="w", dpi=100) logo(ax, **kwargs) savefig(fig, "mark", bg) # ------------------------------------------------------------------------ # fig, axs = plt.subplots(1, 2, figsize=(8, 2), dpi=100, gridspec_kw=dict(width_ratios=[1, 3])) logo(axs[0], **kwargs) font = { "family": "avenir", "color": color, "weight": "regular", "size": 120, } axs[1].text(.01, .35, "seaborn", 
ha="left", va="center", fontdict=font, transform=axs[1].transAxes) axs[1].set_axis_off() savefig(fig, "wide", bg) # ------------------------------------------------------------------------ # fig, axs = plt.subplots(2, 1, figsize=(2, 2.5), dpi=100, gridspec_kw=dict(height_ratios=[4, 1])) logo(axs[0], **kwargs) font = { "family": "avenir", "color": color, "weight": "regular", "size": 34, } axs[1].text(.5, 1, "seaborn", ha="center", va="top", fontdict=font, transform=axs[1].transAxes) axs[1].set_axis_off() savefig(fig, "tall", bg) seaborn-0.11.2/doc/tools/nb_to_doc.py000077500000000000000000000136141410631356500174450ustar00rootroot00000000000000#! /usr/bin/env python """Execute a .ipynb file, write out a processed .rst and clean .ipynb. Some functions in this script were copied from the nbstripout tool: Copyright (c) 2015 Min RK, Florian Rathgeber, Michael McNeil Forbes 2019 Casper da Costa-Luis Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
""" import os import sys import nbformat from nbconvert import RSTExporter from nbconvert.preprocessors import ( ExecutePreprocessor, TagRemovePreprocessor, ExtractOutputPreprocessor ) from traitlets.config import Config class MetadataError(Exception): pass def pop_recursive(d, key, default=None): """dict.pop(key) where `key` is a `.`-delimited list of nested keys. >>> d = {'a': {'b': 1, 'c': 2}} >>> pop_recursive(d, 'a.c') 2 >>> d {'a': {'b': 1}} """ nested = key.split('.') current = d for k in nested[:-1]: if hasattr(current, 'get'): current = current.get(k, {}) else: return default if not hasattr(current, 'pop'): return default return current.pop(nested[-1], default) def strip_output(nb): """ Strip the outputs, execution count/prompt number and miscellaneous metadata from a notebook object, unless specified to keep either the outputs or counts. """ keys = {'metadata': [], 'cell': {'metadata': ["execution"]}} nb.metadata.pop('signature', None) nb.metadata.pop('widgets', None) for field in keys['metadata']: pop_recursive(nb.metadata, field) for cell in nb.cells: # Remove the outputs, unless directed otherwise if 'outputs' in cell: cell['outputs'] = [] # Remove the prompt_number/execution_count, unless directed otherwise if 'prompt_number' in cell: cell['prompt_number'] = None if 'execution_count' in cell: cell['execution_count'] = None # Always remove this metadata for output_style in ['collapsed', 'scrolled']: if output_style in cell.metadata: cell.metadata[output_style] = False if 'metadata' in cell: for field in ['collapsed', 'scrolled', 'ExecuteTime']: cell.metadata.pop(field, None) for (extra, fields) in keys['cell'].items(): if extra in cell: for field in fields: pop_recursive(getattr(cell, extra), field) return nb if __name__ == "__main__": # Get the desired ipynb file path and parse into components _, fpath = sys.argv basedir, fname = os.path.split(fpath) fstem = fname[:-6] # Read the notebook print(f"Executing {fpath} ...", end=" ", flush=True) with 
open(fpath) as f: nb = nbformat.read(f, as_version=4) # Run the notebook kernel = os.environ.get("NB_KERNEL", None) if kernel is None: kernel = nb["metadata"]["kernelspec"]["name"] ep = ExecutePreprocessor( timeout=600, kernel_name=kernel, extra_arguments=["--InlineBackend.rc={'figure.dpi': 88}"] ) ep.preprocess(nb, {"metadata": {"path": basedir}}) # Remove plain text execution result outputs for cell in nb.get("cells", {}): if "show-output" in cell["metadata"].get("tags", []): continue fields = cell.get("outputs", []) for field in fields: if field["output_type"] == "execute_result": data_keys = field["data"].keys() for key in list(data_keys): if key == "text/plain": field["data"].pop(key) if not field["data"]: fields.remove(field) # Convert to .rst formats exp = RSTExporter() c = Config() c.TagRemovePreprocessor.remove_cell_tags = {"hide"} c.TagRemovePreprocessor.remove_input_tags = {"hide-input"} c.TagRemovePreprocessor.remove_all_outputs_tags = {"hide-output"} c.ExtractOutputPreprocessor.output_filename_template = \ f"{fstem}_files/{fstem}_" + "{cell_index}_{index}{extension}" exp.register_preprocessor(TagRemovePreprocessor(config=c), True) exp.register_preprocessor(ExtractOutputPreprocessor(config=c), True) body, resources = exp.from_notebook_node(nb) # Clean the output on the notebook and save a .ipynb back to disk print(f"Writing clean {fpath} ... 
", end=" ", flush=True) nb = strip_output(nb) with open(fpath, "wt") as f: nbformat.write(nb, f) # Write the .rst file rst_path = os.path.join(basedir, f"{fstem}.rst") print(f"Writing {rst_path}") with open(rst_path, "w") as f: f.write(body) # Write the individual image outputs imdir = os.path.join(basedir, f"{fstem}_files") if not os.path.exists(imdir): os.mkdir(imdir) for imname, imdata in resources["outputs"].items(): if imname.startswith(fstem): impath = os.path.join(basedir, f"{imname}") with open(impath, "wb") as f: f.write(imdata) seaborn-0.11.2/doc/tools/set_nb_kernels.py000066400000000000000000000010341410631356500205020ustar00rootroot00000000000000"""Recursively set the kernel name for all jupyter notebook files.""" import sys from glob import glob import nbformat if __name__ == "__main__": _, kernel_name = sys.argv nb_paths = glob("./**/*.ipynb", recursive=True) for path in nb_paths: with open(path, "r") as f: nb = nbformat.read(f, as_version=4) nb["metadata"]["kernelspec"]["name"] = kernel_name nb["metadata"]["kernelspec"]["display_name"] = kernel_name with open(path, "w") as f: nbformat.write(nb, f) seaborn-0.11.2/doc/tutorial.rst000066400000000000000000000116371410631356500164020ustar00rootroot00000000000000.. _tutorial: User guide and tutorial =============================== .. raw:: html
API overview
------------

.. toctree::
   :maxdepth: 2

   tutorial/function_overview
   tutorial/data_structure

Plotting functions
------------------

.. toctree::
   :maxdepth: 2

   tutorial/relational
   tutorial/distributions
   tutorial/categorical
   tutorial/regression

Multi-plot grids
----------------

.. toctree::
   :maxdepth: 2

   tutorial/axis_grids

Plot aesthetics
---------------

.. toctree::
   :maxdepth: 2

   tutorial/aesthetics
   tutorial/color_palettes
seaborn-0.11.2/doc/tutorial/000077500000000000000000000000001410631356500156405ustar00rootroot00000000000000seaborn-0.11.2/doc/tutorial/Makefile000066400000000000000000000003061410631356500172770ustar00rootroot00000000000000rst_files := $(patsubst %.ipynb,%.rst,$(wildcard *.ipynb)) tutorial: ${rst_files} %.rst: %.ipynb ../tools/nb_to_doc.py $*.ipynb clean: rm -rf *.rst rm -rf *_files/ rm -rf .ipynb_checkpoints/ seaborn-0.11.2/doc/tutorial/aesthetics.ipynb000066400000000000000000000276451410631356500210550ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _aesthetics_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Controlling figure aesthetics\n", "=============================\n", "\n", ".. raw:: html\n", "\n", "
\n" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Drawing attractive figures is important. When making figures for yourself, as you explore a dataset, it's nice to have plots that are pleasant to look at. Visualizations are also central to communicating quantitative insights to an audience, and in that setting it's even more necessary to have figures that catch the attention and draw a viewer in.\n", "\n", "Matplotlib is highly customizable, but it can be hard to know what settings to tweak to achieve an attractive plot. Seaborn comes with a number of customized themes and a high-level interface for controlling the look of matplotlib figures." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "np.random.seed(sum(map(ord, \"aesthetics\")))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Let's define a simple function to plot some offset sine waves, which will help us see the different stylistic parameters we can tweak." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def sinplot(flip=1):\n", " x = np.linspace(0, 14, 100)\n", " for i in range(1, 7):\n", " plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This is what the plot looks like with matplotlib defaults:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To switch to seaborn defaults, simply call the :func:`set_theme` function." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_theme()\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "(Note that in versions of seaborn prior to 0.8, :func:`set_theme` was called on import. On later versions, it must be explicitly invoked).\n", "\n", "Seaborn splits matplotlib parameters into two independent groups. The first group sets the aesthetic style of the plot, and the second scales various elements of the figure so that it can be easily incorporated into different contexts.\n", "\n", "The interface for manipulating these parameters are two pairs of functions. To control the style, use the :func:`axes_style` and :func:`set_style` functions. To scale the plot, use the :func:`plotting_context` and :func:`set_context` functions. In both cases, the first function returns a dictionary of parameters and the second sets the matplotlib defaults.\n", "\n", ".. _axes_style:\n", "\n", "Seaborn figure styles\n", "---------------------\n", "\n", "There are five preset seaborn themes: ``darkgrid``, ``whitegrid``, ``dark``, ``white``, and ``ticks``. They are each suited to different applications and personal preferences. The default theme is ``darkgrid``. As mentioned above, the grid helps the plot serve as a lookup table for quantitative information, and the white-on grey helps to keep the grid from competing with lines that represent data. The ``whitegrid`` theme is similar, but it is better suited to plots with heavy data elements:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"whitegrid\")\n", "data = np.random.normal(size=(20, 6)) + np.arange(6) / 2\n", "sns.boxplot(data=data);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "For many plots, (especially for settings like talks, where you primarily want to use figures to provide impressions of patterns in the data), the grid is less necessary." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"dark\")\n", "sinplot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"white\")\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Sometimes you might want to give a little extra structure to the plots, which is where ticks come in handy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"ticks\")\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _remove_spines:\n", "\n", "Removing axes spines\n", "--------------------\n", "\n", "Both the ``white`` and ``ticks`` styles can benefit from removing the top and right axes spines, which are not needed. The seaborn function :func:`despine` can be called to remove them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sinplot()\n", "sns.despine()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Some plots benefit from offsetting the spines away from the data, which can also be done when calling :func:`despine`. When the ticks don't cover the whole range of the axis, the ``trim`` parameter will limit the range of the surviving spines." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots()\n", "sns.violinplot(data=data)\n", "sns.despine(offset=10, trim=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can also control which spines are removed with additional arguments to :func:`despine`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"whitegrid\")\n", "sns.boxplot(data=data, palette=\"deep\")\n", "sns.despine(left=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Temporarily setting figure style\n", "--------------------------------\n", "\n", "Although it's easy to switch back and forth, you can also use the :func:`axes_style` function in a ``with`` statement to temporarily set plot parameters. This also allows you to make figures with differently-styled axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f = plt.figure(figsize=(6, 6))\n", "gs = f.add_gridspec(2, 2)\n", "\n", "with sns.axes_style(\"darkgrid\"):\n", " ax = f.add_subplot(gs[0, 0])\n", " sinplot()\n", " \n", "with sns.axes_style(\"white\"):\n", " ax = f.add_subplot(gs[0, 1])\n", " sinplot()\n", "\n", "with sns.axes_style(\"ticks\"):\n", " ax = f.add_subplot(gs[1, 0])\n", " sinplot()\n", "\n", "with sns.axes_style(\"whitegrid\"):\n", " ax = f.add_subplot(gs[1, 1])\n", " sinplot()\n", " \n", "f.tight_layout()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Overriding elements of the seaborn styles\n", "-----------------------------------------\n", "\n", "If you want to customize the seaborn styles, you can pass a dictionary of parameters to the ``rc`` argument of :func:`axes_style` and :func:`set_style`. Note that you can only override the parameters that are part of the style definition through this method. 
(However, the higher-level :func:`set_theme` function takes a dictionary of any matplotlib parameters).\n", "\n", "If you want to see what parameters are included, you can just call the function with no arguments, which will return the current settings:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.axes_style()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can then set different versions of these parameters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"darkgrid\", {\"axes.facecolor\": \".9\"})\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _plotting_context:\n", "\n", "Scaling plot elements\n", "---------------------\n", "\n", "A separate set of parameters control the scale of plot elements, which should let you use the same code to make plots that are suited for use in settings where larger or smaller plots are appropriate.\n", "\n", "First let's reset the default parameters by calling :func:`set_theme`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_theme()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The four preset contexts, in order of relative size, are ``paper``, ``notebook``, ``talk``, and ``poster``. The ``notebook`` style is the default, and was used in the plots above." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"paper\")\n", "sinplot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"talk\")\n", "sinplot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"poster\")\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Most of what you now know about the style functions should transfer to the context functions.\n", "\n", "You can call :func:`set_context` with one of these names to set the parameters, and you can override the parameters by providing a dictionary of parameter values.\n", "\n", "You can also independently scale the size of the font elements when changing the context. (This option is also available through the top-level :func:`set` function)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"notebook\", font_scale=1.5, rc={\"lines.linewidth\": 2.5})\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Similarly, you can temporarily control the scale of figures nested under a ``with`` statement.\n", "\n", "Both the style and the context can be quickly configured with the :func:`set` function. This function also sets the default color palette, but that will be covered in more detail in the :ref:`next section ` of the tutorial." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
\n" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/tutorial/axis_grids.ipynb000066400000000000000000000473571410631356500210570ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _grid_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Building structured multi-plot grids\n", "====================================\n", "\n", ".. raw:: html\n", "\n", "
\n" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When exploring multi-dimensional data, a useful approach is to draw multiple instances of the same plot on different subsets of your dataset. This technique is sometimes called either \"lattice\" or \"trellis\" plotting, and it is related to the idea of `\"small multiples\" `_. It allows a viewer to quickly extract a large amount of information about a complex dataset. Matplotlib offers good support for making figures with multiple axes; seaborn builds on top of this to directly link the structure of the plot to the structure of your dataset.\n", "\n", "The :doc:`figure-level ` functions are built on top of the objects discussed in this chapter of the tutorial. In most cases, you will want to work with those functions. They take care of some important bookkeeping that synchronizes the multiple plots in each grid. This chapter explains how the underlying objects work, which may be useful for advanced applications." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import seaborn as sns\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "sns.set_theme(style=\"ticks\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "np.random.seed(sum(map(ord, \"axis_grids\")))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _facet_grid:\n", "\n", "Conditional small multiples\n", "---------------------------\n", "\n", "The :class:`FacetGrid` class is useful when you want to visualize the distribution of a variable or the relationship between multiple variables separately within subsets of your dataset. A :class:`FacetGrid` can be drawn with up to three dimensions: ``row``, ``col``, and ``hue``. 
The first two have obvious correspondence with the resulting array of axes; think of the hue variable as a third dimension along a depth axis, where different levels are plotted with different colors.\n", "\n", "Each of :func:`relplot`, :func:`displot`, :func:`catplot`, and :func:`lmplot` uses this object internally, and they return the object when they are finished so that it can be used for further tweaking.\n", "\n", "The class is used by initializing a :class:`FacetGrid` object with a dataframe and the names of the variables that will form the row, column, or hue dimensions of the grid. These variables should be categorical or discrete, and then the data at each level of the variable will be used for a facet along that axis. For example, say we wanted to examine differences between lunch and dinner in the ``tips`` dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "g = sns.FacetGrid(tips, col=\"time\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Initializing the grid like this sets up the matplotlib figure and axes, but doesn't draw anything on them.\n", "\n", "The main approach for visualizing data on this grid is with the :meth:`FacetGrid.map` method. Provide it with a plotting function and the name(s) of variable(s) in the dataframe to plot. Let's look at the distribution of tips in each of these subsets, using a histogram:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"time\")\n", "g.map(sns.histplot, \"tip\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This function will draw the figure and annotate the axes, hopefully producing a finished plot in one step. To make a relational plot, just pass multiple variable names.
You can also provide keyword arguments, which will be passed to the plotting function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"sex\", hue=\"smoker\")\n", "g.map(sns.scatterplot, \"total_bill\", \"tip\", alpha=.7)\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "There are several options for controlling the look of the grid that can be passed to the class constructor." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, row=\"smoker\", col=\"time\", margin_titles=True)\n", "g.map(sns.regplot, \"size\", \"total_bill\", color=\".3\", fit_reg=False, x_jitter=.1)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Note that ``margin_titles`` isn't formally supported by the matplotlib API, and may not work well in all cases. In particular, it currently can't be used with a legend that lies outside of the plot.\n", "\n", "The size of the figure is set by providing the height of *each* facet, along with the aspect ratio:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"day\", height=4, aspect=.5)\n", "g.map(sns.barplot, \"sex\", \"total_bill\", order=[\"Male\", \"Female\"])" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The default ordering of the facets is derived from the information in the DataFrame. If the variable used to define facets has a categorical type, then the order of the categories is used. Otherwise, the facets will be in the order of appearance of the category levels. 
It is possible, however, to specify an ordering of any facet dimension with the appropriate ``*_order`` parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ordered_days = tips.day.value_counts().index\n", "g = sns.FacetGrid(tips, row=\"day\", row_order=ordered_days,\n", " height=1.7, aspect=4,)\n", "g.map(sns.kdeplot, \"total_bill\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Any seaborn color palette (i.e., something that can be passed to :func:`color_palette`) can be provided. You can also use a dictionary that maps the names of values in the ``hue`` variable to valid matplotlib colors:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pal = dict(Lunch=\"seagreen\", Dinner=\".7\")\n", "g = sns.FacetGrid(tips, hue=\"time\", palette=pal, height=5)\n", "g.map(sns.scatterplot, \"total_bill\", \"tip\", s=100, alpha=.5)\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If you have many levels of one variable, you can plot it along the columns but \"wrap\" them so that they span multiple rows. When doing this, you cannot use a ``row`` variable." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "attend = sns.load_dataset(\"attention\").query(\"subject <= 12\")\n", "g = sns.FacetGrid(attend, col=\"subject\", col_wrap=4, height=2, ylim=(0, 10))\n", "g.map(sns.pointplot, \"solutions\", \"score\", order=[1, 2, 3], color=\".3\", ci=None)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Once you've drawn a plot using :meth:`FacetGrid.map` (which can be called multiple times), you may want to adjust some aspects of the plot. There are also a number of methods on the :class:`FacetGrid` object for manipulating the figure at a higher level of abstraction.
The most general is :meth:`FacetGrid.set`, and there are other more specialized methods like :meth:`FacetGrid.set_axis_labels`, which respects the fact that interior facets do not have axis labels. For example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with sns.axes_style(\"white\"):\n", " g = sns.FacetGrid(tips, row=\"sex\", col=\"smoker\", margin_titles=True, height=2.5)\n", "g.map(sns.scatterplot, \"total_bill\", \"tip\", color=\"#334488\")\n", "g.set_axis_labels(\"Total bill (US Dollars)\", \"Tip\")\n", "g.set(xticks=[10, 30, 50], yticks=[2, 6, 10])\n", "g.figure.subplots_adjust(wspace=.02, hspace=.02)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "For even more customization, you can work directly with the underlying matplotlib ``Figure`` and ``Axes`` objects, which are stored as member attributes at ``figure`` and ``axes_dict``, respectively. When making a figure without row or column faceting, you can also use the ``ax`` attribute to directly access the single axes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"smoker\", margin_titles=True, height=4)\n", "g.map(plt.scatter, \"total_bill\", \"tip\", color=\"#338844\", edgecolor=\"white\", s=50, lw=1)\n", "for ax in g.axes_dict.values():\n", " ax.axline((0, 0), slope=.2, c=\".2\", ls=\"--\", zorder=0)\n", "g.set(xlim=(0, 60), ylim=(0, 14))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _custom_map_func:\n", "\n", "Using custom functions\n", "----------------------\n", "\n", "You're not limited to existing matplotlib and seaborn functions when using :class:`FacetGrid`. However, to work properly, any function you use must follow a few rules:\n", "\n", "1. It must plot onto the \"currently active\" matplotlib ``Axes``.
This will be true of functions in the ``matplotlib.pyplot`` namespace, and you can call :func:`matplotlib.pyplot.gca` to get a reference to the current ``Axes`` if you want to work directly with its methods.\n", "2. It must accept the data that it plots in positional arguments. Internally, :class:`FacetGrid` will pass a ``Series`` of data for each of the named positional arguments passed to :meth:`FacetGrid.map`.\n", "3. It must be able to accept ``color`` and ``label`` keyword arguments, and, ideally, it will do something useful with them. In most cases, it's easiest to catch a generic dictionary of ``**kwargs`` and pass it along to the underlying plotting function.\n", "\n", "Let's look at a minimal example of a function you can plot with. This function will just take a single vector of data for each facet:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy import stats\n", "def quantile_plot(x, **kwargs):\n", " quantiles, xr = stats.probplot(x, fit=False)\n", " plt.scatter(xr, quantiles, **kwargs)\n", " \n", "g = sns.FacetGrid(tips, col=\"sex\", height=4)\n", "g.map(quantile_plot, \"total_bill\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If you want to make a bivariate plot, you should write the function so that it accepts the x-axis variable first and the y-axis variable second:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def qqplot(x, y, **kwargs):\n", " _, xr = stats.probplot(x, fit=False)\n", " _, yr = stats.probplot(y, fit=False)\n", " plt.scatter(xr, yr, **kwargs)\n", " \n", "g = sns.FacetGrid(tips, col=\"smoker\", height=4)\n", "g.map(qqplot, \"total_bill\", \"tip\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Because :func:`matplotlib.pyplot.scatter` accepts ``color`` and ``label`` keyword arguments and does the right thing with them, we can add a hue facet without any difficulty:" ] }, { "cell_type": "code",
"execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, hue=\"time\", col=\"sex\", height=4)\n", "g.map(qqplot, \"total_bill\", \"tip\")\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Sometimes, though, you'll want to map a function that doesn't work the way you expect with the ``color`` and ``label`` keyword arguments. In this case, you'll want to explicitly catch them and handle them in the logic of your custom function. For example, this approach will allow use to map :func:`matplotlib.pyplot.hexbin`, which otherwise does not play well with the :class:`FacetGrid` API:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def hexbin(x, y, color, **kwargs):\n", " cmap = sns.light_palette(color, as_cmap=True)\n", " plt.hexbin(x, y, gridsize=15, cmap=cmap, **kwargs)\n", "\n", "with sns.axes_style(\"dark\"):\n", " g = sns.FacetGrid(tips, hue=\"time\", col=\"time\", height=4)\n", "g.map(hexbin, \"total_bill\", \"tip\", extent=[0, 50, 0, 10]);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _pair_grid:\n", "\n", "Plotting pairwise data relationships\n", "------------------------------------\n", "\n", ":class:`PairGrid` also allows you to quickly draw a grid of small subplots using the same plot type to visualize data in each. In a :class:`PairGrid`, each row and column is assigned to a different variable, so the resulting plot shows each pairwise relationship in the dataset. This style of plot is sometimes called a \"scatterplot matrix\", as this is the most common way to show each relationship, but :class:`PairGrid` is not limited to scatterplots.\n", "\n", "It's important to understand the differences between a :class:`FacetGrid` and a :class:`PairGrid`. In the former, each facet shows the same relationship conditioned on different levels of other variables. 
In the latter, each plot shows a different relationship (although the upper and lower triangles will have mirrored plots). Using :class:`PairGrid` can give you a very quick, very high-level summary of interesting relationships in your dataset.\n", "\n", "The basic usage of the class is very similar to :class:`FacetGrid`. First you initialize the grid, then you pass a plotting function to a ``map`` method and it will be called on each subplot. There is also a companion function, :func:`pairplot`, that trades off some flexibility for faster plotting.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris = sns.load_dataset(\"iris\")\n", "g = sns.PairGrid(iris)\n", "g.map(sns.scatterplot)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's possible to plot a different function on the diagonal to show the univariate distribution of the variable in each column. Note that the axis ticks won't correspond to the count or density axis of this plot, though." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(iris)\n", "g.map_diag(sns.histplot)\n", "g.map_offdiag(sns.scatterplot)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A very common way to use this plot colors the observations by a separate categorical variable. For example, the iris dataset has four measurements for each of three different species of iris flowers so you can see how they differ." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(iris, hue=\"species\")\n", "g.map_diag(sns.histplot)\n", "g.map_offdiag(sns.scatterplot)\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "By default every numeric column in the dataset is used, but you can focus on particular relationships if you want."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(iris, vars=[\"sepal_length\", \"sepal_width\"], hue=\"species\")\n", "g.map(sns.scatterplot)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's also possible to use a different function in the upper and lower triangles to emphasize different aspects of the relationship." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(iris)\n", "g.map_upper(sns.scatterplot)\n", "g.map_lower(sns.kdeplot)\n", "g.map_diag(sns.kdeplot, lw=3, legend=False)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The square grid with identity relationships on the diagonal is actually just a special case, and you can plot with different variables in the rows and columns." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(tips, y_vars=[\"tip\"], x_vars=[\"total_bill\", \"size\"], height=4)\n", "g.map(sns.regplot, color=\".3\")\n", "g.set(ylim=(-1, 11), yticks=[0, 5, 10])" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Of course, the aesthetic attributes are configurable. For instance, you can use a different palette (say, to show an ordering of the ``hue`` variable) and pass keyword arguments into the plotting functions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(tips, hue=\"size\", palette=\"GnBu_d\")\n", "g.map(plt.scatter, s=50, edgecolor=\"white\")\n", "g.add_legend()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ":class:`PairGrid` is flexible, but to take a quick look at a dataset, it can be easier to use :func:`pairplot`. This function uses scatterplots and histograms by default, although a few other kinds will be added (currently, you can also plot regression plots on the off-diagonals and KDEs on the diagonal)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(iris, hue=\"species\", height=2.5)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can also control the aesthetics of the plot with keyword arguments, and it returns the :class:`PairGrid` instance for further tweaking." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.pairplot(iris, hue=\"species\", palette=\"Set2\", diag_kind=\"kde\", height=2.5)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/tutorial/categorical.ipynb000066400000000000000000000505631410631356500211710ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _categorical_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting with categorical data\n", "==============================\n", "\n", ".. raw:: html\n", "\n", "
\n", " \n", "In the :ref:`relational plot tutorial ` we saw how to use different visual representations to show the relationship between multiple variables in a dataset. In the examples, we focused on cases where the main relationship was between two numerical variables. If one of the main variables is \"categorical\" (divided into discrete groups) it may be helpful to use a more specialized approach to visualization.\n", "\n", "In seaborn, there are several different ways to visualize a relationship involving categorical data. Similar to the relationship between :func:`relplot` and either :func:`scatterplot` or :func:`lineplot`, there are two ways to make these plots. There are a number of axes-level functions for plotting categorical data in different ways and a figure-level interface, :func:`catplot`, that gives unified higher-level access to them.\n", "\n", "It's helpful to think of the different categorical plot kinds as belonging to three different families, which we'll discuss in detail below. They are:\n", "\n", "Categorical scatterplots:\n", "\n", "- :func:`stripplot` (with ``kind=\"strip\"``; the default)\n", "- :func:`swarmplot` (with ``kind=\"swarm\"``)\n", "\n", "Categorical distribution plots:\n", "\n", "- :func:`boxplot` (with ``kind=\"box\"``)\n", "- :func:`violinplot` (with ``kind=\"violin\"``)\n", "- :func:`boxenplot` (with ``kind=\"boxen\"``)\n", "\n", "Categorical estimate plots:\n", "\n", "- :func:`pointplot` (with ``kind=\"point\"``)\n", "- :func:`barplot` (with ``kind=\"bar\"``)\n", "- :func:`countplot` (with ``kind=\"count\"``)\n", "\n", "These families represent the data using different levels of granularity. When deciding which to use, you'll have to think about the question that you want to answer. The unified API makes it easy to switch between different kinds and see your data from several perspectives.\n", "\n", "In this tutorial, we'll mostly focus on the figure-level interface, :func:`catplot`. 
Remember that this function is a higher-level interface to each of the functions above, so we'll reference them when we show each kind of plot, keeping the more verbose kind-specific API documentation at hand." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import seaborn as sns\n", "import matplotlib.pyplot as plt\n", "sns.set_theme(style=\"ticks\", color_codes=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "np.random.seed(sum(map(ord, \"categorical\")))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Categorical scatterplots\n", "------------------------\n", "\n", "The default representation of the data in :func:`catplot` uses a scatterplot. There are actually two different categorical scatter plots in seaborn. They take different approaches to resolving the main challenge in representing categorical data with a scatter plot, which is that all of the points belonging to one category would fall on the same position along the axis corresponding to the categorical variable.
The approach used by :func:`stripplot`, which is the default \"kind\" in :func:`catplot`, is to adjust the positions of points on the categorical axis with a small amount of random \"jitter\":" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "sns.catplot(x=\"day\", y=\"total_bill\", data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The ``jitter`` parameter controls the magnitude of jitter or disables it altogether:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", jitter=False, data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The second approach adjusts the points along the categorical axis using an algorithm that prevents them from overlapping. It can give a better representation of the distribution of observations, although it only works well for relatively small datasets. This kind of plot is sometimes called a \"beeswarm\" and is drawn in seaborn by :func:`swarmplot`, which is activated by setting ``kind=\"swarm\"`` in :func:`catplot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", kind=\"swarm\", data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Similar to the relational plots, it's possible to add another dimension to a categorical plot by using a ``hue`` semantic. (The categorical plots do not currently support ``size`` or ``style`` semantics). Each different categorical plotting function handles the ``hue`` semantic differently.
For the scatter plots, it is only necessary to change the color of the points:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"sex\", kind=\"swarm\", data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Unlike with numerical data, it is not always obvious how to order the levels of the categorical variable along its axis. In general, the seaborn categorical plotting functions try to infer the order of categories from the data. If your data have a pandas ``Categorical`` datatype, then the default order of the categories can be set there. If the variable passed to the categorical axis looks numerical, the levels will be sorted. But the data are still treated as categorical and drawn at ordinal positions on the categorical axes (specifically, at 0, 1, ...) even when numbers are used to label them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"size\", y=\"total_bill\", data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The other option for choosing a default ordering is to take the levels of the category as they appear in the dataset. The ordering can also be controlled on a plot-specific basis using the ``order`` parameter. This can be important when drawing multiple categorical plots in the same figure, which we'll see more of below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"smoker\", y=\"tip\", order=[\"No\", \"Yes\"], data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "We've referred to the idea of \"categorical axis\". In these examples, that's always corresponded to the horizontal axis. But it's often helpful to put the categorical variable on the vertical axis (particularly when the category names are relatively long or there are many categories). 
To do this, swap the assignment of variables to axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"total_bill\", y=\"day\", hue=\"time\", kind=\"swarm\", data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Distributions of observations within categories\n", "-----------------------------------------------\n", "\n", "As the size of the dataset grows, categorical scatter plots become limited in the information they can provide about the distribution of values within each category. When this happens, there are several approaches for summarizing the distributional information in ways that facilitate easy comparisons across the category levels.\n", "\n", "Boxplots\n", "^^^^^^^^\n", "\n", "The first is the familiar :func:`boxplot`. This kind of plot shows the three quartile values of the distribution along with extreme values. The \"whiskers\" extend to points that lie within 1.5 IQRs of the lower and upper quartile, and then observations that fall outside this range are displayed independently. This means that each value in the boxplot corresponds to an actual observation in the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", kind=\"box\", data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When adding a ``hue`` semantic, the box for each level of the semantic variable is moved along the categorical axis so they don't overlap:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"smoker\", kind=\"box\", data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This behavior is called \"dodging\" and is turned on by default because it is assumed that the semantic variable is nested within the main categorical variable. 
If that's not the case, you can disable the dodging:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips[\"weekend\"] = tips[\"day\"].isin([\"Sat\", \"Sun\"])\n", "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"weekend\",\n", " kind=\"box\", dodge=False, data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A related function, :func:`boxenplot`, draws a plot that is similar to a box plot but optimized for showing more information about the shape of the distribution. It is best suited for larger datasets:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "diamonds = sns.load_dataset(\"diamonds\")\n", "sns.catplot(x=\"color\", y=\"price\", kind=\"boxen\",\n", " data=diamonds.sort_values(\"color\"))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Violinplots\n", "^^^^^^^^^^^\n", "\n", "A different approach is a :func:`violinplot`, which combines a boxplot with the kernel density estimation procedure described in the :ref:`distributions ` tutorial:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"total_bill\", y=\"day\", hue=\"sex\",\n", " kind=\"violin\", data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This approach uses the kernel density estimate to provide a richer description of the distribution of values. Additionally, the quartile and whisker values from the boxplot are shown inside the violin. 
The downside is that, because the violinplot uses a KDE, there are some other parameters that may need tweaking, adding some complexity relative to the straightforward boxplot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"total_bill\", y=\"day\", hue=\"sex\",\n", " kind=\"violin\", bw=.15, cut=0,\n", " data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's also possible to \"split\" the violins when the hue parameter has only two levels, which can allow for a more efficient use of space:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"sex\",\n", " kind=\"violin\", split=True, data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Finally, there are several options for the plot that is drawn on the interior of the violins, including ways to show each individual observation instead of the summary boxplot values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"sex\",\n", " kind=\"violin\", inner=\"stick\", split=True,\n", " palette=\"pastel\", data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It can also be useful to combine :func:`swarmplot` or :func:`stripplot` with a box plot or violin plot to show each observation along with a summary of the distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.catplot(x=\"day\", y=\"total_bill\", kind=\"violin\", inner=None, data=tips)\n", "sns.swarmplot(x=\"day\", y=\"total_bill\", color=\"k\", size=3, data=tips, ax=g.ax)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Statistical estimation within categories\n", "----------------------------------------\n", "\n", "For other applications, rather than showing the distribution within each category, you
might want to show an estimate of the central tendency of the values. Seaborn has two main ways to show this information. Importantly, the basic API for these functions is identical to that for the ones discussed above.\n", "\n", "Bar plots\n", "^^^^^^^^^\n", "\n", "A familiar style of plot that accomplishes this goal is a bar plot. In seaborn, the :func:`barplot` function operates on a full dataset and applies a function to obtain the estimate (taking the mean by default). When there are multiple observations in each category, it also uses bootstrapping to compute a confidence interval around the estimate, which is plotted using error bars:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "titanic = sns.load_dataset(\"titanic\")\n", "sns.catplot(x=\"sex\", y=\"survived\", hue=\"class\", kind=\"bar\", data=titanic)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A special case for the bar plot is when you want to show the number of observations in each category rather than computing a statistic for a second variable. This is similar to a histogram over a categorical, rather than quantitative, variable. 
In seaborn, it's easy to do so with the :func:`countplot` function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"deck\", kind=\"count\", palette=\"ch:.25\", data=titanic)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Both :func:`barplot` and :func:`countplot` can be invoked with all of the options discussed above, along with others that are demonstrated in the detailed documentation for each function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(y=\"deck\", hue=\"class\", kind=\"count\",\n", " palette=\"pastel\", edgecolor=\".6\",\n", " data=titanic)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Point plots\n", "^^^^^^^^^^^\n", "\n", "An alternative style for visualizing the same information is offered by the :func:`pointplot` function. This function also encodes the value of the estimate with height on the other axis, but rather than showing a full bar, it plots the point estimate and confidence interval. Additionally, :func:`pointplot` connects points from the same ``hue`` category. 
This makes it easy to see how the main relationship is changing as a function of the hue semantic, because your eyes are quite good at picking up on differences of slopes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"sex\", y=\"survived\", hue=\"class\", kind=\"point\", data=titanic)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "While the categorical functions lack the ``style`` semantic of the relational functions, it can still be a good idea to vary the marker and/or linestyle along with the hue to make figures that are maximally accessible and reproduce well in black and white:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"class\", y=\"survived\", hue=\"sex\",\n", " palette={\"male\": \"g\", \"female\": \"m\"},\n", " markers=[\"^\", \"o\"], linestyles=[\"-\", \"--\"],\n", " kind=\"point\", data=titanic)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting \"wide-form\" data\n", "-------------------------\n", "\n", "While using \"long-form\" or \"tidy\" data is preferred, these functions can also be applied to \"wide-form\" data in a variety of formats, including pandas DataFrames or two-dimensional numpy arrays.
These objects should be passed directly to the ``data`` parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris = sns.load_dataset(\"iris\")\n", "sns.catplot(data=iris, orient=\"h\", kind=\"box\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Additionally, the axes-level functions accept vectors of Pandas or numpy objects rather than variables in a ``DataFrame``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.violinplot(x=iris.species, y=iris.sepal_length)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To control the size and shape of plots made by the functions discussed above, you must set up the figure yourself using matplotlib commands:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots(figsize=(7, 3))\n", "sns.countplot(y=\"deck\", data=titanic, color=\"c\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This is the approach you should take when you need a categorical figure to happily coexist in a more complex figure with other kinds of plots.\n", "\n", "Showing multiple relationships with facets\n", "------------------------------------------\n", "\n", "Just like :func:`relplot`, the fact that :func:`catplot` is built on a :class:`FacetGrid` means that it is easy to add faceting variables to visualize higher-dimensional relationships:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"smoker\",\n", " col=\"time\", aspect=.7,\n", " kind=\"swarm\", data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "For further customization of the plot, you can use the methods on the :class:`FacetGrid` object that it returns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.catplot(x=\"fare\", 
y=\"survived\", row=\"class\",\n", " kind=\"box\", orient=\"h\", height=1.5, aspect=4,\n", " data=titanic.query(\"fare > 0\"))\n", "g.set(xscale=\"log\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/tutorial/color_palettes.ipynb000066400000000000000000001045751410631356500217360ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ ".. _palette_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Choosing color palettes\n", "=======================\n", "\n", ".. raw:: html\n", "\n", "
\n", "\n", "Seaborn makes it easy to use colors that are well-suited to the characteristics of your data and your visualization goals. This chapter discusses both the general principles that should guide your choices and the tools in seaborn that help you quickly find the best solution for a given application." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import numpy as np\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt\n", "sns.set_theme(style=\"white\", rc={\"xtick.major.pad\": 1, \"ytick.major.pad\": 1})" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "np.random.seed(sum(map(ord, \"palettes\")))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "# Add colormap display methods to matplotlib colormaps.\n", "# These are forthcoming in matplotlib 3.4, but, the matplotlib display\n", "# method includes the colormap name, which is redundant.\n", "def _repr_png_(self):\n", " \"\"\"Generate a PNG representation of the Colormap.\"\"\"\n", " import io\n", " from PIL import Image\n", " import numpy as np\n", " IMAGE_SIZE = (400, 50)\n", " X = np.tile(np.linspace(0, 1, IMAGE_SIZE[0]), (IMAGE_SIZE[1], 1))\n", " pixels = self(X, bytes=True)\n", " png_bytes = io.BytesIO()\n", " Image.fromarray(pixels).save(png_bytes, format='png')\n", " return png_bytes.getvalue()\n", " \n", "def _repr_html_(self):\n", " \"\"\"Generate an HTML representation of the Colormap.\"\"\"\n", " import base64\n", " png_bytes = self._repr_png_()\n", " png_base64 = base64.b64encode(png_bytes).decode('ascii')\n", " return ('')\n", " \n", "import matplotlib as mpl\n", "mpl.colors.Colormap._repr_png_ = _repr_png_\n", "mpl.colors.Colormap._repr_html_ = _repr_html_" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "General principles for using color in 
plots\n", "-------------------------------------------\n", "\n", "Components of color\n", "~~~~~~~~~~~~~~~~~~~\n", "\n", "Because of the way our eyes work, a particular color can be defined using three components. We usually program colors in a computer by specifying their RGB values, which set the intensity of the red, green, and blue channels in a display. But for analyzing the perceptual attributes of a color, it's better to think in terms of *hue*, *saturation*, and *luminance* channels.\n", "\n", "Hue is the component that distinguishes \"different colors\" in a non-technical sense. It's property of color that leads to first-order names like \"red\" and \"blue\":" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "sns.husl_palette(8, s=.7)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Saturation (or chroma) is the *colorfulness*. Two colors with different hues will look more distinct when they have more saturation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "c = sns.color_palette(\"muted\")[0]\n", "sns.blend_palette([sns.desaturate(c, 0), c], 8)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "And lightness corresponds to how much light is emitted (or reflected, for printed colors), ranging from black to white:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "sns.blend_palette([\".1\", c, \".95\"], 8)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Vary hue to distinguish categories\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "When you want to represent multiple categories in a plot, you typically should vary the color of the elements. Consider this simple example: in which of these two plots is it easier to count the number of triangular points?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "n = 45\n", "rng = np.random.default_rng(200)\n", "x = rng.uniform(0, 1, n * 2)\n", "y = rng.uniform(0, 1, n * 2)\n", "a = np.concatenate([np.zeros(n * 2 - 10), np.ones(10)])\n", "\n", "f, axs = plt.subplots(1, 2, figsize=(7, 3.5), sharey=True, sharex=True)\n", "\n", "sns.scatterplot(\n", " x=x[::2], y=y[::2], style=a[::2], size=a[::2], legend=False,\n", " markers=[\"o\", (3, 1, 1)], sizes=[70, 140], ax=axs[0],\n", ")\n", "\n", "sns.scatterplot(\n", " x=x[1::2], y=y[1::2], style=a[1::2], size=a[1::2], hue=a[1::2], legend=False,\n", " markers=[\"o\", (3, 1, 1)], sizes=[70, 140], ax=axs[1],\n", ")\n", "\n", "f.tight_layout(w_pad=2)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In the plot on the right, the orange triangles \"pop out\", making it easy to distinguish them from the circles. This pop-out effect happens because our visual system prioritizes color differences.\n", "\n", "The blue and orange colors differ mostly in terms of their hue. Hue is useful for representing categories: most people can distinguish a moderate number of hues relatively easily, and points that have different hues but similar brightness or intensity seem equally important. It also makes plots easier to talk about. 
Consider this example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "b = np.tile(np.arange(10), n // 5)\n", "\n", "f, axs = plt.subplots(1, 2, figsize=(7, 3.5), sharey=True, sharex=True)\n", "\n", "sns.scatterplot(\n", " x=x[::2], y=y[::2], hue=b[::2],\n", " legend=False, palette=\"muted\", s=70, ax=axs[0],\n", ")\n", "\n", "sns.scatterplot(\n", " x=x[1::2], y=y[1::2], hue=b[1::2],\n", " legend=False, palette=\"blend:.75,C0\", s=70, ax=axs[1],\n", ")\n", "\n", "f.tight_layout(w_pad=2)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Most people would be able to quickly ascertain that there are five distinct categories in the plot on the left and, if asked to characterize the \"blue\" points, would be able to do so.\n", "\n", "With the plot on the right, where the points are all blue but vary in their luminance and saturation, it's harder to say how many unique categories are present. And how would we talk about a particular category? \"The fairly-but-not-too-blue points?\" What's more, the gray dots seem to fade into the background, de-emphasizing them relative to the more intense blue dots. If the categories are equally important, this is a poor representation.\n", "\n", "So as a general rule, use hue variation to represent categories. With that said, here are few notes of caution. If you have more than a handful of colors in your plot, it can become difficult to keep in mind what each one means, unless there are pre-existing associations between the categories and the colors used to represent them. This makes your plot harder to interpret: rather than focusing on the data, a viewer will have to continually refer to the legend to make sense of what is shown. So you should strive not to make plots that are too complex. And be mindful that not everyone sees colors the same way. 
Varying both shape (or some other attribute) and color can help people with anomalous color vision understand your plots, and it can keep them (somewhat) interpretable if they are printed to black-and-white." ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Vary luminance to represent numbers\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "On the other hand, hue variations are not well suited to representing numeric data. Consider this example, where we need colors to represent the counts in a bivariate histogram. On the left, we use a circular colormap, where gradual changes in the number of observations within each bin correspond to gradual changes in hue. On the right, we use a palette that uses brighter colors to represent bins with larger counts:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "\n", "f, axs = plt.subplots(1, 2, figsize=(7, 4.25), sharey=True, sharex=True)\n", "\n", "sns.histplot(\n", " data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\",\n", " binwidth=(3, .75), cmap=\"hls\", ax=axs[0],\n", " cbar=True, cbar_kws=dict(orientation=\"horizontal\", pad=.1),\n", ")\n", "axs[0].set(xlabel=\"\", ylabel=\"\")\n", "\n", "\n", "sns.histplot(\n", " data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\",\n", " binwidth=(3, .75), cmap=\"flare_r\", ax=axs[1],\n", " cbar=True, cbar_kws=dict(orientation=\"horizontal\", pad=.1),\n", ")\n", "axs[1].set(xlabel=\"\", ylabel=\"\")\n", "\n", "f.tight_layout(w_pad=3)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "With the hue-based palette, it's quite difficult to ascertain the shape of the bivariate distribution. In contrast, the luminance palette makes it much more clear that there are two prominent peaks.\n", "\n", "Varying luminance helps you see structure in data, and changes in luminance are more intuitively processed as changes in importance.
But the plot on the right does not use a grayscale colormap. Its colorfulness makes it more interesting, and the subtle hue variation increases the perceptual distance between two values. As a result, small differences are slightly easier to resolve.\n", "\n", "These examples show that color palette choices are about more than aesthetics: the colors you choose can reveal patterns in your data if used effectively or hide them if used poorly. There is not one optimal palette, but there are palettes that are better or worse for particular datasets and visualization approaches.\n", "\n", "And aesthetics do matter: the more that people want to look at your figures, the greater the chance that they will learn something from them. This is true even when you are making plots for yourself. During exploratory data analysis, you may generate many similar figures. Varying the color palettes will add a sense of novelty, which keeps you engaged and prepared to notice interesting features of your data.\n", "\n", "So how can you choose color palettes that both represent your data well and look attractive?" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Tools for choosing color palettes\n", "---------------------------------\n", "\n", "The most important function for working with color palettes is, aptly, :func:`color_palette`. This function provides an interface to most of the possible ways that one can generate color palettes in seaborn. And it's used internally by any function that has a ``palette`` argument.\n", "\n", "The primary argument to :func:`color_palette` is usually a string: either the name of a specific palette or the name of a family and additional arguments to select a specific member. In the latter case, :func:`color_palette` will delegate to a more specific function, such as :func:`cubehelix_palette`. 
It's also possible to pass a list of colors specified any way that matplotlib accepts (an RGB tuple, a hex code, or a name in the X11 table). The return value is an object that wraps a list of RGB tuples with a few useful methods, such as conversion to hex codes and a rich HTML representation.\n", "\n", "Calling :func:`color_palette` with no arguments will return the current default color palette that matplotlib (and most seaborn functions) will use if colors are not otherwise specified. This default palette can be set with the corresponding :func:`set_palette` function, which calls :func:`color_palette` internally and accepts the same arguments.\n", "\n", "To motivate the different options that :func:`color_palette` provides, it will be useful to introduce a classification scheme for color palettes. Broadly, palettes fall into one of three categories:\n", "\n", "- qualitative palettes, good for representing categorical data\n", "- sequential palettes, good for representing numeric data\n", "- diverging palettes, good for representing numeric data with a categorical boundary" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _qualitative_palettes:\n", "\n", "Qualitative color palettes\n", "--------------------------\n", "\n", "Qualitative palettes are well-suited to representing categorical data because most of their variation is in the hue component. The default color palette in seaborn is a qualitative palette with ten distinct hues:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "These colors have the same ordering as the default matplotlib color palette, ``\"tab10\"``, but they are a bit less intense. 
Compare:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"tab10\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Seaborn in fact has six variations of matplotlib's palette, called ``deep``, ``muted``, ``pastel``, ``bright``, ``dark``, and ``colorblind``. These span a range of average luminance and saturation values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "f = plt.figure(figsize=(6, 6))\n", "\n", "ax_locs = dict(\n", " deep=(.4, .4),\n", " bright=(.8, .8),\n", " muted=(.49, .71),\n", " dark=(.8, .2),\n", " pastel=(.2, .8),\n", " colorblind=(.71, .49),\n", ")\n", "\n", "s = .35\n", "\n", "for pal, (x, y) in ax_locs.items():\n", " ax = f.add_axes([x - s / 2, y - s / 2, s, s])\n", " ax.pie(np.ones(10),\n", " colors=sns.color_palette(pal, 10),\n", " counterclock=False, startangle=180,\n", " wedgeprops=dict(linewidth=1, edgecolor=\"w\"))\n", " f.text(x, y, pal, ha=\"center\", va=\"center\", size=14,\n", " bbox=dict(facecolor=\"white\", alpha=0.85, boxstyle=\"round,pad=0.2\"))\n", "\n", "f.text(.1, .05, \"Saturation\", size=18, ha=\"left\", va=\"center\",\n", " bbox=dict(facecolor=\"white\", edgecolor=\"w\"))\n", "f.text(.05, .1, \"Luminance\", size=18, ha=\"center\", va=\"bottom\", rotation=90,\n", " bbox=dict(facecolor=\"white\", edgecolor=\"w\"))\n", "\n", "ax = f.add_axes([0, 0, 1, 1])\n", "ax.set_axis_off()\n", "ax.arrow(.15, .05, .4, 0, width=.002, head_width=.015, color=\".15\")\n", "ax.arrow(.05, .15, 0, .4, width=.002, head_width=.015, color=\".15\")\n", "ax.set(xlim=(0, 1), ylim=(0, 1))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Many people find the moderated hues of the default ``\"deep\"`` palette to be aesthetically pleasing, but they are also less distinct. 
As a result, they may be more difficult to discriminate in some contexts, which is something to keep in mind when making publication graphics. `This comparison `_ can be helpful for estimating how the seaborn color palettes perform when simulating different forms of colorblindness." ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Using circular color systems\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "When you have an arbitrary number of categories, the easiest approach to finding unique hues is to draw evenly-spaced colors in a circular color space (one where the hue changes while keeping the brightness and saturation constant). This is what most seaborn functions default to when they need to use more colors than are currently set in the default color cycle.\n", "\n", "The most common way to do this uses the ``hls`` color space, which is a simple transformation of RGB values. We saw this color palette before as a counterexample for how to plot a histogram:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"hls\", 8)" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Because of the way the human visual system works, colors that have the same luminance and saturation in terms of their RGB values won't necessarily look equally intense. To remedy this, seaborn provides an interface to the `husl `_ system (since renamed to HSLuv), which achieves less intensity variation as you rotate around the color wheel:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"husl\", 8)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When seaborn needs a categorical palette with more colors than are available in the current default, it will use this approach.\n", "\n", "Using categorical Color Brewer palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "Another source of visually 
pleasing categorical palettes comes from the `Color Brewer `_ tool (which also has sequential and diverging palettes, as we'll see below)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"Set2\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Be aware that the qualitative Color Brewer palettes have different lengths, and the default behavior of :func:`color_palette` is to give you the full list:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"Paired\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _sequential_palettes:\n", "\n", "Sequential color palettes\n", "-------------------------\n", "\n", "The second major class of color palettes is called \"sequential\". This kind of mapping is appropriate when data range from relatively low or uninteresting values to relatively high or interesting values (or vice versa). As we saw above, the primary dimension of variation in a sequential palette is luminance. Some seaborn functions will default to a sequential palette when you are mapping numeric data. (For historical reasons, both categorical and numeric mappings are specified with the ``hue`` parameter in functions like :func:`relplot` or :func:`displot`, even though numeric mappings use color palettes with relatively little hue variation).\n", "\n", "Perceptually uniform palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "Because they are intended to represent numeric values, the best sequential palettes will be *perceptually uniform*, meaning that the relative discriminability of two colors is proportional to the difference between the corresponding data values. Seaborn includes four perceptually uniform sequential colormaps: ``\"rocket\"``, ``\"mako\"``, ``\"flare\"``, and ``\"crest\"``. 
The first two have a very wide luminance range and are well suited for applications such as heatmaps, where colors fill the space they are plotted into:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"rocket\", as_cmap=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"mako\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Because the extreme values of these colormaps approach white, they are not well-suited for coloring elements such as lines or points: it will be difficult to discriminate important values against a white or gray background. The \"flare\" and \"crest\" colormaps are a better choice for such plots. They have a more restricted range of luminance variations, which they compensate for with a slightly more pronounced variation in hue. The default direction of the luminance ramp is also reversed, so that smaller values have lighter colors:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"flare\", as_cmap=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"crest\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It is also possible to use the perceptually uniform colormaps provided by matplotlib, such as ``\"magma\"`` and ``\"viridis\"``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"magma\", as_cmap=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"viridis\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "As with the convention in matplotlib, every continuous colormap has a reversed version, which has the suffix ``\"_r\"``:" ] }, { "cell_type": "code", 
"execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"rocket_r\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Discrete vs. continuous mapping\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "One thing to be aware of is that seaborn can generate discrete values from sequential colormaps and, when doing so, it will not use the most extreme values. Compare the discrete version of ``\"rocket\"`` against the continuous version shown above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"rocket\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Internally, seaborn uses the discrete version for categorical data and the continuous version when in numeric mapping mode. Discrete sequential colormaps can be well-suited for visualizing categorical data with an intrinsic ordering, especially if there is some hue variation." ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ ".. _cubehelix_palettes:\n", "\n", "Sequential \"cubehelix\" palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "The perceptually uniform colormaps are difficult to programmatically generate, because they are not based on the RGB color space. The `cubehelix `_ system offers an RGB-based compromise: it generates sequential palettes with a linear increase or decrease in brightness and some continuous variation in hue. While not perfectly perceptually uniform, the resulting colormaps have many good properties. 
Importantly, many aspects of the design process are parameterizable.\n", "\n", "Matplotlib has the default cubehelix version built into it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"cubehelix\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The default palette returned by the seaborn :func:`cubehelix_palette` function is a bit different from the matplotlib default in that it does not rotate as far around the hue wheel or cover as wide a range of intensities. It also reverses the luminance ramp:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.cubehelix_palette(as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Other arguments to :func:`cubehelix_palette` control how the palette looks. The two main things you'll change are the ``start`` (a value between 0 and 3) and ``rot``, or number of rotations (an arbitrary value, but usually between -1 and 1):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.cubehelix_palette(start=.5, rot=-.5, as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The more you rotate, the more hue variation you will see:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.cubehelix_palette(start=.5, rot=-.75, as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can control both how dark and light the endpoints are and their order:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.cubehelix_palette(start=2, rot=0, dark=0, light=.95, reverse=True, as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The :func:`color_palette` function accepts a string code, starting with ``\"ch:\"``, for generating an arbitrary cubehelix palette. 
You can pass the names of parameters in the string:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"ch:start=.2,rot=-.3\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "And for compactness, each parameter can be specified with its first letter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"ch:s=-.2,r=.6\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Custom sequential palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "For a simpler interface to custom sequential palettes, you can use :func:`light_palette` or :func:`dark_palette`, which are both seeded with a single color and produce a palette that ramps either from light or dark desaturated values to that color:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.light_palette(\"seagreen\", as_cmap=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.dark_palette(\"#69d\", reverse=True, as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "As with cubehelix palettes, you can also specify light or dark palettes through :func:`color_palette` or anywhere ``palette`` is accepted:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"light:b\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Reverse the colormap by adding ``\"_r\"``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"dark:salmon_r\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Sequential Color Brewer palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "The Color Brewer library also has some good options for sequential palettes. 
They include palettes with one primary hue:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"Blues\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Along with multi-hue options:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"YlOrBr\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _diverging_palettes:\n", "\n", "Diverging color palettes\n", "------------------------\n", "\n", "The third class of color palettes is called \"diverging\". These are used for data where both large low and high values are interesting and span a midpoint value (often 0) that should be de-emphasized. The rules for choosing good diverging palettes are similar to good sequential palettes, except now there should be two dominant hues in the colormap, one at (or near) each pole. It's also important that the starting values are of similar brightness and saturation.\n", "\n", "Perceptually uniform diverging palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "Seaborn includes two perceptually uniform diverging palettes: ``\"vlag\"`` and ``\"icefire\"``. They both use blue and red at their poles, which many people intuitively process as \"cold\" and \"hot\":" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"vlag\", as_cmap=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"icefire\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Custom diverging palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "You can also use the seaborn function :func:`diverging_palette` to create a custom colormap for diverging data. This function makes diverging palettes using the ``husl`` color system. 
You pass it two hues (in degrees) and, optionally, the lightness and saturation values for the extremes. Using ``husl`` means that the extreme values, and the resulting ramps to the midpoint, while not perfectly perceptually uniform, will be well-balanced:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.diverging_palette(220, 20, as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This is convenient when you want to stray from the boring confines of cold-hot approaches:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.diverging_palette(145, 300, s=60, as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's also possible to make a palette where the midpoint is dark rather than light:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.diverging_palette(250, 30, l=65, center=\"dark\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's important to emphasize here that using red and green, while intuitive, `should be avoided `_.\n", "\n", "Other diverging palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "There are a few other good diverging palettes built into matplotlib, including Color Brewer palettes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"Spectral\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "And the ``coolwarm`` palette, which has less contrast between the middle values and the extremes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.color_palette(\"coolwarm\", as_cmap=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "As you can see, there are many options for using color in your visualizations. 
Seaborn tries both to use good defaults and to offer a lot of flexibility.\n", "\n", "This discussion is only the beginning, and there are a number of good resources for learning more about techniques for using color in visualizations. One great example is this `series of blog posts `_ from the NASA Earth Observatory. The matplotlib docs also have a `nice tutorial `_ that illustrates some of the perceptual properties of their colormaps." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/tutorial/data_structure.ipynb000066400000000000000000000504321410631356500217400ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _data_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Data structures accepted by seaborn\n", "===================================\n", "\n", ".. raw:: html\n", "\n", "
\n", "\n", "As a data visualization library, seaborn requires that you provide it with data. This chapter explains the various ways to accomplish that task. Seaborn supports several different dataset formats, and most functions accept data represented with objects from the `pandas `_ or `numpy `_ libraries as well as built-in Python types like lists and dictionaries. Understanding the usage patterns associated with these different options will help you quickly create useful visualizations for nearly any dataset.\n", "\n", ".. note::\n", " As of current writing (v0.11.0), the full breadth of options covered here are supported by only a subset of the modules in seaborn (namely, the :ref:`relational ` and :ref:`distribution ` modules). The other modules offer much of the same flexibility, but have some exceptions (e.g., :func:`catplot` and :func:`lmplot` are limited to long-form data with named variables). The data-ingest code will be standardized over the next few release cycles, but until that point, be mindful of the specific documentation for each function if it is not doing what you expect with your dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", "sns.set_theme()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Long-form vs. wide-form data\n", "----------------------------\n", "\n", "Most plotting functions in seaborn are oriented towards *vectors* of data. When plotting ``x`` against ``y``, each variable should be a vector. Seaborn accepts data *sets* that have more than one vector organized in some tabular fashion. 
There is a fundamental distinction between \"long-form\" and \"wide-form\" data tables, and seaborn will treat each differently.\n", "\n", "Long-form data\n", "~~~~~~~~~~~~~~\n", "\n", "A long-form data table has the following characteristics:\n", "\n", "- Each variable is a column\n", "- Each observation is a row" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "As a simple example, consider the \"flights\" dataset, which records the number of airline passengers who flew in each month from 1949 to 1960. This dataset has three variables (*year*, *month*, and number of *passengers*):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flights = sns.load_dataset(\"flights\")\n", "flights.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "With long-form data, columns in the table are given roles in the plot by explicitly assigning them to one of the variables. For example, making a monthly plot of the number of passengers per year looks like this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(data=flights, x=\"year\", y=\"passengers\", hue=\"month\", kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The advantage of long-form data is that it lends itself well to this explicit specification of the plot. It can accommodate datasets of arbitrary complexity, so long as the variables and observations can be clearly defined. But this format takes some getting used to, because it is often not the model of the data that one has in their head.\n", "\n", "Wide-form data\n", "~~~~~~~~~~~~~~\n", "\n", "For simple datasets, it is often more intuitive to think about data the way it might be viewed in a spreadsheet, where the columns and rows contain *levels* of different variables. 
For example, we can convert the flights dataset into a wide-form organization by \"pivoting\" it so that each column has each month's time series over years:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flights_wide = flights.pivot(index=\"year\", columns=\"month\", values=\"passengers\")\n", "flights_wide.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Here we have the same three variables, but they are organized differently. The variables in this dataset are linked to the *dimensions* of the table, rather than to named fields. Each observation is defined by both the value at a cell in the table and the coordinates of that cell with respect to the row and column indices." ] }, { "cell_type": "raw", "metadata": {}, "source": [ "With long-form data, we can access variables in the dataset by their name. That is not the case with wide-form data. Nevertheless, because there is a clear association between the dimensions of the table and the variable in the dataset, seaborn is able to assign those variables roles in the plot.\n", "\n", ".. note::\n", " Seaborn treats the argument to ``data`` as wide form when neither ``x`` nor ``y`` are assigned." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(data=flights_wide, kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This plot looks very similar to the one before. Seaborn has assigned the index of the dataframe to ``x``, the values of the dataframe to ``y``, and it has drawn a separate line for each month. There is a notable difference between the two plots, however. When the dataset went through the \"pivot\" operation that converted it from long-form to wide-form, the information about what the values mean was lost. As a result, there is no y axis label. 
(The lines also have dashes here, because :func:`relplot` has mapped the column variable to both the ``hue`` and ``style`` semantic so that the plot is more accessible. We didn't do that in the long-form case, but we could have by setting ``style=\"month\"``).\n", "\n", "Thus far, we did much less typing while using wide-form data and made nearly the same plot. This seems easier! But a big advantage of long-form data is that, once you have the data in the correct format, you no longer need to think about its *structure*. You can design your plots by thinking only about the variables contained within it. For example, to draw lines that represent the monthly time series for each year, simply reassign the variables:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(data=flights, x=\"month\", y=\"passengers\", hue=\"year\", kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To achieve the same remapping with the wide-form dataset, we would need to transpose the table:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(data=flights_wide.transpose(), kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "(This example also illustrates another wrinkle, which is that seaborn currently considers the column variable in a wide-form dataset to be categorical regardless of its datatype, whereas, because the long-form variable is numeric, it is assigned a quantitative color palette and legend. This may change in the future).\n", "\n", "The absence of explicit variable assignments also means that each plot type needs to define a fixed mapping between the dimensions of the wide-form data and the roles in the plot. Because this natural mapping may vary across plot types, the results are less predictable when using wide-form data. 
For example, the :ref:`categorical ` plots assign the *column* dimension of the table to ``x`` and then aggregate across the rows (ignoring the index):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(data=flights_wide, kind=\"box\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When using pandas to represent wide-form data, you are limited to just a few variables (no more than three). This is because seaborn does not make use of multi-index information, which is how pandas represents additional variables in a tabular format. The `xarray `_ project offers labeled N-dimensional array objects, which can be considered a generalization of wide-form data to higher dimensions. At present, seaborn does not directly support objects from ``xarray``, but they can be transformed into a long-form :class:`pandas.DataFrame` using the ``to_pandas`` method and then plotted in seaborn like any other long-form data set.\n", "\n", "In summary, we can think of long-form and wide-form datasets as looking something like this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "f = plt.figure(figsize=(7, 5))\n", "\n", "gs = plt.GridSpec(\n", " ncols=6, nrows=2, figure=f,\n", " left=0, right=.35, bottom=0, top=.9,\n", " height_ratios=(1, 20),\n", " wspace=.1, hspace=.01\n", ")\n", "\n", "colors = [c + (.5,) for c in sns.color_palette()]\n", "\n", "f.add_subplot(gs[0, :], facecolor=\".8\")\n", "[\n", " f.add_subplot(gs[1:, i], facecolor=colors[i])\n", " for i in range(gs.ncols)\n", "]\n", "\n", "gs = plt.GridSpec(\n", " ncols=2, nrows=2, figure=f,\n", " left=.4, right=1, bottom=.2, top=.8,\n", " height_ratios=(1, 8), width_ratios=(1, 11),\n", " wspace=.015, hspace=.02\n", ")\n", "\n", "f.add_subplot(gs[0, 1:], facecolor=colors[2])\n", "f.add_subplot(gs[1:, 0], facecolor=colors[1])\n", "f.add_subplot(gs[1, 1], 
facecolor=colors[0])\n", "\n", "for ax in f.axes:\n", " ax.set(xticks=[], yticks=[])\n", "\n", "f.text(.35 / 2, .91, \"Long-form\", ha=\"center\", va=\"bottom\", size=15)\n", "f.text(.7, .81, \"Wide-form\", ha=\"center\", va=\"bottom\", size=15)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Messy data\n", "~~~~~~~~~~\n", "\n", "Many datasets cannot be clearly interpreted using either long-form or wide-form rules. If datasets that are clearly long-form or wide-form are `\"tidy\" `_, we might say that these more ambiguous datasets are \"messy\". In a messy dataset, the variables are neither uniquely defined by the keys nor by the dimensions of the table. This often occurs with *repeated-measures* data, where it is natural to organize a table such that each row corresponds to the *unit* of data collection. Consider this simple dataset from a psychology experiment in which twenty subjects performed a memory task where they studied anagrams while their attention was either divided or focused:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "anagrams = sns.load_dataset(\"anagrams\")\n", "anagrams" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The attention variable is *between-subjects*, but there is also a *within-subjects* variable: the number of possible solutions to the anagrams, which varied from 1 to 3. The dependent measure is a score of memory performance. These two variables (number and score) are jointly encoded across several columns. As a result, the whole dataset is neither clearly long-form nor clearly wide-form.\n", "\n", "How might we tell seaborn to plot the average score as a function of attention and number of solutions? We'd first need to coerce the data into one of our two structures. Let's transform it to a tidy long-form table, such that each variable is a column and each row is an observation. 
We can use the method :meth:`pandas.DataFrame.melt` to accomplish this task:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "anagrams_long = anagrams.melt(id_vars=[\"subidr\", \"attnr\"], var_name=\"solutions\", value_name=\"score\")\n", "anagrams_long.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Now we can make the plot that we want:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(data=anagrams_long, x=\"solutions\", y=\"score\", hue=\"attnr\", kind=\"point\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Further reading and take-home points\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "For a longer discussion about tabular data structures, you could read the `\"Tidy Data\" `_ paper by Hadley Wickham. Note that seaborn uses a slightly different set of concepts than are defined in the paper. While the paper associates tidiness with long-form structure, we have drawn a distinction between \"tidy wide-form\" data, where there is a clear mapping between variables in the dataset and the dimensions of the table, and \"messy data\", where no such mapping exists.\n", "\n", "The long-form structure has clear advantages. It allows you to create figures by explicitly assigning variables in the dataset to roles in the plot, and you can do so with more than three variables. When possible, try to represent your data with a long-form structure when embarking on serious analysis. Most of the examples in the seaborn documentation will use long-form data. But in cases where it is more natural to keep the dataset wide, remember that seaborn can remain useful."
] }, { "cell_type": "raw", "metadata": {}, "source": [ "Options for visualizing long-form data\n", "--------------------------------------\n", "\n", "While long-form data has a precise definition, seaborn is fairly flexible in terms of how it is actually organized across the data structures in memory. The examples in the rest of the documentation will typically use :class:`pandas.DataFrame` objects and reference variables in them by assigning names of their columns to the variables in the plot. But it is also possible to store vectors in a Python dictionary or a class that implements that interface:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flights_dict = flights.to_dict()\n", "sns.relplot(data=flights_dict, x=\"year\", y=\"passengers\", hue=\"month\", kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Many pandas operations, such as a the split-apply-combine operations of a group-by, will produce a dataframe where information has moved from the columns of the input dataframe to the index of the output. So long as the name is retained, you can still reference the data as normal:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flights_avg = flights.groupby(\"year\").mean()\n", "sns.relplot(data=flights_avg, x=\"year\", y=\"passengers\", kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Additionally, it's possible to pass vectors of data directly as arguments to ``x``, ``y``, and other plotting variables. 
If these vectors are pandas objects, the ``name`` attribute will be used to label the plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "year = flights_avg.index\n", "passengers = flights_avg[\"passengers\"]\n", "sns.relplot(x=year, y=passengers, kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Numpy arrays and other objects that implement the Python sequence interface work too, but if they don't have names, the plot will not be as informative without further tweaking:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=year.to_numpy(), y=passengers.to_list(), kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Options for visualizing wide-form data\n", "--------------------------------------\n", "\n", "The options for passing wide-form data are even more flexible. As with long-form data, pandas objects are preferable because the name (and, in some cases, index) information can be used. But in essence, any format that can be viewed as a single vector or a collection of vectors can be passed to ``data``, and a valid plot can usually be constructed.\n", "\n", "The example we saw above used a rectangular :class:`pandas.DataFrame`, which can be thought of as a collection of its columns. A dict or list of pandas objects will also work, but we'll lose the axis labels:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flights_wide_list = [col for _, col in flights_wide.items()]\n", "sns.relplot(data=flights_wide_list, kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The vectors in a collection do not need to have the same length. 
If they have an ``index``, it will be used to align them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "two_series = [flights_wide.loc[:1955, \"Jan\"], flights_wide.loc[1952:, \"Aug\"]]\n", "sns.relplot(data=two_series, kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Whereas an ordinal index will be used for numpy arrays or simple Python sequences:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "two_arrays = [s.to_numpy() for s in two_series]\n", "sns.relplot(data=two_arrays, kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "But a dictionary of such vectors will at least use the keys:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "two_arrays_dict = {s.name: s.to_numpy() for s in two_series}\n", "sns.relplot(data=two_arrays_dict, kind=\"line\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Rectangular numpy arrays are treated just like a dataframe without index information, so they are viewed as a collection of column vectors. Note that this is different from how numpy indexing operations work, where a single indexer will access a row. But it is consistent with how pandas would turn the array into a dataframe or how matplotlib would plot it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flights_array = flights_wide.to_numpy()\n", "sns.relplot(data=flights_array, kind=\"line\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "# TODO once the categorical module is refactored, its single vectors will get special treatment\n", "# (they'll look like collection of singletons, rather than a single collection). That should be noted." 
] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/tutorial/distributions.ipynb000066400000000000000000000706621410631356500216200ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _distribution_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Visualizing distributions of data\n", "==================================\n", "\n", ".. raw:: html\n", "\n", "
\n", "\n", "An early step in any effort to analyze or model data should be to understand how the variables are distributed. Techniques for distribution visualization can provide quick answers to many important questions. What range do the observations cover? What is their central tendency? Are they heavily skewed in one direction? Is there evidence for bimodality? Are there significant outliers? Do the answers to these questions vary across subsets defined by other variables?\n", "\n", "The :ref:`distributions module ` contains several functions designed to answer questions such as these. The axes-level functions are :func:`histplot`, :func:`kdeplot`, :func:`ecdfplot`, and :func:`rugplot`. They are grouped together within the figure-level :func:`displot`, :func:`jointplot`, and :func:`pairplot` functions.\n", "\n", "There are several different approaches to visualizing a distribution, and each has its relative advantages and drawbacks. It is important to understand theses factors so that you can choose the best approach for your particular aim." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "import seaborn as sns; sns.set_theme()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _tutorial_hist:\n", "\n", "Plotting univariate histograms\n", "------------------------------\n", "\n", "Perhaps the most common approach to visualizing a distribution is the *histogram*. This is the default approach in :func:`displot`, which uses the same underlying code as :func:`histplot`. 
A histogram is a bar plot where the axis representing the data variable is divided into a set of discrete bins and the count of observations falling within each bin is shown using the height of the corresponding bar:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "sns.displot(penguins, x=\"flipper_length_mm\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This plot immediately affords a few insights about the ``flipper_length_mm`` variable. For instance, we can see that the most common flipper length is about 195 mm, but the distribution appears bimodal, so this one number does not represent the data well.\n", "\n", "Choosing the bin size\n", "^^^^^^^^^^^^^^^^^^^^^\n", "\n", "The size of the bins is an important parameter, and using the wrong bin size can mislead by obscuring important features of the data or by creating apparent features out of random variability. By default, :func:`displot`/:func:`histplot` choose a default bin size based on the variance of the data and the number of observations. But you should not be over-reliant on such automatic approaches, because they depend on particular assumptions about the structure of your data. It is always advisable to check that your impressions of the distribution are consistent across different bin sizes. 
To choose the size directly, set the `binwidth` parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", binwidth=3)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In other circumstances, it may make more sense to specify the *number* of bins, rather than their size:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", bins=20)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "One example of a situation where defaults fail is when the variable takes a relatively small number of integer values. In that case, the default bin width may be too small, creating awkward gaps in the distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "sns.displot(tips, x=\"size\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "One approach would be to specify the precise bin breaks by passing an array to ``bins``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(tips, x=\"size\", bins=[1, 2, 3, 4, 5, 6, 7])" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This can also be accomplished by setting ``discrete=True``, which chooses bin breaks that represent the unique values in a dataset with bars that are centered on their corresponding value." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(tips, x=\"size\", discrete=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's also possible to visualize the distribution of a categorical variable using the logic of a histogram. 
Discrete bins are automatically set for categorical variables, but it may also be helpful to \"shrink\" the bars slightly to emphasize the categorical nature of the axis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(tips, x=\"day\", shrink=.8)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Conditioning on other variables\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "Once you understand the distribution of a variable, the next step is often to ask whether features of that distribution differ across other variables in the dataset. For example, what accounts for the bimodal distribution of flipper lengths that we saw above? :func:`displot` and :func:`histplot` provide support for conditional subsetting via the ``hue`` semantic. Assigning a variable to ``hue`` will draw a separate histogram for each of its unique values and distinguish them by color:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"species\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "By default, the different histograms are \"layered\" on top of each other and, in some cases, they may be difficult to distinguish. One option is to change the visual representation of the histogram from a bar plot to a \"step\" plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"species\", element=\"step\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Alternatively, instead of layering each bar, they can be \"stacked\", or moved vertically. 
In this plot, the outline of the full histogram will match the plot with only a single variable:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"species\", multiple=\"stack\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The stacked histogram emphasizes the part-whole relationship between the variables, but it can obscure other features (for example, it is difficult to determine the mode of the Adelie distribution). Another option is to \"dodge\" the bars, which moves them horizontally and reduces their width. This ensures that there are no overlaps and that the bars remain comparable in terms of height. But it only works well when the categorical variable has a small number of levels:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"sex\", multiple=\"dodge\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Because :func:`displot` is a figure-level function and is drawn onto a :class:`FacetGrid`, it is also possible to draw each individual distribution in a separate subplot by assigning the second variable to ``col`` or ``row`` rather than (or in addition to) ``hue``. 
This represents the distribution of each subset well, but it makes it more difficult to draw direct comparisons:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", col=\"sex\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "None of these approaches are perfect, and we will soon see some alternatives to a histogram that are better-suited to the task of comparison.\n", "\n", "Normalized histogram statistics\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "Before we do, another point to note is that, when the subsets have unequal numbers of observations, comparing their distributions in terms of counts may not be ideal. One solution is to *normalize* the counts using the ``stat`` parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"species\", stat=\"density\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "By default, however, the normalization is applied to the entire distribution, so this simply rescales the height of the bars. By setting ``common_norm=False``, each subset will be normalized independently:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"species\", stat=\"density\", common_norm=False)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Density normalization scales the bars so that their *areas* sum to 1. As a result, the density axis is not directly interpretable. Another option is to normalize the bars so that their *heights* sum to 1. 
This makes most sense when the variable is discrete, but it is an option for all histograms:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"species\", stat=\"probability\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _tutorial_kde:\n", "\n", "Kernel density estimation\n", "-------------------------\n", "\n", "A histogram aims to approximate the underlying probability density function that generated the data by binning and counting observations. Kernel density estimation (KDE) presents a different solution to the same problem. Rather than using discrete bins, a KDE plot smooths the observations with a Gaussian kernel, producing a continuous density estimate:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", kind=\"kde\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Choosing the smoothing bandwidth\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "Much like with the bin size in the histogram, the ability of the KDE to accurately represent the data depends on the choice of smoothing bandwidth. An over-smoothed estimate might erase meaningful features, but an under-smoothed estimate can obscure the true shape within random noise. The easiest way to check the robustness of the estimate is to adjust the default bandwidth:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", kind=\"kde\", bw_adjust=.25)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Note how the narrow bandwidth makes the bimodality much more apparent, but the curve is much less smooth. 
In contrast, a larger bandwidth obscures the bimodality almost completely:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", kind=\"kde\", bw_adjust=2)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Conditioning on other variables\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "As with histograms, if you assign a ``hue`` variable, a separate density estimate will be computed for each level of that variable:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"species\", kind=\"kde\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In many cases, the layered KDE is easier to interpret than the layered histogram, so it is often a good choice for the task of comparison. Many of the same options for resolving multiple distributions apply to the KDE as well, however:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"species\", kind=\"kde\", multiple=\"stack\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Note how the stacked plot filled in the area between each curve by default. It is also possible to fill in the curves for single or layered densities, although the default alpha value (opacity) will be different, so that the individual densities are easier to resolve." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"species\", kind=\"kde\", fill=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Kernel density estimation pitfalls\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "KDE plots have many advantages. 
Important features of the data are easy to discern (central tendency, bimodality, skew), and they afford easy comparisons between subsets. But there are also situations where KDE poorly represents the underlying data. This is because the logic of KDE assumes that the underlying distribution is smooth and unbounded. One way this assumption can fail is when a variable reflects a quantity that is naturally bounded. If there are observations lying close to the bound (for example, small values of a variable that cannot be negative), the KDE curve may extend to unrealistic values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(tips, x=\"total_bill\", kind=\"kde\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This can be partially avoided with the ``cut`` parameter, which specifies how far the curve should extend beyond the extreme datapoints. But this influences only where the curve is drawn; the density estimate will still smooth over the range where no data can exist, causing it to be artificially low at the extremes of the distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(tips, x=\"total_bill\", kind=\"kde\", cut=0)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The KDE approach also fails for discrete data or when data are naturally continuous but specific values are over-represented. The important thing to keep in mind is that the KDE will *always show you a smooth curve*, even when the data themselves are not smooth. 
For example, consider this distribution of diamond weights:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "diamonds = sns.load_dataset(\"diamonds\")\n", "sns.displot(diamonds, x=\"carat\", kind=\"kde\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "While the KDE suggests that there are peaks around specific values, the histogram reveals a much more jagged distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(diamonds, x=\"carat\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "As a compromise, it is possible to combine these two approaches. While in histogram mode, :func:`displot` (as with :func:`histplot`) has the option of including the smoothed KDE curve (note ``kde=True``, not ``kind=\"kde\"``):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(diamonds, x=\"carat\", kde=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _tutorial_ecdf:\n", "\n", "Empirical cumulative distributions\n", "----------------------------------\n", "\n", "A third option for visualizing distributions computes the \"empirical cumulative distribution function\" (ECDF). This plot draws a monotonically-increasing curve through each datapoint such that the height of the curve reflects the proportion of observations with a smaller value:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", kind=\"ecdf\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The ECDF plot has two key advantages. Unlike the histogram or KDE, it directly represents each datapoint. That means there is no bin size or smoothing parameter to consider. 
Additionally, because the curve is monotonically increasing, it is well-suited for comparing multiple distributions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"flipper_length_mm\", hue=\"species\", kind=\"ecdf\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The major downside to the ECDF plot is that it represents the shape of the distribution less intuitively than a histogram or density curve. Consider how the bimodality of flipper lengths is immediately apparent in the histogram, but to see it in the ECDF plot, you must look for varying slopes. Nevertheless, with practice, you can learn to answer all of the important questions about a distribution by examining the ECDF, and doing so can be a powerful approach." ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Visualizing bivariate distributions\n", "-----------------------------------\n", "\n", "All of the examples so far have considered *univariate* distributions: distributions of a single variable, perhaps conditional on a second variable assigned to ``hue``. Assigning a second variable to ``y``, however, will plot a *bivariate* distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A bivariate histogram bins the data within rectangles that tile the plot and then shows the count of observations within each rectangle with the fill color (analogous to a :func:`heatmap`). Similarly, a bivariate KDE plot smooths the (x, y) observations with a 2D Gaussian. 
The default representation then shows the *contours* of the 2D density:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", kind=\"kde\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Assigning a ``hue`` variable will plot multiple heatmaps or contour sets using different colors. For bivariate histograms, this will only work well if there is minimal overlap between the conditional distributions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", hue=\"species\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The contour approach of the bivariate KDE plot lends itself better to evaluating overlap, although a plot with too many contours can get busy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", hue=\"species\", kind=\"kde\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Just as with univariate plots, the choice of bin size or smoothing bandwidth will determine how well the plot represents the underlying bivariate distribution. 
The same parameters apply, but they can be tuned for each variable by passing a pair of values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", binwidth=(2, .5))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To aid interpretation of the heatmap, add a colorbar to show the mapping between counts and color intensity:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", binwidth=(2, .5), cbar=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The meaning of the bivariate density contours is less straightforward. Because the density is not directly interpretable, the contours are drawn at *iso-proportions* of the density, meaning that each curve shows a level set such that some proportion *p* of the density lies below it. The *p* values are evenly spaced, with the lowest level controlled by the ``thresh`` parameter and the number controlled by ``levels``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", kind=\"kde\", thresh=.2, levels=4)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The ``levels`` parameter also accepts a list of values, for more control:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\", kind=\"kde\", levels=[.01, .05, .1, .8])" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The bivariate histogram allows one or both variables to be discrete. 
Plotting one discrete and one continuous variable offers another way to compare conditional univariate distributions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(diamonds, x=\"price\", y=\"clarity\", log_scale=(True, False))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In contrast, plotting two discrete variables is an easy way to show the cross-tabulation of the observations:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(diamonds, x=\"color\", y=\"clarity\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Distribution visualization in other settings\n", "--------------------------------------------\n", "\n", "Several other figure-level plotting functions in seaborn make use of the :func:`histplot` and :func:`kdeplot` functions.\n", "\n", "\n", "Plotting joint and marginal distributions\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "The first is :func:`jointplot`, which augments a bivariate relational or distribution plot with the marginal distributions of the two variables. 
By default, :func:`jointplot` represents the bivariate distribution using :func:`scatterplot` and the marginal distributions using :func:`histplot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Similar to :func:`displot`, setting a different ``kind=\"kde\"`` in :func:`jointplot` will change both the joint and marginal plots to use :func:`kdeplot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(\n", "    data=penguins,\n", "    x=\"bill_length_mm\", y=\"bill_depth_mm\", hue=\"species\",\n", "    kind=\"kde\"\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ":func:`jointplot` is a convenient interface to the :class:`JointGrid` class, which offers more flexibility when used directly:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.JointGrid(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")\n", "g.plot_joint(sns.histplot)\n", "g.plot_marginals(sns.boxplot)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A less-obtrusive way to show marginal distributions uses a \"rug\" plot, which adds a small tick on the edge of the plot to represent each individual observation. 
This is built into :func:`displot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(\n", " penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\",\n", " kind=\"kde\", rug=True\n", ")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "And the axes-level :func:`rugplot` function can be used to add rugs on the side of any other kind of plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")\n", "sns.rugplot(data=penguins, x=\"bill_length_mm\", y=\"bill_depth_mm\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting many distributions\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "The :func:`pairplot` function offers a similar blend of joint and marginal distributions. Rather than focusing on a single relationship, however, :func:`pairplot` uses a \"small-multiple\" approach to visualize the univariate distribution of all variables in a dataset along with all of their pairwise relationships:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(penguins)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "As with :func:`jointplot`/:class:`JointGrid`, using the underlying :class:`PairGrid` directly will afford more flexibility with only a bit more typing:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(penguins)\n", "g.map_upper(sns.histplot)\n", "g.map_lower(sns.kdeplot, fill=True)\n", "g.map_diag(sns.histplot, kde=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/tutorial/function_overview.ipynb000066400000000000000000000473621410631356500224720ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _function_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Overview of seaborn plotting functions\n", "======================================\n", "\n", ".. raw:: html\n", "\n", "
\n", "\n", "Most of your interactions with seaborn will happen through a set of plotting functions. Later chapters in the tutorial will explore the specific features offered by each function. This chapter will introduce, at a high-level, the different kinds of functions that you will encounter." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt\n", "from IPython.display import HTML\n", "sns.set_theme()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Similar functions for similar tasks\n", "-----------------------------------\n", "\n", "The seaborn namespace is flat; all of the functionality is accessible at the top level. But the code itself is hierarchically structured, with modules of functions that achieve similar visualization goals through different means. Most of the docs are structured around these modules: you'll encounter names like \"relational\", \"distributional\", and \"categorical\".\n", "\n", "For example, the :ref:`distributions module ` defines functions that specialize in representing the distribution of datapoints. 
This includes familiar methods like the histogram:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "penguins = sns.load_dataset(\"penguins\")\n", "sns.histplot(data=penguins, x=\"flipper_length_mm\", hue=\"species\", multiple=\"stack\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Along with similar, but perhaps less familiar, options such as kernel density estimation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(data=penguins, x=\"flipper_length_mm\", hue=\"species\", multiple=\"stack\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Functions within a module share a lot of underlying code and offer similar features that may not be present in other components of the library (such as ``multiple=\"stack\"`` in the examples above). They are designed to facilitate switching between different visual representations as you explore a dataset, because different representations often have complementary strengths and weaknesses." ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Figure-level vs. axes-level functions\n", "-------------------------------------\n", "\n", "In addition to the different modules, there is a cross-cutting classification of seaborn functions as \"axes-level\" or \"figure-level\". The examples above are axes-level functions. They plot data onto a single :class:`matplotlib.pyplot.Axes` object, which is the return value of the function.\n", "\n", "In contrast, figure-level functions interface with matplotlib through a seaborn object, usually a :class:`FacetGrid`, that manages the figure. Each module has a single figure-level function, which offers a unitary interface to its various axes-level functions. 
The organization looks a bit like this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "from matplotlib.patches import FancyBboxPatch\n", "\n", "f, ax = plt.subplots(figsize=(7, 5))\n", "f.subplots_adjust(0, 0, 1, 1)\n", "ax.set_axis_off()\n", "ax.set(xlim=(0, 1), ylim=(0, 1))\n", "\n", "\n", "modules = \"relational\", \"distributions\", \"categorical\"\n", "\n", "pal = sns.color_palette(\"deep\")\n", "colors = dict(relational=pal[0], distributions=pal[1], categorical=pal[2])\n", "\n", "pal = sns.color_palette(\"dark\")\n", "text_colors = dict(relational=pal[0], distributions=pal[1], categorical=pal[2])\n", "\n", "\n", "functions = dict(\n", " relational=[\"scatterplot\", \"lineplot\"],\n", " distributions=[\"histplot\", \"kdeplot\", \"ecdfplot\", \"rugplot\"],\n", " categorical=[\"stripplot\", \"swarmplot\", \"boxplot\", \"violinplot\", \"pointplot\", \"barplot\"],\n", ")\n", "\n", "pad = .06\n", "\n", "w = .2\n", "h = .15\n", "\n", "xs = np.arange(0, 1, 1 / 3) + pad * 1.05\n", "y = .7\n", "\n", "for x, mod in zip(xs, modules):\n", " color = colors[mod] + (.2,)\n", " text_color = text_colors[mod]\n", " box = FancyBboxPatch((x, y), w, h, f\"round,pad={pad}\", color=\"white\")\n", " ax.add_artist(box)\n", " box = FancyBboxPatch((x, y), w, h, f\"round,pad={pad}\", linewidth=1, edgecolor=text_color, facecolor=color)\n", " ax.add_artist(box)\n", " ax.text(x + w / 2, y + h / 2, f\"{mod[:3]}plot\\n({mod})\", ha=\"center\", va=\"center\", size=22, color=text_color)\n", "\n", " for i, func in enumerate(functions[mod]):\n", " x_i = x + w / 2\n", " y_i = y - i * .1 - h / 2 - pad\n", " box = FancyBboxPatch((x_i - w / 2, y_i - pad / 3), w, h / 4, f\"round,pad={pad / 3}\",\n", " color=\"white\")\n", " ax.add_artist(box)\n", " box = FancyBboxPatch((x_i - w / 2, y_i - pad / 3), w, h / 4, f\"round,pad={pad / 3}\",\n", " linewidth=1, edgecolor=text_color, facecolor=color)\n", " ax.add_artist(box)\n", " 
ax.text(x_i, y_i, func, ha=\"center\", va=\"center\", size=18, color=text_color)\n", "\n", " ax.plot([x_i, x_i], [y, y_i], zorder=-100, color=text_color, lw=1)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "For example, :func:`displot` is the figure-level function for the distributions module. Its default behavior is to draw a histogram, using the same code as :func:`histplot` behind the scenes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", hue=\"species\", multiple=\"stack\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To draw a kernel density plot instead, using the same code as :func:`kdeplot`, select it using the ``kind`` parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", hue=\"species\", multiple=\"stack\", kind=\"kde\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You'll notice that the figure-level plots look mostly like their axes-level counterparts, but there are a few differences. Notably, the legend is placed outside the plot. They also have a slightly different shape (more on that shortly).\n", "\n", "The most useful feature offered by the figure-level functions is that they can easily create figures with multiple subplots. 
For example, instead of stacking the three distributions for each species of penguins in the same axes, we can \"facet\" them by plotting each distribution across the columns of the figure:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.displot(data=penguins, x=\"flipper_length_mm\", hue=\"species\", col=\"species\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The figure-level functions wrap their axes-level counterparts and pass the kind-specific keyword arguments (such as the bin size for a histogram) down to the underlying function. That means they are no less flexible, but there is a downside: the kind-specific parameters don't appear in the function signature or docstrings. Some of their features might be less discoverable, and you may need to look at two different pages of the documentation before understanding how to achieve a specific goal." ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Axes-level functions make self-contained plots\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "The axes-level functions are written to act like drop-in replacements for matplotlib functions. While they add axis labels and legends automatically, they don't modify anything beyond the axes that they are drawn into. That means they can be composed into arbitrarily-complex matplotlib figures with predictable results.\n", "\n", "The axes-level functions call :func:`matplotlib.pyplot.gca` internally, which hooks into the matplotlib state-machine interface so that they draw their plots on the \"currently-active\" axes. 
But they additionally accept an ``ax=`` argument, which integrates with the object-oriented interface and lets you specify exactly where each plot should go:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, axs = plt.subplots(1, 2, figsize=(8, 4), gridspec_kw=dict(width_ratios=[4, 3]))\n", "sns.scatterplot(data=penguins, x=\"flipper_length_mm\", y=\"bill_length_mm\", hue=\"species\", ax=axs[0])\n", "sns.histplot(data=penguins, x=\"species\", hue=\"species\", shrink=.8, alpha=.8, legend=False, ax=axs[1])\n", "f.tight_layout()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Figure-level functions own their figure\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "In contrast, figure-level functions cannot (easily) be composed with other plots. By design, they \"own\" their own figure, including its initialization, so there's no notion of using a figure-level function to draw a plot onto an existing axes. This constraint allows the figure-level functions to implement features such as putting the legend outside of the plot.\n", "\n", "Nevertheless, it is possible to go beyond what the figure-level functions offer by accessing the matplotlib axes on the object that they return and adding other elements to the plot that way:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "g = sns.relplot(data=tips, x=\"total_bill\", y=\"tip\")\n", "g.ax.axline(xy1=(10, 2), slope=.2, color=\"b\", dashes=(5, 2))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Customizing plots from a figure-level function\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "The figure-level functions return a :class:`FacetGrid` instance, which has a few methods for customizing attributes of the plot in a way that is \"smart\" about the subplot organization. 
For example, you can change the labels on the external axes using a single line of code:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.relplot(data=penguins, x=\"flipper_length_mm\", y=\"bill_length_mm\", col=\"sex\")\n", "g.set_axis_labels(\"Flipper length (mm)\", \"Bill length (mm)\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "While convenient, this does add a bit of extra complexity, as you need to remember that this method is not part of the matplotlib API and exists only when using a figure-level function." ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Specifying figure sizes\n", "^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "To increase or decrease the size of a matplotlib plot, you set the width and height of the entire figure, either in the `global rcParams `_, while setting up the plot (e.g. with the ``figsize`` parameter of :func:`matplotlib.pyplot.subplots`), or by calling a method on the figure object (e.g. :meth:`matplotlib.Figure.set_size_inches`). When using an axes-level function in seaborn, the same rules apply: the size of the plot is determined by the size of the figure it is part of and the axes layout in that figure.\n", "\n", "When using a figure-level function, there are several key differences. First, the functions themselves have parameters to control the figure size (although these are actually parameters of the underlying :class:`FacetGrid` that manages the figure). Second, these parameters, ``height`` and ``aspect``, parameterize the size slightly differently than the ``width``, ``height`` parameterization in matplotlib (using the seaborn parameters, ``width = height * aspect``). 
Most importantly, the parameters correspond to the size of each *subplot*, rather than the size of the overall figure.\n", "\n", "To illustrate the difference between these approaches, here is the default output of :func:`matplotlib.pyplot.subplots` with one subplot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A figure with multiple columns will have the same overall size, but the axes will be squeezed horizontally to fit in the space:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots(1, 2, sharey=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In contrast, a plot created by a figure-level function will be square. To demonstrate that, let's set up an empty plot by using :class:`FacetGrid` directly. This happens behind the scenes in functions like :func:`relplot`, :func:`displot`, or :func:`catplot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(penguins)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When additional columns are added, the figure itself will become wider, so that its subplots have the same size and shape:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(penguins, col=\"sex\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "And you can adjust the size and shape of each subplot without accounting for the total number of rows and columns in the figure:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(penguins, col=\"sex\", height=3.5, aspect=.75)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The upshot is that you can assign faceting variables without stopping to think about how you'll need to adjust the 
total figure size. A downside is that, when you do want to change the figure size, you'll need to remember that things work a bit differently than they do in matplotlib." ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Relative merits of figure-level functions\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "Here is a summary of the pros and cons that we have discussed above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "HTML(\"\"\"\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
AdvantagesDrawbacks
Easy faceting by data variablesMany parameters not in function signature
Legend outside of plot by defaultCannot be part of a larger matplotlib figure
Easy figure-level customizationDifferent API from matplotlib
Different figure size parameterizationDifferent figure size parameterization
\n", "\"\"\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "On balance, the figure-level functions add some additional complexity that can make things more confusing for beginners, but their distinct features give them additional power. The tutorial documentation mostly uses the figure-level functions, because they produce slightly cleaner plots, and we generally recommend their use for most applications. The one situation where they are not a good choice is when you need to make a complex, standalone figure that composes multiple different plot kinds. At this point, it's recommended to set up the figure using matplotlib directly and to fill in the individual components using axes-level functions." ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Combining multiple views on the data\n", "------------------------------------\n", "\n", "Two important plotting functions in seaborn don't fit cleanly into the classification scheme discussed above. These functions, :func:`jointplot` and :func:`pairplot`, employ multiple kinds of plots from different modules to represent multiple aspects of a dataset in a single figure. Both plots are figure-level functions and create figures with multiple subplots by default. 
But they use different objects to manage the figure: :class:`JointGrid` and :class:`PairGrid`, respectively.\n", "\n", ":func:`jointplot` plots the relationship or joint distribution of two variables while adding marginal axes that show the univariate distribution of each one separately:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(data=penguins, x=\"flipper_length_mm\", y=\"bill_length_mm\", hue=\"species\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ":func:`pairplot` is similar — it combines joint and marginal views — but rather than focusing on a single relationship, it visualizes every pairwise combination of variables simultaneously:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(data=penguins, hue=\"species\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Behind the scenes, these functions are using axes-level functions that you have already met (:func:`scatterplot` and :func:`kdeplot`), and they also have a ``kind`` parameter that lets you quickly swap in a different representation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(data=penguins, x=\"flipper_length_mm\", y=\"bill_length_mm\", hue=\"species\", kind=\"hist\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/tutorial/regression.ipynb000066400000000000000000000435301410631356500210700ustar00rootroot00000000000000{ "cells": [ 
{ "cell_type": "raw", "metadata": {}, "source": [ ".. _regression_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Visualizing regression models\n", "=============================\n", "\n", ".. raw:: html\n", "\n", "
" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Many datasets contain multiple quantitative variables, and the goal of an analysis is often to relate those variables to each other. We :ref:`previously discussed ` functions that can accomplish this by showing the joint distribution of two variables. It can be very helpful, though, to use statistical models to estimate a simple relationship between two noisy sets of observations. The functions discussed in this chapter will do so through the common framework of linear regression.\n", "\n", "In the spirit of Tukey, the regression plots in seaborn are primarily intended to add a visual guide that helps to emphasize patterns in a dataset during exploratory data analyses. That is to say that seaborn is not itself a package for statistical analysis. To obtain quantitative measures related to the fit of regression models, you should use `statsmodels `_. The goal of seaborn, however, is to make exploring a dataset through visualization quick and easy, as doing so is just as (if not more) important than exploring a dataset through tables of statistics." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_theme(color_codes=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "np.random.seed(sum(map(ord, \"regression\")))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Functions to draw linear regression models\n", "------------------------------------------\n", "\n", "Two main functions in seaborn are used to visualize a linear relationship as determined through regression. These functions, :func:`regplot` and :func:`lmplot`, are closely related, and share much of their core functionality. It is important to understand the ways they differ, however, so that you can quickly choose the correct tool for a particular job.\n", "\n", "In the simplest invocation, both functions draw a scatterplot of two variables, ``x`` and ``y``, and then fit the regression model ``y ~ x`` and plot the resulting regression line and a 95% confidence interval for that regression:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.regplot(x=\"total_bill\", y=\"tip\", data=tips);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You should note that the resulting plots are identical, except that the figure shapes are different. We will explain why this is shortly. 
For now, the other main difference to know about is that :func:`regplot` accepts the ``x`` and ``y`` variables in a variety of formats including simple numpy arrays, pandas ``Series`` objects, or as references to variables in a pandas ``DataFrame`` object passed to ``data``. In contrast, :func:`lmplot` has ``data`` as a required parameter and the ``x`` and ``y`` variables must be specified as strings. This data format is called \"long-form\" or `\"tidy\" `_ data. Other than this input flexibility, :func:`regplot` possesses a subset of :func:`lmplot`'s features, so we will demonstrate them using the latter.\n", "\n", "It's possible to fit a linear regression when one of the variables takes discrete values, however, the simple scatterplot produced by this kind of dataset is often not optimal:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"size\", y=\"tip\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "One option is to add some random noise (\"jitter\") to the discrete values to make the distribution of those values more clear. 
Note that jitter is applied only to the scatterplot data and does not influence the regression line fit itself:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"size\", y=\"tip\", data=tips, x_jitter=.05);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A second option is to collapse over the observations in each discrete bin to plot an estimate of central tendency along with a confidence interval:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"size\", y=\"tip\", data=tips, x_estimator=np.mean);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Fitting different kinds of models\n", "---------------------------------\n", "\n", "The simple linear regression model used above is very simple to fit, however, it is not appropriate for some kinds of datasets. The `Anscombe's quartet `_ dataset shows a few examples where simple linear regression provides an identical estimate of a relationship where simple visual inspection clearly shows differences. 
For example, in the first case, the linear regression is a good model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "anscombe = sns.load_dataset(\"anscombe\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'I'\"),\n", " ci=None, scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The linear relationship in the second dataset is the same, but the plot clearly shows that this is not a good model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'II'\"),\n", " ci=None, scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In the presence of these kinds of higher-order relationships, :func:`lmplot` and :func:`regplot` can fit a polynomial regression model to explore simple kinds of nonlinear trends in the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'II'\"),\n", " order=2, ci=None, scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A different problem is posed by \"outlier\" observations that deviate for some reason other than the main relationship under study:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'III'\"),\n", " ci=None, scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In the presence of outliers, it can be useful to fit a robust regression, which uses a different loss function to downweight relatively large residuals:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ 
"sns.lmplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'III'\"),\n", " robust=True, ci=None, scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When the ``y`` variable is binary, simple linear regression also \"works\" but provides implausible predictions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips[\"big_tip\"] = (tips.tip / tips.total_bill) > .15\n", "sns.lmplot(x=\"total_bill\", y=\"big_tip\", data=tips,\n", " y_jitter=.03);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The solution in this case is to fit a logistic regression, such that the regression line shows the estimated probability of ``y = 1`` for a given value of ``x``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"big_tip\", data=tips,\n", " logistic=True, y_jitter=.03);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Note that the logistic regression estimate is considerably more computationally intensive (this is true of robust regression as well) than simple regression, and as the confidence interval around the regression line is computed using a bootstrap procedure, you may wish to turn this off for faster iteration (using ``ci=None``).\n", "\n", "An altogether different approach is to fit a nonparametric regression using a `lowess smoother `_. This approach has the fewest assumptions, although it is computationally intensive and so currently confidence intervals are not computed at all:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", data=tips,\n", " lowess=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The :func:`residplot` function can be a useful tool for checking whether the simple regression model is appropriate for a dataset. 
It fits and removes a simple linear regression and then plots the residual values for each observation. Ideally, these values should be randomly scattered around ``y = 0``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.residplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'I'\"),\n", " scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If there is structure in the residuals, it suggests that simple linear regression is not appropriate:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.residplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'II'\"),\n", " scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Conditioning on other variables\n", "-------------------------------\n", "\n", "The plots above show many ways to explore the relationship between a pair of variables. Often, however, a more interesting question is \"how does the relationship between these two variables change as a function of a third variable?\" This is where the difference between :func:`regplot` and :func:`lmplot` appears. While :func:`regplot` always shows a single relationship, :func:`lmplot` combines :func:`regplot` with :class:`FacetGrid` to provide an easy interface to show a linear regression on \"faceted\" plots that allow you to explore interactions with up to three additional categorical variables.\n", "\n", "The best way to separate out a relationship is to plot both levels on the same axes and to use color to distinguish them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In addition to color, it's possible to use different scatterplot markers to make plots that will reproduce better in black and white. 
You also have full control over the colors used:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips,\n", " markers=[\"o\", \"x\"], palette=\"Set1\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To add another variable, you can draw multiple \"facets\", with each level of the variable appearing in the rows or columns of the grid:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", col=\"time\", data=tips);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\",\n", " col=\"time\", row=\"sex\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Controlling the size and shape of the plot\n", "------------------------------------------\n", "\n", "Earlier we noted that the default plots made by :func:`regplot` and :func:`lmplot` look the same but on axes that have a different size and shape. This is because :func:`regplot` is an \"axes-level\" function that draws onto a specific axes. This means that you can make multi-panel figures yourself and control exactly where the regression plot goes. If no axes object is explicitly provided, it simply uses the \"currently active\" axes, which is why the default plot has the same size and shape as most other matplotlib functions. To control the size, you need to create a figure object yourself." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots(figsize=(5, 6))\n", "sns.regplot(x=\"total_bill\", y=\"tip\", data=tips, ax=ax);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In contrast, the size and shape of the :func:`lmplot` figure is controlled through the :class:`FacetGrid` interface using the ``height`` and ``aspect`` parameters, which apply to each *facet* in the plot, not to the overall figure itself:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", col=\"day\", data=tips,\n", " col_wrap=2, height=3);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", col=\"day\", data=tips,\n", " aspect=.5);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting a regression in other contexts\n", "---------------------------------------\n", "\n", "A few other seaborn functions use :func:`regplot` in the context of a larger, more complex plot. The first is the :func:`jointplot` function that we introduced in the :ref:`distributions tutorial `. In addition to the plot styles previously discussed, :func:`jointplot` can use :func:`regplot` to show the linear regression fit on the joint axes by passing ``kind=\"reg\"``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(x=\"total_bill\", y=\"tip\", data=tips, kind=\"reg\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Using the :func:`pairplot` function with ``kind=\"reg\"`` combines :func:`regplot` and :class:`PairGrid` to show the linear relationship between variables in a dataset. Take care to note how this is different from :func:`lmplot`. 
In the figure below, the two axes don't show the same relationship conditioned on two levels of a third variable; rather, :func:`PairGrid` is used to show multiple relationships between different pairings of the variables in a dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(tips, x_vars=[\"total_bill\", \"size\"], y_vars=[\"tip\"],\n", " height=5, aspect=.8, kind=\"reg\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Like :func:`lmplot`, but unlike :func:`jointplot`, conditioning on an additional categorical variable is built into :func:`pairplot` using the ``hue`` parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(tips, x_vars=[\"total_bill\", \"size\"], y_vars=[\"tip\"],\n", " hue=\"smoker\", height=5, aspect=.8, kind=\"reg\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/tutorial/relational.ipynb000066400000000000000000000525501410631356500210440ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _relational_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Visualizing statistical relationships\n", "=====================================\n", "\n", ".. raw:: html\n", "\n", "
\n", "\n", "Statistical analysis is a process of understanding how variables in a dataset relate to each other and how those relationships depend on other variables. Visualization can be a core component of this process because, when data are visualized properly, the human visual system can see trends and patterns that indicate a relationship.\n", "\n", "We will discuss three seaborn functions in this tutorial. The one we will use most is :func:`relplot`. This is a :doc:`figure-level function ` for visualizing statistical relationships using two common approaches: scatter plots and line plots. :func:`relplot` combines a :class:`FacetGrid` with one of two axes-level functions:\n", "\n", "- :func:`scatterplot` (with ``kind=\"scatter\"``; the default)\n", "- :func:`lineplot` (with ``kind=\"line\"``)\n", "\n", "As we will see, these functions can be quite illuminating because they use simple and easily-understood representations of data that can nevertheless represent complex dataset structures. They can do so because they plot two-dimensional graphics that can be enhanced by mapping up to three additional variables using the semantics of hue, size, and style." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "sns.set_theme(style=\"darkgrid\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "np.random.seed(sum(map(ord, \"relational\")))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _scatterplot_tutorial:\n", "\n", "Relating variables with scatter plots\n", "-------------------------------------\n", "\n", "The scatter plot is a mainstay of statistical visualization. It depicts the joint distribution of two variables using a cloud of points, where each point represents an observation in the dataset. 
This depiction allows the eye to infer a substantial amount of information about whether there is any meaningful relationship between them.\n", "\n", "There are several ways to draw a scatter plot in seaborn. The most basic, which should be used when both variables are numeric, is the :func:`scatterplot` function. In the :ref:`categorical visualization tutorial `, we will see specialized tools for using scatterplots to visualize categorical data. The :func:`scatterplot` is the default ``kind`` in :func:`relplot` (it can also be forced by setting ``kind=\"scatter\"``):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "sns.relplot(x=\"total_bill\", y=\"tip\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "While the points are plotted in two dimensions, another dimension can be added to the plot by coloring the points according to a third variable. In seaborn, this is referred to as using a \"hue semantic\", because the color of the point gains meaning:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To emphasize the difference between the classes, and to improve accessibility, you can use a different marker style for each class:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", style=\"smoker\",\n", " data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's also possible to represent four variables by changing the hue and style of each point independently. 
But this should be done carefully, because the eye is much less sensitive to shape than to color:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", style=\"time\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In the examples above, the hue semantic was categorical, so the default :ref:`qualitative palette ` was applied. If the hue semantic is numeric (specifically, if it can be cast to float), the default coloring switches to a sequential palette:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"size\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In both cases, you can customize the color palette. There are many options for doing so. Here, we customize a sequential palette using the string interface to :func:`cubehelix_palette`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"size\", palette=\"ch:r=-.5,l=.75\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The third kind of semantic variable changes the size of each point:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", size=\"size\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Unlike with :func:`matplotlib.pyplot.scatter`, the literal value of the variable is not used to pick the area of the point. Instead, the range of values in data units is normalized into a range in area units. 
This range can be customized:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", size=\"size\", sizes=(15, 200), data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "More examples for customizing how the different semantics are used to show statistical relationships are shown in the :func:`scatterplot` API examples." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _lineplot_tutorial:\n", "\n", "Emphasizing continuity with line plots\n", "--------------------------------------\n", "\n", "Scatter plots are highly effective, but there is no universally optimal type of visualization. Instead, the visual representation should be adapted to the specifics of the dataset and to the question you are trying to answer with the plot.\n", "\n", "With some datasets, you may want to understand changes in one variable as a function of time, or a similarly continuous variable. In this situation, a good choice is to draw a line plot. In seaborn, this can be accomplished by the :func:`lineplot` function, either directly or with :func:`relplot` by setting ``kind=\"line\"``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(dict(time=np.arange(500),\n", " value=np.random.randn(500).cumsum()))\n", "g = sns.relplot(x=\"time\", y=\"value\", kind=\"line\", data=df)\n", "g.figure.autofmt_xdate()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Because :func:`lineplot` assumes that you are most often trying to draw ``y`` as a function of ``x``, the default behavior is to sort the data by the ``x`` values before plotting. 
However, this can be disabled:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(np.random.randn(500, 2).cumsum(axis=0), columns=[\"x\", \"y\"])\n", "sns.relplot(x=\"x\", y=\"y\", sort=False, kind=\"line\", data=df);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Aggregation and representing uncertainty\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "More complex datasets will have multiple measurements for the same value of the ``x`` variable. The default behavior in seaborn is to aggregate the multiple measurements at each ``x`` value by plotting the mean and the 95% confidence interval around the mean:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fmri = sns.load_dataset(\"fmri\")\n", "sns.relplot(x=\"timepoint\", y=\"signal\", kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The confidence intervals are computed using bootstrapping, which can be time-intensive for larger datasets. It's therefore possible to disable them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", ci=None, kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Another good option, especially with larger data, is to represent the spread of the distribution at each timepoint by plotting the standard deviation instead of a confidence interval:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", kind=\"line\", ci=\"sd\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To turn off aggregation altogether, set the ``estimator`` parameter to ``None``. This might produce a strange effect when the data have multiple observations at each point."
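] }, { "cell_type": "raw", "metadata": {}, "source": [ "For a single sampling unit there is nothing to aggregate, so disabling the estimator simply draws the raw trace (a sketch; the subset values assume the structure of the example ``fmri`` dataset):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# a sketch: one subject, one condition -- a single raw time course\n", "one_subject = fmri.query(\"subject == 's0' and event == 'stim' and region == 'parietal'\")\n", "sns.relplot(x=\"timepoint\", y=\"signal\", estimator=None, kind=\"line\", data=one_subject);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "With multiple observations at each ``x`` value, the undistinguished lines overlap:"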
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", estimator=None, kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting subsets of data with semantic mappings\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "The :func:`lineplot` function has the same flexibility as :func:`scatterplot`: it can show up to three additional variables by modifying the hue, size, and style of the plot elements. It does so using the same API as :func:`scatterplot`, meaning that we don't need to stop and think about the parameters that control the look of lines vs. points in matplotlib.\n", "\n", "Using semantics in :func:`lineplot` will also determine how the data get aggregated. For example, adding a hue semantic with two levels splits the plot into two lines and error bands, coloring each to indicate which subset of the data they correspond to." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"event\", kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Adding a style semantic to a line plot changes the pattern of dashes in the line by default:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"region\", style=\"event\",\n", " kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "But you can identify subsets by the markers used at each observation, either together with the dashes or instead of them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"region\", style=\"event\",\n", " dashes=False, markers=True, kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "As with 
scatter plots, be cautious about making line plots using multiple semantics. While sometimes informative, they can also be difficult to parse and interpret. But even when you are only examining changes across one additional variable, it can be useful to alter both the color and style of the lines. This can make the plot more accessible when printed in black-and-white or viewed by someone with color blindness:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"event\", style=\"event\",\n", " kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When you are working with repeated measures data (that is, you have units that were sampled multiple times), you can also plot each sampling unit separately without distinguishing them through semantics. This avoids cluttering the legend:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"region\",\n", " units=\"subject\", estimator=None,\n", " kind=\"line\", data=fmri.query(\"event == 'stim'\"));" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The default colormap and handling of the legend in :func:`lineplot` also depend on whether the hue semantic is categorical or numeric:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dots = sns.load_dataset(\"dots\").query(\"align == 'dots'\")\n", "sns.relplot(x=\"time\", y=\"firing_rate\",\n", " hue=\"coherence\", style=\"choice\",\n", " kind=\"line\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It may happen that, even though the ``hue`` variable is numeric, it is poorly represented by a linear color scale. That's the case here, where the levels of the ``hue`` variable are logarithmically scaled. 
You can provide specific color values for each line by passing a list or dictionary:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "palette = sns.cubehelix_palette(light=.8, n_colors=6)\n", "sns.relplot(x=\"time\", y=\"firing_rate\",\n", " hue=\"coherence\", style=\"choice\",\n", " palette=palette,\n", " kind=\"line\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Or you can alter how the colormap is normalized:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib.colors import LogNorm\n", "palette = sns.cubehelix_palette(light=.7, n_colors=6)\n", "sns.relplot(x=\"time\", y=\"firing_rate\",\n", " hue=\"coherence\", style=\"choice\",\n", " hue_norm=LogNorm(),\n", " kind=\"line\",\n", " data=dots.query(\"coherence > 0\"));" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The third semantic, size, changes the width of the lines:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"time\", y=\"firing_rate\",\n", " size=\"coherence\", style=\"choice\",\n", " kind=\"line\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "While the ``size`` variable will typically be numeric, it's also possible to map a categorical variable with the width of the lines. Be cautious when doing so, because it will be difficult to distinguish much more than \"thick\" vs \"thin\" lines. 
However, dashes can be hard to perceive when lines have high-frequency variability, so using different widths may be more effective in that case:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"time\", y=\"firing_rate\",\n", " hue=\"coherence\", size=\"choice\",\n", " palette=palette,\n", " kind=\"line\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting with date data\n", "~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "Line plots are often used to visualize data associated with real dates and times. These functions pass the data down in their original format to the underlying matplotlib functions, and so they can take advantage of matplotlib's ability to format dates in tick labels. But all of that formatting will have to take place at the matplotlib layer, and you should refer to the matplotlib documentation to see how it works:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(dict(time=pd.date_range(\"2017-1-1\", periods=500),\n", " value=np.random.randn(500).cumsum()))\n", "g = sns.relplot(x=\"time\", y=\"value\", kind=\"line\", data=df)\n", "g.figure.autofmt_xdate()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Showing multiple relationships with facets\n", "------------------------------------------\n", "\n", "We've emphasized in this tutorial that, while these functions *can* show several semantic variables at once, it's not always effective to do so. But what about when you do want to understand how a relationship between two variables depends on more than one other variable?\n", "\n", "The best approach may be to make more than one plot. Because :func:`relplot` is based on the :class:`FacetGrid`, this is easy to do. To show the influence of an additional variable, instead of assigning it to one of the semantic roles in the plot, use it to \"facet\" the visualization. 
This means that you make multiple axes and plot subsets of the data on each of them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\",\n", " col=\"time\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can also show the influence of two variables this way: one by faceting on the columns and one by faceting on the rows. As you start adding more variables to the grid, you may want to decrease the figure size. Remember that the size of the :class:`FacetGrid` is parameterized by the height and aspect ratio of *each facet*:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "subject_number = fmri[\"subject\"].str[1:].astype(int)\n", "fmri = fmri.iloc[subject_number.argsort()]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"subject\",\n", " col=\"region\", row=\"event\", height=3,\n", " kind=\"line\", estimator=None, data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When you want to examine effects across many levels of a variable, it can be a good idea to facet that variable on the columns and then \"wrap\" the facets into the rows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"event\", style=\"event\",\n", " col=\"subject\", col_wrap=5,\n", " height=3, aspect=.75, linewidth=2.5,\n", " kind=\"line\", data=fmri.query(\"region == 'frontal'\"));" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "These visualizations, which are often called \"lattice\" plots or \"small-multiples\", are very effective because they present the data in a format that makes it easy for the eye to detect both overall patterns and deviations from those patterns. 
While you should make use of the flexibility afforded by :func:`scatterplot` and :func:`relplot`, always try to keep in mind that several simple plots are usually more effective than one complex plot." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "seaborn-py38-latest", "language": "python", "name": "seaborn-py38-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.11.2/doc/whatsnew.rst000066400000000000000000000032241410631356500163700ustar00rootroot00000000000000.. _whatsnew: .. currentmodule:: seaborn .. role:: raw-html(raw) :format: html .. role:: raw-latex(raw) :format: latex .. |API| replace:: :raw-html:`API` :raw-latex:`{\small\sc [API]}` .. |Defaults| replace:: :raw-html:`Defaults` :raw-latex:`{\small\sc [Defaults]}` .. |Docs| replace:: :raw-html:`Docs` :raw-latex:`{\small\sc [Docs]}` .. |Feature| replace:: :raw-html:`Feature` :raw-latex:`{\small\sc [Feature]}` .. |Enhancement| replace:: :raw-html:`Enhancement` :raw-latex:`{\small\sc [Enhancement]}` .. |Fix| replace:: :raw-html:`Fix` :raw-latex:`{\small\sc [Fix]}` What's new in each version ========================== This page contains information about what has changed in each new version of ``seaborn``. .. raw:: html
.. include:: releases/v0.11.2.txt .. include:: releases/v0.11.1.txt .. include:: releases/v0.11.0.txt .. include:: releases/v0.10.1.txt .. include:: releases/v0.10.0.txt .. include:: releases/v0.9.1.txt .. include:: releases/v0.9.0.txt .. include:: releases/v0.8.1.txt .. include:: releases/v0.8.0.txt .. include:: releases/v0.7.1.txt .. include:: releases/v0.7.0.txt .. include:: releases/v0.6.0.txt .. include:: releases/v0.5.1.txt .. include:: releases/v0.5.0.txt .. include:: releases/v0.4.0.txt .. include:: releases/v0.3.1.txt .. include:: releases/v0.3.0.txt .. include:: releases/v0.2.1.txt .. include:: releases/v0.2.0.txt .. raw:: html
seaborn-0.11.2/examples/000077500000000000000000000000001410631356500150465ustar00rootroot00000000000000seaborn-0.11.2/examples/.gitignore000066400000000000000000000000201410631356500170260ustar00rootroot00000000000000*.html *_files/ seaborn-0.11.2/examples/anscombes_quartet.py000066400000000000000000000006561410631356500211460ustar00rootroot00000000000000""" Anscombe's quartet ================== _thumb: .4, .4 """ import seaborn as sns sns.set_theme(style="ticks") # Load the example dataset for Anscombe's quartet df = sns.load_dataset("anscombe") # Show the results of a linear regression within each dataset sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df, col_wrap=2, ci=None, palette="muted", height=4, scatter_kws={"s": 50, "alpha": 1}) seaborn-0.11.2/examples/different_scatter_variables.py000066400000000000000000000014011410631356500231370ustar00rootroot00000000000000""" Scatterplot with multiple semantics =================================== _thumb: .45, .5 """ import seaborn as sns import matplotlib.pyplot as plt sns.set_theme(style="whitegrid") # Load the example diamonds dataset diamonds = sns.load_dataset("diamonds") # Draw a scatter plot while assigning point colors and sizes to different # variables in the dataset f, ax = plt.subplots(figsize=(6.5, 6.5)) sns.despine(f, left=True, bottom=True) clarity_ranking = ["I1", "SI2", "SI1", "VS2", "VS1", "VVS2", "VVS1", "IF"] sns.scatterplot(x="carat", y="price", hue="clarity", size="depth", palette="ch:r=-.2,d=.3_r", hue_order=clarity_ranking, sizes=(1, 8), linewidth=0, data=diamonds, ax=ax) seaborn-0.11.2/examples/errorband_lineplots.py000066400000000000000000000006031410631356500214660ustar00rootroot00000000000000""" Timeseries plot with error bands ================================ _thumb: .48, .45 """ import seaborn as sns sns.set_theme(style="darkgrid") # Load an example dataset with long-form data fmri = sns.load_dataset("fmri") # Plot the responses for different events and regions 
sns.lineplot(x="timepoint", y="signal", hue="region", style="event", data=fmri) seaborn-0.11.2/examples/faceted_histogram.py000066400000000000000000000005111410631356500210650ustar00rootroot00000000000000""" Facetting histograms by subsets of data ======================================= _thumb: .33, .57 """ import seaborn as sns sns.set_theme(style="darkgrid") df = sns.load_dataset("penguins") sns.displot( df, x="flipper_length_mm", col="species", row="sex", binwidth=3, height=3, facet_kws=dict(margin_titles=True), ) seaborn-0.11.2/examples/faceted_lineplot.py000066400000000000000000000010141410631356500207150ustar00rootroot00000000000000""" Line plots on multiple facets ============================= _thumb: .48, .42 """ import seaborn as sns sns.set_theme(style="ticks") dots = sns.load_dataset("dots") # Define the palette as a list to specify exact values palette = sns.color_palette("rocket_r") # Plot the lines on two facets sns.relplot( data=dots, x="time", y="firing_rate", hue="coherence", size="choice", col="align", kind="line", size_order=["T1", "T2"], palette=palette, height=5, aspect=.75, facet_kws=dict(sharex=False), ) seaborn-0.11.2/examples/grouped_barplot.py000066400000000000000000000006511410631356500206120ustar00rootroot00000000000000""" Grouped barplots ================ _thumb: .36, .5 """ import seaborn as sns sns.set_theme(style="whitegrid") penguins = sns.load_dataset("penguins") # Draw a nested barplot by species and sex g = sns.catplot( data=penguins, kind="bar", x="species", y="body_mass_g", hue="sex", ci="sd", palette="dark", alpha=.6, height=6 ) g.despine(left=True) g.set_axis_labels("", "Body mass (g)") g.legend.set_title("") seaborn-0.11.2/examples/grouped_boxplot.py000066400000000000000000000006061410631356500206360ustar00rootroot00000000000000""" Grouped boxplots ================ _thumb: .66, .45 """ import seaborn as sns sns.set_theme(style="ticks", palette="pastel") # Load the example tips dataset tips = sns.load_dataset("tips") # Draw a 
nested boxplot to show bills by day and time sns.boxplot(x="day", y="total_bill", hue="smoker", palette=["m", "g"], data=tips) sns.despine(offset=10, trim=True) seaborn-0.11.2/examples/grouped_violinplots.py000066400000000000000000000007511410631356500215320ustar00rootroot00000000000000""" Grouped violinplots with split violins ====================================== _thumb: .44, .47 """ import seaborn as sns sns.set_theme(style="whitegrid") # Load the example tips dataset tips = sns.load_dataset("tips") # Draw a nested violinplot and split the violins for easier comparison sns.violinplot(data=tips, x="day", y="total_bill", hue="smoker", split=True, inner="quart", linewidth=1, palette={"Yes": "b", "No": ".85"}) sns.despine(left=True) seaborn-0.11.2/examples/heat_scatter.py000066400000000000000000000022431410631356500200670ustar00rootroot00000000000000""" Scatterplot heatmap ------------------- _thumb: .5, .5 """ import seaborn as sns sns.set_theme(style="whitegrid") # Load the brain networks dataset, select subset, and collapse the multi-index df = sns.load_dataset("brain_networks", header=[0, 1, 2], index_col=0) used_networks = [1, 5, 6, 7, 8, 12, 13, 17] used_columns = (df.columns .get_level_values("network") .astype(int) .isin(used_networks)) df = df.loc[:, used_columns] df.columns = df.columns.map("-".join) # Compute a correlation matrix and convert to long-form corr_mat = df.corr().stack().reset_index(name="correlation") # Draw each cell as a scatter point with varying size and color g = sns.relplot( data=corr_mat, x="level_0", y="level_1", hue="correlation", size="correlation", palette="vlag", hue_norm=(-1, 1), edgecolor=".7", height=10, sizes=(50, 250), size_norm=(-.2, .8), ) # Tweak the figure to finalize g.set(xlabel="", ylabel="", aspect="equal") g.despine(left=True, bottom=True) g.ax.margins(.02) for label in g.ax.get_xticklabels(): label.set_rotation(90) for artist in g.legend.legendHandles: artist.set_edgecolor(".7") 
seaborn-0.11.2/examples/hexbin_marginals.py000066400000000000000000000005031410631356500207300ustar00rootroot00000000000000""" Hexbin plot with marginal distributions ======================================= _thumb: .45, .4 """ import numpy as np import seaborn as sns sns.set_theme(style="ticks") rs = np.random.RandomState(11) x = rs.gamma(2, size=1000) y = -.5 * x + rs.normal(size=1000) sns.jointplot(x=x, y=y, kind="hex", color="#4CB391") seaborn-0.11.2/examples/histogram_stacked.py000066400000000000000000000010621410631356500211120ustar00rootroot00000000000000""" Stacked histogram on a log scale ================================ _thumb: .5, .45 """ import seaborn as sns import matplotlib as mpl import matplotlib.pyplot as plt sns.set_theme(style="ticks") diamonds = sns.load_dataset("diamonds") f, ax = plt.subplots(figsize=(7, 5)) sns.despine(f) sns.histplot( diamonds, x="price", hue="cut", multiple="stack", palette="light:m_r", edgecolor=".3", linewidth=.5, log_scale=True, ) ax.xaxis.set_major_formatter(mpl.ticker.ScalarFormatter()) ax.set_xticks([500, 1000, 2000, 5000, 10000]) seaborn-0.11.2/examples/horizontal_boxplot.py000066400000000000000000000014001410631356500213530ustar00rootroot00000000000000""" Horizontal boxplot with observations ==================================== _thumb: .7, .37 """ import seaborn as sns import matplotlib.pyplot as plt sns.set_theme(style="ticks") # Initialize the figure with a logarithmic x axis f, ax = plt.subplots(figsize=(7, 6)) ax.set_xscale("log") # Load the example planets dataset planets = sns.load_dataset("planets") # Plot the orbital period with horizontal boxes sns.boxplot(x="distance", y="method", data=planets, whis=[0, 100], width=.6, palette="vlag") # Add in points to show each observation sns.stripplot(x="distance", y="method", data=planets, size=4, color=".3", linewidth=0) # Tweak the visual presentation ax.xaxis.grid(True) ax.set(ylabel="") sns.despine(trim=True, left=True) 
seaborn-0.11.2/examples/jitter_stripplot.py

"""
Conditional means with observations
===================================

"""
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme(style="whitegrid")
iris = sns.load_dataset("iris")

# "Melt" the dataset to "long-form" or "tidy" representation
iris = pd.melt(iris, "species", var_name="measurement")

# Initialize the figure
f, ax = plt.subplots()
sns.despine(bottom=True, left=True)

# Show each observation with a scatterplot
sns.stripplot(x="value", y="measurement", hue="species",
              data=iris, dodge=True, alpha=.25, zorder=1)

# Show the conditional means, aligning each pointplot in the
# center of the strips by adjusting the width allotted to each
# category (.8 by default) by the number of hue levels
sns.pointplot(x="value", y="measurement", hue="species",
              data=iris, dodge=.8 - .8 / 3,
              join=False, palette="dark",
              markers="d", scale=.75, ci=None)

# Improve the legend
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles[3:], labels[3:], title="species",
          handletextpad=0, columnspacing=1,
          loc="lower right", ncol=3, frameon=True)

seaborn-0.11.2/examples/joint_histogram.py

"""
Joint and marginal histograms
=============================

_thumb: .52, .505

"""
import seaborn as sns
sns.set_theme(style="ticks")

# Load the planets dataset and initialize the figure
planets = sns.load_dataset("planets")
g = sns.JointGrid(data=planets, x="year", y="distance", marginal_ticks=True)

# Set a log scaling on the y axis
g.ax_joint.set(yscale="log")

# Create an inset legend for the histogram colorbar
cax = g.figure.add_axes([.15, .55, .02, .2])

# Add the joint and marginal histogram plots
g.plot_joint(
    sns.histplot, discrete=(True, False),
    cmap="light:#03012d", pmax=.8, cbar=True, cbar_ax=cax
)
g.plot_marginals(sns.histplot, element="step", color="#03012d")

seaborn-0.11.2/examples/joint_kde.py

"""
Joint kernel density estimate
=============================

_thumb: .6, .4
"""
import seaborn as sns
sns.set_theme(style="ticks")

# Load the penguins dataset
penguins = sns.load_dataset("penguins")

# Show the joint distribution using kernel density estimation
g = sns.jointplot(
    data=penguins,
    x="bill_length_mm", y="bill_depth_mm", hue="species",
    kind="kde",
)

seaborn-0.11.2/examples/kde_ridgeplot.py

"""
Overlapping densities ('ridge plot')
====================================
"""
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style="white", rc={"axes.facecolor": (0, 0, 0, 0)})

# Create the data
rs = np.random.RandomState(1979)
x = rs.randn(500)
g = np.tile(list("ABCDEFGHIJ"), 50)
df = pd.DataFrame(dict(x=x, g=g))
m = df.g.map(ord)
df["x"] += m

# Initialize the FacetGrid object
pal = sns.cubehelix_palette(10, rot=-.25, light=.7)
g = sns.FacetGrid(df, row="g", hue="g", aspect=15, height=.5, palette=pal)

# Draw the densities in a few steps
g.map(sns.kdeplot, "x",
      bw_adjust=.5, clip_on=False,
      fill=True, alpha=1, linewidth=1.5)
g.map(sns.kdeplot, "x", clip_on=False, color="w", lw=2, bw_adjust=.5)

# passing color=None to refline() uses the hue mapping
g.refline(y=0, linewidth=2, linestyle="-", color=None, clip_on=False)


# Define and use a simple function to label the plot in axes coordinates
def label(x, color, label):
    ax = plt.gca()
    ax.text(0, .2, label, fontweight="bold", color=color,
            ha="left", va="center", transform=ax.transAxes)


g.map(label, "x")

# Set the subplots to overlap
g.figure.subplots_adjust(hspace=-.25)

# Remove axes details that don't play well with overlap
g.set_titles("")
g.set(yticks=[], ylabel="")
g.despine(bottom=True, left=True)
seaborn-0.11.2/examples/large_distributions.py

"""
Plotting large distributions
============================

"""
import seaborn as sns
sns.set_theme(style="whitegrid")

diamonds = sns.load_dataset("diamonds")
clarity_ranking = ["I1", "SI2", "SI1", "VS2", "VS1", "VVS2", "VVS1", "IF"]

sns.boxenplot(x="clarity", y="carat",
              color="b", order=clarity_ranking,
              scale="linear", data=diamonds)

seaborn-0.11.2/examples/layered_bivariate_plot.py

"""
Bivariate plot with multiple elements
=====================================
"""
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style="dark")

# Simulate data from a bivariate Gaussian
n = 10000
mean = [0, 0]
cov = [(2, .4), (.4, .2)]
rng = np.random.RandomState(0)
x, y = rng.multivariate_normal(mean, cov, n).T

# Draw a combo histogram and scatterplot with density contours
f, ax = plt.subplots(figsize=(6, 6))
sns.scatterplot(x=x, y=y, s=5, color=".15")
sns.histplot(x=x, y=y, bins=50, pthresh=.1, cmap="mako")
sns.kdeplot(x=x, y=y, levels=5, color="w", linewidths=1)

seaborn-0.11.2/examples/logistic_regression.py

"""
Faceted logistic regression
===========================

_thumb: .58, .5
"""
import seaborn as sns
sns.set_theme(style="darkgrid")

# Load the example Titanic dataset
df = sns.load_dataset("titanic")

# Make a custom palette with gendered colors
pal = dict(male="#6495ED", female="#F08080")

# Show the survival probability as a function of age and sex
g = sns.lmplot(x="age", y="survived", col="sex", hue="sex", data=df,
               palette=pal, y_jitter=.02, logistic=True, truncate=False)
g.set(xlim=(0, 80), ylim=(-.05, 1.05))

seaborn-0.11.2/examples/many_facets.py

"""
Plotting on a large number of facets
====================================

_thumb: .4, .3

"""
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme(style="ticks")

# Create a dataset with many short random walks
rs = np.random.RandomState(4)
pos = rs.randint(-1, 2, (20, 5)).cumsum(axis=1)
pos -= pos[:, 0, np.newaxis]
step = np.tile(range(5), 20)
walk = np.repeat(range(20), 5)
df = pd.DataFrame(np.c_[pos.flat, step, walk],
                  columns=["position", "step", "walk"])

# Initialize a grid of plots with an Axes for each walk
grid = sns.FacetGrid(df, col="walk", hue="walk", palette="tab20c",
                     col_wrap=4, height=1.5)

# Draw a horizontal line to show the starting point
grid.refline(y=0, linestyle=":")

# Draw a line plot to show the trajectory of each random walk
grid.map(plt.plot, "step", "position", marker="o")

# Adjust the tick positions and labels
grid.set(xticks=np.arange(5), yticks=[-3, 3],
         xlim=(-.5, 4.5), ylim=(-3.5, 3.5))

# Adjust the arrangement of the plots
grid.fig.tight_layout(w_pad=1)

seaborn-0.11.2/examples/many_pairwise_correlations.py

"""
Plotting a diagonal correlation matrix
======================================

_thumb: .3, .6
"""
from string import ascii_letters
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme(style="white")

# Generate a large random dataset
rs = np.random.RandomState(33)
d = pd.DataFrame(data=rs.normal(size=(100, 26)),
                 columns=list(ascii_letters[26:]))

# Compute the correlation matrix
corr = d.corr()

# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr, dtype=bool))

# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))

# Generate a custom diverging colormap
cmap = sns.diverging_palette(230, 20, as_cmap=True)

# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
            square=True, linewidths=.5, cbar_kws={"shrink": .5})

seaborn-0.11.2/examples/marginal_ticks.py

"""
Scatterplot with marginal ticks
===============================

_thumb: .66, .34
"""
import seaborn as sns
sns.set_theme(style="white", color_codes=True)
mpg = sns.load_dataset("mpg")

# Use JointGrid directly to draw a custom plot
g = sns.JointGrid(data=mpg, x="mpg", y="acceleration", space=0, ratio=17)
g.plot_joint(sns.scatterplot, size=mpg["horsepower"], sizes=(30, 120),
             color="g", alpha=.6, legend=False)
g.plot_marginals(sns.rugplot, height=1, color="g", alpha=.6)

seaborn-0.11.2/examples/multiple_bivariate_kde.py

"""
Multiple bivariate KDE plots
============================

_thumb: .6, .45
"""
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme(style="darkgrid")
iris = sns.load_dataset("iris")

# Set up the figure
f, ax = plt.subplots(figsize=(8, 8))
ax.set_aspect("equal")

# Draw a contour plot to represent each bivariate density
sns.kdeplot(
    data=iris.query("species != 'versicolor'"),
    x="sepal_width",
    y="sepal_length",
    hue="species",
    thresh=.1,
)

seaborn-0.11.2/examples/multiple_conditional_kde.py

"""
Conditional kernel density estimate
===================================

_thumb: .4, .5
"""
import seaborn as sns
sns.set_theme(style="whitegrid")

# Load the diamonds dataset
diamonds = sns.load_dataset("diamonds")

# Plot the distribution of clarity ratings, conditional on carat
sns.displot(
    data=diamonds,
    x="carat", hue="cut",
    kind="kde", height=6,
    multiple="fill", clip=(0, None),
    palette="ch:rot=-.25,hue=1,light=.75",
)

seaborn-0.11.2/examples/multiple_ecdf.py

"""
Facetted ECDF plots
===================

_thumb: .30, .49
"""
import seaborn as sns
sns.set_theme(style="ticks")
mpg = sns.load_dataset("mpg")

colors = (250, 70, 50), (350, 70, 50)
cmap = sns.blend_palette(colors, input="husl", as_cmap=True)
sns.displot(
    mpg,
    x="displacement", col="origin", hue="model_year",
    kind="ecdf", aspect=.75, linewidth=2, palette=cmap,
)

seaborn-0.11.2/examples/multiple_regression.py

"""
Multiple linear regression
==========================

_thumb: .45, .45
"""
import seaborn as sns
sns.set_theme()

# Load the penguins dataset
penguins = sns.load_dataset("penguins")

# Plot bill depth as a function of bill length across species
g = sns.lmplot(
    data=penguins,
    x="bill_length_mm", y="bill_depth_mm", hue="species",
    height=5
)

# Use more informative axis labels than are provided by default
g.set_axis_labels("Snoot length (mm)", "Snoot depth (mm)")

seaborn-0.11.2/examples/pair_grid_with_kde.py

"""
Paired density and scatterplot matrix
=====================================

_thumb: .5, .5
"""
import seaborn as sns
sns.set_theme(style="white")

df = sns.load_dataset("penguins")

g = sns.PairGrid(df, diag_sharey=False)
g.map_upper(sns.scatterplot, s=15)
g.map_lower(sns.kdeplot)
g.map_diag(sns.kdeplot, lw=2)

seaborn-0.11.2/examples/paired_pointplots.py

"""
Paired categorical plots
========================

"""
import seaborn as sns
sns.set_theme(style="whitegrid")

# Load the example Titanic dataset
titanic = sns.load_dataset("titanic")

# Set up a grid to plot survival probability against several variables
g = sns.PairGrid(titanic, y_vars="survived",
                 x_vars=["class", "sex", "who", "alone"],
                 height=5, aspect=.5)

# Draw a seaborn pointplot onto each Axes
g.map(sns.pointplot, scale=1.3, errwidth=4, color="xkcd:plum")
g.set(ylim=(0, 1))
sns.despine(fig=g.fig, left=True)
seaborn-0.11.2/examples/pairgrid_dotplot.py

"""
Dot plot with several variables
===============================

_thumb: .3, .3
"""
import seaborn as sns
sns.set_theme(style="whitegrid")

# Load the dataset
crashes = sns.load_dataset("car_crashes")

# Make the PairGrid
g = sns.PairGrid(crashes.sort_values("total", ascending=False),
                 x_vars=crashes.columns[:-3], y_vars=["abbrev"],
                 height=10, aspect=.25)

# Draw a dot plot using the stripplot function
g.map(sns.stripplot, size=10, orient="h", jitter=False,
      palette="flare_r", linewidth=1, edgecolor="w")

# Use the same x axis limits on all columns and add better labels
g.set(xlim=(0, 25), xlabel="Crashes", ylabel="")

# Use semantically meaningful titles for the columns
titles = ["Total crashes", "Speeding crashes", "Alcohol crashes",
          "Not distracted crashes", "No previous crashes"]

for ax, title in zip(g.axes.flat, titles):

    # Set a different title for each axes
    ax.set(title=title)

    # Make the grid horizontal instead of vertical
    ax.xaxis.grid(False)
    ax.yaxis.grid(True)

sns.despine(left=True, bottom=True)

seaborn-0.11.2/examples/palette_choices.py

"""
Color palette choices
=====================

"""
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style="white", context="talk")
rs = np.random.RandomState(8)

# Set up the matplotlib figure
f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(7, 5), sharex=True)

# Generate some sequential data
x = np.array(list("ABCDEFGHIJ"))
y1 = np.arange(1, 11)
sns.barplot(x=x, y=y1, palette="rocket", ax=ax1)
ax1.axhline(0, color="k", clip_on=False)
ax1.set_ylabel("Sequential")

# Center the data to make it diverging
y2 = y1 - 5.5
sns.barplot(x=x, y=y2, palette="vlag", ax=ax2)
ax2.axhline(0, color="k", clip_on=False)
ax2.set_ylabel("Diverging")

# Randomly reorder the data to make it qualitative
y3 = rs.choice(y1, len(y1), replace=False)
sns.barplot(x=x, y=y3, palette="deep", ax=ax3)
ax3.axhline(0, color="k", clip_on=False)
ax3.set_ylabel("Qualitative")

# Finalize the plot
sns.despine(bottom=True)
plt.setp(f.axes, yticks=[])
plt.tight_layout(h_pad=2)

seaborn-0.11.2/examples/palette_generation.py

"""
Different cubehelix palettes
============================

_thumb: .4, .65
"""
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme(style="white")
rs = np.random.RandomState(50)

# Set up the matplotlib figure
f, axes = plt.subplots(3, 3, figsize=(9, 9), sharex=True, sharey=True)

# Rotate the starting point around the cubehelix hue circle
for ax, s in zip(axes.flat, np.linspace(0, 3, 10)):

    # Create a cubehelix colormap to use with kdeplot
    cmap = sns.cubehelix_palette(start=s, light=1, as_cmap=True)

    # Generate and plot a random bivariate dataset
    x, y = rs.normal(size=(2, 50))
    sns.kdeplot(
        x=x, y=y,
        cmap=cmap, fill=True,
        clip=(-5, 5), cut=10,
        thresh=0, levels=15,
        ax=ax,
    )
    ax.set_axis_off()

ax.set(xlim=(-3.5, 3.5), ylim=(-3.5, 3.5))
f.subplots_adjust(0, 0, 1, 1, .08, .08)

seaborn-0.11.2/examples/part_whole_bars.py

"""
Horizontal bar plots
====================
"""
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style="whitegrid")

# Initialize the matplotlib figure
f, ax = plt.subplots(figsize=(6, 15))

# Load the example car crash dataset
crashes = sns.load_dataset("car_crashes").sort_values("total", ascending=False)

# Plot the total crashes
sns.set_color_codes("pastel")
sns.barplot(x="total", y="abbrev", data=crashes,
            label="Total", color="b")

# Plot the crashes where alcohol was involved
sns.set_color_codes("muted")
sns.barplot(x="alcohol", y="abbrev", data=crashes,
            label="Alcohol-involved", color="b")

# Add a legend and informative axis label
ax.legend(ncol=2, loc="lower right", frameon=True)
ax.set(xlim=(0, 24), ylabel="",
       xlabel="Automobile collisions per billion miles")
sns.despine(left=True, bottom=True)

seaborn-0.11.2/examples/pointplot_anova.py

"""
Plotting a three-way ANOVA
==========================

_thumb: .42, .5
"""
import seaborn as sns
sns.set_theme(style="whitegrid")

# Load the example exercise dataset
df = sns.load_dataset("exercise")

# Draw a pointplot to show pulse as a function of three categorical factors
g = sns.catplot(x="time", y="pulse", hue="kind", col="diet",
                capsize=.2, palette="YlGnBu_d", height=6, aspect=.75,
                kind="point", data=df)
g.despine(left=True)

seaborn-0.11.2/examples/radial_facets.py

"""
FacetGrid with custom projection
================================

_thumb: .33, .5
"""
import numpy as np
import pandas as pd
import seaborn as sns

sns.set_theme()

# Generate an example radial dataset
r = np.linspace(0, 10, num=100)
df = pd.DataFrame({'r': r, 'slow': r, 'medium': 2 * r, 'fast': 4 * r})

# Convert the dataframe to long-form or "tidy" format
df = pd.melt(df, id_vars=['r'], var_name='speed', value_name='theta')

# Set up a grid of axes with a polar projection
g = sns.FacetGrid(df, col="speed", hue="speed",
                  subplot_kws=dict(projection='polar'), height=4.5,
                  sharex=False, sharey=False, despine=False)

# Draw a scatterplot onto each axes in the grid
g.map(sns.scatterplot, "theta", "r")

seaborn-0.11.2/examples/regression_marginals.py

"""
Linear regression with marginal distributions
=============================================

_thumb: .65, .65
"""
import seaborn as sns
sns.set_theme(style="darkgrid")

tips = sns.load_dataset("tips")
g = sns.jointplot(x="total_bill", y="tip", data=tips,
                  kind="reg", truncate=False,
                  xlim=(0, 60), ylim=(0, 12),
                  color="m", height=7)
seaborn-0.11.2/examples/residplot.py

"""
Plotting model residuals
========================

"""
import numpy as np
import seaborn as sns
sns.set_theme(style="whitegrid")

# Make an example dataset with y ~ x
rs = np.random.RandomState(7)
x = rs.normal(2, 1, 75)
y = 2 + 1.5 * x + rs.normal(0, 2, 75)

# Plot the residuals after fitting a linear model
sns.residplot(x=x, y=y, lowess=True, color="g")

seaborn-0.11.2/examples/scatter_bubbles.py

"""
Scatterplot with varying point sizes and hues
=============================================

_thumb: .45, .5
"""
import seaborn as sns
sns.set_theme(style="white")

# Load the example mpg dataset
mpg = sns.load_dataset("mpg")

# Plot miles per gallon against horsepower with other semantics
sns.relplot(x="horsepower", y="mpg", hue="origin", size="weight",
            sizes=(40, 400), alpha=.5, palette="muted",
            height=6, data=mpg)

seaborn-0.11.2/examples/scatterplot_categorical.py

"""
Scatterplot with categorical variables
======================================

_thumb: .45, .45
"""
import seaborn as sns
sns.set_theme(style="whitegrid", palette="muted")

# Load the penguins dataset
df = sns.load_dataset("penguins")

# Draw a categorical scatterplot to show each observation
ax = sns.swarmplot(data=df, x="body_mass_g", y="sex", hue="species")
ax.set(ylabel="")

seaborn-0.11.2/examples/scatterplot_matrix.py

"""
Scatterplot Matrix
==================

_thumb: .3, .2
"""
import seaborn as sns
sns.set_theme(style="ticks")

df = sns.load_dataset("penguins")
sns.pairplot(df, hue="species")

seaborn-0.11.2/examples/scatterplot_sizes.py

"""
Scatterplot with continuous hues and sizes
==========================================

_thumb: .51, .44
"""
import seaborn as sns
sns.set_theme(style="whitegrid")

# Load the example planets dataset
planets = sns.load_dataset("planets")

cmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)
g = sns.relplot(
    data=planets,
    x="distance", y="orbital_period",
    hue="year", size="mass",
    palette=cmap, sizes=(10, 200),
)
g.set(xscale="log", yscale="log")
g.ax.xaxis.grid(True, "minor", linewidth=.25)
g.ax.yaxis.grid(True, "minor", linewidth=.25)
g.despine(left=True, bottom=True)

seaborn-0.11.2/examples/simple_violinplots.py

"""
Violinplots with observations
=============================

"""
import numpy as np
import seaborn as sns

sns.set_theme()

# Create a random dataset across several variables
rs = np.random.default_rng(0)
n, p = 40, 8
d = rs.normal(0, 2, (n, p))
d += np.log(np.arange(1, p + 1)) * -5 + 10

# Show each distribution with both violins and points
sns.violinplot(data=d, palette="light:g", inner="points", orient="h")

seaborn-0.11.2/examples/smooth_bivariate_kde.py

"""
Smooth kernel density with marginal histograms
==============================================

_thumb: .48, .41
"""
import seaborn as sns

sns.set_theme(style="white")

df = sns.load_dataset("penguins")

g = sns.JointGrid(data=df, x="body_mass_g", y="bill_depth_mm", space=0)
g.plot_joint(sns.kdeplot,
             fill=True, clip=((2200, 6800), (10, 25)),
             thresh=0, levels=100, cmap="rocket")
g.plot_marginals(sns.histplot, color="#03051A", alpha=1, bins=25)

seaborn-0.11.2/examples/spreadsheet_heatmap.py

"""
Annotated heatmaps
==================
"""
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme()

# Load the example flights dataset and convert to long-form
flights_long = sns.load_dataset("flights")
flights = flights_long.pivot("month", "year", "passengers")

# Draw a heatmap with the numeric values in each cell
f, ax = plt.subplots(figsize=(9, 6))
sns.heatmap(flights, annot=True, fmt="d", linewidths=.5, ax=ax)

seaborn-0.11.2/examples/structured_heatmap.py

"""
Discovering structure in heatmap data
=====================================

_thumb: .3, .25
"""
import pandas as pd
import seaborn as sns

sns.set_theme()

# Load the brain networks example dataset
df = sns.load_dataset("brain_networks", header=[0, 1, 2], index_col=0)

# Select a subset of the networks
used_networks = [1, 5, 6, 7, 8, 12, 13, 17]
used_columns = (df.columns.get_level_values("network")
                  .astype(int)
                  .isin(used_networks))
df = df.loc[:, used_columns]

# Create a categorical palette to identify the networks
network_pal = sns.husl_palette(8, s=.45)
network_lut = dict(zip(map(str, used_networks), network_pal))

# Convert the palette to vectors that will be drawn on the side of the matrix
networks = df.columns.get_level_values("network")
network_colors = pd.Series(networks, index=df.columns).map(network_lut)

# Draw the full plot
g = sns.clustermap(df.corr(), center=0, cmap="vlag",
                   row_colors=network_colors, col_colors=network_colors,
                   dendrogram_ratio=(.1, .2),
                   cbar_pos=(.02, .32, .03, .2),
                   linewidths=.75, figsize=(12, 13))

g.ax_row_dendrogram.remove()

seaborn-0.11.2/examples/three_variable_histogram.py

"""
Trivariate histogram with two categorical variables
===================================================

_thumb: .32, .55

"""
import seaborn as sns
sns.set_theme(style="dark")

diamonds = sns.load_dataset("diamonds")
sns.displot(
    data=diamonds, x="price", y="color", col="clarity",
    log_scale=(True, False), col_wrap=4, height=4, aspect=.7,
)
seaborn-0.11.2/examples/timeseries_facets.py

"""
Small multiple time series
--------------------------

_thumb: .42, .58

"""
import seaborn as sns

sns.set_theme(style="dark")
flights = sns.load_dataset("flights")

# Plot each year's time series in its own facet
g = sns.relplot(
    data=flights,
    x="month", y="passengers", col="year", hue="year",
    kind="line", palette="crest", linewidth=4, zorder=5,
    col_wrap=3, height=2, aspect=1.5, legend=False,
)

# Iterate over each subplot to customize further
for year, ax in g.axes_dict.items():

    # Add the title as an annotation within the plot
    ax.text(.8, .85, year, transform=ax.transAxes, fontweight="bold")

    # Plot every year's time series in the background
    sns.lineplot(
        data=flights, x="month", y="passengers", units="year",
        estimator=None, color=".7", linewidth=1, ax=ax,
    )

# Reduce the frequency of the x axis ticks
ax.set_xticks(ax.get_xticks()[::2])

# Tweak the supporting aspects of the plot
g.set_titles("")
g.set_axis_labels("", "Passengers")
g.tight_layout()

seaborn-0.11.2/examples/wide_data_lineplot.py

"""
Lineplot from a wide-form dataset
=================================

_thumb: .52, .5

"""
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_theme(style="whitegrid")

rs = np.random.RandomState(365)
values = rs.randn(365, 4).cumsum(axis=0)
dates = pd.date_range("1 1 2016", periods=365, freq="D")
data = pd.DataFrame(values, dates, columns=["A", "B", "C", "D"])
data = data.rolling(7).mean()

sns.lineplot(data=data, palette="tab10", linewidth=2.5)

seaborn-0.11.2/examples/wide_form_violinplot.py

"""
Violinplot from a wide-form dataset
===================================

_thumb: .6, .45
"""
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style="whitegrid")

# Load the example dataset of brain network correlations
df = sns.load_dataset("brain_networks", header=[0, 1, 2], index_col=0)

# Pull out a specific subset of networks
used_networks = [1, 3, 4, 5, 6, 7, 8, 11, 12, 13, 16, 17]
used_columns = (df.columns.get_level_values("network")
                  .astype(int)
                  .isin(used_networks))
df = df.loc[:, used_columns]

# Compute the correlation matrix and average over networks
corr_df = df.corr().groupby(level="network").mean()
corr_df.index = corr_df.index.astype(int)
corr_df = corr_df.sort_index().T

# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 6))

# Draw a violinplot with a narrower bandwidth than the default
sns.violinplot(data=corr_df, palette="Set3", bw=.2, cut=1, linewidth=1)

# Finalize the figure
ax.set(ylim=(-.7, 1.05))
sns.despine(left=True, bottom=True)

seaborn-0.11.2/licences/

seaborn-0.11.2/licences/HUSL_LICENSE

Copyright (C) 2012 Alexei Boronine

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

seaborn-0.11.2/pytest.ini

[pytest]
filterwarnings =
    ; Warnings raised from within patsy imports
    ignore:Using or importing the ABCs:DeprecationWarning
junit_family=xunit1

seaborn-0.11.2/requirements.txt

numpy
scipy
matplotlib
pandas

seaborn-0.11.2/seaborn/

seaborn-0.11.2/seaborn/__init__.py

# Import seaborn objects
from .rcmod import *  # noqa: F401,F403
from .utils import *  # noqa: F401,F403
from .palettes import *  # noqa: F401,F403
from .relational import *  # noqa: F401,F403
from .regression import *  # noqa: F401,F403
from .categorical import *  # noqa: F401,F403
from .distributions import *  # noqa: F401,F403
from .matrix import *  # noqa: F401,F403
from .miscplot import *  # noqa: F401,F403
from .axisgrid import *  # noqa: F401,F403
from .widgets import *  # noqa: F401,F403
from .colors import xkcd_rgb, crayons  # noqa: F401
from . import cm  # noqa: F401

# Capture the original matplotlib rcParams
import matplotlib as mpl
_orig_rc_params = mpl.rcParams.copy()

# Define the seaborn version
__version__ = "0.11.2"

seaborn-0.11.2/seaborn/_core.py

import warnings
import itertools
from copy import copy
from functools import partial
from collections.abc import Iterable, Sequence, Mapping
from numbers import Number
from datetime import datetime
from distutils.version import LooseVersion

import numpy as np
import pandas as pd
import matplotlib as mpl

from ._decorators import (
    share_init_params_with_map,
)
from .palettes import (
    QUAL_PALETTES,
    color_palette,
)
from .utils import (
    get_color_cycle,
    remove_na,
)


class SemanticMapping:
    """Base class for mapping data values to plot attributes."""

    # -- Default attributes that all SemanticMapping subclasses must set

    # Whether the mapping is numeric, categorical, or datetime
    map_type = None

    # Ordered list of unique values in the input data
    levels = None

    # A mapping from the data values to corresponding plot attributes
    lookup_table = None

    def __init__(self, plotter):

        # TODO Putting this here so we can continue to use a lot of the
        # logic that's built into the library, but the idea of this class
        # is to move towards semantic mappings that are agnostic about the
        # kind of plot they're going to be used to draw.
        # Fully achieving that is going to take some thinking.
        self.plotter = plotter

    def map(cls, plotter, *args, **kwargs):
        # This method is assigned the __init__ docstring
        method_name = "_{}_map".format(cls.__name__[:-7].lower())
        setattr(plotter, method_name, cls(plotter, *args, **kwargs))
        return plotter

    def _lookup_single(self, key):
        """Apply the mapping to a single data value."""
        return self.lookup_table[key]

    def __call__(self, key, *args, **kwargs):
        """Get the attribute(s) values for the data key."""
        if isinstance(key, (list, np.ndarray, pd.Series)):
            return [self._lookup_single(k, *args, **kwargs) for k in key]
        else:
            return self._lookup_single(key, *args, **kwargs)


@share_init_params_with_map
class HueMapping(SemanticMapping):
    """Mapping that sets artist colors according to data values."""

    # A specification of the colors that should appear in the plot
    palette = None

    # An object that normalizes data values to [0, 1] range for color mapping
    norm = None

    # A continuous colormap object for interpolating in a numeric context
    cmap = None

    def __init__(
        self, plotter, palette=None, order=None, norm=None,
    ):
        """Map the levels of the `hue` variable to distinct colors.

        Parameters
        ----------
        # TODO add generic parameters

        """
        super().__init__(plotter)

        data = plotter.plot_data.get("hue", pd.Series(dtype=float))

        if data.notna().any():

            map_type = self.infer_map_type(
                palette, norm, plotter.input_format, plotter.var_types["hue"]
            )

            # Our goal is to end up with a dictionary mapping every unique
            # value in `data` to a color. We will also keep track of the
            # metadata about this mapping we will need for, e.g., a legend

            # --- Option 1: numeric mapping with a matplotlib colormap

            if map_type == "numeric":

                data = pd.to_numeric(data)
                levels, lookup_table, norm, cmap = self.numeric_mapping(
                    data, palette, norm,
                )

            # --- Option 2: categorical mapping using seaborn palette

            elif map_type == "categorical":

                cmap = norm = None
                levels, lookup_table = self.categorical_mapping(
                    data, palette, order,
                )

            # --- Option 3: datetime mapping

            else:
                # TODO this needs actual implementation
                cmap = norm = None
                levels, lookup_table = self.categorical_mapping(
                    # Casting data to list to handle differences in the way
                    # pandas and numpy represent datetime64 data
                    list(data), palette, order,
                )

            self.map_type = map_type
            self.lookup_table = lookup_table
            self.palette = palette
            self.levels = levels
            self.norm = norm
            self.cmap = cmap

    def _lookup_single(self, key):
        """Get the color for a single value, using colormap to interpolate."""
        try:
            # Use a value that's in the original data vector
            value = self.lookup_table[key]
        except KeyError:
            # Use the colormap to interpolate between existing datapoints
            # (e.g. in the context of making a continuous legend)
            try:
                normed = self.norm(key)
            except TypeError as err:
                if np.isnan(key):
                    value = (0, 0, 0, 0)
                else:
                    raise err
            else:
                if np.ma.is_masked(normed):
                    normed = np.nan
                value = self.cmap(normed)
        return value

    def infer_map_type(self, palette, norm, input_format, var_type):
        """Determine how to implement the mapping."""
        if palette in QUAL_PALETTES:
            map_type = "categorical"
        elif norm is not None:
            map_type = "numeric"
        elif isinstance(palette, (dict, list)):
            map_type = "categorical"
        elif input_format == "wide":
            map_type = "categorical"
        else:
            map_type = var_type

        return map_type

    def categorical_mapping(self, data, palette, order):
        """Determine colors when the hue mapping is categorical."""
        # -- Identify the order and name of the levels

        levels = categorical_order(data, order)
        n_colors = len(levels)

        # -- Identify the set of colors to use

        if isinstance(palette, dict):

            missing = set(levels) - set(palette)
            if any(missing):
                err = "The palette dictionary is missing keys: {}"
                raise ValueError(err.format(missing))

            lookup_table = palette

        else:

            if palette is None:
                if n_colors <= len(get_color_cycle()):
                    colors = color_palette(None, n_colors)
                else:
                    colors = color_palette("husl", n_colors)
            elif isinstance(palette, list):
                if len(palette) != n_colors:
                    err = "The palette list has the wrong number of colors."
                    raise ValueError(err)
                colors = palette
            else:
                colors = color_palette(palette, n_colors)

            lookup_table = dict(zip(levels, colors))

        return levels, lookup_table

    def numeric_mapping(self, data, palette, norm):
        """Determine colors when the hue variable is quantitative."""
        if isinstance(palette, dict):

            # The presence of a norm object overrides a dictionary of hues
            # in specifying a numeric mapping, so we need to process it here.
levels = list(sorted(palette)) colors = [palette[k] for k in sorted(palette)] cmap = mpl.colors.ListedColormap(colors) lookup_table = palette.copy() else: # The levels are the sorted unique values in the data levels = list(np.sort(remove_na(data.unique()))) # --- Sort out the colormap to use from the palette argument # Default numeric palette is our default cubehelix palette # TODO do we want to do something complicated to ensure contrast? palette = "ch:" if palette is None else palette if isinstance(palette, mpl.colors.Colormap): cmap = palette else: cmap = color_palette(palette, as_cmap=True) # Now sort out the data normalization if norm is None: norm = mpl.colors.Normalize() elif isinstance(norm, tuple): norm = mpl.colors.Normalize(*norm) elif not isinstance(norm, mpl.colors.Normalize): err = "``hue_norm`` must be None, tuple, or Normalize object." raise ValueError(err) if not norm.scaled(): norm(np.asarray(data.dropna())) lookup_table = dict(zip(levels, cmap(norm(levels)))) return levels, lookup_table, norm, cmap @share_init_params_with_map class SizeMapping(SemanticMapping): """Mapping that sets artist sizes according to data values.""" # An object that normalizes data values to [0, 1] range norm = None def __init__( self, plotter, sizes=None, order=None, norm=None, ): """Map the levels of the `size` variable to distinct values. 
Parameters ---------- # TODO add generic parameters """ super().__init__(plotter) data = plotter.plot_data.get("size", pd.Series(dtype=float)) if data.notna().any(): map_type = self.infer_map_type( norm, sizes, plotter.var_types["size"] ) # --- Option 1: numeric mapping if map_type == "numeric": levels, lookup_table, norm, size_range = self.numeric_mapping( data, sizes, norm, ) # --- Option 2: categorical mapping elif map_type == "categorical": levels, lookup_table = self.categorical_mapping( data, sizes, order, ) size_range = None # --- Option 3: datetime mapping # TODO this needs an actual implementation else: levels, lookup_table = self.categorical_mapping( # Casting data to list to handle differences in the way # pandas and numpy represent datetime64 data list(data), sizes, order, ) size_range = None self.map_type = map_type self.levels = levels self.norm = norm self.sizes = sizes self.size_range = size_range self.lookup_table = lookup_table def infer_map_type(self, norm, sizes, var_type): if norm is not None: map_type = "numeric" elif isinstance(sizes, (dict, list)): map_type = "categorical" else: map_type = var_type return map_type def _lookup_single(self, key): try: value = self.lookup_table[key] except KeyError: normed = self.norm(key) if np.ma.is_masked(normed): normed = np.nan value = self.size_range[0] + normed * np.ptp(self.size_range) return value def categorical_mapping(self, data, sizes, order): levels = categorical_order(data, order) if isinstance(sizes, dict): # Dict inputs map existing data values to the size attribute missing = set(levels) - set(sizes) if any(missing): err = f"Missing sizes for the following levels: {missing}" raise ValueError(err) lookup_table = sizes.copy() elif isinstance(sizes, list): # List inputs give size values in the same order as the levels if len(sizes) != len(levels): err = "The `sizes` list has the wrong number of values." 
raise ValueError(err) lookup_table = dict(zip(levels, sizes)) else: if isinstance(sizes, tuple): # Tuple input sets the min, max size values if len(sizes) != 2: err = "A `sizes` tuple must have only 2 values" raise ValueError(err) elif sizes is not None: err = f"Value for `sizes` not understood: {sizes}" raise ValueError(err) else: # Otherwise, we need to get the min, max size values from # the plotter object we are attached to. # TODO this is going to cause us trouble later, because we # want to restructure things so that the plotter is generic # across the visual representation of the data. But at this # point, we don't know the visual representation. Likely we # want to change the logic of this Mapping so that it gives # points on a normalized range that then gets un-normalized # when we know what we're drawing. But given the way the # package works now, this way is cleanest. sizes = self.plotter._default_size_range # For categorical sizes, use regularly-spaced linear steps # between the minimum and maximum sizes. Then reverse the # ramp so that the largest value is used for the first entry # in size_order, etc. This is because "ordered" categories # are often thought to go in decreasing priority. sizes = np.linspace(*sizes, len(levels))[::-1] lookup_table = dict(zip(levels, sizes)) return levels, lookup_table def numeric_mapping(self, data, sizes, norm): if isinstance(sizes, dict): # The presence of a norm object overrides a dictionary of sizes # in specifying a numeric mapping, so we need to process the # dictionary here levels = list(np.sort(list(sizes))) size_values = sizes.values() size_range = min(size_values), max(size_values) else: # The levels here will be the unique values in the data levels = list(np.sort(remove_na(data.unique()))) if isinstance(sizes, tuple): # For numeric inputs, the size can be parametrized by # the minimum and maximum artist values to map to. The # norm object that gets set up next specifies how to # do the mapping.
if len(sizes) != 2: err = "A `sizes` tuple must have only 2 values" raise ValueError(err) size_range = sizes elif sizes is not None: err = f"Value for `sizes` not understood: {sizes}" raise ValueError(err) else: # When not provided, we get the size range from the plotter # object we are attached to. See the note in the categorical # method about how this is suboptimal for future development. size_range = self.plotter._default_size_range # Now that we know the minimum and maximum sizes that will get drawn, # we need to map the data values that we have into that range. We will # use a matplotlib Normalize class, which is typically used for numeric # color mapping but works fine here too. It takes data values and maps # them into a [0, 1] interval, potentially nonlinear-ly. if norm is None: # Default is a linear function between the min and max data values norm = mpl.colors.Normalize() elif isinstance(norm, tuple): # It is also possible to give different limits in data space norm = mpl.colors.Normalize(*norm) elif not isinstance(norm, mpl.colors.Normalize): err = f"Value for size `norm` parameter not understood: {norm}" raise ValueError(err) else: # If provided with Normalize object, copy it so we can modify norm = copy(norm) # Set the mapping so all output values are in [0, 1] norm.clip = True # If the input range is not set, use the full range of the data if not norm.scaled(): norm(levels) # Map from data values to [0, 1] range sizes_scaled = norm(levels) # Now map from the scaled range into the artist units if isinstance(sizes, dict): lookup_table = sizes else: lo, hi = size_range sizes = lo + sizes_scaled * (hi - lo) lookup_table = dict(zip(levels, sizes)) return levels, lookup_table, norm, size_range @share_init_params_with_map class StyleMapping(SemanticMapping): """Mapping that sets artist style according to data values.""" # Style mapping is always treated as categorical map_type = "categorical" def __init__( self, plotter, markers=None, dashes=None, 
order=None, ): """Map the levels of the `style` variable to distinct values. Parameters ---------- # TODO add generic parameters """ super().__init__(plotter) data = plotter.plot_data.get("style", pd.Series(dtype=float)) if data.notna().any(): # Cast to list to handle numpy/pandas datetime quirks if variable_type(data) == "datetime": data = list(data) # Find ordered unique values levels = categorical_order(data, order) markers = self._map_attributes( markers, levels, unique_markers(len(levels)), "markers", ) dashes = self._map_attributes( dashes, levels, unique_dashes(len(levels)), "dashes", ) # Build the paths matplotlib will use to draw the markers paths = {} filled_markers = [] for k, m in markers.items(): if not isinstance(m, mpl.markers.MarkerStyle): m = mpl.markers.MarkerStyle(m) paths[k] = m.get_path().transformed(m.get_transform()) filled_markers.append(m.is_filled()) # Mixture of filled and unfilled markers will show line art markers # in the edge color, which defaults to white. This can be handled, # but there would be additional complexity with specifying the # weight of the line art markers without overwhelming the filled # ones with the edges. So for now, we will disallow mixtures. 
if any(filled_markers) and not all(filled_markers): err = "Filled and line art markers cannot be mixed" raise ValueError(err) lookup_table = {} for key in levels: lookup_table[key] = {} if markers: lookup_table[key]["marker"] = markers[key] lookup_table[key]["path"] = paths[key] if dashes: lookup_table[key]["dashes"] = dashes[key] self.levels = levels self.lookup_table = lookup_table def _lookup_single(self, key, attr=None): """Get attribute(s) for a given data point.""" if attr is None: value = self.lookup_table[key] else: value = self.lookup_table[key][attr] return value def _map_attributes(self, arg, levels, defaults, attr): """Handle the specification for a given style attribute.""" if arg is True: lookup_table = dict(zip(levels, defaults)) elif isinstance(arg, dict): missing = set(levels) - set(arg) if missing: err = f"These `{attr}` levels are missing values: {missing}" raise ValueError(err) lookup_table = arg elif isinstance(arg, Sequence): if len(levels) != len(arg): err = f"The `{attr}` argument has the wrong number of values" raise ValueError(err) lookup_table = dict(zip(levels, arg)) elif arg: err = f"This `{attr}` argument was not understood: {arg}" raise ValueError(err) else: lookup_table = {} return lookup_table # =========================================================================== # class VectorPlotter: """Base class for objects underlying *plot functions.""" _semantic_mappings = { "hue": HueMapping, "size": SizeMapping, "style": StyleMapping, } # TODO units is another example of a non-mapping "semantic" # we need a general name for this and separate handling semantics = "x", "y", "hue", "size", "style", "units" wide_structure = { "x": "@index", "y": "@values", "hue": "@columns", "style": "@columns", } flat_structure = {"x": "@index", "y": "@values"} _default_size_range = 1, 2 # Unused but needed in tests, ugh def __init__(self, data=None, variables={}): self.assign_variables(data, variables) for var, cls in self._semantic_mappings.items(): # 
Create the mapping function map_func = partial(cls.map, plotter=self) setattr(self, f"map_{var}", map_func) # Call the mapping function to initialize with default values getattr(self, f"map_{var}")() self._var_levels = {} @classmethod def get_semantics(cls, kwargs, semantics=None): """Subset a dictionary of arguments with known semantic variables.""" # TODO this should be get_variables since we have included x and y if semantics is None: semantics = cls.semantics variables = {} for key, val in kwargs.items(): if key in semantics and val is not None: variables[key] = val return variables @property def has_xy_data(self): """Return True if at least one of x or y is defined.""" return bool({"x", "y"} & set(self.variables)) @property def var_levels(self): """Property interface to ordered list of variable levels. Each time it's accessed, it updates the var_levels dictionary with the list of levels in the current semantic mappers. But it also allows the dictionary to persist, so it can be used to set levels by a key. This is used to track the list of col/row levels using an attached FacetGrid object, but it's kind of messy and ideally fixed by improving the faceting logic so it interfaces better with the modern approach to tracking plot variables.
""" for var in self.variables: try: map_obj = getattr(self, f"_{var}_map") self._var_levels[var] = map_obj.levels except AttributeError: pass return self._var_levels def assign_variables(self, data=None, variables={}): """Define plot variables, optionally using lookup from `data`.""" x = variables.get("x", None) y = variables.get("y", None) if x is None and y is None: self.input_format = "wide" plot_data, variables = self._assign_variables_wideform( data, **variables, ) else: self.input_format = "long" plot_data, variables = self._assign_variables_longform( data, **variables, ) self.plot_data = plot_data self.variables = variables self.var_types = { v: variable_type( plot_data[v], boolean_type="numeric" if v in "xy" else "categorical" ) for v in variables } return self def _assign_variables_wideform(self, data=None, **kwargs): """Define plot variables given wide-form data. Parameters ---------- data : flat vector or collection of vectors Data can be a vector or mapping that is coerceable to a Series or a sequence- or mapping-based collection of such vectors, or a rectangular numpy array, or a Pandas DataFrame. kwargs : variable -> data mappings Behavior with keyword arguments is currently undefined. Returns ------- plot_data : :class:`pandas.DataFrame` Long-form data object mapping seaborn variables (x, y, hue, ...) to data vectors. variables : dict Keys are defined seaborn variables; values are names inferred from the inputs (or None when no name can be determined). 
""" # Raise if semantic or other variables are assigned in wide-form mode assigned = [k for k, v in kwargs.items() if v is not None] if any(assigned): s = "s" if len(assigned) > 1 else "" err = f"The following variable{s} cannot be assigned with wide-form data: " err += ", ".join(f"`{v}`" for v in assigned) raise ValueError(err) # Determine if the data object actually has any data in it empty = data is None or not len(data) # Then, determine if we have "flat" data (a single vector) if isinstance(data, dict): values = data.values() else: values = np.atleast_1d(np.asarray(data, dtype=object)) flat = not any( isinstance(v, Iterable) and not isinstance(v, (str, bytes)) for v in values ) if empty: # Make an object with the structure of plot_data, but empty plot_data = pd.DataFrame() variables = {} elif flat: # Handle flat data by converting to pandas Series and using the # index and/or values to define x and/or y # (Could be accomplished with a more general to_series() interface) flat_data = pd.Series(data).copy() names = { "@values": flat_data.name, "@index": flat_data.index.name } plot_data = {} variables = {} for var in ["x", "y"]: if var in self.flat_structure: attr = self.flat_structure[var] plot_data[var] = getattr(flat_data, attr[1:]) variables[var] = names[self.flat_structure[var]] plot_data = pd.DataFrame(plot_data) else: # Otherwise assume we have some collection of vectors. # Handle Python sequences such that entries end up in the columns, # not in the rows, of the intermediate wide DataFrame. # One way to accomplish this is to convert to a dict of Series. if isinstance(data, Sequence): data_dict = {} for i, var in enumerate(data): key = getattr(var, "name", i) # TODO is there a safer/more generic way to ensure Series? # sort of like np.asarray, but for pandas? 
data_dict[key] = pd.Series(var) data = data_dict # Pandas requires that dict values either be Series objects # or all have the same length, but we want to allow "ragged" inputs if isinstance(data, Mapping): data = {key: pd.Series(val) for key, val in data.items()} # Otherwise, delegate to the pandas DataFrame constructor # This is where we'd prefer to use a general interface that says # "give me this data as a pandas DataFrame", so we can accept # DataFrame objects from other libraries wide_data = pd.DataFrame(data, copy=True) # At this point we should reduce the dataframe to numeric cols numeric_cols = wide_data.apply(variable_type) == "numeric" wide_data = wide_data.loc[:, numeric_cols] # Now melt the data to long form melt_kws = {"var_name": "@columns", "value_name": "@values"} use_index = "@index" in self.wide_structure.values() if use_index: melt_kws["id_vars"] = "@index" try: orig_categories = wide_data.columns.categories orig_ordered = wide_data.columns.ordered wide_data.columns = wide_data.columns.add_categories("@index") except AttributeError: category_columns = False else: category_columns = True wide_data["@index"] = wide_data.index.to_series() plot_data = wide_data.melt(**melt_kws) if use_index and category_columns: plot_data["@columns"] = pd.Categorical(plot_data["@columns"], orig_categories, orig_ordered) # Assign names corresponding to plot semantics for var, attr in self.wide_structure.items(): plot_data[var] = plot_data[attr] # Define the variable names variables = {} for var, attr in self.wide_structure.items(): obj = getattr(wide_data, attr[1:]) variables[var] = getattr(obj, "name", None) # Remove redundant columns from plot_data plot_data = plot_data[list(variables)] return plot_data, variables def _assign_variables_longform(self, data=None, **kwargs): """Define plot variables given long-form data and/or vector inputs. Parameters ---------- data : dict-like collection of vectors Input data where variable names map to vector values. 
kwargs : variable -> data mappings Keys are seaborn variables (x, y, hue, ...) and values are vectors in any format that can construct a :class:`pandas.DataFrame` or names of columns or index levels in ``data``. Returns ------- plot_data : :class:`pandas.DataFrame` Long-form data object mapping seaborn variables (x, y, hue, ...) to data vectors. variables : dict Keys are defined seaborn variables; values are names inferred from the inputs (or None when no name can be determined). Raises ------ ValueError When variables are strings that don't appear in ``data``. """ plot_data = {} variables = {} # Data is optional; all variables can be defined as vectors if data is None: data = {} # TODO should we try a data.to_dict() or similar here to more # generally accept objects with that interface? # Note that dict(df) also works for pandas, and gives us what we # want, whereas DataFrame.to_dict() gives a nested dict instead of # a dict of series. # Variables can also be extracted from the index attribute # TODO is this the most general way to enable it? # There is no index.to_dict on a MultiIndex, unfortunately try: index = data.index.to_frame() except AttributeError: index = {} # The caller will determine the order of variables in plot_data for key, val in kwargs.items(): # First try to treat the argument as a key for the data collection. # But be flexible about what can be used as a key. # Usually it will be a string, but allow numbers or tuples too when # taking from the main data object. Only allow strings to reference # fields in the index, because otherwise there is too much ambiguity.
try: val_as_data_key = ( val in data or (isinstance(val, (str, bytes)) and val in index) ) except (KeyError, TypeError): val_as_data_key = False if val_as_data_key: # We know that __getitem__ will work if val in data: plot_data[key] = data[val] elif val in index: plot_data[key] = index[val] variables[key] = val elif isinstance(val, (str, bytes)): # This looks like a column name but we don't know what it means! err = f"Could not interpret value `{val}` for parameter `{key}`" raise ValueError(err) else: # Otherwise, assume the value is itself data # Raise when data object is present and a vector can't be matched if isinstance(data, pd.DataFrame) and not isinstance(val, pd.Series): if np.ndim(val) and len(data) != len(val): val_cls = val.__class__.__name__ err = ( f"Length of {val_cls} vectors must match length of `data`" f" when both are used, but `data` has length {len(data)}" f" and the vector passed to `{key}` has length {len(val)}." ) raise ValueError(err) plot_data[key] = val # Try to infer the name of the variable variables[key] = getattr(val, "name", None) # Construct a tidy plot DataFrame. This will convert a number of # types automatically, aligning on index in case of pandas objects plot_data = pd.DataFrame(plot_data) # Reduce the variables dictionary to fields with valid data variables = { var: name for var, name in variables.items() if plot_data[var].notnull().any() } return plot_data, variables def iter_data( self, grouping_vars=None, reverse=False, from_comp_data=False, ): """Generator for getting subsets of data defined by semantic variables. Also injects "col" and "row" into grouping semantics. Parameters ---------- grouping_vars : string or list of strings Semantic variables that define the subsets of data. reverse : bool, optional If True, reverse the order of iteration. from_comp_data : bool, optional If True, use self.comp_data rather than self.plot_data Yields ------ sub_vars : dict Keys are semantic names, values are the level of that semantic.
sub_data : :class:`pandas.DataFrame` Subset of ``plot_data`` for this combination of semantic values. """ # TODO should this default to using all (non x/y?) semantics? # or define grouping vars somewhere? if grouping_vars is None: grouping_vars = [] elif isinstance(grouping_vars, str): grouping_vars = [grouping_vars] elif isinstance(grouping_vars, tuple): grouping_vars = list(grouping_vars) # Always insert faceting variables facet_vars = {"col", "row"} grouping_vars.extend( facet_vars & set(self.variables) - set(grouping_vars) ) # Reduce to the semantics used in this plot grouping_vars = [ var for var in grouping_vars if var in self.variables ] if from_comp_data: data = self.comp_data else: data = self.plot_data if grouping_vars: grouped_data = data.groupby( grouping_vars, sort=False, as_index=False ) grouping_keys = [] for var in grouping_vars: grouping_keys.append(self.var_levels.get(var, [])) iter_keys = itertools.product(*grouping_keys) if reverse: iter_keys = reversed(list(iter_keys)) for key in iter_keys: # Pandas fails with singleton tuple inputs pd_key = key[0] if len(key) == 1 else key try: data_subset = grouped_data.get_group(pd_key) except KeyError: continue sub_vars = dict(zip(grouping_vars, key)) yield sub_vars, data_subset else: yield {}, data @property def comp_data(self): """Dataframe with numeric x and y, after unit conversion and log scaling.""" if not hasattr(self, "ax"): # Probably a good idea, but will need a bunch of tests updated # Most of these tests should just use the external interface # Then this can be re-enabled.
# raise AttributeError("No Axes attached to plotter") return self.plot_data if not hasattr(self, "_comp_data"): comp_data = ( self.plot_data .copy(deep=False) .drop(["x", "y"], axis=1, errors="ignore") ) for var in "yx": if var not in self.variables: continue # Get a corresponding axis object so that we can convert the units # to matplotlib's numeric representation, which we can compute on # This is messy and it would probably be better for VectorPlotter # to manage its own converters (using the matplotlib tools). # XXX Currently does not support unshared categorical axes! # (But see comment in _attach about how those don't exist) if self.ax is None: ax = self.facets.axes.flat[0] else: ax = self.ax axis = getattr(ax, f"{var}axis") # Use the converter assigned to the axis to get a float representation # of the data, passing np.nan or pd.NA through (pd.NA becomes np.nan) with pd.option_context('mode.use_inf_as_null', True): orig = self.plot_data[var].dropna() comp_col = pd.Series(index=orig.index, dtype=float, name=var) comp_col.loc[orig.index] = pd.to_numeric(axis.convert_units(orig)) if axis.get_scale() == "log": comp_col = np.log10(comp_col) comp_data.insert(0, var, comp_col) self._comp_data = comp_data return self._comp_data def _get_axes(self, sub_vars): """Return an Axes object based on existence of row/col variables.""" row = sub_vars.get("row", None) col = sub_vars.get("col", None) if row is not None and col is not None: return self.facets.axes_dict[(row, col)] elif row is not None: return self.facets.axes_dict[row] elif col is not None: return self.facets.axes_dict[col] elif self.ax is None: return self.facets.ax else: return self.ax def _attach(self, obj, allowed_types=None, log_scale=None): """Associate the plotter with an Axes manager and initialize its units. Parameters ---------- obj : :class:`matplotlib.axes.Axes` or :class:'FacetGrid` Structural object that we will eventually plot onto. 
allowed_types : str or list of str If provided, raise when either the x or y variable does not have one of the declared seaborn types. log_scale : bool, number, or pair of bools or numbers If not False, set the axes to use log scaling, with the given base or defaulting to 10. If a tuple, interpreted as separate arguments for the x and y axes. """ from .axisgrid import FacetGrid if isinstance(obj, FacetGrid): self.ax = None self.facets = obj ax_list = obj.axes.flatten() if obj.col_names is not None: self.var_levels["col"] = obj.col_names if obj.row_names is not None: self.var_levels["row"] = obj.row_names else: self.ax = obj self.facets = None ax_list = [obj] if allowed_types is None: allowed_types = ["numeric", "datetime", "categorical"] elif isinstance(allowed_types, str): allowed_types = [allowed_types] for var in set("xy").intersection(self.variables): # Check types of x/y variables var_type = self.var_types[var] if var_type not in allowed_types: err = ( f"The {var} variable is {var_type}, but one of " f"{allowed_types} is required" ) raise TypeError(err) # Register with the matplotlib unit conversion machinery # Perhaps cleaner to manage our own transform objects? # XXX Currently this does not allow "unshared" categorical axes # We could add metadata to a FacetGrid and set units based on that. # See also comment in comp_data, which only uses a single axes to do # its mapping, meaning that it won't handle unshared axes well either. 
for ax in ax_list: axis = getattr(ax, f"{var}axis") seed_data = self.plot_data[var] if var_type == "categorical": seed_data = categorical_order(seed_data) axis.update_units(seed_data) # For categorical y, we want the "first" level to be at the top of the axis if self.var_types.get("y", None) == "categorical": for ax in ax_list: try: ax.yaxis.set_inverted(True) except AttributeError: # mpl < 3.1 if not ax.yaxis_inverted(): ax.invert_yaxis() # Possibly log-scale one or both axes if log_scale is not None: # Allow single value or x, y tuple try: scalex, scaley = log_scale except TypeError: scalex = log_scale if "x" in self.variables else False scaley = log_scale if "y" in self.variables else False for axis, scale in zip("xy", (scalex, scaley)): if scale: for ax in ax_list: set_scale = getattr(ax, f"set_{axis}scale") if scale is True: set_scale("log") else: if LooseVersion(mpl.__version__) >= "3.3": set_scale("log", base=scale) else: set_scale("log", **{f"base{axis}": scale}) def _log_scaled(self, axis): """Return True if specified axis is log scaled on all attached axes.""" if self.ax is None: axes_list = self.facets.axes.flatten() else: axes_list = [self.ax] log_scaled = [] for ax in axes_list: data_axis = getattr(ax, f"{axis}axis") log_scaled.append(data_axis.get_scale() == "log") if any(log_scaled) and not all(log_scaled): raise RuntimeError("Axis scaling is not consistent") return any(log_scaled) def _add_axis_labels(self, ax, default_x="", default_y=""): """Add axis labels if not present, set visibility to match ticklabels.""" # TODO ax could default to None and use attached axes if present # but what to do about the case of facets? Currently using FacetGrid's # set_axis_labels method, which doesn't add labels to the interior even # when the axes are not shared. Maybe that makes sense? 
if not ax.get_xlabel(): x_visible = any(t.get_visible() for t in ax.get_xticklabels()) ax.set_xlabel(self.variables.get("x", default_x), visible=x_visible) if not ax.get_ylabel(): y_visible = any(t.get_visible() for t in ax.get_yticklabels()) ax.set_ylabel(self.variables.get("y", default_y), visible=y_visible) def variable_type(vector, boolean_type="numeric"): """ Determine whether a vector contains numeric, categorical, or datetime data. This function differs from the pandas typing API in two ways: - Python sequences or object-typed PyData objects are considered numeric if all of their entries are numeric. - String or mixed-type data are considered categorical even if not explicitly represented as a :class:`pandas.api.types.CategoricalDtype`. Parameters ---------- vector : :func:`pandas.Series`, :func:`numpy.ndarray`, or Python sequence Input data to test. boolean_type : 'numeric' or 'categorical' Type to use for vectors containing only 0s and 1s (and NAs). Returns ------- var_type : 'numeric', 'categorical', or 'datetime' Name identifying the type of data in the vector. """ # If a categorical dtype is set, infer categorical if pd.api.types.is_categorical_dtype(vector): return "categorical" # Special-case all-na data, which is always "numeric" if pd.isna(vector).all(): return "numeric" # Special-case binary/boolean data, allow caller to determine # This triggers a numpy warning when vector has strings/objects # https://github.com/numpy/numpy/issues/6784 # Because we reduce with .all(), we are agnostic about whether the # comparison returns a scalar or vector, so we will ignore the warning. # It triggers a separate DeprecationWarning when the vector has datetimes: # https://github.com/numpy/numpy/issues/13548 # This is considered a bug by numpy and will likely go away. 
with warnings.catch_warnings(): warnings.simplefilter( action='ignore', category=(FutureWarning, DeprecationWarning) ) if np.isin(vector, [0, 1, np.nan]).all(): return boolean_type # Defer to positive pandas tests if pd.api.types.is_numeric_dtype(vector): return "numeric" if pd.api.types.is_datetime64_dtype(vector): return "datetime" # --- If we get to here, we need to check the entries # Check for a collection where everything is a number def all_numeric(x): for x_i in x: if not isinstance(x_i, Number): return False return True if all_numeric(vector): return "numeric" # Check for a collection where everything is a datetime def all_datetime(x): for x_i in x: if not isinstance(x_i, (datetime, np.datetime64)): return False return True if all_datetime(vector): return "datetime" # Otherwise, our final fallback is to consider things categorical return "categorical" def infer_orient(x=None, y=None, orient=None, require_numeric=True): """Determine how the plot should be oriented based on the data. For historical reasons, the convention is to call a plot "horizontally" or "vertically" oriented based on the axis representing its dependent variable. Practically, this is used when determining the axis for numerical aggregation. Parameters ---------- x, y : Vector data or None Positional data vectors for the plot. orient : string or None Specified orientation, which must start with "v" or "h" if not None. require_numeric : bool If set, raise when the implied dependent variable is not numeric. Returns ------- orient : "v" or "h" Raises ------ ValueError: When `orient` is not None and does not start with "h" or "v" TypeError: When dependent variable is not numeric, with `require_numeric` """ x_type = None if x is None else variable_type(x) y_type = None if y is None else variable_type(y) nonnumeric_dv_error = "{} orientation requires numeric `{}` variable." single_var_warning = "{} orientation ignored with only `{}` specified."
if x is None: if str(orient).startswith("h"): warnings.warn(single_var_warning.format("Horizontal", "y")) if require_numeric and y_type != "numeric": raise TypeError(nonnumeric_dv_error.format("Vertical", "y")) return "v" elif y is None: if str(orient).startswith("v"): warnings.warn(single_var_warning.format("Vertical", "x")) if require_numeric and x_type != "numeric": raise TypeError(nonnumeric_dv_error.format("Horizontal", "x")) return "h" elif str(orient).startswith("v"): if require_numeric and y_type != "numeric": raise TypeError(nonnumeric_dv_error.format("Vertical", "y")) return "v" elif str(orient).startswith("h"): if require_numeric and x_type != "numeric": raise TypeError(nonnumeric_dv_error.format("Horizontal", "x")) return "h" elif orient is not None: raise ValueError(f"Value for `orient` not understood: {orient}") elif x_type != "numeric" and y_type == "numeric": return "v" elif x_type == "numeric" and y_type != "numeric": return "h" elif require_numeric and "numeric" not in (x_type, y_type): err = "Neither the `x` nor `y` variable appears to be numeric." raise TypeError(err) else: return "v" def unique_dashes(n): """Build an arbitrarily long list of unique dash styles for lines. Parameters ---------- n : int Number of unique dash specs to generate. Returns ------- dashes : list of strings or tuples Valid arguments for the ``dashes`` parameter on :class:`matplotlib.lines.Line2D`. The first spec is a solid line (``""``), the remainder are sequences of long and short dashes. 
""" # Start with dash specs that are well distinguishable dashes = [ "", (4, 1.5), (1, 1), (3, 1.25, 1.5, 1.25), (5, 1, 1, 1), ] # Now programatically build as many as we need p = 3 while len(dashes) < n: # Take combinations of long and short dashes a = itertools.combinations_with_replacement([3, 1.25], p) b = itertools.combinations_with_replacement([4, 1], p) # Interleave the combinations, reversing one of the streams segment_list = itertools.chain(*zip( list(a)[1:-1][::-1], list(b)[1:-1] )) # Now insert the gaps for segments in segment_list: gap = min(segments) spec = tuple(itertools.chain(*((seg, gap) for seg in segments))) dashes.append(spec) p += 1 return dashes[:n] def unique_markers(n): """Build an arbitrarily long list of unique marker styles for points. Parameters ---------- n : int Number of unique marker specs to generate. Returns ------- markers : list of string or tuples Values for defining :class:`matplotlib.markers.MarkerStyle` objects. All markers will be filled. """ # Start with marker specs that are well distinguishable markers = [ "o", "X", (4, 0, 45), "P", (4, 0, 0), (4, 1, 0), "^", (4, 1, 45), "v", ] # Now generate more from regular polygons of increasing order s = 5 while len(markers) < n: a = 360 / (s + 1) / 2 markers.extend([ (s + 1, 1, a), (s + 1, 0, a), (s, 1, 0), (s, 0, 0), ]) s += 1 # Convert to MarkerStyle object, using only exactly what we need # markers = [mpl.markers.MarkerStyle(m) for m in markers[:n]] return markers[:n] def categorical_order(vector, order=None): """Return a list of unique data values. Determine an ordered list of levels in ``values``. Parameters ---------- vector : list, array, Categorical, or Series Vector of "categorical" values order : list-like, optional Desired order of category levels to override the order determined from the ``values`` object. Returns ------- order : list Ordered list of category levels not including null values. 
""" if order is None: if hasattr(vector, "categories"): order = vector.categories else: try: order = vector.cat.categories except (TypeError, AttributeError): try: order = vector.unique() except AttributeError: order = pd.unique(vector) if variable_type(vector) == "numeric": order = np.sort(order) order = filter(pd.notnull, order) return list(order) seaborn-0.11.2/seaborn/_decorators.py000066400000000000000000000041161410631356500175410ustar00rootroot00000000000000from inspect import signature, Parameter from functools import wraps import warnings # This function was adapted from scikit-learn # github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/validation.py def _deprecate_positional_args(f): """Decorator for methods that issues warnings for positional arguments. Using the keyword-only argument syntax in pep 3102, arguments after the * will issue a warning when passed as a positional argument. Parameters ---------- f : function function to check arguments on """ sig = signature(f) kwonly_args = [] all_args = [] for name, param in sig.parameters.items(): if param.kind == Parameter.POSITIONAL_OR_KEYWORD: all_args.append(name) elif param.kind == Parameter.KEYWORD_ONLY: kwonly_args.append(name) @wraps(f) def inner_f(*args, **kwargs): extra_args = len(args) - len(all_args) if extra_args > 0: plural = "s" if extra_args > 1 else "" article = "" if plural else "a " warnings.warn( "Pass the following variable{} as {}keyword arg{}: {}. " "From version 0.12, the only valid positional argument " "will be `data`, and passing other arguments without an " "explicit keyword will result in an error or misinterpretation." 
.format(plural, article, plural, ", ".join(kwonly_args[:extra_args])), FutureWarning ) kwargs.update({k: arg for k, arg in zip(sig.parameters, args)}) return f(**kwargs) return inner_f def share_init_params_with_map(cls): """Make cls.map a classmethod with same signature as cls.__init__.""" map_sig = signature(cls.map) init_sig = signature(cls.__init__) new = [v for k, v in init_sig.parameters.items() if k != "self"] new.insert(0, map_sig.parameters["cls"]) cls.map.__signature__ = map_sig.replace(parameters=new) cls.map.__doc__ = cls.__init__.__doc__ cls.map = classmethod(cls.map) return cls seaborn-0.11.2/seaborn/_docstrings.py000066400000000000000000000145001410631356500175510ustar00rootroot00000000000000import re import pydoc from .external.docscrape import NumpyDocString class DocstringComponents: regexp = re.compile(r"\n((\n|.)+)\n\s*", re.MULTILINE) def __init__(self, comp_dict, strip_whitespace=True): """Read entries from a dict, optionally stripping outer whitespace.""" if strip_whitespace: entries = {} for key, val in comp_dict.items(): m = re.match(self.regexp, val) if m is None: entries[key] = val else: entries[key] = m.group(1) else: entries = comp_dict.copy() self.entries = entries def __getattr__(self, attr): """Provide dot access to entries for clean raw docstrings.""" if attr in self.entries: return self.entries[attr] else: try: return self.__getattribute__(attr) except AttributeError as err: # If Python is run with -OO, it will strip docstrings and our lookup # from self.entries will fail. We check for __debug__, which is actually # set to False by -O (it is True for normal execution). # But we only want to see an error when building the docs; # not something users should see, so this slight inconsistency is fine. 
if __debug__: raise err else: pass @classmethod def from_nested_components(cls, **kwargs): """Add multiple sub-sets of components.""" return cls(kwargs, strip_whitespace=False) @classmethod def from_function_params(cls, func): """Use the numpydoc parser to extract components from existing func.""" params = NumpyDocString(pydoc.getdoc(func))["Parameters"] comp_dict = {} for p in params: name = p.name type = p.type desc = "\n ".join(p.desc) comp_dict[name] = f"{name} : {type}\n {desc}" return cls(comp_dict) # TODO is "vector" the best term here? We mean to imply 1D data with a variety # of types? # TODO now that we can parse numpydoc style strings, do we need to define dicts # of docstring components, or just write out a docstring? _core_params = dict( data=""" data : :class:`pandas.DataFrame`, :class:`numpy.ndarray`, mapping, or sequence Input data structure. Either a long-form collection of vectors that can be assigned to named variables or a wide-form dataset that will be internally reshaped. """, # TODO add link to user guide narrative when exists xy=""" x, y : vectors or keys in ``data`` Variables that specify positions on the x and y axes. """, hue=""" hue : vector or key in ``data`` Semantic variable that is mapped to determine the color of plot elements. """, palette=""" palette : string, list, dict, or :class:`matplotlib.colors.Colormap` Method for choosing the colors to use when mapping the ``hue`` semantic. String values are passed to :func:`color_palette`. List or dict values imply categorical mapping, while a colormap object implies numeric mapping. """, # noqa: E501 hue_order=""" hue_order : vector of strings Specify the order of processing and plotting for categorical levels of the ``hue`` semantic. """, hue_norm=""" hue_norm : tuple or :class:`matplotlib.colors.Normalize` Either a pair of values that set the normalization range in data units or an object that will map from data units into a [0, 1] interval. Usage implies numeric mapping. 
""", color=""" color : :mod:`matplotlib color ` Single color specification for when hue mapping is not used. Otherwise, the plot will try to hook into the matplotlib property cycle. """, ax=""" ax : :class:`matplotlib.axes.Axes` Pre-existing axes for the plot. Otherwise, call :func:`matplotlib.pyplot.gca` internally. """, # noqa: E501 ) _core_returns = dict( ax=""" :class:`matplotlib.axes.Axes` The matplotlib axes containing the plot. """, facetgrid=""" :class:`FacetGrid` An object managing one or more subplots that correspond to conditional data subsets with convenient methods for batch-setting of axes attributes. """, jointgrid=""" :class:`JointGrid` An object managing multiple subplots that correspond to joint and marginal axes for plotting a bivariate relationship or distribution. """, pairgrid=""" :class:`PairGrid` An object managing multiple subplots that correspond to joint and marginal axes for pairwise combinations of multiple variables in a dataset. """, ) _seealso_blurbs = dict( # Relational plots scatterplot=""" scatterplot : Plot data using points. """, lineplot=""" lineplot : Plot data using lines. """, # Distribution plots displot=""" displot : Figure-level interface to distribution plot functions. """, histplot=""" histplot : Plot a histogram of binned counts with optional normalization or smoothing. """, kdeplot=""" kdeplot : Plot univariate or bivariate distributions using kernel density estimation. """, ecdfplot=""" ecdfplot : Plot empirical cumulative distribution functions. """, rugplot=""" rugplot : Plot a tick at each observation value along the x and/or y axes. """, # Categorical plots stripplot=""" stripplot : Plot a categorical scatter with jitter. """, swarmplot=""" swarmplot : Plot a categorical scatter with non-overlapping points. """, violinplot=""" violinplot : Draw an enhanced boxplot using kernel density estimation. """, pointplot=""" pointplot : Plot point estimates and CIs using markers and lines. 
""", # Multiples jointplot=""" jointplot : Draw a bivariate plot with univariate marginal distributions. """, pairplot=""" jointplot : Draw multiple bivariate plots with univariate marginal distributions. """, jointgrid=""" JointGrid : Set up a figure with joint and marginal views on bivariate data. """, pairgrid=""" PairGrid : Set up a figure with joint and marginal views on multiple variables. """, ) _core_docs = dict( params=DocstringComponents(_core_params), returns=DocstringComponents(_core_returns), seealso=DocstringComponents(_seealso_blurbs), ) seaborn-0.11.2/seaborn/_statistics.py000066400000000000000000000360021410631356500175650ustar00rootroot00000000000000"""Statistical transformations for visualization. This module is currently private, but is being written to eventually form part of the public API. The classes should behave roughly in the style of scikit-learn. - All data-independent parameters should be passed to the class constructor. - Each class should impelment a default transformation that is exposed through __call__. These are currently written for vector arguements, but I think consuming a whole `plot_data` DataFrame and return it with transformed variables would make more sense. - Some class have data-dependent preprocessing that should be cached and used multiple times (think defining histogram bins off all data and then counting observations within each bin multiple times per data subsets). These currently have unique names, but it would be good to have a common name. Not quite `fit`, but something similar. - Alternatively, the transform interface could take some information about grouping variables and do a groupby internally. - Some classes should define alternate transforms that might make the most sense with a different function. For example, KDE usually evaluates the distribution on a regular grid, but it would be useful for it to transform at the actual datapoints. 
Then again, this could be controlled by a parameter at the time of class instantiation. """ from distutils.version import LooseVersion from numbers import Number import numpy as np import scipy as sp from scipy import stats from .utils import _check_argument class KDE: """Univariate and bivariate kernel density estimator.""" def __init__( self, *, bw_method=None, bw_adjust=1, gridsize=200, cut=3, clip=None, cumulative=False, ): """Initialize the estimator with its parameters. Parameters ---------- bw_method : string, scalar, or callable, optional Method for determining the smoothing bandwidth to use; passed to :class:`scipy.stats.gaussian_kde`. bw_adjust : number, optional Factor that multiplicatively scales the value chosen using ``bw_method``. Increasing will make the curve smoother. See Notes. gridsize : int, optional Number of points on each dimension of the evaluation grid. cut : number, optional Factor, multiplied by the smoothing bandwidth, that determines how far the evaluation grid extends past the extreme datapoints. When set to 0, truncate the curve at the data limits. clip : pair of numbers or None, or a pair of such pairs Do not evaluate the density outside of these limits. cumulative : bool, optional If True, estimate a cumulative distribution function. 
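What this class does for univariate data can be sketched directly with scipy (a seaborn dependency): fit `gaussian_kde`, rescale the bandwidth by `bw_adjust`, and evaluate on a grid that extends `cut` bandwidths past the data. `kde_curve` below is a simplified, hypothetical stand-in for `KDE.__call__` that ignores `clip`, `weights`, and `cumulative`.

```python
# Minimal univariate KDE sketch mirroring the class above.
import numpy as np
from scipy import stats

def kde_curve(x, bw_adjust=1, gridsize=200, cut=3):
    kde = stats.gaussian_kde(x)
    # Multiplicatively scale the rule-of-thumb bandwidth, as bw_adjust does
    kde.set_bandwidth(kde.factor * bw_adjust)
    bw = np.sqrt(kde.covariance.squeeze())
    # Support grid extends `cut` bandwidths past the extreme datapoints
    support = np.linspace(x.min() - bw * cut, x.max() + bw * cut, gridsize)
    return kde(support), support

x = np.random.RandomState(0).normal(size=100)
density, support = kde_curve(x, bw_adjust=2)
```

Increasing `bw_adjust` above 1 smooths the curve; values below 1 track the data more closely.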
""" if clip is None: clip = None, None self.bw_method = bw_method self.bw_adjust = bw_adjust self.gridsize = gridsize self.cut = cut self.clip = clip self.cumulative = cumulative self.support = None def _define_support_grid(self, x, bw, cut, clip, gridsize): """Create the grid of evaluation points depending for vector x.""" clip_lo = -np.inf if clip[0] is None else clip[0] clip_hi = +np.inf if clip[1] is None else clip[1] gridmin = max(x.min() - bw * cut, clip_lo) gridmax = min(x.max() + bw * cut, clip_hi) return np.linspace(gridmin, gridmax, gridsize) def _define_support_univariate(self, x, weights): """Create a 1D grid of evaluation points.""" kde = self._fit(x, weights) bw = np.sqrt(kde.covariance.squeeze()) grid = self._define_support_grid( x, bw, self.cut, self.clip, self.gridsize ) return grid def _define_support_bivariate(self, x1, x2, weights): """Create a 2D grid of evaluation points.""" clip = self.clip if clip[0] is None or np.isscalar(clip[0]): clip = (clip, clip) kde = self._fit([x1, x2], weights) bw = np.sqrt(np.diag(kde.covariance).squeeze()) grid1 = self._define_support_grid( x1, bw[0], self.cut, clip[0], self.gridsize ) grid2 = self._define_support_grid( x2, bw[1], self.cut, clip[1], self.gridsize ) return grid1, grid2 def define_support(self, x1, x2=None, weights=None, cache=True): """Create the evaluation grid for a given data set.""" if x2 is None: support = self._define_support_univariate(x1, weights) else: support = self._define_support_bivariate(x1, x2, weights) if cache: self.support = support return support def _fit(self, fit_data, weights=None): """Fit the scipy kde while adding bw_adjust logic and version check.""" fit_kws = {"bw_method": self.bw_method} if weights is not None: if LooseVersion(sp.__version__) < "1.2.0": msg = "Weighted KDE requires scipy >= 1.2.0" raise RuntimeError(msg) fit_kws["weights"] = weights kde = stats.gaussian_kde(fit_data, **fit_kws) kde.set_bandwidth(kde.factor * self.bw_adjust) return kde def 
_eval_univariate(self, x, weights=None): """Fit and evaluate a univariate on univariate data.""" support = self.support if support is None: support = self.define_support(x, cache=False) kde = self._fit(x, weights) if self.cumulative: s_0 = support[0] density = np.array([ kde.integrate_box_1d(s_0, s_i) for s_i in support ]) else: density = kde(support) return density, support def _eval_bivariate(self, x1, x2, weights=None): """Fit and evaluate a univariate on bivariate data.""" support = self.support if support is None: support = self.define_support(x1, x2, cache=False) kde = self._fit([x1, x2], weights) if self.cumulative: grid1, grid2 = support density = np.zeros((grid1.size, grid2.size)) p0 = grid1.min(), grid2.min() for i, xi in enumerate(grid1): for j, xj in enumerate(grid2): density[i, j] = kde.integrate_box(p0, (xi, xj)) else: xx1, xx2 = np.meshgrid(*support) density = kde([xx1.ravel(), xx2.ravel()]).reshape(xx1.shape) return density, support def __call__(self, x1, x2=None, weights=None): """Fit and evaluate on univariate or bivariate data.""" if x2 is None: return self._eval_univariate(x1, weights) else: return self._eval_bivariate(x1, x2, weights) class Histogram: """Univariate and bivariate histogram estimator.""" def __init__( self, stat="count", bins="auto", binwidth=None, binrange=None, discrete=False, cumulative=False, ): """Initialize the estimator with its parameters. Parameters ---------- stat : str Aggregate statistic to compute in each bin. - `count`: show the number of observations in each bin - `frequency`: show the number of observations divided by the bin width - `probability`: or `proportion`: normalize such that bar heights sum to 1 - `percent`: normalize such that bar heights sum to 100 - `density`: normalize such that the total area of the histogram equals 1 bins : str, number, vector, or a pair of such values Generic bin parameter that can be the name of a reference rule, the number of bins, or the breaks of the bins. 
Passed to :func:`numpy.histogram_bin_edges`. binwidth : number or pair of numbers Width of each bin, overrides ``bins`` but can be used with ``binrange``. binrange : pair of numbers or a pair of pairs Lowest and highest value for bin edges; can be used either with ``bins`` or ``binwidth``. Defaults to data extremes. discrete : bool or pair of bools If True, set ``binwidth`` and ``binrange`` such that bin edges cover integer values in the dataset. cumulative : bool If True, return the cumulative statistic. """ stat_choices = [ "count", "frequency", "density", "probability", "proportion", "percent", ] _check_argument("stat", stat_choices, stat) self.stat = stat self.bins = bins self.binwidth = binwidth self.binrange = binrange self.discrete = discrete self.cumulative = cumulative self.bin_kws = None def _define_bin_edges(self, x, weights, bins, binwidth, binrange, discrete): """Inner function that takes bin parameters as arguments.""" if binrange is None: start, stop = x.min(), x.max() else: start, stop = binrange if discrete: bin_edges = np.arange(start - .5, stop + 1.5) elif binwidth is not None: step = binwidth bin_edges = np.arange(start, stop + step, step) else: bin_edges = np.histogram_bin_edges( x, bins, binrange, weights, ) return bin_edges def define_bin_params(self, x1, x2=None, weights=None, cache=True): """Given data, return numpy.histogram parameters to define bins.""" if x2 is None: bin_edges = self._define_bin_edges( x1, weights, self.bins, self.binwidth, self.binrange, self.discrete, ) if isinstance(self.bins, (str, Number)): n_bins = len(bin_edges) - 1 bin_range = bin_edges.min(), bin_edges.max() bin_kws = dict(bins=n_bins, range=bin_range) else: bin_kws = dict(bins=bin_edges) else: bin_edges = [] for i, x in enumerate([x1, x2]): # Resolve out whether bin parameters are shared # or specific to each variable bins = self.bins if not bins or isinstance(bins, (str, Number)): pass elif isinstance(bins[i], str): bins = bins[i] elif len(bins) == 2: bins = 
bins[i] binwidth = self.binwidth if binwidth is None: pass elif not isinstance(binwidth, Number): binwidth = binwidth[i] binrange = self.binrange if binrange is None: pass elif not isinstance(binrange[0], Number): binrange = binrange[i] discrete = self.discrete if not isinstance(discrete, bool): discrete = discrete[i] # Define the bins for this variable bin_edges.append(self._define_bin_edges( x, weights, bins, binwidth, binrange, discrete, )) bin_kws = dict(bins=tuple(bin_edges)) if cache: self.bin_kws = bin_kws return bin_kws def _eval_bivariate(self, x1, x2, weights): """Inner function for histogram of two variables.""" bin_kws = self.bin_kws if bin_kws is None: bin_kws = self.define_bin_params(x1, x2, cache=False) density = self.stat == "density" hist, *bin_edges = np.histogram2d( x1, x2, **bin_kws, weights=weights, density=density ) area = np.outer( np.diff(bin_edges[0]), np.diff(bin_edges[1]), ) if self.stat == "probability" or self.stat == "proportion": hist = hist.astype(float) / hist.sum() elif self.stat == "percent": hist = hist.astype(float) / hist.sum() * 100 elif self.stat == "frequency": hist = hist.astype(float) / area if self.cumulative: if self.stat in ["density", "frequency"]: hist = (hist * area).cumsum(axis=0).cumsum(axis=1) else: hist = hist.cumsum(axis=0).cumsum(axis=1) return hist, bin_edges def _eval_univariate(self, x, weights): """Inner function for histogram of one variable.""" bin_kws = self.bin_kws if bin_kws is None: bin_kws = self.define_bin_params(x, weights=weights, cache=False) density = self.stat == "density" hist, bin_edges = np.histogram( x, **bin_kws, weights=weights, density=density, ) if self.stat == "probability" or self.stat == "proportion": hist = hist.astype(float) / hist.sum() elif self.stat == "percent": hist = hist.astype(float) / hist.sum() * 100 elif self.stat == "frequency": hist = hist.astype(float) / np.diff(bin_edges) if self.cumulative: if self.stat in ["density", "frequency"]: hist = (hist * 
np.diff(bin_edges)).cumsum() else: hist = hist.cumsum() return hist, bin_edges def __call__(self, x1, x2=None, weights=None): """Count the occurrences in each bin, maybe normalize.""" if x2 is None: return self._eval_univariate(x1, weights) else: return self._eval_bivariate(x1, x2, weights) class ECDF: """Univariate empirical cumulative distribution estimator.""" def __init__(self, stat="proportion", complementary=False): """Initialize the class with its parameters. Parameters ---------- stat : {{"proportion", "count"}} Distribution statistic to compute. complementary : bool If True, use the complementary CDF (1 - CDF). """ _check_argument("stat", ["count", "proportion"], stat) self.stat = stat self.complementary = complementary def _eval_bivariate(self, x1, x2, weights): """Inner function for ECDF of two variables.""" raise NotImplementedError("Bivariate ECDF is not implemented") def _eval_univariate(self, x, weights): """Inner function for ECDF of one variable.""" sorter = x.argsort() x = x[sorter] weights = weights[sorter] y = weights.cumsum() if self.stat == "proportion": y = y / y.max() x = np.r_[-np.inf, x] y = np.r_[0, y] if self.complementary: y = y.max() - y return y, x def __call__(self, x1, x2=None, weights=None): """Return proportion or count of observations below each sorted datapoint.""" x1 = np.asarray(x1) if weights is None: weights = np.ones_like(x1) else: weights = np.asarray(weights) if x2 is None: return self._eval_univariate(x1, weights) else: return self._eval_bivariate(x1, x2, weights) seaborn-0.11.2/seaborn/_testing.py import numpy as np import matplotlib as mpl from matplotlib.colors import to_rgb, to_rgba from numpy.testing import assert_array_equal LINE_PROPS = [ "alpha", "color", "linewidth", "linestyle", "xydata", "zorder", ] COLLECTION_PROPS = [ "alpha", "edgecolor", "facecolor", "fill", "hatch", "linestyle", "linewidth", "paths", "zorder", ] BAR_PROPS = [
"alpha", "edgecolor", "facecolor", "fill", "hatch", "height", "linestyle", "linewidth", "xy", "zorder", ] def assert_colors_equal(a, b, check_alpha=True): def handle_array(x): if isinstance(x, np.ndarray): if x.ndim > 1: x = np.unique(x, axis=0).squeeze() if x.ndim > 1: raise ValueError("Color arrays must be 1 dimensional") return x a = handle_array(a) b = handle_array(b) f = to_rgba if check_alpha else to_rgb assert f(a) == f(b) def assert_artists_equal(list1, list2, properties): assert len(list1) == len(list2) for a1, a2 in zip(list1, list2): prop1 = a1.properties() prop2 = a2.properties() for key in properties: v1 = prop1[key] v2 = prop2[key] if key == "paths": for p1, p2 in zip(v1, v2): assert_array_equal(p1.vertices, p2.vertices) assert_array_equal(p1.codes, p2.codes) elif isinstance(v1, np.ndarray): assert_array_equal(v1, v2) elif key == "color": v1 = mpl.colors.to_rgba(v1) v2 = mpl.colors.to_rgba(v2) assert v1 == v2 else: assert v1 == v2 def assert_legends_equal(leg1, leg2): assert leg1.get_title().get_text() == leg2.get_title().get_text() for t1, t2 in zip(leg1.get_texts(), leg2.get_texts()): assert t1.get_text() == t2.get_text() assert_artists_equal( leg1.get_patches(), leg2.get_patches(), BAR_PROPS, ) assert_artists_equal( leg1.get_lines(), leg2.get_lines(), LINE_PROPS, ) def assert_plots_equal(ax1, ax2, labels=True): assert_artists_equal(ax1.patches, ax2.patches, BAR_PROPS) assert_artists_equal(ax1.lines, ax2.lines, LINE_PROPS) poly1 = ax1.findobj(mpl.collections.PolyCollection) poly2 = ax2.findobj(mpl.collections.PolyCollection) assert_artists_equal(poly1, poly2, COLLECTION_PROPS) if labels: assert ax1.get_xlabel() == ax2.get_xlabel() assert ax1.get_ylabel() == ax2.get_ylabel() seaborn-0.11.2/seaborn/algorithms.py000066400000000000000000000106031410631356500174040ustar00rootroot00000000000000"""Algorithms to support fitting routines in seaborn plotting functions.""" import numbers import numpy as np import warnings def bootstrap(*args, **kwargs): 
"""Resample one or more arrays with replacement and store aggregate values. Positional arguments are a sequence of arrays to bootstrap along the first axis and pass to a summary function. Keyword arguments: n_boot : int, default 10000 Number of iterations axis : int, default None Will pass axis to ``func`` as a keyword argument. units : array, default None Array of sampling unit IDs. When used the bootstrap resamples units and then observations within units instead of individual datapoints. func : string or callable, default np.mean Function to call on the args that are passed in. If string, tries to use as named method on numpy array. seed : Generator | SeedSequence | RandomState | int | None Seed for the random number generator; useful if you want reproducible resamples. Returns ------- boot_dist: array array of bootstrapped statistic values """ # Ensure list of arrays are same length if len(np.unique(list(map(len, args)))) > 1: raise ValueError("All input arrays must have the same length") n = len(args[0]) # Default keyword arguments n_boot = kwargs.get("n_boot", 10000) func = kwargs.get("func", np.mean) axis = kwargs.get("axis", None) units = kwargs.get("units", None) random_seed = kwargs.get("random_seed", None) if random_seed is not None: msg = "`random_seed` has been renamed to `seed` and will be removed" warnings.warn(msg) seed = kwargs.get("seed", random_seed) if axis is None: func_kwargs = dict() else: func_kwargs = dict(axis=axis) # Initialize the resampler rng = _handle_random_seed(seed) # Coerce to arrays args = list(map(np.asarray, args)) if units is not None: units = np.asarray(units) # Allow for a function that is the name of a method on an array if isinstance(func, str): def f(x): return getattr(x, func)() else: f = func # Handle numpy changes try: integers = rng.integers except AttributeError: integers = rng.randint # Do the bootstrap if units is not None: return _structured_bootstrap(args, n_boot, units, f, func_kwargs, integers) boot_dist = [] 
for i in range(int(n_boot)): resampler = integers(0, n, n, dtype=np.intp) # intp is indexing dtype sample = [a.take(resampler, axis=0) for a in args] boot_dist.append(f(*sample, **func_kwargs)) return np.array(boot_dist) def _structured_bootstrap(args, n_boot, units, func, func_kwargs, integers): """Resample units instead of datapoints.""" unique_units = np.unique(units) n_units = len(unique_units) args = [[a[units == unit] for unit in unique_units] for a in args] boot_dist = [] for i in range(int(n_boot)): resampler = integers(0, n_units, n_units, dtype=np.intp) sample = [[a[i] for i in resampler] for a in args] lengths = map(len, sample[0]) resampler = [integers(0, n, n, dtype=np.intp) for n in lengths] sample = [[c.take(r, axis=0) for c, r in zip(a, resampler)] for a in sample] sample = list(map(np.concatenate, sample)) boot_dist.append(func(*sample, **func_kwargs)) return np.array(boot_dist) def _handle_random_seed(seed=None): """Given a seed in one of many formats, return a random number generator. Generalizes across the numpy 1.17 changes, preferring newer functionality. 
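The normalization this helper performs can be sketched for numpy >= 1.17 alone, dropping the legacy fallback branch: pass an existing `RandomState` through untouched, and let `default_rng` handle ints, `SeedSequence` objects, or `None`. `make_rng` is a hypothetical name for this sketch.

```python
# Simplified seed normalization, assuming numpy >= 1.17.
import numpy as np

def make_rng(seed=None):
    # An existing RandomState is returned unchanged so caller state is kept
    if isinstance(seed, np.random.RandomState):
        return seed
    # default_rng accepts int, SeedSequence, Generator, or None
    return np.random.default_rng(seed)

a = make_rng(42).integers(0, 10, 5)
b = make_rng(42).integers(0, 10, 5)
```

Equal seeds yield identical draws, which is what makes bootstrap results reproducible.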
""" if isinstance(seed, np.random.RandomState): rng = seed else: try: # General interface for seeding on numpy >= 1.17 rng = np.random.default_rng(seed) except AttributeError: # We are on numpy < 1.17, handle options ourselves if isinstance(seed, (numbers.Integral, np.integer)): rng = np.random.RandomState(seed) elif seed is None: rng = np.random.RandomState() else: err = "{} cannot be used to seed the randomn number generator" raise ValueError(err.format(seed)) return rng seaborn-0.11.2/seaborn/axisgrid.py000066400000000000000000002527641410631356500170650ustar00rootroot00000000000000from itertools import product from inspect import signature import warnings from textwrap import dedent from distutils.version import LooseVersion import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt from ._core import VectorPlotter, variable_type, categorical_order from . import utils from .utils import _check_argument, adjust_legend_subtitles, _draw_figure from .palettes import color_palette, blend_palette from ._decorators import _deprecate_positional_args from ._docstrings import ( DocstringComponents, _core_docs, ) __all__ = ["FacetGrid", "PairGrid", "JointGrid", "pairplot", "jointplot"] _param_docs = DocstringComponents.from_nested_components( core=_core_docs["params"], ) class _BaseGrid: """Base class for grids of subplots.""" def set(self, **kwargs): """Set attributes on each subplot Axes.""" for ax in self.axes.flat: if ax is not None: # Handle removed axes ax.set(**kwargs) return self @property def fig(self): """DEPRECATED: prefer the `figure` property.""" # Grid.figure is preferred because it matches the Axes attribute name. # But as the maintanace burden on having this property is minimal, # let's be slow about formally deprecating it. For now just note its deprecation # in the docstring; add a warning in version 0.13, and eventually remove it. 
return self._figure @property def figure(self): """Access the :class:`matplotlib.figure.Figure` object underlying the grid.""" return self._figure def savefig(self, *args, **kwargs): """ Save an image of the plot. This wraps :meth:`matplotlib.figure.Figure.savefig`, using bbox_inches="tight" by default. Parameters are passed through to the matplotlib function. """ kwargs = kwargs.copy() kwargs.setdefault("bbox_inches", "tight") self.figure.savefig(*args, **kwargs) class Grid(_BaseGrid): """A grid that can have multiple subplots and an external legend.""" _margin_titles = False _legend_out = True def __init__(self): self._tight_layout_rect = [0, 0, 1, 1] self._tight_layout_pad = None # This attribute is set externally and is a hack to handle newer functions that # don't add proxy artists onto the Axes. We need an overall cleaner approach. self._extract_legend_handles = False def tight_layout(self, *args, **kwargs): """Call fig.tight_layout within rect that exclude the legend.""" kwargs = kwargs.copy() kwargs.setdefault("rect", self._tight_layout_rect) if self._tight_layout_pad is not None: kwargs.setdefault("pad", self._tight_layout_pad) self._figure.tight_layout(*args, **kwargs) def add_legend(self, legend_data=None, title=None, label_order=None, adjust_subtitles=False, **kwargs): """Draw a legend, maybe placing it outside axes and resizing the figure. Parameters ---------- legend_data : dict Dictionary mapping label names (or two-element tuples where the second element is a label name) to matplotlib artist handles. The default reads from ``self._legend_data``. title : string Title for the legend. The default reads from ``self._hue_var``. label_order : list of labels The order that the legend entries should appear in. The default reads from ``self.hue_names``. adjust_subtitles : bool If True, modify entries with invisible artists to left-align the labels and set the font size to that of a title. 
kwargs : key, value pairings Other keyword arguments are passed to the underlying legend methods on the Figure or Axes object. Returns ------- self : Grid instance Returns self for easy chaining. """ # Find the data for the legend if legend_data is None: legend_data = self._legend_data if label_order is None: if self.hue_names is None: label_order = list(legend_data.keys()) else: label_order = list(map(utils.to_utf8, self.hue_names)) blank_handle = mpl.patches.Patch(alpha=0, linewidth=0) handles = [legend_data.get(l, blank_handle) for l in label_order] title = self._hue_var if title is None else title if LooseVersion(mpl.__version__) < LooseVersion("3.0"): try: title_size = mpl.rcParams["axes.labelsize"] * .85 except TypeError: # labelsize is something like "large" title_size = mpl.rcParams["axes.labelsize"] else: title_size = mpl.rcParams["legend.title_fontsize"] # Unpack nested labels from a hierarchical legend labels = [] for entry in label_order: if isinstance(entry, tuple): _, label = entry else: label = entry labels.append(label) # Set default legend kwargs kwargs.setdefault("scatterpoints", 1) if self._legend_out: kwargs.setdefault("frameon", False) kwargs.setdefault("loc", "center right") # Draw a full-figure legend outside the grid figlegend = self._figure.legend(handles, labels, **kwargs) self._legend = figlegend figlegend.set_title(title, prop={"size": title_size}) if adjust_subtitles: adjust_legend_subtitles(figlegend) # Draw the plot to set the bounding boxes correctly _draw_figure(self._figure) # Calculate and set the new width of the figure so the legend fits legend_width = figlegend.get_window_extent().width / self._figure.dpi fig_width, fig_height = self._figure.get_size_inches() self._figure.set_size_inches(fig_width + legend_width, fig_height) # Draw the plot again to get the new transformations _draw_figure(self._figure) # Now calculate how much space we need on the right side legend_width = figlegend.get_window_extent().width / self._figure.dpi 
space_needed = legend_width / (fig_width + legend_width) margin = .04 if self._margin_titles else .01 self._space_needed = margin + space_needed right = 1 - self._space_needed # Place the subplot axes to give space for the legend self._figure.subplots_adjust(right=right) self._tight_layout_rect[2] = right else: # Draw a legend in the first axis ax = self.axes.flat[0] kwargs.setdefault("loc", "best") leg = ax.legend(handles, labels, **kwargs) leg.set_title(title, prop={"size": title_size}) self._legend = leg if adjust_subtitles: adjust_legend_subtitles(leg) return self def _update_legend_data(self, ax): """Extract the legend data from an axes object and save it.""" data = {} # Get data directly from the legend, which is necessary # for newer functions that don't add labeled proxy artists if ax.legend_ is not None and self._extract_legend_handles: handles = ax.legend_.legendHandles labels = [t.get_text() for t in ax.legend_.texts] data.update({l: h for h, l in zip(handles, labels)}) handles, labels = ax.get_legend_handles_labels() data.update({l: h for h, l in zip(handles, labels)}) self._legend_data.update(data) # Now clear the legend ax.legend_ = None def _get_palette(self, data, hue, hue_order, palette): """Get a list of colors for the hue variable.""" if hue is None: palette = color_palette(n_colors=1) else: hue_names = categorical_order(data[hue], hue_order) n_colors = len(hue_names) # By default use either the current color palette or HUSL if palette is None: current_palette = utils.get_color_cycle() if n_colors > len(current_palette): colors = color_palette("husl", n_colors) else: colors = color_palette(n_colors=n_colors) # Allow for palette to map from hue variable names elif isinstance(palette, dict): color_names = [palette[h] for h in hue_names] colors = color_palette(color_names, n_colors) # Otherwise act as if we just got a list of colors else: colors = color_palette(palette, n_colors) palette = color_palette(colors, n_colors) return palette @property def 
legend(self): """The :class:`matplotlib.legend.Legend` object, if present.""" try: return self._legend except AttributeError: return None _facet_docs = dict( data=dedent("""\ data : DataFrame Tidy ("long-form") dataframe where each column is a variable and each row is an observation.\ """), rowcol=dedent("""\ row, col : vectors or keys in ``data`` Variables that define subsets to plot on different facets.\ """), rowcol_order=dedent("""\ {row,col}_order : vector of strings Specify the order in which levels of the ``row`` and/or ``col`` variables appear in the grid of subplots.\ """), col_wrap=dedent("""\ col_wrap : int "Wrap" the column variable at this width, so that the column facets span multiple rows. Incompatible with a ``row`` facet.\ """), share_xy=dedent("""\ share{x,y} : bool, 'col', or 'row' optional If true, the facets will share y axes across columns and/or x axes across rows.\ """), height=dedent("""\ height : scalar Height (in inches) of each facet. See also: ``aspect``.\ """), aspect=dedent("""\ aspect : scalar Aspect ratio of each facet, so that ``aspect * height`` gives the width of each facet in inches.\ """), palette=dedent("""\ palette : palette name, list, or dict Colors to use for the different levels of the ``hue`` variable. Should be something that can be interpreted by :func:`color_palette`, or a dictionary mapping hue levels to matplotlib colors.\ """), legend_out=dedent("""\ legend_out : bool If ``True``, the figure size will be extended, and the legend will be drawn outside the plot on the center right.\ """), margin_titles=dedent("""\ margin_titles : bool If ``True``, the titles for the row variable are drawn to the right of the last column. This option is experimental and may not work in all cases.\ """), facet_kws=dedent("""\ facet_kws : dict Additional parameters passed to :class:`FacetGrid`. 
"""), ) class FacetGrid(Grid): """Multi-plot grid for plotting conditional relationships.""" @_deprecate_positional_args def __init__( self, data, *, row=None, col=None, hue=None, col_wrap=None, sharex=True, sharey=True, height=3, aspect=1, palette=None, row_order=None, col_order=None, hue_order=None, hue_kws=None, dropna=False, legend_out=True, despine=True, margin_titles=False, xlim=None, ylim=None, subplot_kws=None, gridspec_kws=None, size=None ): super(FacetGrid, self).__init__() # Handle deprecations if size is not None: height = size msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) # Determine the hue facet layer information hue_var = hue if hue is None: hue_names = None else: hue_names = categorical_order(data[hue], hue_order) colors = self._get_palette(data, hue, hue_order, palette) # Set up the lists of names for the row and column facet variables if row is None: row_names = [] else: row_names = categorical_order(data[row], row_order) if col is None: col_names = [] else: col_names = categorical_order(data[col], col_order) # Additional dict of kwarg -> list of values for mapping the hue var hue_kws = hue_kws if hue_kws is not None else {} # Make a boolean mask that is True anywhere there is an NA # value in one of the faceting variables, but only if dropna is True none_na = np.zeros(len(data), bool) if dropna: row_na = none_na if row is None else data[row].isnull() col_na = none_na if col is None else data[col].isnull() hue_na = none_na if hue is None else data[hue].isnull() not_na = ~(row_na | col_na | hue_na) else: not_na = ~none_na # Compute the grid shape ncol = 1 if col is None else len(col_names) nrow = 1 if row is None else len(row_names) self._n_facets = ncol * nrow self._col_wrap = col_wrap if col_wrap is not None: if row is not None: err = "Cannot use `row` and `col_wrap` together." 
raise ValueError(err) ncol = col_wrap nrow = int(np.ceil(len(col_names) / col_wrap)) self._ncol = ncol self._nrow = nrow # Calculate the base figure size # This can get stretched later by a legend # TODO this doesn't account for axis labels figsize = (ncol * height * aspect, nrow * height) # Validate some inputs if col_wrap is not None: margin_titles = False # Build the subplot keyword dictionary subplot_kws = {} if subplot_kws is None else subplot_kws.copy() gridspec_kws = {} if gridspec_kws is None else gridspec_kws.copy() if xlim is not None: subplot_kws["xlim"] = xlim if ylim is not None: subplot_kws["ylim"] = ylim # --- Initialize the subplot grid # Disable autolayout so legend_out works properly with mpl.rc_context({"figure.autolayout": False}): fig = plt.figure(figsize=figsize) if col_wrap is None: kwargs = dict(squeeze=False, sharex=sharex, sharey=sharey, subplot_kw=subplot_kws, gridspec_kw=gridspec_kws) axes = fig.subplots(nrow, ncol, **kwargs) if col is None and row is None: axes_dict = {} elif col is None: axes_dict = dict(zip(row_names, axes.flat)) elif row is None: axes_dict = dict(zip(col_names, axes.flat)) else: facet_product = product(row_names, col_names) axes_dict = dict(zip(facet_product, axes.flat)) else: # If wrapping the col variable we need to make the grid ourselves if gridspec_kws: warnings.warn("`gridspec_kws` ignored when using `col_wrap`") n_axes = len(col_names) axes = np.empty(n_axes, object) axes[0] = fig.add_subplot(nrow, ncol, 1, **subplot_kws) if sharex: subplot_kws["sharex"] = axes[0] if sharey: subplot_kws["sharey"] = axes[0] for i in range(1, n_axes): axes[i] = fig.add_subplot(nrow, ncol, i + 1, **subplot_kws) axes_dict = dict(zip(col_names, axes)) # --- Set up the class attributes # Attributes that are part of the public API but accessed through # a property so that Sphinx adds them to the auto class doc self._figure = fig self._axes = axes self._axes_dict = axes_dict self._legend = None # Public attributes that aren't 
explicitly documented # (It's not obvious that having them be public was a good idea) self.data = data self.row_names = row_names self.col_names = col_names self.hue_names = hue_names self.hue_kws = hue_kws # Next the private variables self._nrow = nrow self._row_var = row self._ncol = ncol self._col_var = col self._margin_titles = margin_titles self._margin_titles_texts = [] self._col_wrap = col_wrap self._hue_var = hue_var self._colors = colors self._legend_out = legend_out self._legend_data = {} self._x_var = None self._y_var = None self._dropna = dropna self._not_na = not_na # --- Make the axes look good self.tight_layout() if despine: self.despine() if sharex in [True, 'col']: for ax in self._not_bottom_axes: for label in ax.get_xticklabels(): label.set_visible(False) ax.xaxis.offsetText.set_visible(False) ax.xaxis.label.set_visible(False) if sharey in [True, 'row']: for ax in self._not_left_axes: for label in ax.get_yticklabels(): label.set_visible(False) ax.yaxis.offsetText.set_visible(False) ax.yaxis.label.set_visible(False) __init__.__doc__ = dedent("""\ Initialize the matplotlib figure and FacetGrid object. This class maps a dataset onto multiple axes arrayed in a grid of rows and columns that correspond to *levels* of variables in the dataset. The plots it produces are often called "lattice", "trellis", or "small-multiple" graphics. It can also represent levels of a third variable with the ``hue`` parameter, which plots different subsets of data in different colors. This uses color to resolve elements on a third dimension, but only draws subsets on top of each other and will not tailor the ``hue`` parameter for the specific visualization the way that axes-level functions that accept ``hue`` will. The basic workflow is to initialize the :class:`FacetGrid` object with the dataset and the variables that are used to structure the grid. 
Then one or more plotting functions can be applied to each subset by calling :meth:`FacetGrid.map` or :meth:`FacetGrid.map_dataframe`. Finally, the plot can be tweaked with other methods to do things like change the axis labels, use different ticks, or add a legend. See the detailed code examples below for more information. .. warning:: When using seaborn functions that infer semantic mappings from a dataset, care must be taken to synchronize those mappings across facets (e.g., by defining the ``hue`` mapping with a palette dict or setting the data type of the variables to ``category``). In most cases, it will be better to use a figure-level function (e.g. :func:`relplot` or :func:`catplot`) than to use :class:`FacetGrid` directly. See the :ref:`tutorial <grid_tutorial>` for more information. Parameters ---------- {data} row, col, hue : strings Variables that define subsets of the data, which will be drawn on separate facets in the grid. See the ``{{var}}_order`` parameters to control the order of levels of this variable. {col_wrap} {share_xy} {height} {aspect} {palette} {{row,col,hue}}_order : lists Order for the levels of the faceting variables. By default, this will be the order that the levels appear in ``data`` or, if the variables are pandas categoricals, the category order. hue_kws : dictionary of param -> list of values mapping Other keyword arguments to insert into the plotting call to let other plot attributes vary across levels of the hue variable (e.g. the markers in a scatterplot). {legend_out} despine : boolean Remove the top and right spines from the plots. {margin_titles} {{x, y}}lim: tuples Limits for each of the axes on each facet (only relevant when share{{x, y}} is True). subplot_kws : dict Dictionary of keyword arguments passed to matplotlib subplot(s) methods. gridspec_kws : dict Dictionary of keyword arguments passed to :class:`matplotlib.gridspec.GridSpec` (via :meth:`matplotlib.figure.Figure.subplots`). Ignored if ``col_wrap`` is not ``None``.
See Also -------- PairGrid : Subplot grid for plotting pairwise relationships relplot : Combine a relational plot and a :class:`FacetGrid` displot : Combine a distribution plot and a :class:`FacetGrid` catplot : Combine a categorical plot and a :class:`FacetGrid` lmplot : Combine a regression plot and a :class:`FacetGrid` Examples -------- .. note:: These examples use seaborn functions to demonstrate some of the advanced features of the class, but in most cases you will want to use figure-level functions (e.g. :func:`displot`, :func:`relplot`) to make the plots shown here. .. include:: ../docstrings/FacetGrid.rst """).format(**_facet_docs) def facet_data(self): """Generator for name indices and data subsets for each facet. Yields ------ (i, j, k), data_ijk : tuple of ints, DataFrame The ints provide an index into the {row, col, hue}_names attribute, and the dataframe contains a subset of the full data corresponding to each facet. The generator yields subsets that correspond with the self.axes.flat iterator, or self.axes[i, j] when `col_wrap` is None. """ data = self.data # Construct masks for the row variable if self.row_names: row_masks = [data[self._row_var] == n for n in self.row_names] else: row_masks = [np.repeat(True, len(self.data))] # Construct masks for the column variable if self.col_names: col_masks = [data[self._col_var] == n for n in self.col_names] else: col_masks = [np.repeat(True, len(self.data))] # Construct masks for the hue variable if self.hue_names: hue_masks = [data[self._hue_var] == n for n in self.hue_names] else: hue_masks = [np.repeat(True, len(self.data))] # Here is the main generator loop for (i, row), (j, col), (k, hue) in product(enumerate(row_masks), enumerate(col_masks), enumerate(hue_masks)): data_ijk = data[row & col & hue & self._not_na] yield (i, j, k), data_ijk def map(self, func, *args, **kwargs): """Apply a plotting function to each facet's subset of the data.
Parameters ---------- func : callable A plotting function that takes data and keyword arguments. It must plot to the currently active matplotlib Axes and take a `color` keyword argument. If faceting on the `hue` dimension, it must also take a `label` keyword argument. args : strings Column names in self.data that identify variables with data to plot. The data for each variable is passed to `func` in the order the variables are specified in the call. kwargs : keyword arguments All keyword arguments are passed to the plotting function. Returns ------- self : object Returns self. """ # If color was a keyword argument, grab it here kw_color = kwargs.pop("color", None) # How we use the function depends on where it comes from func_module = str(getattr(func, "__module__", "")) # Check for categorical plots without order information if func_module == "seaborn.categorical": if "order" not in kwargs: warning = ("Using the {} function without specifying " "`order` is likely to produce an incorrect " "plot.".format(func.__name__)) warnings.warn(warning) if len(args) == 3 and "hue_order" not in kwargs: warning = ("Using the {} function without specifying " "`hue_order` is likely to produce an incorrect " "plot.".format(func.__name__)) warnings.warn(warning) # Iterate over the data subsets for (row_i, col_j, hue_k), data_ijk in self.facet_data(): # If this subset is null, move on if not data_ijk.values.size: continue # Get the current axis modify_state = not func_module.startswith("seaborn") ax = self.facet_axis(row_i, col_j, modify_state) # Decide what color to plot with kwargs["color"] = self._facet_color(hue_k, kw_color) # Insert the other hue aesthetics if appropriate for kw, val_list in self.hue_kws.items(): kwargs[kw] = val_list[hue_k] # Insert a label in the keyword arguments for the legend if self._hue_var is not None: kwargs["label"] = utils.to_utf8(self.hue_names[hue_k]) # Get the actual data we are going to plot with plot_data = data_ijk[list(args)] if self._dropna: 
plot_data = plot_data.dropna() plot_args = [v for k, v in plot_data.iteritems()] # Some matplotlib functions don't handle pandas objects correctly if func_module.startswith("matplotlib"): plot_args = [v.values for v in plot_args] # Draw the plot self._facet_plot(func, ax, plot_args, kwargs) # Finalize the annotations and layout self._finalize_grid(args[:2]) return self def map_dataframe(self, func, *args, **kwargs): """Like ``.map`` but passes args as strings and inserts data in kwargs. This method is suitable for plotting with functions that accept a long-form DataFrame as a `data` keyword argument and access the data in that DataFrame using string variable names. Parameters ---------- func : callable A plotting function that takes data and keyword arguments. Unlike the `map` method, a function used here must "understand" Pandas objects. It also must plot to the currently active matplotlib Axes and take a `color` keyword argument. If faceting on the `hue` dimension, it must also take a `label` keyword argument. args : strings Column names in self.data that identify variables with data to plot. The data for each variable is passed to `func` in the order the variables are specified in the call. kwargs : keyword arguments All keyword arguments are passed to the plotting function. Returns ------- self : object Returns self. 
""" # If color was a keyword argument, grab it here kw_color = kwargs.pop("color", None) # Iterate over the data subsets for (row_i, col_j, hue_k), data_ijk in self.facet_data(): # If this subset is null, move on if not data_ijk.values.size: continue # Get the current axis modify_state = not str(func.__module__).startswith("seaborn") ax = self.facet_axis(row_i, col_j, modify_state) # Decide what color to plot with kwargs["color"] = self._facet_color(hue_k, kw_color) # Insert the other hue aesthetics if appropriate for kw, val_list in self.hue_kws.items(): kwargs[kw] = val_list[hue_k] # Insert a label in the keyword arguments for the legend if self._hue_var is not None: kwargs["label"] = self.hue_names[hue_k] # Stick the facet dataframe into the kwargs if self._dropna: data_ijk = data_ijk.dropna() kwargs["data"] = data_ijk # Draw the plot self._facet_plot(func, ax, args, kwargs) # For axis labels, prefer to use positional args for backcompat # but also extract the x/y kwargs and use if no corresponding arg axis_labels = [kwargs.get("x", None), kwargs.get("y", None)] for i, val in enumerate(args[:2]): axis_labels[i] = val self._finalize_grid(axis_labels) return self def _facet_color(self, hue_index, kw_color): color = self._colors[hue_index] if kw_color is not None: return kw_color elif color is not None: return color def _facet_plot(self, func, ax, plot_args, plot_kwargs): # Draw the plot if str(func.__module__).startswith("seaborn"): plot_kwargs = plot_kwargs.copy() semantics = ["x", "y", "hue", "size", "style"] for key, val in zip(semantics, plot_args): plot_kwargs[key] = val plot_args = [] plot_kwargs["ax"] = ax func(*plot_args, **plot_kwargs) # Sort out the supporting information self._update_legend_data(ax) def _finalize_grid(self, axlabels): """Finalize the annotations and layout.""" self.set_axis_labels(*axlabels) self.set_titles() self.tight_layout() def facet_axis(self, row_i, col_j, modify_state=True): """Make the axis identified by these indices active 
and return it.""" # Calculate the actual indices of the axes to plot on if self._col_wrap is not None: ax = self.axes.flat[col_j] else: ax = self.axes[row_i, col_j] # Get a reference to the axes object we want, and make it active if modify_state: plt.sca(ax) return ax def despine(self, **kwargs): """Remove axis spines from the facets.""" utils.despine(self._figure, **kwargs) return self def set_axis_labels(self, x_var=None, y_var=None, clear_inner=True, **kwargs): """Set axis labels on the left column and bottom row of the grid.""" if x_var is not None: self._x_var = x_var self.set_xlabels(x_var, clear_inner=clear_inner, **kwargs) if y_var is not None: self._y_var = y_var self.set_ylabels(y_var, clear_inner=clear_inner, **kwargs) return self def set_xlabels(self, label=None, clear_inner=True, **kwargs): """Label the x axis on the bottom row of the grid.""" if label is None: label = self._x_var for ax in self._bottom_axes: ax.set_xlabel(label, **kwargs) if clear_inner: for ax in self._not_bottom_axes: ax.set_xlabel("") return self def set_ylabels(self, label=None, clear_inner=True, **kwargs): """Label the y axis on the left column of the grid.""" if label is None: label = self._y_var for ax in self._left_axes: ax.set_ylabel(label, **kwargs) if clear_inner: for ax in self._not_left_axes: ax.set_ylabel("") return self def set_xticklabels(self, labels=None, step=None, **kwargs): """Set x axis tick labels of the grid.""" for ax in self.axes.flat: curr_ticks = ax.get_xticks() ax.set_xticks(curr_ticks) if labels is None: curr_labels = [l.get_text() for l in ax.get_xticklabels()] if step is not None: xticks = ax.get_xticks()[::step] curr_labels = curr_labels[::step] ax.set_xticks(xticks) ax.set_xticklabels(curr_labels, **kwargs) else: ax.set_xticklabels(labels, **kwargs) return self def set_yticklabels(self, labels=None, **kwargs): """Set y axis tick labels on the left column of the grid.""" for ax in self.axes.flat: curr_ticks = ax.get_yticks() ax.set_yticks(curr_ticks) 
if labels is None: curr_labels = [l.get_text() for l in ax.get_yticklabels()] ax.set_yticklabels(curr_labels, **kwargs) else: ax.set_yticklabels(labels, **kwargs) return self def set_titles(self, template=None, row_template=None, col_template=None, **kwargs): """Draw titles either above each facet or on the grid margins. Parameters ---------- template : string Template for all titles with the formatting keys {col_var} and {col_name} (if using a `col` faceting variable) and/or {row_var} and {row_name} (if using a `row` faceting variable). row_template: Template for the row variable when titles are drawn on the grid margins. Must have {row_var} and {row_name} formatting keys. col_template: Template for the column variable when titles are drawn on the grid margins. Must have {col_var} and {col_name} formatting keys. Returns ------- self: object Returns self. """ args = dict(row_var=self._row_var, col_var=self._col_var) kwargs["size"] = kwargs.pop("size", mpl.rcParams["axes.labelsize"]) # Establish default templates if row_template is None: row_template = "{row_var} = {row_name}" if col_template is None: col_template = "{col_var} = {col_name}" if template is None: if self._row_var is None: template = col_template elif self._col_var is None: template = row_template else: template = " | ".join([row_template, col_template]) row_template = utils.to_utf8(row_template) col_template = utils.to_utf8(col_template) template = utils.to_utf8(template) if self._margin_titles: # Remove any existing title texts for text in self._margin_titles_texts: text.remove() self._margin_titles_texts = [] if self.row_names is not None: # Draw the row titles on the right edge of the grid for i, row_name in enumerate(self.row_names): ax = self.axes[i, -1] args.update(dict(row_name=row_name)) title = row_template.format(**args) text = ax.annotate( title, xy=(1.02, .5), xycoords="axes fraction", rotation=270, ha="left", va="center", **kwargs ) self._margin_titles_texts.append(text) if self.col_names is
not None: # Draw the column titles as normal titles for j, col_name in enumerate(self.col_names): args.update(dict(col_name=col_name)) title = col_template.format(**args) self.axes[0, j].set_title(title, **kwargs) return self # Otherwise title each facet with all the necessary information if (self._row_var is not None) and (self._col_var is not None): for i, row_name in enumerate(self.row_names): for j, col_name in enumerate(self.col_names): args.update(dict(row_name=row_name, col_name=col_name)) title = template.format(**args) self.axes[i, j].set_title(title, **kwargs) elif self.row_names is not None and len(self.row_names): for i, row_name in enumerate(self.row_names): args.update(dict(row_name=row_name)) title = template.format(**args) self.axes[i, 0].set_title(title, **kwargs) elif self.col_names is not None and len(self.col_names): for i, col_name in enumerate(self.col_names): args.update(dict(col_name=col_name)) title = template.format(**args) # Index the flat array so col_wrap works self.axes.flat[i].set_title(title, **kwargs) return self def refline(self, *, x=None, y=None, color='.5', linestyle='--', **line_kws): """Add a reference line(s) to each facet. Parameters ---------- x, y : numeric Value(s) to draw the line(s) at. color : :mod:`matplotlib color <matplotlib.colors>` Specifies the color of the reference line(s). Pass ``color=None`` to use ``hue`` mapping. linestyle : str Specifies the style of the reference line(s). line_kws : key, value mappings Other keyword arguments are passed to :meth:`matplotlib.axes.Axes.axvline` when ``x`` is not None and :meth:`matplotlib.axes.Axes.axhline` when ``y`` is not None. Returns ------- :class:`FacetGrid` instance Returns ``self`` for easy method chaining.
""" line_kws['color'] = color line_kws['linestyle'] = linestyle if x is not None: self.map(plt.axvline, x=x, **line_kws) if y is not None: self.map(plt.axhline, y=y, **line_kws) # ------ Properties that are part of the public API and documented by Sphinx @property def axes(self): """An array of the :class:`matplotlib.axes.Axes` objects in the grid.""" return self._axes @property def ax(self): """The :class:`matplotlib.axes.Axes` when no faceting variables are assigned.""" if self.axes.shape == (1, 1): return self.axes[0, 0] else: err = ( "Use the `.axes` attribute when facet variables are assigned." ) raise AttributeError(err) @property def axes_dict(self): """A mapping of facet names to corresponding :class:`matplotlib.axes.Axes`. If only one of ``row`` or ``col`` is assigned, each key is a string representing a level of that variable. If both facet dimensions are assigned, each key is a ``({row_level}, {col_level})`` tuple. """ return self._axes_dict # ------ Private properties, that require some computation to get @property def _inner_axes(self): """Return a flat array of the inner axes.""" if self._col_wrap is None: return self.axes[:-1, 1:].flat else: axes = [] n_empty = self._nrow * self._ncol - self._n_facets for i, ax in enumerate(self.axes): append = ( i % self._ncol and i < (self._ncol * (self._nrow - 1)) and i < (self._ncol * (self._nrow - 1) - n_empty) ) if append: axes.append(ax) return np.array(axes, object).flat @property def _left_axes(self): """Return a flat array of the left column of axes.""" if self._col_wrap is None: return self.axes[:, 0].flat else: axes = [] for i, ax in enumerate(self.axes): if not i % self._ncol: axes.append(ax) return np.array(axes, object).flat @property def _not_left_axes(self): """Return a flat array of axes that aren't on the left column.""" if self._col_wrap is None: return self.axes[:, 1:].flat else: axes = [] for i, ax in enumerate(self.axes): if i % self._ncol: axes.append(ax) return np.array(axes, object).flat 
@property def _bottom_axes(self): """Return a flat array of the bottom row of axes.""" if self._col_wrap is None: return self.axes[-1, :].flat else: axes = [] n_empty = self._nrow * self._ncol - self._n_facets for i, ax in enumerate(self.axes): append = ( i >= (self._ncol * (self._nrow - 1)) or i >= (self._ncol * (self._nrow - 1) - n_empty) ) if append: axes.append(ax) return np.array(axes, object).flat @property def _not_bottom_axes(self): """Return a flat array of axes that aren't on the bottom row.""" if self._col_wrap is None: return self.axes[:-1, :].flat else: axes = [] n_empty = self._nrow * self._ncol - self._n_facets for i, ax in enumerate(self.axes): append = ( i < (self._ncol * (self._nrow - 1)) and i < (self._ncol * (self._nrow - 1) - n_empty) ) if append: axes.append(ax) return np.array(axes, object).flat class PairGrid(Grid): """Subplot grid for plotting pairwise relationships in a dataset. This object maps each variable in a dataset onto a column and row in a grid of multiple axes. Different axes-level plotting functions can be used to draw bivariate plots in the upper and lower triangles, and the marginal distribution of each variable can be shown on the diagonal. Several different common plots can be generated in a single line using :func:`pairplot`. Use :class:`PairGrid` when you need more flexibility. See the :ref:`tutorial <grid_tutorial>` for more information. """ @_deprecate_positional_args def __init__( self, data, *, hue=None, hue_order=None, palette=None, hue_kws=None, vars=None, x_vars=None, y_vars=None, corner=False, diag_sharey=True, height=2.5, aspect=1, layout_pad=.5, despine=True, dropna=False, size=None ): """Initialize the plot figure and PairGrid object. Parameters ---------- data : DataFrame Tidy (long-form) dataframe where each column is a variable and each row is an observation. hue : string (variable name) Variable in ``data`` to map plot aspects to different colors. This variable will be excluded from the default x and y variables.
hue_order : list of strings Order for the levels of the hue variable in the palette palette : dict or seaborn color palette Set of colors for mapping the ``hue`` variable. If a dict, keys should be values in the ``hue`` variable. hue_kws : dictionary of param -> list of values mapping Other keyword arguments to insert into the plotting call to let other plot attributes vary across levels of the hue variable (e.g. the markers in a scatterplot). vars : list of variable names Variables within ``data`` to use, otherwise use every column with a numeric datatype. {x, y}_vars : lists of variable names Variables within ``data`` to use separately for the rows and columns of the figure; i.e. to make a non-square plot. corner : bool If True, don't add axes to the upper (off-diagonal) triangle of the grid, making this a "corner" plot. height : scalar Height (in inches) of each facet. aspect : scalar Aspect * height gives the width (in inches) of each facet. layout_pad : scalar Padding between axes; passed to ``fig.tight_layout``. despine : boolean Remove the top and right spines from the plots. dropna : boolean Drop missing values from the data before plotting. See Also -------- pairplot : Easily drawing common uses of :class:`PairGrid`. FacetGrid : Subplot grid for plotting conditional relationships. Examples -------- .. 
include:: ../docstrings/PairGrid.rst """ super(PairGrid, self).__init__() # Handle deprecations if size is not None: height = size msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(UserWarning(msg)) # Sort out the variables that define the grid numeric_cols = self._find_numeric_cols(data) if hue in numeric_cols: numeric_cols.remove(hue) if vars is not None: x_vars = list(vars) y_vars = list(vars) if x_vars is None: x_vars = numeric_cols if y_vars is None: y_vars = numeric_cols if np.isscalar(x_vars): x_vars = [x_vars] if np.isscalar(y_vars): y_vars = [y_vars] self.x_vars = x_vars = list(x_vars) self.y_vars = y_vars = list(y_vars) self.square_grid = self.x_vars == self.y_vars if not x_vars: raise ValueError("No variables found for grid columns.") if not y_vars: raise ValueError("No variables found for grid rows.") # Create the figure and the array of subplots figsize = len(x_vars) * height * aspect, len(y_vars) * height # Disable autolayout so legend_out works with mpl.rc_context({"figure.autolayout": False}): fig = plt.figure(figsize=figsize) axes = fig.subplots(len(y_vars), len(x_vars), sharex="col", sharey="row", squeeze=False) # Possibly remove upper axes to make a corner grid # Note: setting up the axes is usually the most time-intensive part # of using the PairGrid. We are foregoing the speed improvement that # we would get by just not setting up the hidden axes so that we can # avoid implementing fig.subplots ourselves. But worth thinking about. 
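The corner-handling code below removes the upper-triangle axes and leaves ``None`` in their place in the axes array; from the user's side that looks like this (a minimal sketch with invented data):

```python
# With corner=True, PairGrid drops the upper (off-diagonal) triangle:
# those entries of g.axes become None rather than Axes objects.
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(50, 3)), columns=["a", "b", "c"])

g = sns.PairGrid(df, corner=True)
print(g.axes.shape)                 # 3x3 grid for three numeric columns
print(g.axes[0, 1] is None)         # upper triangle removed
```

Code that iterates over ``g.axes`` on a corner grid therefore needs to skip the ``None`` entries, as the styling loops later in ``__init__`` do.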
self._corner = corner if corner: hide_indices = np.triu_indices_from(axes, 1) for i, j in zip(*hide_indices): axes[i, j].remove() axes[i, j] = None self._figure = fig self.axes = axes self.data = data # Save what we are going to do with the diagonal self.diag_sharey = diag_sharey self.diag_vars = None self.diag_axes = None self._dropna = dropna # Label the axes self._add_axis_labels() # Sort out the hue variable self._hue_var = hue if hue is None: self.hue_names = hue_order = ["_nolegend_"] self.hue_vals = pd.Series(["_nolegend_"] * len(data), index=data.index) else: # We need hue_order and hue_names because the former is used to control # the order of drawing and the latter is used to control the order of # the legend. hue_names can become string-typed while hue_order must # retain the type of the input data. This is messy but results from # the fact that PairGrid can implement the hue-mapping logic itself # (and was originally written exclusively that way) but now can delegate # to the axes-level functions, while always handling legend creation. 
# See GH2307 hue_names = hue_order = categorical_order(data[hue], hue_order) if dropna: # Filter NA from the list of unique hue names hue_names = list(filter(pd.notnull, hue_names)) self.hue_names = hue_names self.hue_vals = data[hue] # Additional dict of kwarg -> list of values for mapping the hue var self.hue_kws = hue_kws if hue_kws is not None else {} self._orig_palette = palette self._hue_order = hue_order self.palette = self._get_palette(data, hue, hue_order, palette) self._legend_data = {} # Make the plot look nice for ax in axes[:-1, :].flat: if ax is None: continue for label in ax.get_xticklabels(): label.set_visible(False) ax.xaxis.offsetText.set_visible(False) ax.xaxis.label.set_visible(False) for ax in axes[:, 1:].flat: if ax is None: continue for label in ax.get_yticklabels(): label.set_visible(False) ax.yaxis.offsetText.set_visible(False) ax.yaxis.label.set_visible(False) self._tight_layout_rect = [.01, .01, .99, .99] self._tight_layout_pad = layout_pad self._despine = despine if despine: utils.despine(fig=fig) self.tight_layout(pad=layout_pad) def map(self, func, **kwargs): """Plot with the same function in every subplot. Parameters ---------- func : callable plotting function Must take x, y arrays as positional arguments and draw onto the "currently active" matplotlib Axes. Also needs to accept kwargs called ``color`` and ``label``. """ row_indices, col_indices = np.indices(self.axes.shape) indices = zip(row_indices.flat, col_indices.flat) self._map_bivariate(func, indices, **kwargs) return self def map_lower(self, func, **kwargs): """Plot with a bivariate function on the lower diagonal subplots. Parameters ---------- func : callable plotting function Must take x, y arrays as positional arguments and draw onto the "currently active" matplotlib Axes. Also needs to accept kwargs called ``color`` and ``label``. 
""" indices = zip(*np.tril_indices_from(self.axes, -1)) self._map_bivariate(func, indices, **kwargs) return self def map_upper(self, func, **kwargs): """Plot with a bivariate function on the upper diagonal subplots. Parameters ---------- func : callable plotting function Must take x, y arrays as positional arguments and draw onto the "currently active" matplotlib Axes. Also needs to accept kwargs called ``color`` and ``label``. """ indices = zip(*np.triu_indices_from(self.axes, 1)) self._map_bivariate(func, indices, **kwargs) return self def map_offdiag(self, func, **kwargs): """Plot with a bivariate function on the off-diagonal subplots. Parameters ---------- func : callable plotting function Must take x, y arrays as positional arguments and draw onto the "currently active" matplotlib Axes. Also needs to accept kwargs called ``color`` and ``label``. """ if self.square_grid: self.map_lower(func, **kwargs) if not self._corner: self.map_upper(func, **kwargs) else: indices = [] for i, (y_var) in enumerate(self.y_vars): for j, (x_var) in enumerate(self.x_vars): if x_var != y_var: indices.append((i, j)) self._map_bivariate(func, indices, **kwargs) return self def map_diag(self, func, **kwargs): """Plot with a univariate function on each diagonal subplot. Parameters ---------- func : callable plotting function Must take an x array as a positional argument and draw onto the "currently active" matplotlib Axes. Also needs to accept kwargs called ``color`` and ``label``. 
""" # Add special diagonal axes for the univariate plot if self.diag_axes is None: diag_vars = [] diag_axes = [] for i, y_var in enumerate(self.y_vars): for j, x_var in enumerate(self.x_vars): if x_var == y_var: # Make the density axes diag_vars.append(x_var) ax = self.axes[i, j] diag_ax = ax.twinx() diag_ax.set_axis_off() diag_axes.append(diag_ax) # Work around matplotlib bug # https://github.com/matplotlib/matplotlib/issues/15188 if not plt.rcParams.get("ytick.left", True): for tick in ax.yaxis.majorTicks: tick.tick1line.set_visible(False) # Remove main y axis from density axes in a corner plot if self._corner: ax.yaxis.set_visible(False) if self._despine: utils.despine(ax=ax, left=True) # TODO add optional density ticks (on the right) # when drawing a corner plot? if self.diag_sharey and diag_axes: # This may change in future matplotlibs # See https://github.com/matplotlib/matplotlib/pull/9923 group = diag_axes[0].get_shared_y_axes() for ax in diag_axes[1:]: group.join(ax, diag_axes[0]) self.diag_vars = np.array(diag_vars, np.object_) self.diag_axes = np.array(diag_axes, np.object_) if "hue" not in signature(func).parameters: return self._map_diag_iter_hue(func, **kwargs) # Loop over diagonal variables and axes, making one plot in each for var, ax in zip(self.diag_vars, self.diag_axes): plot_kwargs = kwargs.copy() if str(func.__module__).startswith("seaborn"): plot_kwargs["ax"] = ax else: plt.sca(ax) vector = self.data[var] if self._hue_var is not None: hue = self.data[self._hue_var] else: hue = None if self._dropna: not_na = vector.notna() if hue is not None: not_na &= hue.notna() vector = vector[not_na] if hue is not None: hue = hue[not_na] plot_kwargs.setdefault("hue", hue) plot_kwargs.setdefault("hue_order", self._hue_order) plot_kwargs.setdefault("palette", self._orig_palette) func(x=vector, **plot_kwargs) ax.legend_ = None self._add_axis_labels() return self def _map_diag_iter_hue(self, func, **kwargs): """Put marginal plot on each diagonal axes, iterating 
over hue.""" # Plot on each of the diagonal axes fixed_color = kwargs.pop("color", None) for var, ax in zip(self.diag_vars, self.diag_axes): hue_grouped = self.data[var].groupby(self.hue_vals) plot_kwargs = kwargs.copy() if str(func.__module__).startswith("seaborn"): plot_kwargs["ax"] = ax else: plt.sca(ax) for k, label_k in enumerate(self._hue_order): # Attempt to get data for this level, allowing for empty try: data_k = hue_grouped.get_group(label_k) except KeyError: data_k = pd.Series([], dtype=float) if fixed_color is None: color = self.palette[k] else: color = fixed_color if self._dropna: data_k = utils.remove_na(data_k) if str(func.__module__).startswith("seaborn"): func(x=data_k, label=label_k, color=color, **plot_kwargs) else: func(data_k, label=label_k, color=color, **plot_kwargs) self._add_axis_labels() return self def _map_bivariate(self, func, indices, **kwargs): """Draw a bivariate plot on the indicated axes.""" # This is a hack to handle the fact that new distribution plots don't add # their artists onto the axes. This is probably superior in general, but # we'll need a better way to handle it in the axisgrid functions. from .distributions import histplot, kdeplot if func is histplot or func is kdeplot: self._extract_legend_handles = True kws = kwargs.copy() # Use copy as we insert other kwargs for i, j in indices: x_var = self.x_vars[j] y_var = self.y_vars[i] ax = self.axes[i, j] if ax is None: # i.e. 
we are in corner mode continue self._plot_bivariate(x_var, y_var, ax, func, **kws) self._add_axis_labels() if "hue" in signature(func).parameters: self.hue_names = list(self._legend_data) def _plot_bivariate(self, x_var, y_var, ax, func, **kwargs): """Draw a bivariate plot on the specified axes.""" if "hue" not in signature(func).parameters: self._plot_bivariate_iter_hue(x_var, y_var, ax, func, **kwargs) return kwargs = kwargs.copy() if str(func.__module__).startswith("seaborn"): kwargs["ax"] = ax else: plt.sca(ax) if x_var == y_var: axes_vars = [x_var] else: axes_vars = [x_var, y_var] if self._hue_var is not None and self._hue_var not in axes_vars: axes_vars.append(self._hue_var) data = self.data[axes_vars] if self._dropna: data = data.dropna() x = data[x_var] y = data[y_var] if self._hue_var is None: hue = None else: hue = data.get(self._hue_var) kwargs.setdefault("hue", hue) kwargs.setdefault("hue_order", self._hue_order) kwargs.setdefault("palette", self._orig_palette) func(x=x, y=y, **kwargs) self._update_legend_data(ax) def _plot_bivariate_iter_hue(self, x_var, y_var, ax, func, **kwargs): """Draw a bivariate plot while iterating over hue subsets.""" kwargs = kwargs.copy() if str(func.__module__).startswith("seaborn"): kwargs["ax"] = ax else: plt.sca(ax) if x_var == y_var: axes_vars = [x_var] else: axes_vars = [x_var, y_var] hue_grouped = self.data.groupby(self.hue_vals) for k, label_k in enumerate(self._hue_order): kws = kwargs.copy() # Attempt to get data for this level, allowing for empty try: data_k = hue_grouped.get_group(label_k) except KeyError: data_k = pd.DataFrame(columns=axes_vars, dtype=float) if self._dropna: data_k = data_k[axes_vars].dropna() x = data_k[x_var] y = data_k[y_var] for kw, val_list in self.hue_kws.items(): kws[kw] = val_list[k] kws.setdefault("color", self.palette[k]) if self._hue_var is not None: kws["label"] = label_k if str(func.__module__).startswith("seaborn"): func(x=x, y=y, **kws) else: func(x, y, **kws) 
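# The per-hue-level pattern used above (group one column by the hue values,
# recover each level with ``get_group``, and fall back to an empty container
# when a level has no observations) can be sketched with plain pandas. The
# toy data and names here are hypothetical, for illustration only.

```python
# Sketch of the per-hue-level lookup with an empty-level fallback.
import pandas as pd

df = pd.DataFrame({"val": [1.0, 2.0, 3.0], "hue": ["a", "a", "b"]})
hue_order = ["a", "b", "c"]  # "c" has no rows, like a dropped hue level
grouped = df["val"].groupby(df["hue"])
sizes = {}
for level in hue_order:
    try:
        data_k = grouped.get_group(level)
    except KeyError:  # no observations at this level
        data_k = pd.Series([], dtype=float)
    sizes[level] = len(data_k)
```

# This mirrors why the loop above never raises for an empty hue level: every
# level in the order still contributes a (possibly empty) series.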
self._update_legend_data(ax) def _add_axis_labels(self): """Add labels to the left and bottom Axes.""" for ax, label in zip(self.axes[-1, :], self.x_vars): ax.set_xlabel(label) for ax, label in zip(self.axes[:, 0], self.y_vars): ax.set_ylabel(label) if self._corner: self.axes[0, 0].set_ylabel("") def _find_numeric_cols(self, data): """Find which variables in a DataFrame are numeric.""" numeric_cols = [] for col in data: if variable_type(data[col]) == "numeric": numeric_cols.append(col) return numeric_cols class JointGrid(_BaseGrid): """Grid for drawing a bivariate plot with marginal univariate plots. Many plots can be drawn by using the figure-level interface :func:`jointplot`. Use this class directly when you need more flexibility. """ @_deprecate_positional_args def __init__( self, *, x=None, y=None, data=None, height=6, ratio=5, space=.2, dropna=False, xlim=None, ylim=None, size=None, marginal_ticks=False, hue=None, palette=None, hue_order=None, hue_norm=None, ): # Handle deprecations if size is not None: height = size msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) # Set up the subplot grid f = plt.figure(figsize=(height, height)) gs = plt.GridSpec(ratio + 1, ratio + 1) ax_joint = f.add_subplot(gs[1:, :-1]) ax_marg_x = f.add_subplot(gs[0, :-1], sharex=ax_joint) ax_marg_y = f.add_subplot(gs[1:, -1], sharey=ax_joint) self._figure = f self.ax_joint = ax_joint self.ax_marg_x = ax_marg_x self.ax_marg_y = ax_marg_y # Turn off tick visibility for the measure axis on the marginal plots plt.setp(ax_marg_x.get_xticklabels(), visible=False) plt.setp(ax_marg_y.get_yticklabels(), visible=False) plt.setp(ax_marg_x.get_xticklabels(minor=True), visible=False) plt.setp(ax_marg_y.get_yticklabels(minor=True), visible=False) # Turn off the ticks on the density axis for the marginal plots if not marginal_ticks: plt.setp(ax_marg_x.yaxis.get_majorticklines(), visible=False) 
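# The subplot geometry set up above can be reproduced in isolation: a
# ``(ratio + 1) x (ratio + 1)`` GridSpec whose top row and right column hold
# the marginal axes, with the joint axes spanning the rest. This is a minimal
# sketch of the layout only, not the full constructor; the Agg backend is an
# assumption for headless execution.

```python
# Sketch of JointGrid's layout: marginals on the top row / right column.
import matplotlib
matplotlib.use("Agg")  # headless backend, an assumption for this sketch
import matplotlib.pyplot as plt

height, ratio = 6, 5
f = plt.figure(figsize=(height, height))
gs = plt.GridSpec(ratio + 1, ratio + 1)
ax_joint = f.add_subplot(gs[1:, :-1])                   # large lower-left block
ax_marg_x = f.add_subplot(gs[0, :-1], sharex=ax_joint)  # thin strip on top
ax_marg_y = f.add_subplot(gs[1:, -1], sharey=ax_joint)  # thin strip on right
n_axes = len(f.axes)
```

# ``ratio`` controls how many grid rows/columns the joint block spans
# relative to each one-cell marginal strip.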
plt.setp(ax_marg_x.yaxis.get_minorticklines(), visible=False) plt.setp(ax_marg_y.xaxis.get_majorticklines(), visible=False) plt.setp(ax_marg_y.xaxis.get_minorticklines(), visible=False) plt.setp(ax_marg_x.get_yticklabels(), visible=False) plt.setp(ax_marg_y.get_xticklabels(), visible=False) plt.setp(ax_marg_x.get_yticklabels(minor=True), visible=False) plt.setp(ax_marg_y.get_xticklabels(minor=True), visible=False) ax_marg_x.yaxis.grid(False) ax_marg_y.xaxis.grid(False) # Process the input variables p = VectorPlotter(data=data, variables=dict(x=x, y=y, hue=hue)) plot_data = p.plot_data.loc[:, p.plot_data.notna().any()] # Possibly drop NA if dropna: plot_data = plot_data.dropna() def get_var(var): vector = plot_data.get(var, None) if vector is not None: vector = vector.rename(p.variables.get(var, None)) return vector self.x = get_var("x") self.y = get_var("y") self.hue = get_var("hue") for axis in "xy": name = p.variables.get(axis, None) if name is not None: getattr(ax_joint, f"set_{axis}label")(name) if xlim is not None: ax_joint.set_xlim(xlim) if ylim is not None: ax_joint.set_ylim(ylim) # Store the semantic mapping parameters for axes-level functions self._hue_params = dict(palette=palette, hue_order=hue_order, hue_norm=hue_norm) # Make the grid look nice utils.despine(f) if not marginal_ticks: utils.despine(ax=ax_marg_x, left=True) utils.despine(ax=ax_marg_y, bottom=True) for axes in [ax_marg_x, ax_marg_y]: for axis in [axes.xaxis, axes.yaxis]: axis.label.set_visible(False) f.tight_layout() f.subplots_adjust(hspace=space, wspace=space) def _inject_kwargs(self, func, kws, params): """Add params to kws if they are accepted by func.""" func_params = signature(func).parameters for key, val in params.items(): if key in func_params: kws.setdefault(key, val) def plot(self, joint_func, marginal_func, **kwargs): """Draw the plot by passing functions for joint and marginal axes. This method passes the ``kwargs`` dictionary to both functions. 
If you need more control, call :meth:`JointGrid.plot_joint` and :meth:`JointGrid.plot_marginals` directly with specific parameters. Parameters ---------- joint_func, marginal_func : callables Functions to draw the bivariate and univariate plots. See methods referenced above for information about the required characteristics of these functions. kwargs Additional keyword arguments are passed to both functions. Returns ------- :class:`JointGrid` instance Returns ``self`` for easy method chaining. """ self.plot_marginals(marginal_func, **kwargs) self.plot_joint(joint_func, **kwargs) return self def plot_joint(self, func, **kwargs): """Draw a bivariate plot on the joint axes of the grid. Parameters ---------- func : plotting callable If a seaborn function, it should accept ``x`` and ``y``. Otherwise, it must accept ``x`` and ``y`` vectors of data as the first two positional arguments, and it must plot on the "current" axes. If ``hue`` was defined in the class constructor, the function must accept ``hue`` as a parameter. kwargs Keyword argument are passed to the plotting function. Returns ------- :class:`JointGrid` instance Returns ``self`` for easy method chaining. """ kwargs = kwargs.copy() if str(func.__module__).startswith("seaborn"): kwargs["ax"] = self.ax_joint else: plt.sca(self.ax_joint) if self.hue is not None: kwargs["hue"] = self.hue self._inject_kwargs(func, kwargs, self._hue_params) if str(func.__module__).startswith("seaborn"): func(x=self.x, y=self.y, **kwargs) else: func(self.x, self.y, **kwargs) return self def plot_marginals(self, func, **kwargs): """Draw univariate plots on each marginal axes. Parameters ---------- func : plotting callable If a seaborn function, it should accept ``x`` and ``y`` and plot when only one of them is defined. Otherwise, it must accept a vector of data as the first positional argument and determine its orientation using the ``vertical`` parameter, and it must plot on the "current" axes. 
If ``hue`` was defined in the class constructor, it must accept ``hue`` as a parameter. kwargs Keyword arguments are passed to the plotting function. Returns ------- :class:`JointGrid` instance Returns ``self`` for easy method chaining. """ seaborn_func = ( str(func.__module__).startswith("seaborn") # deprecated distplot has a legacy API, special case it and not func.__name__ == "distplot" ) func_params = signature(func).parameters kwargs = kwargs.copy() if self.hue is not None: kwargs["hue"] = self.hue self._inject_kwargs(func, kwargs, self._hue_params) if "legend" in func_params: kwargs.setdefault("legend", False) if "orientation" in func_params: # e.g. plt.hist orient_kw_x = {"orientation": "vertical"} orient_kw_y = {"orientation": "horizontal"} elif "vertical" in func_params: # e.g. sns.distplot (also how did this get backwards?) orient_kw_x = {"vertical": False} orient_kw_y = {"vertical": True} if seaborn_func: func(x=self.x, ax=self.ax_marg_x, **kwargs) else: plt.sca(self.ax_marg_x) func(self.x, **orient_kw_x, **kwargs) if seaborn_func: func(y=self.y, ax=self.ax_marg_y, **kwargs) else: plt.sca(self.ax_marg_y) func(self.y, **orient_kw_y, **kwargs) self.ax_marg_x.yaxis.get_label().set_visible(False) self.ax_marg_y.xaxis.get_label().set_visible(False) return self def refline( self, *, x=None, y=None, joint=True, marginal=True, color='.5', linestyle='--', **line_kws ): """Add a reference line(s) to joint and/or marginal axes. Parameters ---------- x, y : numeric Value(s) to draw the line(s) at. joint, marginal : bools Whether to add the reference line(s) to the joint/marginal axes. color : :mod:`matplotlib color <matplotlib.colors>` Specifies the color of the reference line(s). linestyle : str Specifies the style of the reference line(s). line_kws : key, value mappings Other keyword arguments are passed to :meth:`matplotlib.axes.Axes.axvline` when ``x`` is not None and :meth:`matplotlib.axes.Axes.axhline` when ``y`` is not None.
Returns ------- :class:`JointGrid` instance Returns ``self`` for easy method chaining. """ line_kws['color'] = color line_kws['linestyle'] = linestyle if x is not None: if joint: self.ax_joint.axvline(x, **line_kws) if marginal: self.ax_marg_x.axvline(x, **line_kws) if y is not None: if joint: self.ax_joint.axhline(y, **line_kws) if marginal: self.ax_marg_y.axhline(y, **line_kws) return self def set_axis_labels(self, xlabel="", ylabel="", **kwargs): """Set axis labels on the bivariate axes. Parameters ---------- xlabel, ylabel : strings Label names for the x and y variables. kwargs : key, value mappings Other keyword arguments are passed to the following functions: - :meth:`matplotlib.axes.Axes.set_xlabel` - :meth:`matplotlib.axes.Axes.set_ylabel` Returns ------- :class:`JointGrid` instance Returns ``self`` for easy method chaining. """ self.ax_joint.set_xlabel(xlabel, **kwargs) self.ax_joint.set_ylabel(ylabel, **kwargs) return self JointGrid.__init__.__doc__ = """\ Set up the grid of subplots and store data internally for easy plotting. Parameters ---------- {params.core.xy} {params.core.data} height : number Size of each side of the figure in inches (it will be square). ratio : number Ratio of joint axes height to marginal axes height. space : number Space between the joint and marginal axes dropna : bool If True, remove missing observations before plotting. {{x, y}}lim : pairs of numbers Set axis limits to these values before plotting. marginal_ticks : bool If False, suppress ticks on the count/density axis of the marginal plots. {params.core.hue} Note: unlike in :class:`FacetGrid` or :class:`PairGrid`, the axes-level functions must support ``hue`` to use it in :class:`JointGrid`. {params.core.palette} {params.core.hue_order} {params.core.hue_norm} See Also -------- {seealso.jointplot} {seealso.pairgrid} {seealso.pairplot} Examples -------- .. 
include:: ../docstrings/JointGrid.rst """.format( params=_param_docs, returns=_core_docs["returns"], seealso=_core_docs["seealso"], ) @_deprecate_positional_args def pairplot( data, *, hue=None, hue_order=None, palette=None, vars=None, x_vars=None, y_vars=None, kind="scatter", diag_kind="auto", markers=None, height=2.5, aspect=1, corner=False, dropna=False, plot_kws=None, diag_kws=None, grid_kws=None, size=None, ): """Plot pairwise relationships in a dataset. By default, this function will create a grid of Axes such that each numeric variable in ``data`` will by shared across the y-axes across a single row and the x-axes across a single column. The diagonal plots are treated differently: a univariate distribution plot is drawn to show the marginal distribution of the data in each column. It is also possible to show a subset of variables or plot different variables on the rows and columns. This is a high-level interface for :class:`PairGrid` that is intended to make it easy to draw a few common styles. You should use :class:`PairGrid` directly if you need more flexibility. Parameters ---------- data : `pandas.DataFrame` Tidy (long-form) dataframe where each column is a variable and each row is an observation. hue : name of variable in ``data`` Variable in ``data`` to map plot aspects to different colors. hue_order : list of strings Order for the levels of the hue variable in the palette palette : dict or seaborn color palette Set of colors for mapping the ``hue`` variable. If a dict, keys should be values in the ``hue`` variable. vars : list of variable names Variables within ``data`` to use, otherwise use every column with a numeric datatype. {x, y}_vars : lists of variable names Variables within ``data`` to use separately for the rows and columns of the figure; i.e. to make a non-square plot. kind : {'scatter', 'kde', 'hist', 'reg'} Kind of plot to make. diag_kind : {'auto', 'hist', 'kde', None} Kind of plot for the diagonal subplots. 
If 'auto', choose based on whether or not ``hue`` is used. markers : single matplotlib marker code or list Either the marker to use for all scatterplot points or a list of markers with a length the same as the number of levels in the hue variable so that differently colored points will also have different scatterplot markers. height : scalar Height (in inches) of each facet. aspect : scalar Aspect * height gives the width (in inches) of each facet. corner : bool If True, don't add axes to the upper (off-diagonal) triangle of the grid, making this a "corner" plot. dropna : boolean Drop missing values from the data before plotting. {plot, diag, grid}_kws : dicts Dictionaries of keyword arguments. ``plot_kws`` are passed to the bivariate plotting function, ``diag_kws`` are passed to the univariate plotting function, and ``grid_kws`` are passed to the :class:`PairGrid` constructor. Returns ------- grid : :class:`PairGrid` Returns the underlying :class:`PairGrid` instance for further tweaking. See Also -------- PairGrid : Subplot grid for more flexible plotting of pairwise relationships. JointGrid : Grid for plotting joint and marginal distributions of two variables. Examples -------- .. 
include:: ../docstrings/pairplot.rst """ # Avoid circular import from .distributions import histplot, kdeplot # Handle deprecations if size is not None: height = size msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) if not isinstance(data, pd.DataFrame): raise TypeError( "'data' must be pandas DataFrame object, not: {typefound}".format( typefound=type(data))) plot_kws = {} if plot_kws is None else plot_kws.copy() diag_kws = {} if diag_kws is None else diag_kws.copy() grid_kws = {} if grid_kws is None else grid_kws.copy() # Resolve "auto" diag kind if diag_kind == "auto": if hue is None: diag_kind = "kde" if kind == "kde" else "hist" else: diag_kind = "hist" if kind == "hist" else "kde" # Set up the PairGrid grid_kws.setdefault("diag_sharey", diag_kind == "hist") grid = PairGrid(data, vars=vars, x_vars=x_vars, y_vars=y_vars, hue=hue, hue_order=hue_order, palette=palette, corner=corner, height=height, aspect=aspect, dropna=dropna, **grid_kws) # Add the markers here as PairGrid has figured out how many levels of the # hue variable are needed and we don't want to duplicate that process if markers is not None: if kind == "reg": # Needed until regplot supports style if grid.hue_names is None: n_markers = 1 else: n_markers = len(grid.hue_names) if not isinstance(markers, list): markers = [markers] * n_markers if len(markers) != n_markers: raise ValueError(("markers must be a singleton or a list of " "markers for each level of the hue variable")) grid.hue_kws = {"marker": markers} elif kind == "scatter": if isinstance(markers, str): plot_kws["marker"] = markers elif hue is not None: plot_kws["style"] = data[hue] plot_kws["markers"] = markers # Draw the marginal plots on the diagonal diag_kws = diag_kws.copy() diag_kws.setdefault("legend", False) if diag_kind == "hist": grid.map_diag(histplot, **diag_kws) elif diag_kind == "kde": diag_kws.setdefault("fill", True) diag_kws.setdefault("warn_singular", 
False) grid.map_diag(kdeplot, **diag_kws) # Maybe plot on the off-diagonals if diag_kind is not None: plotter = grid.map_offdiag else: plotter = grid.map if kind == "scatter": from .relational import scatterplot # Avoid circular import plotter(scatterplot, **plot_kws) elif kind == "reg": from .regression import regplot # Avoid circular import plotter(regplot, **plot_kws) elif kind == "kde": from .distributions import kdeplot # Avoid circular import plot_kws.setdefault("warn_singular", False) plotter(kdeplot, **plot_kws) elif kind == "hist": from .distributions import histplot # Avoid circular import plotter(histplot, **plot_kws) # Add a legend if hue is not None: grid.add_legend() grid.tight_layout() return grid @_deprecate_positional_args def jointplot( *, x=None, y=None, data=None, kind="scatter", color=None, height=6, ratio=5, space=.2, dropna=False, xlim=None, ylim=None, marginal_ticks=False, joint_kws=None, marginal_kws=None, hue=None, palette=None, hue_order=None, hue_norm=None, **kwargs ): # Avoid circular imports from .relational import scatterplot from .regression import regplot, residplot from .distributions import histplot, kdeplot, _freedman_diaconis_bins # Handle deprecations if "size" in kwargs: height = kwargs.pop("size") msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) # Set up empty default kwarg dicts joint_kws = {} if joint_kws is None else joint_kws.copy() joint_kws.update(kwargs) marginal_kws = {} if marginal_kws is None else marginal_kws.copy() # Handle deprecations of distplot-specific kwargs distplot_keys = [ "rug", "fit", "hist_kws", "norm_hist", "rug_kws", ] unused_keys = [] for key in distplot_keys: if key in marginal_kws: unused_keys.append(key) marginal_kws.pop(key) if unused_keys and kind != "kde": msg = ( "The marginal plotting function has changed to `histplot`," " which does not accept the following argument(s): {}."
).format(", ".join(unused_keys)) warnings.warn(msg, UserWarning) # Validate the plot kind plot_kinds = ["scatter", "hist", "hex", "kde", "reg", "resid"] _check_argument("kind", plot_kinds, kind) # Raise early if using `hue` with a kind that does not support it if hue is not None and kind in ["hex", "reg", "resid"]: msg = ( f"Use of `hue` with `kind='{kind}'` is not currently supported." ) raise ValueError(msg) # Make a colormap based off the plot color # (Currently used only for kind="hex") if color is None: color = "C0" color_rgb = mpl.colors.colorConverter.to_rgb(color) colors = [utils.set_hls_values(color_rgb, l=l) # noqa for l in np.linspace(1, 0, 12)] cmap = blend_palette(colors, as_cmap=True) # Matplotlib's hexbin plot is not na-robust if kind == "hex": dropna = True # Initialize the JointGrid object grid = JointGrid( data=data, x=x, y=y, hue=hue, palette=palette, hue_order=hue_order, hue_norm=hue_norm, dropna=dropna, height=height, ratio=ratio, space=space, xlim=xlim, ylim=ylim, marginal_ticks=marginal_ticks, ) if grid.hue is not None: marginal_kws.setdefault("legend", False) # Plot the data using the grid if kind.startswith("scatter"): joint_kws.setdefault("color", color) grid.plot_joint(scatterplot, **joint_kws) if grid.hue is None: marg_func = histplot else: marg_func = kdeplot marginal_kws.setdefault("warn_singular", False) marginal_kws.setdefault("fill", True) marginal_kws.setdefault("color", color) grid.plot_marginals(marg_func, **marginal_kws) elif kind.startswith("hist"): # TODO process pair parameters for bins, etc. 
and pass # to both joint and marginal plots joint_kws.setdefault("color", color) grid.plot_joint(histplot, **joint_kws) marginal_kws.setdefault("kde", False) marginal_kws.setdefault("color", color) marg_x_kws = marginal_kws.copy() marg_y_kws = marginal_kws.copy() pair_keys = "bins", "binwidth", "binrange" for key in pair_keys: if isinstance(joint_kws.get(key), tuple): x_val, y_val = joint_kws[key] marg_x_kws.setdefault(key, x_val) marg_y_kws.setdefault(key, y_val) histplot(data=data, x=x, hue=hue, **marg_x_kws, ax=grid.ax_marg_x) histplot(data=data, y=y, hue=hue, **marg_y_kws, ax=grid.ax_marg_y) elif kind.startswith("kde"): joint_kws.setdefault("color", color) joint_kws.setdefault("warn_singular", False) grid.plot_joint(kdeplot, **joint_kws) marginal_kws.setdefault("color", color) if "fill" in joint_kws: marginal_kws.setdefault("fill", joint_kws["fill"]) grid.plot_marginals(kdeplot, **marginal_kws) elif kind.startswith("hex"): x_bins = min(_freedman_diaconis_bins(grid.x), 50) y_bins = min(_freedman_diaconis_bins(grid.y), 50) gridsize = int(np.mean([x_bins, y_bins])) joint_kws.setdefault("gridsize", gridsize) joint_kws.setdefault("cmap", cmap) grid.plot_joint(plt.hexbin, **joint_kws) marginal_kws.setdefault("kde", False) marginal_kws.setdefault("color", color) grid.plot_marginals(histplot, **marginal_kws) elif kind.startswith("reg"): marginal_kws.setdefault("color", color) marginal_kws.setdefault("kde", True) grid.plot_marginals(histplot, **marginal_kws) joint_kws.setdefault("color", color) grid.plot_joint(regplot, **joint_kws) elif kind.startswith("resid"): joint_kws.setdefault("color", color) grid.plot_joint(residplot, **joint_kws) x, y = grid.ax_joint.collections[0].get_offsets().T marginal_kws.setdefault("color", color) histplot(x=x, hue=hue, ax=grid.ax_marg_x, **marginal_kws) histplot(y=y, hue=hue, ax=grid.ax_marg_y, **marginal_kws) return grid jointplot.__doc__ = """\ Draw a plot of two variables with bivariate and univariate graphs.
This function provides a convenient interface to the :class:`JointGrid` class, with several canned plot kinds. This is intended to be a fairly lightweight wrapper; if you need more flexibility, you should use :class:`JointGrid` directly. Parameters ---------- {params.core.xy} {params.core.data} kind : {{ "scatter" | "kde" | "hist" | "hex" | "reg" | "resid" }} Kind of plot to draw. See the examples for references to the underlying functions. {params.core.color} height : numeric Size of the figure (it will be square). ratio : numeric Ratio of joint axes height to marginal axes height. space : numeric Space between the joint and marginal axes dropna : bool If True, remove observations that are missing from ``x`` and ``y``. {{x, y}}lim : pairs of numbers Axis limits to set before plotting. marginal_ticks : bool If False, suppress ticks on the count/density axis of the marginal plots. {{joint, marginal}}_kws : dicts Additional keyword arguments for the plot components. {params.core.hue} Semantic variable that is mapped to determine the color of plot elements. {params.core.palette} {params.core.hue_order} {params.core.hue_norm} kwargs Additional keyword arguments are passed to the function used to draw the plot on the joint Axes, superseding items in the ``joint_kws`` dictionary. Returns ------- {returns.jointgrid} See Also -------- {seealso.jointgrid} {seealso.pairgrid} {seealso.pairplot} Examples -------- .. 
include:: ../docstrings/jointplot.rst """.format( params=_param_docs, returns=_core_docs["returns"], seealso=_core_docs["seealso"], )

seaborn-0.11.2/seaborn/categorical.py

from textwrap import dedent from numbers import Number import colorsys import numpy as np from scipy import stats import pandas as pd import matplotlib as mpl from matplotlib.collections import PatchCollection import matplotlib.patches as Patches import matplotlib.pyplot as plt import warnings from distutils.version import LooseVersion from ._core import variable_type, infer_orient, categorical_order from . import utils from .utils import remove_na from .algorithms import bootstrap from .palettes import color_palette, husl_palette, light_palette, dark_palette from .axisgrid import FacetGrid, _facet_docs from ._decorators import _deprecate_positional_args __all__ = [ "catplot", "factorplot", "stripplot", "swarmplot", "boxplot", "violinplot", "boxenplot", "pointplot", "barplot", "countplot", ] class _CategoricalPlotter(object): width = .8 default_palette = "light" require_numeric = True def establish_variables(self, x=None, y=None, hue=None, data=None, orient=None, order=None, hue_order=None, units=None): """Convert input specification into a common representation.""" # Option 1: # We are plotting a wide-form dataset # ----------------------------------- if x is None and y is None: # Do a sanity check on the inputs if hue is not None: error = "Cannot use `hue` without `x` and `y`" raise ValueError(error) # No hue grouping with wide inputs plot_hues = None hue_title = None hue_names = None # No statistical units with wide inputs plot_units = None # We also won't get axes labels here value_label = None group_label = None # Option 1a: # The input data is a Pandas DataFrame # ------------------------------------ if isinstance(data, pd.DataFrame): # Order the data correctly if order is None: order = [] # Reduce to just
numeric columns for col in data: if variable_type(data[col]) == "numeric": order.append(col) plot_data = data[order] group_names = order group_label = data.columns.name # Convert to a list of arrays, the common representation iter_data = plot_data.iteritems() plot_data = [np.asarray(s, float) for k, s in iter_data] # Option 1b: # The input data is an array or list # ---------------------------------- else: # We can't reorder the data if order is not None: error = "Input data must be a pandas object to reorder" raise ValueError(error) # The input data is an array if hasattr(data, "shape"): if len(data.shape) == 1: if np.isscalar(data[0]): plot_data = [data] else: plot_data = list(data) elif len(data.shape) == 2: nr, nc = data.shape if nr == 1 or nc == 1: plot_data = [data.ravel()] else: plot_data = [data[:, i] for i in range(nc)] else: error = ("Input `data` can have no " "more than 2 dimensions") raise ValueError(error) # Check if `data` is None to let us bail out here (for testing) elif data is None: plot_data = [[]] # The input data is a flat list elif np.isscalar(data[0]): plot_data = [data] # The input data is a nested list # This will catch some things that might fail later # but exhaustive checks are hard else: plot_data = data # Convert to a list of arrays, the common representation plot_data = [np.asarray(d, float) for d in plot_data] # The group names will just be numeric indices group_names = list(range((len(plot_data)))) # Figure out the plotting orientation orient = "h" if str(orient).startswith("h") else "v" # Option 2: # We are plotting a long-form dataset # ----------------------------------- else: # See if we need to get variables from `data` if data is not None: x = data.get(x, x) y = data.get(y, y) hue = data.get(hue, hue) units = data.get(units, units) # Validate the inputs for var in [x, y, hue, units]: if isinstance(var, str): err = "Could not interpret input '{}'".format(var) raise ValueError(err) # Figure out the plotting orientation orient = 
infer_orient( x, y, orient, require_numeric=self.require_numeric ) # Option 2a: # We are plotting a single set of data # ------------------------------------ if x is None or y is None: # Determine where the data are vals = y if x is None else x # Put them into the common representation plot_data = [np.asarray(vals)] # Get a label for the value axis if hasattr(vals, "name"): value_label = vals.name else: value_label = None # This plot will not have group labels or hue nesting groups = None group_label = None group_names = [] plot_hues = None hue_names = None hue_title = None plot_units = None # Option 2b: # We are grouping the data values by another variable # --------------------------------------------------- else: # Determine which role each variable will play if orient == "v": vals, groups = y, x else: vals, groups = x, y # Get the categorical axis label group_label = None if hasattr(groups, "name"): group_label = groups.name # Get the order on the categorical axis group_names = categorical_order(groups, order) # Group the numeric data plot_data, value_label = self._group_longform(vals, groups, group_names) # Now handle the hue levels for nested ordering if hue is None: plot_hues = None hue_title = None hue_names = None else: # Get the order of the hue levels hue_names = categorical_order(hue, hue_order) # Group the hue data plot_hues, hue_title = self._group_longform(hue, groups, group_names) # Now handle the units for nested observations if units is None: plot_units = None else: plot_units, _ = self._group_longform(units, groups, group_names) # Assign object attributes # ------------------------ self.orient = orient self.plot_data = plot_data self.group_label = group_label self.value_label = value_label self.group_names = group_names self.plot_hues = plot_hues self.hue_title = hue_title self.hue_names = hue_names self.plot_units = plot_units def _group_longform(self, vals, grouper, order): """Group a long-form variable by another with correct order.""" # 
Ensure that the groupby will work if not isinstance(vals, pd.Series): if isinstance(grouper, pd.Series): index = grouper.index else: index = None vals = pd.Series(vals, index=index) # Group the val data grouped_vals = vals.groupby(grouper) out_data = [] for g in order: try: g_vals = grouped_vals.get_group(g) except KeyError: g_vals = np.array([]) out_data.append(g_vals) # Get the vals axis label label = vals.name return out_data, label def establish_colors(self, color, palette, saturation): """Get a list of colors for the main component of the plots.""" if self.hue_names is None: n_colors = len(self.plot_data) else: n_colors = len(self.hue_names) # Determine the main colors if color is None and palette is None: # Determine whether the current palette will have enough values # If not, we'll default to the husl palette so each is distinct current_palette = utils.get_color_cycle() if n_colors <= len(current_palette): colors = color_palette(n_colors=n_colors) else: colors = husl_palette(n_colors, l=.7) # noqa elif palette is None: # When passing a specific color, the interpretation depends # on whether there is a hue variable or not. # If so, we will make a blend palette so that the different # levels have some amount of variation. 
            if self.hue_names is None:
                colors = [color] * n_colors
            else:
                if self.default_palette == "light":
                    colors = light_palette(color, n_colors)
                elif self.default_palette == "dark":
                    colors = dark_palette(color, n_colors)
                else:
                    raise RuntimeError("No default palette specified")

        else:

            # Let `palette` be a dict mapping level to color
            if isinstance(palette, dict):
                if self.hue_names is None:
                    levels = self.group_names
                else:
                    levels = self.hue_names
                palette = [palette[l] for l in levels]

            colors = color_palette(palette, n_colors)

        # Desaturate a bit because these are patches
        if saturation < 1:
            colors = color_palette(colors, desat=saturation)

        # Convert the colors to a common representation
        rgb_colors = color_palette(colors)

        # Determine the gray color to use for the lines framing the plot
        light_vals = [colorsys.rgb_to_hls(*c)[1] for c in rgb_colors]
        lum = min(light_vals) * .6
        gray = mpl.colors.rgb2hex((lum, lum, lum))

        # Assign object attributes
        self.colors = rgb_colors
        self.gray = gray

    @property
    def hue_offsets(self):
        """A list of center positions for plots when hue nesting is used."""
        n_levels = len(self.hue_names)
        if self.dodge:
            each_width = self.width / n_levels
            offsets = np.linspace(0, self.width - each_width, n_levels)
            offsets -= offsets.mean()
        else:
            offsets = np.zeros(n_levels)

        return offsets

    @property
    def nested_width(self):
        """A float with the width of plot elements when hue nesting is used."""
        if self.dodge:
            width = self.width / len(self.hue_names) * .98
        else:
            width = self.width
        return width

    def annotate_axes(self, ax):
        """Add descriptive labels to an Axes object."""
        if self.orient == "v":
            xlabel, ylabel = self.group_label, self.value_label
        else:
            xlabel, ylabel = self.value_label, self.group_label

        if xlabel is not None:
            ax.set_xlabel(xlabel)
        if ylabel is not None:
            ax.set_ylabel(ylabel)

        group_names = self.group_names
        if not group_names:
            group_names = ["" for _ in range(len(self.plot_data))]

        if self.orient == "v":
            ax.set_xticks(np.arange(len(self.plot_data)))
ax.set_xticklabels(group_names) else: ax.set_yticks(np.arange(len(self.plot_data))) ax.set_yticklabels(group_names) if self.orient == "v": ax.xaxis.grid(False) ax.set_xlim(-.5, len(self.plot_data) - .5, auto=None) else: ax.yaxis.grid(False) ax.set_ylim(-.5, len(self.plot_data) - .5, auto=None) if self.hue_names is not None: leg = ax.legend(loc="best", title=self.hue_title) if self.hue_title is not None: if LooseVersion(mpl.__version__) < "3.0": # Old Matplotlib has no legend title size rcparam try: title_size = mpl.rcParams["axes.labelsize"] * .85 except TypeError: # labelsize is something like "large" title_size = mpl.rcParams["axes.labelsize"] prop = mpl.font_manager.FontProperties(size=title_size) leg.set_title(self.hue_title, prop=prop) def add_legend_data(self, ax, color, label): """Add a dummy patch object so we can get legend data.""" rect = plt.Rectangle([0, 0], 0, 0, linewidth=self.linewidth / 2, edgecolor=self.gray, facecolor=color, label=label) ax.add_patch(rect) class _BoxPlotter(_CategoricalPlotter): def __init__(self, x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, fliersize, linewidth): self.establish_variables(x, y, hue, data, orient, order, hue_order) self.establish_colors(color, palette, saturation) self.dodge = dodge self.width = width self.fliersize = fliersize if linewidth is None: linewidth = mpl.rcParams["lines.linewidth"] self.linewidth = linewidth def draw_boxplot(self, ax, kws): """Use matplotlib to draw a boxplot on an Axes.""" vert = self.orient == "v" props = {} for obj in ["box", "whisker", "cap", "median", "flier"]: props[obj] = kws.pop(obj + "props", {}) for i, group_data in enumerate(self.plot_data): if self.plot_hues is None: # Handle case where there is data at this level if group_data.size == 0: continue # Draw a single box or a set of boxes # with a single level of grouping box_data = np.asarray(remove_na(group_data)) # Handle case where there is no non-null data if box_data.size == 0: 
continue artist_dict = ax.boxplot(box_data, vert=vert, patch_artist=True, positions=[i], widths=self.width, **kws) color = self.colors[i] self.restyle_boxplot(artist_dict, color, props) else: # Draw nested groups of boxes offsets = self.hue_offsets for j, hue_level in enumerate(self.hue_names): # Add a legend for this hue level if not i: self.add_legend_data(ax, self.colors[j], hue_level) # Handle case where there is data at this level if group_data.size == 0: continue hue_mask = self.plot_hues[i] == hue_level box_data = np.asarray(remove_na(group_data[hue_mask])) # Handle case where there is no non-null data if box_data.size == 0: continue center = i + offsets[j] artist_dict = ax.boxplot(box_data, vert=vert, patch_artist=True, positions=[center], widths=self.nested_width, **kws) self.restyle_boxplot(artist_dict, self.colors[j], props) # Add legend data, but just for one set of boxes def restyle_boxplot(self, artist_dict, color, props): """Take a drawn matplotlib boxplot and make it look nice.""" for box in artist_dict["boxes"]: box.update(dict(facecolor=color, zorder=.9, edgecolor=self.gray, linewidth=self.linewidth)) box.update(props["box"]) for whisk in artist_dict["whiskers"]: whisk.update(dict(color=self.gray, linewidth=self.linewidth, linestyle="-")) whisk.update(props["whisker"]) for cap in artist_dict["caps"]: cap.update(dict(color=self.gray, linewidth=self.linewidth)) cap.update(props["cap"]) for med in artist_dict["medians"]: med.update(dict(color=self.gray, linewidth=self.linewidth)) med.update(props["median"]) for fly in artist_dict["fliers"]: fly.update(dict(markerfacecolor=self.gray, marker="d", markeredgecolor=self.gray, markersize=self.fliersize)) fly.update(props["flier"]) def plot(self, ax, boxplot_kws): """Make the plot.""" self.draw_boxplot(ax, boxplot_kws) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _ViolinPlotter(_CategoricalPlotter): def __init__(self, x, y, hue, data, order, hue_order, bw, cut, scale, scale_hue, 
                 gridsize, width, inner, split, dodge, orient, linewidth,
                 color, palette, saturation):

        self.establish_variables(x, y, hue, data, orient, order, hue_order)
        self.establish_colors(color, palette, saturation)
        self.estimate_densities(bw, cut, scale, scale_hue, gridsize)

        self.gridsize = gridsize
        self.width = width
        self.dodge = dodge

        if inner is not None:
            if not any([inner.startswith("quart"),
                        inner.startswith("box"),
                        inner.startswith("stick"),
                        inner.startswith("point")]):
                err = "Inner style '{}' not recognized".format(inner)
                raise ValueError(err)
        self.inner = inner

        if split and self.hue_names is not None and len(self.hue_names) != 2:
            msg = "There must be exactly two hue levels to use `split`."
            raise ValueError(msg)
        self.split = split

        if linewidth is None:
            linewidth = mpl.rcParams["lines.linewidth"]
        self.linewidth = linewidth

    def estimate_densities(self, bw, cut, scale, scale_hue, gridsize):
        """Find the support and density for all of the data."""
        # Initialize data structures to keep track of plotting data
        if self.hue_names is None:
            support = []
            density = []
            counts = np.zeros(len(self.plot_data))
            max_density = np.zeros(len(self.plot_data))
        else:
            support = [[] for _ in self.plot_data]
            density = [[] for _ in self.plot_data]
            size = len(self.group_names), len(self.hue_names)
            counts = np.zeros(size)
            max_density = np.zeros(size)

        for i, group_data in enumerate(self.plot_data):

            # Option 1: we have a single level of grouping
            # --------------------------------------------

            if self.plot_hues is None:

                # Strip missing datapoints
                kde_data = remove_na(group_data)

                # Handle special case of no data at this level
                if kde_data.size == 0:
                    support.append(np.array([]))
                    density.append(np.array([1.]))
                    counts[i] = 0
                    max_density[i] = 0
                    continue

                # Handle special case of a single unique datapoint
                elif np.unique(kde_data).size == 1:
                    support.append(np.unique(kde_data))
                    density.append(np.array([1.]))
                    counts[i] = 1
                    max_density[i] = 0
                    continue

                # Fit the KDE and get the used bandwidth size
                kde, bw_used =
self.fit_kde(kde_data, bw) # Determine the support grid and get the density over it support_i = self.kde_support(kde_data, bw_used, cut, gridsize) density_i = kde.evaluate(support_i) # Update the data structures with these results support.append(support_i) density.append(density_i) counts[i] = kde_data.size max_density[i] = density_i.max() # Option 2: we have nested grouping by a hue variable # --------------------------------------------------- else: for j, hue_level in enumerate(self.hue_names): # Handle special case of no data at this category level if not group_data.size: support[i].append(np.array([])) density[i].append(np.array([1.])) counts[i, j] = 0 max_density[i, j] = 0 continue # Select out the observations for this hue level hue_mask = self.plot_hues[i] == hue_level # Strip missing datapoints kde_data = remove_na(group_data[hue_mask]) # Handle special case of no data at this level if kde_data.size == 0: support[i].append(np.array([])) density[i].append(np.array([1.])) counts[i, j] = 0 max_density[i, j] = 0 continue # Handle special case of a single unique datapoint elif np.unique(kde_data).size == 1: support[i].append(np.unique(kde_data)) density[i].append(np.array([1.])) counts[i, j] = 1 max_density[i, j] = 0 continue # Fit the KDE and get the used bandwidth size kde, bw_used = self.fit_kde(kde_data, bw) # Determine the support grid and get the density over it support_ij = self.kde_support(kde_data, bw_used, cut, gridsize) density_ij = kde.evaluate(support_ij) # Update the data structures with these results support[i].append(support_ij) density[i].append(density_ij) counts[i, j] = kde_data.size max_density[i, j] = density_ij.max() # Scale the height of the density curve. # For a violinplot the density is non-quantitative. # The objective here is to scale the curves relative to 1 so that # they can be multiplied by the width parameter during plotting. 
if scale == "area": self.scale_area(density, max_density, scale_hue) elif scale == "width": self.scale_width(density) elif scale == "count": self.scale_count(density, counts, scale_hue) else: raise ValueError("scale method '{}' not recognized".format(scale)) # Set object attributes that will be used while plotting self.support = support self.density = density def fit_kde(self, x, bw): """Estimate a KDE for a vector of data with flexible bandwidth.""" kde = stats.gaussian_kde(x, bw) # Extract the numeric bandwidth from the KDE object bw_used = kde.factor # At this point, bw will be a numeric scale factor. # To get the actual bandwidth of the kernel, we multiple by the # unbiased standard deviation of the data, which we will use # elsewhere to compute the range of the support. bw_used = bw_used * x.std(ddof=1) return kde, bw_used def kde_support(self, x, bw, cut, gridsize): """Define a grid of support for the violin.""" support_min = x.min() - bw * cut support_max = x.max() + bw * cut return np.linspace(support_min, support_max, gridsize) def scale_area(self, density, max_density, scale_hue): """Scale the relative area under the KDE curve. This essentially preserves the "standard" KDE scaling, but the resulting maximum density will be 1 so that the curve can be properly multiplied by the violin width. 
""" if self.hue_names is None: for d in density: if d.size > 1: d /= max_density.max() else: for i, group in enumerate(density): for d in group: if scale_hue: max = max_density[i].max() else: max = max_density.max() if d.size > 1: d /= max def scale_width(self, density): """Scale each density curve to the same height.""" if self.hue_names is None: for d in density: d /= d.max() else: for group in density: for d in group: d /= d.max() def scale_count(self, density, counts, scale_hue): """Scale each density curve by the number of observations.""" if self.hue_names is None: if counts.max() == 0: d = 0 else: for count, d in zip(counts, density): d /= d.max() d *= count / counts.max() else: for i, group in enumerate(density): for j, d in enumerate(group): if counts[i].max() == 0: d = 0 else: count = counts[i, j] if scale_hue: scaler = count / counts[i].max() else: scaler = count / counts.max() d /= d.max() d *= scaler @property def dwidth(self): if self.hue_names is None or not self.dodge: return self.width / 2 elif self.split: return self.width / 2 else: return self.width / (2 * len(self.hue_names)) def draw_violins(self, ax): """Draw the violins onto `ax`.""" fill_func = ax.fill_betweenx if self.orient == "v" else ax.fill_between for i, group_data in enumerate(self.plot_data): kws = dict(edgecolor=self.gray, linewidth=self.linewidth) # Option 1: we have a single level of grouping # -------------------------------------------- if self.plot_hues is None: support, density = self.support[i], self.density[i] # Handle special case of no observations in this bin if support.size == 0: continue # Handle special case of a single observation elif support.size == 1: val = support.item() d = density.item() self.draw_single_observation(ax, i, val, d) continue # Draw the violin for this group grid = np.ones(self.gridsize) * i fill_func(support, grid - density * self.dwidth, grid + density * self.dwidth, facecolor=self.colors[i], **kws) # Draw the interior representation of the data 
if self.inner is None: continue # Get a nan-free vector of datapoints violin_data = remove_na(group_data) # Draw box and whisker information if self.inner.startswith("box"): self.draw_box_lines(ax, violin_data, support, density, i) # Draw quartile lines elif self.inner.startswith("quart"): self.draw_quartiles(ax, violin_data, support, density, i) # Draw stick observations elif self.inner.startswith("stick"): self.draw_stick_lines(ax, violin_data, support, density, i) # Draw point observations elif self.inner.startswith("point"): self.draw_points(ax, violin_data, i) # Option 2: we have nested grouping by a hue variable # --------------------------------------------------- else: offsets = self.hue_offsets for j, hue_level in enumerate(self.hue_names): support, density = self.support[i][j], self.density[i][j] kws["facecolor"] = self.colors[j] # Add legend data, but just for one set of violins if not i: self.add_legend_data(ax, self.colors[j], hue_level) # Handle the special case where we have no observations if support.size == 0: continue # Handle the special case where we have one observation elif support.size == 1: val = support.item() d = density.item() if self.split: d = d / 2 at_group = i + offsets[j] self.draw_single_observation(ax, at_group, val, d) continue # Option 2a: we are drawing a single split violin # ----------------------------------------------- if self.split: grid = np.ones(self.gridsize) * i if j: fill_func(support, grid, grid + density * self.dwidth, **kws) else: fill_func(support, grid - density * self.dwidth, grid, **kws) # Draw the interior representation of the data if self.inner is None: continue # Get a nan-free vector of datapoints hue_mask = self.plot_hues[i] == hue_level violin_data = remove_na(group_data[hue_mask]) # Draw quartile lines if self.inner.startswith("quart"): self.draw_quartiles(ax, violin_data, support, density, i, ["left", "right"][j]) # Draw stick observations elif self.inner.startswith("stick"): self.draw_stick_lines(ax, 
violin_data, support, density, i, ["left", "right"][j]) # The box and point interior plots are drawn for # all data at the group level, so we just do that once if not j: continue # Get the whole vector for this group level violin_data = remove_na(group_data) # Draw box and whisker information if self.inner.startswith("box"): self.draw_box_lines(ax, violin_data, support, density, i) # Draw point observations elif self.inner.startswith("point"): self.draw_points(ax, violin_data, i) # Option 2b: we are drawing full nested violins # ----------------------------------------------- else: grid = np.ones(self.gridsize) * (i + offsets[j]) fill_func(support, grid - density * self.dwidth, grid + density * self.dwidth, **kws) # Draw the interior representation if self.inner is None: continue # Get a nan-free vector of datapoints hue_mask = self.plot_hues[i] == hue_level violin_data = remove_na(group_data[hue_mask]) # Draw box and whisker information if self.inner.startswith("box"): self.draw_box_lines(ax, violin_data, support, density, i + offsets[j]) # Draw quartile lines elif self.inner.startswith("quart"): self.draw_quartiles(ax, violin_data, support, density, i + offsets[j]) # Draw stick observations elif self.inner.startswith("stick"): self.draw_stick_lines(ax, violin_data, support, density, i + offsets[j]) # Draw point observations elif self.inner.startswith("point"): self.draw_points(ax, violin_data, i + offsets[j]) def draw_single_observation(self, ax, at_group, at_quant, density): """Draw a line to mark a single observation.""" d_width = density * self.dwidth if self.orient == "v": ax.plot([at_group - d_width, at_group + d_width], [at_quant, at_quant], color=self.gray, linewidth=self.linewidth) else: ax.plot([at_quant, at_quant], [at_group - d_width, at_group + d_width], color=self.gray, linewidth=self.linewidth) def draw_box_lines(self, ax, data, support, density, center): """Draw boxplot information at center of the density.""" # Compute the boxplot statistics q25, 
q50, q75 = np.percentile(data, [25, 50, 75]) whisker_lim = 1.5 * stats.iqr(data) h1 = np.min(data[data >= (q25 - whisker_lim)]) h2 = np.max(data[data <= (q75 + whisker_lim)]) # Draw a boxplot using lines and a point if self.orient == "v": ax.plot([center, center], [h1, h2], linewidth=self.linewidth, color=self.gray) ax.plot([center, center], [q25, q75], linewidth=self.linewidth * 3, color=self.gray) ax.scatter(center, q50, zorder=3, color="white", edgecolor=self.gray, s=np.square(self.linewidth * 2)) else: ax.plot([h1, h2], [center, center], linewidth=self.linewidth, color=self.gray) ax.plot([q25, q75], [center, center], linewidth=self.linewidth * 3, color=self.gray) ax.scatter(q50, center, zorder=3, color="white", edgecolor=self.gray, s=np.square(self.linewidth * 2)) def draw_quartiles(self, ax, data, support, density, center, split=False): """Draw the quartiles as lines at width of density.""" q25, q50, q75 = np.percentile(data, [25, 50, 75]) self.draw_to_density(ax, center, q25, support, density, split, linewidth=self.linewidth, dashes=[self.linewidth * 1.5] * 2) self.draw_to_density(ax, center, q50, support, density, split, linewidth=self.linewidth, dashes=[self.linewidth * 3] * 2) self.draw_to_density(ax, center, q75, support, density, split, linewidth=self.linewidth, dashes=[self.linewidth * 1.5] * 2) def draw_points(self, ax, data, center): """Draw individual observations as points at middle of the violin.""" kws = dict(s=np.square(self.linewidth * 2), color=self.gray, edgecolor=self.gray) grid = np.ones(len(data)) * center if self.orient == "v": ax.scatter(grid, data, **kws) else: ax.scatter(data, grid, **kws) def draw_stick_lines(self, ax, data, support, density, center, split=False): """Draw individual observations as sticks at width of density.""" for val in data: self.draw_to_density(ax, center, val, support, density, split, linewidth=self.linewidth * .5) def draw_to_density(self, ax, center, val, support, density, split, **kws): """Draw a line 
orthogonal to the value axis at width of density.""" idx = np.argmin(np.abs(support - val)) width = self.dwidth * density[idx] * .99 kws["color"] = self.gray if self.orient == "v": if split == "left": ax.plot([center - width, center], [val, val], **kws) elif split == "right": ax.plot([center, center + width], [val, val], **kws) else: ax.plot([center - width, center + width], [val, val], **kws) else: if split == "left": ax.plot([val, val], [center - width, center], **kws) elif split == "right": ax.plot([val, val], [center, center + width], **kws) else: ax.plot([val, val], [center - width, center + width], **kws) def plot(self, ax): """Make the violin plot.""" self.draw_violins(ax) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _CategoricalScatterPlotter(_CategoricalPlotter): default_palette = "dark" require_numeric = False @property def point_colors(self): """Return an index into the palette for each scatter point.""" point_colors = [] for i, group_data in enumerate(self.plot_data): # Initialize the array for this group level group_colors = np.empty(group_data.size, int) if isinstance(group_data, pd.Series): group_colors = pd.Series(group_colors, group_data.index) if self.plot_hues is None: # Use the same color for all points at this level # group_color = self.colors[i] group_colors[:] = i else: # Color the points based on the hue level for j, level in enumerate(self.hue_names): # hue_color = self.colors[j] if group_data.size: group_colors[self.plot_hues[i] == level] = j point_colors.append(group_colors) return point_colors def add_legend_data(self, ax): """Add empty scatterplot artists with labels for the legend.""" if self.hue_names is not None: for rgb, label in zip(self.colors, self.hue_names): ax.scatter([], [], color=mpl.colors.rgb2hex(rgb), label=label, s=60) class _StripPlotter(_CategoricalScatterPlotter): """1-d scatterplot with categorical organization.""" def __init__(self, x, y, hue, data, order, hue_order, jitter, dodge, orient, 
color, palette): """Initialize the plotter.""" self.establish_variables(x, y, hue, data, orient, order, hue_order) self.establish_colors(color, palette, 1) # Set object attributes self.dodge = dodge self.width = .8 if jitter == 1: # Use a good default for `jitter = True` jlim = 0.1 else: jlim = float(jitter) if self.hue_names is not None and dodge: jlim /= len(self.hue_names) self.jitterer = stats.uniform(-jlim, jlim * 2).rvs def draw_stripplot(self, ax, kws): """Draw the points onto `ax`.""" palette = np.asarray(self.colors) for i, group_data in enumerate(self.plot_data): if self.plot_hues is None or not self.dodge: if self.hue_names is None: hue_mask = np.ones(group_data.size, bool) else: hue_mask = np.array([h in self.hue_names for h in self.plot_hues[i]], bool) # Broken on older numpys # hue_mask = np.in1d(self.plot_hues[i], self.hue_names) strip_data = group_data[hue_mask] point_colors = np.asarray(self.point_colors[i][hue_mask]) # Plot the points in centered positions cat_pos = np.ones(strip_data.size) * i cat_pos += self.jitterer(len(strip_data)) kws.update(c=palette[point_colors]) if self.orient == "v": ax.scatter(cat_pos, strip_data, **kws) else: ax.scatter(strip_data, cat_pos, **kws) else: offsets = self.hue_offsets for j, hue_level in enumerate(self.hue_names): hue_mask = self.plot_hues[i] == hue_level strip_data = group_data[hue_mask] point_colors = np.asarray(self.point_colors[i][hue_mask]) # Plot the points in centered positions center = i + offsets[j] cat_pos = np.ones(strip_data.size) * center cat_pos += self.jitterer(len(strip_data)) kws.update(c=palette[point_colors]) if self.orient == "v": ax.scatter(cat_pos, strip_data, **kws) else: ax.scatter(strip_data, cat_pos, **kws) def plot(self, ax, kws): """Make the plot.""" self.draw_stripplot(ax, kws) self.add_legend_data(ax) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _SwarmPlotter(_CategoricalScatterPlotter): def __init__(self, x, y, hue, data, order, hue_order, dodge, 
                 orient, color, palette):
        """Initialize the plotter."""
        self.establish_variables(x, y, hue, data, orient, order, hue_order)
        self.establish_colors(color, palette, 1)

        # Set object attributes
        self.dodge = dodge
        self.width = .8

    def could_overlap(self, xy_i, swarm, d):
        """Return a list of all swarm points that could overlap with target.

        Assumes that swarm is a sorted list of all points below xy_i.
        """
        _, y_i = xy_i
        neighbors = []
        for xy_j in reversed(swarm):
            _, y_j = xy_j
            if (y_i - y_j) < d:
                neighbors.append(xy_j)
            else:
                break
        return np.array(list(reversed(neighbors)))

    def position_candidates(self, xy_i, neighbors, d):
        """Return a list of (x, y) coordinates that might be valid."""
        candidates = [xy_i]
        x_i, y_i = xy_i
        left_first = True
        for x_j, y_j in neighbors:
            dy = y_i - y_j
            dx = np.sqrt(max(d ** 2 - dy ** 2, 0)) * 1.05
            cl, cr = (x_j - dx, y_i), (x_j + dx, y_i)
            if left_first:
                new_candidates = [cl, cr]
            else:
                new_candidates = [cr, cl]
            candidates.extend(new_candidates)
            left_first = not left_first
        return np.array(candidates)

    def first_non_overlapping_candidate(self, candidates, neighbors, d):
        """Remove candidates from the list if they overlap with the swarm."""

        # If we have no neighbours, all candidates are good.
        if len(neighbors) == 0:
            return candidates[0]

        neighbors_x = neighbors[:, 0]
        neighbors_y = neighbors[:, 1]

        d_square = d ** 2

        for xy_i in candidates:
            x_i, y_i = xy_i

            dx = neighbors_x - x_i
            dy = neighbors_y - y_i

            sq_distances = np.power(dx, 2.0) + np.power(dy, 2.0)

            # good candidate does not overlap any of neighbors
            # which means that squared distance between candidate
            # and any of the neighbours has to be at least
            # square of the diameter
            good_candidate = np.all(sq_distances >= d_square)

            if good_candidate:
                return xy_i

        # If `position_candidates` works well
        # this should never happen
        raise Exception('No non-overlapping candidates found. 
' 'This should not happen.') def beeswarm(self, orig_xy, d): """Adjust x position of points to avoid overlaps.""" # In this method, ``x`` is always the categorical axis # Center of the swarm, in point coordinates midline = orig_xy[0, 0] # Start the swarm with the first point swarm = [orig_xy[0]] # Loop over the remaining points for xy_i in orig_xy[1:]: # Find the points in the swarm that could possibly # overlap with the point we are currently placing neighbors = self.could_overlap(xy_i, swarm, d) # Find positions that would be valid individually # with respect to each of the swarm neighbors candidates = self.position_candidates(xy_i, neighbors, d) # Sort candidates by their centrality offsets = np.abs(candidates[:, 0] - midline) candidates = candidates[np.argsort(offsets)] # Find the first candidate that does not overlap any neighbours new_xy_i = self.first_non_overlapping_candidate(candidates, neighbors, d) # Place it into the swarm swarm.append(new_xy_i) return np.array(swarm) def add_gutters(self, points, center, width): """Stop points from extending beyond their territory.""" half_width = width / 2 low_gutter = center - half_width off_low = points < low_gutter if off_low.any(): points[off_low] = low_gutter high_gutter = center + half_width off_high = points > high_gutter if off_high.any(): points[off_high] = high_gutter gutter_prop = (off_high + off_low).sum() / len(points) if gutter_prop > .05: msg = ( "{:.1%} of the points cannot be placed; you may want " "to decrease the size of the markers or use stripplot." ).format(gutter_prop) warnings.warn(msg, UserWarning) return points def swarm_points(self, ax, points, center, width, s, **kws): """Find new positions on the categorical axis for each point.""" # Convert from point size (area) to diameter default_lw = mpl.rcParams["patch.linewidth"] lw = kws.get("linewidth", kws.get("lw", default_lw)) dpi = ax.figure.dpi d = (np.sqrt(s) + lw) * (dpi / 72) # Transform the data coordinates to point coordinates. 
# We'll figure out the swarm positions in the latter # and then convert back to data coordinates and replot orig_xy = ax.transData.transform(points.get_offsets()) # Order the variables so that x is the categorical axis if self.orient == "h": orig_xy = orig_xy[:, [1, 0]] # Do the beeswarm in point coordinates new_xy = self.beeswarm(orig_xy, d) # Transform the point coordinates back to data coordinates if self.orient == "h": new_xy = new_xy[:, [1, 0]] new_x, new_y = ax.transData.inverted().transform(new_xy).T # Add gutters if self.orient == "v": self.add_gutters(new_x, center, width) else: self.add_gutters(new_y, center, width) # Reposition the points so they do not overlap points.set_offsets(np.c_[new_x, new_y]) def draw_swarmplot(self, ax, kws): """Plot the data.""" s = kws.pop("s") centers = [] swarms = [] palette = np.asarray(self.colors) # Set the categorical axes limits here for the swarm math if self.orient == "v": ax.set_xlim(-.5, len(self.plot_data) - .5) else: ax.set_ylim(-.5, len(self.plot_data) - .5) # Plot each swarm for i, group_data in enumerate(self.plot_data): if self.plot_hues is None or not self.dodge: width = self.width if self.hue_names is None: hue_mask = np.ones(group_data.size, bool) else: hue_mask = np.array([h in self.hue_names for h in self.plot_hues[i]], bool) # Broken on older numpys # hue_mask = np.in1d(self.plot_hues[i], self.hue_names) swarm_data = np.asarray(group_data[hue_mask]) point_colors = np.asarray(self.point_colors[i][hue_mask]) # Sort the points for the beeswarm algorithm sorter = np.argsort(swarm_data) swarm_data = swarm_data[sorter] point_colors = point_colors[sorter] # Plot the points in centered positions cat_pos = np.ones(swarm_data.size) * i kws.update(c=palette[point_colors]) if self.orient == "v": points = ax.scatter(cat_pos, swarm_data, s=s, **kws) else: points = ax.scatter(swarm_data, cat_pos, s=s, **kws) centers.append(i) swarms.append(points) else: offsets = self.hue_offsets width = self.nested_width for j, 
hue_level in enumerate(self.hue_names):

                    hue_mask = self.plot_hues[i] == hue_level
                    swarm_data = np.asarray(group_data[hue_mask])
                    point_colors = np.asarray(self.point_colors[i][hue_mask])

                    # Sort the points for the beeswarm algorithm
                    sorter = np.argsort(swarm_data)
                    swarm_data = swarm_data[sorter]
                    point_colors = point_colors[sorter]

                    # Plot the points in centered positions
                    center = i + offsets[j]
                    cat_pos = np.ones(swarm_data.size) * center
                    kws.update(c=palette[point_colors])
                    if self.orient == "v":
                        points = ax.scatter(cat_pos, swarm_data, s=s, **kws)
                    else:
                        points = ax.scatter(swarm_data, cat_pos, s=s, **kws)
                    centers.append(center)
                    swarms.append(points)

        # Autoscale the values axis to set the data/axes transforms properly
        ax.autoscale_view(scalex=self.orient == "h",
                          scaley=self.orient == "v")

        # Update the position of each point on the categorical axis
        # Do this after plotting so that the numerical axis limits are correct
        for center, swarm in zip(centers, swarms):
            if swarm.get_offsets().size:
                self.swarm_points(ax, swarm, center, width, s, **kws)

    def plot(self, ax, kws):
        """Make the full plot."""
        self.draw_swarmplot(ax, kws)
        self.add_legend_data(ax)
        self.annotate_axes(ax)
        if self.orient == "h":
            ax.invert_yaxis()


class _CategoricalStatPlotter(_CategoricalPlotter):

    require_numeric = True

    @property
    def nested_width(self):
        """A float with the width of plot elements when hue nesting is used."""
        if self.dodge:
            width = self.width / len(self.hue_names)
        else:
            width = self.width
        return width

    def estimate_statistic(self, estimator, ci, n_boot, seed):

        if self.hue_names is None:
            statistic = []
            confint = []
        else:
            statistic = [[] for _ in self.plot_data]
            confint = [[] for _ in self.plot_data]

        for i, group_data in enumerate(self.plot_data):

            # Option 1: we have a single layer of grouping
            # --------------------------------------------

            if self.plot_hues is None:

                if self.plot_units is None:
                    stat_data = remove_na(group_data)
                    unit_data = None
                else:
                    unit_data = self.plot_units[i]
                    have = 
pd.notnull(np.c_[group_data, unit_data]).all(axis=1) stat_data = group_data[have] unit_data = unit_data[have] # Estimate a statistic from the vector of data if not stat_data.size: statistic.append(np.nan) else: statistic.append(estimator(stat_data)) # Get a confidence interval for this estimate if ci is not None: if stat_data.size < 2: confint.append([np.nan, np.nan]) continue if ci == "sd": estimate = estimator(stat_data) sd = np.std(stat_data) confint.append((estimate - sd, estimate + sd)) else: boots = bootstrap(stat_data, func=estimator, n_boot=n_boot, units=unit_data, seed=seed) confint.append(utils.ci(boots, ci)) # Option 2: we are grouping by a hue layer # ---------------------------------------- else: for j, hue_level in enumerate(self.hue_names): if not self.plot_hues[i].size: statistic[i].append(np.nan) if ci is not None: confint[i].append((np.nan, np.nan)) continue hue_mask = self.plot_hues[i] == hue_level if self.plot_units is None: stat_data = remove_na(group_data[hue_mask]) unit_data = None else: group_units = self.plot_units[i] have = pd.notnull( np.c_[group_data, group_units] ).all(axis=1) stat_data = group_data[hue_mask & have] unit_data = group_units[hue_mask & have] # Estimate a statistic from the vector of data if not stat_data.size: statistic[i].append(np.nan) else: statistic[i].append(estimator(stat_data)) # Get a confidence interval for this estimate if ci is not None: if stat_data.size < 2: confint[i].append([np.nan, np.nan]) continue if ci == "sd": estimate = estimator(stat_data) sd = np.std(stat_data) confint[i].append((estimate - sd, estimate + sd)) else: boots = bootstrap(stat_data, func=estimator, n_boot=n_boot, units=unit_data, seed=seed) confint[i].append(utils.ci(boots, ci)) # Save the resulting values for plotting self.statistic = np.array(statistic) self.confint = np.array(confint) def draw_confints(self, ax, at_group, confint, colors, errwidth=None, capsize=None, **kws): if errwidth is not None: kws.setdefault("lw", errwidth) 
else: kws.setdefault("lw", mpl.rcParams["lines.linewidth"] * 1.8) for at, (ci_low, ci_high), color in zip(at_group, confint, colors): if self.orient == "v": ax.plot([at, at], [ci_low, ci_high], color=color, **kws) if capsize is not None: ax.plot([at - capsize / 2, at + capsize / 2], [ci_low, ci_low], color=color, **kws) ax.plot([at - capsize / 2, at + capsize / 2], [ci_high, ci_high], color=color, **kws) else: ax.plot([ci_low, ci_high], [at, at], color=color, **kws) if capsize is not None: ax.plot([ci_low, ci_low], [at - capsize / 2, at + capsize / 2], color=color, **kws) ax.plot([ci_high, ci_high], [at - capsize / 2, at + capsize / 2], color=color, **kws) class _BarPlotter(_CategoricalStatPlotter): """Show point estimates and confidence intervals with bars.""" def __init__(self, x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, orient, color, palette, saturation, errcolor, errwidth, capsize, dodge): """Initialize the plotter.""" self.establish_variables(x, y, hue, data, orient, order, hue_order, units) self.establish_colors(color, palette, saturation) self.estimate_statistic(estimator, ci, n_boot, seed) self.dodge = dodge self.errcolor = errcolor self.errwidth = errwidth self.capsize = capsize def draw_bars(self, ax, kws): """Draw the bars onto `ax`.""" # Get the right matplotlib function depending on the orientation barfunc = ax.bar if self.orient == "v" else ax.barh barpos = np.arange(len(self.statistic)) if self.plot_hues is None: # Draw the bars barfunc(barpos, self.statistic, self.width, color=self.colors, align="center", **kws) # Draw the confidence intervals errcolors = [self.errcolor] * len(barpos) self.draw_confints(ax, barpos, self.confint, errcolors, self.errwidth, self.capsize) else: for j, hue_level in enumerate(self.hue_names): # Draw the bars offpos = barpos + self.hue_offsets[j] barfunc(offpos, self.statistic[:, j], self.nested_width, color=self.colors[j], align="center", label=hue_level, **kws) # Draw the confidence intervals 
if self.confint.size: confint = self.confint[:, j] errcolors = [self.errcolor] * len(offpos) self.draw_confints(ax, offpos, confint, errcolors, self.errwidth, self.capsize) def plot(self, ax, bar_kws): """Make the plot.""" self.draw_bars(ax, bar_kws) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _PointPlotter(_CategoricalStatPlotter): default_palette = "dark" """Show point estimates and confidence intervals with (joined) points.""" def __init__(self, x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, markers, linestyles, dodge, join, scale, orient, color, palette, errwidth=None, capsize=None): """Initialize the plotter.""" self.establish_variables(x, y, hue, data, orient, order, hue_order, units) self.establish_colors(color, palette, 1) self.estimate_statistic(estimator, ci, n_boot, seed) # Override the default palette for single-color plots if hue is None and color is None and palette is None: self.colors = [color_palette()[0]] * len(self.colors) # Don't join single-layer plots with different colors if hue is None and palette is not None: join = False # Use a good default for `dodge=True` if dodge is True and self.hue_names is not None: dodge = .025 * len(self.hue_names) # Make sure we have a marker for each hue level if isinstance(markers, str): markers = [markers] * len(self.colors) self.markers = markers # Make sure we have a line style for each hue level if isinstance(linestyles, str): linestyles = [linestyles] * len(self.colors) self.linestyles = linestyles # Set the other plot components self.dodge = dodge self.join = join self.scale = scale self.errwidth = errwidth self.capsize = capsize @property def hue_offsets(self): """Offsets relative to the center position for each hue level.""" if self.dodge: offset = np.linspace(0, self.dodge, len(self.hue_names)) offset -= offset.mean() else: offset = np.zeros(len(self.hue_names)) return offset def draw_points(self, ax): """Draw the main data components of the plot.""" # 
Get the center positions on the categorical axis pointpos = np.arange(len(self.statistic)) # Get the size of the plot elements lw = mpl.rcParams["lines.linewidth"] * 1.8 * self.scale mew = lw * .75 markersize = np.pi * np.square(lw) * 2 if self.plot_hues is None: # Draw lines joining each estimate point if self.join: color = self.colors[0] ls = self.linestyles[0] if self.orient == "h": ax.plot(self.statistic, pointpos, color=color, ls=ls, lw=lw) else: ax.plot(pointpos, self.statistic, color=color, ls=ls, lw=lw) # Draw the confidence intervals self.draw_confints(ax, pointpos, self.confint, self.colors, self.errwidth, self.capsize) # Draw the estimate points marker = self.markers[0] colors = [mpl.colors.colorConverter.to_rgb(c) for c in self.colors] if self.orient == "h": x, y = self.statistic, pointpos else: x, y = pointpos, self.statistic ax.scatter(x, y, linewidth=mew, marker=marker, s=markersize, facecolor=colors, edgecolor=colors) else: offsets = self.hue_offsets for j, hue_level in enumerate(self.hue_names): # Determine the values to plot for this level statistic = self.statistic[:, j] # Determine the position on the categorical and z axes offpos = pointpos + offsets[j] z = j + 1 # Draw lines joining each estimate point if self.join: color = self.colors[j] ls = self.linestyles[j] if self.orient == "h": ax.plot(statistic, offpos, color=color, zorder=z, ls=ls, lw=lw) else: ax.plot(offpos, statistic, color=color, zorder=z, ls=ls, lw=lw) # Draw the confidence intervals if self.confint.size: confint = self.confint[:, j] errcolors = [self.colors[j]] * len(offpos) self.draw_confints(ax, offpos, confint, errcolors, self.errwidth, self.capsize, zorder=z) # Draw the estimate points n_points = len(remove_na(offpos)) marker = self.markers[j] color = mpl.colors.colorConverter.to_rgb(self.colors[j]) if self.orient == "h": x, y = statistic, offpos else: x, y = offpos, statistic if not len(remove_na(statistic)): x = y = [np.nan] * n_points ax.scatter(x, y, label=hue_level, 
facecolor=color, edgecolor=color, linewidth=mew, marker=marker, s=markersize, zorder=z) def plot(self, ax): """Make the plot.""" self.draw_points(ax) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _CountPlotter(_BarPlotter): require_numeric = False class _LVPlotter(_CategoricalPlotter): def __init__(self, x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, k_depth, linewidth, scale, outlier_prop, trust_alpha, showfliers=True): self.width = width self.dodge = dodge self.saturation = saturation k_depth_methods = ['proportion', 'tukey', 'trustworthy', 'full'] if not (k_depth in k_depth_methods or isinstance(k_depth, Number)): msg = (f'k_depth must be one of {k_depth_methods} or a number, ' f'but {k_depth} was passed.') raise ValueError(msg) self.k_depth = k_depth if linewidth is None: linewidth = mpl.rcParams["lines.linewidth"] self.linewidth = linewidth scales = ['linear', 'exponential', 'area'] if scale not in scales: msg = f'scale must be one of {scales}, but {scale} was passed.' raise ValueError(msg) self.scale = scale if ((outlier_prop > 1) or (outlier_prop <= 0)): msg = f'outlier_prop {outlier_prop} not in range (0, 1]' raise ValueError(msg) self.outlier_prop = outlier_prop if not 0 < trust_alpha < 1: msg = f'trust_alpha {trust_alpha} not in range (0, 1)' raise ValueError(msg) self.trust_alpha = trust_alpha self.showfliers = showfliers self.establish_variables(x, y, hue, data, orient, order, hue_order) self.establish_colors(color, palette, saturation) def _lv_box_ends(self, vals): """Get the number of data points and calculate `depth` of letter-value plot.""" vals = np.asarray(vals) # Remove infinite values while handling a 'object' dtype # that can come from pd.Float64Dtype() input with pd.option_context('mode.use_inf_as_null', True): vals = vals[~pd.isnull(vals)] n = len(vals) p = self.outlier_prop # Select the depth, i.e. 
number of boxes to draw, based on the method
        if self.k_depth == 'full':
            # extend boxes to 100% of the data
            k = int(np.log2(n)) + 1
        elif self.k_depth == 'tukey':
            # This results in 5-8 points in each tail
            k = int(np.log2(n)) - 3
        elif self.k_depth == 'proportion':
            k = int(np.log2(n)) - int(np.log2(n * p)) + 1
        elif self.k_depth == 'trustworthy':
            point_conf = 2 * stats.norm.ppf((1 - self.trust_alpha / 2)) ** 2
            k = int(np.log2(n / point_conf)) + 1
        else:
            k = int(self.k_depth)  # allow having k as input

        # If the number happens to be less than 1, set k to 1
        if k < 1:
            k = 1

        # Calculate the upper end for each of the k boxes
        upper = [100 * (1 - 0.5 ** (i + 1)) for i in range(k, 0, -1)]
        # Calculate the lower end for each of the k boxes
        lower = [100 * (0.5 ** (i + 1)) for i in range(k, 0, -1)]
        # Stitch the box ends together
        percentile_ends = [(i, j) for i, j in zip(lower, upper)]
        box_ends = [np.percentile(vals, q) for q in percentile_ends]
        return box_ends, k

    def _lv_outliers(self, vals, k):
        """Find the outliers based on the letter value depth."""
        box_edge = 0.5 ** (k + 1)
        perc_ends = (100 * box_edge, 100 * (1 - box_edge))
        edges = np.percentile(vals, perc_ends)
        lower_out = vals[np.where(vals < edges[0])[0]]
        upper_out = vals[np.where(vals > edges[1])[0]]
        return np.concatenate((lower_out, upper_out))

    def _width_functions(self, width_func):
        # Dictionary of functions for computing the width of the boxes
        width_functions = {'linear': lambda h, i, k: (i + 1.) / k,
                           'exponential': lambda h, i, k: 2**(-k + i - 1),
                           'area': lambda h, i, k: (1 - 2**(-k + i - 2)) / h}
        return width_functions[width_func]

    def _lvplot(self, box_data, positions,
                color=[255. / 256., 185. 
/ 256., 0.], widths=1, ax=None, **kws): vert = self.orient == "v" x = positions[0] box_data = np.asarray(box_data) # If we only have one data point, plot a line if len(box_data) == 1: kws.update({ 'color': self.gray, 'linestyle': '-', 'linewidth': self.linewidth }) ys = [box_data[0], box_data[0]] xs = [x - widths / 2, x + widths / 2] if vert: xx, yy = xs, ys else: xx, yy = ys, xs ax.plot(xx, yy, **kws) else: # Get the number of data points and calculate "depth" of # letter-value plot box_ends, k = self._lv_box_ends(box_data) # Anonymous functions for calculating the width and height # of the letter value boxes width = self._width_functions(self.scale) # Function to find height of boxes def height(b): return b[1] - b[0] # Functions to construct the letter value boxes def vert_perc_box(x, b, i, k, w): rect = Patches.Rectangle((x - widths * w / 2, b[0]), widths * w, height(b), fill=True) return rect def horz_perc_box(x, b, i, k, w): rect = Patches.Rectangle((b[0], x - widths * w / 2), height(b), widths * w, fill=True) return rect # Scale the width of the boxes so the biggest starts at 1 w_area = np.array([width(height(b), i, k) for i, b in enumerate(box_ends)]) w_area = w_area / np.max(w_area) # Calculate the medians y = np.median(box_data) # Calculate the outliers and plot (only if showfliers == True) outliers = [] if self.showfliers: outliers = self._lv_outliers(box_data, k) hex_color = mpl.colors.rgb2hex(color) if vert: box_func = vert_perc_box xs_median = [x - widths / 2, x + widths / 2] ys_median = [y, y] xs_outliers = np.full(len(outliers), x) ys_outliers = outliers else: box_func = horz_perc_box xs_median = [y, y] ys_median = [x - widths / 2, x + widths / 2] xs_outliers = outliers ys_outliers = np.full(len(outliers), x) boxes = [box_func(x, b[0], i, k, b[1]) for i, b in enumerate(zip(box_ends, w_area))] # Plot the medians ax.plot( xs_median, ys_median, c=".15", alpha=0.45, solid_capstyle="butt", linewidth=self.linewidth, **kws ) # Plot outliers (if any) if 
len(outliers) > 0:
                ax.scatter(xs_outliers, ys_outliers, marker='d',
                           c=self.gray, **kws)

            # Construct a color map from the input color
            rgb = [hex_color, (1, 1, 1)]
            cmap = mpl.colors.LinearSegmentedColormap.from_list('new_map', rgb)
            # Make sure that the last boxes contain hue and are not pure white
            rgb = [hex_color, cmap(.85)]
            cmap = mpl.colors.LinearSegmentedColormap.from_list('new_map', rgb)
            collection = PatchCollection(
                boxes, cmap=cmap, edgecolor=self.gray,
                linewidth=self.linewidth
            )

            # Set the color gradation, first box will have color=hex_color
            collection.set_array(np.array(np.linspace(1, 0, len(boxes))))

            # Plot the boxes
            ax.add_collection(collection)

    def draw_letter_value_plot(self, ax, kws):
        """Use matplotlib to draw a letter value plot on an Axes."""
        for i, group_data in enumerate(self.plot_data):

            if self.plot_hues is None:

                # Handle case where there is no data at this level
                if group_data.size == 0:
                    continue

                # Draw a single box or a set of boxes
                # with a single level of grouping
                box_data = remove_na(group_data)

                # Handle case where there is no non-null data
                if box_data.size == 0:
                    continue

                color = self.colors[i]

                self._lvplot(box_data,
                             positions=[i],
                             color=color,
                             widths=self.width,
                             ax=ax,
                             **kws)

            else:
                # Draw nested groups of boxes
                offsets = self.hue_offsets
                for j, hue_level in enumerate(self.hue_names):

                    # Add a legend for this hue level
                    if not i:
                        self.add_legend_data(ax, self.colors[j], hue_level)

                    # Handle case where there is no data at this level
                    if group_data.size == 0:
                        continue

                    hue_mask = self.plot_hues[i] == hue_level
                    box_data = remove_na(group_data[hue_mask])

                    # Handle case where there is no non-null data
                    if box_data.size == 0:
                        continue

                    color = self.colors[j]
                    center = i + offsets[j]
                    self._lvplot(box_data,
                                 positions=[center],
                                 color=color,
                                 widths=self.nested_width,
                                 ax=ax,
                                 **kws)

        # Autoscale the values axis to make sure all patches are visible
        ax.autoscale_view(scalex=self.orient == "h",
                          scaley=self.orient == "v")

    def plot(self, ax, boxplot_kws):
        """Make the plot."""
self.draw_letter_value_plot(ax, boxplot_kws) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() _categorical_docs = dict( # Shared narrative docs categorical_narrative=dedent("""\ This function always treats one of the variables as categorical and draws data at ordinal positions (0, 1, ... n) on the relevant axis, even when the data has a numeric or date type. See the :ref:`tutorial ` for more information.\ """), main_api_narrative=dedent("""\ Input data can be passed in a variety of formats, including: - Vectors of data represented as lists, numpy arrays, or pandas Series objects passed directly to the ``x``, ``y``, and/or ``hue`` parameters. - A "long-form" DataFrame, in which case the ``x``, ``y``, and ``hue`` variables will determine how the data are plotted. - A "wide-form" DataFrame, such that each numeric column will be plotted. - An array or list of vectors. In most cases, it is possible to use numpy or Python objects, but pandas objects are preferable because the associated names will be used to annotate the axes. Additionally, you can use Categorical types for the grouping variables to control the order of plot elements.\ """), # Shared function parameters input_params=dedent("""\ x, y, hue : names of variables in ``data`` or vector data, optional Inputs for plotting long-form data. See examples for interpretation.\ """), string_input_params=dedent("""\ x, y, hue : names of variables in ``data`` Inputs for plotting long-form data. See examples for interpretation.\ """), categorical_data=dedent("""\ data : DataFrame, array, or list of arrays, optional Dataset for plotting. If ``x`` and ``y`` are absent, this is interpreted as wide-form. Otherwise it is expected to be long-form.\ """), long_form_data=dedent("""\ data : DataFrame Long-form (tidy) dataset for plotting. 
Each column should correspond to a variable, and each row should correspond to an observation.\ """), order_vars=dedent("""\ order, hue_order : lists of strings, optional Order to plot the categorical levels in, otherwise the levels are inferred from the data objects.\ """), stat_api_params=dedent("""\ estimator : callable that maps vector -> scalar, optional Statistical function to estimate within each categorical bin. ci : float or "sd" or None, optional Size of confidence intervals to draw around estimated values. If "sd", skip bootstrapping and draw the standard deviation of the observations. If ``None``, no bootstrapping will be performed, and error bars will not be drawn. n_boot : int, optional Number of bootstrap iterations to use when computing confidence intervals. units : name of variable in ``data`` or vector data, optional Identifier of sampling units, which will be used to perform a multilevel bootstrap and account for repeated measures design. seed : int, numpy.random.Generator, or numpy.random.RandomState, optional Seed or random number generator for reproducible bootstrapping.\ """), orient=dedent("""\ orient : "v" | "h", optional Orientation of the plot (vertical or horizontal). This is usually inferred based on the type of the input variables, but it can be used to resolve ambiguity when both `x` and `y` are numeric or when plotting wide-form data.\ """), color=dedent("""\ color : matplotlib color, optional Color for all of the elements, or seed for a gradient palette.\ """), palette=dedent("""\ palette : palette name, list, or dict, optional Color palette that maps either the grouping variable or the hue variable. If the palette is a dictionary, keys should be names of levels and values should be matplotlib colors.\ """), saturation=dedent("""\ saturation : float, optional Proportion of the original saturation to draw colors at. 
Large patches often look better with slightly desaturated colors, but set this to ``1`` if you want the plot colors to perfectly match the input color spec.\ """), capsize=dedent("""\ capsize : float, optional Width of the "caps" on error bars. """), errwidth=dedent("""\ errwidth : float, optional Thickness of error bar lines (and caps).\ """), width=dedent("""\ width : float, optional Width of a full element when not using hue nesting, or width of all the elements for one level of the major grouping variable.\ """), dodge=dedent("""\ dodge : bool, optional When hue nesting is used, whether elements should be shifted along the categorical axis.\ """), linewidth=dedent("""\ linewidth : float, optional Width of the gray lines that frame the plot elements.\ """), ax_in=dedent("""\ ax : matplotlib Axes, optional Axes object to draw the plot onto, otherwise uses the current Axes.\ """), ax_out=dedent("""\ ax : matplotlib Axes Returns the Axes object with the plot drawn onto it.\ """), # Shared see also boxplot=dedent("""\ boxplot : A traditional box-and-whisker plot with a similar API.\ """), violinplot=dedent("""\ violinplot : A combination of boxplot and kernel density estimation.\ """), stripplot=dedent("""\ stripplot : A scatterplot where one variable is categorical. Can be used in conjunction with other plots to show each observation.\ """), swarmplot=dedent("""\ swarmplot : A categorical scatterplot where the points do not overlap. 
Can be used with other plots to show each observation.\ """), barplot=dedent("""\ barplot : Show point estimates and confidence intervals using bars.\ """), countplot=dedent("""\ countplot : Show the counts of observations in each categorical bin.\ """), pointplot=dedent("""\ pointplot : Show point estimates and confidence intervals using scatterplot glyphs.\ """), catplot=dedent("""\ catplot : Combine a categorical plot with a :class:`FacetGrid`.\ """), boxenplot=dedent("""\ boxenplot : An enhanced boxplot for larger datasets.\ """), ) _categorical_docs.update(_facet_docs) @_deprecate_positional_args def boxplot( *, x=None, y=None, hue=None, data=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, width=.8, dodge=True, fliersize=5, linewidth=None, whis=1.5, ax=None, **kwargs ): plotter = _BoxPlotter(x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, fliersize, linewidth) if ax is None: ax = plt.gca() kwargs.update(dict(whis=whis)) plotter.plot(ax, kwargs) return ax boxplot.__doc__ = dedent("""\ Draw a box plot to show distributions with respect to categories. A box plot (or box-and-whisker plot) shows the distribution of quantitative data in a way that facilitates comparisons between variables or across levels of a categorical variable. The box shows the quartiles of the dataset while the whiskers extend to show the rest of the distribution, except for points that are determined to be "outliers" using a method that is a function of the inter-quartile range. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} {orient} {color} {palette} {saturation} {width} {dodge} fliersize : float, optional Size of the markers used to indicate outlier observations. {linewidth} whis : float, optional Proportion of the IQR past the low and high quartiles to extend the plot whiskers. Points outside this range will be identified as outliers. 
{ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.boxplot`. Returns ------- {ax_out} See Also -------- {violinplot} {stripplot} {swarmplot} {catplot} Examples -------- Draw a single horizontal boxplot: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set_theme(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.boxplot(x=tips["total_bill"]) Draw a vertical boxplot grouped by a categorical variable: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="day", y="total_bill", data=tips) Draw a boxplot with nested grouping by two categorical variables: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="Set3") Draw a boxplot with nested grouping when some bins are empty: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="day", y="total_bill", hue="time", ... data=tips, linewidth=2.5) Control box order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Draw a boxplot for each numeric variable in a DataFrame: .. plot:: :context: close-figs >>> iris = sns.load_dataset("iris") >>> ax = sns.boxplot(data=iris, orient="h", palette="Set2") Use ``hue`` without changing box position or width: .. plot:: :context: close-figs >>> tips["weekend"] = tips["day"].isin(["Sat", "Sun"]) >>> ax = sns.boxplot(x="day", y="total_bill", hue="weekend", ... data=tips, dodge=False) Use :func:`swarmplot` to show the datapoints on top of the boxes: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="day", y="total_bill", data=tips) >>> ax = sns.swarmplot(x="day", y="total_bill", data=tips, color=".25") Use :func:`catplot` to combine a :func:`boxplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. 
Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="box", ... height=4, aspect=.7); """).format(**_categorical_docs) @_deprecate_positional_args def violinplot( *, x=None, y=None, hue=None, data=None, order=None, hue_order=None, bw="scott", cut=2, scale="area", scale_hue=True, gridsize=100, width=.8, inner="box", split=False, dodge=True, orient=None, linewidth=None, color=None, palette=None, saturation=.75, ax=None, **kwargs, ): plotter = _ViolinPlotter(x, y, hue, data, order, hue_order, bw, cut, scale, scale_hue, gridsize, width, inner, split, dodge, orient, linewidth, color, palette, saturation) if ax is None: ax = plt.gca() plotter.plot(ax) return ax violinplot.__doc__ = dedent("""\ Draw a combination of boxplot and kernel density estimate. A violin plot plays a similar role as a box and whisker plot. It shows the distribution of quantitative data across several levels of one (or more) categorical variables such that those distributions can be compared. Unlike a box plot, in which all of the plot components correspond to actual datapoints, the violin plot features a kernel density estimation of the underlying distribution. This can be an effective and attractive way to show multiple distributions of data at once, but keep in mind that the estimation procedure is influenced by the sample size, and violins for relatively small samples might look misleadingly smooth. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} bw : {{'scott', 'silverman', float}}, optional Either the name of a reference rule or the scale factor to use when computing the kernel bandwidth. The actual kernel size will be determined by multiplying the scale factor by the standard deviation of the data within each bin. 
cut : float, optional
    Distance, in units of bandwidth size, to extend the density past the
    extreme datapoints. Set to 0 to limit the violin range within the range
    of the observed data (i.e., to have the same effect as ``trim=True`` in
    ``ggplot``).
scale : {{"area", "count", "width"}}, optional
    The method used to scale the width of each violin. If ``area``, each
    violin will have the same area. If ``count``, the width of the violins
    will be scaled by the number of observations in that bin. If ``width``,
    each violin will have the same width.
scale_hue : bool, optional
    When nesting violins using a ``hue`` variable, this parameter determines
    whether the scaling is computed within each level of the major grouping
    variable (``scale_hue=True``) or across all the violins on the plot
    (``scale_hue=False``).
gridsize : int, optional
    Number of points in the discrete grid used to compute the kernel
    density estimate.
{width}
inner : {{"box", "quartile", "point", "stick", None}}, optional
    Representation of the datapoints in the violin interior. If ``box``,
    draw a miniature boxplot. If ``quartile``, draw the quartiles of the
    distribution. If ``point`` or ``stick``, show each underlying datapoint.
    Using ``None`` will draw unadorned violins.
split : bool, optional
    When using hue nesting with a variable that takes two levels, setting
    ``split`` to True will draw half of a violin for each level. This can
    make it easier to directly compare the distributions.
{dodge}
{orient}
{linewidth}
{color}
{palette}
{saturation}
{ax_in}

Returns
-------
{ax_out}

See Also
--------
{boxplot}
{stripplot}
{swarmplot}
{catplot}

Examples
--------

Draw a single horizontal violinplot:

.. plot::
    :context: close-figs

    >>> import seaborn as sns
    >>> sns.set_theme(style="whitegrid")
    >>> tips = sns.load_dataset("tips")
    >>> ax = sns.violinplot(x=tips["total_bill"])

Draw a vertical violinplot grouped by a categorical variable:

..
plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", data=tips) Draw a violinplot with nested grouping by two categorical variables: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="muted") Draw split violins to compare across the hue variable: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="muted", split=True) Control violin order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Scale the violin width by the number of observations in each bin: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="sex", ... data=tips, palette="Set2", split=True, ... scale="count") Draw the quartiles as horizontal lines instead of a mini-box: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="sex", ... data=tips, palette="Set2", split=True, ... scale="count", inner="quartile") Show each observation with a stick inside the violin: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="sex", ... data=tips, palette="Set2", split=True, ... scale="count", inner="stick") Scale the density relative to the counts across all bins: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="sex", ... data=tips, palette="Set2", split=True, ... scale="count", inner="stick", scale_hue=False) Use a narrow bandwidth to reduce the amount of smoothing: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="sex", ... data=tips, palette="Set2", split=True, ... scale="count", inner="stick", ... scale_hue=False, bw=.2) Draw horizontal violins: .. plot:: :context: close-figs >>> planets = sns.load_dataset("planets") >>> ax = sns.violinplot(x="orbital_period", y="method", ...
data=planets[planets.orbital_period < 1000], ... scale="width", palette="Set3") Don't let density extend past extreme values in the data: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="orbital_period", y="method", ... data=planets[planets.orbital_period < 1000], ... cut=0, scale="width", palette="Set3") Use ``hue`` without changing violin position or width: .. plot:: :context: close-figs >>> tips["weekend"] = tips["day"].isin(["Sat", "Sun"]) >>> ax = sns.violinplot(x="day", y="total_bill", hue="weekend", ... data=tips, dodge=False) Use :func:`catplot` to combine a :func:`violinplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="violin", split=True, ... height=4, aspect=.7); """).format(**_categorical_docs) @_deprecate_positional_args def boxenplot( *, x=None, y=None, hue=None, data=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, width=.8, dodge=True, k_depth='tukey', linewidth=None, scale='exponential', outlier_prop=0.007, trust_alpha=0.05, showfliers=True, ax=None, **kwargs ): plotter = _LVPlotter(x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, k_depth, linewidth, scale, outlier_prop, trust_alpha, showfliers) if ax is None: ax = plt.gca() plotter.plot(ax, kwargs) return ax boxenplot.__doc__ = dedent("""\ Draw an enhanced box plot for larger datasets. This style of plot was originally named a "letter value" plot because it shows a large number of quantiles that are defined as "letter values". It is similar to a box plot in plotting a nonparametric representation of a distribution in which all features correspond to actual observations. 
By plotting more quantiles, it provides more information about the shape of the distribution, particularly in the tails. For a more extensive explanation, you can read the paper that introduced the plot: https://vita.had.co.nz/papers/letter-value-plot.html {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} {orient} {color} {palette} {saturation} {width} {dodge} k_depth : {{"tukey", "proportion", "trustworthy", "full"}} or scalar,\ optional The number of boxes, and by extension number of percentiles, to draw. All methods are detailed in Wickham's paper. Each makes different assumptions about the number of outliers and leverages different statistical properties. If "proportion", draw no more than `outlier_prop` extreme observations. If "full", draw `log(n)+1` boxes. {linewidth} scale : {{"exponential", "linear", "area"}}, optional Method to use for the width of the letter value boxes. All give similar results visually. "linear" reduces the width by a constant linear factor, "exponential" uses the proportion of data not covered, "area" is proportional to the percentage of data covered. outlier_prop : float, optional Proportion of data believed to be outliers. Must be in the range (0, 1]. Used to determine the number of boxes to plot when `k_depth="proportion"`. trust_alpha : float, optional Confidence level for a box to be plotted. Used to determine the number of boxes to plot when `k_depth="trustworthy"`. Must be in the range (0, 1). showfliers : bool, optional If False, suppress the plotting of outliers. {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.plot` and :meth:`matplotlib.axes.Axes.scatter`. Returns ------- {ax_out} See Also -------- {violinplot} {boxplot} {catplot} Examples -------- Draw a single horizontal boxen plot: .. 
plot:: :context: close-figs >>> import seaborn as sns >>> sns.set_theme(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.boxenplot(x=tips["total_bill"]) Draw a vertical boxen plot grouped by a categorical variable: .. plot:: :context: close-figs >>> ax = sns.boxenplot(x="day", y="total_bill", data=tips) Draw a letter value plot with nested grouping by two categorical variables: .. plot:: :context: close-figs >>> ax = sns.boxenplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="Set3") Draw a boxen plot with nested grouping when some bins are empty: .. plot:: :context: close-figs >>> ax = sns.boxenplot(x="day", y="total_bill", hue="time", ... data=tips, linewidth=2.5) Control box order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.boxenplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Draw a boxen plot for each numeric variable in a DataFrame: .. plot:: :context: close-figs >>> iris = sns.load_dataset("iris") >>> ax = sns.boxenplot(data=iris, orient="h", palette="Set2") Use :func:`stripplot` to show the datapoints on top of the boxes: .. plot:: :context: close-figs >>> ax = sns.boxenplot(x="day", y="total_bill", data=tips, ... showfliers=False) >>> ax = sns.stripplot(x="day", y="total_bill", data=tips, ... size=4, color=".26") Use :func:`catplot` to combine :func:`boxenplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="boxen", ... 
height=4, aspect=.7); """).format(**_categorical_docs) @_deprecate_positional_args def stripplot( *, x=None, y=None, hue=None, data=None, order=None, hue_order=None, jitter=True, dodge=False, orient=None, color=None, palette=None, size=5, edgecolor="gray", linewidth=0, ax=None, **kwargs ): if "split" in kwargs: dodge = kwargs.pop("split") msg = "The `split` parameter has been renamed to `dodge`." warnings.warn(msg, UserWarning) plotter = _StripPlotter(x, y, hue, data, order, hue_order, jitter, dodge, orient, color, palette) if ax is None: ax = plt.gca() kwargs.setdefault("zorder", 3) size = kwargs.get("s", size) if linewidth is None: linewidth = size / 10 if edgecolor == "gray": edgecolor = plotter.gray kwargs.update(dict(s=size ** 2, edgecolor=edgecolor, linewidth=linewidth)) plotter.plot(ax, kwargs) return ax stripplot.__doc__ = dedent("""\ Draw a scatterplot where one variable is categorical. A strip plot can be drawn on its own, but it is also a good complement to a box or violin plot in cases where you want to show all observations along with some representation of the underlying distribution. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} jitter : float, ``True``/``1`` is special-cased, optional Amount of jitter (only along the categorical axis) to apply. This can be useful when you have many points and they overlap, so that it is easier to see the distribution. You can specify the amount of jitter (half the width of the uniform random variable support), or just use ``True`` for a good default. dodge : bool, optional When using ``hue`` nesting, setting this to ``True`` will separate the strips for different hue levels along the categorical axis. Otherwise, the points for each level will be plotted on top of each other. {orient} {color} {palette} size : float, optional Radius of the markers, in points. 
edgecolor : matplotlib color, "gray" is special-cased, optional Color of the lines around each point. If you pass ``"gray"``, the brightness is determined by the color palette used for the body of the points. {linewidth} {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.scatter`. Returns ------- {ax_out} See Also -------- {swarmplot} {boxplot} {violinplot} {catplot} Examples -------- Draw a single horizontal strip plot: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set_theme(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.stripplot(x=tips["total_bill"]) Group the strips by a categorical variable: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="day", y="total_bill", data=tips) Use a smaller amount of jitter: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="day", y="total_bill", data=tips, jitter=0.05) Draw horizontal strips: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="total_bill", y="day", data=tips) Draw outlines around the points: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="total_bill", y="day", data=tips, ... linewidth=1) Nest the strips within a second categorical variable: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="sex", y="total_bill", hue="day", data=tips) Draw each level of the ``hue`` variable at different locations on the major categorical axis: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="Set2", dodge=True) Control strip order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Draw strips with large points and different aesthetics: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="Set2", size=20, marker="D", ... 
edgecolor="gray", alpha=.25) Draw strips of observations on top of a box plot: .. plot:: :context: close-figs >>> import numpy as np >>> ax = sns.boxplot(x="tip", y="day", data=tips, whis=np.inf) >>> ax = sns.stripplot(x="tip", y="day", data=tips, color=".3") Draw strips of observations on top of a violin plot: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", data=tips, ... inner=None, color=".8") >>> ax = sns.stripplot(x="day", y="total_bill", data=tips) Use :func:`catplot` to combine a :func:`stripplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="strip", ... height=4, aspect=.7); """).format(**_categorical_docs) @_deprecate_positional_args def swarmplot( *, x=None, y=None, hue=None, data=None, order=None, hue_order=None, dodge=False, orient=None, color=None, palette=None, size=5, edgecolor="gray", linewidth=0, ax=None, **kwargs ): if "split" in kwargs: dodge = kwargs.pop("split") msg = "The `split` parameter has been renamed to `dodge`." warnings.warn(msg, UserWarning) plotter = _SwarmPlotter(x, y, hue, data, order, hue_order, dodge, orient, color, palette) if ax is None: ax = plt.gca() kwargs.setdefault("zorder", 3) size = kwargs.get("s", size) if linewidth is None: linewidth = size / 10 if edgecolor == "gray": edgecolor = plotter.gray kwargs.update(dict(s=size ** 2, edgecolor=edgecolor, linewidth=linewidth)) plotter.plot(ax, kwargs) return ax swarmplot.__doc__ = dedent("""\ Draw a categorical scatterplot with non-overlapping points. This function is similar to :func:`stripplot`, but the points are adjusted (only along the categorical axis) so that they don't overlap. 
This gives a better representation of the distribution of values, but it does not scale well to large numbers of observations. This style of plot is sometimes called a "beeswarm". A swarm plot can be drawn on its own, but it is also a good complement to a box or violin plot in cases where you want to show all observations along with some representation of the underlying distribution. Arranging the points properly requires an accurate transformation between data and point coordinates. This means that non-default axis limits must be set *before* drawing the plot. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} dodge : bool, optional When using ``hue`` nesting, setting this to ``True`` will separate the strips for different hue levels along the categorical axis. Otherwise, the points for each level will be plotted in one swarm. {orient} {color} {palette} size : float, optional Radius of the markers, in points. edgecolor : matplotlib color, "gray" is special-cased, optional Color of the lines around each point. If you pass ``"gray"``, the brightness is determined by the color palette used for the body of the points. {linewidth} {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.scatter`. Returns ------- {ax_out} See Also -------- {boxplot} {violinplot} {stripplot} {catplot} Examples -------- Draw a single horizontal swarm plot: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set_theme(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.swarmplot(x=tips["total_bill"]) Group the swarms by a categorical variable: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="day", y="total_bill", data=tips) Draw horizontal swarms: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="total_bill", y="day", data=tips) Color the points using a second categorical variable: .. 
plot:: :context: close-figs >>> ax = sns.swarmplot(x="day", y="total_bill", hue="sex", data=tips) Split each level of the ``hue`` variable along the categorical axis: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="Set2", dodge=True) Control swarm order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="time", y="total_bill", data=tips, ... order=["Dinner", "Lunch"]) Plot using larger points: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="time", y="total_bill", data=tips, size=6) Draw swarms of observations on top of a box plot: .. plot:: :context: close-figs >>> import numpy as np >>> ax = sns.boxplot(x="total_bill", y="day", data=tips, whis=np.inf) >>> ax = sns.swarmplot(x="total_bill", y="day", data=tips, color=".2") Draw swarms of observations on top of a violin plot: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", data=tips, inner=None) >>> ax = sns.swarmplot(x="day", y="total_bill", data=tips, ... color="white", edgecolor="gray") Use :func:`catplot` to combine a :func:`swarmplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="swarm", ...
height=4, aspect=.7); """).format(**_categorical_docs) @_deprecate_positional_args def barplot( *, x=None, y=None, hue=None, data=None, order=None, hue_order=None, estimator=np.mean, ci=95, n_boot=1000, units=None, seed=None, orient=None, color=None, palette=None, saturation=.75, errcolor=".26", errwidth=None, capsize=None, dodge=True, ax=None, **kwargs, ): plotter = _BarPlotter(x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, orient, color, palette, saturation, errcolor, errwidth, capsize, dodge) if ax is None: ax = plt.gca() plotter.plot(ax, kwargs) return ax barplot.__doc__ = dedent("""\ Show point estimates and confidence intervals as rectangular bars. A bar plot represents an estimate of central tendency for a numeric variable with the height of each rectangle and provides some indication of the uncertainty around that estimate using error bars. Bar plots include 0 in the quantitative axis range, and they are a good choice when 0 is a meaningful value for the quantitative variable, and you want to make comparisons against it. For datasets where 0 is not a meaningful value, a point plot will allow you to focus on differences between levels of one or more categorical variables. It is also important to keep in mind that a bar plot shows only the mean (or other estimator) value, but in many cases it may be more informative to show the distribution of values at each level of the categorical variables. In that case, other approaches such as a box or violin plot may be more appropriate. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} {stat_api_params} {orient} {color} {palette} {saturation} errcolor : matplotlib color Color for the lines that represent the confidence interval. {errwidth} {capsize} {dodge} {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.bar`. 
Returns ------- {ax_out} See Also -------- {countplot} {pointplot} {catplot} Examples -------- Draw a set of vertical bar plots grouped by a categorical variable: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set_theme(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.barplot(x="day", y="total_bill", data=tips) Draw a set of vertical bars with nested grouping by two variables: .. plot:: :context: close-figs >>> ax = sns.barplot(x="day", y="total_bill", hue="sex", data=tips) Draw a set of horizontal bars: .. plot:: :context: close-figs >>> ax = sns.barplot(x="tip", y="day", data=tips) Control bar order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.barplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Use median as the estimate of central tendency: .. plot:: :context: close-figs >>> from numpy import median >>> ax = sns.barplot(x="day", y="tip", data=tips, estimator=median) Show the standard error of the mean with the error bars: .. plot:: :context: close-figs >>> ax = sns.barplot(x="day", y="tip", data=tips, ci=68) Show standard deviation of observations instead of a confidence interval: .. plot:: :context: close-figs >>> ax = sns.barplot(x="day", y="tip", data=tips, ci="sd") Add "caps" to the error bars: .. plot:: :context: close-figs >>> ax = sns.barplot(x="day", y="tip", data=tips, capsize=.2) Use a different color palette for the bars: .. plot:: :context: close-figs >>> ax = sns.barplot(x="size", y="total_bill", data=tips, ... palette="Blues_d") Use ``hue`` without changing bar position or width: .. plot:: :context: close-figs >>> tips["weekend"] = tips["day"].isin(["Sat", "Sun"]) >>> ax = sns.barplot(x="day", y="total_bill", hue="weekend", ... data=tips, dodge=False) Plot all bars in a single color: .. plot:: :context: close-figs >>> ax = sns.barplot(x="size", y="total_bill", data=tips, ...
color="salmon", saturation=.5) Use :meth:`matplotlib.axes.Axes.bar` parameters to control the style. .. plot:: :context: close-figs >>> ax = sns.barplot(x="day", y="total_bill", data=tips, ... linewidth=2.5, facecolor=(1, 1, 1, 0), ... errcolor=".2", edgecolor=".2") Use :func:`catplot` to combine a :func:`barplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="bar", ... height=4, aspect=.7); """).format(**_categorical_docs) @_deprecate_positional_args def pointplot( *, x=None, y=None, hue=None, data=None, order=None, hue_order=None, estimator=np.mean, ci=95, n_boot=1000, units=None, seed=None, markers="o", linestyles="-", dodge=False, join=True, scale=1, orient=None, color=None, palette=None, errwidth=None, capsize=None, ax=None, **kwargs ): plotter = _PointPlotter(x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, markers, linestyles, dodge, join, scale, orient, color, palette, errwidth, capsize) if ax is None: ax = plt.gca() plotter.plot(ax) return ax pointplot.__doc__ = dedent("""\ Show point estimates and confidence intervals using scatter plot glyphs. A point plot represents an estimate of central tendency for a numeric variable by the position of scatter plot points and provides some indication of the uncertainty around that estimate using error bars. Point plots can be more useful than bar plots for focusing comparisons between different levels of one or more categorical variables. They are particularly adept at showing interactions: how the relationship between levels of one categorical variable changes across levels of a second categorical variable. 
The lines that join each point from the same ``hue`` level allow interactions to be judged by differences in slope, which is easier for the eyes than comparing the heights of several groups of points or bars. It is important to keep in mind that a point plot shows only the mean (or other estimator) value, but in many cases it may be more informative to show the distribution of values at each level of the categorical variables. In that case, other approaches such as a box or violin plot may be more appropriate. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} {stat_api_params} markers : string or list of strings, optional Markers to use for each of the ``hue`` levels. linestyles : string or list of strings, optional Line styles to use for each of the ``hue`` levels. dodge : bool or float, optional Amount to separate the points for each level of the ``hue`` variable along the categorical axis. join : bool, optional If ``True``, lines will be drawn between point estimates at the same ``hue`` level. scale : float, optional Scale factor for the plot elements. {orient} {color} {palette} {errwidth} {capsize} {ax_in} Returns ------- {ax_out} See Also -------- {barplot} {catplot} Examples -------- Draw a set of vertical point plots grouped by a categorical variable: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set_theme(style="darkgrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.pointplot(x="time", y="total_bill", data=tips) Draw a set of vertical points with nested grouping by two variables: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="total_bill", hue="smoker", ... data=tips) Separate the points for different hue levels along the categorical axis: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="total_bill", hue="smoker", ... data=tips, dodge=True) Use a different marker and line style for the hue levels: ..
plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="total_bill", hue="smoker", ... data=tips, ... markers=["o", "x"], ... linestyles=["-", "--"]) Draw a set of horizontal points: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="tip", y="day", data=tips) Don't draw a line connecting each point: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="tip", y="day", data=tips, join=False) Use a different color for a single-layer plot: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="total_bill", data=tips, ... color="#bb3f3f") Use a different color palette for the points: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="total_bill", hue="smoker", ... data=tips, palette="Set2") Control point order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Use median as the estimate of central tendency: .. plot:: :context: close-figs >>> from numpy import median >>> ax = sns.pointplot(x="day", y="tip", data=tips, estimator=median) Show the standard error of the mean with the error bars: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="day", y="tip", data=tips, ci=68) Show standard deviation of observations instead of a confidence interval: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="day", y="tip", data=tips, ci="sd") Add "caps" to the error bars: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="day", y="tip", data=tips, capsize=.2) Use :func:`catplot` to combine a :func:`pointplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="point", ... dodge=True, ... 
height=4, aspect=.7); """).format(**_categorical_docs) @_deprecate_positional_args def countplot( *, x=None, y=None, hue=None, data=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, dodge=True, ax=None, **kwargs ): estimator = len ci = None n_boot = 0 units = None seed = None errcolor = None errwidth = None capsize = None if x is None and y is not None: orient = "h" x = y elif y is None and x is not None: orient = "v" y = x elif x is not None and y is not None: raise ValueError("Cannot pass values for both `x` and `y`") plotter = _CountPlotter( x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, orient, color, palette, saturation, errcolor, errwidth, capsize, dodge ) plotter.value_label = "count" if ax is None: ax = plt.gca() plotter.plot(ax, kwargs) return ax countplot.__doc__ = dedent("""\ Show the counts of observations in each categorical bin using bars. A count plot can be thought of as a histogram across a categorical, instead of quantitative, variable. The basic API and options are identical to those for :func:`barplot`, so you can compare counts across nested variables. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} {orient} {color} {palette} {saturation} {dodge} {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.bar`. Returns ------- {ax_out} See Also -------- {barplot} {catplot} Examples -------- Show value counts for a single categorical variable: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set_theme(style="darkgrid") >>> titanic = sns.load_dataset("titanic") >>> ax = sns.countplot(x="class", data=titanic) Show value counts for two categorical variables: .. plot:: :context: close-figs >>> ax = sns.countplot(x="class", hue="who", data=titanic) Plot the bars horizontally: .. 
plot:: :context: close-figs >>> ax = sns.countplot(y="class", hue="who", data=titanic) Use a different color palette: .. plot:: :context: close-figs >>> ax = sns.countplot(x="who", data=titanic, palette="Set3") Use :meth:`matplotlib.axes.Axes.bar` parameters to control the style: .. plot:: :context: close-figs >>> ax = sns.countplot(x="who", data=titanic, ... facecolor=(0, 0, 0, 0), ... linewidth=5, ... edgecolor=sns.color_palette("dark", 3)) Use :func:`catplot` to combine a :func:`countplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="class", hue="who", col="survived", ... data=titanic, kind="count", ... height=4, aspect=.7); """).format(**_categorical_docs) def factorplot(*args, **kwargs): """Deprecated; please use `catplot` instead.""" msg = ( "The `factorplot` function has been renamed to `catplot`. The " "original name will be removed in a future release. Please update " "your code. Note that the default `kind` in `factorplot` (`'point'`) " "has changed to `'strip'` in `catplot`."
) warnings.warn(msg) if "size" in kwargs: kwargs["height"] = kwargs.pop("size") msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) kwargs.setdefault("kind", "point") return catplot(*args, **kwargs) @_deprecate_positional_args def catplot( *, x=None, y=None, hue=None, data=None, row=None, col=None, # TODO move in front of data when * is enforced col_wrap=None, estimator=np.mean, ci=95, n_boot=1000, units=None, seed=None, order=None, hue_order=None, row_order=None, col_order=None, kind="strip", height=5, aspect=1, orient=None, color=None, palette=None, legend=True, legend_out=True, sharex=True, sharey=True, margin_titles=False, facet_kws=None, **kwargs ): # Handle deprecations if "size" in kwargs: height = kwargs.pop("size") msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) # Determine the plotting function try: plot_func = globals()[kind + "plot"] except KeyError: err = "Plot kind '{}' is not recognized".format(kind) raise ValueError(err) # Alias the input variables to determine categorical order and palette # correctly in the case of a count plot if kind == "count": if x is None and y is not None: x_, y_, orient = y, y, "h" elif y is None and x is not None: x_, y_, orient = x, x, "v" else: raise ValueError("Either `x` or `y` must be None for kind='count'") else: x_, y_ = x, y # Check for attempt to plot onto specific axes and warn if "ax" in kwargs: msg = ("catplot is a figure-level function and does not accept " "target axes. 
You may wish to try {}".format(kind + "plot"))
        warnings.warn(msg, UserWarning)
        kwargs.pop("ax")

    # Determine the order for the whole dataset, which will be used in all
    # facets to ensure representation of all data in the final plot
    plotter_class = {
        "box": _BoxPlotter,
        "violin": _ViolinPlotter,
        "boxen": _LVPlotter,
        "bar": _BarPlotter,
        "point": _PointPlotter,
        "strip": _StripPlotter,
        "swarm": _SwarmPlotter,
        "count": _CountPlotter,
    }[kind]
    p = _CategoricalPlotter()
    p.require_numeric = plotter_class.require_numeric
    p.establish_variables(x_, y_, hue, data, orient, order, hue_order)
    if (
        order is not None
        or (sharex and p.orient == "v")
        or (sharey and p.orient == "h")
    ):
        # Sync categorical axis between facets to have the same categories
        order = p.group_names
    elif color is None and hue is None:
        msg = (
            "Setting `{}=False` with `color=None` may cause different levels of the "
            "`{}` variable to share colors. This will change in a future version."
        )
        if not sharex and p.orient == "v":
            warnings.warn(msg.format("sharex", "x"), UserWarning)
        if not sharey and p.orient == "h":
            warnings.warn(msg.format("sharey", "y"), UserWarning)

    hue_order = p.hue_names

    # Determine the palette to use
    # (FacetGrid will pass a value for ``color`` to the plotting function
    # so we need to define ``palette`` to get default behavior for the
    # categorical functions)
    p.establish_colors(color, palette, 1)
    if kind != "point" or hue is not None:
        palette = p.colors

    # Determine keyword arguments for the facets
    facet_kws = {} if facet_kws is None else facet_kws
    facet_kws.update(
        data=data, row=row, col=col,
        row_order=row_order, col_order=col_order,
        col_wrap=col_wrap, height=height, aspect=aspect,
        sharex=sharex, sharey=sharey,
        legend_out=legend_out, margin_titles=margin_titles,
        dropna=False,
    )

    # Determine keyword arguments for the plotting function
    plot_kws = dict(
        order=order, hue_order=hue_order,
        orient=orient, color=color, palette=palette,
    )
    plot_kws.update(kwargs)

    if kind in ["bar", "point"]:
        plot_kws.update(
            estimator=estimator, ci=ci,
            n_boot=n_boot, units=units, seed=seed,
        )

    # Initialize the facets
    g = FacetGrid(**facet_kws)

    # Draw the plot onto the facets
    g.map_dataframe(plot_func, x=x, y=y, hue=hue, **plot_kws)

    if p.orient == "h":
        g.set_axis_labels(p.value_label, p.group_label)
    else:
        g.set_axis_labels(p.group_label, p.value_label)

    # Special case axis labels for a count type plot
    if kind == "count":
        if x is None:
            g.set_axis_labels(x_var="count")
        if y is None:
            g.set_axis_labels(y_var="count")

    if legend and (hue is not None) and (hue not in [x, row, col]):
        hue_order = list(map(utils.to_utf8, hue_order))
        g.add_legend(title=hue, label_order=hue_order)

    return g


catplot.__doc__ = dedent("""\
    Figure-level interface for drawing categorical plots onto a FacetGrid.

    This function provides access to several axes-level functions that
    show the relationship between a numerical and one or more categorical
    variables using one of several visual representations. The ``kind``
    parameter selects the underlying axes-level function to use:

    Categorical scatterplots:

    - :func:`stripplot` (with ``kind="strip"``; the default)
    - :func:`swarmplot` (with ``kind="swarm"``)

    Categorical distribution plots:

    - :func:`boxplot` (with ``kind="box"``)
    - :func:`violinplot` (with ``kind="violin"``)
    - :func:`boxenplot` (with ``kind="boxen"``)

    Categorical estimate plots:

    - :func:`pointplot` (with ``kind="point"``)
    - :func:`barplot` (with ``kind="bar"``)
    - :func:`countplot` (with ``kind="count"``)

    Extra keyword arguments are passed to the underlying function, so you
    should refer to the documentation for each to see kind-specific options.

    Note that unlike when using the axes-level functions directly, data must
    be passed in a long-form DataFrame with variables specified by passing
    strings to ``x``, ``y``, ``hue``, etc.

    As in the case with the underlying plot functions, if variables have a
    ``categorical`` data type, the levels of the categorical variables and
    their order will be inferred from the objects.
    Otherwise you may have to alter the dataframe sorting or use the function
    parameters (``orient``, ``order``, ``hue_order``, etc.) to set up the
    plot correctly.

    {categorical_narrative}

    After plotting, the :class:`FacetGrid` with the plot is returned and can
    be used directly to tweak supporting plot details or add other layers.

    Parameters
    ----------
    {string_input_params}
    {long_form_data}
    row, col : names of variables in ``data``, optional
        Categorical variables that will determine the faceting of the grid.
    {col_wrap}
    {stat_api_params}
    {order_vars}
    row_order, col_order : lists of strings, optional
        Order to organize the rows and/or columns of the grid in, otherwise
        the orders are inferred from the data objects.
    kind : str, optional
        The kind of plot to draw, corresponding to the name of a categorical
        axes-level plotting function. Options are: "strip", "swarm", "box",
        "violin", "boxen", "point", "bar", or "count".
    {height}
    {aspect}
    {orient}
    {color}
    {palette}
    legend : bool, optional
        If ``True`` and there is a ``hue`` variable, draw a legend on the
        plot.
    {legend_out}
    {share_xy}
    {margin_titles}
    facet_kws : dict, optional
        Dictionary of other keyword arguments to pass to :class:`FacetGrid`.
    kwargs : key, value pairings
        Other keyword arguments are passed through to the underlying plotting
        function.

    Returns
    -------
    g : :class:`FacetGrid`
        Returns the :class:`FacetGrid` object with the plot on it for further
        tweaking.

    Examples
    --------

    Draw a single facet to use the :class:`FacetGrid` legend placement:

    .. plot::
        :context: close-figs

        >>> import seaborn as sns
        >>> sns.set_theme(style="ticks")
        >>> exercise = sns.load_dataset("exercise")
        >>> g = sns.catplot(x="time", y="pulse", hue="kind", data=exercise)

    Use a different plot kind to visualize the same data:

    .. plot::
        :context: close-figs

        >>> g = sns.catplot(x="time", y="pulse", hue="kind",
        ...                 data=exercise, kind="violin")

    Facet along the columns to show a third categorical variable:

    .. plot::
        :context: close-figs

        >>> g = sns.catplot(x="time", y="pulse", hue="kind",
        ...                 col="diet", data=exercise)

    Use a different height and aspect ratio for the facets:

    .. plot::
        :context: close-figs

        >>> g = sns.catplot(x="time", y="pulse", hue="kind",
        ...                 col="diet", data=exercise,
        ...                 height=5, aspect=.8)

    Make many column facets and wrap them into the rows of the grid:

    .. plot::
        :context: close-figs

        >>> titanic = sns.load_dataset("titanic")
        >>> g = sns.catplot(x="alive", col="deck", col_wrap=4,
        ...                 data=titanic[titanic.deck.notnull()],
        ...                 kind="count", height=2.5, aspect=.8)

    Plot horizontally and pass other keyword arguments to the plot function:

    .. plot::
        :context: close-figs

        >>> g = sns.catplot(x="age", y="embark_town",
        ...                 hue="sex", row="class",
        ...                 data=titanic[titanic.embark_town.notnull()],
        ...                 orient="h", height=2, aspect=3, palette="Set3",
        ...                 kind="violin", dodge=True, cut=0, bw=.2)

    Use methods on the returned :class:`FacetGrid` to tweak the presentation:

    .. plot::
        :context: close-figs

        >>> g = sns.catplot(x="who", y="survived", col="class",
        ...                 data=titanic, saturation=.5,
        ...                 kind="bar", ci=None, aspect=.6)
        >>> (g.set_axis_labels("", "Survival Rate")
        ...   .set_xticklabels(["Men", "Women", "Children"])
        ...   .set_titles("{{col_name}} {{col_var}}")
        ...   .set(ylim=(0, 1))
        ...
.despine(left=True)) #doctest: +ELLIPSIS """).format(**_categorical_docs) seaborn-0.11.2/seaborn/cm.py000066400000000000000000002007231410631356500156360ustar00rootroot00000000000000from matplotlib import colors, cm as mpl_cm _rocket_lut = [ [ 0.01060815, 0.01808215, 0.10018654], [ 0.01428972, 0.02048237, 0.10374486], [ 0.01831941, 0.0229766 , 0.10738511], [ 0.02275049, 0.02554464, 0.11108639], [ 0.02759119, 0.02818316, 0.11483751], [ 0.03285175, 0.03088792, 0.11863035], [ 0.03853466, 0.03365771, 0.12245873], [ 0.04447016, 0.03648425, 0.12631831], [ 0.05032105, 0.03936808, 0.13020508], [ 0.05611171, 0.04224835, 0.13411624], [ 0.0618531 , 0.04504866, 0.13804929], [ 0.06755457, 0.04778179, 0.14200206], [ 0.0732236 , 0.05045047, 0.14597263], [ 0.0788708 , 0.05305461, 0.14995981], [ 0.08450105, 0.05559631, 0.15396203], [ 0.09011319, 0.05808059, 0.15797687], [ 0.09572396, 0.06050127, 0.16200507], [ 0.10132312, 0.06286782, 0.16604287], [ 0.10692823, 0.06517224, 0.17009175], [ 0.1125315 , 0.06742194, 0.17414848], [ 0.11813947, 0.06961499, 0.17821272], [ 0.12375803, 0.07174938, 0.18228425], [ 0.12938228, 0.07383015, 0.18636053], [ 0.13501631, 0.07585609, 0.19044109], [ 0.14066867, 0.0778224 , 0.19452676], [ 0.14633406, 0.07973393, 0.1986151 ], [ 0.15201338, 0.08159108, 0.20270523], [ 0.15770877, 0.08339312, 0.20679668], [ 0.16342174, 0.0851396 , 0.21088893], [ 0.16915387, 0.08682996, 0.21498104], [ 0.17489524, 0.08848235, 0.2190294 ], [ 0.18065495, 0.09009031, 0.22303512], [ 0.18643324, 0.09165431, 0.22699705], [ 0.19223028, 0.09317479, 0.23091409], [ 0.19804623, 0.09465217, 0.23478512], [ 0.20388117, 0.09608689, 0.23860907], [ 0.20973515, 0.09747934, 0.24238489], [ 0.21560818, 0.09882993, 0.24611154], [ 0.22150014, 0.10013944, 0.2497868 ], [ 0.22741085, 0.10140876, 0.25340813], [ 0.23334047, 0.10263737, 0.25697736], [ 0.23928891, 0.10382562, 0.2604936 ], [ 0.24525608, 0.10497384, 0.26395596], [ 0.25124182, 0.10608236, 0.26736359], [ 0.25724602, 0.10715148, 0.27071569], [ 
0.26326851, 0.1081815 , 0.27401148], [ 0.26930915, 0.1091727 , 0.2772502 ], [ 0.27536766, 0.11012568, 0.28043021], [ 0.28144375, 0.11104133, 0.2835489 ], [ 0.2875374 , 0.11191896, 0.28660853], [ 0.29364846, 0.11275876, 0.2896085 ], [ 0.29977678, 0.11356089, 0.29254823], [ 0.30592213, 0.11432553, 0.29542718], [ 0.31208435, 0.11505284, 0.29824485], [ 0.31826327, 0.1157429 , 0.30100076], [ 0.32445869, 0.11639585, 0.30369448], [ 0.33067031, 0.11701189, 0.30632563], [ 0.33689808, 0.11759095, 0.3088938 ], [ 0.34314168, 0.11813362, 0.31139721], [ 0.34940101, 0.11863987, 0.3138355 ], [ 0.355676 , 0.11910909, 0.31620996], [ 0.36196644, 0.1195413 , 0.31852037], [ 0.36827206, 0.11993653, 0.32076656], [ 0.37459292, 0.12029443, 0.32294825], [ 0.38092887, 0.12061482, 0.32506528], [ 0.38727975, 0.12089756, 0.3271175 ], [ 0.39364518, 0.12114272, 0.32910494], [ 0.40002537, 0.12134964, 0.33102734], [ 0.40642019, 0.12151801, 0.33288464], [ 0.41282936, 0.12164769, 0.33467689], [ 0.41925278, 0.12173833, 0.33640407], [ 0.42569057, 0.12178916, 0.33806605], [ 0.43214263, 0.12179973, 0.33966284], [ 0.43860848, 0.12177004, 0.34119475], [ 0.44508855, 0.12169883, 0.34266151], [ 0.45158266, 0.12158557, 0.34406324], [ 0.45809049, 0.12142996, 0.34540024], [ 0.46461238, 0.12123063, 0.34667231], [ 0.47114798, 0.12098721, 0.34787978], [ 0.47769736, 0.12069864, 0.34902273], [ 0.48426077, 0.12036349, 0.35010104], [ 0.49083761, 0.11998161, 0.35111537], [ 0.49742847, 0.11955087, 0.35206533], [ 0.50403286, 0.11907081, 0.35295152], [ 0.51065109, 0.11853959, 0.35377385], [ 0.51728314, 0.1179558 , 0.35453252], [ 0.52392883, 0.11731817, 0.35522789], [ 0.53058853, 0.11662445, 0.35585982], [ 0.53726173, 0.11587369, 0.35642903], [ 0.54394898, 0.11506307, 0.35693521], [ 0.5506426 , 0.11420757, 0.35737863], [ 0.55734473, 0.11330456, 0.35775059], [ 0.56405586, 0.11235265, 0.35804813], [ 0.57077365, 0.11135597, 0.35827146], [ 0.5774991 , 0.11031233, 0.35841679], [ 0.58422945, 0.10922707, 0.35848469], [ 0.59096382, 
0.10810205, 0.35847347], [ 0.59770215, 0.10693774, 0.35838029], [ 0.60444226, 0.10573912, 0.35820487], [ 0.61118304, 0.10450943, 0.35794557], [ 0.61792306, 0.10325288, 0.35760108], [ 0.62466162, 0.10197244, 0.35716891], [ 0.63139686, 0.10067417, 0.35664819], [ 0.63812122, 0.09938212, 0.35603757], [ 0.64483795, 0.0980891 , 0.35533555], [ 0.65154562, 0.09680192, 0.35454107], [ 0.65824241, 0.09552918, 0.3536529 ], [ 0.66492652, 0.09428017, 0.3526697 ], [ 0.67159578, 0.09306598, 0.35159077], [ 0.67824099, 0.09192342, 0.3504148 ], [ 0.684863 , 0.09085633, 0.34914061], [ 0.69146268, 0.0898675 , 0.34776864], [ 0.69803757, 0.08897226, 0.3462986 ], [ 0.70457834, 0.0882129 , 0.34473046], [ 0.71108138, 0.08761223, 0.3430635 ], [ 0.7175507 , 0.08716212, 0.34129974], [ 0.72398193, 0.08688725, 0.33943958], [ 0.73035829, 0.0868623 , 0.33748452], [ 0.73669146, 0.08704683, 0.33543669], [ 0.74297501, 0.08747196, 0.33329799], [ 0.74919318, 0.08820542, 0.33107204], [ 0.75535825, 0.08919792, 0.32876184], [ 0.76145589, 0.09050716, 0.32637117], [ 0.76748424, 0.09213602, 0.32390525], [ 0.77344838, 0.09405684, 0.32136808], [ 0.77932641, 0.09634794, 0.31876642], [ 0.78513609, 0.09892473, 0.31610488], [ 0.79085854, 0.10184672, 0.313391 ], [ 0.7965014 , 0.10506637, 0.31063031], [ 0.80205987, 0.10858333, 0.30783 ], [ 0.80752799, 0.11239964, 0.30499738], [ 0.81291606, 0.11645784, 0.30213802], [ 0.81820481, 0.12080606, 0.29926105], [ 0.82341472, 0.12535343, 0.2963705 ], [ 0.82852822, 0.13014118, 0.29347474], [ 0.83355779, 0.13511035, 0.29057852], [ 0.83850183, 0.14025098, 0.2876878 ], [ 0.84335441, 0.14556683, 0.28480819], [ 0.84813096, 0.15099892, 0.281943 ], [ 0.85281737, 0.15657772, 0.27909826], [ 0.85742602, 0.1622583 , 0.27627462], [ 0.86196552, 0.16801239, 0.27346473], [ 0.86641628, 0.17387796, 0.27070818], [ 0.87079129, 0.17982114, 0.26797378], [ 0.87507281, 0.18587368, 0.26529697], [ 0.87925878, 0.19203259, 0.26268136], [ 0.8833417 , 0.19830556, 0.26014181], [ 0.88731387, 0.20469941, 
0.25769539], [ 0.89116859, 0.21121788, 0.2553592 ], [ 0.89490337, 0.21785614, 0.25314362], [ 0.8985026 , 0.22463251, 0.25108745], [ 0.90197527, 0.23152063, 0.24918223], [ 0.90530097, 0.23854541, 0.24748098], [ 0.90848638, 0.24568473, 0.24598324], [ 0.911533 , 0.25292623, 0.24470258], [ 0.9144225 , 0.26028902, 0.24369359], [ 0.91717106, 0.26773821, 0.24294137], [ 0.91978131, 0.27526191, 0.24245973], [ 0.92223947, 0.28287251, 0.24229568], [ 0.92456587, 0.29053388, 0.24242622], [ 0.92676657, 0.29823282, 0.24285536], [ 0.92882964, 0.30598085, 0.24362274], [ 0.93078135, 0.31373977, 0.24468803], [ 0.93262051, 0.3215093 , 0.24606461], [ 0.93435067, 0.32928362, 0.24775328], [ 0.93599076, 0.33703942, 0.24972157], [ 0.93752831, 0.34479177, 0.25199928], [ 0.93899289, 0.35250734, 0.25452808], [ 0.94036561, 0.36020899, 0.25734661], [ 0.94167588, 0.36786594, 0.2603949 ], [ 0.94291042, 0.37549479, 0.26369821], [ 0.94408513, 0.3830811 , 0.26722004], [ 0.94520419, 0.39062329, 0.27094924], [ 0.94625977, 0.39813168, 0.27489742], [ 0.94727016, 0.4055909 , 0.27902322], [ 0.94823505, 0.41300424, 0.28332283], [ 0.94914549, 0.42038251, 0.28780969], [ 0.95001704, 0.42771398, 0.29244728], [ 0.95085121, 0.43500005, 0.29722817], [ 0.95165009, 0.44224144, 0.30214494], [ 0.9524044 , 0.44944853, 0.3072105 ], [ 0.95312556, 0.45661389, 0.31239776], [ 0.95381595, 0.46373781, 0.31769923], [ 0.95447591, 0.47082238, 0.32310953], [ 0.95510255, 0.47787236, 0.32862553], [ 0.95569679, 0.48489115, 0.33421404], [ 0.95626788, 0.49187351, 0.33985601], [ 0.95681685, 0.49882008, 0.34555431], [ 0.9573439 , 0.50573243, 0.35130912], [ 0.95784842, 0.51261283, 0.35711942], [ 0.95833051, 0.51946267, 0.36298589], [ 0.95879054, 0.52628305, 0.36890904], [ 0.95922872, 0.53307513, 0.3748895 ], [ 0.95964538, 0.53983991, 0.38092784], [ 0.96004345, 0.54657593, 0.3870292 ], [ 0.96042097, 0.55328624, 0.39319057], [ 0.96077819, 0.55997184, 0.39941173], [ 0.9611152 , 0.5666337 , 0.40569343], [ 0.96143273, 0.57327231, 
0.41203603], [ 0.96173392, 0.57988594, 0.41844491], [ 0.96201757, 0.58647675, 0.42491751], [ 0.96228344, 0.59304598, 0.43145271], [ 0.96253168, 0.5995944 , 0.43805131], [ 0.96276513, 0.60612062, 0.44471698], [ 0.96298491, 0.6126247 , 0.45145074], [ 0.96318967, 0.61910879, 0.45824902], [ 0.96337949, 0.6255736 , 0.46511271], [ 0.96355923, 0.63201624, 0.47204746], [ 0.96372785, 0.63843852, 0.47905028], [ 0.96388426, 0.64484214, 0.4861196 ], [ 0.96403203, 0.65122535, 0.4932578 ], [ 0.96417332, 0.65758729, 0.50046894], [ 0.9643063 , 0.66393045, 0.5077467 ], [ 0.96443322, 0.67025402, 0.51509334], [ 0.96455845, 0.67655564, 0.52251447], [ 0.96467922, 0.68283846, 0.53000231], [ 0.96479861, 0.68910113, 0.53756026], [ 0.96492035, 0.69534192, 0.5451917 ], [ 0.96504223, 0.7015636 , 0.5528892 ], [ 0.96516917, 0.70776351, 0.5606593 ], [ 0.96530224, 0.71394212, 0.56849894], [ 0.96544032, 0.72010124, 0.57640375], [ 0.96559206, 0.72623592, 0.58438387], [ 0.96575293, 0.73235058, 0.59242739], [ 0.96592829, 0.73844258, 0.60053991], [ 0.96612013, 0.74451182, 0.60871954], [ 0.96632832, 0.75055966, 0.61696136], [ 0.96656022, 0.75658231, 0.62527295], [ 0.96681185, 0.76258381, 0.63364277], [ 0.96709183, 0.76855969, 0.64207921], [ 0.96739773, 0.77451297, 0.65057302], [ 0.96773482, 0.78044149, 0.65912731], [ 0.96810471, 0.78634563, 0.66773889], [ 0.96850919, 0.79222565, 0.6764046 ], [ 0.96893132, 0.79809112, 0.68512266], [ 0.96935926, 0.80395415, 0.69383201], [ 0.9698028 , 0.80981139, 0.70252255], [ 0.97025511, 0.81566605, 0.71120296], [ 0.97071849, 0.82151775, 0.71987163], [ 0.97120159, 0.82736371, 0.72851999], [ 0.97169389, 0.83320847, 0.73716071], [ 0.97220061, 0.83905052, 0.74578903], [ 0.97272597, 0.84488881, 0.75440141], [ 0.97327085, 0.85072354, 0.76299805], [ 0.97383206, 0.85655639, 0.77158353], [ 0.97441222, 0.86238689, 0.78015619], [ 0.97501782, 0.86821321, 0.78871034], [ 0.97564391, 0.87403763, 0.79725261], [ 0.97628674, 0.87986189, 0.8057883 ], [ 0.97696114, 0.88568129, 
0.81430324], [ 0.97765722, 0.89149971, 0.82280948], [ 0.97837585, 0.89731727, 0.83130786], [ 0.97912374, 0.90313207, 0.83979337], [ 0.979891 , 0.90894778, 0.84827858], [ 0.98067764, 0.91476465, 0.85676611], [ 0.98137749, 0.92061729, 0.86536915] ] _mako_lut = [ [ 0.04503935, 0.01482344, 0.02092227], [ 0.04933018, 0.01709292, 0.02535719], [ 0.05356262, 0.01950702, 0.03018802], [ 0.05774337, 0.02205989, 0.03545515], [ 0.06188095, 0.02474764, 0.04115287], [ 0.06598247, 0.0275665 , 0.04691409], [ 0.07005374, 0.03051278, 0.05264306], [ 0.07409947, 0.03358324, 0.05834631], [ 0.07812339, 0.03677446, 0.06403249], [ 0.08212852, 0.0400833 , 0.06970862], [ 0.08611731, 0.04339148, 0.07538208], [ 0.09009161, 0.04664706, 0.08105568], [ 0.09405308, 0.04985685, 0.08673591], [ 0.09800301, 0.05302279, 0.09242646], [ 0.10194255, 0.05614641, 0.09813162], [ 0.10587261, 0.05922941, 0.103854 ], [ 0.1097942 , 0.06227277, 0.10959847], [ 0.11370826, 0.06527747, 0.11536893], [ 0.11761516, 0.06824548, 0.12116393], [ 0.12151575, 0.07117741, 0.12698763], [ 0.12541095, 0.07407363, 0.1328442 ], [ 0.12930083, 0.07693611, 0.13873064], [ 0.13317849, 0.07976988, 0.14465095], [ 0.13701138, 0.08259683, 0.15060265], [ 0.14079223, 0.08542126, 0.15659379], [ 0.14452486, 0.08824175, 0.16262484], [ 0.14820351, 0.09106304, 0.16869476], [ 0.15183185, 0.09388372, 0.17480366], [ 0.15540398, 0.09670855, 0.18094993], [ 0.15892417, 0.09953561, 0.18713384], [ 0.16238588, 0.10236998, 0.19335329], [ 0.16579435, 0.10520905, 0.19960847], [ 0.16914226, 0.10805832, 0.20589698], [ 0.17243586, 0.11091443, 0.21221911], [ 0.17566717, 0.11378321, 0.21857219], [ 0.17884322, 0.11666074, 0.2249565 ], [ 0.18195582, 0.11955283, 0.23136943], [ 0.18501213, 0.12245547, 0.23781116], [ 0.18800459, 0.12537395, 0.24427914], [ 0.19093944, 0.1283047 , 0.25077369], [ 0.19381092, 0.13125179, 0.25729255], [ 0.19662307, 0.13421303, 0.26383543], [ 0.19937337, 0.13719028, 0.27040111], [ 0.20206187, 0.14018372, 0.27698891], [ 0.20469116, 
0.14319196, 0.28359861], [ 0.20725547, 0.14621882, 0.29022775], [ 0.20976258, 0.14925954, 0.29687795], [ 0.21220409, 0.15231929, 0.30354703], [ 0.21458611, 0.15539445, 0.31023563], [ 0.21690827, 0.15848519, 0.31694355], [ 0.21916481, 0.16159489, 0.32366939], [ 0.2213631 , 0.16471913, 0.33041431], [ 0.22349947, 0.1678599 , 0.33717781], [ 0.2255714 , 0.1710185 , 0.34395925], [ 0.22758415, 0.17419169, 0.35075983], [ 0.22953569, 0.17738041, 0.35757941], [ 0.23142077, 0.18058733, 0.3644173 ], [ 0.2332454 , 0.18380872, 0.37127514], [ 0.2350092 , 0.18704459, 0.3781528 ], [ 0.23670785, 0.190297 , 0.38504973], [ 0.23834119, 0.19356547, 0.39196711], [ 0.23991189, 0.19684817, 0.39890581], [ 0.24141903, 0.20014508, 0.4058667 ], [ 0.24286214, 0.20345642, 0.4128484 ], [ 0.24423453, 0.20678459, 0.41985299], [ 0.24554109, 0.21012669, 0.42688124], [ 0.2467815 , 0.21348266, 0.43393244], [ 0.24795393, 0.21685249, 0.4410088 ], [ 0.24905614, 0.22023618, 0.448113 ], [ 0.25007383, 0.22365053, 0.45519562], [ 0.25098926, 0.22710664, 0.46223892], [ 0.25179696, 0.23060342, 0.46925447], [ 0.25249346, 0.23414353, 0.47623196], [ 0.25307401, 0.23772973, 0.48316271], [ 0.25353152, 0.24136961, 0.49001976], [ 0.25386167, 0.24506548, 0.49679407], [ 0.25406082, 0.2488164 , 0.50348932], [ 0.25412435, 0.25262843, 0.51007843], [ 0.25404842, 0.25650743, 0.51653282], [ 0.25383134, 0.26044852, 0.52286845], [ 0.2534705 , 0.26446165, 0.52903422], [ 0.25296722, 0.2685428 , 0.53503572], [ 0.2523226 , 0.27269346, 0.54085315], [ 0.25153974, 0.27691629, 0.54645752], [ 0.25062402, 0.28120467, 0.55185939], [ 0.24958205, 0.28556371, 0.55701246], [ 0.24842386, 0.28998148, 0.56194601], [ 0.24715928, 0.29446327, 0.56660884], [ 0.24580099, 0.29899398, 0.57104399], [ 0.24436202, 0.30357852, 0.57519929], [ 0.24285591, 0.30819938, 0.57913247], [ 0.24129828, 0.31286235, 0.58278615], [ 0.23970131, 0.3175495 , 0.5862272 ], [ 0.23807973, 0.32226344, 0.58941872], [ 0.23644557, 0.32699241, 0.59240198], [ 0.2348113 , 0.33173196, 
0.59518282], [ 0.23318874, 0.33648036, 0.59775543], [ 0.2315855 , 0.34122763, 0.60016456], [ 0.23001121, 0.34597357, 0.60240251], [ 0.2284748 , 0.35071512, 0.6044784 ], [ 0.22698081, 0.35544612, 0.60642528], [ 0.22553305, 0.36016515, 0.60825252], [ 0.22413977, 0.36487341, 0.60994938], [ 0.22280246, 0.36956728, 0.61154118], [ 0.22152555, 0.37424409, 0.61304472], [ 0.22030752, 0.37890437, 0.61446646], [ 0.2191538 , 0.38354668, 0.61581561], [ 0.21806257, 0.38817169, 0.61709794], [ 0.21703799, 0.39277882, 0.61831922], [ 0.21607792, 0.39736958, 0.61948028], [ 0.21518463, 0.40194196, 0.62059763], [ 0.21435467, 0.40649717, 0.62167507], [ 0.21358663, 0.41103579, 0.62271724], [ 0.21288172, 0.41555771, 0.62373011], [ 0.21223835, 0.42006355, 0.62471794], [ 0.21165312, 0.42455441, 0.62568371], [ 0.21112526, 0.42903064, 0.6266318 ], [ 0.21065161, 0.43349321, 0.62756504], [ 0.21023306, 0.43794288, 0.62848279], [ 0.20985996, 0.44238227, 0.62938329], [ 0.20951045, 0.44680966, 0.63030696], [ 0.20916709, 0.45122981, 0.63124483], [ 0.20882976, 0.45564335, 0.63219599], [ 0.20849798, 0.46005094, 0.63315928], [ 0.20817199, 0.46445309, 0.63413391], [ 0.20785149, 0.46885041, 0.63511876], [ 0.20753716, 0.47324327, 0.63611321], [ 0.20722876, 0.47763224, 0.63711608], [ 0.20692679, 0.48201774, 0.63812656], [ 0.20663156, 0.48640018, 0.63914367], [ 0.20634336, 0.49078002, 0.64016638], [ 0.20606303, 0.49515755, 0.6411939 ], [ 0.20578999, 0.49953341, 0.64222457], [ 0.20552612, 0.50390766, 0.64325811], [ 0.20527189, 0.50828072, 0.64429331], [ 0.20502868, 0.51265277, 0.64532947], [ 0.20479718, 0.51702417, 0.64636539], [ 0.20457804, 0.52139527, 0.64739979], [ 0.20437304, 0.52576622, 0.64843198], [ 0.20418396, 0.53013715, 0.64946117], [ 0.20401238, 0.53450825, 0.65048638], [ 0.20385896, 0.53887991, 0.65150606], [ 0.20372653, 0.54325208, 0.65251978], [ 0.20361709, 0.5476249 , 0.6535266 ], [ 0.20353258, 0.55199854, 0.65452542], [ 0.20347472, 0.55637318, 0.655515 ], [ 0.20344718, 0.56074869, 
0.65649508], [ 0.20345161, 0.56512531, 0.65746419], [ 0.20349089, 0.56950304, 0.65842151], [ 0.20356842, 0.57388184, 0.65936642], [ 0.20368663, 0.57826181, 0.66029768], [ 0.20384884, 0.58264293, 0.6612145 ], [ 0.20405904, 0.58702506, 0.66211645], [ 0.20431921, 0.59140842, 0.66300179], [ 0.20463464, 0.59579264, 0.66387079], [ 0.20500731, 0.60017798, 0.66472159], [ 0.20544449, 0.60456387, 0.66555409], [ 0.20596097, 0.60894927, 0.66636568], [ 0.20654832, 0.61333521, 0.66715744], [ 0.20721003, 0.61772167, 0.66792838], [ 0.20795035, 0.62210845, 0.66867802], [ 0.20877302, 0.62649546, 0.66940555], [ 0.20968223, 0.63088252, 0.6701105 ], [ 0.21068163, 0.63526951, 0.67079211], [ 0.21177544, 0.63965621, 0.67145005], [ 0.21298582, 0.64404072, 0.67208182], [ 0.21430361, 0.64842404, 0.67268861], [ 0.21572716, 0.65280655, 0.67326978], [ 0.21726052, 0.65718791, 0.6738255 ], [ 0.21890636, 0.66156803, 0.67435491], [ 0.220668 , 0.66594665, 0.67485792], [ 0.22255447, 0.67032297, 0.67533374], [ 0.22458372, 0.67469531, 0.67578061], [ 0.22673713, 0.67906542, 0.67620044], [ 0.22901625, 0.6834332 , 0.67659251], [ 0.23142316, 0.68779836, 0.67695703], [ 0.23395924, 0.69216072, 0.67729378], [ 0.23663857, 0.69651881, 0.67760151], [ 0.23946645, 0.70087194, 0.67788018], [ 0.24242624, 0.70522162, 0.67813088], [ 0.24549008, 0.70957083, 0.67835215], [ 0.24863372, 0.71392166, 0.67854868], [ 0.25187832, 0.71827158, 0.67872193], [ 0.25524083, 0.72261873, 0.67887024], [ 0.25870947, 0.72696469, 0.67898912], [ 0.26229238, 0.73130855, 0.67907645], [ 0.26604085, 0.73564353, 0.67914062], [ 0.26993099, 0.73997282, 0.67917264], [ 0.27397488, 0.74429484, 0.67917096], [ 0.27822463, 0.74860229, 0.67914468], [ 0.28264201, 0.75290034, 0.67907959], [ 0.2873016 , 0.75717817, 0.67899164], [ 0.29215894, 0.76144162, 0.67886578], [ 0.29729823, 0.76567816, 0.67871894], [ 0.30268199, 0.76989232, 0.67853896], [ 0.30835665, 0.77407636, 0.67833512], [ 0.31435139, 0.77822478, 0.67811118], [ 0.3206671 , 0.78233575, 
0.67786729], [ 0.32733158, 0.78640315, 0.67761027], [ 0.33437168, 0.79042043, 0.67734882], [ 0.34182112, 0.79437948, 0.67709394], [ 0.34968889, 0.79827511, 0.67685638], [ 0.35799244, 0.80210037, 0.67664969], [ 0.36675371, 0.80584651, 0.67649539], [ 0.3759816 , 0.80950627, 0.67641393], [ 0.38566792, 0.81307432, 0.67642947], [ 0.39579804, 0.81654592, 0.67656899], [ 0.40634556, 0.81991799, 0.67686215], [ 0.41730243, 0.82318339, 0.67735255], [ 0.4285828 , 0.82635051, 0.6780564 ], [ 0.44012728, 0.82942353, 0.67900049], [ 0.45189421, 0.83240398, 0.68021733], [ 0.46378379, 0.83530763, 0.6817062 ], [ 0.47573199, 0.83814472, 0.68347352], [ 0.48769865, 0.84092197, 0.68552698], [ 0.49962354, 0.84365379, 0.68783929], [ 0.5114027 , 0.8463718 , 0.69029789], [ 0.52301693, 0.84908401, 0.69288545], [ 0.53447549, 0.85179048, 0.69561066], [ 0.54578602, 0.8544913 , 0.69848331], [ 0.55695565, 0.85718723, 0.70150427], [ 0.56798832, 0.85987893, 0.70468261], [ 0.57888639, 0.86256715, 0.70802931], [ 0.5896541 , 0.8652532 , 0.71154204], [ 0.60028928, 0.86793835, 0.71523675], [ 0.61079441, 0.87062438, 0.71910895], [ 0.62116633, 0.87331311, 0.72317003], [ 0.63140509, 0.87600675, 0.72741689], [ 0.64150735, 0.87870746, 0.73185717], [ 0.65147219, 0.8814179 , 0.73648495], [ 0.66129632, 0.8841403 , 0.74130658], [ 0.67097934, 0.88687758, 0.74631123], [ 0.68051833, 0.88963189, 0.75150483], [ 0.68991419, 0.89240612, 0.75687187], [ 0.69916533, 0.89520211, 0.76241714], [ 0.70827373, 0.89802257, 0.76812286], [ 0.71723995, 0.90086891, 0.77399039], [ 0.72606665, 0.90374337, 0.7800041 ], [ 0.73475675, 0.90664718, 0.78615802], [ 0.74331358, 0.90958151, 0.79244474], [ 0.75174143, 0.91254787, 0.79884925], [ 0.76004473, 0.91554656, 0.80536823], [ 0.76827704, 0.91856549, 0.81196513], [ 0.77647029, 0.921603 , 0.81855729], [ 0.78462009, 0.92466151, 0.82514119], [ 0.79273542, 0.92773848, 0.83172131], [ 0.8008109 , 0.93083672, 0.83829355], [ 0.80885107, 0.93395528, 0.84485982], [ 0.81685878, 0.9370938 , 
0.85142101], [ 0.82483206, 0.94025378, 0.8579751 ], [ 0.83277661, 0.94343371, 0.86452477], [ 0.84069127, 0.94663473, 0.87106853], [ 0.84857662, 0.9498573 , 0.8776059 ], [ 0.8564431 , 0.95309792, 0.88414253], [ 0.86429066, 0.95635719, 0.89067759], [ 0.87218969, 0.95960708, 0.89725384] ] _vlag_lut = [ [ 0.13850039, 0.41331206, 0.74052025], [ 0.15077609, 0.41762684, 0.73970427], [ 0.16235219, 0.4219191 , 0.7389667 ], [ 0.1733322 , 0.42619024, 0.73832537], [ 0.18382538, 0.43044226, 0.73776764], [ 0.19394034, 0.4346772 , 0.73725867], [ 0.20367115, 0.43889576, 0.73685314], [ 0.21313625, 0.44310003, 0.73648045], [ 0.22231173, 0.44729079, 0.73619681], [ 0.23125148, 0.45146945, 0.73597803], [ 0.23998101, 0.45563715, 0.7358223 ], [ 0.24853358, 0.45979489, 0.73571524], [ 0.25691416, 0.4639437 , 0.73566943], [ 0.26513894, 0.46808455, 0.73568319], [ 0.27322194, 0.47221835, 0.73575497], [ 0.28117543, 0.47634598, 0.73588332], [ 0.28901021, 0.48046826, 0.73606686], [ 0.2967358 , 0.48458597, 0.73630433], [ 0.30436071, 0.48869986, 0.73659451], [ 0.3118955 , 0.49281055, 0.73693255], [ 0.31935389, 0.49691847, 0.73730851], [ 0.32672701, 0.5010247 , 0.73774013], [ 0.33402607, 0.50512971, 0.73821941], [ 0.34125337, 0.50923419, 0.73874905], [ 0.34840921, 0.51333892, 0.73933402], [ 0.35551826, 0.51744353, 0.73994642], [ 0.3625676 , 0.52154929, 0.74060763], [ 0.36956356, 0.52565656, 0.74131327], [ 0.37649902, 0.52976642, 0.74207698], [ 0.38340273, 0.53387791, 0.74286286], [ 0.39025859, 0.53799253, 0.7436962 ], [ 0.39706821, 0.54211081, 0.744578 ], [ 0.40384046, 0.54623277, 0.74549872], [ 0.41058241, 0.55035849, 0.74645094], [ 0.41728385, 0.55448919, 0.74745174], [ 0.42395178, 0.55862494, 0.74849357], [ 0.4305964 , 0.56276546, 0.74956387], [ 0.4372044 , 0.56691228, 0.75068412], [ 0.4437909 , 0.57106468, 0.75183427], [ 0.45035117, 0.5752235 , 0.75302312], [ 0.45687824, 0.57938983, 0.75426297], [ 0.46339713, 0.58356191, 0.75551816], [ 0.46988778, 0.58774195, 0.75682037], [ 0.47635605, 
0.59192986, 0.75816245], [ 0.48281101, 0.5961252 , 0.75953212], [ 0.4892374 , 0.60032986, 0.76095418], [ 0.49566225, 0.60454154, 0.76238852], [ 0.50206137, 0.60876307, 0.76387371], [ 0.50845128, 0.61299312, 0.76538551], [ 0.5148258 , 0.61723272, 0.76693475], [ 0.52118385, 0.62148236, 0.76852436], [ 0.52753571, 0.62574126, 0.77013939], [ 0.53386831, 0.63001125, 0.77180152], [ 0.54020159, 0.63429038, 0.7734803 ], [ 0.54651272, 0.63858165, 0.77521306], [ 0.55282975, 0.64288207, 0.77695608], [ 0.55912585, 0.64719519, 0.77875327], [ 0.56542599, 0.65151828, 0.78056551], [ 0.57170924, 0.65585426, 0.78242747], [ 0.57799572, 0.6602009 , 0.78430751], [ 0.58426817, 0.66456073, 0.78623458], [ 0.590544 , 0.66893178, 0.78818117], [ 0.59680758, 0.67331643, 0.79017369], [ 0.60307553, 0.67771273, 0.79218572], [ 0.60934065, 0.68212194, 0.79422987], [ 0.61559495, 0.68654548, 0.7963202 ], [ 0.62185554, 0.69098125, 0.79842918], [ 0.62810662, 0.69543176, 0.80058381], [ 0.63436425, 0.69989499, 0.80275812], [ 0.64061445, 0.70437326, 0.80497621], [ 0.6468706 , 0.70886488, 0.80721641], [ 0.65312213, 0.7133717 , 0.80949719], [ 0.65937818, 0.71789261, 0.81180392], [ 0.66563334, 0.72242871, 0.81414642], [ 0.67189155, 0.72697967, 0.81651872], [ 0.67815314, 0.73154569, 0.81892097], [ 0.68441395, 0.73612771, 0.82136094], [ 0.69068321, 0.74072452, 0.82382353], [ 0.69694776, 0.7453385 , 0.82633199], [ 0.70322431, 0.74996721, 0.8288583 ], [ 0.70949595, 0.75461368, 0.83143221], [ 0.7157774 , 0.75927574, 0.83402904], [ 0.72206299, 0.76395461, 0.83665922], [ 0.72835227, 0.76865061, 0.8393242 ], [ 0.73465238, 0.7733628 , 0.84201224], [ 0.74094862, 0.77809393, 0.84474951], [ 0.74725683, 0.78284158, 0.84750915], [ 0.75357103, 0.78760701, 0.85030217], [ 0.75988961, 0.79239077, 0.85313207], [ 0.76621987, 0.79719185, 0.85598668], [ 0.77255045, 0.8020125 , 0.85888658], [ 0.77889241, 0.80685102, 0.86181298], [ 0.78524572, 0.81170768, 0.86476656], [ 0.79159841, 0.81658489, 0.86776906], [ 0.79796459, 0.82148036, 
0.8707962 ], [ 0.80434168, 0.82639479, 0.87385315], [ 0.8107221 , 0.83132983, 0.87695392], [ 0.81711301, 0.8362844 , 0.88008641], [ 0.82351479, 0.84125863, 0.88325045], [ 0.82992772, 0.84625263, 0.88644594], [ 0.83634359, 0.85126806, 0.8896878 ], [ 0.84277295, 0.85630293, 0.89295721], [ 0.84921192, 0.86135782, 0.89626076], [ 0.85566206, 0.866432 , 0.89959467], [ 0.86211514, 0.87152627, 0.90297183], [ 0.86857483, 0.87663856, 0.90638248], [ 0.87504231, 0.88176648, 0.90981938], [ 0.88151194, 0.88690782, 0.91328493], [ 0.88797938, 0.89205857, 0.91677544], [ 0.89443865, 0.89721298, 0.9202854 ], [ 0.90088204, 0.90236294, 0.92380601], [ 0.90729768, 0.90749778, 0.92732797], [ 0.91367037, 0.91260329, 0.93083814], [ 0.91998105, 0.91766106, 0.93431861], [ 0.92620596, 0.92264789, 0.93774647], [ 0.93231683, 0.9275351 , 0.94109192], [ 0.93827772, 0.9322888 , 0.94432312], [ 0.94404755, 0.93686925, 0.94740137], [ 0.94958284, 0.94123072, 0.95027696], [ 0.95482682, 0.9453245 , 0.95291103], [ 0.9597248 , 0.94909728, 0.95525103], [ 0.96422552, 0.95249273, 0.95723271], [ 0.96826161, 0.95545812, 0.95882188], [ 0.97178458, 0.95793984, 0.95995705], [ 0.97474105, 0.95989142, 0.96059997], [ 0.97708604, 0.96127366, 0.96071853], [ 0.97877855, 0.96205832, 0.96030095], [ 0.97978484, 0.96222949, 0.95935496], [ 0.9805997 , 0.96155216, 0.95813083], [ 0.98152619, 0.95993719, 0.95639322], [ 0.9819726 , 0.95766608, 0.95399269], [ 0.98191855, 0.9547873 , 0.95098107], [ 0.98138514, 0.95134771, 0.94740644], [ 0.98040845, 0.94739906, 0.94332125], [ 0.97902107, 0.94300131, 0.93878672], [ 0.97729348, 0.93820409, 0.93385135], [ 0.9752533 , 0.933073 , 0.92858252], [ 0.97297834, 0.92765261, 0.92302309], [ 0.97049104, 0.92200317, 0.91723505], [ 0.96784372, 0.91616744, 0.91126063], [ 0.96507281, 0.91018664, 0.90514124], [ 0.96222034, 0.90409203, 0.89890756], [ 0.9593079 , 0.89791478, 0.89259122], [ 0.95635626, 0.89167908, 0.88621654], [ 0.95338303, 0.88540373, 0.87980238], [ 0.95040174, 0.87910333, 0.87336339], 
[ 0.94742246, 0.87278899, 0.86691076], [ 0.94445249, 0.86646893, 0.86045277], [ 0.94150476, 0.86014606, 0.85399191], [ 0.93857394, 0.85382798, 0.84753642], [ 0.93566206, 0.84751766, 0.84108935], [ 0.93277194, 0.8412164 , 0.83465197], [ 0.92990106, 0.83492672, 0.82822708], [ 0.92704736, 0.82865028, 0.82181656], [ 0.92422703, 0.82238092, 0.81541333], [ 0.92142581, 0.81612448, 0.80902415], [ 0.91864501, 0.80988032, 0.80264838], [ 0.91587578, 0.80365187, 0.79629001], [ 0.9131367 , 0.79743115, 0.78994 ], [ 0.91041602, 0.79122265, 0.78360361], [ 0.90771071, 0.78502727, 0.77728196], [ 0.90501581, 0.77884674, 0.7709771 ], [ 0.90235365, 0.77267117, 0.76467793], [ 0.8997019 , 0.76650962, 0.75839484], [ 0.89705346, 0.76036481, 0.752131 ], [ 0.89444021, 0.75422253, 0.74587047], [ 0.89183355, 0.74809474, 0.73962689], [ 0.88923216, 0.74198168, 0.73340061], [ 0.88665892, 0.73587283, 0.72717995], [ 0.88408839, 0.72977904, 0.72097718], [ 0.88153537, 0.72369332, 0.71478461], [ 0.87899389, 0.7176179 , 0.70860487], [ 0.87645157, 0.71155805, 0.7024439 ], [ 0.8739399 , 0.70549893, 0.6962854 ], [ 0.87142626, 0.6994551 , 0.69014561], [ 0.8689268 , 0.69341868, 0.68401597], [ 0.86643562, 0.687392 , 0.67789917], [ 0.86394434, 0.68137863, 0.67179927], [ 0.86147586, 0.67536728, 0.665704 ], [ 0.85899928, 0.66937226, 0.6596292 ], [ 0.85654668, 0.66337773, 0.6535577 ], [ 0.85408818, 0.65739772, 0.64750494], [ 0.85164413, 0.65142189, 0.64145983], [ 0.84920091, 0.6454565 , 0.63542932], [ 0.84676427, 0.63949827, 0.62941 ], [ 0.84433231, 0.63354773, 0.62340261], [ 0.84190106, 0.62760645, 0.61740899], [ 0.83947935, 0.62166951, 0.61142404], [ 0.8370538 , 0.61574332, 0.60545478], [ 0.83463975, 0.60981951, 0.59949247], [ 0.83221877, 0.60390724, 0.593547 ], [ 0.82980985, 0.59799607, 0.58760751], [ 0.82740268, 0.59209095, 0.58167944], [ 0.82498638, 0.5861973 , 0.57576866], [ 0.82258181, 0.5803034 , 0.56986307], [ 0.82016611, 0.57442123, 0.56397539], [ 0.81776305, 0.56853725, 0.55809173], [ 0.81534551, 
0.56266602, 0.55222741], [ 0.81294293, 0.55679056, 0.5463651 ], [ 0.81052113, 0.55092973, 0.54052443], [ 0.80811509, 0.54506305, 0.53468464], [ 0.80568952, 0.53921036, 0.52886622], [ 0.80327506, 0.53335335, 0.52305077], [ 0.80084727, 0.52750583, 0.51725256], [ 0.79842217, 0.5216578 , 0.51146173], [ 0.79599382, 0.51581223, 0.50568155], [ 0.79355781, 0.50997127, 0.49991444], [ 0.79112596, 0.50412707, 0.49415289], [ 0.78867442, 0.49829386, 0.48841129], [ 0.7862306 , 0.49245398, 0.48267247], [ 0.7837687 , 0.48662309, 0.47695216], [ 0.78130809, 0.4807883 , 0.47123805], [ 0.77884467, 0.47495151, 0.46553236], [ 0.77636283, 0.46912235, 0.45984473], [ 0.77388383, 0.46328617, 0.45416141], [ 0.77138912, 0.45745466, 0.44849398], [ 0.76888874, 0.45162042, 0.44283573], [ 0.76638802, 0.44577901, 0.43718292], [ 0.76386116, 0.43994762, 0.43155211], [ 0.76133542, 0.43410655, 0.42592523], [ 0.75880631, 0.42825801, 0.42030488], [ 0.75624913, 0.42241905, 0.41470727], [ 0.7536919 , 0.41656866, 0.40911347], [ 0.75112748, 0.41071104, 0.40352792], [ 0.74854331, 0.40485474, 0.3979589 ], [ 0.74594723, 0.39899309, 0.39240088], [ 0.74334332, 0.39312199, 0.38685075], [ 0.74073277, 0.38723941, 0.3813074 ], [ 0.73809409, 0.38136133, 0.37578553], [ 0.73544692, 0.37547129, 0.37027123], [ 0.73278943, 0.36956954, 0.36476549], [ 0.73011829, 0.36365761, 0.35927038], [ 0.72743485, 0.35773314, 0.35378465], [ 0.72472722, 0.35180504, 0.34831662], [ 0.72200473, 0.34586421, 0.34285937], [ 0.71927052, 0.33990649, 0.33741033], [ 0.71652049, 0.33393396, 0.33197219], [ 0.71375362, 0.32794602, 0.32654545], [ 0.71096951, 0.32194148, 0.32113016], [ 0.70816772, 0.31591904, 0.31572637], [ 0.70534784, 0.30987734, 0.31033414], [ 0.70250944, 0.30381489, 0.30495353], [ 0.69965211, 0.2977301 , 0.2995846 ], [ 0.6967754 , 0.29162126, 0.29422741], [ 0.69388446, 0.28548074, 0.28887769], [ 0.69097561, 0.2793096 , 0.28353795], [ 0.68803513, 0.27311993, 0.27821876], [ 0.6850794 , 0.26689144, 0.27290694], [ 0.682108 , 0.26062114, 
0.26760246], [ 0.67911013, 0.2543177 , 0.26231367], [ 0.67609393, 0.24796818, 0.25703372], [ 0.67305921, 0.24156846, 0.25176238], [ 0.67000176, 0.23511902, 0.24650278], [ 0.66693423, 0.22859879, 0.24124404], [ 0.6638441 , 0.22201742, 0.2359961 ], [ 0.66080672, 0.21526712, 0.23069468] ] _icefire_lut = [ [ 0.73936227, 0.90443867, 0.85757238], [ 0.72888063, 0.89639109, 0.85488394], [ 0.71834255, 0.88842162, 0.8521605 ], [ 0.70773866, 0.88052939, 0.849422 ], [ 0.69706215, 0.87271313, 0.84668315], [ 0.68629021, 0.86497329, 0.84398721], [ 0.67543654, 0.85730617, 0.84130969], [ 0.66448539, 0.84971123, 0.83868005], [ 0.65342679, 0.84218728, 0.83611512], [ 0.64231804, 0.83471867, 0.83358584], [ 0.63117745, 0.827294 , 0.83113431], [ 0.62000484, 0.81991069, 0.82876741], [ 0.60879435, 0.81256797, 0.82648905], [ 0.59754118, 0.80526458, 0.82430414], [ 0.58624247, 0.79799884, 0.82221573], [ 0.57489525, 0.7907688 , 0.82022901], [ 0.56349779, 0.78357215, 0.81834861], [ 0.55204294, 0.77640827, 0.81657563], [ 0.54052516, 0.76927562, 0.81491462], [ 0.52894085, 0.76217215, 0.81336913], [ 0.51728854, 0.75509528, 0.81194156], [ 0.50555676, 0.74804469, 0.81063503], [ 0.49373871, 0.7410187 , 0.80945242], [ 0.48183174, 0.73401449, 0.80839675], [ 0.46982587, 0.72703075, 0.80747097], [ 0.45770893, 0.72006648, 0.80667756], [ 0.44547249, 0.71311941, 0.80601991], [ 0.43318643, 0.70617126, 0.80549278], [ 0.42110294, 0.69916972, 0.80506683], [ 0.40925101, 0.69211059, 0.80473246], [ 0.3976693 , 0.68498786, 0.80448272], [ 0.38632002, 0.67781125, 0.80431024], [ 0.37523981, 0.67057537, 0.80420832], [ 0.36442578, 0.66328229, 0.80417474], [ 0.35385939, 0.65593699, 0.80420591], [ 0.34358916, 0.64853177, 0.8043 ], [ 0.33355526, 0.64107876, 0.80445484], [ 0.32383062, 0.63356578, 0.80467091], [ 0.31434372, 0.62600624, 0.8049475 ], [ 0.30516161, 0.618389 , 0.80528692], [ 0.29623491, 0.61072284, 0.80569021], [ 0.28759072, 0.60300319, 0.80616055], [ 0.27923924, 0.59522877, 0.80669803], [ 0.27114651, 0.5874047 
, 0.80730545], [ 0.26337153, 0.57952055, 0.80799113], [ 0.25588696, 0.57157984, 0.80875922], [ 0.248686 , 0.56358255, 0.80961366], [ 0.24180668, 0.55552289, 0.81055123], [ 0.23526251, 0.54739477, 0.8115939 ], [ 0.22921445, 0.53918506, 0.81267292], [ 0.22397687, 0.53086094, 0.8137141 ], [ 0.21977058, 0.52241482, 0.81457651], [ 0.21658989, 0.51384321, 0.81528511], [ 0.21452772, 0.50514155, 0.81577278], [ 0.21372783, 0.49630865, 0.81589566], [ 0.21409503, 0.48734861, 0.81566163], [ 0.2157176 , 0.47827123, 0.81487615], [ 0.21842857, 0.46909168, 0.81351614], [ 0.22211705, 0.45983212, 0.81146983], [ 0.22665681, 0.45052233, 0.80860217], [ 0.23176013, 0.44119137, 0.80494325], [ 0.23727775, 0.43187704, 0.80038017], [ 0.24298285, 0.42261123, 0.79493267], [ 0.24865068, 0.41341842, 0.78869164], [ 0.25423116, 0.40433127, 0.78155831], [ 0.25950239, 0.39535521, 0.77376848], [ 0.2644736 , 0.38651212, 0.76524809], [ 0.26901584, 0.37779582, 0.75621942], [ 0.27318141, 0.36922056, 0.746605 ], [ 0.27690355, 0.3607736 , 0.73659374], [ 0.28023585, 0.35244234, 0.72622103], [ 0.28306009, 0.34438449, 0.71500731], [ 0.28535896, 0.33660243, 0.70303975], [ 0.28708711, 0.32912157, 0.69034504], [ 0.28816354, 0.32200604, 0.67684067], [ 0.28862749, 0.31519824, 0.66278813], [ 0.28847904, 0.30869064, 0.6482815 ], [ 0.28770912, 0.30250126, 0.63331265], [ 0.28640325, 0.29655509, 0.61811374], [ 0.28458943, 0.29082155, 0.60280913], [ 0.28233561, 0.28527482, 0.58742866], [ 0.27967038, 0.2798938 , 0.57204225], [ 0.27665361, 0.27465357, 0.55667809], [ 0.27332564, 0.2695165 , 0.54145387], [ 0.26973851, 0.26447054, 0.52634916], [ 0.2659204 , 0.25949691, 0.511417 ], [ 0.26190145, 0.25458123, 0.49668768], [ 0.2577151 , 0.24971691, 0.48214874], [ 0.25337618, 0.24490494, 0.46778758], [ 0.24890842, 0.24013332, 0.45363816], [ 0.24433654, 0.23539226, 0.4397245 ], [ 0.23967922, 0.23067729, 0.4260591 ], [ 0.23495608, 0.22598894, 0.41262952], [ 0.23018113, 0.22132414, 0.39945577], [ 0.22534609, 0.21670847, 
0.38645794], [ 0.22048761, 0.21211723, 0.37372555], [ 0.2156198 , 0.20755389, 0.36125301], [ 0.21074637, 0.20302717, 0.34903192], [ 0.20586893, 0.19855368, 0.33701661], [ 0.20101757, 0.19411573, 0.32529173], [ 0.19619947, 0.18972425, 0.31383846], [ 0.19140726, 0.18540157, 0.30260777], [ 0.1866769 , 0.1811332 , 0.29166583], [ 0.18201285, 0.17694992, 0.28088776], [ 0.17745228, 0.17282141, 0.27044211], [ 0.17300684, 0.16876921, 0.26024893], [ 0.16868273, 0.16479861, 0.25034479], [ 0.16448691, 0.16091728, 0.24075373], [ 0.16043195, 0.15714351, 0.23141745], [ 0.15652427, 0.15348248, 0.22238175], [ 0.15277065, 0.14994111, 0.21368395], [ 0.14918274, 0.14653431, 0.20529486], [ 0.14577095, 0.14327403, 0.19720829], [ 0.14254381, 0.14016944, 0.18944326], [ 0.13951035, 0.13723063, 0.18201072], [ 0.13667798, 0.13446606, 0.17493774], [ 0.13405762, 0.13188822, 0.16820842], [ 0.13165767, 0.12950667, 0.16183275], [ 0.12948748, 0.12733187, 0.15580631], [ 0.12755435, 0.1253723 , 0.15014098], [ 0.12586516, 0.12363617, 0.1448459 ], [ 0.12442647, 0.12213143, 0.13992571], [ 0.12324241, 0.12086419, 0.13539995], [ 0.12232067, 0.11984278, 0.13124644], [ 0.12166209, 0.11907077, 0.12749671], [ 0.12126982, 0.11855309, 0.12415079], [ 0.12114244, 0.11829179, 0.1212385 ], [ 0.12127766, 0.11828837, 0.11878534], [ 0.12284806, 0.1179729 , 0.11772022], [ 0.12619498, 0.11721796, 0.11770203], [ 0.129968 , 0.11663788, 0.11792377], [ 0.13410011, 0.11625146, 0.11839138], [ 0.13855459, 0.11606618, 0.11910584], [ 0.14333775, 0.11607038, 0.1200606 ], [ 0.148417 , 0.11626929, 0.12125453], [ 0.15377389, 0.11666192, 0.12268364], [ 0.15941427, 0.11723486, 0.12433911], [ 0.16533376, 0.11797856, 0.12621303], [ 0.17152547, 0.11888403, 0.12829735], [ 0.17797765, 0.11994436, 0.13058435], [ 0.18468769, 0.12114722, 0.13306426], [ 0.19165663, 0.12247737, 0.13572616], [ 0.19884415, 0.12394381, 0.1385669 ], [ 0.20627181, 0.12551883, 0.14157124], [ 0.21394877, 0.12718055, 0.14472604], [ 0.22184572, 0.12893119, 0.14802579], 
[ 0.22994394, 0.13076731, 0.15146314], [ 0.23823937, 0.13267611, 0.15502793], [ 0.24676041, 0.13462172, 0.15870321], [ 0.25546457, 0.13661751, 0.16248722], [ 0.26433628, 0.13865956, 0.16637301], [ 0.27341345, 0.14070412, 0.17034221], [ 0.28264773, 0.14277192, 0.1743957 ], [ 0.29202272, 0.14486161, 0.17852793], [ 0.30159648, 0.14691224, 0.1827169 ], [ 0.31129002, 0.14897583, 0.18695213], [ 0.32111555, 0.15103351, 0.19119629], [ 0.33107961, 0.1530674 , 0.19543758], [ 0.34119892, 0.15504762, 0.1996803 ], [ 0.35142388, 0.15701131, 0.20389086], [ 0.36178937, 0.1589124 , 0.20807639], [ 0.37229381, 0.16073993, 0.21223189], [ 0.38288348, 0.16254006, 0.2163249 ], [ 0.39359592, 0.16426336, 0.22036577], [ 0.40444332, 0.16588767, 0.22434027], [ 0.41537995, 0.16745325, 0.2282297 ], [ 0.42640867, 0.16894939, 0.23202755], [ 0.43754706, 0.17034847, 0.23572899], [ 0.44878564, 0.1716535 , 0.23932344], [ 0.4601126 , 0.17287365, 0.24278607], [ 0.47151732, 0.17401641, 0.24610337], [ 0.48300689, 0.17506676, 0.2492737 ], [ 0.49458302, 0.17601892, 0.25227688], [ 0.50623876, 0.17687777, 0.255096 ], [ 0.5179623 , 0.17765528, 0.2577162 ], [ 0.52975234, 0.17835232, 0.2601134 ], [ 0.54159776, 0.17898292, 0.26226847], [ 0.55348804, 0.17956232, 0.26416003], [ 0.56541729, 0.18010175, 0.26575971], [ 0.57736669, 0.180631 , 0.26704888], [ 0.58932081, 0.18117827, 0.26800409], [ 0.60127582, 0.18175888, 0.26858488], [ 0.61319563, 0.1824336 , 0.2687872 ], [ 0.62506376, 0.18324015, 0.26858301], [ 0.63681202, 0.18430173, 0.26795276], [ 0.64842603, 0.18565472, 0.26689463], [ 0.65988195, 0.18734638, 0.26543435], [ 0.67111966, 0.18948885, 0.26357955], [ 0.68209194, 0.19216636, 0.26137175], [ 0.69281185, 0.19535326, 0.25887063], [ 0.70335022, 0.19891271, 0.25617971], [ 0.71375229, 0.20276438, 0.25331365], [ 0.72401436, 0.20691287, 0.25027366], [ 0.73407638, 0.21145051, 0.24710661], [ 0.74396983, 0.21631913, 0.24380715], [ 0.75361506, 0.22163653, 0.24043996], [ 0.7630579 , 0.22731637, 0.23700095], [ 
0.77222228, 0.23346231, 0.23356628], [ 0.78115441, 0.23998404, 0.23013825], [ 0.78979746, 0.24694858, 0.22678822], [ 0.79819286, 0.25427223, 0.22352658], [ 0.80630444, 0.26198807, 0.22040877], [ 0.81417437, 0.27001406, 0.21744645], [ 0.82177364, 0.27837336, 0.21468316], [ 0.82915955, 0.28696963, 0.21210766], [ 0.83628628, 0.2958499 , 0.20977813], [ 0.84322168, 0.30491136, 0.20766435], [ 0.84995458, 0.31415945, 0.2057863 ], [ 0.85648867, 0.32358058, 0.20415327], [ 0.86286243, 0.33312058, 0.20274969], [ 0.86908321, 0.34276705, 0.20157271], [ 0.87512876, 0.3525416 , 0.20064949], [ 0.88100349, 0.36243385, 0.19999078], [ 0.8866469 , 0.37249496, 0.1997976 ], [ 0.89203964, 0.38273475, 0.20013431], [ 0.89713496, 0.39318156, 0.20121514], [ 0.90195099, 0.40380687, 0.20301555], [ 0.90648379, 0.41460191, 0.20558847], [ 0.9106967 , 0.42557857, 0.20918529], [ 0.91463791, 0.43668557, 0.21367954], [ 0.91830723, 0.44790913, 0.21916352], [ 0.92171507, 0.45922856, 0.22568002], [ 0.92491786, 0.4705936 , 0.23308207], [ 0.92790792, 0.48200153, 0.24145932], [ 0.93073701, 0.49341219, 0.25065486], [ 0.93343918, 0.5048017 , 0.26056148], [ 0.93602064, 0.51616486, 0.27118485], [ 0.93850535, 0.52748892, 0.28242464], [ 0.94092933, 0.53875462, 0.29416042], [ 0.94330011, 0.5499628 , 0.30634189], [ 0.94563159, 0.56110987, 0.31891624], [ 0.94792955, 0.57219822, 0.33184256], [ 0.95020929, 0.5832232 , 0.34508419], [ 0.95247324, 0.59419035, 0.35859866], [ 0.95471709, 0.60510869, 0.37236035], [ 0.95698411, 0.61595766, 0.38629631], [ 0.95923863, 0.62676473, 0.40043317], [ 0.9615041 , 0.6375203 , 0.41474106], [ 0.96371553, 0.64826619, 0.42928335], [ 0.96591497, 0.65899621, 0.44380444], [ 0.96809871, 0.66971662, 0.45830232], [ 0.9702495 , 0.6804394 , 0.47280492], [ 0.9723881 , 0.69115622, 0.48729272], [ 0.97450723, 0.70187358, 0.50178034], [ 0.9766108 , 0.712592 , 0.51626837], [ 0.97871716, 0.72330511, 0.53074053], [ 0.98082222, 0.73401769, 0.54520694], [ 0.9829001 , 0.74474445, 0.5597019 ], [ 0.98497466, 
0.75547635, 0.57420239], [ 0.98705581, 0.76621129, 0.58870185], [ 0.98913325, 0.77695637, 0.60321626], [ 0.99119918, 0.78771716, 0.61775821], [ 0.9932672 , 0.79848979, 0.63231691], [ 0.99535958, 0.80926704, 0.64687278], [ 0.99740544, 0.82008078, 0.66150571], [ 0.9992197 , 0.83100723, 0.6764127 ] ] _flare_lut = [ [0.92907237, 0.68878959, 0.50411509], [0.92891402, 0.68494686, 0.50173994], [0.92864754, 0.68116207, 0.4993754], [0.92836112, 0.67738527, 0.49701572], [0.9280599, 0.67361354, 0.49466044], [0.92775569, 0.66983999, 0.49230866], [0.9274375, 0.66607098, 0.48996097], [0.927111, 0.66230315, 0.48761688], [0.92677996, 0.6585342, 0.485276], [0.92644317, 0.65476476, 0.48293832], [0.92609759, 0.65099658, 0.48060392], [0.925747, 0.64722729, 0.47827244], [0.92539502, 0.64345456, 0.47594352], [0.92503106, 0.6396848, 0.47361782], [0.92466877, 0.6359095, 0.47129427], [0.92429828, 0.63213463, 0.46897349], [0.92392172, 0.62835879, 0.46665526], [0.92354597, 0.62457749, 0.46433898], [0.9231622, 0.6207962, 0.46202524], [0.92277222, 0.61701365, 0.45971384], [0.92237978, 0.61322733, 0.45740444], [0.92198615, 0.60943622, 0.45509686], [0.92158735, 0.60564276, 0.45279137], [0.92118373, 0.60184659, 0.45048789], [0.92077582, 0.59804722, 0.44818634], [0.92036413, 0.59424414, 0.44588663], [0.91994924, 0.5904368, 0.44358868], [0.91952943, 0.58662619, 0.4412926], [0.91910675, 0.58281075, 0.43899817], [0.91868096, 0.57899046, 0.4367054], [0.91825103, 0.57516584, 0.43441436], [0.91781857, 0.57133556, 0.43212486], [0.9173814, 0.56750099, 0.4298371], [0.91694139, 0.56366058, 0.42755089], [0.91649756, 0.55981483, 0.42526631], [0.91604942, 0.55596387, 0.42298339], [0.9155979, 0.55210684, 0.42070204], [0.9151409, 0.54824485, 0.4184247], [0.91466138, 0.54438817, 0.41617858], [0.91416896, 0.54052962, 0.41396347], [0.91366559, 0.53666778, 0.41177769], [0.91315173, 0.53280208, 0.40962196], [0.91262605, 0.52893336, 0.40749715], [0.91208866, 0.52506133, 0.40540404], [0.91153952, 0.52118582, 
0.40334346], [0.91097732, 0.51730767, 0.4013163], [0.910403, 0.51342591, 0.39932342], [0.90981494, 0.50954168, 0.39736571], [0.90921368, 0.5056543, 0.39544411], [0.90859797, 0.50176463, 0.39355952], [0.90796841, 0.49787195, 0.39171297], [0.90732341, 0.4939774, 0.38990532], [0.90666382, 0.49008006, 0.38813773], [0.90598815, 0.486181, 0.38641107], [0.90529624, 0.48228017, 0.38472641], [0.90458808, 0.47837738, 0.38308489], [0.90386248, 0.47447348, 0.38148746], [0.90311921, 0.4705685, 0.37993524], [0.90235809, 0.46666239, 0.37842943], [0.90157824, 0.46275577, 0.37697105], [0.90077904, 0.45884905, 0.37556121], [0.89995995, 0.45494253, 0.37420106], [0.89912041, 0.4510366, 0.37289175], [0.8982602, 0.44713126, 0.37163458], [0.89737819, 0.44322747, 0.37043052], [0.89647387, 0.43932557, 0.36928078], [0.89554477, 0.43542759, 0.36818855], [0.89458871, 0.4315354, 0.36715654], [0.89360794, 0.42764714, 0.36618273], [0.89260152, 0.42376366, 0.36526813], [0.8915687, 0.41988565, 0.36441384], [0.89050882, 0.41601371, 0.36362102], [0.8894159, 0.41215334, 0.36289639], [0.888292, 0.40830288, 0.36223756], [0.88713784, 0.40446193, 0.36164328], [0.88595253, 0.40063149, 0.36111438], [0.88473115, 0.39681635, 0.3606566], [0.88347246, 0.39301805, 0.36027074], [0.88217931, 0.38923439, 0.35995244], [0.880851, 0.38546632, 0.35970244], [0.87947728, 0.38172422, 0.35953127], [0.87806542, 0.37800172, 0.35942941], [0.87661509, 0.37429964, 0.35939659], [0.87511668, 0.37062819, 0.35944178], [0.87357554, 0.36698279, 0.35955811], [0.87199254, 0.3633634, 0.35974223], [0.87035691, 0.35978174, 0.36000516], [0.86867647, 0.35623087, 0.36033559], [0.86694949, 0.35271349, 0.36073358], [0.86516775, 0.34923921, 0.36120624], [0.86333996, 0.34580008, 0.36174113], [0.86145909, 0.3424046, 0.36234402], [0.85952586, 0.33905327, 0.36301129], [0.85754536, 0.33574168, 0.36373567], [0.855514, 0.33247568, 0.36451271], [0.85344392, 0.32924217, 0.36533344], [0.8513284, 0.32604977, 0.36620106], [0.84916723, 0.32289973, 
0.36711424], [0.84696243, 0.31979068, 0.36806976], [0.84470627, 0.31673295, 0.36907066], [0.84240761, 0.31371695, 0.37010969], [0.84005337, 0.31075974, 0.37119284], [0.83765537, 0.30784814, 0.3723105], [0.83520234, 0.30499724, 0.37346726], [0.83270291, 0.30219766, 0.37465552], [0.83014895, 0.29946081, 0.37587769], [0.82754694, 0.29677989, 0.37712733], [0.82489111, 0.29416352, 0.37840532], [0.82218644, 0.29160665, 0.37970606], [0.81942908, 0.28911553, 0.38102921], [0.81662276, 0.28668665, 0.38236999], [0.81376555, 0.28432371, 0.383727], [0.81085964, 0.28202508, 0.38509649], [0.8079055, 0.27979128, 0.38647583], [0.80490309, 0.27762348, 0.3878626], [0.80185613, 0.2755178, 0.38925253], [0.79876118, 0.27347974, 0.39064559], [0.79562644, 0.27149928, 0.39203532], [0.79244362, 0.2695883, 0.39342447], [0.78922456, 0.26773176, 0.3948046], [0.78596161, 0.26594053, 0.39617873], [0.7826624, 0.26420493, 0.39754146], [0.77932717, 0.26252522, 0.39889102], [0.77595363, 0.2609049, 0.4002279], [0.77254999, 0.25933319, 0.40154704], [0.76911107, 0.25781758, 0.40284959], [0.76564158, 0.25635173, 0.40413341], [0.76214598, 0.25492998, 0.40539471], [0.75861834, 0.25356035, 0.40663694], [0.75506533, 0.25223402, 0.40785559], [0.75148963, 0.2509473, 0.40904966], [0.74788835, 0.24970413, 0.41022028], [0.74426345, 0.24850191, 0.41136599], [0.74061927, 0.24733457, 0.41248516], [0.73695678, 0.24620072, 0.41357737], [0.73327278, 0.24510469, 0.41464364], [0.72957096, 0.24404127, 0.4156828], [0.72585394, 0.24300672, 0.41669383], [0.7221226, 0.24199971, 0.41767651], [0.71837612, 0.24102046, 0.41863486], [0.71463236, 0.24004289, 0.41956983], [0.7108932, 0.23906316, 0.42048681], [0.70715842, 0.23808142, 0.42138647], [0.70342811, 0.2370976, 0.42226844], [0.69970218, 0.23611179, 0.42313282], [0.69598055, 0.2351247, 0.42397678], [0.69226314, 0.23413578, 0.42480327], [0.68854988, 0.23314511, 0.42561234], [0.68484064, 0.23215279, 0.42640419], [0.68113541, 0.23115942, 0.42717615], [0.67743412, 0.23016472, 
0.42792989], [0.67373662, 0.22916861, 0.42866642], [0.67004287, 0.22817117, 0.42938576], [0.66635279, 0.22717328, 0.43008427], [0.66266621, 0.22617435, 0.43076552], [0.65898313, 0.22517434, 0.43142956], [0.65530349, 0.22417381, 0.43207427], [0.65162696, 0.22317307, 0.4327001], [0.64795375, 0.22217149, 0.43330852], [0.64428351, 0.22116972, 0.43389854], [0.64061624, 0.22016818, 0.43446845], [0.63695183, 0.21916625, 0.43502123], [0.63329016, 0.21816454, 0.43555493], [0.62963102, 0.2171635, 0.43606881], [0.62597451, 0.21616235, 0.43656529], [0.62232019, 0.21516239, 0.43704153], [0.61866821, 0.21416307, 0.43749868], [0.61501835, 0.21316435, 0.43793808], [0.61137029, 0.21216761, 0.4383556], [0.60772426, 0.2111715, 0.43875552], [0.60407977, 0.21017746, 0.43913439], [0.60043678, 0.20918503, 0.43949412], [0.59679524, 0.20819447, 0.43983393], [0.59315487, 0.20720639, 0.44015254], [0.58951566, 0.20622027, 0.44045213], [0.58587715, 0.20523751, 0.44072926], [0.5822395, 0.20425693, 0.44098758], [0.57860222, 0.20328034, 0.44122241], [0.57496549, 0.20230637, 0.44143805], [0.57132875, 0.20133689, 0.4416298], [0.56769215, 0.20037071, 0.44180142], [0.5640552, 0.19940936, 0.44194923], [0.56041794, 0.19845221, 0.44207535], [0.55678004, 0.1975, 0.44217824], [0.55314129, 0.19655316, 0.44225723], [0.54950166, 0.19561118, 0.44231412], [0.54585987, 0.19467771, 0.44234111], [0.54221157, 0.19375869, 0.44233698], [0.5385549, 0.19285696, 0.44229959], [0.5348913, 0.19197036, 0.44222958], [0.53122177, 0.1910974, 0.44212735], [0.52754464, 0.19024042, 0.44199159], [0.52386353, 0.18939409, 0.44182449], [0.52017476, 0.18856368, 0.44162345], [0.51648277, 0.18774266, 0.44139128], [0.51278481, 0.18693492, 0.44112605], [0.50908361, 0.18613639, 0.4408295], [0.50537784, 0.18534893, 0.44050064], [0.50166912, 0.18457008, 0.44014054], [0.49795686, 0.18380056, 0.43974881], [0.49424218, 0.18303865, 0.43932623], [0.49052472, 0.18228477, 0.43887255], [0.48680565, 0.1815371, 0.43838867], [0.48308419, 0.18079663, 
0.43787408], [0.47936222, 0.18006056, 0.43733022], [0.47563799, 0.17933127, 0.43675585], [0.47191466, 0.17860416, 0.43615337], [0.46818879, 0.17788392, 0.43552047], [0.46446454, 0.17716458, 0.43486036], [0.46073893, 0.17645017, 0.43417097], [0.45701462, 0.17573691, 0.43345429], [0.45329097, 0.17502549, 0.43271025], [0.44956744, 0.17431649, 0.4319386], [0.44584668, 0.17360625, 0.43114133], [0.44212538, 0.17289906, 0.43031642], [0.43840678, 0.17219041, 0.42946642], [0.43469046, 0.17148074, 0.42859124], [0.4309749, 0.17077192, 0.42769008], [0.42726297, 0.17006003, 0.42676519], [0.42355299, 0.16934709, 0.42581586], [0.41984535, 0.16863258, 0.42484219], [0.41614149, 0.16791429, 0.42384614], [0.41244029, 0.16719372, 0.42282661], [0.40874177, 0.16647061, 0.42178429], [0.40504765, 0.16574261, 0.42072062], [0.401357, 0.16501079, 0.41963528], [0.397669, 0.16427607, 0.418528], [0.39398585, 0.16353554, 0.41740053], [0.39030735, 0.16278924, 0.41625344], [0.3866314, 0.16203977, 0.41508517], [0.38295904, 0.16128519, 0.41389849], [0.37928736, 0.16052483, 0.41270599], [0.37562649, 0.15974704, 0.41151182], [0.37197803, 0.15895049, 0.41031532], [0.36833779, 0.15813871, 0.40911916], [0.36470944, 0.15730861, 0.40792149], [0.36109117, 0.15646169, 0.40672362], [0.35748213, 0.15559861, 0.40552633], [0.353885, 0.15471714, 0.40432831], [0.35029682, 0.15381967, 0.4031316], [0.34671861, 0.1529053, 0.40193587], [0.34315191, 0.15197275, 0.40074049], [0.33959331, 0.15102466, 0.3995478], [0.33604378, 0.15006017, 0.39835754], [0.33250529, 0.14907766, 0.39716879], [0.32897621, 0.14807831, 0.39598285], [0.3254559, 0.14706248, 0.39480044], [0.32194567, 0.14602909, 0.39362106], [0.31844477, 0.14497857, 0.39244549], [0.31494974, 0.14391333, 0.39127626], [0.31146605, 0.14282918, 0.39011024], [0.30798857, 0.1417297, 0.38895105], [0.30451661, 0.14061515, 0.38779953], [0.30105136, 0.13948445, 0.38665531], [0.2975886, 0.1383403, 0.38552159], [0.29408557, 0.13721193, 0.38442775] ] _crest_lut = [ [0.6468274, 
0.80289262, 0.56592265], [0.64233318, 0.80081141, 0.56639461], [0.63791969, 0.7987162, 0.56674976], [0.6335316, 0.79661833, 0.56706128], [0.62915226, 0.7945212, 0.56735066], [0.62477862, 0.79242543, 0.56762143], [0.62042003, 0.79032918, 0.56786129], [0.61606327, 0.78823508, 0.56808666], [0.61171322, 0.78614216, 0.56829092], [0.60736933, 0.78405055, 0.56847436], [0.60302658, 0.78196121, 0.56864272], [0.59868708, 0.77987374, 0.56879289], [0.59435366, 0.77778758, 0.56892099], [0.59001953, 0.77570403, 0.56903477], [0.58568753, 0.77362254, 0.56913028], [0.58135593, 0.77154342, 0.56920908], [0.57702623, 0.76946638, 0.56926895], [0.57269165, 0.76739266, 0.5693172], [0.56835934, 0.76532092, 0.56934507], [0.56402533, 0.76325185, 0.56935664], [0.55968429, 0.76118643, 0.56935732], [0.55534159, 0.75912361, 0.56934052], [0.55099572, 0.75706366, 0.56930743], [0.54664626, 0.75500662, 0.56925799], [0.54228969, 0.75295306, 0.56919546], [0.53792417, 0.75090328, 0.56912118], [0.53355172, 0.74885687, 0.5690324], [0.52917169, 0.74681387, 0.56892926], [0.52478243, 0.74477453, 0.56881287], [0.52038338, 0.74273888, 0.56868323], [0.5159739, 0.74070697, 0.56854039], [0.51155269, 0.73867895, 0.56838507], [0.50711872, 0.73665492, 0.56821764], [0.50267118, 0.73463494, 0.56803826], [0.49822926, 0.73261388, 0.56785146], [0.49381422, 0.73058524, 0.56767484], [0.48942421, 0.72854938, 0.56751036], [0.48505993, 0.72650623, 0.56735752], [0.48072207, 0.72445575, 0.56721583], [0.4764113, 0.72239788, 0.56708475], [0.47212827, 0.72033258, 0.56696376], [0.46787361, 0.71825983, 0.56685231], [0.46364792, 0.71617961, 0.56674986], [0.45945271, 0.71409167, 0.56665625], [0.45528878, 0.71199595, 0.56657103], [0.45115557, 0.70989276, 0.5664931], [0.44705356, 0.70778212, 0.56642189], [0.44298321, 0.70566406, 0.56635683], [0.43894492, 0.70353863, 0.56629734], [0.43493911, 0.70140588, 0.56624286], [0.43096612, 0.69926587, 0.5661928], [0.42702625, 0.69711868, 0.56614659], [0.42311977, 0.69496438, 0.56610368], 
[0.41924689, 0.69280308, 0.56606355], [0.41540778, 0.69063486, 0.56602564], [0.41160259, 0.68845984, 0.56598944], [0.40783143, 0.68627814, 0.56595436], [0.40409434, 0.68408988, 0.56591994], [0.40039134, 0.68189518, 0.56588564], [0.39672238, 0.6796942, 0.56585103], [0.39308781, 0.67748696, 0.56581581], [0.38949137, 0.67527276, 0.56578084], [0.38592889, 0.67305266, 0.56574422], [0.38240013, 0.67082685, 0.56570561], [0.37890483, 0.66859548, 0.56566462], [0.37544276, 0.66635871, 0.56562081], [0.37201365, 0.66411673, 0.56557372], [0.36861709, 0.6618697, 0.5655231], [0.36525264, 0.65961782, 0.56546873], [0.36191986, 0.65736125, 0.56541032], [0.35861935, 0.65509998, 0.56534768], [0.35535621, 0.65283302, 0.56528211], [0.35212361, 0.65056188, 0.56521171], [0.34892097, 0.64828676, 0.56513633], [0.34574785, 0.64600783, 0.56505539], [0.34260357, 0.64372528, 0.5649689], [0.33948744, 0.64143931, 0.56487679], [0.33639887, 0.6391501, 0.56477869], [0.33334501, 0.63685626, 0.56467661], [0.33031952, 0.63455911, 0.564569], [0.3273199, 0.63225924, 0.56445488], [0.32434526, 0.62995682, 0.56433457], [0.32139487, 0.62765201, 0.56420795], [0.31846807, 0.62534504, 0.56407446], [0.3155731, 0.62303426, 0.56393695], [0.31270304, 0.62072111, 0.56379321], [0.30985436, 0.61840624, 0.56364307], [0.30702635, 0.61608984, 0.56348606], [0.30421803, 0.61377205, 0.56332267], [0.30143611, 0.61145167, 0.56315419], [0.29867863, 0.60912907, 0.56298054], [0.29593872, 0.60680554, 0.56280022], [0.29321538, 0.60448121, 0.56261376], [0.2905079, 0.60215628, 0.56242036], [0.28782827, 0.5998285, 0.56222366], [0.28516521, 0.59749996, 0.56202093], [0.28251558, 0.59517119, 0.56181204], [0.27987847, 0.59284232, 0.56159709], [0.27726216, 0.59051189, 0.56137785], [0.27466434, 0.58818027, 0.56115433], [0.2720767, 0.58584893, 0.56092486], [0.26949829, 0.58351797, 0.56068983], [0.26693801, 0.58118582, 0.56045121], [0.26439366, 0.57885288, 0.56020858], [0.26185616, 0.57652063, 0.55996077], [0.25932459, 0.57418919, 
0.55970795], [0.25681303, 0.57185614, 0.55945297], [0.25431024, 0.56952337, 0.55919385], [0.25180492, 0.56719255, 0.5589305], [0.24929311, 0.56486397, 0.5586654], [0.24678356, 0.56253666, 0.55839491], [0.24426587, 0.56021153, 0.55812473], [0.24174022, 0.55788852, 0.55785448], [0.23921167, 0.55556705, 0.55758211], [0.23668315, 0.55324675, 0.55730676], [0.23414742, 0.55092825, 0.55703167], [0.23160473, 0.54861143, 0.5567573], [0.22905996, 0.54629572, 0.55648168], [0.22651648, 0.54398082, 0.5562029], [0.22396709, 0.54166721, 0.55592542], [0.22141221, 0.53935481, 0.55564885], [0.21885269, 0.53704347, 0.55537294], [0.21629986, 0.53473208, 0.55509319], [0.21374297, 0.53242154, 0.5548144], [0.21118255, 0.53011166, 0.55453708], [0.2086192, 0.52780237, 0.55426067], [0.20605624, 0.52549322, 0.55398479], [0.20350004, 0.5231837, 0.55370601], [0.20094292, 0.52087429, 0.55342884], [0.19838567, 0.51856489, 0.55315283], [0.19582911, 0.51625531, 0.55287818], [0.19327413, 0.51394542, 0.55260469], [0.19072933, 0.51163448, 0.5523289], [0.18819045, 0.50932268, 0.55205372], [0.18565609, 0.50701014, 0.55177937], [0.18312739, 0.50469666, 0.55150597], [0.18060561, 0.50238204, 0.55123374], [0.178092, 0.50006616, 0.55096224], [0.17558808, 0.49774882, 0.55069118], [0.17310341, 0.49542924, 0.5504176], [0.17063111, 0.49310789, 0.55014445], [0.1681728, 0.49078458, 0.54987159], [0.1657302, 0.48845913, 0.54959882], [0.16330517, 0.48613135, 0.54932605], [0.16089963, 0.48380104, 0.54905306], [0.15851561, 0.48146803, 0.54877953], [0.15615526, 0.47913212, 0.54850526], [0.15382083, 0.47679313, 0.54822991], [0.15151471, 0.47445087, 0.54795318], [0.14924112, 0.47210502, 0.54767411], [0.1470032, 0.46975537, 0.54739226], [0.14480101, 0.46740187, 0.54710832], [0.14263736, 0.46504434, 0.54682188], [0.14051521, 0.46268258, 0.54653253], [0.13843761, 0.46031639, 0.54623985], [0.13640774, 0.45794558, 0.5459434], [0.13442887, 0.45556994, 0.54564272], [0.1325044, 0.45318928, 0.54533736], [0.13063777, 0.4508034, 
0.54502674], [0.12883252, 0.44841211, 0.5447104], [0.12709242, 0.44601517, 0.54438795], [0.1254209, 0.44361244, 0.54405855], [0.12382162, 0.44120373, 0.54372156], [0.12229818, 0.43878887, 0.54337634], [0.12085453, 0.4363676, 0.54302253], [0.11949938, 0.43393955, 0.54265715], [0.11823166, 0.43150478, 0.54228104], [0.11705496, 0.42906306, 0.54189388], [0.115972, 0.42661431, 0.54149449], [0.11498598, 0.42415835, 0.54108222], [0.11409965, 0.42169502, 0.54065622], [0.11331533, 0.41922424, 0.5402155], [0.11263542, 0.41674582, 0.53975931], [0.1120615, 0.4142597, 0.53928656], [0.11159738, 0.41176567, 0.53879549], [0.11125248, 0.40926325, 0.53828203], [0.11101698, 0.40675289, 0.53774864], [0.11089152, 0.40423445, 0.53719455], [0.11085121, 0.4017095, 0.53662425], [0.11087217, 0.39917938, 0.53604354], [0.11095515, 0.39664394, 0.53545166], [0.11110676, 0.39410282, 0.53484509], [0.11131735, 0.39155635, 0.53422678], [0.11158595, 0.38900446, 0.53359634], [0.11191139, 0.38644711, 0.5329534], [0.11229224, 0.38388426, 0.53229748], [0.11273683, 0.38131546, 0.53162393], [0.11323438, 0.37874109, 0.53093619], [0.11378271, 0.37616112, 0.53023413], [0.11437992, 0.37357557, 0.52951727], [0.11502681, 0.37098429, 0.52878396], [0.11572661, 0.36838709, 0.52803124], [0.11646936, 0.36578429, 0.52726234], [0.11725299, 0.3631759, 0.52647685], [0.1180755, 0.36056193, 0.52567436], [0.1189438, 0.35794203, 0.5248497], [0.11984752, 0.35531657, 0.52400649], [0.1207833, 0.35268564, 0.52314492], [0.12174895, 0.35004927, 0.52226461], [0.12274959, 0.34740723, 0.52136104], [0.12377809, 0.34475975, 0.52043639], [0.12482961, 0.34210702, 0.51949179], [0.125902, 0.33944908, 0.51852688], [0.12699998, 0.33678574, 0.51753708], [0.12811691, 0.33411727, 0.51652464], [0.12924811, 0.33144384, 0.51549084], [0.13039157, 0.32876552, 0.51443538], [0.13155228, 0.32608217, 0.51335321], [0.13272282, 0.32339407, 0.51224759], [0.13389954, 0.32070138, 0.51111946], [0.13508064, 0.31800419, 0.50996862], [0.13627149, 0.31530238, 
0.50878942], [0.13746376, 0.31259627, 0.50758645], [0.13865499, 0.30988598, 0.50636017], [0.13984364, 0.30717161, 0.50511042], [0.14103515, 0.30445309, 0.50383119], [0.14222093, 0.30173071, 0.50252813], [0.14339946, 0.2990046, 0.50120127], [0.14456941, 0.29627483, 0.49985054], [0.14573579, 0.29354139, 0.49847009], [0.14689091, 0.29080452, 0.49706566], [0.1480336, 0.28806432, 0.49563732], [0.1491628, 0.28532086, 0.49418508], [0.15028228, 0.28257418, 0.49270402], [0.15138673, 0.27982444, 0.49119848], [0.15247457, 0.27707172, 0.48966925], [0.15354487, 0.2743161, 0.48811641], [0.15459955, 0.27155765, 0.4865371], [0.15563716, 0.26879642, 0.4849321], [0.1566572, 0.26603191, 0.48330429], [0.15765823, 0.26326032, 0.48167456], [0.15862147, 0.26048295, 0.48005785], [0.15954301, 0.25770084, 0.47845341], [0.16043267, 0.25491144, 0.4768626], [0.16129262, 0.25211406, 0.4752857], [0.1621119, 0.24931169, 0.47372076], [0.16290577, 0.24649998, 0.47217025], [0.16366819, 0.24368054, 0.47063302], [0.1644021, 0.24085237, 0.46910949], [0.16510882, 0.2380149, 0.46759982], [0.16579015, 0.23516739, 0.46610429], [0.1664433, 0.2323105, 0.46462219], [0.16707586, 0.22944155, 0.46315508], [0.16768475, 0.22656122, 0.46170223], [0.16826815, 0.22366984, 0.46026308], [0.16883174, 0.22076514, 0.45883891], [0.16937589, 0.21784655, 0.45742976], [0.16990129, 0.21491339, 0.45603578], [0.1704074, 0.21196535, 0.45465677], [0.17089473, 0.20900176, 0.4532928], [0.17136819, 0.20602012, 0.45194524], [0.17182683, 0.20302012, 0.45061386], [0.17227059, 0.20000106, 0.44929865], [0.17270583, 0.19695949, 0.44800165], [0.17313804, 0.19389201, 0.44672488], [0.17363177, 0.19076859, 0.44549087]
]

_lut_dict = dict(
    rocket=_rocket_lut,
    mako=_mako_lut,
    icefire=_icefire_lut,
    vlag=_vlag_lut,
    flare=_flare_lut,
    crest=_crest_lut,
)


for _name, _lut in _lut_dict.items():

    _cmap = colors.ListedColormap(_lut, _name)
    locals()[_name] = _cmap

    _cmap_r = colors.ListedColormap(_lut[::-1], _name + "_r")
    locals()[_name + "_r"] = _cmap_r
    mpl_cm.register_cmap(_name, _cmap)
    mpl_cm.register_cmap(_name + "_r", _cmap_r)

del colors, mpl_cm

# seaborn-0.11.2/seaborn/colors/__init__.py
from .xkcd_rgb import xkcd_rgb  # noqa: F401
from .crayons import crayons  # noqa: F401

# seaborn-0.11.2/seaborn/colors/crayons.py
crayons = {'Almond': '#EFDECD', 'Antique Brass': '#CD9575', 'Apricot': '#FDD9B5', 'Aquamarine': '#78DBE2', 'Asparagus': '#87A96B', 'Atomic Tangerine': '#FFA474', 'Banana Mania': '#FAE7B5', 'Beaver': '#9F8170', 'Bittersweet': '#FD7C6E', 'Black': '#000000', 'Blue': '#1F75FE', 'Blue Bell': '#A2A2D0', 'Blue Green': '#0D98BA', 'Blue Violet': '#7366BD', 'Blush': '#DE5D83', 'Brick Red': '#CB4154', 'Brown': '#B4674D', 'Burnt Orange': '#FF7F49', 'Burnt Sienna': '#EA7E5D', 'Cadet Blue': '#B0B7C6', 'Canary': '#FFFF99', 'Caribbean Green': '#00CC99', 'Carnation Pink': '#FFAACC', 'Cerise': '#DD4492', 'Cerulean': '#1DACD6', 'Chestnut': '#BC5D58', 'Copper': '#DD9475', 'Cornflower': '#9ACEEB', 'Cotton Candy': '#FFBCD9', 'Dandelion': '#FDDB6D', 'Denim': '#2B6CC4', 'Desert Sand': '#EFCDB8', 'Eggplant': '#6E5160', 'Electric Lime': '#CEFF1D', 'Fern': '#71BC78', 'Forest Green': '#6DAE81', 'Fuchsia': '#C364C5', 'Fuzzy Wuzzy': '#CC6666', 'Gold': '#E7C697', 'Goldenrod': '#FCD975', 'Granny Smith Apple': '#A8E4A0', 'Gray': '#95918C', 'Green': '#1CAC78', 'Green Yellow': '#F0E891', 'Hot Magenta': '#FF1DCE', 'Inchworm': '#B2EC5D', 'Indigo': '#5D76CB', 'Jazzberry Jam': '#CA3767', 'Jungle Green': '#3BB08F', 'Laser Lemon': '#FEFE22', 'Lavender': '#FCB4D5', 'Macaroni and Cheese': '#FFBD88', 'Magenta': '#F664AF', 'Mahogany': '#CD4A4C', 'Manatee': '#979AAA', 'Mango Tango': '#FF8243', 'Maroon': '#C8385A', 'Mauvelous': '#EF98AA', 'Melon': '#FDBCB4', 'Midnight Blue': '#1A4876',
'Mountain Meadow': '#30BA8F', 'Navy Blue': '#1974D2', 'Neon Carrot': '#FFA343', 'Olive Green': '#BAB86C', 'Orange': '#FF7538', 'Orchid': '#E6A8D7', 'Outer Space': '#414A4C', 'Outrageous Orange': '#FF6E4A', 'Pacific Blue': '#1CA9C9', 'Peach': '#FFCFAB', 'Periwinkle': '#C5D0E6', 'Piggy Pink': '#FDDDE6', 'Pine Green': '#158078', 'Pink Flamingo': '#FC74FD', 'Pink Sherbert': '#F78FA7', 'Plum': '#8E4585', 'Purple Heart': '#7442C8', "Purple Mountains' Majesty": '#9D81BA', 'Purple Pizzazz': '#FE4EDA', 'Radical Red': '#FF496C', 'Raw Sienna': '#D68A59', 'Razzle Dazzle Rose': '#FF48D0', 'Razzmatazz': '#E3256B', 'Red': '#EE204D', 'Red Orange': '#FF5349', 'Red Violet': '#C0448F', "Robin's Egg Blue": '#1FCECB', 'Royal Purple': '#7851A9', 'Salmon': '#FF9BAA', 'Scarlet': '#FC2847', "Screamin' Green": '#76FF7A', 'Sea Green': '#93DFB8', 'Sepia': '#A5694F', 'Shadow': '#8A795D', 'Shamrock': '#45CEA2', 'Shocking Pink': '#FB7EFD', 'Silver': '#CDC5C2', 'Sky Blue': '#80DAEB', 'Spring Green': '#ECEABE', 'Sunglow': '#FFCF48', 'Sunset Orange': '#FD5E53', 'Tan': '#FAA76C', 'Tickle Me Pink': '#FC89AC', 'Timberwolf': '#DBD7D2', 'Tropical Rain Forest': '#17806D', 'Tumbleweed': '#DEAA88', 'Turquoise Blue': '#77DDE7', 'Unmellow Yellow': '#FFFF66', 'Violet (Purple)': '#926EAE', 'Violet Red': '#F75394', 'Vivid Tangerine': '#FFA089', 'Vivid Violet': '#8F509D', 'White': '#FFFFFF', 'Wild Blue Yonder': '#A2ADD0', 'Wild Strawberry': '#FF43A4', 'Wild Watermelon': '#FC6C85', 'Wisteria': '#CDA4DE', 'Yellow': '#FCE883', 'Yellow Green': '#C5E384', 'Yellow Orange': '#FFAE42'} seaborn-0.11.2/seaborn/colors/xkcd_rgb.py000066400000000000000000001050631410631356500203240ustar00rootroot00000000000000xkcd_rgb = {'acid green': '#8ffe09', 'adobe': '#bd6c48', 'algae': '#54ac68', 'algae green': '#21c36f', 'almost black': '#070d0d', 'amber': '#feb308', 'amethyst': '#9b5fc0', 'apple': '#6ecb3c', 'apple green': '#76cd26', 'apricot': '#ffb16d', 'aqua': '#13eac9', 'aqua blue': '#02d8e9', 'aqua green': '#12e193', 'aqua 
marine': '#2ee8bb', 'aquamarine': '#04d8b2', 'army green': '#4b5d16', 'asparagus': '#77ab56', 'aubergine': '#3d0734', 'auburn': '#9a3001', 'avocado': '#90b134', 'avocado green': '#87a922', 'azul': '#1d5dec', 'azure': '#069af3', 'baby blue': '#a2cffe', 'baby green': '#8cff9e', 'baby pink': '#ffb7ce', 'baby poo': '#ab9004', 'baby poop': '#937c00', 'baby poop green': '#8f9805', 'baby puke green': '#b6c406', 'baby purple': '#ca9bf7', 'baby shit brown': '#ad900d', 'baby shit green': '#889717', 'banana': '#ffff7e', 'banana yellow': '#fafe4b', 'barbie pink': '#fe46a5', 'barf green': '#94ac02', 'barney': '#ac1db8', 'barney purple': '#a00498', 'battleship grey': '#6b7c85', 'beige': '#e6daa6', 'berry': '#990f4b', 'bile': '#b5c306', 'black': '#000000', 'bland': '#afa88b', 'blood': '#770001', 'blood orange': '#fe4b03', 'blood red': '#980002', 'blue': '#0343df', 'blue blue': '#2242c7', 'blue green': '#137e6d', 'blue grey': '#607c8e', 'blue purple': '#5729ce', 'blue violet': '#5d06e9', 'blue with a hint of purple': '#533cc6', 'blue/green': '#0f9b8e', 'blue/grey': '#758da3', 'blue/purple': '#5a06ef', 'blueberry': '#464196', 'bluegreen': '#017a79', 'bluegrey': '#85a3b2', 'bluey green': '#2bb179', 'bluey grey': '#89a0b0', 'bluey purple': '#6241c7', 'bluish': '#2976bb', 'bluish green': '#10a674', 'bluish grey': '#748b97', 'bluish purple': '#703be7', 'blurple': '#5539cc', 'blush': '#f29e8e', 'blush pink': '#fe828c', 'booger': '#9bb53c', 'booger green': '#96b403', 'bordeaux': '#7b002c', 'boring green': '#63b365', 'bottle green': '#044a05', 'brick': '#a03623', 'brick orange': '#c14a09', 'brick red': '#8f1402', 'bright aqua': '#0bf9ea', 'bright blue': '#0165fc', 'bright cyan': '#41fdfe', 'bright green': '#01ff07', 'bright lavender': '#c760ff', 'bright light blue': '#26f7fd', 'bright light green': '#2dfe54', 'bright lilac': '#c95efb', 'bright lime': '#87fd05', 'bright lime green': '#65fe08', 'bright magenta': '#ff08e8', 'bright olive': '#9cbb04', 'bright orange': '#ff5b00', 'bright 
pink': '#fe01b1', 'bright purple': '#be03fd', 'bright red': '#ff000d', 'bright sea green': '#05ffa6', 'bright sky blue': '#02ccfe', 'bright teal': '#01f9c6', 'bright turquoise': '#0ffef9', 'bright violet': '#ad0afd', 'bright yellow': '#fffd01', 'bright yellow green': '#9dff00', 'british racing green': '#05480d', 'bronze': '#a87900', 'brown': '#653700', 'brown green': '#706c11', 'brown grey': '#8d8468', 'brown orange': '#b96902', 'brown red': '#922b05', 'brown yellow': '#b29705', 'brownish': '#9c6d57', 'brownish green': '#6a6e09', 'brownish grey': '#86775f', 'brownish orange': '#cb7723', 'brownish pink': '#c27e79', 'brownish purple': '#76424e', 'brownish red': '#9e3623', 'brownish yellow': '#c9b003', 'browny green': '#6f6c0a', 'browny orange': '#ca6b02', 'bruise': '#7e4071', 'bubble gum pink': '#ff69af', 'bubblegum': '#ff6cb5', 'bubblegum pink': '#fe83cc', 'buff': '#fef69e', 'burgundy': '#610023', 'burnt orange': '#c04e01', 'burnt red': '#9f2305', 'burnt siena': '#b75203', 'burnt sienna': '#b04e0f', 'burnt umber': '#a0450e', 'burnt yellow': '#d5ab09', 'burple': '#6832e3', 'butter': '#ffff81', 'butter yellow': '#fffd74', 'butterscotch': '#fdb147', 'cadet blue': '#4e7496', 'camel': '#c69f59', 'camo': '#7f8f4e', 'camo green': '#526525', 'camouflage green': '#4b6113', 'canary': '#fdff63', 'canary yellow': '#fffe40', 'candy pink': '#ff63e9', 'caramel': '#af6f09', 'carmine': '#9d0216', 'carnation': '#fd798f', 'carnation pink': '#ff7fa7', 'carolina blue': '#8ab8fe', 'celadon': '#befdb7', 'celery': '#c1fd95', 'cement': '#a5a391', 'cerise': '#de0c62', 'cerulean': '#0485d1', 'cerulean blue': '#056eee', 'charcoal': '#343837', 'charcoal grey': '#3c4142', 'chartreuse': '#c1f80a', 'cherry': '#cf0234', 'cherry red': '#f7022a', 'chestnut': '#742802', 'chocolate': '#3d1c02', 'chocolate brown': '#411900', 'cinnamon': '#ac4f06', 'claret': '#680018', 'clay': '#b66a50', 'clay brown': '#b2713d', 'clear blue': '#247afd', 'cloudy blue': '#acc2d9', 'cobalt': '#1e488f', 'cobalt blue': 
'#030aa7', 'cocoa': '#875f42', 'coffee': '#a6814c', 'cool blue': '#4984b8', 'cool green': '#33b864', 'cool grey': '#95a3a6', 'copper': '#b66325', 'coral': '#fc5a50', 'coral pink': '#ff6163', 'cornflower': '#6a79f7', 'cornflower blue': '#5170d7', 'cranberry': '#9e003a', 'cream': '#ffffc2', 'creme': '#ffffb6', 'crimson': '#8c000f', 'custard': '#fffd78', 'cyan': '#00ffff', 'dandelion': '#fedf08', 'dark': '#1b2431', 'dark aqua': '#05696b', 'dark aquamarine': '#017371', 'dark beige': '#ac9362', 'dark blue': '#00035b', 'dark blue green': '#005249', 'dark blue grey': '#1f3b4d', 'dark brown': '#341c02', 'dark coral': '#cf524e', 'dark cream': '#fff39a', 'dark cyan': '#0a888a', 'dark forest green': '#002d04', 'dark fuchsia': '#9d0759', 'dark gold': '#b59410', 'dark grass green': '#388004', 'dark green': '#033500', 'dark green blue': '#1f6357', 'dark grey': '#363737', 'dark grey blue': '#29465b', 'dark hot pink': '#d90166', 'dark indigo': '#1f0954', 'dark khaki': '#9b8f55', 'dark lavender': '#856798', 'dark lilac': '#9c6da5', 'dark lime': '#84b701', 'dark lime green': '#7ebd01', 'dark magenta': '#960056', 'dark maroon': '#3c0008', 'dark mauve': '#874c62', 'dark mint': '#48c072', 'dark mint green': '#20c073', 'dark mustard': '#a88905', 'dark navy': '#000435', 'dark navy blue': '#00022e', 'dark olive': '#373e02', 'dark olive green': '#3c4d03', 'dark orange': '#c65102', 'dark pastel green': '#56ae57', 'dark peach': '#de7e5d', 'dark periwinkle': '#665fd1', 'dark pink': '#cb416b', 'dark plum': '#3f012c', 'dark purple': '#35063e', 'dark red': '#840000', 'dark rose': '#b5485d', 'dark royal blue': '#02066f', 'dark sage': '#598556', 'dark salmon': '#c85a53', 'dark sand': '#a88f59', 'dark sea green': '#11875d', 'dark seafoam': '#1fb57a', 'dark seafoam green': '#3eaf76', 'dark sky blue': '#448ee4', 'dark slate blue': '#214761', 'dark tan': '#af884a', 'dark taupe': '#7f684e', 'dark teal': '#014d4e', 'dark turquoise': '#045c5a', 'dark violet': '#34013f', 'dark yellow': '#d5b60a', 'dark 
yellow green': '#728f02', 'darkblue': '#030764', 'darkgreen': '#054907', 'darkish blue': '#014182', 'darkish green': '#287c37', 'darkish pink': '#da467d', 'darkish purple': '#751973', 'darkish red': '#a90308', 'deep aqua': '#08787f', 'deep blue': '#040273', 'deep brown': '#410200', 'deep green': '#02590f', 'deep lavender': '#8d5eb7', 'deep lilac': '#966ebd', 'deep magenta': '#a0025c', 'deep orange': '#dc4d01', 'deep pink': '#cb0162', 'deep purple': '#36013f', 'deep red': '#9a0200', 'deep rose': '#c74767', 'deep sea blue': '#015482', 'deep sky blue': '#0d75f8', 'deep teal': '#00555a', 'deep turquoise': '#017374', 'deep violet': '#490648', 'denim': '#3b638c', 'denim blue': '#3b5b92', 'desert': '#ccad60', 'diarrhea': '#9f8303', 'dirt': '#8a6e45', 'dirt brown': '#836539', 'dirty blue': '#3f829d', 'dirty green': '#667e2c', 'dirty orange': '#c87606', 'dirty pink': '#ca7b80', 'dirty purple': '#734a65', 'dirty yellow': '#cdc50a', 'dodger blue': '#3e82fc', 'drab': '#828344', 'drab green': '#749551', 'dried blood': '#4b0101', 'duck egg blue': '#c3fbf4', 'dull blue': '#49759c', 'dull brown': '#876e4b', 'dull green': '#74a662', 'dull orange': '#d8863b', 'dull pink': '#d5869d', 'dull purple': '#84597e', 'dull red': '#bb3f3f', 'dull teal': '#5f9e8f', 'dull yellow': '#eedc5b', 'dusk': '#4e5481', 'dusk blue': '#26538d', 'dusky blue': '#475f94', 'dusky pink': '#cc7a8b', 'dusky purple': '#895b7b', 'dusky rose': '#ba6873', 'dust': '#b2996e', 'dusty blue': '#5a86ad', 'dusty green': '#76a973', 'dusty lavender': '#ac86a8', 'dusty orange': '#f0833a', 'dusty pink': '#d58a94', 'dusty purple': '#825f87', 'dusty red': '#b9484e', 'dusty rose': '#c0737a', 'dusty teal': '#4c9085', 'earth': '#a2653e', 'easter green': '#8cfd7e', 'easter purple': '#c071fe', 'ecru': '#feffca', 'egg shell': '#fffcc4', 'eggplant': '#380835', 'eggplant purple': '#430541', 'eggshell': '#ffffd4', 'eggshell blue': '#c4fff7', 'electric blue': '#0652ff', 'electric green': '#21fc0d', 'electric lime': '#a8ff04', 'electric 
pink': '#ff0490', 'electric purple': '#aa23ff', 'emerald': '#01a049', 'emerald green': '#028f1e', 'evergreen': '#05472a', 'faded blue': '#658cbb', 'faded green': '#7bb274', 'faded orange': '#f0944d', 'faded pink': '#de9dac', 'faded purple': '#916e99', 'faded red': '#d3494e', 'faded yellow': '#feff7f', 'fawn': '#cfaf7b', 'fern': '#63a950', 'fern green': '#548d44', 'fire engine red': '#fe0002', 'flat blue': '#3c73a8', 'flat green': '#699d4c', 'fluorescent green': '#08ff08', 'fluro green': '#0aff02', 'foam green': '#90fda9', 'forest': '#0b5509', 'forest green': '#06470c', 'forrest green': '#154406', 'french blue': '#436bad', 'fresh green': '#69d84f', 'frog green': '#58bc08', 'fuchsia': '#ed0dd9', 'gold': '#dbb40c', 'golden': '#f5bf03', 'golden brown': '#b27a01', 'golden rod': '#f9bc08', 'golden yellow': '#fec615', 'goldenrod': '#fac205', 'grape': '#6c3461', 'grape purple': '#5d1451', 'grapefruit': '#fd5956', 'grass': '#5cac2d', 'grass green': '#3f9b0b', 'grassy green': '#419c03', 'green': '#15b01a', 'green apple': '#5edc1f', 'green blue': '#06b48b', 'green brown': '#544e03', 'green grey': '#77926f', 'green teal': '#0cb577', 'green yellow': '#c9ff27', 'green/blue': '#01c08d', 'green/yellow': '#b5ce08', 'greenblue': '#23c48b', 'greenish': '#40a368', 'greenish beige': '#c9d179', 'greenish blue': '#0b8b87', 'greenish brown': '#696112', 'greenish cyan': '#2afeb7', 'greenish grey': '#96ae8d', 'greenish tan': '#bccb7a', 'greenish teal': '#32bf84', 'greenish turquoise': '#00fbb0', 'greenish yellow': '#cdfd02', 'greeny blue': '#42b395', 'greeny brown': '#696006', 'greeny grey': '#7ea07a', 'greeny yellow': '#c6f808', 'grey': '#929591', 'grey blue': '#6b8ba4', 'grey brown': '#7f7053', 'grey green': '#789b73', 'grey pink': '#c3909b', 'grey purple': '#826d8c', 'grey teal': '#5e9b8a', 'grey/blue': '#647d8e', 'grey/green': '#86a17d', 'greyblue': '#77a1b5', 'greyish': '#a8a495', 'greyish blue': '#5e819d', 'greyish brown': '#7a6a4f', 'greyish green': '#82a67d', 'greyish pink': 
'#c88d94', 'greyish purple': '#887191', 'greyish teal': '#719f91', 'gross green': '#a0bf16', 'gunmetal': '#536267', 'hazel': '#8e7618', 'heather': '#a484ac', 'heliotrope': '#d94ff5', 'highlighter green': '#1bfc06', 'hospital green': '#9be5aa', 'hot green': '#25ff29', 'hot magenta': '#f504c9', 'hot pink': '#ff028d', 'hot purple': '#cb00f5', 'hunter green': '#0b4008', 'ice': '#d6fffa', 'ice blue': '#d7fffe', 'icky green': '#8fae22', 'indian red': '#850e04', 'indigo': '#380282', 'indigo blue': '#3a18b1', 'iris': '#6258c4', 'irish green': '#019529', 'ivory': '#ffffcb', 'jade': '#1fa774', 'jade green': '#2baf6a', 'jungle green': '#048243', 'kelley green': '#009337', 'kelly green': '#02ab2e', 'kermit green': '#5cb200', 'key lime': '#aeff6e', 'khaki': '#aaa662', 'khaki green': '#728639', 'kiwi': '#9cef43', 'kiwi green': '#8ee53f', 'lavender': '#c79fef', 'lavender blue': '#8b88f8', 'lavender pink': '#dd85d7', 'lawn green': '#4da409', 'leaf': '#71aa34', 'leaf green': '#5ca904', 'leafy green': '#51b73b', 'leather': '#ac7434', 'lemon': '#fdff52', 'lemon green': '#adf802', 'lemon lime': '#bffe28', 'lemon yellow': '#fdff38', 'lichen': '#8fb67b', 'light aqua': '#8cffdb', 'light aquamarine': '#7bfdc7', 'light beige': '#fffeb6', 'light blue': '#95d0fc', 'light blue green': '#7efbb3', 'light blue grey': '#b7c9e2', 'light bluish green': '#76fda8', 'light bright green': '#53fe5c', 'light brown': '#ad8150', 'light burgundy': '#a8415b', 'light cyan': '#acfffc', 'light eggplant': '#894585', 'light forest green': '#4f9153', 'light gold': '#fddc5c', 'light grass green': '#9af764', 'light green': '#96f97b', 'light green blue': '#56fca2', 'light greenish blue': '#63f7b4', 'light grey': '#d8dcd6', 'light grey blue': '#9dbcd4', 'light grey green': '#b7e1a1', 'light indigo': '#6d5acf', 'light khaki': '#e6f2a2', 'light lavendar': '#efc0fe', 'light lavender': '#dfc5fe', 'light light blue': '#cafffb', 'light light green': '#c8ffb0', 'light lilac': '#edc8ff', 'light lime': '#aefd6c', 'light lime 
green': '#b9ff66', 'light magenta': '#fa5ff7', 'light maroon': '#a24857', 'light mauve': '#c292a1', 'light mint': '#b6ffbb', 'light mint green': '#a6fbb2', 'light moss green': '#a6c875', 'light mustard': '#f7d560', 'light navy': '#155084', 'light navy blue': '#2e5a88', 'light neon green': '#4efd54', 'light olive': '#acbf69', 'light olive green': '#a4be5c', 'light orange': '#fdaa48', 'light pastel green': '#b2fba5', 'light pea green': '#c4fe82', 'light peach': '#ffd8b1', 'light periwinkle': '#c1c6fc', 'light pink': '#ffd1df', 'light plum': '#9d5783', 'light purple': '#bf77f6', 'light red': '#ff474c', 'light rose': '#ffc5cb', 'light royal blue': '#3a2efe', 'light sage': '#bcecac', 'light salmon': '#fea993', 'light sea green': '#98f6b0', 'light seafoam': '#a0febf', 'light seafoam green': '#a7ffb5', 'light sky blue': '#c6fcff', 'light tan': '#fbeeac', 'light teal': '#90e4c1', 'light turquoise': '#7ef4cc', 'light urple': '#b36ff6', 'light violet': '#d6b4fc', 'light yellow': '#fffe7a', 'light yellow green': '#ccfd7f', 'light yellowish green': '#c2ff89', 'lightblue': '#7bc8f6', 'lighter green': '#75fd63', 'lighter purple': '#a55af4', 'lightgreen': '#76ff7b', 'lightish blue': '#3d7afd', 'lightish green': '#61e160', 'lightish purple': '#a552e6', 'lightish red': '#fe2f4a', 'lilac': '#cea2fd', 'liliac': '#c48efd', 'lime': '#aaff32', 'lime green': '#89fe05', 'lime yellow': '#d0fe1d', 'lipstick': '#d5174e', 'lipstick red': '#c0022f', 'macaroni and cheese': '#efb435', 'magenta': '#c20078', 'mahogany': '#4a0100', 'maize': '#f4d054', 'mango': '#ffa62b', 'manilla': '#fffa86', 'marigold': '#fcc006', 'marine': '#042e60', 'marine blue': '#01386a', 'maroon': '#650021', 'mauve': '#ae7181', 'medium blue': '#2c6fbb', 'medium brown': '#7f5112', 'medium green': '#39ad48', 'medium grey': '#7d7f7c', 'medium pink': '#f36196', 'medium purple': '#9e43a2', 'melon': '#ff7855', 'merlot': '#730039', 'metallic blue': '#4f738e', 'mid blue': '#276ab3', 'mid green': '#50a747', 'midnight': '#03012d', 
'midnight blue': '#020035', 'midnight purple': '#280137', 'military green': '#667c3e', 'milk chocolate': '#7f4e1e', 'mint': '#9ffeb0', 'mint green': '#8fff9f', 'minty green': '#0bf77d', 'mocha': '#9d7651', 'moss': '#769958', 'moss green': '#658b38', 'mossy green': '#638b27', 'mud': '#735c12', 'mud brown': '#60460f', 'mud green': '#606602', 'muddy brown': '#886806', 'muddy green': '#657432', 'muddy yellow': '#bfac05', 'mulberry': '#920a4e', 'murky green': '#6c7a0e', 'mushroom': '#ba9e88', 'mustard': '#ceb301', 'mustard brown': '#ac7e04', 'mustard green': '#a8b504', 'mustard yellow': '#d2bd0a', 'muted blue': '#3b719f', 'muted green': '#5fa052', 'muted pink': '#d1768f', 'muted purple': '#805b87', 'nasty green': '#70b23f', 'navy': '#01153e', 'navy blue': '#001146', 'navy green': '#35530a', 'neon blue': '#04d9ff', 'neon green': '#0cff0c', 'neon pink': '#fe019a', 'neon purple': '#bc13fe', 'neon red': '#ff073a', 'neon yellow': '#cfff04', 'nice blue': '#107ab0', 'night blue': '#040348', 'ocean': '#017b92', 'ocean blue': '#03719c', 'ocean green': '#3d9973', 'ocher': '#bf9b0c', 'ochre': '#bf9005', 'ocre': '#c69c04', 'off blue': '#5684ae', 'off green': '#6ba353', 'off white': '#ffffe4', 'off yellow': '#f1f33f', 'old pink': '#c77986', 'old rose': '#c87f89', 'olive': '#6e750e', 'olive brown': '#645403', 'olive drab': '#6f7632', 'olive green': '#677a04', 'olive yellow': '#c2b709', 'orange': '#f97306', 'orange brown': '#be6400', 'orange pink': '#ff6f52', 'orange red': '#fd411e', 'orange yellow': '#ffad01', 'orangeish': '#fd8d49', 'orangered': '#fe420f', 'orangey brown': '#b16002', 'orangey red': '#fa4224', 'orangey yellow': '#fdb915', 'orangish': '#fc824a', 'orangish brown': '#b25f03', 'orangish red': '#f43605', 'orchid': '#c875c4', 'pale': '#fff9d0', 'pale aqua': '#b8ffeb', 'pale blue': '#d0fefe', 'pale brown': '#b1916e', 'pale cyan': '#b7fffa', 'pale gold': '#fdde6c', 'pale green': '#c7fdb5', 'pale grey': '#fdfdfe', 'pale lavender': '#eecffe', 'pale light green': '#b1fc99', 
'pale lilac': '#e4cbff', 'pale lime': '#befd73', 'pale lime green': '#b1ff65', 'pale magenta': '#d767ad', 'pale mauve': '#fed0fc', 'pale olive': '#b9cc81', 'pale olive green': '#b1d27b', 'pale orange': '#ffa756', 'pale peach': '#ffe5ad', 'pale pink': '#ffcfdc', 'pale purple': '#b790d4', 'pale red': '#d9544d', 'pale rose': '#fdc1c5', 'pale salmon': '#ffb19a', 'pale sky blue': '#bdf6fe', 'pale teal': '#82cbb2', 'pale turquoise': '#a5fbd5', 'pale violet': '#ceaefa', 'pale yellow': '#ffff84', 'parchment': '#fefcaf', 'pastel blue': '#a2bffe', 'pastel green': '#b0ff9d', 'pastel orange': '#ff964f', 'pastel pink': '#ffbacd', 'pastel purple': '#caa0ff', 'pastel red': '#db5856', 'pastel yellow': '#fffe71', 'pea': '#a4bf20', 'pea green': '#8eab12', 'pea soup': '#929901', 'pea soup green': '#94a617', 'peach': '#ffb07c', 'peachy pink': '#ff9a8a', 'peacock blue': '#016795', 'pear': '#cbf85f', 'periwinkle': '#8e82fe', 'periwinkle blue': '#8f99fb', 'perrywinkle': '#8f8ce7', 'petrol': '#005f6a', 'pig pink': '#e78ea5', 'pine': '#2b5d34', 'pine green': '#0a481e', 'pink': '#ff81c0', 'pink purple': '#db4bda', 'pink red': '#f5054f', 'pink/purple': '#ef1de7', 'pinkish': '#d46a7e', 'pinkish brown': '#b17261', 'pinkish grey': '#c8aca9', 'pinkish orange': '#ff724c', 'pinkish purple': '#d648d7', 'pinkish red': '#f10c45', 'pinkish tan': '#d99b82', 'pinky': '#fc86aa', 'pinky purple': '#c94cbe', 'pinky red': '#fc2647', 'piss yellow': '#ddd618', 'pistachio': '#c0fa8b', 'plum': '#580f41', 'plum purple': '#4e0550', 'poison green': '#40fd14', 'poo': '#8f7303', 'poo brown': '#885f01', 'poop': '#7f5e00', 'poop brown': '#7a5901', 'poop green': '#6f7c00', 'powder blue': '#b1d1fc', 'powder pink': '#ffb2d0', 'primary blue': '#0804f9', 'prussian blue': '#004577', 'puce': '#a57e52', 'puke': '#a5a502', 'puke brown': '#947706', 'puke green': '#9aae07', 'puke yellow': '#c2be0e', 'pumpkin': '#e17701', 'pumpkin orange': '#fb7d07', 'pure blue': '#0203e2', 'purple': '#7e1e9c', 'purple blue': '#632de9', 'purple 
brown': '#673a3f', 'purple grey': '#866f85', 'purple pink': '#e03fd8', 'purple red': '#990147', 'purple/blue': '#5d21d0', 'purple/pink': '#d725de', 'purpleish': '#98568d', 'purpleish blue': '#6140ef', 'purpleish pink': '#df4ec8', 'purpley': '#8756e4', 'purpley blue': '#5f34e7', 'purpley grey': '#947e94', 'purpley pink': '#c83cb9', 'purplish': '#94568c', 'purplish blue': '#601ef9', 'purplish brown': '#6b4247', 'purplish grey': '#7a687f', 'purplish pink': '#ce5dae', 'purplish red': '#b0054b', 'purply': '#983fb2', 'purply blue': '#661aee', 'purply pink': '#f075e6', 'putty': '#beae8a', 'racing green': '#014600', 'radioactive green': '#2cfa1f', 'raspberry': '#b00149', 'raw sienna': '#9a6200', 'raw umber': '#a75e09', 'really light blue': '#d4ffff', 'red': '#e50000', 'red brown': '#8b2e16', 'red orange': '#fd3c06', 'red pink': '#fa2a55', 'red purple': '#820747', 'red violet': '#9e0168', 'red wine': '#8c0034', 'reddish': '#c44240', 'reddish brown': '#7f2b0a', 'reddish grey': '#997570', 'reddish orange': '#f8481c', 'reddish pink': '#fe2c54', 'reddish purple': '#910951', 'reddy brown': '#6e1005', 'rich blue': '#021bf9', 'rich purple': '#720058', 'robin egg blue': '#8af1fe', "robin's egg": '#6dedfd', "robin's egg blue": '#98eff9', 'rosa': '#fe86a4', 'rose': '#cf6275', 'rose pink': '#f7879a', 'rose red': '#be013c', 'rosy pink': '#f6688e', 'rouge': '#ab1239', 'royal': '#0c1793', 'royal blue': '#0504aa', 'royal purple': '#4b006e', 'ruby': '#ca0147', 'russet': '#a13905', 'rust': '#a83c09', 'rust brown': '#8b3103', 'rust orange': '#c45508', 'rust red': '#aa2704', 'rusty orange': '#cd5909', 'rusty red': '#af2f0d', 'saffron': '#feb209', 'sage': '#87ae73', 'sage green': '#88b378', 'salmon': '#ff796c', 'salmon pink': '#fe7b7c', 'sand': '#e2ca76', 'sand brown': '#cba560', 'sand yellow': '#fce166', 'sandstone': '#c9ae74', 'sandy': '#f1da7a', 'sandy brown': '#c4a661', 'sandy yellow': '#fdee73', 'sap green': '#5c8b15', 'sapphire': '#2138ab', 'scarlet': '#be0119', 'sea': '#3c9992', 'sea 
blue': '#047495', 'sea green': '#53fca1', 'seafoam': '#80f9ad', 'seafoam blue': '#78d1b6', 'seafoam green': '#7af9ab', 'seaweed': '#18d17b', 'seaweed green': '#35ad6b', 'sepia': '#985e2b', 'shamrock': '#01b44c', 'shamrock green': '#02c14d', 'shit': '#7f5f00', 'shit brown': '#7b5804', 'shit green': '#758000', 'shocking pink': '#fe02a2', 'sick green': '#9db92c', 'sickly green': '#94b21c', 'sickly yellow': '#d0e429', 'sienna': '#a9561e', 'silver': '#c5c9c7', 'sky': '#82cafc', 'sky blue': '#75bbfd', 'slate': '#516572', 'slate blue': '#5b7c99', 'slate green': '#658d6d', 'slate grey': '#59656d', 'slime green': '#99cc04', 'snot': '#acbb0d', 'snot green': '#9dc100', 'soft blue': '#6488ea', 'soft green': '#6fc276', 'soft pink': '#fdb0c0', 'soft purple': '#a66fb5', 'spearmint': '#1ef876', 'spring green': '#a9f971', 'spruce': '#0a5f38', 'squash': '#f2ab15', 'steel': '#738595', 'steel blue': '#5a7d9a', 'steel grey': '#6f828a', 'stone': '#ada587', 'stormy blue': '#507b9c', 'straw': '#fcf679', 'strawberry': '#fb2943', 'strong blue': '#0c06f7', 'strong pink': '#ff0789', 'sun yellow': '#ffdf22', 'sunflower': '#ffc512', 'sunflower yellow': '#ffda03', 'sunny yellow': '#fff917', 'sunshine yellow': '#fffd37', 'swamp': '#698339', 'swamp green': '#748500', 'tan': '#d1b26f', 'tan brown': '#ab7e4c', 'tan green': '#a9be70', 'tangerine': '#ff9408', 'taupe': '#b9a281', 'tea': '#65ab7c', 'tea green': '#bdf8a3', 'teal': '#029386', 'teal blue': '#01889f', 'teal green': '#25a36f', 'tealish': '#24bca8', 'tealish green': '#0cdc73', 'terra cotta': '#c9643b', 'terracota': '#cb6843', 'terracotta': '#ca6641', 'tiffany blue': '#7bf2da', 'tomato': '#ef4026', 'tomato red': '#ec2d01', 'topaz': '#13bbaf', 'toupe': '#c7ac7d', 'toxic green': '#61de2a', 'tree green': '#2a7e19', 'true blue': '#010fcc', 'true green': '#089404', 'turquoise': '#06c2ac', 'turquoise blue': '#06b1c4', 'turquoise green': '#04f489', 'turtle green': '#75b84f', 'twilight': '#4e518b', 'twilight blue': '#0a437a', 'ugly blue': '#31668a', 
'ugly brown': '#7d7103', 'ugly green': '#7a9703', 'ugly pink': '#cd7584', 'ugly purple': '#a442a0', 'ugly yellow': '#d0c101', 'ultramarine': '#2000b1', 'ultramarine blue': '#1805db', 'umber': '#b26400', 'velvet': '#750851', 'vermillion': '#f4320c', 'very dark blue': '#000133', 'very dark brown': '#1d0200', 'very dark green': '#062e03', 'very dark purple': '#2a0134', 'very light blue': '#d5ffff', 'very light brown': '#d3b683', 'very light green': '#d1ffbd', 'very light pink': '#fff4f2', 'very light purple': '#f6cefc', 'very pale blue': '#d6fffe', 'very pale green': '#cffdbc', 'vibrant blue': '#0339f8', 'vibrant green': '#0add08', 'vibrant purple': '#ad03de', 'violet': '#9a0eea', 'violet blue': '#510ac9', 'violet pink': '#fb5ffc', 'violet red': '#a50055', 'viridian': '#1e9167', 'vivid blue': '#152eff', 'vivid green': '#2fef10', 'vivid purple': '#9900fa', 'vomit': '#a2a415', 'vomit green': '#89a203', 'vomit yellow': '#c7c10c', 'warm blue': '#4b57db', 'warm brown': '#964e02', 'warm grey': '#978a84', 'warm pink': '#fb5581', 'warm purple': '#952e8f', 'washed out green': '#bcf5a6', 'water blue': '#0e87cc', 'watermelon': '#fd4659', 'weird green': '#3ae57f', 'wheat': '#fbdd7e', 'white': '#ffffff', 'windows blue': '#3778bf', 'wine': '#80013f', 'wine red': '#7b0323', 'wintergreen': '#20f986', 'wisteria': '#a87dc2', 'yellow': '#ffff14', 'yellow brown': '#b79400', 'yellow green': '#c0fb2d', 'yellow ochre': '#cb9d06', 'yellow orange': '#fcb001', 'yellow tan': '#ffe36e', 'yellow/green': '#c8fd3d', 'yellowgreen': '#bbf90f', 'yellowish': '#faee66', 'yellowish brown': '#9b7a01', 'yellowish green': '#b0dd16', 'yellowish orange': '#ffab0f', 'yellowish tan': '#fcfc81', 'yellowy brown': '#ae8b0c', 'yellowy green': '#bff128'} seaborn-0.11.2/seaborn/conftest.py000066400000000000000000000137641410631356500170730ustar00rootroot00000000000000import numpy as np import pandas as pd import datetime import matplotlib as mpl import matplotlib.pyplot as plt import pytest def has_verdana(): 
"""Helper to verify if Verdana font is present""" # This import is relatively lengthy, so to prevent its import for # testing other tests in this module not requiring this knowledge, # import font_manager here import matplotlib.font_manager as mplfm try: verdana_font = mplfm.findfont('Verdana', fallback_to_default=False) except: # noqa # if https://github.com/matplotlib/matplotlib/pull/3435 # gets accepted return False # otherwise check if not matching the logic for a 'default' one try: unlikely_font = mplfm.findfont("very_unlikely_to_exist1234", fallback_to_default=False) except: # noqa # if matched verdana but not unlikely, Verdana must exist return True # otherwise -- if they match, must be the same default return verdana_font != unlikely_font @pytest.fixture(scope="session", autouse=True) def remove_pandas_unit_conversion(): # Prior to pandas 1.0, it registered its own datetime converters, # but they are less powerful than what matplotlib added in 2.2, # and we rely on that functionality in seaborn. 
    # https://github.com/matplotlib/matplotlib/pull/9779
    # https://github.com/pandas-dev/pandas/issues/27036
    mpl.units.registry[np.datetime64] = mpl.dates.DateConverter()
    mpl.units.registry[datetime.date] = mpl.dates.DateConverter()
    mpl.units.registry[datetime.datetime] = mpl.dates.DateConverter()


@pytest.fixture(autouse=True)
def close_figs():
    yield
    plt.close("all")


@pytest.fixture(autouse=True)
def random_seed():
    seed = sum(map(ord, "seaborn random global"))
    np.random.seed(seed)


@pytest.fixture()
def rng():
    seed = sum(map(ord, "seaborn random object"))
    return np.random.RandomState(seed)


@pytest.fixture
def wide_df(rng):
    columns = list("abc")
    index = pd.Int64Index(np.arange(10, 50, 2), name="wide_index")
    values = rng.normal(size=(len(index), len(columns)))
    return pd.DataFrame(values, index=index, columns=columns)


@pytest.fixture
def wide_array(wide_df):
    # Requires pandas >= 0.24
    # return wide_df.to_numpy()
    return np.asarray(wide_df)


@pytest.fixture
def flat_series(rng):
    index = pd.Int64Index(np.arange(10, 30), name="t")
    return pd.Series(rng.normal(size=20), index, name="s")


@pytest.fixture
def flat_array(flat_series):
    # Requires pandas >= 0.24
    # return flat_series.to_numpy()
    return np.asarray(flat_series)


@pytest.fixture
def flat_list(flat_series):
    # Requires pandas >= 0.24
    # return flat_series.to_list()
    return flat_series.tolist()


@pytest.fixture(params=["series", "array", "list"])
def flat_data(rng, request):
    index = pd.Int64Index(np.arange(10, 30), name="t")
    series = pd.Series(rng.normal(size=20), index, name="s")
    if request.param == "series":
        data = series
    elif request.param == "array":
        try:
            data = series.to_numpy()  # Requires pandas >= 0.24
        except AttributeError:
            data = np.asarray(series)
    elif request.param == "list":
        try:
            data = series.to_list()  # Requires pandas >= 0.24
        except AttributeError:
            data = series.tolist()
    return data


@pytest.fixture
def wide_list_of_series(rng):
    return [pd.Series(rng.normal(size=20), np.arange(20), name="a"),
            pd.Series(rng.normal(size=10), np.arange(5, 15), name="b")]


@pytest.fixture
def wide_list_of_arrays(wide_list_of_series):
    # Requires pandas >= 0.24
    # return [s.to_numpy() for s in wide_list_of_series]
    return [np.asarray(s) for s in wide_list_of_series]


@pytest.fixture
def wide_list_of_lists(wide_list_of_series):
    # Requires pandas >= 0.24
    # return [s.to_list() for s in wide_list_of_series]
    return [s.tolist() for s in wide_list_of_series]


@pytest.fixture
def wide_dict_of_series(wide_list_of_series):
    return {s.name: s for s in wide_list_of_series}


@pytest.fixture
def wide_dict_of_arrays(wide_list_of_series):
    # Requires pandas >= 0.24
    # return {s.name: s.to_numpy() for s in wide_list_of_series}
    return {s.name: np.asarray(s) for s in wide_list_of_series}


@pytest.fixture
def wide_dict_of_lists(wide_list_of_series):
    # Requires pandas >= 0.24
    # return {s.name: s.to_list() for s in wide_list_of_series}
    return {s.name: s.tolist() for s in wide_list_of_series}


@pytest.fixture
def long_df(rng):
    n = 100
    df = pd.DataFrame(dict(
        x=rng.uniform(0, 20, n).round().astype("int"),
        y=rng.normal(size=n),
        z=rng.lognormal(size=n),
        a=rng.choice(list("abc"), n),
        b=rng.choice(list("mnop"), n),
        c=rng.choice([0, 1], n, [.3, .7]),
        t=rng.choice(np.arange("2004-07-30", "2007-07-30",
                               dtype="datetime64[Y]"), n),
        s=rng.choice([2, 4, 8], n),
        f=rng.choice([0.2, 0.3], n),
    ))

    a_cat = df["a"].astype("category")
    new_categories = np.roll(a_cat.cat.categories, 1)
    df["a_cat"] = a_cat.cat.reorder_categories(new_categories)

    df["s_cat"] = df["s"].astype("category")
    df["s_str"] = df["s"].astype(str)

    return df


@pytest.fixture
def long_dict(long_df):
    return long_df.to_dict()


@pytest.fixture
def repeated_df(rng):
    n = 100
    return pd.DataFrame(dict(
        x=np.tile(np.arange(n // 2), 2),
        y=rng.normal(size=n),
        a=rng.choice(list("abc"), n),
        u=np.repeat(np.arange(2), n // 2),
    ))


@pytest.fixture
def missing_df(rng, long_df):
    df = long_df.copy()
    for col in df:
        idx = rng.permutation(df.index)[:10]
        df.loc[idx, col] = np.nan
    return df


@pytest.fixture
def object_df(rng, long_df):
    df = long_df.copy()
    # objectify numeric columns
    for col in ["c", "s", "f"]:
        df[col] = df[col].astype(object)
    return df


@pytest.fixture
def null_series(flat_series):
    return pd.Series(index=flat_series.index, dtype='float64')

seaborn-0.11.2/seaborn/distributions.py

"""Plotting functions for visualizing distributions."""
from numbers import Number
from functools import partial
import math
import warnings

import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.transforms as tx
from matplotlib.colors import to_rgba
from matplotlib.collections import LineCollection

from scipy import stats

from ._core import (
    VectorPlotter,
)
from ._statistics import (
    KDE,
    Histogram,
    ECDF,
)
from .axisgrid import (
    FacetGrid,
    _facet_docs,
)
from .utils import (
    remove_na,
    _kde_support,
    _normalize_kwargs,
    _check_argument,
    _assign_default_kwargs,
)
from .palettes import color_palette
from .external import husl
from ._decorators import _deprecate_positional_args
from ._docstrings import (
    DocstringComponents,
    _core_docs,
)


__all__ = ["displot", "histplot", "kdeplot", "ecdfplot", "rugplot", "distplot"]

# ==================================================================================== #
# Module documentation
# ==================================================================================== #

_dist_params = dict(

    multiple="""
multiple : {{"layer", "stack", "fill"}}
    Method for drawing multiple elements when semantic mapping creates subsets.
    Only relevant with univariate data.
    """,
    log_scale="""
log_scale : bool or number, or pair of bools or numbers
    Set axis scale(s) to log. A single value sets the data axis for univariate
    distributions and both axes for bivariate distributions. A pair of values
    sets each axis independently. Numeric values are interpreted as the desired
    base (default 10).
If `False`, defer to the existing Axes scale. """, legend=""" legend : bool If False, suppress the legend for semantic variables. """, cbar=""" cbar : bool If True, add a colorbar to annotate the color mapping in a bivariate plot. Note: Does not currently support plots with a ``hue`` variable well. """, cbar_ax=""" cbar_ax : :class:`matplotlib.axes.Axes` Pre-existing axes for the colorbar. """, cbar_kws=""" cbar_kws : dict Additional parameters passed to :meth:`matplotlib.figure.Figure.colorbar`. """, ) _param_docs = DocstringComponents.from_nested_components( core=_core_docs["params"], facets=DocstringComponents(_facet_docs), dist=DocstringComponents(_dist_params), kde=DocstringComponents.from_function_params(KDE.__init__), hist=DocstringComponents.from_function_params(Histogram.__init__), ecdf=DocstringComponents.from_function_params(ECDF.__init__), ) # ==================================================================================== # # Internal API # ==================================================================================== # class _DistributionPlotter(VectorPlotter): semantics = "x", "y", "hue", "weights" wide_structure = {"x": "@values", "hue": "@columns"} flat_structure = {"x": "@values"} def __init__( self, data=None, variables={}, ): super().__init__(data=data, variables=variables) @property def univariate(self): """Return True if only x or y are used.""" # TODO this could go down to core, but putting it here now. # We'd want to be conceptually clear that univariate only applies # to x/y and not to other semantics, which can exist. # We haven't settled on a good conceptual name for x/y. return bool({"x", "y"} - set(self.variables)) @property def data_variable(self): """Return the variable with data for univariate plots.""" # TODO This could also be in core, but it should have a better name. 
if not self.univariate: raise AttributeError("This is not a univariate plot") return {"x", "y"}.intersection(self.variables).pop() @property def has_xy_data(self): """Return True if at least one of x or y is defined.""" # TODO see above points about where this should go return bool({"x", "y"} & set(self.variables)) def _add_legend( self, ax_obj, artist, fill, element, multiple, alpha, artist_kws, legend_kws, ): """Add artists that reflect semantic mappings and put them in a legend.""" # TODO note that this doesn't handle numeric mappings like the relational plots handles = [] labels = [] for level in self._hue_map.levels: color = self._hue_map(level) handles.append(artist( **self._artist_kws( artist_kws, fill, element, multiple, color, alpha ) )) labels.append(level) if isinstance(ax_obj, mpl.axes.Axes): ax_obj.legend(handles, labels, title=self.variables["hue"], **legend_kws) else: # i.e. a FacetGrid. TODO make this better legend_data = dict(zip(labels, handles)) ax_obj.add_legend( legend_data, title=self.variables["hue"], label_order=self.var_levels["hue"], **legend_kws ) def _artist_kws(self, kws, fill, element, multiple, color, alpha): """Handle differences between artists in filled/unfilled plots.""" kws = kws.copy() if fill: kws.setdefault("facecolor", to_rgba(color, alpha)) if multiple in ["stack", "fill"] or element == "bars": kws.setdefault("edgecolor", mpl.rcParams["patch.edgecolor"]) else: kws.setdefault("edgecolor", to_rgba(color, 1)) elif element == "bars": kws["facecolor"] = "none" kws["edgecolor"] = to_rgba(color, alpha) else: kws["color"] = to_rgba(color, alpha) return kws def _quantile_to_level(self, data, quantile): """Return data levels corresponding to quantile cuts of mass.""" isoprop = np.asarray(quantile) values = np.ravel(data) sorted_values = np.sort(values)[::-1] normalized_values = np.cumsum(sorted_values) / values.sum() idx = np.searchsorted(normalized_values, 1 - isoprop) levels = np.take(sorted_values, idx, mode="clip") return levels def 
_cmap_from_color(self, color): """Return a sequential colormap given a color seed.""" # Like so much else here, this is broadly useful, but keeping it # in this class to signify that I haven't thought overly hard about it... r, g, b, _ = to_rgba(color) h, s, _ = husl.rgb_to_husl(r, g, b) xx = np.linspace(-1, 1, int(1.15 * 256))[:256] ramp = np.zeros((256, 3)) ramp[:, 0] = h ramp[:, 1] = s * np.cos(xx) ramp[:, 2] = np.linspace(35, 80, 256) colors = np.clip([husl.husl_to_rgb(*hsl) for hsl in ramp], 0, 1) return mpl.colors.ListedColormap(colors[::-1]) def _default_discrete(self): """Find default values for discrete hist estimation based on variable type.""" if self.univariate: discrete = self.var_types[self.data_variable] == "categorical" else: discrete_x = self.var_types["x"] == "categorical" discrete_y = self.var_types["y"] == "categorical" discrete = discrete_x, discrete_y return discrete def _resolve_multiple(self, curves, multiple): """Modify the density data structure to handle multiple densities.""" # Default baselines have all densities starting at 0 baselines = {k: np.zeros_like(v) for k, v in curves.items()} # TODO we should have some central clearinghouse for checking if any # "grouping" (terminology?) 
semantics have been assigned if "hue" not in self.variables: return curves, baselines if multiple in ("stack", "fill"): # Setting stack or fill means that the curves share a # support grid / set of bin edges, so we can make a dataframe # Reverse the column order to plot from top to bottom curves = pd.DataFrame(curves).iloc[:, ::-1] # Find column groups that are nested within col/row variables column_groups = {} for i, keyd in enumerate(map(dict, curves.columns.tolist())): facet_key = keyd.get("col", None), keyd.get("row", None) column_groups.setdefault(facet_key, []) column_groups[facet_key].append(i) baselines = curves.copy() for cols in column_groups.values(): norm_constant = curves.iloc[:, cols].sum(axis="columns") # Take the cumulative sum to stack curves.iloc[:, cols] = curves.iloc[:, cols].cumsum(axis="columns") # Normalize by row sum to fill if multiple == "fill": curves.iloc[:, cols] = (curves .iloc[:, cols] .div(norm_constant, axis="index")) # Define where each segment starts baselines.iloc[:, cols] = (curves .iloc[:, cols] .shift(1, axis=1) .fillna(0)) if multiple == "dodge": # Account for the unique semantic (non-faceting) levels # This will require rethinking if we add other semantics! 
hue_levels = self.var_levels["hue"] n = len(hue_levels) for key in curves: level = dict(key)["hue"] hist = curves[key].reset_index(name="heights") hist["widths"] /= n hist["edges"] += hue_levels.index(level) * hist["widths"] curves[key] = hist.set_index(["edges", "widths"])["heights"] return curves, baselines # -------------------------------------------------------------------------------- # # Computation # -------------------------------------------------------------------------------- # def _compute_univariate_density( self, data_variable, common_norm, common_grid, estimate_kws, log_scale, warn_singular=True, ): # Initialize the estimator object estimator = KDE(**estimate_kws) all_data = self.plot_data.dropna() if set(self.variables) - {"x", "y"}: if common_grid: all_observations = self.comp_data.dropna() estimator.define_support(all_observations[data_variable]) else: common_norm = False densities = {} for sub_vars, sub_data in self.iter_data("hue", from_comp_data=True): # Extract the data points from this sub set and remove nulls sub_data = sub_data.dropna() observations = sub_data[data_variable] observation_variance = observations.var() if math.isclose(observation_variance, 0) or np.isnan(observation_variance): msg = ( "Dataset has 0 variance; skipping density estimate. " "Pass `warn_singular=False` to disable this warning." 
) if warn_singular: warnings.warn(msg, UserWarning) continue # Extract the weights for this subset of observations if "weights" in self.variables: weights = sub_data["weights"] else: weights = None # Estimate the density of observations at this level density, support = estimator(observations, weights=weights) if log_scale: support = np.power(10, support) # Apply a scaling factor so that the integral over all subsets is 1 if common_norm: density *= len(sub_data) / len(all_data) # Store the density for this level key = tuple(sub_vars.items()) densities[key] = pd.Series(density, index=support) return densities # -------------------------------------------------------------------------------- # # Plotting # -------------------------------------------------------------------------------- # def plot_univariate_histogram( self, multiple, element, fill, common_norm, common_bins, shrink, kde, kde_kws, color, legend, line_kws, estimate_kws, **plot_kws, ): # -- Default keyword dicts kde_kws = {} if kde_kws is None else kde_kws.copy() line_kws = {} if line_kws is None else line_kws.copy() estimate_kws = {} if estimate_kws is None else estimate_kws.copy() # -- Input checking _check_argument("multiple", ["layer", "stack", "fill", "dodge"], multiple) _check_argument("element", ["bars", "step", "poly"], element) if estimate_kws["discrete"] and element != "bars": raise ValueError("`element` must be 'bars' when `discrete` is True") auto_bins_with_weights = ( "weights" in self.variables and estimate_kws["bins"] == "auto" and estimate_kws["binwidth"] is None and not estimate_kws["discrete"] ) if auto_bins_with_weights: msg = ( "`bins` cannot be 'auto' when using weights. " "Setting `bins=10`, but you will likely want to adjust." 
) warnings.warn(msg, UserWarning) estimate_kws["bins"] = 10 # Simplify downstream code if we are not normalizing if estimate_kws["stat"] == "count": common_norm = False # Now initialize the Histogram estimator estimator = Histogram(**estimate_kws) histograms = {} # Do pre-compute housekeeping related to multiple groups # TODO best way to account for facet/semantic? if set(self.variables) - {"x", "y"}: all_data = self.comp_data.dropna() if common_bins: all_observations = all_data[self.data_variable] estimator.define_bin_params( all_observations, weights=all_data.get("weights", None), ) else: common_norm = False # Estimate the smoothed kernel densities, for use later if kde: # TODO alternatively, clip at min/max bins? kde_kws.setdefault("cut", 0) kde_kws["cumulative"] = estimate_kws["cumulative"] log_scale = self._log_scaled(self.data_variable) densities = self._compute_univariate_density( self.data_variable, common_norm, common_bins, kde_kws, log_scale, warn_singular=False, ) # First pass through the data to compute the histograms for sub_vars, sub_data in self.iter_data("hue", from_comp_data=True): # Prepare the relevant data key = tuple(sub_vars.items()) sub_data = sub_data.dropna() observations = sub_data[self.data_variable] if "weights" in self.variables: weights = sub_data["weights"] else: weights = None # Do the histogram computation heights, edges = estimator(observations, weights=weights) # Rescale the smoothed curve to match the histogram if kde and key in densities: density = densities[key] if estimator.cumulative: hist_norm = heights.max() else: hist_norm = (heights * np.diff(edges)).sum() densities[key] *= hist_norm # Convert edges back to original units for plotting if self._log_scaled(self.data_variable): edges = np.power(10, edges) # Pack the histogram data and metadata together orig_widths = np.diff(edges) widths = shrink * orig_widths edges = edges[:-1] + (1 - shrink) / 2 * orig_widths index = pd.MultiIndex.from_arrays([ pd.Index(edges, 
name="edges"), pd.Index(widths, name="widths"), ]) hist = pd.Series(heights, index=index, name="heights") # Apply scaling to normalize across groups if common_norm: hist *= len(sub_data) / len(all_data) # Store the finalized histogram data for future plotting histograms[key] = hist # Modify the histogram and density data to resolve multiple groups histograms, baselines = self._resolve_multiple(histograms, multiple) if kde: densities, _ = self._resolve_multiple( densities, None if multiple == "dodge" else multiple ) # Set autoscaling-related meta sticky_stat = (0, 1) if multiple == "fill" else (0, np.inf) if multiple == "fill": # Filled plots should not have any margins bin_vals = histograms.index.to_frame() edges = bin_vals["edges"] widths = bin_vals["widths"] sticky_data = ( edges.min(), edges.max() + widths.loc[edges.idxmax()] ) else: sticky_data = [] # --- Handle default visual attributes # Note: default linewidth is determined after plotting # Default color without a hue semantic should follow the color cycle # Note, this is fairly complicated and awkward, I'd like a better way # TODO and now with the ax business, this is just super annoying FIX!! 
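The `_resolve_multiple` bookkeeping applied just above reduces, for "stack" and "fill", to a cumulative sum across the hue levels, an optional division by the per-bin total, and shifted cumsums as the drawing baselines. A minimal stand-alone sketch with plain NumPy arrays, assuming every curve shares one support grid (`resolve_stack_fill` is a hypothetical helper for illustration, not seaborn API; the real method works on pandas objects and handles facet columns separately):

```python
import numpy as np

def resolve_stack_fill(curves, multiple="stack"):
    # `curves` maps hue level -> heights on one shared support grid
    heights = np.column_stack(list(curves.values()))
    norm = heights.sum(axis=1, keepdims=True)
    tops = heights.cumsum(axis=1)  # stack: each curve sits on the previous one
    if multiple == "fill":
        tops = tops / norm  # fill: normalize so the stack tops out at 1
    # Each curve is drawn from where the previous one ended
    bottoms = np.concatenate(
        [np.zeros((len(tops), 1)), tops[:, :-1]], axis=1
    )
    return tops, bottoms

tops, bottoms = resolve_stack_fill(
    {"a": np.array([1.0, 2.0]), "b": np.array([3.0, 2.0])}, multiple="fill"
)
```

With `multiple="fill"` the outermost curve is flat at 1, and each baseline equals the previous curve's top, which is exactly the shape the bar/step drawing code below expects.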
if "hue" not in self.variables: if self.ax is None: default_color = "C0" if color is None else color else: if fill: if self.var_types[self.data_variable] == "datetime": # Avoid drawing empty fill_between on date axis # https://github.com/matplotlib/matplotlib/issues/17586 scout = None default_color = plot_kws.pop("facecolor", color) if default_color is None: default_color = "C0" else: artist = mpl.patches.Rectangle plot_kws = _normalize_kwargs(plot_kws, artist) scout = self.ax.fill_between([], [], color=color, **plot_kws) default_color = tuple(scout.get_facecolor().squeeze()) else: artist = mpl.lines.Line2D plot_kws = _normalize_kwargs(plot_kws, artist) scout, = self.ax.plot([], [], color=color, **plot_kws) default_color = scout.get_color() if scout is not None: scout.remove() # Default alpha should depend on other parameters if fill: # Note: will need to account for other grouping semantics if added if "hue" in self.variables and multiple == "layer": default_alpha = .5 if element == "bars" else .25 elif kde: default_alpha = .5 else: default_alpha = .75 else: default_alpha = 1 alpha = plot_kws.pop("alpha", default_alpha) # TODO make parameter? 
hist_artists = [] # Go back through the dataset and draw the plots for sub_vars, _ in self.iter_data("hue", reverse=True): key = tuple(sub_vars.items()) hist = histograms[key].rename("heights").reset_index() bottom = np.asarray(baselines[key]) ax = self._get_axes(sub_vars) # Define the matplotlib attributes that depend on semantic mapping if "hue" in self.variables: color = self._hue_map(sub_vars["hue"]) else: color = default_color artist_kws = self._artist_kws( plot_kws, fill, element, multiple, color, alpha ) if element == "bars": # Use matplotlib bar plotting plot_func = ax.bar if self.data_variable == "x" else ax.barh artists = plot_func( hist["edges"], hist["heights"] - bottom, hist["widths"], bottom, align="edge", **artist_kws, ) for bar in artists: if self.data_variable == "x": bar.sticky_edges.x[:] = sticky_data bar.sticky_edges.y[:] = sticky_stat else: bar.sticky_edges.x[:] = sticky_stat bar.sticky_edges.y[:] = sticky_data hist_artists.extend(artists) else: # Use either fill_between or plot to draw hull of histogram if element == "step": final = hist.iloc[-1] x = np.append(hist["edges"], final["edges"] + final["widths"]) y = np.append(hist["heights"], final["heights"]) b = np.append(bottom, bottom[-1]) if self.data_variable == "x": step = "post" drawstyle = "steps-post" else: step = "post" # fillbetweenx handles mapping internally drawstyle = "steps-pre" elif element == "poly": x = hist["edges"] + hist["widths"] / 2 y = hist["heights"] b = bottom step = None drawstyle = None if self.data_variable == "x": if fill: artist = ax.fill_between(x, b, y, step=step, **artist_kws) else: artist, = ax.plot(x, y, drawstyle=drawstyle, **artist_kws) artist.sticky_edges.x[:] = sticky_data artist.sticky_edges.y[:] = sticky_stat else: if fill: artist = ax.fill_betweenx(x, b, y, step=step, **artist_kws) else: artist, = ax.plot(y, x, drawstyle=drawstyle, **artist_kws) artist.sticky_edges.x[:] = sticky_stat artist.sticky_edges.y[:] = sticky_data hist_artists.append(artist) if 
kde: # Add in the density curves try: density = densities[key] except KeyError: continue support = density.index if "x" in self.variables: line_args = support, density sticky_x, sticky_y = None, (0, np.inf) else: line_args = density, support sticky_x, sticky_y = (0, np.inf), None line_kws["color"] = to_rgba(color, 1) line, = ax.plot( *line_args, **line_kws, ) if sticky_x is not None: line.sticky_edges.x[:] = sticky_x if sticky_y is not None: line.sticky_edges.y[:] = sticky_y if element == "bars" and "linewidth" not in plot_kws: # Now we handle linewidth, which depends on the scaling of the plot # We will base everything on the minimum bin width hist_metadata = pd.concat([ # Use .items for generality over dict or df h.index.to_frame() for _, h in histograms.items() ]).reset_index(drop=True) thin_bar_idx = hist_metadata["widths"].idxmin() binwidth = hist_metadata.loc[thin_bar_idx, "widths"] left_edge = hist_metadata.loc[thin_bar_idx, "edges"] # Set initial value default_linewidth = math.inf # Loop through subsets based only on facet variables for sub_vars, _ in self.iter_data(): ax = self._get_axes(sub_vars) # Needed in some cases to get valid transforms. # Innocuous in other cases? 
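The linewidth logic that follows converts the narrowest bin from data coordinates to points by measuring its display (pixel) extent through `ax.transData` and scaling by the figure DPI. Stripped of the matplotlib transform machinery, the unit conversion is simply `points = pixels * 72 / dpi`, since a point is 1/72 inch (`pixels_to_points` is a hypothetical name used only for this sketch):

```python
def pixels_to_points(pixel_extent, dpi):
    # Matplotlib measures linewidths in points (1 point = 1/72 inch),
    # while transData yields display units (pixels at `dpi` dots per inch).
    return 72 / dpi * abs(pixel_extent)

# A bin spanning 50 pixels on a 100-dpi figure is 36 points wide, so the
# provisional bar linewidth (.1 * width, per the code below) would be 3.6.
width_pts = pixels_to_points(50, dpi=100)
```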
ax.autoscale_view() # Convert binwidth from data coordinates to pixels pts_x, pts_y = 72 / ax.figure.dpi * abs( ax.transData.transform([left_edge + binwidth] * 2) - ax.transData.transform([left_edge] * 2) ) if self.data_variable == "x": binwidth_points = pts_x else: binwidth_points = pts_y # The relative size of the lines depends on the appearance # This is a provisional value and may need more tweaking default_linewidth = min(.1 * binwidth_points, default_linewidth) # Set the attributes for bar in hist_artists: # Don't let the lines get too thick max_linewidth = bar.get_linewidth() if not fill: max_linewidth *= 1.5 linewidth = min(default_linewidth, max_linewidth) # If not filling, don't let lines disappear if not fill: min_linewidth = .5 linewidth = max(linewidth, min_linewidth) bar.set_linewidth(linewidth) # --- Finalize the plot ---- # Axis labels ax = self.ax if self.ax is not None else self.facets.axes.flat[0] default_x = default_y = "" if self.data_variable == "x": default_y = estimator.stat.capitalize() if self.data_variable == "y": default_x = estimator.stat.capitalize() self._add_axis_labels(ax, default_x, default_y) # Legend for semantic variables if "hue" in self.variables and legend: if fill or element == "bars": artist = partial(mpl.patches.Patch) else: artist = partial(mpl.lines.Line2D, [], []) ax_obj = self.ax if self.ax is not None else self.facets self._add_legend( ax_obj, artist, fill, element, multiple, alpha, plot_kws, {}, ) def plot_bivariate_histogram( self, common_bins, common_norm, thresh, pthresh, pmax, color, legend, cbar, cbar_ax, cbar_kws, estimate_kws, **plot_kws, ): # Default keyword dicts cbar_kws = {} if cbar_kws is None else cbar_kws.copy() # Now initialize the Histogram estimator estimator = Histogram(**estimate_kws) # Do pre-compute housekeeping related to multiple groups if set(self.variables) - {"x", "y"}: all_data = self.comp_data.dropna() if common_bins: estimator.define_bin_params( all_data["x"], all_data["y"], 
all_data.get("weights", None), ) else: common_norm = False # -- Determine colormap threshold and norm based on the full data full_heights = [] for _, sub_data in self.iter_data(from_comp_data=True): sub_data = sub_data.dropna() sub_heights, _ = estimator( sub_data["x"], sub_data["y"], sub_data.get("weights", None) ) full_heights.append(sub_heights) common_color_norm = not set(self.variables) - {"x", "y"} or common_norm if pthresh is not None and common_color_norm: thresh = self._quantile_to_level(full_heights, pthresh) plot_kws.setdefault("vmin", 0) if common_color_norm: if pmax is not None: vmax = self._quantile_to_level(full_heights, pmax) else: vmax = plot_kws.pop("vmax", max(map(np.max, full_heights))) else: vmax = None # Get a default color # (We won't follow the color cycle here, as multiple plots are unlikely) if color is None: color = "C0" # --- Loop over data (subsets) and draw the histograms for sub_vars, sub_data in self.iter_data("hue", from_comp_data=True): sub_data = sub_data.dropna() if sub_data.empty: continue # Do the histogram computation heights, (x_edges, y_edges) = estimator( sub_data["x"], sub_data["y"], weights=sub_data.get("weights", None), ) # Check for log scaling on the data axis if self._log_scaled("x"): x_edges = np.power(10, x_edges) if self._log_scaled("y"): y_edges = np.power(10, y_edges) # Apply scaling to normalize across groups if estimator.stat != "count" and common_norm: heights *= len(sub_data) / len(all_data) # Define the specific kwargs for this artist artist_kws = plot_kws.copy() if "hue" in self.variables: color = self._hue_map(sub_vars["hue"]) cmap = self._cmap_from_color(color) artist_kws["cmap"] = cmap else: cmap = artist_kws.pop("cmap", None) if isinstance(cmap, str): cmap = color_palette(cmap, as_cmap=True) elif cmap is None: cmap = self._cmap_from_color(color) artist_kws["cmap"] = cmap # Set the upper norm on the colormap if not common_color_norm and pmax is not None: vmax = self._quantile_to_level(heights, pmax) if 
vmax is not None: artist_kws["vmax"] = vmax # Make cells at or below the threshold transparent if not common_color_norm and pthresh: thresh = self._quantile_to_level(heights, pthresh) if thresh is not None: heights = np.ma.masked_less_equal(heights, thresh) # Get the axes for this plot ax = self._get_axes(sub_vars) # pcolormesh is going to turn the grid off, but we want to keep it # I'm not sure if there's a better way to get the grid state x_grid = any([l.get_visible() for l in ax.xaxis.get_gridlines()]) y_grid = any([l.get_visible() for l in ax.yaxis.get_gridlines()]) mesh = ax.pcolormesh( x_edges, y_edges, heights.T, **artist_kws, ) # pcolormesh sets sticky edges, but we only want them if not thresholding if thresh is not None: mesh.sticky_edges.x[:] = [] mesh.sticky_edges.y[:] = [] # Add an optional colorbar # Note, we want to improve this. When hue is used, it will stack # multiple colorbars with redundant ticks in an ugly way. # But it's going to take some work to have multiple colorbars that # share ticks nicely. 
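The `pmax`/`pthresh` handling here relies on `_quantile_to_level`, defined earlier in this class: sort the density values in decreasing order, accumulate their normalized mass, and pick the value at which the retained top-down mass first reaches the requested proportion. A stand-alone restatement of that logic:

```python
import numpy as np

def quantile_to_level(data, quantile):
    # Density value bounding the top `quantile` fraction of total mass
    # (mirrors _DistributionPlotter._quantile_to_level above)
    values = np.ravel(data)
    sorted_values = np.sort(values)[::-1]
    normalized = np.cumsum(sorted_values) / values.sum()
    idx = np.searchsorted(normalized, 1 - np.asarray(quantile))
    return np.take(sorted_values, idx, mode="clip")

# Descending values [4, 3, 2, 1] give cumulative mass fractions
# [.4, .7, .9, 1.0]; the first value reaching at least 50% from
# the top is 3.0.
level = quantile_to_level(np.array([4.0, 3.0, 2.0, 1.0]), 0.5)
```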
if cbar: ax.figure.colorbar(mesh, cbar_ax, ax, **cbar_kws) # Reset the grid state if x_grid: ax.grid(True, axis="x") if y_grid: ax.grid(True, axis="y") # --- Finalize the plot ax = self.ax if self.ax is not None else self.facets.axes.flat[0] self._add_axis_labels(ax) if "hue" in self.variables and legend: # TODO if possible, I would like to move the contour # intensity information into the legend too and label the # iso proportions rather than the raw density values artist_kws = {} artist = partial(mpl.patches.Patch) ax_obj = self.ax if self.ax is not None else self.facets self._add_legend( ax_obj, artist, True, False, "layer", 1, artist_kws, {}, ) def plot_univariate_density( self, multiple, common_norm, common_grid, warn_singular, fill, legend, estimate_kws, **plot_kws, ): # Handle conditional defaults if fill is None: fill = multiple in ("stack", "fill") # Preprocess the matplotlib keyword dictionaries if fill: artist = mpl.collections.PolyCollection else: artist = mpl.lines.Line2D plot_kws = _normalize_kwargs(plot_kws, artist) # Input checking _check_argument("multiple", ["layer", "stack", "fill"], multiple) # Always share the evaluation grid when stacking subsets = bool(set(self.variables) - {"x", "y"}) if subsets and multiple in ("stack", "fill"): common_grid = True # Check if the data axis is log scaled log_scale = self._log_scaled(self.data_variable) # Do the computation densities = self._compute_univariate_density( self.data_variable, common_norm, common_grid, estimate_kws, log_scale, warn_singular, ) # Adjust densities based on the `multiple` rule densities, baselines = self._resolve_multiple(densities, multiple) # Control the interaction with autoscaling by defining sticky_edges # i.e. 
we don't want autoscale margins below the density curve sticky_density = (0, 1) if multiple == "fill" else (0, np.inf) if multiple == "fill": # Filled plots should not have any margins sticky_support = densities.index.min(), densities.index.max() else: sticky_support = [] # Handle default visual attributes if "hue" not in self.variables: if self.ax is None: color = plot_kws.pop("color", None) default_color = "C0" if color is None else color else: if fill: if self.var_types[self.data_variable] == "datetime": # Avoid drawing empty fill_between on date axis # https://github.com/matplotlib/matplotlib/issues/17586 scout = None default_color = plot_kws.pop( "color", plot_kws.pop("facecolor", None) ) if default_color is None: default_color = "C0" else: scout = self.ax.fill_between([], [], **plot_kws) default_color = tuple(scout.get_facecolor().squeeze()) plot_kws.pop("color", None) else: scout, = self.ax.plot([], [], **plot_kws) default_color = scout.get_color() if scout is not None: scout.remove() plot_kws.pop("color", None) if fill: if multiple == "layer": default_alpha = .25 else: default_alpha = .75 else: default_alpha = 1 alpha = plot_kws.pop("alpha", default_alpha) # TODO make parameter? 
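The `common_norm` rescaling used by `_compute_univariate_density` multiplies each subset's unit-integral density by that subset's share of the observations, so the subset curves together integrate to 1 rather than to the number of hue levels. A toy check with uniform densities (the grid and subset sizes are illustrative only, not seaborn API):

```python
import numpy as np

grid = np.linspace(0, 1, 101)
density_a = np.ones_like(grid)  # each subset density integrates to 1 alone
density_b = np.ones_like(grid)

n_a, n_b = 30, 70  # subset sizes
density_a = density_a * n_a / (n_a + n_b)
density_b = density_b * n_b / (n_a + n_b)

# Left Riemann sum of the combined curve over [0, 1]
dx = grid[1] - grid[0]
total_integral = ((density_a + density_b)[:-1] * dx).sum()
```

After rescaling, the combined curve integrates to 1, which is what makes stacked and filled density plots interpretable as shares of a single distribution.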
# Now iterate through the subsets and draw the densities # We go backwards so stacked densities read from top-to-bottom for sub_vars, _ in self.iter_data("hue", reverse=True): # Extract the support grid and density curve for this level key = tuple(sub_vars.items()) try: density = densities[key] except KeyError: continue support = density.index fill_from = baselines[key] ax = self._get_axes(sub_vars) # Modify the matplotlib attributes from semantic mapping if "hue" in self.variables: color = self._hue_map(sub_vars["hue"]) else: color = default_color artist_kws = self._artist_kws( plot_kws, fill, False, multiple, color, alpha ) # Either plot a curve with observation values on the x axis if "x" in self.variables: if fill: artist = ax.fill_between( support, fill_from, density, **artist_kws ) else: artist, = ax.plot(support, density, **artist_kws) artist.sticky_edges.x[:] = sticky_support artist.sticky_edges.y[:] = sticky_density # Or plot a curve with observation values on the y axis else: if fill: artist = ax.fill_betweenx( support, fill_from, density, **artist_kws ) else: artist, = ax.plot(density, support, **artist_kws) artist.sticky_edges.x[:] = sticky_density artist.sticky_edges.y[:] = sticky_support # --- Finalize the plot ---- ax = self.ax if self.ax is not None else self.facets.axes.flat[0] default_x = default_y = "" if self.data_variable == "x": default_y = "Density" if self.data_variable == "y": default_x = "Density" self._add_axis_labels(ax, default_x, default_y) if "hue" in self.variables and legend: if fill: artist = partial(mpl.patches.Patch) else: artist = partial(mpl.lines.Line2D, [], []) ax_obj = self.ax if self.ax is not None else self.facets self._add_legend( ax_obj, artist, fill, False, multiple, alpha, plot_kws, {}, ) def plot_bivariate_density( self, common_norm, fill, levels, thresh, color, legend, cbar, warn_singular, cbar_ax, cbar_kws, estimate_kws, **contour_kws, ): contour_kws = contour_kws.copy() estimator = KDE(**estimate_kws) if not 
set(self.variables) - {"x", "y"}: common_norm = False all_data = self.plot_data.dropna() # Loop through the subsets and estimate the KDEs densities, supports = {}, {} for sub_vars, sub_data in self.iter_data("hue", from_comp_data=True): # Extract the data points from this sub set and remove nulls sub_data = sub_data.dropna() observations = sub_data[["x", "y"]] # Extract the weights for this subset of observations if "weights" in self.variables: weights = sub_data["weights"] else: weights = None # Check that KDE will not error out variance = observations[["x", "y"]].var() if any(math.isclose(x, 0) for x in variance) or variance.isna().any(): msg = ( "Dataset has 0 variance; skipping density estimate. " "Pass `warn_singular=False` to disable this warning." ) if warn_singular: warnings.warn(msg, UserWarning) continue # Estimate the density of observations at this level observations = observations["x"], observations["y"] density, support = estimator(*observations, weights=weights) # Transform the support grid back to the original scale xx, yy = support if self._log_scaled("x"): xx = np.power(10, xx) if self._log_scaled("y"): yy = np.power(10, yy) support = xx, yy # Apply a scaling factor so that the integral over all subsets is 1 if common_norm: density *= len(sub_data) / len(all_data) key = tuple(sub_vars.items()) densities[key] = density supports[key] = support # Define a grid of iso-proportion levels if thresh is None: thresh = 0 if isinstance(levels, Number): levels = np.linspace(thresh, 1, levels) else: if min(levels) < 0 or max(levels) > 1: raise ValueError("levels must be in [0, 1]") # Transform from iso-proportions to iso-densities if common_norm: common_levels = self._quantile_to_level( list(densities.values()), levels, ) draw_levels = {k: common_levels for k in densities} else: draw_levels = { k: self._quantile_to_level(d, levels) for k, d in densities.items() } # Get a default single color from the attribute cycle if self.ax is None: default_color = "C0" if 
color is None else color else: scout, = self.ax.plot([], color=color) default_color = scout.get_color() scout.remove() # Define the coloring of the contours if "hue" in self.variables: for param in ["cmap", "colors"]: if param in contour_kws: msg = f"{param} parameter ignored when using hue mapping." warnings.warn(msg, UserWarning) contour_kws.pop(param) else: # Work out a default coloring of the contours coloring_given = set(contour_kws) & {"cmap", "colors"} if fill and not coloring_given: cmap = self._cmap_from_color(default_color) contour_kws["cmap"] = cmap if not fill and not coloring_given: contour_kws["colors"] = [default_color] # Use our internal colormap lookup cmap = contour_kws.pop("cmap", None) if isinstance(cmap, str): cmap = color_palette(cmap, as_cmap=True) if cmap is not None: contour_kws["cmap"] = cmap # Loop through the subsets again and plot the data for sub_vars, _ in self.iter_data("hue"): if "hue" in sub_vars: color = self._hue_map(sub_vars["hue"]) if fill: contour_kws["cmap"] = self._cmap_from_color(color) else: contour_kws["colors"] = [color] ax = self._get_axes(sub_vars) # Choose the function to plot with # TODO could add a pcolormesh based option as well # Which would look something like element="raster" if fill: contour_func = ax.contourf else: contour_func = ax.contour key = tuple(sub_vars.items()) if key not in densities: continue density = densities[key] xx, yy = supports[key] label = contour_kws.pop("label", None) cset = contour_func( xx, yy, density, levels=draw_levels[key], **contour_kws, ) if "hue" not in self.variables: cset.collections[0].set_label(label) # Add a color bar representing the contour heights # Note: this shows iso densities, not iso proportions # See more notes in histplot about how this could be improved if cbar: cbar_kws = {} if cbar_kws is None else cbar_kws ax.figure.colorbar(cset, cbar_ax, ax, **cbar_kws) # --- Finalize the plot ax = self.ax if self.ax is not None else self.facets.axes.flat[0] 
self._add_axis_labels(ax) if "hue" in self.variables and legend: # TODO if possible, I would like to move the contour # intensity information into the legend too and label the # iso proportions rather than the raw density values artist_kws = {} if fill: artist = partial(mpl.patches.Patch) else: artist = partial(mpl.lines.Line2D, [], []) ax_obj = self.ax if self.ax is not None else self.facets self._add_legend( ax_obj, artist, fill, False, "layer", 1, artist_kws, {}, ) def plot_univariate_ecdf(self, estimate_kws, legend, **plot_kws): estimator = ECDF(**estimate_kws) # Set the draw style to step the right way for the data variable drawstyles = dict(x="steps-post", y="steps-pre") plot_kws["drawstyle"] = drawstyles[self.data_variable] # Loop through the subsets, transform and plot the data for sub_vars, sub_data in self.iter_data( "hue", reverse=True, from_comp_data=True, ): # Compute the ECDF sub_data = sub_data.dropna() if sub_data.empty: continue observations = sub_data[self.data_variable] weights = sub_data.get("weights", None) stat, vals = estimator(observations, weights=weights) # Assign attributes based on semantic mapping artist_kws = plot_kws.copy() if "hue" in self.variables: artist_kws["color"] = self._hue_map(sub_vars["hue"]) # Return the data variable to the linear domain # This needs an automatic solution; see GH2409 if self._log_scaled(self.data_variable): vals = np.power(10, vals) vals[0] = -np.inf # Work out the orientation of the plot if self.data_variable == "x": plot_args = vals, stat stat_variable = "y" else: plot_args = stat, vals stat_variable = "x" if estimator.stat == "count": top_edge = len(observations) else: top_edge = 1 # Draw the line for this subset ax = self._get_axes(sub_vars) artist, = ax.plot(*plot_args, **artist_kws) sticky_edges = getattr(artist.sticky_edges, stat_variable) sticky_edges[:] = 0, top_edge # --- Finalize the plot ---- ax = self.ax if self.ax is not None else self.facets.axes.flat[0] stat = estimator.stat.capitalize() 
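`plot_univariate_ecdf` delegates the statistic to the `ECDF` estimator; in its default proportion mode the computation reduces to sorting the observations and pairing them with cumulative proportions, which the step drawstyle then renders. A minimal sketch of that default case (seaborn's `ECDF` class additionally supports weights and a `count` stat):

```python
import numpy as np

def ecdf(values):
    # Sorted observations on the data axis, cumulative proportion on
    # the stat axis; each point is the fraction of data <= that value.
    vals = np.sort(np.asarray(values))
    stat = np.arange(1, len(vals) + 1) / len(vals)
    return vals, stat

vals, stat = ecdf([3, 1, 2, 4])
```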
default_x = default_y = "" if self.data_variable == "x": default_y = stat if self.data_variable == "y": default_x = stat self._add_axis_labels(ax, default_x, default_y) if "hue" in self.variables and legend: artist = partial(mpl.lines.Line2D, [], []) alpha = plot_kws.get("alpha", 1) ax_obj = self.ax if self.ax is not None else self.facets self._add_legend( ax_obj, artist, False, False, None, alpha, plot_kws, {}, ) def plot_rug(self, height, expand_margins, legend, **kws): kws = _normalize_kwargs(kws, mpl.lines.Line2D) if self.ax is None: kws["color"] = kws.pop("color", "C0") else: scout, = self.ax.plot([], [], **kws) kws["color"] = kws.pop("color", scout.get_color()) scout.remove() for sub_vars, sub_data, in self.iter_data(from_comp_data=True): ax = self._get_axes(sub_vars) kws.setdefault("linewidth", 1) if expand_margins: xmarg, ymarg = ax.margins() if "x" in self.variables: ymarg += height * 2 if "y" in self.variables: xmarg += height * 2 ax.margins(x=xmarg, y=ymarg) if "hue" in self.variables: kws.pop("c", None) kws.pop("color", None) if "x" in self.variables: self._plot_single_rug(sub_data, "x", height, ax, kws) if "y" in self.variables: self._plot_single_rug(sub_data, "y", height, ax, kws) # --- Finalize the plot self._add_axis_labels(ax) if "hue" in self.variables and legend: # TODO ideally i'd like the legend artist to look like a rug legend_artist = partial(mpl.lines.Line2D, [], []) self._add_legend( ax, legend_artist, False, False, None, 1, {}, {}, ) def _plot_single_rug(self, sub_data, var, height, ax, kws): """Draw a rugplot along one axis of the plot.""" vector = sub_data[var] n = len(vector) # Return data to linear domain # This needs an automatic solution; see GH2409 if self._log_scaled(var): vector = np.power(10, vector) # We'll always add a single collection with varying colors if "hue" in self.variables: colors = self._hue_map(sub_data["hue"]) else: colors = None # Build the array of values for the LineCollection if var == "x": trans = 
tx.blended_transform_factory(ax.transData, ax.transAxes) xy_pairs = np.column_stack([ np.repeat(vector, 2), np.tile([0, height], n) ]) if var == "y": trans = tx.blended_transform_factory(ax.transAxes, ax.transData) xy_pairs = np.column_stack([ np.tile([0, height], n), np.repeat(vector, 2) ]) # Draw the lines on the plot line_segs = xy_pairs.reshape([n, 2, 2]) ax.add_collection(LineCollection( line_segs, transform=trans, colors=colors, **kws )) ax.autoscale_view(scalex=var == "x", scaley=var == "y") class _DistributionFacetPlotter(_DistributionPlotter): semantics = _DistributionPlotter.semantics + ("col", "row") # ==================================================================================== # # External API # ==================================================================================== # def histplot( data=None, *, # Vector variables x=None, y=None, hue=None, weights=None, # Histogram computation parameters stat="count", bins="auto", binwidth=None, binrange=None, discrete=None, cumulative=False, common_bins=True, common_norm=True, # Histogram appearance parameters multiple="layer", element="bars", fill=True, shrink=1, # Histogram smoothing with a kernel density estimate kde=False, kde_kws=None, line_kws=None, # Bivariate histogram parameters thresh=0, pthresh=None, pmax=None, cbar=False, cbar_ax=None, cbar_kws=None, # Hue mapping parameters palette=None, hue_order=None, hue_norm=None, color=None, # Axes information log_scale=None, legend=True, ax=None, # Other appearance keywords **kwargs, ): p = _DistributionPlotter( data=data, variables=_DistributionPlotter.get_semantics(locals()) ) p.map_hue(palette=palette, order=hue_order, norm=hue_norm) if ax is None: ax = plt.gca() # Check for a specification that lacks x/y data and return early if not p.has_xy_data: return ax # Attach the axes to the plotter, setting up unit conversions p._attach(ax, log_scale=log_scale) # Default to discrete bins for categorical variables if discrete is None: discrete = 
p._default_discrete()

    estimate_kws = dict(
        stat=stat,
        bins=bins,
        binwidth=binwidth,
        binrange=binrange,
        discrete=discrete,
        cumulative=cumulative,
    )

    if p.univariate:

        p.plot_univariate_histogram(
            multiple=multiple,
            element=element,
            fill=fill,
            shrink=shrink,
            common_norm=common_norm,
            common_bins=common_bins,
            kde=kde,
            kde_kws=kde_kws,
            color=color,
            legend=legend,
            estimate_kws=estimate_kws,
            line_kws=line_kws,
            **kwargs,
        )

    else:

        p.plot_bivariate_histogram(
            common_bins=common_bins,
            common_norm=common_norm,
            thresh=thresh,
            pthresh=pthresh,
            pmax=pmax,
            color=color,
            legend=legend,
            cbar=cbar,
            cbar_ax=cbar_ax,
            cbar_kws=cbar_kws,
            estimate_kws=estimate_kws,
            **kwargs,
        )

    return ax


histplot.__doc__ = """\
Plot univariate or bivariate histograms to show distributions of datasets.

A histogram is a classic visualization tool that represents the distribution
of one or more variables by counting the number of observations that fall
within discrete bins.

This function can normalize the statistic computed within each bin to estimate
frequency, density or probability mass, and it can add a smooth curve obtained
using a kernel density estimate, similar to :func:`kdeplot`.

More information is provided in the :ref:`user guide `.

Parameters
----------
{params.core.data}
{params.core.xy}
{params.core.hue}
weights : vector or key in ``data``
    If provided, weight the contribution of the corresponding data points
    towards the count in each bin by these factors.
{params.hist.stat}
{params.hist.bins}
{params.hist.binwidth}
{params.hist.binrange}
discrete : bool
    If True, default to ``binwidth=1`` and draw the bars so that they are
    centered on their corresponding data points. This avoids "gaps" that may
    otherwise appear when using discrete (integer) data.
cumulative : bool
    If True, plot the cumulative counts as bins increase.
common_bins : bool
    If True, use the same bins when semantic variables produce multiple
    plots. If using a reference rule to determine the bins, it will be
    computed with the full dataset.
common_norm : bool
    If True and using a normalized statistic, the normalization will apply over
    the full dataset. Otherwise, normalize each histogram independently.
multiple : {{"layer", "dodge", "stack", "fill"}}
    Approach to resolving multiple elements when semantic mapping creates
    subsets. Only relevant with univariate data.
element : {{"bars", "step", "poly"}}
    Visual representation of the histogram statistic.
    Only relevant with univariate data.
fill : bool
    If True, fill in the space under the histogram.
    Only relevant with univariate data.
shrink : number
    Scale the width of each bar relative to the binwidth by this factor.
    Only relevant with univariate data.
kde : bool
    If True, compute a kernel density estimate to smooth the distribution
    and show on the plot as (one or more) line(s).
    Only relevant with univariate data.
kde_kws : dict
    Parameters that control the KDE computation, as in :func:`kdeplot`.
line_kws : dict
    Parameters that control the KDE visualization, passed to
    :meth:`matplotlib.axes.Axes.plot`.
thresh : number or None
    Cells with a statistic less than or equal to this value will be
    transparent. Only relevant with bivariate data.
pthresh : number or None
    Like ``thresh``, but a value in [0, 1] such that cells with aggregate
    counts (or other statistics, when used) up to this proportion of the total
    will be transparent.
pmax : number or None
    A value in [0, 1] that sets the saturation point for the colormap, chosen
    such that cells below it constitute this proportion of the total count
    (or other statistic, when used).
{params.dist.cbar}
{params.dist.cbar_ax}
{params.dist.cbar_kws}
{params.core.palette}
{params.core.hue_order}
{params.core.hue_norm}
{params.core.color}
{params.dist.log_scale}
{params.dist.legend}
{params.core.ax}
kwargs
    Other keyword arguments are passed to one of the following matplotlib
    functions:

    - :meth:`matplotlib.axes.Axes.bar` (univariate, element="bars")
    - :meth:`matplotlib.axes.Axes.fill_between` (univariate, other element,
      fill=True)
    - :meth:`matplotlib.axes.Axes.plot` (univariate, other element, fill=False)
    - :meth:`matplotlib.axes.Axes.pcolormesh` (bivariate)

Returns
-------
{returns.ax}

See Also
--------
{seealso.displot}
{seealso.kdeplot}
{seealso.rugplot}
{seealso.ecdfplot}
{seealso.jointplot}

Notes
-----

The choice of bins for computing and plotting a histogram can exert
substantial influence on the insights that one is able to draw from the
visualization. If the bins are too large, they may erase important features.
On the other hand, bins that are too small may be dominated by random
variability, obscuring the shape of the true underlying distribution. The
default bin size is determined using a reference rule that depends on the
sample size and variance. This works well in many cases (i.e., with
"well-behaved" data) but it fails in others. It is always a good idea to try
different bin sizes to be sure that you are not missing something important.
This function allows you to specify bins in several different ways, such as
by setting the total number of bins to use, the width of each bin, or the
specific locations where the bins should break.

Examples
--------

.. include:: ../docstrings/histplot.rst

""".format(
    params=_param_docs,
    returns=_core_docs["returns"],
    seealso=_core_docs["seealso"],
)


@_deprecate_positional_args
def kdeplot(
    x=None,  # Allow positional x, because behavior will not change with reorg
    *,
    y=None,
    shade=None,  # Note "soft" deprecation, explained below
    vertical=False,  # Deprecated
    kernel=None,  # Deprecated
    bw=None,  # Deprecated
    gridsize=200,  # TODO maybe depend on uni/bivariate?
    cut=3, clip=None, legend=True, cumulative=False,
    shade_lowest=None,  # Deprecated, controlled with levels now
    cbar=False, cbar_ax=None, cbar_kws=None,
    ax=None,

    # New params
    weights=None,  # TODO note that weights is grouped with semantics
    hue=None, palette=None, hue_order=None, hue_norm=None,
    multiple="layer", common_norm=True, common_grid=False,
    levels=10, thresh=.05,
    bw_method="scott", bw_adjust=1, log_scale=None,
    color=None, fill=None,

    # Renamed params
    data=None, data2=None,

    # New in v0.12
    warn_singular=True,

    **kwargs,
):

    # Handle deprecation of `data2` as name for y variable
    if data2 is not None:

        y = data2

        # If `data2` is present, we need to check for the `data` kwarg being
        # used to pass a vector for `x`. We'll reassign the vectors and warn.
        # We need this check because just passing a vector to `data` is now
        # technically valid.

        x_passed_as_data = (
            x is None
            and data is not None
            and np.ndim(data) == 1
        )

        if x_passed_as_data:
            msg = "Use `x` and `y` rather than `data` and `data2`"
            x = data
        else:
            msg = "The `data2` param is now named `y`; please update your code"

        warnings.warn(msg, FutureWarning)

    # Handle deprecation of `vertical`
    if vertical:
        msg = (
            "The `vertical` parameter is deprecated and will be removed in a "
            "future version. Assign the data to the `y` variable instead."
        )
        warnings.warn(msg, FutureWarning)
        x, y = y, x

    # Handle deprecation of `bw`
    if bw is not None:
        msg = (
            "The `bw` parameter is deprecated in favor of `bw_method` and "
            f"`bw_adjust`. Using {bw} for `bw_method`, but please "
            "see the docs for the new parameters and update your code."
        )
        warnings.warn(msg, FutureWarning)
        bw_method = bw

    # Handle deprecation of `kernel`
    if kernel is not None:
        msg = (
            "Support for alternate kernels has been removed. "
            "Using Gaussian kernel."
        )
        warnings.warn(msg, UserWarning)

    # Handle deprecation of shade_lowest
    if shade_lowest is not None:
        if shade_lowest:
            thresh = 0
        msg = (
            "`shade_lowest` is now deprecated in favor of `thresh`. "
            f"Setting `thresh={thresh}`, but please update your code."
        )
        warnings.warn(msg, UserWarning)

    # Handle `n_levels`
    # This was never in the formal API but it was processed, and appeared in an
    # example. We can treat as an alias for `levels` now and deprecate later.
    levels = kwargs.pop("n_levels", levels)

    # Handle "soft" deprecation of shade. `shade` is not really the right
    # terminology here, but unlike some of the other deprecated parameters it
    # is probably very commonly used and much harder to remove. This is
    # therefore going to be a longer process where, first, `fill` will be
    # introduced and be used throughout the documentation. In 0.12, when
    # kwarg-only enforcement hits, we can take shade/shade_lowest out of the
    # function signature altogether and pull them out of the kwargs. Then we
    # can actually fire a FutureWarning, and eventually remove.
    if shade is not None:
        fill = shade

    # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - #

    p = _DistributionPlotter(
        data=data,
        variables=_DistributionPlotter.get_semantics(locals()),
    )

    p.map_hue(palette=palette, order=hue_order, norm=hue_norm)

    if ax is None:
        ax = plt.gca()

    # Check for a specification that lacks x/y data and return early
    if not p.has_xy_data:
        return ax

    # Pack the kwargs for statistics.KDE
    estimate_kws = dict(
        bw_method=bw_method,
        bw_adjust=bw_adjust,
        gridsize=gridsize,
        cut=cut,
        clip=clip,
        cumulative=cumulative,
    )

    p._attach(ax, allowed_types=["numeric", "datetime"], log_scale=log_scale)

    if p.univariate:

        plot_kws = kwargs.copy()
        if color is not None:
            plot_kws["color"] = color

        p.plot_univariate_density(
            multiple=multiple,
            common_norm=common_norm,
            common_grid=common_grid,
            fill=fill,
            legend=legend,
            warn_singular=warn_singular,
            estimate_kws=estimate_kws,
            **plot_kws,
        )

    else:

        p.plot_bivariate_density(
            common_norm=common_norm,
            fill=fill,
            levels=levels,
            thresh=thresh,
            legend=legend,
            color=color,
            warn_singular=warn_singular,
            cbar=cbar,
            cbar_ax=cbar_ax,
            cbar_kws=cbar_kws,
            estimate_kws=estimate_kws,
            **kwargs,
        )

    return ax


kdeplot.__doc__ = """\
Plot univariate or bivariate distributions using kernel density estimation.

A kernel density estimate (KDE) plot is a method for visualizing the
distribution of observations in a dataset, analogous to a histogram. KDE
represents the data using a continuous probability density curve in one or
more dimensions.

The approach is explained further in the :ref:`user guide `.

Relative to a histogram, KDE can produce a plot that is less cluttered and
more interpretable, especially when drawing multiple distributions. But it
has the potential to introduce distortions if the underlying distribution is
bounded or not smooth. Like a histogram, the quality of the representation
also depends on the selection of good smoothing parameters.

Parameters
----------
{params.core.xy}
shade : bool
    Alias for ``fill``.
Using ``fill`` is recommended. vertical : bool Orientation parameter. .. deprecated:: 0.11.0 specify orientation by assigning the ``x`` or ``y`` variables. kernel : str Function that defines the kernel. .. deprecated:: 0.11.0 support for non-Gaussian kernels has been removed. bw : str, number, or callable Smoothing parameter. .. deprecated:: 0.11.0 see ``bw_method`` and ``bw_adjust``. gridsize : int Number of points on each dimension of the evaluation grid. {params.kde.cut} {params.kde.clip} {params.dist.legend} {params.kde.cumulative} shade_lowest : bool If False, the area below the lowest contour will be transparent .. deprecated:: 0.11.0 see ``thresh``. {params.dist.cbar} {params.dist.cbar_ax} {params.dist.cbar_kws} {params.core.ax} weights : vector or key in ``data`` If provided, weight the kernel density estimation using these values. {params.core.hue} {params.core.palette} {params.core.hue_order} {params.core.hue_norm} {params.dist.multiple} common_norm : bool If True, scale each conditional density by the number of observations such that the total area under all densities sums to 1. Otherwise, normalize each density independently. common_grid : bool If True, use the same evaluation grid for each kernel density estimate. Only relevant with univariate data. levels : int or vector Number of contour levels or values to draw contours at. A vector argument must have increasing values in [0, 1]. Levels correspond to iso-proportions of the density: e.g., 20% of the probability mass will lie below the contour drawn for 0.2. Only relevant with bivariate data. thresh : number in [0, 1] Lowest iso-proportion level at which to draw a contour line. Ignored when ``levels`` is a vector. Only relevant with bivariate data. {params.kde.bw_method} {params.kde.bw_adjust} {params.dist.log_scale} {params.core.color} fill : bool or None If True, fill in the area under univariate density curves or between bivariate contours. If None, the default depends on ``multiple``. 
{params.core.data}
warn_singular : bool
    If True, issue a warning when trying to estimate the density of data
    with zero variance.
kwargs
    Other keyword arguments are passed to one of the following matplotlib
    functions:

    - :meth:`matplotlib.axes.Axes.plot` (univariate, ``fill=False``),
    - :meth:`matplotlib.axes.Axes.fill_between` (univariate, ``fill=True``),
    - :meth:`matplotlib.axes.Axes.contour` (bivariate, ``fill=False``),
    - :meth:`matplotlib.axes.Axes.contourf` (bivariate, ``fill=True``).

Returns
-------
{returns.ax}

See Also
--------
{seealso.displot}
{seealso.histplot}
{seealso.ecdfplot}
{seealso.jointplot}
{seealso.violinplot}

Notes
-----

The *bandwidth*, or standard deviation of the smoothing kernel, is an
important parameter. Misspecification of the bandwidth can produce a
distorted representation of the data. Much like the choice of bin width in a
histogram, an over-smoothed curve can erase true features of a distribution,
while an under-smoothed curve can create false features out of random
variability. The rule-of-thumb that sets the default bandwidth works best
when the true distribution is smooth, unimodal, and roughly bell-shaped. It
is always a good idea to check the default behavior by using ``bw_adjust``
to increase or decrease the amount of smoothing.

Because the smoothing algorithm uses a Gaussian kernel, the estimated density
curve can extend to values that do not make sense for a particular dataset.
For example, the curve may be drawn over negative values when smoothing data
that are naturally positive. The ``cut`` and ``clip`` parameters can be used
to control the extent of the curve, but datasets that have many observations
close to a natural boundary may be better served by a different visualization
method.

Similar considerations apply when a dataset is naturally discrete or "spiky"
(containing many repeated observations of the same value). Kernel density
estimation will always produce a smooth curve, which would be misleading in
these situations.
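The effect of the bandwidth described above can be sketched numerically. The helper below is a hypothetical, self-contained stand-in for a Gaussian KDE (it is *not* seaborn's implementation); it uses Scott's rule of thumb scaled by a ``bw_adjust`` factor, mirroring the parameters documented here:

```python
# Minimal sketch of a 1D Gaussian KDE, assuming Scott's rule for the base
# bandwidth; ``bw_adjust`` scales it, as in seaborn's ``kdeplot``.
import numpy as np


def gaussian_kde_1d(data, grid, bw_adjust=1.0):
    """Evaluate a Gaussian KDE on ``grid`` (hypothetical helper)."""
    data = np.asarray(data, dtype=float)
    n = data.size
    # Scott's rule of thumb for 1D data, scaled by bw_adjust
    bw = bw_adjust * data.std(ddof=1) * n ** (-1 / 5)
    # Sum a Gaussian bump centered on each observation
    diffs = (grid[:, None] - data[None, :]) / bw
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (n * bw * np.sqrt(2 * np.pi))


rng = np.random.default_rng(0)
x = rng.normal(size=200)
grid = np.linspace(-4, 4, 101)
smooth = gaussian_kde_1d(x, grid, bw_adjust=2)    # over-smoothed
rough = gaussian_kde_1d(x, grid, bw_adjust=0.2)   # under-smoothed, spiky
```

Both curves integrate to (approximately) 1, but the under-smoothed one shows spurious peaks driven by sampling noise, which is the distortion the Notes warn about.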
The units on the density axis are a common source of confusion. While kernel density estimation produces a probability distribution, the height of the curve at each point gives a density, not a probability. A probability can be obtained only by integrating the density across a range. The curve is normalized so that the integral over all possible values is 1, meaning that the scale of the density axis depends on the data values. Examples -------- .. include:: ../docstrings/kdeplot.rst """.format( params=_param_docs, returns=_core_docs["returns"], seealso=_core_docs["seealso"], ) def ecdfplot( data=None, *, # Vector variables x=None, y=None, hue=None, weights=None, # Computation parameters stat="proportion", complementary=False, # Hue mapping parameters palette=None, hue_order=None, hue_norm=None, # Axes information log_scale=None, legend=True, ax=None, # Other appearance keywords **kwargs, ): p = _DistributionPlotter( data=data, variables=_DistributionPlotter.get_semantics(locals()) ) p.map_hue(palette=palette, order=hue_order, norm=hue_norm) # We could support other semantics (size, style) here fairly easily # But it would make distplot a bit more complicated. # It's always possible to add features like that later, so I am going to defer. # It will be even easier to wait until after there is a more general/abstract # way to go from semantic specs to artist attributes. if ax is None: ax = plt.gca() # We could add this one day, but it's of dubious value if not p.univariate: raise NotImplementedError("Bivariate ECDF plots are not implemented") # Attach the axes to the plotter, setting up unit conversions p._attach(ax, log_scale=log_scale) estimate_kws = dict( stat=stat, complementary=complementary, ) p.plot_univariate_ecdf( estimate_kws=estimate_kws, legend=legend, **kwargs, ) return ax ecdfplot.__doc__ = """\ Plot empirical cumulative distribution functions. An ECDF represents the proportion or count of observations falling below each unique value in a dataset. 
Compared to a histogram or density plot, it has the advantage that each observation is visualized directly, meaning that there are no binning or smoothing parameters that need to be adjusted. It also aids direct comparisons between multiple distributions. A downside is that the relationship between the appearance of the plot and the basic properties of the distribution (such as its central tendency, variance, and the presence of any bimodality) may not be as intuitive. More information is provided in the :ref:`user guide `. Parameters ---------- {params.core.data} {params.core.xy} {params.core.hue} weights : vector or key in ``data`` If provided, weight the contribution of the corresponding data points towards the cumulative distribution using these values. {params.ecdf.stat} {params.ecdf.complementary} {params.core.palette} {params.core.hue_order} {params.core.hue_norm} {params.dist.log_scale} {params.dist.legend} {params.core.ax} kwargs Other keyword arguments are passed to :meth:`matplotlib.axes.Axes.plot`. Returns ------- {returns.ax} See Also -------- {seealso.displot} {seealso.histplot} {seealso.kdeplot} {seealso.rugplot} Examples -------- .. include:: ../docstrings/ecdfplot.rst """.format( params=_param_docs, returns=_core_docs["returns"], seealso=_core_docs["seealso"], ) @_deprecate_positional_args def rugplot( x=None, # Allow positional x, because behavior won't change *, height=.025, axis=None, ax=None, # New parameters data=None, y=None, hue=None, palette=None, hue_order=None, hue_norm=None, expand_margins=True, legend=True, # TODO or maybe default to False? # Renamed parameter a=None, **kwargs ): # A note: I think it would make sense to add multiple= to rugplot and allow # rugs for different hue variables to be shifted orthogonal to the data axis # But is this stacking, or dodging? 
    # A note: if we want to add a style semantic to rugplot,
    # we could make an option that draws the rug using scatterplot

    # A note, it would also be nice to offer some kind of histogram/density
    # rugplot, since alpha blending doesn't work great in the large n regime

    # Handle deprecation of `a`
    if a is not None:
        msg = "The `a` parameter is now called `x`. Please update your code."
        warnings.warn(msg, FutureWarning)
        x = a
        del a

    # Handle deprecation of "axis"
    if axis is not None:
        msg = (
            "The `axis` variable is no longer used and will be removed. "
            "Instead, assign variables directly to `x` or `y`."
        )
        warnings.warn(msg, FutureWarning)

    # Handle deprecation of "vertical"
    if kwargs.pop("vertical", axis == "y"):
        x, y = None, x
        msg = (
            "Using `vertical=True` to control the orientation of the plot "
            "is deprecated. Instead, assign the data directly to `y`. "
        )
        warnings.warn(msg, FutureWarning)

    # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - #

    weights = None
    p = _DistributionPlotter(
        data=data,
        variables=_DistributionPlotter.get_semantics(locals()),
    )
    p.map_hue(palette=palette, order=hue_order, norm=hue_norm)

    if ax is None:
        ax = plt.gca()

    p._attach(ax)

    p.plot_rug(height, expand_margins, legend, **kwargs)

    return ax


rugplot.__doc__ = """\
Plot marginal distributions by drawing ticks along the x and y axes.

This function is intended to complement other plots by showing the location
of individual observations in an unobtrusive way.

Parameters
----------
{params.core.xy}
height : number
    Proportion of axes extent covered by each rug element.
axis : {{"x", "y"}}
    Axis to draw the rug on.

    .. deprecated:: 0.11.0
       specify axis by assigning the ``x`` or ``y`` variables.
{params.core.ax}
{params.core.data}
{params.core.hue}
{params.core.palette}
{params.core.hue_order}
{params.core.hue_norm}
expand_margins : bool
    If True, increase the axes margins by the height of the rug to avoid
    overlap with other elements.
legend : bool
    If False, do not add a legend for semantic variables.
kwargs
    Other keyword arguments are passed to
    :class:`matplotlib.collections.LineCollection`

Returns
-------
{returns.ax}

Examples
--------

.. include:: ../docstrings/rugplot.rst

""".format(
    params=_param_docs,
    returns=_core_docs["returns"],
    seealso=_core_docs["seealso"],
)


def displot(
    data=None, *,
    # Vector variables
    x=None, y=None, hue=None, row=None, col=None,
    weights=None,
    # Other plot parameters
    kind="hist", rug=False, rug_kws=None,
    log_scale=None, legend=True,
    # Hue-mapping parameters
    palette=None, hue_order=None, hue_norm=None, color=None,
    # Faceting parameters
    col_wrap=None, row_order=None, col_order=None,
    height=5, aspect=1, facet_kws=None,
    **kwargs,
):

    p = _DistributionFacetPlotter(
        data=data,
        variables=_DistributionFacetPlotter.get_semantics(locals())
    )

    p.map_hue(palette=palette, order=hue_order, norm=hue_norm)

    _check_argument("kind", ["hist", "kde", "ecdf"], kind)

    # --- Initialize the FacetGrid object

    # Check for attempt to plot onto specific axes and warn
    if "ax" in kwargs:
        msg = (
            "`displot` is a figure-level function and does not accept "
            "the ax= parameter. 
You may wish to try {}plot.".format(kind) ) warnings.warn(msg, UserWarning) kwargs.pop("ax") for var in ["row", "col"]: # Handle faceting variables that lack name information if var in p.variables and p.variables[var] is None: p.variables[var] = f"_{var}_" # Adapt the plot_data dataframe for use with FacetGrid grid_data = p.plot_data.rename(columns=p.variables) grid_data = grid_data.loc[:, ~grid_data.columns.duplicated()] col_name = p.variables.get("col", None) row_name = p.variables.get("row", None) if facet_kws is None: facet_kws = {} g = FacetGrid( data=grid_data, row=row_name, col=col_name, col_wrap=col_wrap, row_order=row_order, col_order=col_order, height=height, aspect=aspect, **facet_kws, ) # Now attach the axes object to the plotter object if kind == "kde": allowed_types = ["numeric", "datetime"] else: allowed_types = None p._attach(g, allowed_types=allowed_types, log_scale=log_scale) # Check for a specification that lacks x/y data and return early if not p.has_xy_data: return g kwargs["legend"] = legend # --- Draw the plots if kind == "hist": hist_kws = kwargs.copy() # Extract the parameters that will go directly to Histogram estimate_defaults = {} _assign_default_kwargs(estimate_defaults, Histogram.__init__, histplot) estimate_kws = {} for key, default_val in estimate_defaults.items(): estimate_kws[key] = hist_kws.pop(key, default_val) # Handle derivative defaults if estimate_kws["discrete"] is None: estimate_kws["discrete"] = p._default_discrete() hist_kws["estimate_kws"] = estimate_kws hist_kws.setdefault("color", color) if p.univariate: _assign_default_kwargs(hist_kws, p.plot_univariate_histogram, histplot) p.plot_univariate_histogram(**hist_kws) else: _assign_default_kwargs(hist_kws, p.plot_bivariate_histogram, histplot) p.plot_bivariate_histogram(**hist_kws) elif kind == "kde": kde_kws = kwargs.copy() # Extract the parameters that will go directly to KDE estimate_defaults = {} _assign_default_kwargs(estimate_defaults, KDE.__init__, kdeplot) 
estimate_kws = {} for key, default_val in estimate_defaults.items(): estimate_kws[key] = kde_kws.pop(key, default_val) kde_kws["estimate_kws"] = estimate_kws kde_kws["color"] = color if p.univariate: _assign_default_kwargs(kde_kws, p.plot_univariate_density, kdeplot) p.plot_univariate_density(**kde_kws) else: _assign_default_kwargs(kde_kws, p.plot_bivariate_density, kdeplot) p.plot_bivariate_density(**kde_kws) elif kind == "ecdf": ecdf_kws = kwargs.copy() # Extract the parameters that will go directly to the estimator estimate_kws = {} estimate_defaults = {} _assign_default_kwargs(estimate_defaults, ECDF.__init__, ecdfplot) for key, default_val in estimate_defaults.items(): estimate_kws[key] = ecdf_kws.pop(key, default_val) ecdf_kws["estimate_kws"] = estimate_kws ecdf_kws["color"] = color if p.univariate: _assign_default_kwargs(ecdf_kws, p.plot_univariate_ecdf, ecdfplot) p.plot_univariate_ecdf(**ecdf_kws) else: raise NotImplementedError("Bivariate ECDF plots are not implemented") # All plot kinds can include a rug if rug: # TODO with expand_margins=True, each facet expands margins... annoying! 
if rug_kws is None: rug_kws = {} _assign_default_kwargs(rug_kws, p.plot_rug, rugplot) rug_kws["legend"] = False if color is not None: rug_kws["color"] = color p.plot_rug(**rug_kws) # Call FacetGrid annotation methods # Note that the legend is currently set inside the plotting method g.set_axis_labels( x_var=p.variables.get("x", g.axes.flat[0].get_xlabel()), y_var=p.variables.get("y", g.axes.flat[0].get_ylabel()), ) g.set_titles() g.tight_layout() if data is not None and (x is not None or y is not None): if not isinstance(data, pd.DataFrame): data = pd.DataFrame(data) g.data = pd.merge( data, g.data[g.data.columns.difference(data.columns)], left_index=True, right_index=True, ) else: wide_cols = { k: f"_{k}_" if v is None else v for k, v in p.variables.items() } g.data = p.plot_data.rename(columns=wide_cols) return g displot.__doc__ = """\ Figure-level interface for drawing distribution plots onto a FacetGrid. This function provides access to several approaches for visualizing the univariate or bivariate distribution of data, including subsets of data defined by semantic mapping and faceting across multiple subplots. The ``kind`` parameter selects the approach to use: - :func:`histplot` (with ``kind="hist"``; the default) - :func:`kdeplot` (with ``kind="kde"``) - :func:`ecdfplot` (with ``kind="ecdf"``; univariate-only) Additionally, a :func:`rugplot` can be added to any kind of plot to show individual observations. Extra keyword arguments are passed to the underlying function, so you should refer to the documentation for each to understand the complete set of options for making plots with this interface. See the :doc:`distribution plots tutorial <../tutorial/distributions>` for a more in-depth discussion of the relative strengths and weaknesses of each approach. The distinction between figure-level and axes-level functions is explained further in the :doc:`user guide <../tutorial/function_overview>`. 
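The ``kind`` routing described above can be illustrated with a minimal sketch. The names ``check_kind`` and ``dispatch_kind`` below are hypothetical stand-ins, not seaborn internals; they only mirror the validate-then-branch pattern that ``displot`` uses with ``_check_argument``:

```python
# Hypothetical sketch of how a ``kind`` argument routes to the name of the
# corresponding axes-level plotting function.
def check_kind(name, options, value):
    # Simplified validator: reject unsupported options with a clear error
    if value not in options:
        raise ValueError(
            f"`{name}` must be one of {options!r}, but {value!r} was passed."
        )


def dispatch_kind(kind):
    # Each supported kind maps to one axes-level function
    plotters = {"hist": "histplot", "kde": "kdeplot", "ecdf": "ecdfplot"}
    check_kind("kind", list(plotters), kind)
    return plotters[kind]
```

Validating before branching means a typo like ``kind="scater"`` fails with an informative message instead of silently falling through the branches.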
Parameters ---------- {params.core.data} {params.core.xy} {params.core.hue} {params.facets.rowcol} kind : {{"hist", "kde", "ecdf"}} Approach for visualizing the data. Selects the underlying plotting function and determines the additional set of valid parameters. rug : bool If True, show each observation with marginal ticks (as in :func:`rugplot`). rug_kws : dict Parameters to control the appearance of the rug plot. {params.dist.log_scale} {params.dist.legend} {params.core.palette} {params.core.hue_order} {params.core.hue_norm} {params.core.color} {params.facets.col_wrap} {params.facets.rowcol_order} {params.facets.height} {params.facets.aspect} {params.facets.facet_kws} kwargs Other keyword arguments are documented with the relevant axes-level function: - :func:`histplot` (with ``kind="hist"``) - :func:`kdeplot` (with ``kind="kde"``) - :func:`ecdfplot` (with ``kind="ecdf"``) Returns ------- {returns.facetgrid} See Also -------- {seealso.histplot} {seealso.kdeplot} {seealso.rugplot} {seealso.ecdfplot} {seealso.jointplot} Examples -------- See the API documentation for the axes-level functions for more details about the breadth of options available for each plot kind. .. 
include:: ../docstrings/displot.rst

""".format(
    params=_param_docs,
    returns=_core_docs["returns"],
    seealso=_core_docs["seealso"],
)


# =========================================================================== #
# DEPRECATED FUNCTIONS LIVE BELOW HERE
# =========================================================================== #


def _freedman_diaconis_bins(a):
    """Calculate number of hist bins using Freedman-Diaconis rule."""
    # From https://stats.stackexchange.com/questions/798/
    a = np.asarray(a)
    if len(a) < 2:
        return 1
    h = 2 * stats.iqr(a) / (len(a) ** (1 / 3))
    # fall back to sqrt(a) bins if iqr is 0
    if h == 0:
        return int(np.sqrt(a.size))
    else:
        return int(np.ceil((a.max() - a.min()) / h))


def distplot(a=None, bins=None, hist=True, kde=True, rug=False, fit=None,
             hist_kws=None, kde_kws=None, rug_kws=None, fit_kws=None,
             color=None, vertical=False, norm_hist=False, axlabel=None,
             label=None, ax=None, x=None):
    """DEPRECATED: Flexibly plot a univariate distribution of observations.

    .. warning::
       This function is deprecated and will be removed in a future version.
       Please adapt your code to use one of two new functions:

       - :func:`displot`, a figure-level function with a similar flexibility
         over the kind of plot to draw
       - :func:`histplot`, an axes-level function for plotting histograms,
         including with kernel density smoothing

    This function combines the matplotlib ``hist`` function (with automatic
    calculation of a good default bin size) with the seaborn :func:`kdeplot`
    and :func:`rugplot` functions. It can also fit ``scipy.stats``
    distributions and plot the estimated PDF over the data.

    Parameters
    ----------
    a : Series, 1d-array, or list.
        Observed data. If this is a Series object with a ``name`` attribute,
        the name will be used to label the data axis.
    bins : argument for matplotlib hist(), or None, optional
        Specification of hist bins. If unspecified, a reference rule is used
        that tries to find a useful default.
    hist : bool, optional
        Whether to plot a (normed) histogram.
kde : bool, optional Whether to plot a gaussian kernel density estimate. rug : bool, optional Whether to draw a rugplot on the support axis. fit : random variable object, optional An object with `fit` method, returning a tuple that can be passed to a `pdf` method a positional arguments following a grid of values to evaluate the pdf on. hist_kws : dict, optional Keyword arguments for :meth:`matplotlib.axes.Axes.hist`. kde_kws : dict, optional Keyword arguments for :func:`kdeplot`. rug_kws : dict, optional Keyword arguments for :func:`rugplot`. color : matplotlib color, optional Color to plot everything but the fitted curve in. vertical : bool, optional If True, observed values are on y-axis. norm_hist : bool, optional If True, the histogram height shows a density rather than a count. This is implied if a KDE or fitted density is plotted. axlabel : string, False, or None, optional Name for the support axis label. If None, will try to get it from a.name if False, do not set a label. label : string, optional Legend label for the relevant component of the plot. ax : matplotlib axis, optional If provided, plot on this axis. Returns ------- ax : matplotlib Axes Returns the Axes object with the plot for further tweaking. See Also -------- kdeplot : Show a univariate or bivariate distribution with a kernel density estimate. rugplot : Draw small vertical lines to show each observation in a distribution. Examples -------- Show a default plot with a kernel density estimate and histogram with bin size determined automatically with a reference rule: .. plot:: :context: close-figs >>> import seaborn as sns, numpy as np >>> sns.set_theme(); np.random.seed(0) >>> x = np.random.randn(100) >>> ax = sns.distplot(x) Use Pandas objects to get an informative axis label: .. plot:: :context: close-figs >>> import pandas as pd >>> x = pd.Series(x, name="x variable") >>> ax = sns.distplot(x) Plot the distribution with a kernel density estimate and rug plot: .. 
plot:: :context: close-figs >>> ax = sns.distplot(x, rug=True, hist=False) Plot the distribution with a histogram and maximum likelihood gaussian distribution fit: .. plot:: :context: close-figs >>> from scipy.stats import norm >>> ax = sns.distplot(x, fit=norm, kde=False) Plot the distribution on the vertical axis: .. plot:: :context: close-figs >>> ax = sns.distplot(x, vertical=True) Change the color of all the plot elements: .. plot:: :context: close-figs >>> sns.set_color_codes() >>> ax = sns.distplot(x, color="y") Pass specific parameters to the underlying plot functions: .. plot:: :context: close-figs >>> ax = sns.distplot(x, rug=True, rug_kws={"color": "g"}, ... kde_kws={"color": "k", "lw": 3, "label": "KDE"}, ... hist_kws={"histtype": "step", "linewidth": 3, ... "alpha": 1, "color": "g"}) """ if kde and not hist: axes_level_suggestion = ( "`kdeplot` (an axes-level function for kernel density plots)." ) else: axes_level_suggestion = ( "`histplot` (an axes-level function for histograms)." ) msg = ( "`distplot` is a deprecated function and will be removed in a future version. 
" "Please adapt your code to use either `displot` (a figure-level function with " "similar flexibility) or " + axes_level_suggestion ) warnings.warn(msg, FutureWarning) if ax is None: ax = plt.gca() # Intelligently label the support axis label_ax = bool(axlabel) if axlabel is None and hasattr(a, "name"): axlabel = a.name if axlabel is not None: label_ax = True # Support new-style API if x is not None: a = x # Make a a 1-d float array a = np.asarray(a, float) if a.ndim > 1: a = a.squeeze() # Drop null values from array a = remove_na(a) # Decide if the hist is normed norm_hist = norm_hist or kde or (fit is not None) # Handle dictionary defaults hist_kws = {} if hist_kws is None else hist_kws.copy() kde_kws = {} if kde_kws is None else kde_kws.copy() rug_kws = {} if rug_kws is None else rug_kws.copy() fit_kws = {} if fit_kws is None else fit_kws.copy() # Get the color from the current color cycle if color is None: if vertical: line, = ax.plot(0, a.mean()) else: line, = ax.plot(a.mean(), 0) color = line.get_color() line.remove() # Plug the label into the right kwarg dictionary if label is not None: if hist: hist_kws["label"] = label elif kde: kde_kws["label"] = label elif rug: rug_kws["label"] = label elif fit: fit_kws["label"] = label if hist: if bins is None: bins = min(_freedman_diaconis_bins(a), 50) hist_kws.setdefault("alpha", 0.4) hist_kws.setdefault("density", norm_hist) orientation = "horizontal" if vertical else "vertical" hist_color = hist_kws.pop("color", color) ax.hist(a, bins, orientation=orientation, color=hist_color, **hist_kws) if hist_color != color: hist_kws["color"] = hist_color if kde: kde_color = kde_kws.pop("color", color) kdeplot(a, vertical=vertical, ax=ax, color=kde_color, **kde_kws) if kde_color != color: kde_kws["color"] = kde_color if rug: rug_color = rug_kws.pop("color", color) axis = "y" if vertical else "x" rugplot(a, axis=axis, ax=ax, color=rug_color, **rug_kws) if rug_color != color: rug_kws["color"] = rug_color if fit is not None: def 
        pdf(x):
            return fit.pdf(x, *params)

        fit_color = fit_kws.pop("color", "#282828")
        gridsize = fit_kws.pop("gridsize", 200)
        cut = fit_kws.pop("cut", 3)
        clip = fit_kws.pop("clip", (-np.inf, np.inf))
        bw = stats.gaussian_kde(a).scotts_factor() * a.std(ddof=1)
        x = _kde_support(a, bw, gridsize, cut, clip)
        params = fit.fit(a)
        y = pdf(x)
        if vertical:
            x, y = y, x
        ax.plot(x, y, color=fit_color, **fit_kws)
        if fit_color != "#282828":
            fit_kws["color"] = fit_color

    if label_ax:
        if vertical:
            ax.set_ylabel(axlabel)
        else:
            ax.set_xlabel(axlabel)

    return ax


# seaborn-0.11.2/seaborn/external/__init__.py (empty)

# seaborn-0.11.2/seaborn/external/docscrape.py
"""Extract reference documentation from the NumPy source tree.

Copyright (C) 2008 Stefan van der Walt, Pauli Virtanen

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

 1. Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.
 2. Redistributions in binary form must reproduce the above copyright
    notice, this list of conditions and the following disclaimer in
    the documentation and/or other materials provided with the
    distribution.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import inspect import textwrap import re import pydoc from warnings import warn from collections import namedtuple from collections.abc import Callable, Mapping import copy import sys def strip_blank_lines(l): "Remove leading and trailing blank lines from a list of lines" while l and not l[0].strip(): del l[0] while l and not l[-1].strip(): del l[-1] return l class Reader(object): """A line-based string reader. """ def __init__(self, data): """ Parameters ---------- data : str String with lines separated by '\n'. 
""" if isinstance(data, list): self._str = data else: self._str = data.split('\n') # store string as list of lines self.reset() def __getitem__(self, n): return self._str[n] def reset(self): self._l = 0 # current line nr def read(self): if not self.eof(): out = self[self._l] self._l += 1 return out else: return '' def seek_next_non_empty_line(self): for l in self[self._l:]: if l.strip(): break else: self._l += 1 def eof(self): return self._l >= len(self._str) def read_to_condition(self, condition_func): start = self._l for line in self[start:]: if condition_func(line): return self[start:self._l] self._l += 1 if self.eof(): return self[start:self._l+1] return [] def read_to_next_empty_line(self): self.seek_next_non_empty_line() def is_empty(line): return not line.strip() return self.read_to_condition(is_empty) def read_to_next_unindented_line(self): def is_unindented(line): return (line.strip() and (len(line.lstrip()) == len(line))) return self.read_to_condition(is_unindented) def peek(self, n=0): if self._l + n < len(self._str): return self[self._l + n] else: return '' def is_empty(self): return not ''.join(self._str).strip() class ParseError(Exception): def __str__(self): message = self.args[0] if hasattr(self, 'docstring'): message = "%s in %r" % (message, self.docstring) return message Parameter = namedtuple('Parameter', ['name', 'type', 'desc']) class NumpyDocString(Mapping): """Parses a numpydoc string to an abstract representation Instances define a mapping from section title to structured data. 
""" sections = { 'Signature': '', 'Summary': [''], 'Extended Summary': [], 'Parameters': [], 'Returns': [], 'Yields': [], 'Receives': [], 'Raises': [], 'Warns': [], 'Other Parameters': [], 'Attributes': [], 'Methods': [], 'See Also': [], 'Notes': [], 'Warnings': [], 'References': '', 'Examples': '', 'index': {} } def __init__(self, docstring, config={}): orig_docstring = docstring docstring = textwrap.dedent(docstring).split('\n') self._doc = Reader(docstring) self._parsed_data = copy.deepcopy(self.sections) try: self._parse() except ParseError as e: e.docstring = orig_docstring raise def __getitem__(self, key): return self._parsed_data[key] def __setitem__(self, key, val): if key not in self._parsed_data: self._error_location("Unknown section %s" % key, error=False) else: self._parsed_data[key] = val def __iter__(self): return iter(self._parsed_data) def __len__(self): return len(self._parsed_data) def _is_at_section(self): self._doc.seek_next_non_empty_line() if self._doc.eof(): return False l1 = self._doc.peek().strip() # e.g. Parameters if l1.startswith('.. 
index::'): return True l2 = self._doc.peek(1).strip() # ---------- or ========== return l2.startswith('-'*len(l1)) or l2.startswith('='*len(l1)) def _strip(self, doc): i = 0 j = 0 for i, line in enumerate(doc): if line.strip(): break for j, line in enumerate(doc[::-1]): if line.strip(): break return doc[i:len(doc)-j] def _read_to_next_section(self): section = self._doc.read_to_next_empty_line() while not self._is_at_section() and not self._doc.eof(): if not self._doc.peek(-1).strip(): # previous line was empty section += [''] section += self._doc.read_to_next_empty_line() return section def _read_sections(self): while not self._doc.eof(): data = self._read_to_next_section() name = data[0].strip() if name.startswith('..'): # index section yield name, data[1:] elif len(data) < 2: yield StopIteration else: yield name, self._strip(data[2:]) def _parse_param_list(self, content, single_element_is_type=False): r = Reader(content) params = [] while not r.eof(): header = r.read().strip() if ' : ' in header: arg_name, arg_type = header.split(' : ')[:2] else: if single_element_is_type: arg_name, arg_type = '', header else: arg_name, arg_type = header, '' desc = r.read_to_next_unindented_line() desc = dedent_lines(desc) desc = strip_blank_lines(desc) params.append(Parameter(arg_name, arg_type, desc)) return params # See also supports the following formats. # # # SPACE* COLON SPACE+ SPACE* # ( COMMA SPACE+ )+ (COMMA | PERIOD)? SPACE* # ( COMMA SPACE+ )* SPACE* COLON SPACE+ SPACE* # is one of # # COLON COLON BACKTICK BACKTICK # where # is a legal function name, and # is any nonempty sequence of word characters. # Examples: func_f1 :meth:`func_h1` :obj:`~baz.obj_r` :class:`class_j` # is a string describing the function. 
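The "See Also" grammar sketched in the comments above can be exercised with a simplified, self-contained parser. The regex and the `parse_see_also_line` helper below are illustrative only (they are not part of the docscrape module), and they handle just the common "names : description" shape:

```python
import re

# Simplified version of the grammar described above: a ":role:`name`" form
# or a plain name, possibly repeated with commas, optionally followed by
# " : description".
_item = re.compile(r":(?P<role>\w+):`(?P<name>[\w.~-]+)`|(?P<name2>[\w.-]+)")


def parse_see_also_line(line):
    """Split one "func_a, :meth:`func_b` : description" line."""
    names_part, _, desc = line.partition(" : ")
    funcs = []
    for m in _item.finditer(names_part):
        role = m.group("role")
        name = m.group("name") if role else m.group("name2")
        funcs.append((name, role))
    return funcs, desc.strip()


funcs, desc = parse_see_also_line("func_f1, :meth:`func_h1` : related functions")
```

The real `_line_rgx` below is stricter: it anchors at line start, tracks trailing commas/periods, and distinguishes description-only continuation lines.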
    _role = r":(?P<role>\w+):"
    _funcbacktick = r"`(?P<name>(?:~\w+\.)?[a-zA-Z0-9_\.-]+)`"
    _funcplain = r"(?P<name2>[a-zA-Z0-9_\.-]+)"
    _funcname = r"(" + _role + _funcbacktick + r"|" + _funcplain + r")"
    _funcnamenext = _funcname.replace('role', 'rolenext')
    _funcnamenext = _funcnamenext.replace('name', 'namenext')
    _description = r"(?P<description>\s*:(\s+(?P<desc>\S+.*))?)?\s*$"
    _func_rgx = re.compile(r"^\s*" + _funcname + r"\s*")
    _line_rgx = re.compile(
        r"^\s*" +
        r"(?P<allfuncs>" +        # group for all function names
        _funcname +
        r"(?P<morefuncs>([,]\s+" + _funcnamenext + r")*)" +
        r")" +                    # end of "allfuncs"
        # Some function lists have a trailing comma (or period)  '\s*'
        r"(?P<trailing>[,\.])?" +
        _description)

    # Empty elements are replaced with '..'
    empty_description = '..'

    def _parse_see_also(self, content):
        """
        func_name : Descriptive text
            continued text
        another_func_name : Descriptive text
        func_name1, func_name2, :meth:`func_name`, func_name3

        """
        items = []

        def parse_item_name(text):
            """Match ':role:`name`' or 'name'."""
            m = self._func_rgx.match(text)
            if not m:
                raise ParseError("%s is not a item name" % text)

            role = m.group('role')
            name = m.group('name') if role else m.group('name2')

            return name, role, m.end()

        rest = []
        for line in content:
            if not line.strip():
                continue

            line_match = self._line_rgx.match(line)
            description = None
            if line_match:
                description = line_match.group('desc')
                if line_match.group('trailing') and description:
                    self._error_location(
                        'Unexpected comma or period after function list at index %d of '
                        'line "%s"' % (line_match.end('trailing'), line),
                        error=False)
            if not description and line.startswith(' '):
                rest.append(line.strip())
            elif line_match:
                funcs = []
                text = line_match.group('allfuncs')
                while True:
                    if not text.strip():
                        break
                    name, role, match_end = parse_item_name(text)
                    funcs.append((name, role))
                    text = text[match_end:].strip()
                    if text and text[0] == ',':
                        text = text[1:].strip()
                rest = list(filter(None, [description]))
                items.append((funcs, rest))
            else:
                raise ParseError("%s is not a item name" % line)
        return items

    def
_parse_index(self, section, content): """ .. index: default :refguide: something, else, and more """ def strip_each_in(lst): return [s.strip() for s in lst] out = {} section = section.split('::') if len(section) > 1: out['default'] = strip_each_in(section[1].split(','))[0] for line in content: line = line.split(':') if len(line) > 2: out[line[1]] = strip_each_in(line[2].split(',')) return out def _parse_summary(self): """Grab signature (if given) and summary""" if self._is_at_section(): return # If several signatures present, take the last one while True: summary = self._doc.read_to_next_empty_line() summary_str = " ".join([s.strip() for s in summary]).strip() compiled = re.compile(r'^([\w., ]+=)?\s*[\w\.]+\(.*\)$') if compiled.match(summary_str): self['Signature'] = summary_str if not self._is_at_section(): continue break if summary is not None: self['Summary'] = summary if not self._is_at_section(): self['Extended Summary'] = self._read_to_next_section() def _parse(self): self._doc.reset() self._parse_summary() sections = list(self._read_sections()) section_names = set([section for section, content in sections]) has_returns = 'Returns' in section_names has_yields = 'Yields' in section_names # We could do more tests, but we are not. Arbitrarily. if has_returns and has_yields: msg = 'Docstring contains both a Returns and Yields section.' raise ValueError(msg) if not has_yields and 'Receives' in section_names: msg = 'Docstring contains a Receives section but not Yields.' 
raise ValueError(msg) for (section, content) in sections: if not section.startswith('..'): section = (s.capitalize() for s in section.split(' ')) section = ' '.join(section) if self.get(section): self._error_location("The section %s appears twice" % section) if section in ('Parameters', 'Other Parameters', 'Attributes', 'Methods'): self[section] = self._parse_param_list(content) elif section in ('Returns', 'Yields', 'Raises', 'Warns', 'Receives'): self[section] = self._parse_param_list( content, single_element_is_type=True) elif section.startswith('.. index::'): self['index'] = self._parse_index(section, content) elif section == 'See Also': self['See Also'] = self._parse_see_also(content) else: self[section] = content def _error_location(self, msg, error=True): if hasattr(self, '_obj'): # we know where the docs came from: try: filename = inspect.getsourcefile(self._obj) except TypeError: filename = None msg = msg + (" in the docstring of %s in %s." % (self._obj, filename)) if error: raise ValueError(msg) else: warn(msg) # string conversion routines def _str_header(self, name, symbol='-'): return [name, len(name)*symbol] def _str_indent(self, doc, indent=4): out = [] for line in doc: out += [' '*indent + line] return out def _str_signature(self): if self['Signature']: return [self['Signature'].replace('*', r'\*')] + [''] else: return [''] def _str_summary(self): if self['Summary']: return self['Summary'] + [''] else: return [] def _str_extended_summary(self): if self['Extended Summary']: return self['Extended Summary'] + [''] else: return [] def _str_param_list(self, name): out = [] if self[name]: out += self._str_header(name) for param in self[name]: parts = [] if param.name: parts.append(param.name) if param.type: parts.append(param.type) out += [' : '.join(parts)] if param.desc and ''.join(param.desc).strip(): out += self._str_indent(param.desc) out += [''] return out def _str_section(self, name): out = [] if self[name]: out += self._str_header(name) out += 
self[name] out += [''] return out def _str_see_also(self, func_role): if not self['See Also']: return [] out = [] out += self._str_header("See Also") out += [''] last_had_desc = True for funcs, desc in self['See Also']: assert isinstance(funcs, list) links = [] for func, role in funcs: if role: link = ':%s:`%s`' % (role, func) elif func_role: link = ':%s:`%s`' % (func_role, func) else: link = "`%s`_" % func links.append(link) link = ', '.join(links) out += [link] if desc: out += self._str_indent([' '.join(desc)]) last_had_desc = True else: last_had_desc = False out += self._str_indent([self.empty_description]) if last_had_desc: out += [''] out += [''] return out def _str_index(self): idx = self['index'] out = [] output_index = False default_index = idx.get('default', '') if default_index: output_index = True out += ['.. index:: %s' % default_index] for section, references in idx.items(): if section == 'default': continue output_index = True out += [' :%s: %s' % (section, ', '.join(references))] if output_index: return out else: return '' def __str__(self, func_role=''): out = [] out += self._str_signature() out += self._str_summary() out += self._str_extended_summary() for param_list in ('Parameters', 'Returns', 'Yields', 'Receives', 'Other Parameters', 'Raises', 'Warns'): out += self._str_param_list(param_list) out += self._str_section('Warnings') out += self._str_see_also(func_role) for s in ('Notes', 'References', 'Examples'): out += self._str_section(s) for param_list in ('Attributes', 'Methods'): out += self._str_param_list(param_list) out += self._str_index() return '\n'.join(out) def indent(str, indent=4): indent_str = ' '*indent if str is None: return indent_str lines = str.split('\n') return '\n'.join(indent_str + l for l in lines) def dedent_lines(lines): """Deindent a list of lines maximally""" return textwrap.dedent("\n".join(lines)).split("\n") def header(text, style='-'): return text + '\n' + style*len(text) + '\n' class FunctionDoc(NumpyDocString): 
def __init__(self, func, role='func', doc=None, config={}): self._f = func self._role = role # e.g. "func" or "meth" if doc is None: if func is None: raise ValueError("No function or docstring given") doc = inspect.getdoc(func) or '' NumpyDocString.__init__(self, doc, config) if not self['Signature'] and func is not None: func, func_name = self.get_func() try: try: signature = str(inspect.signature(func)) except (AttributeError, ValueError): # try to read signature, backward compat for older Python if sys.version_info[0] >= 3: argspec = inspect.getfullargspec(func) else: argspec = inspect.getargspec(func) signature = inspect.formatargspec(*argspec) signature = '%s%s' % (func_name, signature) except TypeError: signature = '%s()' % func_name self['Signature'] = signature def get_func(self): func_name = getattr(self._f, '__name__', self.__class__.__name__) if inspect.isclass(self._f): func = getattr(self._f, '__call__', self._f.__init__) else: func = self._f return func, func_name def __str__(self): out = '' func, func_name = self.get_func() roles = {'func': 'function', 'meth': 'method'} if self._role: if self._role not in roles: print("Warning: invalid role %s" % self._role) out += '.. %s:: %s\n \n\n' % (roles.get(self._role, ''), func_name) out += super(FunctionDoc, self).__str__(func_role=self._role) return out class ClassDoc(NumpyDocString): extra_public_methods = ['__call__'] def __init__(self, cls, doc=None, modulename='', func_doc=FunctionDoc, config={}): if not inspect.isclass(cls) and cls is not None: raise ValueError("Expected a class or None, but got %r" % cls) self._cls = cls if 'sphinx' in sys.modules: from sphinx.ext.autodoc import ALL else: ALL = object() self.show_inherited_members = config.get( 'show_inherited_class_members', True) if modulename and not modulename.endswith('.'): modulename += '.' 
        self._mod = modulename

        if doc is None:
            if cls is None:
                raise ValueError("No class or documentation string given")
            doc = pydoc.getdoc(cls)

        NumpyDocString.__init__(self, doc)

        _members = config.get('members', [])
        if _members is ALL:
            _members = None
        _exclude = config.get('exclude-members', [])

        if config.get('show_class_members', True) and _exclude is not ALL:
            def splitlines_x(s):
                if not s:
                    return []
                else:
                    return s.splitlines()

            for field, items in [('Methods', self.methods),
                                 ('Attributes', self.properties)]:
                if not self[field]:
                    doc_list = []
                    for name in sorted(items):
                        if (name in _exclude or
                                (_members and name not in _members)):
                            continue
                        try:
                            doc_item = pydoc.getdoc(getattr(self._cls, name))
                            doc_list.append(
                                Parameter(name, '', splitlines_x(doc_item)))
                        except AttributeError:
                            pass  # method doesn't exist
                    self[field] = doc_list

    @property
    def methods(self):
        if self._cls is None:
            return []
        return [name for name, func in inspect.getmembers(self._cls)
                if ((not name.startswith('_') or
                     name in self.extra_public_methods) and
                    isinstance(func, Callable) and
                    self._is_show_member(name))]

    @property
    def properties(self):
        if self._cls is None:
            return []
        return [name for name, func in inspect.getmembers(self._cls)
                if (not name.startswith('_') and
                    (func is None or isinstance(func, property) or
                     inspect.isdatadescriptor(func)) and
                    self._is_show_member(name))]

    def _is_show_member(self, name):
        if self.show_inherited_members:
            return True  # show all class members
        if name not in self._cls.__dict__:
            return False  # class member is inherited, we do not show it
        return True


# seaborn-0.11.2/seaborn/external/husl.py
import operator
import math

__version__ = "2.1.0"

m = [
    [3.2406, -1.5372, -0.4986],
    [-0.9689, 1.8758, 0.0415],
    [0.0557, -0.2040, 1.0570]
]

m_inv = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505]
]

# Hard-coded D65 illuminant
refX = 0.95047
refY = 1.00000
refZ = 1.08883
refU = 0.19784
refV = 0.46834 lab_e = 0.008856 lab_k = 903.3 # Public API def husl_to_rgb(h, s, l): return lch_to_rgb(*husl_to_lch([h, s, l])) def husl_to_hex(h, s, l): return rgb_to_hex(husl_to_rgb(h, s, l)) def rgb_to_husl(r, g, b): return lch_to_husl(rgb_to_lch(r, g, b)) def hex_to_husl(hex): return rgb_to_husl(*hex_to_rgb(hex)) def huslp_to_rgb(h, s, l): return lch_to_rgb(*huslp_to_lch([h, s, l])) def huslp_to_hex(h, s, l): return rgb_to_hex(huslp_to_rgb(h, s, l)) def rgb_to_huslp(r, g, b): return lch_to_huslp(rgb_to_lch(r, g, b)) def hex_to_huslp(hex): return rgb_to_huslp(*hex_to_rgb(hex)) def lch_to_rgb(l, c, h): return xyz_to_rgb(luv_to_xyz(lch_to_luv([l, c, h]))) def rgb_to_lch(r, g, b): return luv_to_lch(xyz_to_luv(rgb_to_xyz([r, g, b]))) def max_chroma(L, H): hrad = math.radians(H) sinH = (math.sin(hrad)) cosH = (math.cos(hrad)) sub1 = (math.pow(L + 16, 3.0) / 1560896.0) sub2 = sub1 if sub1 > 0.008856 else (L / 903.3) result = float("inf") for row in m: m1 = row[0] m2 = row[1] m3 = row[2] top = ((0.99915 * m1 + 1.05122 * m2 + 1.14460 * m3) * sub2) rbottom = (0.86330 * m3 - 0.17266 * m2) lbottom = (0.12949 * m3 - 0.38848 * m1) bottom = (rbottom * sinH + lbottom * cosH) * sub2 for t in (0.0, 1.0): C = (L * (top - 1.05122 * t) / (bottom + 0.17266 * sinH * t)) if C > 0.0 and C < result: result = C return result def _hrad_extremum(L): lhs = (math.pow(L, 3.0) + 48.0 * math.pow(L, 2.0) + 768.0 * L + 4096.0) / 1560896.0 rhs = 1107.0 / 125000.0 sub = lhs if lhs > rhs else 10.0 * L / 9033.0 chroma = float("inf") result = None for row in m: for limit in (0.0, 1.0): [m1, m2, m3] = row top = -3015466475.0 * m3 * sub + 603093295.0 * m2 * sub - 603093295.0 * limit bottom = 1356959916.0 * m1 * sub - 452319972.0 * m3 * sub hrad = math.atan2(top, bottom) # This is a math hack to deal with tan quadrants, I'm too lazy to figure # out how to do this properly if limit == 0.0: hrad += math.pi test = max_chroma(L, math.degrees(hrad)) if test < chroma: chroma = test result = hrad return result 
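The LCH triples passed around this module are just a polar form of LUV: chroma is the radius and hue the angle in the (U, V) plane. A minimal standalone sketch of that round trip (independent of the module's own `luv_to_lch`/`lch_to_luv`, and normalizing hue with a modulo rather than the module's `if H < 0` branch):

```python
import math

def luv_to_lch(L, U, V):
    # Chroma is the distance from the origin in the (U, V) plane;
    # hue is the angle, mapped into [0, 360) degrees.
    C = math.hypot(U, V)
    H = math.degrees(math.atan2(V, U)) % 360.0
    return L, C, H

def lch_to_luv(L, C, H):
    # Inverse: project the (chroma, hue) polar coordinates back to (U, V).
    Hrad = math.radians(H)
    return L, math.cos(Hrad) * C, math.sin(Hrad) * C

L, C, H = luv_to_lch(50.0, 30.0, -40.0)
L2, U2, V2 = lch_to_luv(L, C, H)
```

`max_chroma` above supplies the missing piece: the largest `C` at a given `(L, H)` that still maps into the RGB gamut, which is what lets HUSL express saturation as a 0-100 percentage.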
def max_chroma_pastel(L): H = math.degrees(_hrad_extremum(L)) return max_chroma(L, H) def dot_product(a, b): return sum(map(operator.mul, a, b)) def f(t): if t > lab_e: return (math.pow(t, 1.0 / 3.0)) else: return (7.787 * t + 16.0 / 116.0) def f_inv(t): if math.pow(t, 3.0) > lab_e: return (math.pow(t, 3.0)) else: return (116.0 * t - 16.0) / lab_k def from_linear(c): if c <= 0.0031308: return 12.92 * c else: return (1.055 * math.pow(c, 1.0 / 2.4) - 0.055) def to_linear(c): a = 0.055 if c > 0.04045: return (math.pow((c + a) / (1.0 + a), 2.4)) else: return (c / 12.92) def rgb_prepare(triple): ret = [] for ch in triple: ch = round(ch, 3) if ch < -0.0001 or ch > 1.0001: raise Exception("Illegal RGB value %f" % ch) if ch < 0: ch = 0 if ch > 1: ch = 1 # Fix for Python 3 which by default rounds 4.5 down to 4.0 # instead of Python 2 which is rounded to 5.0 which caused # a couple off by one errors in the tests. Tests now all pass # in Python 2 and Python 3 ret.append(int(round(ch * 255 + 0.001, 0))) return ret def hex_to_rgb(hex): if hex.startswith('#'): hex = hex[1:] r = int(hex[0:2], 16) / 255.0 g = int(hex[2:4], 16) / 255.0 b = int(hex[4:6], 16) / 255.0 return [r, g, b] def rgb_to_hex(triple): [r, g, b] = triple return '#%02x%02x%02x' % tuple(rgb_prepare([r, g, b])) def xyz_to_rgb(triple): xyz = map(lambda row: dot_product(row, triple), m) return list(map(from_linear, xyz)) def rgb_to_xyz(triple): rgbl = list(map(to_linear, triple)) return list(map(lambda row: dot_product(row, rgbl), m_inv)) def xyz_to_luv(triple): X, Y, Z = triple if X == Y == Z == 0.0: return [0.0, 0.0, 0.0] varU = (4.0 * X) / (X + (15.0 * Y) + (3.0 * Z)) varV = (9.0 * Y) / (X + (15.0 * Y) + (3.0 * Z)) L = 116.0 * f(Y / refY) - 16.0 # Black will create a divide-by-zero error if L == 0.0: return [0.0, 0.0, 0.0] U = 13.0 * L * (varU - refU) V = 13.0 * L * (varV - refV) return [L, U, V] def luv_to_xyz(triple): L, U, V = triple if L == 0: return [0.0, 0.0, 0.0] varY = f_inv((L + 16.0) / 116.0) varU = U / 
        (13.0 * L) + refU
    varV = V / (13.0 * L) + refV
    Y = varY * refY
    X = 0.0 - (9.0 * Y * varU) / ((varU - 4.0) * varV - varU * varV)
    Z = (9.0 * Y - (15.0 * varV * Y) - (varV * X)) / (3.0 * varV)
    return [X, Y, Z]


def luv_to_lch(triple):
    L, U, V = triple

    C = (math.pow(math.pow(U, 2) + math.pow(V, 2), (1.0 / 2.0)))
    hrad = (math.atan2(V, U))
    H = math.degrees(hrad)
    if H < 0.0:
        H = 360.0 + H

    return [L, C, H]


def lch_to_luv(triple):
    L, C, H = triple

    Hrad = math.radians(H)
    U = (math.cos(Hrad) * C)
    V = (math.sin(Hrad) * C)

    return [L, U, V]


def husl_to_lch(triple):
    H, S, L = triple

    if L > 99.9999999:
        return [100, 0.0, H]
    if L < 0.00000001:
        return [0.0, 0.0, H]

    mx = max_chroma(L, H)
    C = mx / 100.0 * S

    return [L, C, H]


def lch_to_husl(triple):
    L, C, H = triple

    if L > 99.9999999:
        return [H, 0.0, 100.0]
    if L < 0.00000001:
        return [H, 0.0, 0.0]

    mx = max_chroma(L, H)
    S = C / mx * 100.0

    return [H, S, L]


def huslp_to_lch(triple):
    H, S, L = triple

    if L > 99.9999999:
        return [100, 0.0, H]
    if L < 0.00000001:
        return [0.0, 0.0, H]

    mx = max_chroma_pastel(L)
    C = mx / 100.0 * S

    return [L, C, H]


def lch_to_huslp(triple):
    L, C, H = triple

    if L > 99.9999999:
        return [H, 0.0, 100.0]
    if L < 0.00000001:
        return [H, 0.0, 0.0]

    mx = max_chroma_pastel(L)
    S = C / mx * 100.0

    return [H, S, L]


# seaborn-0.11.2/seaborn/matrix.py
"""Functions to visualize matrices of data."""
import warnings

import matplotlib as mpl
from matplotlib.collections import LineCollection
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import pandas as pd
from scipy.cluster import hierarchy

from .
import cm
from .axisgrid import Grid
from .utils import (
    despine,
    axis_ticklabels_overlap,
    relative_luminance,
    to_utf8,
    _draw_figure,
)
from ._decorators import _deprecate_positional_args


__all__ = ["heatmap", "clustermap"]


def _index_to_label(index):
    """Convert a pandas index or multiindex to an axis label."""
    if isinstance(index, pd.MultiIndex):
        return "-".join(map(to_utf8, index.names))
    else:
        return index.name


def _index_to_ticklabels(index):
    """Convert a pandas index or multiindex into ticklabels."""
    if isinstance(index, pd.MultiIndex):
        return ["-".join(map(to_utf8, i)) for i in index.values]
    else:
        return index.values


def _convert_colors(colors):
    """Convert either a list of colors or nested lists of colors to RGB."""
    to_rgb = mpl.colors.to_rgb

    try:
        to_rgb(colors[0])
        # If this works, there is only one level of colors
        return list(map(to_rgb, colors))
    except ValueError:
        # If we get here, we have nested lists
        return [list(map(to_rgb, l)) for l in colors]


def _matrix_mask(data, mask):
    """Ensure that data and mask are compatible and add missing values.

    Values will be plotted for cells where ``mask`` is ``False``.

    ``data`` is expected to be a DataFrame; ``mask`` can be an array or
    a DataFrame.

    """
    if mask is None:
        mask = np.zeros(data.shape, bool)

    if isinstance(mask, np.ndarray):
        # For array masks, ensure that shape matches data then convert
        if mask.shape != data.shape:
            raise ValueError("Mask must have the same shape as data.")

        mask = pd.DataFrame(mask,
                            index=data.index,
                            columns=data.columns,
                            dtype=bool)

    elif isinstance(mask, pd.DataFrame):
        # For DataFrame masks, raise unless both the index and the columns
        # match the data
        if not mask.index.equals(data.index) \
           or not mask.columns.equals(data.columns):
            err = "Mask must have the same index and columns as data."
raise ValueError(err) # Add any cells with missing data to the mask # This works around an issue where `plt.pcolormesh` doesn't represent # missing data properly mask = mask | pd.isnull(data) return mask class _HeatMapper: """Draw a heatmap plot of a matrix with nice labels and colormaps.""" def __init__(self, data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, cbar, cbar_kws, xticklabels=True, yticklabels=True, mask=None): """Initialize the plotting object.""" # We always want to have a DataFrame with semantic information # and an ndarray to pass to matplotlib if isinstance(data, pd.DataFrame): plot_data = data.values else: plot_data = np.asarray(data) data = pd.DataFrame(plot_data) # Validate the mask and convet to DataFrame mask = _matrix_mask(data, mask) plot_data = np.ma.masked_where(np.asarray(mask), plot_data) # Get good names for the rows and columns xtickevery = 1 if isinstance(xticklabels, int): xtickevery = xticklabels xticklabels = _index_to_ticklabels(data.columns) elif xticklabels is True: xticklabels = _index_to_ticklabels(data.columns) elif xticklabels is False: xticklabels = [] ytickevery = 1 if isinstance(yticklabels, int): ytickevery = yticklabels yticklabels = _index_to_ticklabels(data.index) elif yticklabels is True: yticklabels = _index_to_ticklabels(data.index) elif yticklabels is False: yticklabels = [] if not len(xticklabels): self.xticks = [] self.xticklabels = [] elif isinstance(xticklabels, str) and xticklabels == "auto": self.xticks = "auto" self.xticklabels = _index_to_ticklabels(data.columns) else: self.xticks, self.xticklabels = self._skip_ticks(xticklabels, xtickevery) if not len(yticklabels): self.yticks = [] self.yticklabels = [] elif isinstance(yticklabels, str) and yticklabels == "auto": self.yticks = "auto" self.yticklabels = _index_to_ticklabels(data.index) else: self.yticks, self.yticklabels = self._skip_ticks(yticklabels, ytickevery) # Get good names for the axis labels xlabel = _index_to_label(data.columns) 
ylabel = _index_to_label(data.index) self.xlabel = xlabel if xlabel is not None else "" self.ylabel = ylabel if ylabel is not None else "" # Determine good default values for the colormapping self._determine_cmap_params(plot_data, vmin, vmax, cmap, center, robust) # Sort out the annotations if annot is None or annot is False: annot = False annot_data = None else: if isinstance(annot, bool): annot_data = plot_data else: annot_data = np.asarray(annot) if annot_data.shape != plot_data.shape: err = "`data` and `annot` must have same shape." raise ValueError(err) annot = True # Save other attributes to the object self.data = data self.plot_data = plot_data self.annot = annot self.annot_data = annot_data self.fmt = fmt self.annot_kws = {} if annot_kws is None else annot_kws.copy() self.cbar = cbar self.cbar_kws = {} if cbar_kws is None else cbar_kws.copy() def _determine_cmap_params(self, plot_data, vmin, vmax, cmap, center, robust): """Use some heuristics to set good defaults for colorbar and range.""" # plot_data is a np.ma.array instance calc_data = plot_data.astype(float).filled(np.nan) if vmin is None: if robust: vmin = np.nanpercentile(calc_data, 2) else: vmin = np.nanmin(calc_data) if vmax is None: if robust: vmax = np.nanpercentile(calc_data, 98) else: vmax = np.nanmax(calc_data) self.vmin, self.vmax = vmin, vmax # Choose default colormaps if not provided if cmap is None: if center is None: self.cmap = cm.rocket else: self.cmap = cm.icefire elif isinstance(cmap, str): self.cmap = mpl.cm.get_cmap(cmap) elif isinstance(cmap, list): self.cmap = mpl.colors.ListedColormap(cmap) else: self.cmap = cmap # Recenter a divergent colormap if center is not None: # Copy bad values # in mpl<3.2 only masked values are honored with "bad" color spec # (see https://github.com/matplotlib/matplotlib/pull/14257) bad = self.cmap(np.ma.masked_invalid([np.nan]))[0] # under/over values are set for sure when cmap extremes # do not map to the same color as +-inf under = self.cmap(-np.inf) 
over = self.cmap(np.inf) under_set = under != self.cmap(0) over_set = over != self.cmap(self.cmap.N - 1) vrange = max(vmax - center, center - vmin) normalize = mpl.colors.Normalize(center - vrange, center + vrange) cmin, cmax = normalize([vmin, vmax]) cc = np.linspace(cmin, cmax, 256) self.cmap = mpl.colors.ListedColormap(self.cmap(cc)) self.cmap.set_bad(bad) if under_set: self.cmap.set_under(under) if over_set: self.cmap.set_over(over) def _annotate_heatmap(self, ax, mesh): """Add textual labels with the value in each cell.""" mesh.update_scalarmappable() height, width = self.annot_data.shape xpos, ypos = np.meshgrid(np.arange(width) + .5, np.arange(height) + .5) for x, y, m, color, val in zip(xpos.flat, ypos.flat, mesh.get_array(), mesh.get_facecolors(), self.annot_data.flat): if m is not np.ma.masked: lum = relative_luminance(color) text_color = ".15" if lum > .408 else "w" annotation = ("{:" + self.fmt + "}").format(val) text_kwargs = dict(color=text_color, ha="center", va="center") text_kwargs.update(self.annot_kws) ax.text(x, y, annotation, **text_kwargs) def _skip_ticks(self, labels, tickevery): """Return ticks and labels at evenly spaced intervals.""" n = len(labels) if tickevery == 0: ticks, labels = [], [] elif tickevery == 1: ticks, labels = np.arange(n) + .5, labels else: start, end, step = 0, n, tickevery ticks = np.arange(start, end, step) + .5 labels = labels[start:end:step] return ticks, labels def _auto_ticks(self, ax, labels, axis): """Determine ticks and ticklabels that minimize overlap.""" transform = ax.figure.dpi_scale_trans.inverted() bbox = ax.get_window_extent().transformed(transform) size = [bbox.width, bbox.height][axis] axis = [ax.xaxis, ax.yaxis][axis] tick, = axis.set_ticks([0]) fontsize = tick.label1.get_size() max_ticks = int(size // (fontsize / 72)) if max_ticks < 1: return [], [] tick_every = len(labels) // max_ticks + 1 tick_every = 1 if tick_every == 0 else tick_every ticks, labels = self._skip_ticks(labels, tick_every) return
ticks, labels def plot(self, ax, cax, kws): """Draw the heatmap on the provided Axes.""" # Remove all the Axes spines despine(ax=ax, left=True, bottom=True) # setting vmin/vmax in addition to norm is deprecated # so avoid setting if norm is set if "norm" not in kws: kws.setdefault("vmin", self.vmin) kws.setdefault("vmax", self.vmax) # Draw the heatmap mesh = ax.pcolormesh(self.plot_data, cmap=self.cmap, **kws) # Set the axis limits ax.set(xlim=(0, self.data.shape[1]), ylim=(0, self.data.shape[0])) # Invert the y axis to show the plot in matrix form ax.invert_yaxis() # Possibly add a colorbar if self.cbar: cb = ax.figure.colorbar(mesh, cax, ax, **self.cbar_kws) cb.outline.set_linewidth(0) # If rasterized is passed to pcolormesh, also rasterize the # colorbar to avoid white lines on the PDF rendering if kws.get('rasterized', False): cb.solids.set_rasterized(True) # Add row and column labels if isinstance(self.xticks, str) and self.xticks == "auto": xticks, xticklabels = self._auto_ticks(ax, self.xticklabels, 0) else: xticks, xticklabels = self.xticks, self.xticklabels if isinstance(self.yticks, str) and self.yticks == "auto": yticks, yticklabels = self._auto_ticks(ax, self.yticklabels, 1) else: yticks, yticklabels = self.yticks, self.yticklabels ax.set(xticks=xticks, yticks=yticks) xtl = ax.set_xticklabels(xticklabels) ytl = ax.set_yticklabels(yticklabels, rotation="vertical") plt.setp(ytl, va="center") # GH2484 # Possibly rotate them if they overlap _draw_figure(ax.figure) if axis_ticklabels_overlap(xtl): plt.setp(xtl, rotation="vertical") if axis_ticklabels_overlap(ytl): plt.setp(ytl, rotation="horizontal") # Add the axis labels ax.set(xlabel=self.xlabel, ylabel=self.ylabel) # Annotate the cells with the formatted values if self.annot: self._annotate_heatmap(ax, mesh) @_deprecate_positional_args def heatmap( data, *, vmin=None, vmax=None, cmap=None, center=None, robust=False, annot=None, fmt=".2g", annot_kws=None, linewidths=0, linecolor="white", cbar=True, 
cbar_kws=None, cbar_ax=None, square=False, xticklabels="auto", yticklabels="auto", mask=None, ax=None, **kwargs ): """Plot rectangular data as a color-encoded matrix. This is an Axes-level function and will draw the heatmap into the currently-active Axes if none is provided to the ``ax`` argument. Part of this Axes space will be taken and used to plot a colormap, unless ``cbar`` is False or a separate Axes is provided to ``cbar_ax``. Parameters ---------- data : rectangular dataset 2D dataset that can be coerced into an ndarray. If a Pandas DataFrame is provided, the index/column information will be used to label the columns and rows. vmin, vmax : floats, optional Values to anchor the colormap, otherwise they are inferred from the data and other keyword arguments. cmap : matplotlib colormap name or object, or list of colors, optional The mapping from data values to color space. If not provided, the default will depend on whether ``center`` is set. center : float, optional The value at which to center the colormap when plotting divergent data. Using this parameter will change the default ``cmap`` if none is specified. robust : bool, optional If True and ``vmin`` or ``vmax`` are absent, the colormap range is computed with robust quantiles instead of the extreme values. annot : bool or rectangular dataset, optional If True, write the data value in each cell. If an array-like with the same shape as ``data``, then use this to annotate the heatmap instead of the data. Note that DataFrames will match on position, not index. fmt : str, optional String formatting code to use when adding annotations. annot_kws : dict of key, value mappings, optional Keyword arguments for :meth:`matplotlib.axes.Axes.text` when ``annot`` is True. linewidths : float, optional Width of the lines that will divide each cell. linecolor : color, optional Color of the lines that will divide each cell. cbar : bool, optional Whether to draw a colorbar.
cbar_kws : dict of key, value mappings, optional Keyword arguments for :meth:`matplotlib.figure.Figure.colorbar`. cbar_ax : matplotlib Axes, optional Axes in which to draw the colorbar, otherwise take space from the main Axes. square : bool, optional If True, set the Axes aspect to "equal" so each cell will be square-shaped. xticklabels, yticklabels : "auto", bool, list-like, or int, optional If True, plot the column names of the dataframe. If False, don't plot the column names. If list-like, plot these alternate labels as the xticklabels. If an integer, use the column names but plot only every n label. If "auto", try to densely plot non-overlapping labels. mask : bool array or DataFrame, optional If passed, data will not be shown in cells where ``mask`` is True. Cells with missing values are automatically masked. ax : matplotlib Axes, optional Axes in which to draw the plot, otherwise use the currently-active Axes. kwargs : other keyword arguments All other keyword arguments are passed to :meth:`matplotlib.axes.Axes.pcolormesh`. Returns ------- ax : matplotlib Axes Axes object with the heatmap. See Also -------- clustermap : Plot a matrix using hierarchical clustering to arrange the rows and columns. Examples -------- Plot a heatmap for a numpy array: .. plot:: :context: close-figs >>> import numpy as np; np.random.seed(0) >>> import seaborn as sns; sns.set_theme() >>> uniform_data = np.random.rand(10, 12) >>> ax = sns.heatmap(uniform_data) Change the limits of the colormap: .. plot:: :context: close-figs >>> ax = sns.heatmap(uniform_data, vmin=0, vmax=1) Plot a heatmap for data centered on 0 with a diverging colormap: .. plot:: :context: close-figs >>> normal_data = np.random.randn(10, 12) >>> ax = sns.heatmap(normal_data, center=0) Plot a dataframe with meaningful row and column labels: ..
plot:: :context: close-figs >>> flights = sns.load_dataset("flights") >>> flights = flights.pivot("month", "year", "passengers") >>> ax = sns.heatmap(flights) Annotate each cell with the numeric value using integer formatting: .. plot:: :context: close-figs >>> ax = sns.heatmap(flights, annot=True, fmt="d") Add lines between each cell: .. plot:: :context: close-figs >>> ax = sns.heatmap(flights, linewidths=.5) Use a different colormap: .. plot:: :context: close-figs >>> ax = sns.heatmap(flights, cmap="YlGnBu") Center the colormap at a specific value: .. plot:: :context: close-figs >>> ax = sns.heatmap(flights, center=flights.loc["Jan", 1955]) Plot every other column label and don't plot row labels: .. plot:: :context: close-figs >>> data = np.random.randn(50, 20) >>> ax = sns.heatmap(data, xticklabels=2, yticklabels=False) Don't draw a colorbar: .. plot:: :context: close-figs >>> ax = sns.heatmap(flights, cbar=False) Use different axes for the colorbar: .. plot:: :context: close-figs >>> grid_kws = {"height_ratios": (.9, .05), "hspace": .3} >>> f, (ax, cbar_ax) = plt.subplots(2, gridspec_kw=grid_kws) >>> ax = sns.heatmap(flights, ax=ax, ... cbar_ax=cbar_ax, ... cbar_kws={"orientation": "horizontal"}) Use a mask to plot only part of a matrix .. plot:: :context: close-figs >>> corr = np.corrcoef(np.random.randn(10, 200)) >>> mask = np.zeros_like(corr) >>> mask[np.triu_indices_from(mask)] = True >>> with sns.axes_style("white"): ... f, ax = plt.subplots(figsize=(7, 5)) ... 
ax = sns.heatmap(corr, mask=mask, vmax=.3, square=True) """ # Initialize the plotter object plotter = _HeatMapper(data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, cbar, cbar_kws, xticklabels, yticklabels, mask) # Add the pcolormesh kwargs here kwargs["linewidths"] = linewidths kwargs["edgecolor"] = linecolor # Draw the plot and return the Axes if ax is None: ax = plt.gca() if square: ax.set_aspect("equal") plotter.plot(ax, cbar_ax, kwargs) return ax class _DendrogramPlotter(object): """Object for drawing tree of similarities between data rows/columns""" def __init__(self, data, linkage, metric, method, axis, label, rotate): """Plot a dendrogram of the relationships between the columns of data Parameters ---------- data : pandas.DataFrame Rectangular data """ self.axis = axis if self.axis == 1: data = data.T if isinstance(data, pd.DataFrame): array = data.values else: array = np.asarray(data) data = pd.DataFrame(array) self.array = array self.data = data self.shape = self.data.shape self.metric = metric self.method = method self.axis = axis self.label = label self.rotate = rotate if linkage is None: self.linkage = self.calculated_linkage else: self.linkage = linkage self.dendrogram = self.calculate_dendrogram() # Dendrogram ends are always at multiples of 5, who knows why ticks = 10 * np.arange(self.data.shape[0]) + 5 if self.label: ticklabels = _index_to_ticklabels(self.data.index) ticklabels = [ticklabels[i] for i in self.reordered_ind] if self.rotate: self.xticks = [] self.yticks = ticks self.xticklabels = [] self.yticklabels = ticklabels self.ylabel = _index_to_label(self.data.index) self.xlabel = '' else: self.xticks = ticks self.yticks = [] self.xticklabels = ticklabels self.yticklabels = [] self.ylabel = '' self.xlabel = _index_to_label(self.data.index) else: self.xticks, self.yticks = [], [] self.yticklabels, self.xticklabels = [], [] self.xlabel, self.ylabel = '', '' self.dependent_coord = self.dendrogram['dcoord'] self.independent_coord = 
self.dendrogram['icoord'] def _calculate_linkage_scipy(self): linkage = hierarchy.linkage(self.array, method=self.method, metric=self.metric) return linkage def _calculate_linkage_fastcluster(self): import fastcluster # Fastcluster has a memory-saving vectorized version, but only # with certain linkage methods, and mostly with euclidean metric # vector_methods = ('single', 'centroid', 'median', 'ward') euclidean_methods = ('centroid', 'median', 'ward') euclidean = self.metric == 'euclidean' and self.method in \ euclidean_methods if euclidean or self.method == 'single': return fastcluster.linkage_vector(self.array, method=self.method, metric=self.metric) else: linkage = fastcluster.linkage(self.array, method=self.method, metric=self.metric) return linkage @property def calculated_linkage(self): try: return self._calculate_linkage_fastcluster() except ImportError: if np.prod(self.shape) >= 10000: msg = ("Clustering large matrix with scipy. Installing " "`fastcluster` may give better performance.") warnings.warn(msg) return self._calculate_linkage_scipy() def calculate_dendrogram(self): """Calculates a dendrogram based on the linkage matrix Made a separate function, not a property, because we don't want to recalculate the dendrogram every time it is accessed. Returns ------- dendrogram : dict Dendrogram dictionary as returned by scipy.cluster.hierarchy.dendrogram.
The important key-value pairing is "reordered_ind" which indicates the re-ordering of the matrix """ return hierarchy.dendrogram(self.linkage, no_plot=True, color_threshold=-np.inf) @property def reordered_ind(self): """Indices of the matrix, reordered by the dendrogram""" return self.dendrogram['leaves'] def plot(self, ax, tree_kws): """Plots a dendrogram of the similarities between data on the axes Parameters ---------- ax : matplotlib.axes.Axes Axes object upon which the dendrogram is plotted """ tree_kws = {} if tree_kws is None else tree_kws.copy() tree_kws.setdefault("linewidths", .5) tree_kws.setdefault("colors", tree_kws.pop("color", (.2, .2, .2))) if self.rotate and self.axis == 0: coords = zip(self.dependent_coord, self.independent_coord) else: coords = zip(self.independent_coord, self.dependent_coord) lines = LineCollection([list(zip(x, y)) for x, y in coords], **tree_kws) ax.add_collection(lines) number_of_leaves = len(self.reordered_ind) max_dependent_coord = max(map(max, self.dependent_coord)) if self.rotate: ax.yaxis.set_ticks_position('right') # Constants 10 and 1.05 come from # `scipy.cluster.hierarchy._plot_dendrogram` ax.set_ylim(0, number_of_leaves * 10) ax.set_xlim(0, max_dependent_coord * 1.05) ax.invert_xaxis() ax.invert_yaxis() else: # Constants 10 and 1.05 come from # `scipy.cluster.hierarchy._plot_dendrogram` ax.set_xlim(0, number_of_leaves * 10) ax.set_ylim(0, max_dependent_coord * 1.05) despine(ax=ax, bottom=True, left=True) ax.set(xticks=self.xticks, yticks=self.yticks, xlabel=self.xlabel, ylabel=self.ylabel) xtl = ax.set_xticklabels(self.xticklabels) ytl = ax.set_yticklabels(self.yticklabels, rotation='vertical') # Force a draw of the plot to avoid matplotlib window error _draw_figure(ax.figure) if len(ytl) > 0 and axis_ticklabels_overlap(ytl): plt.setp(ytl, rotation="horizontal") if len(xtl) > 0 and axis_ticklabels_overlap(xtl): plt.setp(xtl, rotation="vertical") return self @_deprecate_positional_args def dendrogram( data, *, 
linkage=None, axis=1, label=True, metric='euclidean', method='average', rotate=False, tree_kws=None, ax=None ): """Draw a tree diagram of relationships within a matrix Parameters ---------- data : pandas.DataFrame Rectangular data linkage : numpy.array, optional Linkage matrix axis : int, optional Which axis to use to calculate linkage. 0 is rows, 1 is columns. label : bool, optional If True, label the dendrogram at leaves with column or row names metric : str, optional Distance metric. Anything valid for scipy.spatial.distance.pdist method : str, optional Linkage method to use. Anything valid for scipy.cluster.hierarchy.linkage rotate : bool, optional When plotting the matrix, whether to rotate it 90 degrees counter-clockwise, so the leaves face right tree_kws : dict, optional Keyword arguments for the ``matplotlib.collections.LineCollection`` that is used for plotting the lines of the dendrogram tree. ax : matplotlib axis, optional Axis to plot on, otherwise uses current axis Returns ------- dendrogramplotter : _DendrogramPlotter A Dendrogram plotter object. 
Notes ----- Access the reordered dendrogram indices with dendrogramplotter.reordered_ind """ plotter = _DendrogramPlotter(data, linkage=linkage, axis=axis, metric=metric, method=method, label=label, rotate=rotate) if ax is None: ax = plt.gca() return plotter.plot(ax=ax, tree_kws=tree_kws) class ClusterGrid(Grid): def __init__(self, data, pivot_kws=None, z_score=None, standard_scale=None, figsize=None, row_colors=None, col_colors=None, mask=None, dendrogram_ratio=None, colors_ratio=None, cbar_pos=None): """Grid object for organizing clustered heatmap input on to axes""" if isinstance(data, pd.DataFrame): self.data = data else: self.data = pd.DataFrame(data) self.data2d = self.format_data(self.data, pivot_kws, z_score, standard_scale) self.mask = _matrix_mask(self.data2d, mask) self._figure = plt.figure(figsize=figsize) self.row_colors, self.row_color_labels = \ self._preprocess_colors(data, row_colors, axis=0) self.col_colors, self.col_color_labels = \ self._preprocess_colors(data, col_colors, axis=1) try: row_dendrogram_ratio, col_dendrogram_ratio = dendrogram_ratio except TypeError: row_dendrogram_ratio = col_dendrogram_ratio = dendrogram_ratio try: row_colors_ratio, col_colors_ratio = colors_ratio except TypeError: row_colors_ratio = col_colors_ratio = colors_ratio width_ratios = self.dim_ratios(self.row_colors, row_dendrogram_ratio, row_colors_ratio) height_ratios = self.dim_ratios(self.col_colors, col_dendrogram_ratio, col_colors_ratio) nrows = 2 if self.col_colors is None else 3 ncols = 2 if self.row_colors is None else 3 self.gs = gridspec.GridSpec(nrows, ncols, width_ratios=width_ratios, height_ratios=height_ratios) self.ax_row_dendrogram = self._figure.add_subplot(self.gs[-1, 0]) self.ax_col_dendrogram = self._figure.add_subplot(self.gs[0, -1]) self.ax_row_dendrogram.set_axis_off() self.ax_col_dendrogram.set_axis_off() self.ax_row_colors = None self.ax_col_colors = None if self.row_colors is not None: self.ax_row_colors = self._figure.add_subplot( 
self.gs[-1, 1]) if self.col_colors is not None: self.ax_col_colors = self._figure.add_subplot( self.gs[1, -1]) self.ax_heatmap = self._figure.add_subplot(self.gs[-1, -1]) if cbar_pos is None: self.ax_cbar = self.cax = None else: # Initialize the colorbar axes in the gridspec so that tight_layout # works. We will move it where it belongs later. This is a hack. self.ax_cbar = self._figure.add_subplot(self.gs[0, 0]) self.cax = self.ax_cbar # Backwards compatibility self.cbar_pos = cbar_pos self.dendrogram_row = None self.dendrogram_col = None def _preprocess_colors(self, data, colors, axis): """Preprocess {row/col}_colors to extract labels and convert colors.""" labels = None if colors is not None: if isinstance(colors, (pd.DataFrame, pd.Series)): # If data is unindexed, raise if (not hasattr(data, "index") and axis == 0) or ( not hasattr(data, "columns") and axis == 1 ): axis_name = "col" if axis else "row" msg = (f"{axis_name}_colors indices can't be matched with data " f"indices. Provide {axis_name}_colors as a non-indexed " "datatype, e.g. 
by using `.to_numpy()`") raise TypeError(msg) # Ensure colors match data indices if axis == 0: colors = colors.reindex(data.index) else: colors = colors.reindex(data.columns) # Replace na's with white color # TODO We should set these to transparent instead colors = colors.astype(object).fillna('white') # Extract color values and labels from frame/series if isinstance(colors, pd.DataFrame): labels = list(colors.columns) colors = colors.T.values else: if colors.name is None: labels = [""] else: labels = [colors.name] colors = colors.values colors = _convert_colors(colors) return colors, labels def format_data(self, data, pivot_kws, z_score=None, standard_scale=None): """Extract variables from data or use directly.""" # Either the data is already in 2d matrix format, or need to do a pivot if pivot_kws is not None: data2d = data.pivot(**pivot_kws) else: data2d = data if z_score is not None and standard_scale is not None: raise ValueError( 'Cannot perform both z-scoring and standard-scaling on data') if z_score is not None: data2d = self.z_score(data2d, z_score) if standard_scale is not None: data2d = self.standard_scale(data2d, standard_scale) return data2d @staticmethod def z_score(data2d, axis=1): """Standardize the mean and variance of the data axis Parameters ---------- data2d : pandas.DataFrame Data to normalize axis : int Which axis to normalize across. If 0, normalize across rows, if 1, normalize across columns. Returns ------- normalized : pandas.DataFrame Normalized data with a mean of 0 and variance of 1 across the specified axis. """ if axis == 1: z_scored = data2d else: z_scored = data2d.T z_scored = (z_scored - z_scored.mean()) / z_scored.std() if axis == 1: return z_scored else: return z_scored.T @staticmethod def standard_scale(data2d, axis=1): """Divide the data by the difference between the max and min Parameters ---------- data2d : pandas.DataFrame Data to normalize axis : int Which axis to normalize across.
If 0, normalize across rows, if 1, normalize across columns. Returns ------- standardized : pandas.DataFrame Standardized data, rescaled to a minimum of 0 and a maximum of 1 across the specified axis. """ # Normalize these values to range from 0 to 1 if axis == 1: standardized = data2d else: standardized = data2d.T subtract = standardized.min() standardized = (standardized - subtract) / ( standardized.max() - standardized.min()) if axis == 1: return standardized else: return standardized.T def dim_ratios(self, colors, dendrogram_ratio, colors_ratio): """Get the proportions of the figure taken up by each axes.""" ratios = [dendrogram_ratio] if colors is not None: # Colors are encoded as rgb, so there is an extra dimension if np.ndim(colors) > 2: n_colors = len(colors) else: n_colors = 1 ratios += [n_colors * colors_ratio] # Add the ratio for the heatmap itself ratios.append(1 - sum(ratios)) return ratios @staticmethod def color_list_to_matrix_and_cmap(colors, ind, axis=0): """Turns a list of colors into a numpy matrix and matplotlib colormap These arguments can now be plotted using heatmap(matrix, cmap) and the provided colors will be plotted. Parameters ---------- colors : list of matplotlib colors Colors to label the rows or columns of a dataframe.
ind : list of ints Ordering of the rows or columns, to reorder the original colors by the clustered dendrogram order axis : int Which axis this is labeling Returns ------- matrix : numpy.array A numpy array of integer values, where each indexes into the cmap cmap : matplotlib.colors.ListedColormap """ try: mpl.colors.to_rgb(colors[0]) except ValueError: # We have a 2D color structure m, n = len(colors), len(colors[0]) if not all(len(c) == n for c in colors[1:]): raise ValueError("Multiple side color vectors must have same size") else: # We have one vector of colors m, n = 1, len(colors) colors = [colors] # Map from unique colors to colormap index value unique_colors = {} matrix = np.zeros((m, n), int) for i, inner in enumerate(colors): for j, color in enumerate(inner): idx = unique_colors.setdefault(color, len(unique_colors)) matrix[i, j] = idx # Reorder for clustering and transpose for axis matrix = matrix[:, ind] if axis == 0: matrix = matrix.T cmap = mpl.colors.ListedColormap(list(unique_colors)) return matrix, cmap def plot_dendrograms(self, row_cluster, col_cluster, metric, method, row_linkage, col_linkage, tree_kws): # Plot the row dendrogram if row_cluster: self.dendrogram_row = dendrogram( self.data2d, metric=metric, method=method, label=False, axis=0, ax=self.ax_row_dendrogram, rotate=True, linkage=row_linkage, tree_kws=tree_kws ) else: self.ax_row_dendrogram.set_xticks([]) self.ax_row_dendrogram.set_yticks([]) # Plot the column dendrogram if col_cluster: self.dendrogram_col = dendrogram( self.data2d, metric=metric, method=method, label=False, axis=1, ax=self.ax_col_dendrogram, linkage=col_linkage, tree_kws=tree_kws ) else: self.ax_col_dendrogram.set_xticks([]) self.ax_col_dendrogram.set_yticks([]) despine(ax=self.ax_row_dendrogram, bottom=True, left=True) despine(ax=self.ax_col_dendrogram, bottom=True, left=True) def plot_colors(self, xind, yind, **kws): """Plots color labels between the dendrogram and the heatmap Parameters ---------- heatmap_kws : dict
Keyword arguments passed to :func:`heatmap` """ # Remove any custom colormap and centering # TODO this code has consistently caused problems when we # have missed kwargs that need to be excluded; it might # be better to rewrite *in*clusively. kws = kws.copy() kws.pop('cmap', None) kws.pop('norm', None) kws.pop('center', None) kws.pop('annot', None) kws.pop('vmin', None) kws.pop('vmax', None) kws.pop('robust', None) kws.pop('xticklabels', None) kws.pop('yticklabels', None) # Plot the row colors if self.row_colors is not None: matrix, cmap = self.color_list_to_matrix_and_cmap( self.row_colors, yind, axis=0) # Get row_color labels if self.row_color_labels is not None: row_color_labels = self.row_color_labels else: row_color_labels = False heatmap(matrix, cmap=cmap, cbar=False, ax=self.ax_row_colors, xticklabels=row_color_labels, yticklabels=False, **kws) # Adjust rotation of labels if row_color_labels is not False: plt.setp(self.ax_row_colors.get_xticklabels(), rotation=90) else: despine(self.ax_row_colors, left=True, bottom=True) # Plot the column colors if self.col_colors is not None: matrix, cmap = self.color_list_to_matrix_and_cmap( self.col_colors, xind, axis=1) # Get col_color labels if self.col_color_labels is not None: col_color_labels = self.col_color_labels else: col_color_labels = False heatmap(matrix, cmap=cmap, cbar=False, ax=self.ax_col_colors, xticklabels=False, yticklabels=col_color_labels, **kws) # Adjust rotation of labels, place on right side if col_color_labels is not False: self.ax_col_colors.yaxis.tick_right() plt.setp(self.ax_col_colors.get_yticklabels(), rotation=0) else: despine(self.ax_col_colors, left=True, bottom=True) def plot_matrix(self, colorbar_kws, xind, yind, **kws): self.data2d = self.data2d.iloc[yind, xind] self.mask = self.mask.iloc[yind, xind] # Try to reorganize specified tick labels, if provided xtl = kws.pop("xticklabels", "auto") try: xtl = np.asarray(xtl)[xind] except (TypeError, IndexError): pass ytl = kws.pop("yticklabels", "auto") try:
ytl = np.asarray(ytl)[yind] except (TypeError, IndexError): pass # Reorganize the annotations to match the heatmap annot = kws.pop("annot", None) if annot is None or annot is False: pass else: if isinstance(annot, bool): annot_data = self.data2d else: annot_data = np.asarray(annot) if annot_data.shape != self.data2d.shape: err = "`data` and `annot` must have same shape." raise ValueError(err) annot_data = annot_data[yind][:, xind] annot = annot_data # Setting ax_cbar=None in clustermap call implies no colorbar kws.setdefault("cbar", self.ax_cbar is not None) heatmap(self.data2d, ax=self.ax_heatmap, cbar_ax=self.ax_cbar, cbar_kws=colorbar_kws, mask=self.mask, xticklabels=xtl, yticklabels=ytl, annot=annot, **kws) ytl = self.ax_heatmap.get_yticklabels() ytl_rot = None if not ytl else ytl[0].get_rotation() self.ax_heatmap.yaxis.set_ticks_position('right') self.ax_heatmap.yaxis.set_label_position('right') if ytl_rot is not None: ytl = self.ax_heatmap.get_yticklabels() plt.setp(ytl, rotation=ytl_rot) tight_params = dict(h_pad=.02, w_pad=.02) if self.ax_cbar is None: self._figure.tight_layout(**tight_params) else: # Turn the colorbar axes off for tight layout so that its # ticks don't interfere with the rest of the plot layout. # Then move it. 
self.ax_cbar.set_axis_off() self._figure.tight_layout(**tight_params) self.ax_cbar.set_axis_on() self.ax_cbar.set_position(self.cbar_pos) def plot(self, metric, method, colorbar_kws, row_cluster, col_cluster, row_linkage, col_linkage, tree_kws, **kws): # heatmap square=True sets the aspect ratio on the axes, but that is # not compatible with the multi-axes layout of clustergrid if kws.get("square", False): msg = "``square=True`` ignored in clustermap" warnings.warn(msg) kws.pop("square") colorbar_kws = {} if colorbar_kws is None else colorbar_kws self.plot_dendrograms(row_cluster, col_cluster, metric, method, row_linkage=row_linkage, col_linkage=col_linkage, tree_kws=tree_kws) try: xind = self.dendrogram_col.reordered_ind except AttributeError: xind = np.arange(self.data2d.shape[1]) try: yind = self.dendrogram_row.reordered_ind except AttributeError: yind = np.arange(self.data2d.shape[0]) self.plot_colors(xind, yind, **kws) self.plot_matrix(colorbar_kws, xind, yind, **kws) return self @_deprecate_positional_args def clustermap( data, *, pivot_kws=None, method='average', metric='euclidean', z_score=None, standard_scale=None, figsize=(10, 10), cbar_kws=None, row_cluster=True, col_cluster=True, row_linkage=None, col_linkage=None, row_colors=None, col_colors=None, mask=None, dendrogram_ratio=.2, colors_ratio=0.03, cbar_pos=(.02, .8, .05, .18), tree_kws=None, **kwargs ): """ Plot a matrix dataset as a hierarchically-clustered heatmap. Parameters ---------- data : 2D array-like Rectangular data for clustering. Cannot contain NAs. pivot_kws : dict, optional If `data` is a tidy dataframe, can provide keyword arguments for pivot to create a rectangular dataframe. method : str, optional Linkage method to use for calculating clusters. See :func:`scipy.cluster.hierarchy.linkage` documentation for more information. metric : str, optional Distance metric to use for the data. See :func:`scipy.spatial.distance.pdist` documentation for more options. 
To use different metrics (or methods) for rows and columns, you may construct each linkage matrix yourself and provide them as `{row,col}_linkage`. z_score : int or None, optional Either 0 (rows) or 1 (columns). Whether or not to calculate z-scores for the rows or the columns. Z scores are: z = (x - mean)/std, so values in each row (column) will get the mean of the row (column) subtracted, then divided by the standard deviation of the row (column). This ensures that each row (column) has mean of 0 and variance of 1. standard_scale : int or None, optional Either 0 (rows) or 1 (columns). Whether or not to standardize that dimension, meaning for each row or column, subtract the minimum and divide each by its maximum. figsize : tuple of (width, height), optional Overall size of the figure. cbar_kws : dict, optional Keyword arguments to pass to `cbar_kws` in :func:`heatmap`, e.g. to add a label to the colorbar. {row,col}_cluster : bool, optional If ``True``, cluster the {rows, columns}. {row,col}_linkage : :class:`numpy.ndarray`, optional Precomputed linkage matrix for the rows or columns. See :func:`scipy.cluster.hierarchy.linkage` for specific formats. {row,col}_colors : list-like or pandas DataFrame/Series, optional List of colors to label for either the rows or columns. Useful to evaluate whether samples within a group are clustered together. Can use nested lists or DataFrame for multiple color levels of labeling. If given as a :class:`pandas.DataFrame` or :class:`pandas.Series`, labels for the colors are extracted from the DataFrames column names or from the name of the Series. DataFrame/Series colors are also matched to the data by their index, ensuring colors are drawn in the correct order. mask : bool array or DataFrame, optional If passed, data will not be shown in cells where `mask` is True. Cells with missing values are automatically masked. Only used for visualizing, not for calculating. 
{dendrogram,colors}_ratio : float, or pair of floats, optional Proportion of the figure size devoted to the two marginal elements. If a pair is given, they correspond to (row, col) ratios. cbar_pos : tuple of (left, bottom, width, height), optional Position of the colorbar axes in the figure. Setting to ``None`` will disable the colorbar. tree_kws : dict, optional Parameters for the :class:`matplotlib.collections.LineCollection` that is used to plot the lines of the dendrogram tree. kwargs : other keyword arguments All other keyword arguments are passed to :func:`heatmap`. Returns ------- :class:`ClusterGrid` A :class:`ClusterGrid` instance. See Also -------- heatmap : Plot rectangular data as a color-encoded matrix. Notes ----- The returned object has a ``savefig`` method that should be used if you want to save the figure object without clipping the dendrograms. To access the reordered row indices, use: ``clustergrid.dendrogram_row.reordered_ind`` Column indices, use: ``clustergrid.dendrogram_col.reordered_ind`` Examples -------- Plot a clustered heatmap: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set_theme(color_codes=True) >>> iris = sns.load_dataset("iris") >>> species = iris.pop("species") >>> g = sns.clustermap(iris) Change the size and layout of the figure: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, ... figsize=(7, 5), ... row_cluster=False, ... dendrogram_ratio=(.1, .2), ... cbar_pos=(0, .2, .03, .4)) Add colored labels to identify observations: .. plot:: :context: close-figs >>> lut = dict(zip(species.unique(), "rbg")) >>> row_colors = species.map(lut) >>> g = sns.clustermap(iris, row_colors=row_colors) Use a different colormap and adjust the limits of the color range: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, cmap="mako", vmin=0, vmax=10) Use a different similarity metric: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, metric="correlation") Use a different clustering method: .. 
plot:: :context: close-figs >>> g = sns.clustermap(iris, method="single") Standardize the data within the columns: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, standard_scale=1) Normalize the data within the rows: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, z_score=0, cmap="vlag") """ plotter = ClusterGrid(data, pivot_kws=pivot_kws, figsize=figsize, row_colors=row_colors, col_colors=col_colors, z_score=z_score, standard_scale=standard_scale, mask=mask, dendrogram_ratio=dendrogram_ratio, colors_ratio=colors_ratio, cbar_pos=cbar_pos) return plotter.plot(metric=metric, method=method, colorbar_kws=cbar_kws, row_cluster=row_cluster, col_cluster=col_cluster, row_linkage=row_linkage, col_linkage=col_linkage, tree_kws=tree_kws, **kwargs) seaborn-0.11.2/seaborn/miscplot.py import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.ticker as ticker __all__ = ["palplot", "dogplot"] def palplot(pal, size=1): """Plot the values in a color palette as a horizontal array. Parameters ---------- pal : sequence of matplotlib colors colors, i.e.
as returned by seaborn.color_palette() size : scaling factor for size of plot """ n = len(pal) f, ax = plt.subplots(1, 1, figsize=(n * size, size)) ax.imshow(np.arange(n).reshape(1, n), cmap=mpl.colors.ListedColormap(list(pal)), interpolation="nearest", aspect="auto") ax.set_xticks(np.arange(n) - .5) ax.set_yticks([-.5, .5]) # Ensure nice border between colors ax.set_xticklabels(["" for _ in range(n)]) # The proper way to set no ticks ax.yaxis.set_major_locator(ticker.NullLocator()) def dogplot(*_, **__): """Who's a good boy?""" try: from urllib.request import urlopen except ImportError: from urllib2 import urlopen from io import BytesIO url = "https://github.com/mwaskom/seaborn-data/raw/master/png/img{}.png" pic = np.random.randint(2, 7) data = BytesIO(urlopen(url.format(pic)).read()) img = plt.imread(data) f, ax = plt.subplots(figsize=(5, 5), dpi=100) f.subplots_adjust(0, 0, 1, 1) ax.imshow(img) ax.set_axis_off() seaborn-0.11.2/seaborn/palettes.py import colorsys from itertools import cycle import numpy as np import matplotlib as mpl from .external import husl from .utils import desaturate, get_color_cycle from .colors import xkcd_rgb, crayons __all__ = ["color_palette", "hls_palette", "husl_palette", "mpl_palette", "dark_palette", "light_palette", "diverging_palette", "blend_palette", "xkcd_palette", "crayon_palette", "cubehelix_palette", "set_color_codes"] SEABORN_PALETTES = dict( deep=["#4C72B0", "#DD8452", "#55A868", "#C44E52", "#8172B3", "#937860", "#DA8BC3", "#8C8C8C", "#CCB974", "#64B5CD"], deep6=["#4C72B0", "#55A868", "#C44E52", "#8172B3", "#CCB974", "#64B5CD"], muted=["#4878D0", "#EE854A", "#6ACC64", "#D65F5F", "#956CB4", "#8C613C", "#DC7EC0", "#797979", "#D5BB67", "#82C6E2"], muted6=["#4878D0", "#6ACC64", "#D65F5F", "#956CB4", "#D5BB67", "#82C6E2"], pastel=["#A1C9F4", "#FFB482", "#8DE5A1", "#FF9F9B", "#D0BBFF", "#DEBB9B", "#FAB0E4", "#CFCFCF", "#FFFEA3", "#B9F2F0"],
pastel6=["#A1C9F4", "#8DE5A1", "#FF9F9B", "#D0BBFF", "#FFFEA3", "#B9F2F0"], bright=["#023EFF", "#FF7C00", "#1AC938", "#E8000B", "#8B2BE2", "#9F4800", "#F14CC1", "#A3A3A3", "#FFC400", "#00D7FF"], bright6=["#023EFF", "#1AC938", "#E8000B", "#8B2BE2", "#FFC400", "#00D7FF"], dark=["#001C7F", "#B1400D", "#12711C", "#8C0800", "#591E71", "#592F0D", "#A23582", "#3C3C3C", "#B8850A", "#006374"], dark6=["#001C7F", "#12711C", "#8C0800", "#591E71", "#B8850A", "#006374"], colorblind=["#0173B2", "#DE8F05", "#029E73", "#D55E00", "#CC78BC", "#CA9161", "#FBAFE4", "#949494", "#ECE133", "#56B4E9"], colorblind6=["#0173B2", "#029E73", "#D55E00", "#CC78BC", "#ECE133", "#56B4E9"] ) MPL_QUAL_PALS = { "tab10": 10, "tab20": 20, "tab20b": 20, "tab20c": 20, "Set1": 9, "Set2": 8, "Set3": 12, "Accent": 8, "Paired": 12, "Pastel1": 9, "Pastel2": 8, "Dark2": 8, } QUAL_PALETTE_SIZES = MPL_QUAL_PALS.copy() QUAL_PALETTE_SIZES.update({k: len(v) for k, v in SEABORN_PALETTES.items()}) QUAL_PALETTES = list(QUAL_PALETTE_SIZES.keys()) class _ColorPalette(list): """Set the color palette in a with statement, otherwise be a list.""" def __enter__(self): """Open the context.""" from .rcmod import set_palette self._orig_palette = color_palette() set_palette(self) return self def __exit__(self, *args): """Close the context.""" from .rcmod import set_palette set_palette(self._orig_palette) def as_hex(self): """Return a color palette with hex codes instead of RGB values.""" hex = [mpl.colors.rgb2hex(rgb) for rgb in self] return _ColorPalette(hex) def _repr_html_(self): """Rich display of the color palette in an HTML frontend.""" s = 55 n = len(self) html = f'<svg width="{n * s}" height="{s}">' for i, c in enumerate(self.as_hex()): html += ( f'<rect x="{i * s}" y="0" width="{s}" height="{s}" style="fill:{c};' 'stroke-width:2;stroke:rgb(255,255,255)"/>' ) html += '</svg>' return html def color_palette(palette=None, n_colors=None, desat=None, as_cmap=False): """Return a list of colors or continuous colormap defining a palette.
Possible ``palette`` values include: - Name of a seaborn palette (deep, muted, bright, pastel, dark, colorblind) - Name of matplotlib colormap - 'husl' or 'hls' - 'ch:<cubehelix arguments>' - 'light:<color>', 'dark:<color>', 'blend:<color>,<color>', - A sequence of colors in any format matplotlib accepts Calling this function with ``palette=None`` will return the current matplotlib color cycle. This function can also be used in a ``with`` statement to temporarily set the color cycle for a plot or set of plots. See the :ref:`tutorial <palette_tutorial>` for more information. Parameters ---------- palette : None, string, or sequence, optional Name of palette or None to return current palette. If a sequence, input colors are used but possibly cycled and desaturated. n_colors : int, optional Number of colors in the palette. If ``None``, the default will depend on how ``palette`` is specified. Named palettes default to 6 colors, but grabbing the current palette or passing in a list of colors will not change the number of colors unless this is specified. Asking for more colors than exist in the palette will cause it to cycle. Ignored when ``as_cmap`` is True. desat : float, optional Proportion to desaturate each color by. as_cmap : bool If True, return a :class:`matplotlib.colors.Colormap`. Returns ------- list of RGB tuples or :class:`matplotlib.colors.Colormap` See Also -------- set_palette : Set the default color cycle for all plots. set_color_codes : Reassign color codes like ``"b"``, ``"g"``, etc. to colors from one of the seaborn palettes. Examples -------- ..
include:: ../docstrings/color_palette.rst """ if palette is None: palette = get_color_cycle() if n_colors is None: n_colors = len(palette) elif not isinstance(palette, str): palette = palette if n_colors is None: n_colors = len(palette) else: if n_colors is None: # Use all colors in a qualitative palette or 6 of another kind n_colors = QUAL_PALETTE_SIZES.get(palette, 6) if palette in SEABORN_PALETTES: # Named "seaborn variant" of matplotlib default color cycle palette = SEABORN_PALETTES[palette] elif palette == "hls": # Evenly spaced colors in cylindrical RGB space palette = hls_palette(n_colors, as_cmap=as_cmap) elif palette == "husl": # Evenly spaced colors in cylindrical Lab space palette = husl_palette(n_colors, as_cmap=as_cmap) elif palette.lower() == "jet": # Paternalism raise ValueError("No.") elif palette.startswith("ch:"): # Cubehelix palette with params specified in string args, kwargs = _parse_cubehelix_args(palette) palette = cubehelix_palette(n_colors, *args, **kwargs, as_cmap=as_cmap) elif palette.startswith("light:"): # light palette to color specified in string _, color = palette.split(":") reverse = color.endswith("_r") if reverse: color = color[:-2] palette = light_palette(color, n_colors, reverse=reverse, as_cmap=as_cmap) elif palette.startswith("dark:"): # light palette to color specified in string _, color = palette.split(":") reverse = color.endswith("_r") if reverse: color = color[:-2] palette = dark_palette(color, n_colors, reverse=reverse, as_cmap=as_cmap) elif palette.startswith("blend:"): # blend palette between colors specified in string _, colors = palette.split(":") colors = colors.split(",") palette = blend_palette(colors, n_colors, as_cmap=as_cmap) else: try: # Perhaps a named matplotlib colormap? 
palette = mpl_palette(palette, n_colors, as_cmap=as_cmap) except ValueError: raise ValueError("%s is not a valid palette name" % palette) if desat is not None: palette = [desaturate(c, desat) for c in palette] if not as_cmap: # Always return as many colors as we asked for pal_cycle = cycle(palette) palette = [next(pal_cycle) for _ in range(n_colors)] # Always return in r, g, b tuple format try: palette = map(mpl.colors.colorConverter.to_rgb, palette) palette = _ColorPalette(palette) except ValueError: raise ValueError(f"Could not generate a palette for {palette}") return palette def hls_palette(n_colors=6, h=.01, l=.6, s=.65, as_cmap=False): # noqa """Get a set of evenly spaced colors in HLS hue space. h, l, and s should be between 0 and 1 Parameters ---------- n_colors : int number of colors in the palette h : float first hue l : float lightness s : float saturation Returns ------- list of RGB tuples or :class:`matplotlib.colors.Colormap` See Also -------- husl_palette : Make a palette using evenly spaced hues in the HUSL system. Examples -------- Create a palette of 10 colors with the default parameters: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set_theme() >>> sns.palplot(sns.hls_palette(10)) Create a palette of 10 colors that begins at a different hue value: .. plot:: :context: close-figs >>> sns.palplot(sns.hls_palette(10, h=.5)) Create a palette of 10 colors that are darker than the default: .. plot:: :context: close-figs >>> sns.palplot(sns.hls_palette(10, l=.4)) Create a palette of 10 colors that are less saturated than the default: .. 
plot:: :context: close-figs >>> sns.palplot(sns.hls_palette(10, s=.4)) """ if as_cmap: n_colors = 256 hues = np.linspace(0, 1, int(n_colors) + 1)[:-1] hues += h hues %= 1 hues -= hues.astype(int) palette = [colorsys.hls_to_rgb(h_i, l, s) for h_i in hues] if as_cmap: return mpl.colors.ListedColormap(palette, "hls") else: return _ColorPalette(palette) def husl_palette(n_colors=6, h=.01, s=.9, l=.65, as_cmap=False): # noqa """Get a set of evenly spaced colors in HUSL hue space. h, s, and l should be between 0 and 1 Parameters ---------- n_colors : int number of colors in the palette h : float first hue s : float saturation l : float lightness Returns ------- list of RGB tuples or :class:`matplotlib.colors.Colormap` See Also -------- hls_palette : Make a palette using evenly spaced circular hues in the HSL system. Examples -------- Create a palette of 10 colors with the default parameters: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set_theme() >>> sns.palplot(sns.husl_palette(10)) Create a palette of 10 colors that begins at a different hue value: .. plot:: :context: close-figs >>> sns.palplot(sns.husl_palette(10, h=.5)) Create a palette of 10 colors that are darker than the default: .. plot:: :context: close-figs >>> sns.palplot(sns.husl_palette(10, l=.4)) Create a palette of 10 colors that are less saturated than the default: .. plot:: :context: close-figs >>> sns.palplot(sns.husl_palette(10, s=.4)) """ if as_cmap: n_colors = 256 hues = np.linspace(0, 1, int(n_colors) + 1)[:-1] hues += h hues %= 1 hues *= 359 s *= 99 l *= 99 # noqa palette = [_color_to_rgb((h_i, s, l), input="husl") for h_i in hues] if as_cmap: return mpl.colors.ListedColormap(palette, "hsl") else: return _ColorPalette(palette) def mpl_palette(name, n_colors=6, as_cmap=False): """Return discrete colors from a matplotlib palette.
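The hue spacing used by ``hls_palette`` above, an evenly spaced grid offset by ``h`` and wrapped into [0, 1), can be sketched with the standard library's ``colorsys``. This is a simplified stand-in for the numpy-based implementation shown above; the real ``husl_palette`` converts through the HUSL space instead:

```python
import colorsys

def tiny_hls_palette(n_colors=6, h=.01, l=.6, s=.65):
    # Evenly spaced hues, offset by h and wrapped back into [0, 1),
    # then converted from HLS to RGB: the core of hls_palette.
    hues = [(i / n_colors + h) % 1 for i in range(n_colors)]
    return [colorsys.hls_to_rgb(hue, l, s) for hue in hues]

pal = tiny_hls_palette(6)
```

Because the hues are a simple arithmetic grid, asking for more colors just packs more evenly spaced points around the hue circle, which is why these palettes scale to any ``n_colors``.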
Note that this handles the qualitative colorbrewer palettes properly, although if you ask for more colors than a particular qualitative palette can provide you will get fewer than you are expecting. In contrast, asking for qualitative color brewer palettes using :func:`color_palette` will return the expected number of colors, but they will cycle. If you are using the IPython notebook, you can also use the function :func:`choose_colorbrewer_palette` to interactively select palettes. Parameters ---------- name : string Name of the palette. This should be a named matplotlib colormap. n_colors : int Number of discrete colors in the palette. Returns ------- list of RGB tuples or :class:`matplotlib.colors.Colormap` Examples -------- Create a qualitative colorbrewer palette with 8 colors: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set_theme() >>> sns.palplot(sns.mpl_palette("Set2", 8)) Create a sequential colorbrewer palette: .. plot:: :context: close-figs >>> sns.palplot(sns.mpl_palette("Blues")) Create a diverging palette: .. plot:: :context: close-figs >>> sns.palplot(sns.mpl_palette("seismic", 8)) Create a "dark" sequential palette: .. 
plot:: :context: close-figs >>> sns.palplot(sns.mpl_palette("GnBu_d")) """ if name.endswith("_d"): sub_name = name[:-2] if sub_name.endswith("_r"): reverse = True sub_name = sub_name[:-2] else: reverse = False pal = color_palette(sub_name, 2) + ["#333333"] if reverse: pal = pal[::-1] cmap = blend_palette(pal, n_colors, as_cmap=True) else: cmap = mpl.cm.get_cmap(name) if name in MPL_QUAL_PALS: bins = np.linspace(0, 1, MPL_QUAL_PALS[name])[:n_colors] else: bins = np.linspace(0, 1, int(n_colors) + 2)[1:-1] palette = list(map(tuple, cmap(bins)[:, :3])) if as_cmap: return cmap else: return _ColorPalette(palette) def _color_to_rgb(color, input): """Add some more flexibility to color choices.""" if input == "hls": color = colorsys.hls_to_rgb(*color) elif input == "husl": color = husl.husl_to_rgb(*color) color = tuple(np.clip(color, 0, 1)) elif input == "xkcd": color = xkcd_rgb[color] return mpl.colors.to_rgb(color) def dark_palette(color, n_colors=6, reverse=False, as_cmap=False, input="rgb"): """Make a sequential palette that blends from dark to ``color``. This kind of palette is good for data that range between relatively uninteresting low values and interesting high values. The ``color`` parameter can be specified in a number of ways, including all options for defining a color in matplotlib and several additional color spaces that are handled by seaborn. You can also use the database of named colors from the XKCD color survey. If you are using the IPython notebook, you can also choose this palette interactively with the :func:`choose_dark_palette` function. Parameters ---------- color : base color for high values hex, rgb-tuple, or html color name n_colors : int, optional number of colors in the palette reverse : bool, optional if True, reverse the direction of the blend as_cmap : bool, optional If True, return a :class:`matplotlib.colors.Colormap`. input : {'rgb', 'hls', 'husl', xkcd'} Color space to interpret the input color. 
The first three options apply to tuple inputs and the latter applies to string inputs. Returns ------- list of RGB tuples or :class:`matplotlib.colors.Colormap` See Also -------- light_palette : Create a sequential palette with bright low values. diverging_palette : Create a diverging palette with two colors. Examples -------- Generate a palette from an HTML color: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set_theme() >>> sns.palplot(sns.dark_palette("purple")) Generate a palette that decreases in lightness: .. plot:: :context: close-figs >>> sns.palplot(sns.dark_palette("seagreen", reverse=True)) Generate a palette from an HUSL-space seed: .. plot:: :context: close-figs >>> sns.palplot(sns.dark_palette((260, 75, 60), input="husl")) Generate a colormap object: .. plot:: :context: close-figs >>> from numpy import arange >>> x = arange(25).reshape(5, 5) >>> cmap = sns.dark_palette("#2ecc71", as_cmap=True) >>> ax = sns.heatmap(x, cmap=cmap) """ rgb = _color_to_rgb(color, input) h, s, l = husl.rgb_to_husl(*rgb) gray_s, gray_l = .15 * s, 15 gray = _color_to_rgb((h, gray_s, gray_l), input="husl") colors = [rgb, gray] if reverse else [gray, rgb] return blend_palette(colors, n_colors, as_cmap) def light_palette(color, n_colors=6, reverse=False, as_cmap=False, input="rgb"): """Make a sequential palette that blends from light to ``color``. This kind of palette is good for data that range between relatively uninteresting low values and interesting high values. The ``color`` parameter can be specified in a number of ways, including all options for defining a color in matplotlib and several additional color spaces that are handled by seaborn. You can also use the database of named colors from the XKCD color survey. If you are using the IPython notebook, you can also choose this palette interactively with the :func:`choose_light_palette` function. Parameters ---------- color : base color for high values hex code, html color name, or tuple in ``input`` space. 
n_colors : int, optional number of colors in the palette reverse : bool, optional if True, reverse the direction of the blend as_cmap : bool, optional If True, return a :class:`matplotlib.colors.Colormap`. input : {'rgb', 'hls', 'husl', xkcd'} Color space to interpret the input color. The first three options apply to tuple inputs and the latter applies to string inputs. Returns ------- list of RGB tuples or :class:`matplotlib.colors.Colormap` See Also -------- dark_palette : Create a sequential palette with dark low values. diverging_palette : Create a diverging palette with two colors. Examples -------- Generate a palette from an HTML color: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set_theme() >>> sns.palplot(sns.light_palette("purple")) Generate a palette that increases in lightness: .. plot:: :context: close-figs >>> sns.palplot(sns.light_palette("seagreen", reverse=True)) Generate a palette from an HUSL-space seed: .. plot:: :context: close-figs >>> sns.palplot(sns.light_palette((260, 75, 60), input="husl")) Generate a colormap object: .. plot:: :context: close-figs >>> from numpy import arange >>> x = arange(25).reshape(5, 5) >>> cmap = sns.light_palette("#2ecc71", as_cmap=True) >>> ax = sns.heatmap(x, cmap=cmap) """ rgb = _color_to_rgb(color, input) h, s, l = husl.rgb_to_husl(*rgb) gray_s, gray_l = .15 * s, 95 gray = _color_to_rgb((h, gray_s, gray_l), input="husl") colors = [rgb, gray] if reverse else [gray, rgb] return blend_palette(colors, n_colors, as_cmap) def diverging_palette(h_neg, h_pos, s=75, l=50, sep=1, n=6, # noqa center="light", as_cmap=False): """Make a diverging palette between two HUSL colors. If you are using the IPython notebook, you can also choose this palette interactively with the :func:`choose_diverging_palette` function. Parameters ---------- h_neg, h_pos : float in [0, 359] Anchor hues for negative and positive extents of the map. s : float in [0, 100], optional Anchor saturation for both extents of the map. 
l : float in [0, 100], optional Anchor lightness for both extents of the map. sep : int, optional Size of the intermediate region. n : int, optional Number of colors in the palette (if not returning a cmap) center : {"light", "dark"}, optional Whether the center of the palette is light or dark as_cmap : bool, optional If True, return a :class:`matplotlib.colors.Colormap`. Returns ------- list of RGB tuples or :class:`matplotlib.colors.Colormap` See Also -------- dark_palette : Create a sequential palette with dark values. light_palette : Create a sequential palette with light values. Examples -------- Generate a blue-white-red palette: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set_theme() >>> sns.palplot(sns.diverging_palette(240, 10, n=9)) Generate a brighter green-white-purple palette: .. plot:: :context: close-figs >>> sns.palplot(sns.diverging_palette(150, 275, s=80, l=55, n=9)) Generate a blue-black-red palette: .. plot:: :context: close-figs >>> sns.palplot(sns.diverging_palette(250, 15, s=75, l=40, ... n=9, center="dark")) Generate a colormap object: .. plot:: :context: close-figs >>> from numpy import arange >>> x = arange(25).reshape(5, 5) >>> cmap = sns.diverging_palette(220, 20, as_cmap=True) >>> ax = sns.heatmap(x, cmap=cmap) """ palfunc = dict(dark=dark_palette, light=light_palette)[center] n_half = int(128 - (sep // 2)) neg = palfunc((h_neg, s, l), n_half, reverse=True, input="husl") pos = palfunc((h_pos, s, l), n_half, input="husl") midpoint = dict(light=[(.95, .95, .95)], dark=[(.133, .133, .133)])[center] mid = midpoint * sep pal = blend_palette(np.concatenate([neg, mid, pos]), n, as_cmap=as_cmap) return pal def blend_palette(colors, n_colors=6, as_cmap=False, input="rgb"): """Make a palette that blends between a list of colors. Parameters ---------- colors : sequence of colors in various formats interpreted by ``input`` hex code, html color name, or tuple in ``input`` space. 
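``blend_palette`` hands the interpolation off to a matplotlib ``LinearSegmentedColormap``. The core idea, per-channel linear interpolation between anchor colors, can be sketched without matplotlib (an illustrative two-anchor version; the real function accepts any number of anchors):

```python
def lerp_palette(start, end, n_colors=6):
    # Straight-line interpolation in RGB between two anchor colors,
    # sampled at n_colors evenly spaced positions along the ramp.
    steps = [i / (n_colors - 1) for i in range(n_colors)]
    return [tuple(a + (b - a) * t for a, b in zip(start, end))
            for t in steps]

# White-to-blue ramp, like blend_palette([(1, 1, 1), (0, 0, 1)], 5)
ramp = lerp_palette((1.0, 1.0, 1.0), (0.0, 0.0, 1.0), 5)
```

The first and last entries are exactly the anchor colors; the midpoint is the per-channel average, which is also how ``dark_palette`` and ``light_palette`` get their ramps once they have computed the gray endpoint.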
n_colors : int, optional Number of colors in the palette. as_cmap : bool, optional If True, return a :class:`matplotlib.colors.Colormap`. Returns ------- list of RGB tuples or :class:`matplotlib.colors.Colormap` """ colors = [_color_to_rgb(color, input) for color in colors] name = "blend" pal = mpl.colors.LinearSegmentedColormap.from_list(name, colors) if not as_cmap: rgb_array = pal(np.linspace(0, 1, int(n_colors)))[:, :3] # no alpha pal = _ColorPalette(map(tuple, rgb_array)) return pal def xkcd_palette(colors): """Make a palette with color names from the xkcd color survey. See xkcd for the full list of colors: https://xkcd.com/color/rgb/ This is just a simple wrapper around the ``seaborn.xkcd_rgb`` dictionary. Parameters ---------- colors : list of strings List of keys in the ``seaborn.xkcd_rgb`` dictionary. Returns ------- palette : seaborn color palette Returns the list of colors as RGB tuples in an object that behaves like other seaborn color palettes. See Also -------- crayon_palette : Make a palette with Crayola crayon colors. """ palette = [xkcd_rgb[name] for name in colors] return color_palette(palette, len(palette)) def crayon_palette(colors): """Make a palette with color names from Crayola crayons. Colors are taken from here: https://en.wikipedia.org/wiki/List_of_Crayola_crayon_colors This is just a simple wrapper around the ``seaborn.crayons`` dictionary. Parameters ---------- colors : list of strings List of keys in the ``seaborn.crayons`` dictionary. Returns ------- palette : seaborn color palette Returns the list of colors as rgb tuples in an object that behaves like other seaborn color palettes. See Also -------- xkcd_palette : Make a palette with named colors from the XKCD color survey. 
""" palette = [crayons[name] for name in colors] return color_palette(palette, len(palette)) def cubehelix_palette(n_colors=6, start=0, rot=.4, gamma=1.0, hue=0.8, light=.85, dark=.15, reverse=False, as_cmap=False): """Make a sequential palette from the cubehelix system. This produces a colormap with linearly-decreasing (or increasing) brightness. That means that information will be preserved if printed to black and white or viewed by someone who is colorblind. "cubehelix" is also available as a matplotlib-based palette, but this function gives the user more control over the look of the palette and has a different set of defaults. In addition to using this function, it is also possible to generate a cubehelix palette generally in seaborn using a string-shorthand; see the example below. Parameters ---------- n_colors : int Number of colors in the palette. start : float, 0 <= start <= 3 The hue at the start of the helix. rot : float Rotations around the hue wheel over the range of the palette. gamma : float 0 <= gamma Gamma factor to emphasize darker (gamma < 1) or lighter (gamma > 1) colors. hue : float, 0 <= hue <= 1 Saturation of the colors. dark : float 0 <= dark <= 1 Intensity of the darkest color in the palette. light : float 0 <= light <= 1 Intensity of the lightest color in the palette. reverse : bool If True, the palette will go from dark to light. as_cmap : bool If True, return a :class:`matplotlib.colors.Colormap`. Returns ------- list of RGB tuples or :class:`matplotlib.colors.Colormap` See Also -------- choose_cubehelix_palette : Launch an interactive widget to select cubehelix palette parameters. dark_palette : Create a sequential palette with dark low values. light_palette : Create a sequential palette with bright low values. References ---------- Green, D. A. (2011). "A colour scheme for the display of astronomical intensity images". Bulletin of the Astronomical Society of India, Vol. 39, p. 289-295. Examples -------- Generate the default palette: ..
plot:: :context: close-figs >>> import seaborn as sns; sns.set_theme() >>> sns.palplot(sns.cubehelix_palette()) Rotate backwards from the same starting location: .. plot:: :context: close-figs >>> sns.palplot(sns.cubehelix_palette(rot=-.4)) Use a different starting point and shorter rotation: .. plot:: :context: close-figs >>> sns.palplot(sns.cubehelix_palette(start=2.8, rot=.1)) Reverse the direction of the lightness ramp: .. plot:: :context: close-figs >>> sns.palplot(sns.cubehelix_palette(reverse=True)) Generate a colormap object: .. plot:: :context: close-figs >>> from numpy import arange >>> x = arange(25).reshape(5, 5) >>> cmap = sns.cubehelix_palette(as_cmap=True) >>> ax = sns.heatmap(x, cmap=cmap) Use the full lightness range: .. plot:: :context: close-figs >>> cmap = sns.cubehelix_palette(dark=0, light=1, as_cmap=True) >>> ax = sns.heatmap(x, cmap=cmap) Use through the :func:`color_palette` interface: .. plot:: :context: close-figs >>> sns.palplot(sns.color_palette("ch:2,r=.2,l=.6")) """ def get_color_function(p0, p1): # Copied from matplotlib because it lives in private module def color(x): # Apply gamma factor to emphasise low or high intensity values xg = x ** gamma # Calculate amplitude and angle of deviation from the black # to white diagonal in the plane of constant # perceived intensity. 
a = hue * xg * (1 - xg) / 2 phi = 2 * np.pi * (start / 3 + rot * x) return xg + a * (p0 * np.cos(phi) + p1 * np.sin(phi)) return color cdict = { "red": get_color_function(-0.14861, 1.78277), "green": get_color_function(-0.29227, -0.90649), "blue": get_color_function(1.97294, 0.0), } cmap = mpl.colors.LinearSegmentedColormap("cubehelix", cdict) x = np.linspace(light, dark, int(n_colors)) pal = cmap(x)[:, :3].tolist() if reverse: pal = pal[::-1] if as_cmap: x_256 = np.linspace(light, dark, 256) if reverse: x_256 = x_256[::-1] pal_256 = cmap(x_256) cmap = mpl.colors.ListedColormap(pal_256, "seaborn_cubehelix") return cmap else: return _ColorPalette(pal) def _parse_cubehelix_args(argstr): """Turn stringified cubehelix params into args/kwargs.""" if argstr.startswith("ch:"): argstr = argstr[3:] if argstr.endswith("_r"): reverse = True argstr = argstr[:-2] else: reverse = False if not argstr: return [], {"reverse": reverse} all_args = argstr.split(",") args = [float(a.strip(" ")) for a in all_args if "=" not in a] kwargs = [a.split("=") for a in all_args if "=" in a] kwargs = {k.strip(" "): float(v.strip(" ")) for k, v in kwargs} kwarg_map = dict( s="start", r="rot", g="gamma", h="hue", l="light", d="dark", # noqa: E741 ) kwargs = {kwarg_map.get(k, k): v for k, v in kwargs.items()} if reverse: kwargs["reverse"] = True return args, kwargs def set_color_codes(palette="deep"): """Change how matplotlib color shorthands are interpreted. Calling this will change how shorthand codes like "b" or "g" are interpreted by matplotlib in subsequent plots. Parameters ---------- palette : {deep, muted, pastel, dark, bright, colorblind} Named seaborn palette to use as the source of colors. See Also -------- set : Color codes can be set through the high-level seaborn style manager. set_palette : Color codes can also be set through the function that sets the matplotlib color cycle. Examples -------- Map matplotlib color codes to the default seaborn palette. .. 
plot:: :context: close-figs >>> import matplotlib.pyplot as plt >>> import seaborn as sns; sns.set_theme() >>> sns.set_color_codes() >>> _ = plt.plot([0, 1], color="r") Use a different seaborn palette. .. plot:: :context: close-figs >>> sns.set_color_codes("dark") >>> _ = plt.plot([0, 1], color="g") >>> _ = plt.plot([0, 2], color="m") """ if palette == "reset": colors = [(0., 0., 1.), (0., .5, 0.), (1., 0., 0.), (.75, 0., .75), (.75, .75, 0.), (0., .75, .75), (0., 0., 0.)] elif not isinstance(palette, str): err = "set_color_codes requires a named seaborn palette" raise TypeError(err) elif palette in SEABORN_PALETTES: if not palette.endswith("6"): palette = palette + "6" colors = SEABORN_PALETTES[palette] + [(.1, .1, .1)] else: err = "Cannot set colors with palette '{}'".format(palette) raise ValueError(err) for code, color in zip("bgrmyck", colors): rgb = mpl.colors.colorConverter.to_rgb(color) mpl.colors.colorConverter.colors[code] = rgb mpl.colors.colorConverter.cache[code] = rgb seaborn-0.11.2/seaborn/rcmod.py """Control plot style and scaling using the matplotlib rcParams interface.""" import warnings import functools from distutils.version import LooseVersion import matplotlib as mpl from cycler import cycler from .
import palettes __all__ = ["set_theme", "set", "reset_defaults", "reset_orig", "axes_style", "set_style", "plotting_context", "set_context", "set_palette"] _style_keys = [ "axes.facecolor", "axes.edgecolor", "axes.grid", "axes.axisbelow", "axes.labelcolor", "figure.facecolor", "grid.color", "grid.linestyle", "text.color", "xtick.color", "ytick.color", "xtick.direction", "ytick.direction", "lines.solid_capstyle", "patch.edgecolor", "patch.force_edgecolor", "image.cmap", "font.family", "font.sans-serif", "xtick.bottom", "xtick.top", "ytick.left", "ytick.right", "axes.spines.left", "axes.spines.bottom", "axes.spines.right", "axes.spines.top", ] _context_keys = [ "font.size", "axes.labelsize", "axes.titlesize", "xtick.labelsize", "ytick.labelsize", "legend.fontsize", "axes.linewidth", "grid.linewidth", "lines.linewidth", "lines.markersize", "patch.linewidth", "xtick.major.width", "ytick.major.width", "xtick.minor.width", "ytick.minor.width", "xtick.major.size", "ytick.major.size", "xtick.minor.size", "ytick.minor.size", ] if LooseVersion(mpl.__version__) >= "3.0": _context_keys.append("legend.title_fontsize") def set_theme(context="notebook", style="darkgrid", palette="deep", font="sans-serif", font_scale=1, color_codes=True, rc=None): """ Set aspects of the visual theme for all matplotlib and seaborn plots. This function changes the global defaults for all plots using the :ref:`matplotlib rcParams system `. The theming is decomposed into several distinct sets of parameter values. The options are illustrated in the :doc:`aesthetics <../tutorial/aesthetics>` and :doc:`color palette <../tutorial/color_palettes>` tutorials. Parameters ---------- context : string or dict Scaling parameters, see :func:`plotting_context`. style : string or dict Axes style parameters, see :func:`axes_style`. palette : string or sequence Color palette, see :func:`color_palette`. font : string Font family, see matplotlib font manager.
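``set_theme`` applies the ``context``, ``style``, and ``palette`` layers in turn and updates ``rc`` last, so explicit ``rc`` values override everything else. That layering can be sketched as a plain dict merge; the parameter keys and values here are only illustrative, not real rcParams state:

```python
def resolve_params(context, style, palette_params, rc=None):
    # Later layers win: an explicit rc mapping overrides anything set
    # by the context/style/palette layers, mirroring set_theme's order.
    params = {}
    params.update(context)
    params.update(style)
    params.update(palette_params)
    if rc is not None:
        params.update(rc)
    return params

merged = resolve_params({"font.size": 12}, {"axes.grid": True},
                        {"axes.prop_cycle": "deep"},
                        rc={"font.size": 14})
```

Here ``rc={"font.size": 14}`` wins over the context layer's ``font.size: 12``, while keys the ``rc`` mapping does not mention keep their layer values.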
    font_scale : float, optional
        Separate scaling factor to independently scale the size of the
        font elements.
    color_codes : bool
        If ``True`` and ``palette`` is a seaborn palette, remap the shorthand
        color codes (e.g. "b", "g", "r", etc.) to the colors from this
        palette.
    rc : dict or None
        Dictionary of rc parameter mappings to override the above.

    Examples
    --------

    .. include:: ../docstrings/set_theme.rst

    """
    set_context(context, font_scale)
    set_style(style, rc={"font.family": font})
    set_palette(palette, color_codes=color_codes)
    if rc is not None:
        mpl.rcParams.update(rc)


def set(*args, **kwargs):
    """
    Alias for :func:`set_theme`, which is the preferred interface.

    This function may be removed in the future.
    """
    set_theme(*args, **kwargs)


def reset_defaults():
    """Restore all RC params to default settings."""
    mpl.rcParams.update(mpl.rcParamsDefault)


def reset_orig():
    """Restore all RC params to original settings (respects custom rc)."""
    from . import _orig_rc_params
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', mpl.cbook.MatplotlibDeprecationWarning)
        mpl.rcParams.update(_orig_rc_params)


def axes_style(style=None, rc=None):
    """
    Get the parameters that control the general style of the plots.

    The style parameters control properties like the color of the background
    and whether a grid is enabled by default. This is accomplished using the
    :ref:`matplotlib rcParams system `.

    The options are illustrated in the
    :doc:`aesthetics tutorial <../tutorial/aesthetics>`.

    This function can also be used as a context manager to temporarily alter
    the global defaults. See :func:`set_theme` or :func:`set_style` to modify
    the global defaults for all plots.

    Parameters
    ----------
    style : None, dict, or one of {darkgrid, whitegrid, dark, white, ticks}
        A dictionary of parameters or the name of a preconfigured style.
    rc : dict, optional
        Parameter mappings to override the values in the preset seaborn
        style dictionaries.
This only updates parameters that are considered part of the style definition. Examples -------- .. include:: ../docstrings/axes_style.rst """ if style is None: style_dict = {k: mpl.rcParams[k] for k in _style_keys} elif isinstance(style, dict): style_dict = style else: styles = ["white", "dark", "whitegrid", "darkgrid", "ticks"] if style not in styles: raise ValueError("style must be one of %s" % ", ".join(styles)) # Define colors here dark_gray = ".15" light_gray = ".8" # Common parameters style_dict = { "figure.facecolor": "white", "axes.labelcolor": dark_gray, "xtick.direction": "out", "ytick.direction": "out", "xtick.color": dark_gray, "ytick.color": dark_gray, "axes.axisbelow": True, "grid.linestyle": "-", "text.color": dark_gray, "font.family": ["sans-serif"], "font.sans-serif": ["Arial", "DejaVu Sans", "Liberation Sans", "Bitstream Vera Sans", "sans-serif"], "lines.solid_capstyle": "round", "patch.edgecolor": "w", "patch.force_edgecolor": True, "image.cmap": "rocket", "xtick.top": False, "ytick.right": False, } # Set grid on or off if "grid" in style: style_dict.update({ "axes.grid": True, }) else: style_dict.update({ "axes.grid": False, }) # Set the color of the background, spines, and grids if style.startswith("dark"): style_dict.update({ "axes.facecolor": "#EAEAF2", "axes.edgecolor": "white", "grid.color": "white", "axes.spines.left": True, "axes.spines.bottom": True, "axes.spines.right": True, "axes.spines.top": True, }) elif style == "whitegrid": style_dict.update({ "axes.facecolor": "white", "axes.edgecolor": light_gray, "grid.color": light_gray, "axes.spines.left": True, "axes.spines.bottom": True, "axes.spines.right": True, "axes.spines.top": True, }) elif style in ["white", "ticks"]: style_dict.update({ "axes.facecolor": "white", "axes.edgecolor": dark_gray, "grid.color": light_gray, "axes.spines.left": True, "axes.spines.bottom": True, "axes.spines.right": True, "axes.spines.top": True, }) # Show or hide the axes ticks if style == "ticks": 
            style_dict.update({
                "xtick.bottom": True,
                "ytick.left": True,
            })
        else:
            style_dict.update({
                "xtick.bottom": False,
                "ytick.left": False,
            })

        # Remove entries that are not defined in the base list of valid keys
        # This lets us handle matplotlib <=/> 2.0
        style_dict = {k: v for k, v in style_dict.items() if k in _style_keys}

    # Override these settings with the provided rc dictionary
    if rc is not None:
        rc = {k: v for k, v in rc.items() if k in _style_keys}
        style_dict.update(rc)

    # Wrap in an _AxesStyle object so this can be used in a with statement
    style_object = _AxesStyle(style_dict)

    return style_object


def set_style(style=None, rc=None):
    """
    Set the parameters that control the general style of the plots.

    The style parameters control properties like the color of the background
    and whether a grid is enabled by default. This is accomplished using the
    :ref:`matplotlib rcParams system `.

    The options are illustrated in the
    :doc:`aesthetics tutorial <../tutorial/aesthetics>`.

    See :func:`axes_style` to get the parameter values.

    Parameters
    ----------
    style : dict, or one of {darkgrid, whitegrid, dark, white, ticks}
        A dictionary of parameters or the name of a preconfigured style.
    rc : dict, optional
        Parameter mappings to override the values in the preset seaborn
        style dictionaries. This only updates parameters that are
        considered part of the style definition.

    Examples
    --------

    .. include:: ../docstrings/set_style.rst

    """
    style_object = axes_style(style, rc)
    mpl.rcParams.update(style_object)


def plotting_context(context=None, font_scale=1, rc=None):
    """
    Get the parameters that control the scaling of plot elements.

    This affects things like the size of the labels, lines, and other elements
    of the plot, but not the overall style. This is accomplished using the
    :ref:`matplotlib rcParams system `.

    The base context is "notebook", and the other contexts are "paper", "talk",
    and "poster", which are versions of the notebook parameters scaled by
    different values.
Font elements can also be scaled independently of (but relative to) the other values. This function can also be used as a context manager to temporarily alter the global defaults. See :func:`set_theme` or :func:`set_context` to modify the global defaults for all plots. Parameters ---------- context : None, dict, or one of {paper, notebook, talk, poster} A dictionary of parameters or the name of a preconfigured set. font_scale : float, optional Separate scaling factor to independently scale the size of the font elements. rc : dict, optional Parameter mappings to override the values in the preset seaborn context dictionaries. This only updates parameters that are considered part of the context definition. Examples -------- .. include:: ../docstrings/plotting_context.rst """ if context is None: context_dict = {k: mpl.rcParams[k] for k in _context_keys} elif isinstance(context, dict): context_dict = context else: contexts = ["paper", "notebook", "talk", "poster"] if context not in contexts: raise ValueError("context must be in %s" % ", ".join(contexts)) # Set up dictionary of default parameters texts_base_context = { "font.size": 12, "axes.labelsize": 12, "axes.titlesize": 12, "xtick.labelsize": 11, "ytick.labelsize": 11, "legend.fontsize": 11, } if LooseVersion(mpl.__version__) >= "3.0": texts_base_context["legend.title_fontsize"] = 12 base_context = { "axes.linewidth": 1.25, "grid.linewidth": 1, "lines.linewidth": 1.5, "lines.markersize": 6, "patch.linewidth": 1, "xtick.major.width": 1.25, "ytick.major.width": 1.25, "xtick.minor.width": 1, "ytick.minor.width": 1, "xtick.major.size": 6, "ytick.major.size": 6, "xtick.minor.size": 4, "ytick.minor.size": 4, } base_context.update(texts_base_context) # Scale all the parameters by the same factor depending on the context scaling = dict(paper=.8, notebook=1, talk=1.5, poster=2)[context] context_dict = {k: v * scaling for k, v in base_context.items()} # Now independently scale the fonts font_keys = texts_base_context.keys() 
    font_dict = {k: context_dict[k] * font_scale for k in font_keys}
    context_dict.update(font_dict)

    # Override these settings with the provided rc dictionary
    if rc is not None:
        rc = {k: v for k, v in rc.items() if k in _context_keys}
        context_dict.update(rc)

    # Wrap in a _PlottingContext object so this can be used in a with statement
    context_object = _PlottingContext(context_dict)

    return context_object


def set_context(context=None, font_scale=1, rc=None):
    """
    Set the parameters that control the scaling of plot elements.

    This affects things like the size of the labels, lines, and other elements
    of the plot, but not the overall style. This is accomplished using the
    :ref:`matplotlib rcParams system `.

    The base context is "notebook", and the other contexts are "paper", "talk",
    and "poster", which are versions of the notebook parameters scaled by
    different values. Font elements can also be scaled independently of (but
    relative to) the other values.

    See :func:`plotting_context` to get the parameter values.

    Parameters
    ----------
    context : dict, or one of {paper, notebook, talk, poster}
        A dictionary of parameters or the name of a preconfigured set.
    font_scale : float, optional
        Separate scaling factor to independently scale the size of the
        font elements.
    rc : dict, optional
        Parameter mappings to override the values in the preset seaborn
        context dictionaries. This only updates parameters that are
        considered part of the context definition.

    Examples
    --------

    ..
    include:: ../docstrings/set_context.rst

    """
    context_object = plotting_context(context, font_scale, rc)
    mpl.rcParams.update(context_object)


class _RCAesthetics(dict):
    def __enter__(self):
        rc = mpl.rcParams
        self._orig = {k: rc[k] for k in self._keys}
        self._set(self)

    def __exit__(self, exc_type, exc_value, exc_tb):
        self._set(self._orig)

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with self:
                return func(*args, **kwargs)
        return wrapper


class _AxesStyle(_RCAesthetics):
    """Light wrapper on a dict to set style temporarily."""
    _keys = _style_keys
    _set = staticmethod(set_style)


class _PlottingContext(_RCAesthetics):
    """Light wrapper on a dict to set context temporarily."""
    _keys = _context_keys
    _set = staticmethod(set_context)


def set_palette(palette, n_colors=None, desat=None, color_codes=False):
    """Set the matplotlib color cycle using a seaborn palette.

    Parameters
    ----------
    palette : seaborn color palette | matplotlib colormap | hls | husl
        Palette definition. Should be something that :func:`color_palette`
        can process.
    n_colors : int
        Number of colors in the cycle. The default number of colors will
        depend on the format of ``palette``, see the :func:`color_palette`
        documentation for more information.
    desat : float
        Proportion to desaturate each color by.
    color_codes : bool
        If ``True`` and ``palette`` is a seaborn palette, remap the shorthand
        color codes (e.g. "b", "g", "r", etc.) to the colors from this
        palette.

    Examples
    --------
    >>> set_palette("Reds")

    >>> set_palette("Set1", 8, .75)

    See Also
    --------
    color_palette : build a color palette or set the color cycle temporarily
                    in a ``with`` statement.
    set_context : set parameters to scale plot elements
    set_style : set the default parameters for figure style

    """
    colors = palettes.color_palette(palette, n_colors, desat)
    cyl = cycler('color', colors)
    mpl.rcParams['axes.prop_cycle'] = cyl
    mpl.rcParams["patch.facecolor"] = colors[0]
    if color_codes:
        try:
            palettes.set_color_codes(palette)
        except (ValueError, TypeError):
            pass

seaborn-0.11.2/seaborn/regression.py
"""Plotting functions for linear models (broadly construed)."""
import copy
from textwrap import dedent
import warnings

import numpy as np
import pandas as pd
from scipy.spatial import distance
import matplotlib as mpl
import matplotlib.pyplot as plt

try:
    import statsmodels
    assert statsmodels
    _has_statsmodels = True
except ImportError:
    _has_statsmodels = False

from . import utils
from . import algorithms as algo
from .axisgrid import FacetGrid, _facet_docs
from ._decorators import _deprecate_positional_args


__all__ = ["lmplot", "regplot", "residplot"]


class _LinearPlotter(object):
    """Base class for plotting relational data in tidy format.

    To get anything useful done you'll have to inherit from this, but setup
    code that can be abstracted out should be put here.
""" def establish_variables(self, data, **kws): """Extract variables from data or use directly.""" self.data = data # Validate the inputs any_strings = any([isinstance(v, str) for v in kws.values()]) if any_strings and data is None: raise ValueError("Must pass `data` if using named variables.") # Set the variables for var, val in kws.items(): if isinstance(val, str): vector = data[val] elif isinstance(val, list): vector = np.asarray(val) else: vector = val if vector is not None and vector.shape != (1,): vector = np.squeeze(vector) if np.ndim(vector) > 1: err = "regplot inputs must be 1d" raise ValueError(err) setattr(self, var, vector) def dropna(self, *vars): """Remove observations with missing data.""" vals = [getattr(self, var) for var in vars] vals = [v for v in vals if v is not None] not_na = np.all(np.column_stack([pd.notnull(v) for v in vals]), axis=1) for var in vars: val = getattr(self, var) if val is not None: setattr(self, var, val[not_na]) def plot(self, ax): raise NotImplementedError class _RegressionPlotter(_LinearPlotter): """Plotter for numeric independent variables with regression model. This does the computations and drawing for the `regplot` function, and is thus also used indirectly by `lmplot`. 
""" def __init__(self, x, y, data=None, x_estimator=None, x_bins=None, x_ci="ci", scatter=True, fit_reg=True, ci=95, n_boot=1000, units=None, seed=None, order=1, logistic=False, lowess=False, robust=False, logx=False, x_partial=None, y_partial=None, truncate=False, dropna=True, x_jitter=None, y_jitter=None, color=None, label=None): # Set member attributes self.x_estimator = x_estimator self.ci = ci self.x_ci = ci if x_ci == "ci" else x_ci self.n_boot = n_boot self.seed = seed self.scatter = scatter self.fit_reg = fit_reg self.order = order self.logistic = logistic self.lowess = lowess self.robust = robust self.logx = logx self.truncate = truncate self.x_jitter = x_jitter self.y_jitter = y_jitter self.color = color self.label = label # Validate the regression options: if sum((order > 1, logistic, robust, lowess, logx)) > 1: raise ValueError("Mutually exclusive regression options.") # Extract the data vals from the arguments or passed dataframe self.establish_variables(data, x=x, y=y, units=units, x_partial=x_partial, y_partial=y_partial) # Drop null observations if dropna: self.dropna("x", "y", "units", "x_partial", "y_partial") # Regress nuisance variables out of the data if self.x_partial is not None: self.x = self.regress_out(self.x, self.x_partial) if self.y_partial is not None: self.y = self.regress_out(self.y, self.y_partial) # Possibly bin the predictor variable, which implies a point estimate if x_bins is not None: self.x_estimator = np.mean if x_estimator is None else x_estimator x_discrete, x_bins = self.bin_predictor(x_bins) self.x_discrete = x_discrete else: self.x_discrete = self.x # Disable regression in case of singleton inputs if len(self.x) <= 1: self.fit_reg = False # Save the range of the x variable for the grid later if self.fit_reg: self.x_range = self.x.min(), self.x.max() @property def scatter_data(self): """Data where each observation is a point.""" x_j = self.x_jitter if x_j is None: x = self.x else: x = self.x + np.random.uniform(-x_j, x_j, 
len(self.x)) y_j = self.y_jitter if y_j is None: y = self.y else: y = self.y + np.random.uniform(-y_j, y_j, len(self.y)) return x, y @property def estimate_data(self): """Data with a point estimate and CI for each discrete x value.""" x, y = self.x_discrete, self.y vals = sorted(np.unique(x)) points, cis = [], [] for val in vals: # Get the point estimate of the y variable _y = y[x == val] est = self.x_estimator(_y) points.append(est) # Compute the confidence interval for this estimate if self.x_ci is None: cis.append(None) else: units = None if self.x_ci == "sd": sd = np.std(_y) _ci = est - sd, est + sd else: if self.units is not None: units = self.units[x == val] boots = algo.bootstrap(_y, func=self.x_estimator, n_boot=self.n_boot, units=units, seed=self.seed) _ci = utils.ci(boots, self.x_ci) cis.append(_ci) return vals, points, cis def fit_regression(self, ax=None, x_range=None, grid=None): """Fit the regression model.""" # Create the grid for the regression if grid is None: if self.truncate: x_min, x_max = self.x_range else: if ax is None: x_min, x_max = x_range else: x_min, x_max = ax.get_xlim() grid = np.linspace(x_min, x_max, 100) ci = self.ci # Fit the regression if self.order > 1: yhat, yhat_boots = self.fit_poly(grid, self.order) elif self.logistic: from statsmodels.genmod.generalized_linear_model import GLM from statsmodels.genmod.families import Binomial yhat, yhat_boots = self.fit_statsmodels(grid, GLM, family=Binomial()) elif self.lowess: ci = None grid, yhat = self.fit_lowess() elif self.robust: from statsmodels.robust.robust_linear_model import RLM yhat, yhat_boots = self.fit_statsmodels(grid, RLM) elif self.logx: yhat, yhat_boots = self.fit_logx(grid) else: yhat, yhat_boots = self.fit_fast(grid) # Compute the confidence interval at each grid point if ci is None: err_bands = None else: err_bands = utils.ci(yhat_boots, ci, axis=0) return grid, yhat, err_bands def fit_fast(self, grid): """Low-level regression and prediction using linear algebra.""" def 
reg_func(_x, _y): return np.linalg.pinv(_x).dot(_y) X, y = np.c_[np.ones(len(self.x)), self.x], self.y grid = np.c_[np.ones(len(grid)), grid] yhat = grid.dot(reg_func(X, y)) if self.ci is None: return yhat, None beta_boots = algo.bootstrap(X, y, func=reg_func, n_boot=self.n_boot, units=self.units, seed=self.seed).T yhat_boots = grid.dot(beta_boots).T return yhat, yhat_boots def fit_poly(self, grid, order): """Regression using numpy polyfit for higher-order trends.""" def reg_func(_x, _y): return np.polyval(np.polyfit(_x, _y, order), grid) x, y = self.x, self.y yhat = reg_func(x, y) if self.ci is None: return yhat, None yhat_boots = algo.bootstrap(x, y, func=reg_func, n_boot=self.n_boot, units=self.units, seed=self.seed) return yhat, yhat_boots def fit_statsmodels(self, grid, model, **kwargs): """More general regression function using statsmodels objects.""" import statsmodels.genmod.generalized_linear_model as glm X, y = np.c_[np.ones(len(self.x)), self.x], self.y grid = np.c_[np.ones(len(grid)), grid] def reg_func(_x, _y): try: yhat = model(_y, _x, **kwargs).fit().predict(grid) except glm.PerfectSeparationError: yhat = np.empty(len(grid)) yhat.fill(np.nan) return yhat yhat = reg_func(X, y) if self.ci is None: return yhat, None yhat_boots = algo.bootstrap(X, y, func=reg_func, n_boot=self.n_boot, units=self.units, seed=self.seed) return yhat, yhat_boots def fit_lowess(self): """Fit a locally-weighted regression, which returns its own grid.""" from statsmodels.nonparametric.smoothers_lowess import lowess grid, yhat = lowess(self.y, self.x).T return grid, yhat def fit_logx(self, grid): """Fit the model in log-space.""" X, y = np.c_[np.ones(len(self.x)), self.x], self.y grid = np.c_[np.ones(len(grid)), np.log(grid)] def reg_func(_x, _y): _x = np.c_[_x[:, 0], np.log(_x[:, 1])] return np.linalg.pinv(_x).dot(_y) yhat = grid.dot(reg_func(X, y)) if self.ci is None: return yhat, None beta_boots = algo.bootstrap(X, y, func=reg_func, n_boot=self.n_boot, units=self.units, 
seed=self.seed).T yhat_boots = grid.dot(beta_boots).T return yhat, yhat_boots def bin_predictor(self, bins): """Discretize a predictor by assigning value to closest bin.""" x = self.x if np.isscalar(bins): percentiles = np.linspace(0, 100, bins + 2)[1:-1] bins = np.c_[np.percentile(x, percentiles)] else: bins = np.c_[np.ravel(bins)] dist = distance.cdist(np.c_[x], bins) x_binned = bins[np.argmin(dist, axis=1)].ravel() return x_binned, bins.ravel() def regress_out(self, a, b): """Regress b from a keeping a's original mean.""" a_mean = a.mean() a = a - a_mean b = b - b.mean() b = np.c_[b] a_prime = a - b.dot(np.linalg.pinv(b).dot(a)) return np.asarray(a_prime + a_mean).reshape(a.shape) def plot(self, ax, scatter_kws, line_kws): """Draw the full plot.""" # Insert the plot label into the correct set of keyword arguments if self.scatter: scatter_kws["label"] = self.label else: line_kws["label"] = self.label # Use the current color cycle state as a default if self.color is None: lines, = ax.plot([], []) color = lines.get_color() lines.remove() else: color = self.color # Ensure that color is hex to avoid matplotlib weirdness color = mpl.colors.rgb2hex(mpl.colors.colorConverter.to_rgb(color)) # Let color in keyword arguments override overall plot color scatter_kws.setdefault("color", color) line_kws.setdefault("color", color) # Draw the constituent plots if self.scatter: self.scatterplot(ax, scatter_kws) if self.fit_reg: self.lineplot(ax, line_kws) # Label the axes if hasattr(self.x, "name"): ax.set_xlabel(self.x.name) if hasattr(self.y, "name"): ax.set_ylabel(self.y.name) def scatterplot(self, ax, kws): """Draw the data.""" # Treat the line-based markers specially, explicitly setting larger # linewidth than is provided by the seaborn style defaults. # This would ideally be handled better in matplotlib (i.e., distinguish # between edgewidth for solid glyphs and linewidth for line glyphs # but this should do for now. 
line_markers = ["1", "2", "3", "4", "+", "x", "|", "_"] if self.x_estimator is None: if "marker" in kws and kws["marker"] in line_markers: lw = mpl.rcParams["lines.linewidth"] else: lw = mpl.rcParams["lines.markeredgewidth"] kws.setdefault("linewidths", lw) if not hasattr(kws['color'], 'shape') or kws['color'].shape[1] < 4: kws.setdefault("alpha", .8) x, y = self.scatter_data ax.scatter(x, y, **kws) else: # TODO abstraction ci_kws = {"color": kws["color"]} ci_kws["linewidth"] = mpl.rcParams["lines.linewidth"] * 1.75 kws.setdefault("s", 50) xs, ys, cis = self.estimate_data if [ci for ci in cis if ci is not None]: for x, ci in zip(xs, cis): ax.plot([x, x], ci, **ci_kws) ax.scatter(xs, ys, **kws) def lineplot(self, ax, kws): """Draw the model.""" # Fit the regression model grid, yhat, err_bands = self.fit_regression(ax) edges = grid[0], grid[-1] # Get set default aesthetics fill_color = kws["color"] lw = kws.pop("lw", mpl.rcParams["lines.linewidth"] * 1.5) kws.setdefault("linewidth", lw) # Draw the regression line and confidence interval line, = ax.plot(grid, yhat, **kws) if not self.truncate: line.sticky_edges.x[:] = edges # Prevent mpl from adding margin if err_bands is not None: ax.fill_between(grid, *err_bands, facecolor=fill_color, alpha=.15) _regression_docs = dict( model_api=dedent("""\ There are a number of mutually exclusive options for estimating the regression model. See the :ref:`tutorial ` for more information.\ """), regplot_vs_lmplot=dedent("""\ The :func:`regplot` and :func:`lmplot` functions are closely related, but the former is an axes-level function while the latter is a figure-level function that combines :func:`regplot` and :class:`FacetGrid`.\ """), x_estimator=dedent("""\ x_estimator : callable that maps vector -> scalar, optional Apply this function to each unique value of ``x`` and plot the resulting estimate. This is useful when ``x`` is a discrete variable. 
        If ``x_ci`` is given, this estimate will be bootstrapped and a
        confidence interval will be drawn.\
    """),
    x_bins=dedent("""\
    x_bins : int or vector, optional
        Bin the ``x`` variable into discrete bins and then estimate the central
        tendency and a confidence interval. This binning only influences how
        the scatterplot is drawn; the regression is still fit to the original
        data. This parameter is interpreted either as the number of
        evenly-sized (not necessarily spaced) bins or the positions of the bin
        centers. When this parameter is used, it implies that the default of
        ``x_estimator`` is ``numpy.mean``.\
    """),
    x_ci=dedent("""\
    x_ci : "ci", "sd", int in [0, 100] or None, optional
        Size of the confidence interval used when plotting a central tendency
        for discrete values of ``x``. If ``"ci"``, defer to the value of the
        ``ci`` parameter. If ``"sd"``, skip bootstrapping and show the
        standard deviation of the observations in each bin.\
    """),
    scatter=dedent("""\
    scatter : bool, optional
        If ``True``, draw a scatterplot with the underlying observations (or
        the ``x_estimator`` values).\
    """),
    fit_reg=dedent("""\
    fit_reg : bool, optional
        If ``True``, estimate and plot a regression model relating the ``x``
        and ``y`` variables.\
    """),
    ci=dedent("""\
    ci : int in [0, 100] or None, optional
        Size of the confidence interval for the regression estimate. This will
        be drawn using translucent bands around the regression line. The
        confidence interval is estimated using a bootstrap; for large
        datasets, it may be advisable to avoid that computation by setting
        this parameter to None.\
    """),
    n_boot=dedent("""\
    n_boot : int, optional
        Number of bootstrap resamples used to estimate the ``ci``. The default
        value attempts to balance time and stability; you may want to increase
        this value for "final" versions of plots.\
    """),
    units=dedent("""\
    units : variable name in ``data``, optional
        If the ``x`` and ``y`` observations are nested within sampling units,
        those can be specified here.
This will be taken into account when computing the confidence intervals by performing a multilevel bootstrap that resamples both units and observations (within unit). This does not otherwise influence how the regression is estimated or drawn.\ """), seed=dedent("""\ seed : int, numpy.random.Generator, or numpy.random.RandomState, optional Seed or random number generator for reproducible bootstrapping.\ """), order=dedent("""\ order : int, optional If ``order`` is greater than 1, use ``numpy.polyfit`` to estimate a polynomial regression.\ """), logistic=dedent("""\ logistic : bool, optional If ``True``, assume that ``y`` is a binary variable and use ``statsmodels`` to estimate a logistic regression model. Note that this is substantially more computationally intensive than linear regression, so you may wish to decrease the number of bootstrap resamples (``n_boot``) or set ``ci`` to None.\ """), lowess=dedent("""\ lowess : bool, optional If ``True``, use ``statsmodels`` to estimate a nonparametric lowess model (locally weighted linear regression). Note that confidence intervals cannot currently be drawn for this kind of model.\ """), robust=dedent("""\ robust : bool, optional If ``True``, use ``statsmodels`` to estimate a robust regression. This will de-weight outliers. Note that this is substantially more computationally intensive than standard linear regression, so you may wish to decrease the number of bootstrap resamples (``n_boot``) or set ``ci`` to None.\ """), logx=dedent("""\ logx : bool, optional If ``True``, estimate a linear regression of the form y ~ log(x), but plot the scatterplot and regression model in the input space. Note that ``x`` must be positive for this to work.\ """), xy_partial=dedent("""\ {x,y}_partial : strings in ``data`` or matrices Confounding variables to regress out of the ``x`` or ``y`` variables before plotting.\ """), truncate=dedent("""\ truncate : bool, optional If ``True``, the regression line is bounded by the data limits. 
If ``False``, it extends to the ``x`` axis limits. """), xy_jitter=dedent("""\ {x,y}_jitter : floats, optional Add uniform random noise of this size to either the ``x`` or ``y`` variables. The noise is added to a copy of the data after fitting the regression, and only influences the look of the scatterplot. This can be helpful when plotting variables that take discrete values.\ """), scatter_line_kws=dedent("""\ {scatter,line}_kws : dictionaries Additional keyword arguments to pass to ``plt.scatter`` and ``plt.plot``.\ """), ) _regression_docs.update(_facet_docs) @_deprecate_positional_args def lmplot( *, x=None, y=None, data=None, hue=None, col=None, row=None, # TODO move before data once * is enforced palette=None, col_wrap=None, height=5, aspect=1, markers="o", sharex=None, sharey=None, hue_order=None, col_order=None, row_order=None, legend=True, legend_out=None, x_estimator=None, x_bins=None, x_ci="ci", scatter=True, fit_reg=True, ci=95, n_boot=1000, units=None, seed=None, order=1, logistic=False, lowess=False, robust=False, logx=False, x_partial=None, y_partial=None, truncate=True, x_jitter=None, y_jitter=None, scatter_kws=None, line_kws=None, facet_kws=None, size=None, ): # Handle deprecations if size is not None: height = size msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) if facet_kws is None: facet_kws = {} def facet_kw_deprecation(key, val): msg = ( f"{key} is deprecated from the `lmplot` function signature. " "Please update your code to pass it using `facet_kws`." 
        )
        if val is not None:
            warnings.warn(msg, UserWarning)
            facet_kws[key] = val

    facet_kw_deprecation("sharex", sharex)
    facet_kw_deprecation("sharey", sharey)
    facet_kw_deprecation("legend_out", legend_out)

    if data is None:
        raise TypeError("Missing required keyword argument `data`.")

    # Reduce the dataframe to only needed columns
    need_cols = [x, y, hue, col, row, units, x_partial, y_partial]
    cols = np.unique([a for a in need_cols if a is not None]).tolist()
    data = data[cols]

    # Initialize the grid
    facets = FacetGrid(
        data, row=row, col=col, hue=hue,
        palette=palette,
        row_order=row_order, col_order=col_order, hue_order=hue_order,
        height=height, aspect=aspect, col_wrap=col_wrap,
        **facet_kws,
    )

    # Add the markers here as FacetGrid has figured out how many levels of the
    # hue variable are needed and we don't want to duplicate that process
    if facets.hue_names is None:
        n_markers = 1
    else:
        n_markers = len(facets.hue_names)
    if not isinstance(markers, list):
        markers = [markers] * n_markers
    if len(markers) != n_markers:
        raise ValueError(("markers must be a singleton or a list of markers "
                          "for each level of the hue variable"))
    facets.hue_kws = {"marker": markers}

    def update_datalim(data, x, y, ax, **kws):
        xys = np.asarray(data[[x, y]]).astype(float)
        ax.update_datalim(xys, updatey=False)
        ax.autoscale_view(scaley=False)

    facets.map_dataframe(update_datalim, x=x, y=y)

    # Draw the regression plot on each facet
    regplot_kws = dict(
        x_estimator=x_estimator, x_bins=x_bins, x_ci=x_ci,
        scatter=scatter, fit_reg=fit_reg, ci=ci, n_boot=n_boot, units=units,
        seed=seed, order=order, logistic=logistic, lowess=lowess,
        robust=robust, logx=logx, x_partial=x_partial, y_partial=y_partial,
        truncate=truncate, x_jitter=x_jitter, y_jitter=y_jitter,
        scatter_kws=scatter_kws, line_kws=line_kws,
    )
    facets.map_dataframe(regplot, x=x, y=y, **regplot_kws)
    facets.set_axis_labels(x, y)

    # Add a legend
    if legend and (hue is not None) and (hue not in [col, row]):
        facets.add_legend()
    return facets


lmplot.__doc__ = dedent("""\
Plot data and regression model fits across a FacetGrid. This function combines :func:`regplot` and :class:`FacetGrid`. It is intended as a convenient interface to fit regression models across conditional subsets of a dataset. When thinking about how to assign variables to different facets, a general rule is that it makes sense to use ``hue`` for the most important comparison, followed by ``col`` and ``row``. However, always think about your particular dataset and the goals of the visualization you are creating. {model_api} The parameters to this function span most of the options in :class:`FacetGrid`, although there may be occasional cases where you will want to use that class and :func:`regplot` directly. Parameters ---------- x, y : strings, optional Input variables; these should be column names in ``data``. {data} hue, col, row : strings Variables that define subsets of the data, which will be drawn on separate facets in the grid. See the ``*_order`` parameters to control the order of levels of this variable. {palette} {col_wrap} {height} {aspect} markers : matplotlib marker code or list of marker codes, optional Markers for the scatterplot. If a list, each marker in the list will be used for each level of the ``hue`` variable. {share_xy} .. deprecated:: 0.12.0 Pass using the `facet_kws` dictionary. {{hue,col,row}}_order : lists, optional Order for the levels of the faceting variables. By default, this will be the order that the levels appear in ``data`` or, if the variables are pandas categoricals, the category order. legend : bool, optional If ``True`` and there is a ``hue`` variable, add a legend. {legend_out} .. deprecated:: 0.12.0 Pass using the `facet_kws` dictionary. {x_estimator} {x_bins} {x_ci} {scatter} {fit_reg} {ci} {n_boot} {units} {seed} {order} {logistic} {lowess} {robust} {logx} {xy_partial} {truncate} {xy_jitter} {scatter_line_kws} facet_kws : dict Dictionary of keyword arguments for :class:`FacetGrid`. 
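The ``markers`` parameter described above is broadcast over the levels of the hue variable: a single marker code is repeated once per level, while a list must match the number of levels exactly. This can be sketched with a hypothetical standalone helper mirroring that validation (not part of the seaborn API):

```python
# Hypothetical helper mirroring lmplot's broadcasting/validation of ``markers``
# over the levels of the hue variable.
def broadcast_markers(markers, hue_names):
    n = 1 if hue_names is None else len(hue_names)
    if not isinstance(markers, list):
        # A single marker code is repeated for every hue level
        markers = [markers] * n
    if len(markers) != n:
        raise ValueError("markers must be a singleton or a list of markers "
                         "for each level of the hue variable")
    return markers

print(broadcast_markers("o", ["Yes", "No"]))
print(broadcast_markers(["o", "x"], ["Yes", "No"]))
```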
See Also -------- regplot : Plot data and a conditional model fit. FacetGrid : Subplot grid for plotting conditional relationships. pairplot : Combine :func:`regplot` and :class:`PairGrid` (when used with ``kind="reg"``). Notes ----- {regplot_vs_lmplot} Examples -------- These examples focus on basic regression model plots to exhibit the various faceting options; see the :func:`regplot` docs for demonstrations of the other options for plotting the data and models. There are also other examples for how to manipulate the plot using the returned object on the :class:`FacetGrid` docs. Plot a simple linear relationship between two variables: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set_theme(color_codes=True) >>> tips = sns.load_dataset("tips") >>> g = sns.lmplot(x="total_bill", y="tip", data=tips) Condition on a third variable and plot the levels in different colors: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips) Use different markers as well as colors so the plot will reproduce well in black and white: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips, ... markers=["o", "x"]) Use a different color palette: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips, ... palette="Set1") Map ``hue`` levels to colors with a dictionary: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips, ... palette=dict(Yes="g", No="m")) Plot the levels of the third variable across different columns: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", col="smoker", data=tips) Change the height and aspect ratio of the facets: .. plot:: :context: close-figs >>> g = sns.lmplot(x="size", y="total_bill", hue="day", col="day", ... data=tips, height=6, aspect=.4, x_jitter=.1) Wrap the levels of the column variable into multiple rows: ..
plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", col="day", hue="day", ... data=tips, col_wrap=2, height=3) Condition on two variables to make a full grid: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", row="sex", col="time", ... data=tips, height=3) Use methods on the returned :class:`FacetGrid` instance to further tweak the plot: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", row="sex", col="time", ... data=tips, height=3) >>> g = (g.set_axis_labels("Total bill (US Dollars)", "Tip") ... .set(xlim=(0, 60), ylim=(0, 12), ... xticks=[10, 30, 50], yticks=[2, 6, 10]) ... .fig.subplots_adjust(wspace=.02)) """).format(**_regression_docs) @_deprecate_positional_args def regplot( *, x=None, y=None, data=None, x_estimator=None, x_bins=None, x_ci="ci", scatter=True, fit_reg=True, ci=95, n_boot=1000, units=None, seed=None, order=1, logistic=False, lowess=False, robust=False, logx=False, x_partial=None, y_partial=None, truncate=True, dropna=True, x_jitter=None, y_jitter=None, label=None, color=None, marker="o", scatter_kws=None, line_kws=None, ax=None ): plotter = _RegressionPlotter(x, y, data, x_estimator, x_bins, x_ci, scatter, fit_reg, ci, n_boot, units, seed, order, logistic, lowess, robust, logx, x_partial, y_partial, truncate, dropna, x_jitter, y_jitter, color, label) if ax is None: ax = plt.gca() scatter_kws = {} if scatter_kws is None else copy.copy(scatter_kws) scatter_kws["marker"] = marker line_kws = {} if line_kws is None else copy.copy(line_kws) plotter.plot(ax, scatter_kws, line_kws) return ax regplot.__doc__ = dedent("""\ Plot data and a linear regression model fit. {model_api} Parameters ---------- x, y: string, series, or vector array Input variables. If strings, these should correspond with column names in ``data``. When pandas objects are used, axes will be labeled with the series name. 
{data} {x_estimator} {x_bins} {x_ci} {scatter} {fit_reg} {ci} {n_boot} {units} {seed} {order} {logistic} {lowess} {robust} {logx} {xy_partial} {truncate} {xy_jitter} label : string Label to apply to either the scatterplot or regression line (if ``scatter`` is ``False``) for use in a legend. color : matplotlib color Color to apply to all plot elements; will be superseded by colors passed in ``scatter_kws`` or ``line_kws``. marker : matplotlib marker code Marker to use for the scatterplot glyphs. {scatter_line_kws} ax : matplotlib Axes, optional Axes object to draw the plot onto, otherwise uses the current Axes. Returns ------- ax : matplotlib Axes The Axes object containing the plot. See Also -------- lmplot : Combine :func:`regplot` and :class:`FacetGrid` to plot multiple linear relationships in a dataset. jointplot : Combine :func:`regplot` and :class:`JointGrid` (when used with ``kind="reg"``). pairplot : Combine :func:`regplot` and :class:`PairGrid` (when used with ``kind="reg"``). residplot : Plot the residuals of a linear regression model. Notes ----- {regplot_vs_lmplot} It's also easy to combine :func:`regplot` and :class:`JointGrid` or :class:`PairGrid` through the :func:`jointplot` and :func:`pairplot` functions, although these do not directly accept all of :func:`regplot`'s parameters. Examples -------- Plot the relationship between two variables in a DataFrame: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set_theme(color_codes=True) >>> tips = sns.load_dataset("tips") >>> ax = sns.regplot(x="total_bill", y="tip", data=tips) Plot with two variables defined as numpy arrays; use a different color: .. plot:: :context: close-figs >>> import numpy as np; np.random.seed(8) >>> mean, cov = [4, 6], [(1.5, .7), (.7, 1)] >>> x, y = np.random.multivariate_normal(mean, cov, 80).T >>> ax = sns.regplot(x=x, y=y, color="g") Plot with two variables defined as pandas Series; use a different marker: ..
plot:: :context: close-figs >>> import pandas as pd >>> x, y = pd.Series(x, name="x_var"), pd.Series(y, name="y_var") >>> ax = sns.regplot(x=x, y=y, marker="+") Use a 68% confidence interval, which corresponds with the standard error of the estimate, and extend the regression line to the axis limits: .. plot:: :context: close-figs >>> ax = sns.regplot(x=x, y=y, ci=68, truncate=False) Plot with a discrete ``x`` variable and add some jitter: .. plot:: :context: close-figs >>> ax = sns.regplot(x="size", y="total_bill", data=tips, x_jitter=.1) Plot with a discrete ``x`` variable showing means and confidence intervals for unique values: .. plot:: :context: close-figs >>> ax = sns.regplot(x="size", y="total_bill", data=tips, ... x_estimator=np.mean) Plot with a continuous variable divided into discrete bins: .. plot:: :context: close-figs >>> ax = sns.regplot(x=x, y=y, x_bins=4) Fit a higher-order polynomial regression: .. plot:: :context: close-figs >>> ans = sns.load_dataset("anscombe") >>> ax = sns.regplot(x="x", y="y", data=ans.loc[ans.dataset == "II"], ... scatter_kws={{"s": 80}}, ... order=2, ci=None) Fit a robust regression and don't plot a confidence interval: .. plot:: :context: close-figs >>> ax = sns.regplot(x="x", y="y", data=ans.loc[ans.dataset == "III"], ... scatter_kws={{"s": 80}}, ... robust=True, ci=None) Fit a logistic regression; jitter the y variable and use fewer bootstrap iterations: .. plot:: :context: close-figs >>> tips["big_tip"] = (tips.tip / tips.total_bill) > .175 >>> ax = sns.regplot(x="total_bill", y="big_tip", data=tips, ... logistic=True, n_boot=500, y_jitter=.03) Fit the regression model using log(x): .. plot:: :context: close-figs >>> ax = sns.regplot(x="size", y="total_bill", data=tips, ... 
x_estimator=np.mean, logx=True) """).format(**_regression_docs) @_deprecate_positional_args def residplot( *, x=None, y=None, data=None, lowess=False, x_partial=None, y_partial=None, order=1, robust=False, dropna=True, label=None, color=None, scatter_kws=None, line_kws=None, ax=None ): """Plot the residuals of a linear regression. This function will regress y on x (possibly as a robust or polynomial regression) and then draw a scatterplot of the residuals. You can optionally fit a lowess smoother to the residual plot, which can help in determining if there is structure to the residuals. Parameters ---------- x : vector or string Data or column name in `data` for the predictor variable. y : vector or string Data or column name in `data` for the response variable. data : DataFrame, optional DataFrame to use if `x` and `y` are column names. lowess : boolean, optional Fit a lowess smoother to the residual scatterplot. {x, y}_partial : matrix or string(s), optional Matrix with same first dimension as `x`, or column name(s) in `data`. These variables are treated as confounding and are removed from the `x` or `y` variables before plotting. order : int, optional Order of the polynomial to fit when calculating the residuals. robust : boolean, optional Fit a robust linear regression when calculating the residuals. dropna : boolean, optional If True, ignore observations with missing data when fitting and plotting. label : string, optional Label that will be used in any plot legends. color : matplotlib color, optional Color to use for all elements of the plot. {scatter, line}_kws : dictionaries, optional Additional keyword arguments passed to scatter() and plot() for drawing the components of the plot. ax : matplotlib axis, optional Plot into this axis, otherwise grab the current axis or make a new one if it does not exist. Returns ------- ax: matplotlib axes Axes with the regression plot. See Also -------- regplot : Plot a simple linear regression model.
jointplot : Draw a :func:`residplot` with univariate marginal distributions (when used with ``kind="resid"``). """ plotter = _RegressionPlotter(x, y, data, ci=None, order=order, robust=robust, x_partial=x_partial, y_partial=y_partial, dropna=dropna, color=color, label=label) if ax is None: ax = plt.gca() # Calculate the residual from a linear regression _, yhat, _ = plotter.fit_regression(grid=plotter.x) plotter.y = plotter.y - yhat # Set the regression option on the plotter if lowess: plotter.lowess = True else: plotter.fit_reg = False # Plot a horizontal line at 0 ax.axhline(0, ls=":", c=".2") # Draw the scatterplot scatter_kws = {} if scatter_kws is None else scatter_kws.copy() line_kws = {} if line_kws is None else line_kws.copy() plotter.plot(ax, scatter_kws, line_kws) return ax seaborn-0.11.2/seaborn/relational.py import warnings import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt from ._core import ( VectorPlotter, ) from .utils import ( ci_to_errsize, locator_to_legend_entries, adjust_legend_subtitles, ci as ci_func ) from .algorithms import bootstrap from .axisgrid import FacetGrid, _facet_docs from ._decorators import _deprecate_positional_args from ._docstrings import ( DocstringComponents, _core_docs, ) __all__ = ["relplot", "scatterplot", "lineplot"] _relational_narrative = DocstringComponents(dict( # --- Introductory prose main_api=""" The relationship between ``x`` and ``y`` can be shown for different subsets of the data using the ``hue``, ``size``, and ``style`` parameters. These parameters control what visual semantics are used to identify the different subsets. It is possible to show up to three dimensions independently by using all three semantic types, but this style of plot can be hard to interpret and is often ineffective. Using redundant semantics (i.e.
both ``hue`` and ``style`` for the same variable) can be helpful for making graphics more accessible. See the :ref:`tutorial ` for more information. """, relational_semantic=""" The default treatment of the ``hue`` (and to a lesser extent, ``size``) semantic, if present, depends on whether the variable is inferred to represent "numeric" or "categorical" data. In particular, numeric variables are represented with a sequential colormap by default, and the legend entries show regular "ticks" with values that may or may not exist in the data. This behavior can be controlled through various parameters, as described and illustrated below. """, )) _relational_docs = dict( # --- Shared function parameters data_vars=""" x, y : names of variables in ``data`` or vector data Input data variables; must be numeric. Can pass data directly or reference columns in ``data``. """, data=""" data : DataFrame, array, or list of arrays Input data structure. If ``x`` and ``y`` are specified as names, this should be a "long-form" DataFrame containing those columns. Otherwise it is treated as "wide-form" data and grouping variables are ignored. See the examples for the various ways this parameter can be specified and the different effects of each. """, palette=""" palette : string, list, dict, or matplotlib colormap An object that determines how colors are chosen when ``hue`` is used. It can be the name of a seaborn palette or matplotlib colormap, a list of colors (anything matplotlib understands), a dict mapping levels of the ``hue`` variable to colors, or a matplotlib colormap object. """, hue_order=""" hue_order : list Specified order for the appearance of the ``hue`` variable levels, otherwise they are determined from the data. Not relevant when the ``hue`` variable is numeric. """, hue_norm=""" hue_norm : tuple or :class:`matplotlib.colors.Normalize` object Normalization in data units for colormap applied to the ``hue`` variable when it is numeric. Not relevant if it is categorical. 
""", sizes=""" sizes : list, dict, or tuple An object that determines how sizes are chosen when ``size`` is used. It can always be a list of size values or a dict mapping levels of the ``size`` variable to sizes. When ``size`` is numeric, it can also be a tuple specifying the minimum and maximum size to use such that other values are normalized within this range. """, size_order=""" size_order : list Specified order for appearance of the ``size`` variable levels, otherwise they are determined from the data. Not relevant when the ``size`` variable is numeric. """, size_norm=""" size_norm : tuple or Normalize object Normalization in data units for scaling plot objects when the ``size`` variable is numeric. """, dashes=""" dashes : boolean, list, or dictionary Object determining how to draw the lines for different levels of the ``style`` variable. Setting to ``True`` will use default dash codes, or you can pass a list of dash codes or a dictionary mapping levels of the ``style`` variable to dash codes. Setting to ``False`` will use solid lines for all subsets. Dashes are specified as in matplotlib: a tuple of ``(segment, gap)`` lengths, or an empty string to draw a solid line. """, markers=""" markers : boolean, list, or dictionary Object determining how to draw the markers for different levels of the ``style`` variable. Setting to ``True`` will use default markers, or you can pass a list of markers or a dictionary mapping levels of the ``style`` variable to markers. Setting to ``False`` will draw marker-less lines. Markers are specified as in matplotlib. """, style_order=""" style_order : list Specified order for appearance of the ``style`` variable levels otherwise they are determined from the data. Not relevant when the ``style`` variable is numeric. """, units=""" units : vector or key in ``data`` Grouping variable identifying sampling units. When used, a separate line will be drawn for each unit with appropriate semantics, but no legend entry will be added. 
Useful for showing distribution of experimental replicates when exact identities are not needed. """, estimator=""" estimator : name of pandas method or callable or None Method for aggregating across multiple observations of the ``y`` variable at the same ``x`` level. If ``None``, all observations will be drawn. """, ci=""" ci : int or "sd" or None Size of the confidence interval to draw when aggregating with an estimator. "sd" means to draw the standard deviation of the data. Setting to ``None`` will skip bootstrapping. """, n_boot=""" n_boot : int Number of bootstraps to use for computing the confidence interval. """, seed=""" seed : int, numpy.random.Generator, or numpy.random.RandomState Seed or random number generator for reproducible bootstrapping. """, legend=""" legend : "auto", "brief", "full", or False How to draw the legend. If "brief", numeric ``hue`` and ``size`` variables will be represented with a sample of evenly spaced values. If "full", every group will get an entry in the legend. If "auto", choose between brief or full representation based on number of levels. If ``False``, no legend data is added and no legend is drawn. """, ax_in=""" ax : matplotlib Axes Axes object to draw the plot onto, otherwise uses the current Axes. """, ax_out=""" ax : matplotlib Axes Returns the Axes object with the plot drawn onto it. """, ) _param_docs = DocstringComponents.from_nested_components( core=_core_docs["params"], facets=DocstringComponents(_facet_docs), rel=DocstringComponents(_relational_docs), ) class _RelationalPlotter(VectorPlotter): wide_structure = { "x": "@index", "y": "@values", "hue": "@columns", "style": "@columns", } # TODO where best to define default parameters? sort = True def add_legend_data(self, ax): """Add labeled artists to represent the different plot semantics.""" verbosity = self.legend if isinstance(verbosity, str) and verbosity not in ["auto", "brief", "full"]: err = "`legend` must be 'auto', 'brief', 'full', or a boolean." 
raise ValueError(err) elif verbosity is True: verbosity = "auto" legend_kwargs = {} keys = [] # Assign a legend title if there is only going to be one sub-legend, # otherwise, subtitles will be inserted into the texts list with an # invisible handle (which is a hack) titles = { title for title in (self.variables.get(v, None) for v in ["hue", "size", "style"]) if title is not None } if len(titles) == 1: legend_title = titles.pop() else: legend_title = "" title_kws = dict( visible=False, color="w", s=0, linewidth=0, marker="", dashes="" ) def update(var_name, val_name, **kws): key = var_name, val_name if key in legend_kwargs: legend_kwargs[key].update(**kws) else: keys.append(key) legend_kwargs[key] = dict(**kws) # Define the maximum number of ticks to use for "brief" legends brief_ticks = 6 # -- Add a legend for hue semantics brief_hue = self._hue_map.map_type == "numeric" and ( verbosity == "brief" or (verbosity == "auto" and len(self._hue_map.levels) > brief_ticks) ) if brief_hue: if isinstance(self._hue_map.norm, mpl.colors.LogNorm): locator = mpl.ticker.LogLocator(numticks=brief_ticks) else: locator = mpl.ticker.MaxNLocator(nbins=brief_ticks) limits = min(self._hue_map.levels), max(self._hue_map.levels) hue_levels, hue_formatted_levels = locator_to_legend_entries( locator, limits, self.plot_data["hue"].infer_objects().dtype ) elif self._hue_map.levels is None: hue_levels = hue_formatted_levels = [] else: hue_levels = hue_formatted_levels = self._hue_map.levels # Add the hue semantic subtitle if not legend_title and self.variables.get("hue", None) is not None: update((self.variables["hue"], "title"), self.variables["hue"], **title_kws) # Add the hue semantic labels for level, formatted_level in zip(hue_levels, hue_formatted_levels): if level is not None: color = self._hue_map(level) update(self.variables["hue"], formatted_level, color=color) # -- Add a legend for size semantics brief_size = self._size_map.map_type == "numeric" and ( verbosity == "brief" or 
(verbosity == "auto" and len(self._size_map.levels) > brief_ticks) ) if brief_size: # Define how ticks will interpolate between the min/max data values if isinstance(self._size_map.norm, mpl.colors.LogNorm): locator = mpl.ticker.LogLocator(numticks=brief_ticks) else: locator = mpl.ticker.MaxNLocator(nbins=brief_ticks) # Define the min/max data values limits = min(self._size_map.levels), max(self._size_map.levels) size_levels, size_formatted_levels = locator_to_legend_entries( locator, limits, self.plot_data["size"].infer_objects().dtype ) elif self._size_map.levels is None: size_levels = size_formatted_levels = [] else: size_levels = size_formatted_levels = self._size_map.levels # Add the size semantic subtitle if not legend_title and self.variables.get("size", None) is not None: update((self.variables["size"], "title"), self.variables["size"], **title_kws) # Add the size semantic labels for level, formatted_level in zip(size_levels, size_formatted_levels): if level is not None: size = self._size_map(level) update( self.variables["size"], formatted_level, linewidth=size, s=size, ) # -- Add a legend for style semantics # Add the style semantic title if not legend_title and self.variables.get("style", None) is not None: update((self.variables["style"], "title"), self.variables["style"], **title_kws) # Add the style semantic labels if self._style_map.levels is not None: for level in self._style_map.levels: if level is not None: attrs = self._style_map(level) update( self.variables["style"], level, marker=attrs.get("marker", ""), dashes=attrs.get("dashes", ""), ) func = getattr(ax, self._legend_func) legend_data = {} legend_order = [] for key in keys: _, label = key kws = legend_kwargs[key] kws.setdefault("color", ".2") use_kws = {} for attr in self._legend_attributes + ["visible"]: if attr in kws: use_kws[attr] = kws[attr] artist = func([], [], label=label, **use_kws) if self._legend_func == "plot": artist = artist[0] legend_data[key] = artist legend_order.append(key) 
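The loop above populates the legend from empty "proxy" artists created with `func([], [], label=label, ...)`, so legend entries reflect the semantic mappings rather than any plotted data. A minimal standalone sketch of that proxy-artist technique in plain matplotlib (the labels, colors, and title here are illustrative, not seaborn's):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()

# Empty "proxy" artists: they draw nothing, but each one contributes a
# labeled handle that ax.legend() will pick up.
for label, color in [("low", "C0"), ("high", "C1")]:
    ax.plot([], [], color=color, label=label)

legend = ax.legend(title="level")
labels = [text.get_text() for text in legend.get_texts()]
```

seaborn's version generalizes this by keying the handles in `legend_data` by `(variable, level)` and, as the comment above admits, by inserting an invisible handle to render subtitles inside a single legend.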
self.legend_title = legend_title self.legend_data = legend_data self.legend_order = legend_order class _LinePlotter(_RelationalPlotter): _legend_attributes = ["color", "linewidth", "marker", "dashes"] _legend_func = "plot" def __init__( self, *, data=None, variables={}, estimator=None, ci=None, n_boot=None, seed=None, sort=True, err_style=None, err_kws=None, legend=None ): # TODO this is messy, we want the mapping to be agnostic about # the kind of plot to draw, but for the time being we need to set # this information so the SizeMapping can use it self._default_size_range = ( np.r_[.5, 2] * mpl.rcParams["lines.linewidth"] ) super().__init__(data=data, variables=variables) self.estimator = estimator self.ci = ci self.n_boot = n_boot self.seed = seed self.sort = sort self.err_style = err_style self.err_kws = {} if err_kws is None else err_kws self.legend = legend def aggregate(self, vals, grouper, units=None): """Compute an estimate and confidence interval using grouper.""" func = self.estimator ci = self.ci n_boot = self.n_boot seed = self.seed # Define a "null" CI for when we only have one value null_ci = pd.Series(index=["low", "high"], dtype=float) # Function to bootstrap in the context of a pandas group by def bootstrapped_cis(vals): if len(vals) <= 1: return null_ci boots = bootstrap(vals, func=func, n_boot=n_boot, seed=seed) cis = ci_func(boots, ci) return pd.Series(cis, ["low", "high"]) # Group and get the aggregation estimate grouped = vals.groupby(grouper, sort=self.sort) est = grouped.agg(func) # Exit early if we don't want a confidence interval if ci is None: return est.index, est, None # Compute the error bar extents if ci == "sd": sd = grouped.std() cis = pd.DataFrame(np.c_[est - sd, est + sd], index=est.index, columns=["low", "high"]).stack() else: cis = grouped.apply(bootstrapped_cis) # Unpack the CIs into "wide" format for plotting if cis.notnull().any(): cis = cis.unstack().reindex(est.index) else: cis = None return est.index, est, cis def
plot(self, ax, kws): """Draw the plot onto an axes, passing matplotlib kwargs.""" # Draw a test plot, using the passed in kwargs. The goal here is to # honor both (a) the current state of the plot cycler and (b) the # specified kwargs on all the lines we will draw, overriding when # relevant with the data semantics. Note that we won't cycle # internally; in other words, if ``hue`` is not used, all elements will # have the same color, but they will have the color that you would have # gotten from the corresponding matplotlib function, and calling the # function will advance the axes property cycle. scout, = ax.plot([], [], **kws) orig_color = kws.pop("color", scout.get_color()) orig_marker = kws.pop("marker", scout.get_marker()) orig_linewidth = kws.pop("linewidth", kws.pop("lw", scout.get_linewidth())) # Note that scout.get_linestyle() is not correct as of mpl 3.2 orig_linestyle = kws.pop("linestyle", kws.pop("ls", None)) kws.setdefault("markeredgewidth", kws.pop("mew", .75)) kws.setdefault("markeredgecolor", kws.pop("mec", "w")) scout.remove() # Set default error kwargs err_kws = self.err_kws.copy() if self.err_style == "band": err_kws.setdefault("alpha", .2) elif self.err_style == "bars": pass elif self.err_style is not None: err = "`err_style` must be 'band' or 'bars', not {}" raise ValueError(err.format(self.err_style)) # Set the default artist keywords kws.update(dict( color=orig_color, marker=orig_marker, linewidth=orig_linewidth, linestyle=orig_linestyle, )) # Loop over the semantic subsets and add to the plot grouping_vars = "hue", "size", "style" for sub_vars, sub_data in self.iter_data(grouping_vars, from_comp_data=True): if self.sort: sort_vars = ["units", "x", "y"] sort_cols = [var for var in sort_vars if var in self.variables] sub_data = sub_data.sort_values(sort_cols) # TODO # How to handle NA?
We don't want NA to propagate through to the # estimate/CI when some values are present, but we would also like # matplotlib to show "gaps" in the line when all values are missing. # This is straightforward absent aggregation, but complicated with it. sub_data = sub_data.dropna() # Due to the original design, code below was written assuming that # sub_data always has x, y, and units columns, which may be empty. # Adding this here to avoid otherwise disruptive changes, but it # could get removed if the rest of the logic is sorted out null = pd.Series(index=sub_data.index, dtype=float) x = sub_data.get("x", null) y = sub_data.get("y", null) u = sub_data.get("units", null) if self.estimator is not None: if "units" in self.variables: err = "estimator must be None when specifying units" raise ValueError(err) x, y, y_ci = self.aggregate(y, x, u) else: y_ci = None if "hue" in sub_vars: kws["color"] = self._hue_map(sub_vars["hue"]) if "size" in sub_vars: kws["linewidth"] = self._size_map(sub_vars["size"]) if "style" in sub_vars: attributes = self._style_map(sub_vars["style"]) if "dashes" in attributes: kws["dashes"] = attributes["dashes"] if "marker" in attributes: kws["marker"] = attributes["marker"] line, = ax.plot([], [], **kws) line_color = line.get_color() line_alpha = line.get_alpha() line_capstyle = line.get_solid_capstyle() line.remove() # --- Draw the main line x, y = np.asarray(x), np.asarray(y) if "units" in self.variables: for u_i in u.unique(): rows = np.asarray(u == u_i) ax.plot(x[rows], y[rows], **kws) else: line, = ax.plot(x, y, **kws) # --- Draw the confidence intervals if y_ci is not None: low, high = np.asarray(y_ci["low"]), np.asarray(y_ci["high"]) if self.err_style == "band": ax.fill_between(x, low, high, color=line_color, **err_kws) elif self.err_style == "bars": y_err = ci_to_errsize((low, high), y) ebars = ax.errorbar(x, y, y_err, linestyle="", color=line_color, alpha=line_alpha, **err_kws) # Set the capstyle properly on the error bars for obj in 
ebars.get_children(): try: obj.set_capstyle(line_capstyle) except AttributeError: # Does not exist on mpl < 2.2 pass # Finalize the axes details self._add_axis_labels(ax) if self.legend: self.add_legend_data(ax) handles, _ = ax.get_legend_handles_labels() if handles: legend = ax.legend(title=self.legend_title) adjust_legend_subtitles(legend) class _ScatterPlotter(_RelationalPlotter): _legend_attributes = ["color", "s", "marker"] _legend_func = "scatter" def __init__( self, *, data=None, variables={}, x_bins=None, y_bins=None, estimator=None, ci=None, n_boot=None, alpha=None, x_jitter=None, y_jitter=None, legend=None ): # TODO this is messy, we want the mapping to be agnostic about # the kind of plot to draw, but for the time being we need to set # this information so the SizeMapping can use it self._default_size_range = ( np.r_[.5, 2] * np.square(mpl.rcParams["lines.markersize"]) ) super().__init__(data=data, variables=variables) self.alpha = alpha self.legend = legend def plot(self, ax, kws): # Draw a test plot, using the passed in kwargs. The goal here is to # honor both (a) the current state of the plot cycler and (b) the # specified kwargs on all the lines we will draw, overriding when # relevant with the data semantics. Note that we won't cycle # internally; in other words, if ``hue`` is not used, all elements will # have the same color, but they will have the color that you would have # gotten from the corresponding matplotlib function, and calling the # function will advance the axes property cycle. scout_size = max( np.atleast_1d(kws.get("s", [])).shape[0], np.atleast_1d(kws.get("c", [])).shape[0], ) scout_x = scout_y = np.full(scout_size, np.nan) scout = ax.scatter(scout_x, scout_y, **kws) s = kws.pop("s", scout.get_sizes()) c = kws.pop("c", scout.get_facecolors()) scout.remove() kws.pop("color", None) # TODO is this optimal?
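Both plotters open with the same "scout artist" trick: draw a throwaway empty artist so matplotlib resolves the property cycle and rcParams defaults, record the resulting attributes, then remove the artist. A simplified illustration of the idea (not the exact seaborn internals):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()

# The scout is plotted with no data, purely to learn which color the
# axes property cycle would hand out next.
scout, = ax.plot([], [])
cycle_color = scout.get_color()

# Remove it so it never shows up in the figure; note the scout still
# consumed one slot of the property cycle.
scout.remove()
```

This is why seaborn plots participate in the axes property cycle even when they override the color later from the `hue` semantic.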
# --- Determine the visual attributes of the plot data = self.plot_data[list(self.variables)].dropna() if not data.size: return # Define the vectors of x and y positions empty = np.full(len(data), np.nan) x = data.get("x", empty) y = data.get("y", empty) # Apply the mapping from semantic variables to artist attributes if "hue" in self.variables: c = self._hue_map(data["hue"]) if "size" in self.variables: s = self._size_map(data["size"]) # Set defaults for other visual attributes kws.setdefault("linewidth", .08 * np.sqrt(np.percentile(s, 10))) if "style" in self.variables: # Use a representative marker so scatter sets the edgecolor # properly for line art markers. We currently enforce either # all or none line art so this works. example_level = self._style_map.levels[0] example_marker = self._style_map(example_level, "marker") kws.setdefault("marker", example_marker) # Conditionally set the marker edgecolor based on whether the marker is "filled" # See https://github.com/matplotlib/matplotlib/issues/17849 for context m = kws.get("marker", mpl.rcParams.get("scatter.marker", "o")) if not isinstance(m, mpl.markers.MarkerStyle): m = mpl.markers.MarkerStyle(m) if m.is_filled(): kws.setdefault("edgecolor", "w") # TODO this makes it impossible to vary alpha with hue which might # otherwise be useful? Should we just pass None? kws["alpha"] = 1 if self.alpha == "auto" else self.alpha # Draw the scatter plot args = np.asarray(x), np.asarray(y), np.asarray(s), np.asarray(c) points = ax.scatter(*args, **kws) # Update the paths to get different marker shapes. # This has to be done here because ax.scatter allows varying sizes # and colors but only a single marker shape per call.
if "style" in self.variables: p = [self._style_map(val, "path") for val in data["style"]] points.set_paths(p) # Finalize the axes details self._add_axis_labels(ax) if self.legend: self.add_legend_data(ax) handles, _ = ax.get_legend_handles_labels() if handles: legend = ax.legend(title=self.legend_title) adjust_legend_subtitles(legend) @_deprecate_positional_args def lineplot( *, x=None, y=None, hue=None, size=None, style=None, data=None, palette=None, hue_order=None, hue_norm=None, sizes=None, size_order=None, size_norm=None, dashes=True, markers=None, style_order=None, units=None, estimator="mean", ci=95, n_boot=1000, seed=None, sort=True, err_style="band", err_kws=None, legend="auto", ax=None, **kwargs ): variables = _LinePlotter.get_semantics(locals()) p = _LinePlotter( data=data, variables=variables, estimator=estimator, ci=ci, n_boot=n_boot, seed=seed, sort=sort, err_style=err_style, err_kws=err_kws, legend=legend, ) p.map_hue(palette=palette, order=hue_order, norm=hue_norm) p.map_size(sizes=sizes, order=size_order, norm=size_norm) p.map_style(markers=markers, dashes=dashes, order=style_order) if ax is None: ax = plt.gca() if not p.has_xy_data: return ax p._attach(ax) p.plot(ax, kwargs) return ax lineplot.__doc__ = """\ Draw a line plot with possibility of several semantic groupings. {narrative.main_api} {narrative.relational_semantic} By default, the plot aggregates over multiple ``y`` values at each value of ``x`` and shows an estimate of the central tendency and a confidence interval for that estimate. Parameters ---------- {params.core.xy} hue : vector or key in ``data`` Grouping variable that will produce lines with different colors. Can be either categorical or numeric, although color mapping will behave differently in the latter case. size : vector or key in ``data`` Grouping variable that will produce lines with different widths. Can be either categorical or numeric, although size mapping will behave differently in the latter case.
style : vector or key in ``data`` Grouping variable that will produce lines with different dashes and/or markers. Can have a numeric dtype but will always be treated as categorical. {params.core.data} {params.core.palette} {params.core.hue_order} {params.core.hue_norm} {params.rel.sizes} {params.rel.size_order} {params.rel.size_norm} {params.rel.dashes} {params.rel.markers} {params.rel.style_order} {params.rel.units} {params.rel.estimator} {params.rel.ci} {params.rel.n_boot} {params.rel.seed} sort : boolean If True, the data will be sorted by the x and y variables, otherwise lines will connect points in the order they appear in the dataset. err_style : "band" or "bars" Whether to draw the confidence intervals with translucent error bands or discrete error bars. err_kws : dict of keyword arguments Additional paramters to control the aesthetics of the error bars. The kwargs are passed either to :meth:`matplotlib.axes.Axes.fill_between` or :meth:`matplotlib.axes.Axes.errorbar`, depending on ``err_style``. {params.rel.legend} {params.core.ax} kwargs : key, value mappings Other keyword arguments are passed down to :meth:`matplotlib.axes.Axes.plot`. Returns ------- {returns.ax} See Also -------- {seealso.scatterplot} {seealso.pointplot} Examples -------- .. 
include:: ../docstrings/lineplot.rst """.format( narrative=_relational_narrative, params=_param_docs, returns=_core_docs["returns"], seealso=_core_docs["seealso"], ) @_deprecate_positional_args def scatterplot( *, x=None, y=None, hue=None, style=None, size=None, data=None, palette=None, hue_order=None, hue_norm=None, sizes=None, size_order=None, size_norm=None, markers=True, style_order=None, x_bins=None, y_bins=None, units=None, estimator=None, ci=95, n_boot=1000, alpha=None, x_jitter=None, y_jitter=None, legend="auto", ax=None, **kwargs ): variables = _ScatterPlotter.get_semantics(locals()) p = _ScatterPlotter( data=data, variables=variables, x_bins=x_bins, y_bins=y_bins, estimator=estimator, ci=ci, n_boot=n_boot, alpha=alpha, x_jitter=x_jitter, y_jitter=y_jitter, legend=legend, ) p.map_hue(palette=palette, order=hue_order, norm=hue_norm) p.map_size(sizes=sizes, order=size_order, norm=size_norm) p.map_style(markers=markers, order=style_order) if ax is None: ax = plt.gca() if not p.has_xy_data: return ax p._attach(ax) p.plot(ax, kwargs) return ax scatterplot.__doc__ = """\ Draw a scatter plot with possibility of several semantic groupings. {narrative.main_api} {narrative.relational_semantic} Parameters ---------- {params.core.xy} hue : vector or key in ``data`` Grouping variable that will produce points with different colors. Can be either categorical or numeric, although color mapping will behave differently in latter case. size : vector or key in ``data`` Grouping variable that will produce points with different sizes. Can be either categorical or numeric, although size mapping will behave differently in latter case. style : vector or key in ``data`` Grouping variable that will produce points with different markers. Can have a numeric dtype but will always be treated as categorical. 
{params.core.data} {params.core.palette} {params.core.hue_order} {params.core.hue_norm} {params.rel.sizes} {params.rel.size_order} {params.rel.size_norm} {params.rel.markers} {params.rel.style_order} {{x,y}}_bins : lists or arrays or functions *Currently non-functional.* {params.rel.units} *Currently non-functional.* {params.rel.estimator} *Currently non-functional.* {params.rel.ci} *Currently non-functional.* {params.rel.n_boot} *Currently non-functional.* alpha : float Proportional opacity of the points. {{x,y}}_jitter : booleans or floats *Currently non-functional.* {params.rel.legend} {params.core.ax} kwargs : key, value mappings Other keyword arguments are passed down to :meth:`matplotlib.axes.Axes.scatter`. Returns ------- {returns.ax} See Also -------- {seealso.lineplot} {seealso.stripplot} {seealso.swarmplot} Examples -------- .. include:: ../docstrings/scatterplot.rst """.format( narrative=_relational_narrative, params=_param_docs, returns=_core_docs["returns"], seealso=_core_docs["seealso"], ) @_deprecate_positional_args def relplot( *, x=None, y=None, hue=None, size=None, style=None, data=None, row=None, col=None, col_wrap=None, row_order=None, col_order=None, palette=None, hue_order=None, hue_norm=None, sizes=None, size_order=None, size_norm=None, markers=None, dashes=None, style_order=None, legend="auto", kind="scatter", height=5, aspect=1, facet_kws=None, units=None, **kwargs ): if kind == "scatter": plotter = _ScatterPlotter func = scatterplot markers = True if markers is None else markers elif kind == "line": plotter = _LinePlotter func = lineplot dashes = True if dashes is None else dashes else: err = "Plot kind {} not recognized".format(kind) raise ValueError(err) # Check for attempt to plot onto specific axes and warn if "ax" in kwargs: msg = ( "relplot is a figure-level function and does not accept " "the `ax` parameter. 
You may wish to try {}".format(kind + "plot") ) warnings.warn(msg, UserWarning) kwargs.pop("ax") # Use the full dataset to map the semantics p = plotter( data=data, variables=plotter.get_semantics(locals()), legend=legend, ) p.map_hue(palette=palette, order=hue_order, norm=hue_norm) p.map_size(sizes=sizes, order=size_order, norm=size_norm) p.map_style(markers=markers, dashes=dashes, order=style_order) # Extract the semantic mappings if "hue" in p.variables: palette = p._hue_map.lookup_table hue_order = p._hue_map.levels hue_norm = p._hue_map.norm else: palette = hue_order = hue_norm = None if "size" in p.variables: sizes = p._size_map.lookup_table size_order = p._size_map.levels size_norm = p._size_map.norm if "style" in p.variables: style_order = p._style_map.levels if markers: markers = {k: p._style_map(k, "marker") for k in style_order} else: markers = None if dashes: dashes = {k: p._style_map(k, "dashes") for k in style_order} else: dashes = None else: markers = dashes = style_order = None # Now extract the data that would be used to draw a single plot variables = p.variables plot_data = p.plot_data plot_semantics = p.semantics # Define the common plotting parameters plot_kws = dict( palette=palette, hue_order=hue_order, hue_norm=hue_norm, sizes=sizes, size_order=size_order, size_norm=size_norm, markers=markers, dashes=dashes, style_order=style_order, legend=False, ) plot_kws.update(kwargs) if kind == "scatter": plot_kws.pop("dashes") # Add the grid semantics onto the plotter grid_semantics = "row", "col" p.semantics = plot_semantics + grid_semantics p.assign_variables( data=data, variables=dict( x=x, y=y, hue=hue, size=size, style=style, units=units, row=row, col=col, ), ) # Define the named variables for plotting on each facet # Rename the variables with a leading underscore to avoid # collisions with faceting variable names plot_variables = {v: f"_{v}" for v in variables} plot_kws.update(plot_variables) # Pass the row/col variables to FacetGrid with their 
original # names so that the axes titles render correctly grid_kws = {v: p.variables.get(v, None) for v in grid_semantics} # Rename the columns of the plot_data structure appropriately new_cols = plot_variables.copy() new_cols.update(grid_kws) full_data = p.plot_data.rename(columns=new_cols) # Set up the FacetGrid object facet_kws = {} if facet_kws is None else facet_kws.copy() g = FacetGrid( data=full_data.dropna(axis=1, how="all"), **grid_kws, col_wrap=col_wrap, row_order=row_order, col_order=col_order, height=height, aspect=aspect, dropna=False, **facet_kws ) # Draw the plot g.map_dataframe(func, **plot_kws) # Label the axes g.set_axis_labels( variables.get("x", None), variables.get("y", None) ) # Show the legend if legend: # Replace the original plot data so the legend uses # numeric data with the correct type p.plot_data = plot_data p.add_legend_data(g.axes.flat[0]) if p.legend_data: g.add_legend(legend_data=p.legend_data, label_order=p.legend_order, title=p.legend_title, adjust_subtitles=True) # Rename the columns of the FacetGrid's `data` attribute # to match the original column names orig_cols = { f"_{k}": f"_{k}_" if v is None else v for k, v in variables.items() } grid_data = g.data.rename(columns=orig_cols) if data is not None and (x is not None or y is not None): if not isinstance(data, pd.DataFrame): data = pd.DataFrame(data) g.data = pd.merge( data, grid_data[grid_data.columns.difference(data.columns)], left_index=True, right_index=True, ) else: g.data = grid_data return g relplot.__doc__ = """\ Figure-level interface for drawing relational plots onto a FacetGrid. This function provides access to several different axes-level functions that show the relationship between two variables with semantic mappings of subsets. 
The ``kind`` parameter selects the underlying axes-level function to use: - :func:`scatterplot` (with ``kind="scatter"``; the default) - :func:`lineplot` (with ``kind="line"``) Extra keyword arguments are passed to the underlying function, so you should refer to the documentation for each to see kind-specific options. {narrative.main_api} {narrative.relational_semantic} After plotting, the :class:`FacetGrid` with the plot is returned and can be used directly to tweak supporting plot details or add other layers. Note that, unlike when using the underlying plotting functions directly, data must be passed in a long-form DataFrame with variables specified by passing strings to ``x``, ``y``, and other parameters. Parameters ---------- {params.core.xy} hue : vector or key in ``data`` Grouping variable that will produce elements with different colors. Can be either categorical or numeric, although color mapping will behave differently in latter case. size : vector or key in ``data`` Grouping variable that will produce elements with different sizes. Can be either categorical or numeric, although size mapping will behave differently in latter case. style : vector or key in ``data`` Grouping variable that will produce elements with different styles. Can have a numeric dtype but will always be treated as categorical. {params.core.data} {params.facets.rowcol} {params.facets.col_wrap} row_order, col_order : lists of strings Order to organize the rows and/or columns of the grid in, otherwise the orders are inferred from the data objects. {params.core.palette} {params.core.hue_order} {params.core.hue_norm} {params.rel.sizes} {params.rel.size_order} {params.rel.size_norm} {params.rel.style_order} {params.rel.dashes} {params.rel.markers} {params.rel.legend} kind : string Kind of plot to draw, corresponding to a seaborn relational plot. Options are {{``scatter`` and ``line``}}. 
{params.facets.height} {params.facets.aspect} facet_kws : dict Dictionary of other keyword arguments to pass to :class:`FacetGrid`. {params.rel.units} kwargs : key, value pairings Other keyword arguments are passed through to the underlying plotting function. Returns ------- {returns.facetgrid} Examples -------- .. include:: ../docstrings/relplot.rst """.format( narrative=_relational_narrative, params=_param_docs, returns=_core_docs["returns"], seealso=_core_docs["seealso"], ) seaborn-0.11.2/seaborn/tests/000077500000000000000000000000001410631356500160235ustar00rootroot00000000000000seaborn-0.11.2/seaborn/tests/__init__.py000066400000000000000000000000001410631356500201220ustar00rootroot00000000000000seaborn-0.11.2/seaborn/tests/test_algorithms.py000066400000000000000000000151641410631356500216140ustar00rootroot00000000000000import numpy as np import numpy.random as npr import pytest from numpy.testing import assert_array_equal from distutils.version import LooseVersion from .. import algorithms as algo @pytest.fixture def random(): np.random.seed(sum(map(ord, "test_algorithms"))) def test_bootstrap(random): """Test that bootstrapping gives the right answer in dumb cases.""" a_ones = np.ones(10) n_boot = 5 out1 = algo.bootstrap(a_ones, n_boot=n_boot) assert_array_equal(out1, np.ones(n_boot)) out2 = algo.bootstrap(a_ones, n_boot=n_boot, func=np.median) assert_array_equal(out2, np.ones(n_boot)) def test_bootstrap_length(random): """Test that we get a bootstrap array of the right shape.""" a_norm = np.random.randn(1000) out = algo.bootstrap(a_norm) assert len(out) == 10000 n_boot = 100 out = algo.bootstrap(a_norm, n_boot=n_boot) assert len(out) == n_boot def test_bootstrap_range(random): """Test that boostrapping a random array stays within the right range.""" a_norm = np.random.randn(1000) amin, amax = a_norm.min(), a_norm.max() out = algo.bootstrap(a_norm) assert amin <= out.min() assert amax >= out.max() def test_bootstrap_multiarg(random): """Test that bootstrap 
works with multiple input arrays."""
    x = np.vstack([[1, 10] for i in range(10)])
    y = np.vstack([[5, 5] for i in range(10)])

    def f(x, y):
        return np.vstack((x, y)).max(axis=0)

    out_actual = algo.bootstrap(x, y, n_boot=2, func=f)
    out_wanted = np.array([[5, 10], [5, 10]])
    assert_array_equal(out_actual, out_wanted)


def test_bootstrap_axis(random):
    """Test axis kwarg to bootstrap function."""
    x = np.random.randn(10, 20)
    n_boot = 100

    out_default = algo.bootstrap(x, n_boot=n_boot)
    assert out_default.shape == (n_boot,)

    out_axis = algo.bootstrap(x, n_boot=n_boot, axis=0)
    assert out_axis.shape == (n_boot, x.shape[1])


def test_bootstrap_seed(random):
    """Test that we can get reproducible resamples by seeding the RNG."""
    data = np.random.randn(50)
    seed = 42
    boots1 = algo.bootstrap(data, seed=seed)
    boots2 = algo.bootstrap(data, seed=seed)
    assert_array_equal(boots1, boots2)


def test_bootstrap_ols(random):
    """Test bootstrap of OLS model fit."""
    def ols_fit(X, y):
        XtXinv = np.linalg.inv(np.dot(X.T, X))
        return XtXinv.dot(X.T).dot(y)

    X = np.column_stack((np.random.randn(50, 4), np.ones(50)))
    w = [2, 4, 0, 3, 5]
    y_noisy = np.dot(X, w) + np.random.randn(50) * 20
    y_lownoise = np.dot(X, w) + np.random.randn(50)

    n_boot = 500
    w_boot_noisy = algo.bootstrap(X, y_noisy, n_boot=n_boot, func=ols_fit)
    w_boot_lownoise = algo.bootstrap(X, y_lownoise, n_boot=n_boot, func=ols_fit)

    assert w_boot_noisy.shape == (n_boot, 5)
    assert w_boot_lownoise.shape == (n_boot, 5)
    assert w_boot_noisy.std() > w_boot_lownoise.std()


def test_bootstrap_units(random):
    """Test that results make sense when passing unit IDs to bootstrap."""
    data = np.random.randn(50)
    ids = np.repeat(range(10), 5)
    bwerr = np.random.normal(0, 2, 10)
    bwerr = bwerr[ids]
    data_rm = data + bwerr
    seed = 77

    boots_orig = algo.bootstrap(data_rm, seed=seed)
    boots_rm = algo.bootstrap(data_rm, units=ids, seed=seed)
    assert boots_rm.std() > boots_orig.std()


def test_bootstrap_arglength():
    """Test that different length args raise ValueError."""
    with pytest.raises(ValueError):
        algo.bootstrap(np.arange(5), np.arange(10))


def test_bootstrap_string_func():
    """Test that named numpy methods are the same as the numpy function."""
    x = np.random.randn(100)

    res_a = algo.bootstrap(x, func="mean", seed=0)
    res_b = algo.bootstrap(x, func=np.mean, seed=0)
    assert np.array_equal(res_a, res_b)

    res_a = algo.bootstrap(x, func="std", seed=0)
    res_b = algo.bootstrap(x, func=np.std, seed=0)
    assert np.array_equal(res_a, res_b)

    with pytest.raises(AttributeError):
        algo.bootstrap(x, func="not_a_method_name")


def test_bootstrap_reproducibility(random):
    """Test that bootstrapping uses the internal random state."""
    data = np.random.randn(50)
    boots1 = algo.bootstrap(data, seed=100)
    boots2 = algo.bootstrap(data, seed=100)
    assert_array_equal(boots1, boots2)

    with pytest.warns(UserWarning):
        # Deprecated, remove when removing random_seed
        boots1 = algo.bootstrap(data, random_seed=100)
        boots2 = algo.bootstrap(data, random_seed=100)
        assert_array_equal(boots1, boots2)


@pytest.mark.skipif(LooseVersion(np.__version__) < "1.17",
                    reason="Tests new numpy random functionality")
def test_seed_new():

    # Can't use pytest parametrize because tests will fail where the new
    # Generator object and related function are not defined

    test_bank = [
        (None, None, npr.Generator, False),
        (npr.RandomState(0), npr.RandomState(0), npr.RandomState, True),
        (npr.RandomState(0), npr.RandomState(1), npr.RandomState, False),
        (npr.default_rng(1), npr.default_rng(1), npr.Generator, True),
        (npr.default_rng(1), npr.default_rng(2), npr.Generator, False),
        (npr.SeedSequence(10), npr.SeedSequence(10), npr.Generator, True),
        (npr.SeedSequence(10), npr.SeedSequence(20), npr.Generator, False),
        (100, 100, npr.Generator, True),
        (100, 200, npr.Generator, False),
    ]
    for seed1, seed2, rng_class, match in test_bank:
        rng1 = algo._handle_random_seed(seed1)
        rng2 = algo._handle_random_seed(seed2)
        assert isinstance(rng1, rng_class)
        assert isinstance(rng2, rng_class)
        assert (rng1.uniform() == rng2.uniform()) == match


@pytest.mark.skipif(LooseVersion(np.__version__) >= "1.17",
                    reason="Tests old numpy random functionality")
@pytest.mark.parametrize("seed1, seed2, match", [
    (None, None, False),
    (npr.RandomState(0), npr.RandomState(0), True),
    (npr.RandomState(0), npr.RandomState(1), False),
    (100, 100, True),
    (100, 200, False),
])
def test_seed_old(seed1, seed2, match):
    rng1 = algo._handle_random_seed(seed1)
    rng2 = algo._handle_random_seed(seed2)
    assert isinstance(rng1, np.random.RandomState)
    assert isinstance(rng2, np.random.RandomState)
    assert (rng1.uniform() == rng2.uniform()) == match


@pytest.mark.skipif(LooseVersion(np.__version__) >= "1.17",
                    reason="Tests old numpy random functionality")
def test_bad_seed_old():
    with pytest.raises(ValueError):
        algo._handle_random_seed("not_a_random_seed")

# ---- seaborn-0.11.2/seaborn/tests/test_axisgrid.py ----

import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt

import pytest
import numpy.testing as npt
from numpy.testing import assert_array_equal
try:
    import pandas.testing as tm
except ImportError:
    import pandas.util.testing as tm

from .._core import categorical_order
from .. import rcmod
from ..palettes import color_palette
from ..relational import scatterplot
from ..distributions import histplot, kdeplot, distplot
from ..categorical import pointplot
from .. import axisgrid as ag
from .._testing import (
    assert_plots_equal,
    assert_colors_equal,
)

rs = np.random.RandomState(0)


class TestFacetGrid:

    df = pd.DataFrame(dict(x=rs.normal(size=60),
                           y=rs.gamma(4, size=60),
                           a=np.repeat(list("abc"), 20),
                           b=np.tile(list("mn"), 30),
                           c=np.tile(list("tuv"), 20),
                           d=np.tile(list("abcdefghijkl"), 5)))

    def test_self_data(self):
        g = ag.FacetGrid(self.df)
        assert g.data is self.df

    def test_self_figure(self):
        g = ag.FacetGrid(self.df)
        assert isinstance(g.figure, plt.Figure)
        assert g.figure is g._figure

    def test_self_axes(self):
        g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
        for ax in g.axes.flat:
            assert isinstance(ax, plt.Axes)

    def test_axes_array_size(self):
        g = ag.FacetGrid(self.df)
        assert g.axes.shape == (1, 1)

        g = ag.FacetGrid(self.df, row="a")
        assert g.axes.shape == (3, 1)

        g = ag.FacetGrid(self.df, col="b")
        assert g.axes.shape == (1, 2)

        g = ag.FacetGrid(self.df, hue="c")
        assert g.axes.shape == (1, 1)

        g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
        assert g.axes.shape == (3, 2)
        for ax in g.axes.flat:
            assert isinstance(ax, plt.Axes)

    def test_single_axes(self):
        g = ag.FacetGrid(self.df)
        assert isinstance(g.ax, plt.Axes)

        g = ag.FacetGrid(self.df, row="a")
        with pytest.raises(AttributeError):
            g.ax

        g = ag.FacetGrid(self.df, col="a")
        with pytest.raises(AttributeError):
            g.ax

        g = ag.FacetGrid(self.df, col="a", row="b")
        with pytest.raises(AttributeError):
            g.ax

    def test_col_wrap(self):
        n = len(self.df.d.unique())

        g = ag.FacetGrid(self.df, col="d")
        assert g.axes.shape == (1, n)
        assert g.facet_axis(0, 8) is g.axes[0, 8]

        g_wrap = ag.FacetGrid(self.df, col="d", col_wrap=4)
        assert g_wrap.axes.shape == (n,)
        assert g_wrap.facet_axis(0, 8) is g_wrap.axes[8]
        assert g_wrap._ncol == 4
        assert g_wrap._nrow == (n / 4)

        with pytest.raises(ValueError):
            g = ag.FacetGrid(self.df, row="b", col="d", col_wrap=4)

        df = self.df.copy()
        df.loc[df.d == "j"] = np.nan
        g_missing = ag.FacetGrid(df, col="d")
        assert g_missing.axes.shape == (1, n - 1)

        g_missing_wrap = ag.FacetGrid(df, col="d", col_wrap=4)
        assert g_missing_wrap.axes.shape == (n - 1,)

        g = ag.FacetGrid(self.df, col="d", col_wrap=1)
        assert len(list(g.facet_data())) == n

    def test_normal_axes(self):
        null = np.empty(0, object).flat

        g = ag.FacetGrid(self.df)
        npt.assert_array_equal(g._bottom_axes, g.axes.flat)
        npt.assert_array_equal(g._not_bottom_axes, null)
        npt.assert_array_equal(g._left_axes, g.axes.flat)
        npt.assert_array_equal(g._not_left_axes, null)
        npt.assert_array_equal(g._inner_axes, null)

        g = ag.FacetGrid(self.df, col="c")
        npt.assert_array_equal(g._bottom_axes, g.axes.flat)
        npt.assert_array_equal(g._not_bottom_axes, null)
        npt.assert_array_equal(g._left_axes, g.axes[:, 0].flat)
        npt.assert_array_equal(g._not_left_axes, g.axes[:, 1:].flat)
        npt.assert_array_equal(g._inner_axes, null)

        g = ag.FacetGrid(self.df, row="c")
        npt.assert_array_equal(g._bottom_axes, g.axes[-1, :].flat)
        npt.assert_array_equal(g._not_bottom_axes, g.axes[:-1, :].flat)
        npt.assert_array_equal(g._left_axes, g.axes.flat)
        npt.assert_array_equal(g._not_left_axes, null)
        npt.assert_array_equal(g._inner_axes, null)

        g = ag.FacetGrid(self.df, col="a", row="c")
        npt.assert_array_equal(g._bottom_axes, g.axes[-1, :].flat)
        npt.assert_array_equal(g._not_bottom_axes, g.axes[:-1, :].flat)
        npt.assert_array_equal(g._left_axes, g.axes[:, 0].flat)
        npt.assert_array_equal(g._not_left_axes, g.axes[:, 1:].flat)
        npt.assert_array_equal(g._inner_axes, g.axes[:-1, 1:].flat)

    def test_wrapped_axes(self):
        null = np.empty(0, object).flat

        g = ag.FacetGrid(self.df, col="a", col_wrap=2)
        npt.assert_array_equal(g._bottom_axes, g.axes[np.array([1, 2])].flat)
        npt.assert_array_equal(g._not_bottom_axes, g.axes[:1].flat)
        npt.assert_array_equal(g._left_axes, g.axes[np.array([0, 2])].flat)
        npt.assert_array_equal(g._not_left_axes, g.axes[np.array([1])].flat)
        npt.assert_array_equal(g._inner_axes, null)

    def test_axes_dict(self):
        g = ag.FacetGrid(self.df)
        assert isinstance(g.axes_dict, dict)
        assert not g.axes_dict

        g = ag.FacetGrid(self.df, row="c")
        assert list(g.axes_dict.keys()) == g.row_names
        for (name, ax) in zip(g.row_names, g.axes.flat):
            assert g.axes_dict[name] is ax

        g = ag.FacetGrid(self.df, col="c")
        assert list(g.axes_dict.keys()) == g.col_names
        for (name, ax) in zip(g.col_names, g.axes.flat):
            assert g.axes_dict[name] is ax

        g = ag.FacetGrid(self.df, col="a", col_wrap=2)
        assert list(g.axes_dict.keys()) == g.col_names
        for (name, ax) in zip(g.col_names, g.axes.flat):
            assert g.axes_dict[name] is ax

        g = ag.FacetGrid(self.df, row="a", col="c")
        for (row_var, col_var), ax in g.axes_dict.items():
            i = g.row_names.index(row_var)
            j = g.col_names.index(col_var)
            assert g.axes[i, j] is ax

    def test_figure_size(self):
        g = ag.FacetGrid(self.df, row="a", col="b")
        npt.assert_array_equal(g.figure.get_size_inches(), (6, 9))

        g = ag.FacetGrid(self.df, row="a", col="b", height=6)
        npt.assert_array_equal(g.figure.get_size_inches(), (12, 18))

        g = ag.FacetGrid(self.df, col="c", height=4, aspect=.5)
        npt.assert_array_equal(g.figure.get_size_inches(), (6, 4))

    def test_figure_size_with_legend(self):
        g = ag.FacetGrid(self.df, col="a", hue="c", height=4, aspect=.5)
        npt.assert_array_equal(g.figure.get_size_inches(), (6, 4))
        g.add_legend()
        assert g.figure.get_size_inches()[0] > 6

        g = ag.FacetGrid(self.df, col="a", hue="c", height=4, aspect=.5,
                         legend_out=False)
        npt.assert_array_equal(g.figure.get_size_inches(), (6, 4))
        g.add_legend()
        npt.assert_array_equal(g.figure.get_size_inches(), (6, 4))

    def test_legend_data(self):
        g = ag.FacetGrid(self.df, hue="a")
        g.map(plt.plot, "x", "y")
        g.add_legend()
        palette = color_palette(n_colors=3)

        assert g._legend.get_title().get_text() == "a"

        a_levels = sorted(self.df.a.unique())

        lines = g._legend.get_lines()
        assert len(lines) == len(a_levels)

        for line, hue in zip(lines, palette):
            assert line.get_color() == hue

        labels = g._legend.get_texts()
        assert len(labels) == len(a_levels)

        for label, level in zip(labels, a_levels):
            assert label.get_text() == level

    def test_legend_data_missing_level(self):
        g = ag.FacetGrid(self.df, hue="a", hue_order=list("azbc"))
        g.map(plt.plot, "x", "y")
        g.add_legend()

        c1, c2, c3, c4 = color_palette(n_colors=4)
        palette = [c1, c3, c4]

        assert g._legend.get_title().get_text() == "a"

        a_levels = sorted(self.df.a.unique())

        lines = g._legend.get_lines()
        assert len(lines) == len(a_levels)

        for line, hue in zip(lines, palette):
            assert line.get_color() == hue

        labels = g._legend.get_texts()
        assert len(labels) == 4

        for label, level in zip(labels, list("azbc")):
            assert label.get_text() == level

    def test_get_boolean_legend_data(self):
        self.df["b_bool"] = self.df.b == "m"

        g = ag.FacetGrid(self.df, hue="b_bool")
        g.map(plt.plot, "x", "y")
        g.add_legend()
        palette = color_palette(n_colors=2)

        assert g._legend.get_title().get_text() == "b_bool"

        b_levels = list(map(str, categorical_order(self.df.b_bool)))

        lines = g._legend.get_lines()
        assert len(lines) == len(b_levels)

        for line, hue in zip(lines, palette):
            assert line.get_color() == hue

        labels = g._legend.get_texts()
        assert len(labels) == len(b_levels)

        for label, level in zip(labels, b_levels):
            assert label.get_text() == level

    def test_legend_tuples(self):
        g = ag.FacetGrid(self.df, hue="a")
        g.map(plt.plot, "x", "y")

        handles, labels = g.ax.get_legend_handles_labels()
        label_tuples = [("", l) for l in labels]
        legend_data = dict(zip(label_tuples, handles))
        g.add_legend(legend_data, label_tuples)
        for entry, label in zip(g._legend.get_texts(), labels):
            assert entry.get_text() == label

    def test_legend_options(self):
        g = ag.FacetGrid(self.df, hue="b")
        g.map(plt.plot, "x", "y")
        g.add_legend()

        g1 = ag.FacetGrid(self.df, hue="b", legend_out=False)
        g1.add_legend(adjust_subtitles=True)

        g1 = ag.FacetGrid(self.df, hue="b", legend_out=False)
        g1.add_legend(adjust_subtitles=False)

    def test_legendout_with_colwrap(self):
        g = ag.FacetGrid(self.df, col="d", hue='b',
                         col_wrap=4, legend_out=False)
        g.map(plt.plot, "x", "y", linewidth=3)
        g.add_legend()

    def test_legend_tight_layout(self):
        g = ag.FacetGrid(self.df, hue='b')
        g.map(plt.plot, "x", "y", linewidth=3)
        g.add_legend()
        g.tight_layout()

        axes_right_edge = g.ax.get_window_extent().xmax
        legend_left_edge = g._legend.get_window_extent().xmin

        assert axes_right_edge < legend_left_edge

    def test_subplot_kws(self):
        g = ag.FacetGrid(self.df, despine=False,
                         subplot_kws=dict(projection="polar"))
        for ax in g.axes.flat:
            assert "PolarAxesSubplot" in str(type(ax))

    def test_gridspec_kws(self):
        ratios = [3, 1, 2]

        gskws = dict(width_ratios=ratios)
        g = ag.FacetGrid(self.df, col='c', row='a', gridspec_kws=gskws)

        for ax in g.axes.flat:
            ax.set_xticks([])
            ax.set_yticks([])

        g.figure.tight_layout()

        for (l, m, r) in g.axes:
            assert l.get_position().width > m.get_position().width
            assert r.get_position().width > m.get_position().width

    def test_gridspec_kws_col_wrap(self):
        ratios = [3, 1, 2, 1, 1]

        gskws = dict(width_ratios=ratios)
        with pytest.warns(UserWarning):
            ag.FacetGrid(self.df, col='d', col_wrap=5, gridspec_kws=gskws)

    def test_data_generator(self):
        g = ag.FacetGrid(self.df, row="a")
        d = list(g.facet_data())
        assert len(d) == 3

        tup, data = d[0]
        assert tup == (0, 0, 0)
        assert (data["a"] == "a").all()

        tup, data = d[1]
        assert tup == (1, 0, 0)
        assert (data["a"] == "b").all()

        g = ag.FacetGrid(self.df, row="a", col="b")
        d = list(g.facet_data())
        assert len(d) == 6

        tup, data = d[0]
        assert tup == (0, 0, 0)
        assert (data["a"] == "a").all()
        assert (data["b"] == "m").all()

        tup, data = d[1]
        assert tup == (0, 1, 0)
        assert (data["a"] == "a").all()
        assert (data["b"] == "n").all()

        tup, data = d[2]
        assert tup == (1, 0, 0)
        assert (data["a"] == "b").all()
        assert (data["b"] == "m").all()

        g = ag.FacetGrid(self.df, hue="c")
        d = list(g.facet_data())
        assert len(d) == 3
        tup, data = d[1]
        assert tup == (0, 0, 1)
        assert (data["c"] == "u").all()

    def test_map(self):
        g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
        g.map(plt.plot, "x", "y", linewidth=3)

        lines = g.axes[0, 0].lines
        assert len(lines) == 3

        line1, _, _ = lines
        assert line1.get_linewidth() == 3
        x, y = line1.get_data()
        mask = (self.df.a == "a") & (self.df.b == "m") & (self.df.c == "t")
        npt.assert_array_equal(x, self.df.x[mask])
        npt.assert_array_equal(y, self.df.y[mask])

    def test_map_dataframe(self):
        g = ag.FacetGrid(self.df, row="a", col="b", hue="c")

        def plot(x, y, data=None, **kws):
            plt.plot(data[x], data[y], **kws)
        # Modify __module__ so this doesn't look like a seaborn function
        plot.__module__ = "test"

        g.map_dataframe(plot, "x", "y", linestyle="--")

        lines = g.axes[0, 0].lines
        assert len(g.axes[0, 0].lines) == 3

        line1, _, _ = lines
        assert line1.get_linestyle() == "--"
        x, y = line1.get_data()
        mask = (self.df.a == "a") & (self.df.b == "m") & (self.df.c == "t")
        npt.assert_array_equal(x, self.df.x[mask])
        npt.assert_array_equal(y, self.df.y[mask])

    def test_set(self):
        g = ag.FacetGrid(self.df, row="a", col="b")
        xlim = (-2, 5)
        ylim = (3, 6)
        xticks = [-2, 0, 3, 5]
        yticks = [3, 4.5, 6]
        g.set(xlim=xlim, ylim=ylim, xticks=xticks, yticks=yticks)
        for ax in g.axes.flat:
            npt.assert_array_equal(ax.get_xlim(), xlim)
            npt.assert_array_equal(ax.get_ylim(), ylim)
            npt.assert_array_equal(ax.get_xticks(), xticks)
            npt.assert_array_equal(ax.get_yticks(), yticks)

    def test_set_titles(self):
        g = ag.FacetGrid(self.df, row="a", col="b")
        g.map(plt.plot, "x", "y")

        # Test the default titles
        assert g.axes[0, 0].get_title() == "a = a | b = m"
        assert g.axes[0, 1].get_title() == "a = a | b = n"
        assert g.axes[1, 0].get_title() == "a = b | b = m"

        # Test a provided title
        g.set_titles("{row_var} == {row_name} \\/ {col_var} == {col_name}")
        assert g.axes[0, 0].get_title() == "a == a \\/ b == m"
        assert g.axes[0, 1].get_title() == "a == a \\/ b == n"
        assert g.axes[1, 0].get_title() == "a == b \\/ b == m"

        # Test a single row
        g = ag.FacetGrid(self.df, col="b")
        g.map(plt.plot, "x", "y")

        # Test the default titles
        assert g.axes[0, 0].get_title() == "b = m"
        assert g.axes[0, 1].get_title() == "b = n"

        # test with dropna=False
        g = ag.FacetGrid(self.df, col="b", hue="b", dropna=False)
        g.map(plt.plot, 'x', 'y')

    def test_set_titles_margin_titles(self):
        g = ag.FacetGrid(self.df, row="a", col="b", margin_titles=True)
        g.map(plt.plot, "x", "y")

        # Test the default titles
        assert g.axes[0, 0].get_title() == "b = m"
        assert g.axes[0, 1].get_title() == "b = n"
        assert g.axes[1, 0].get_title() == ""

        # Test the row "titles"
        assert g.axes[0, 1].texts[0].get_text() == "a = a"
        assert g.axes[1, 1].texts[0].get_text() == "a = b"
        assert g.axes[0, 1].texts[0] is g._margin_titles_texts[0]

        # Test provided titles
        g.set_titles(col_template="{col_name}", row_template="{row_name}")
        assert g.axes[0, 0].get_title() == "m"
        assert g.axes[0, 1].get_title() == "n"
        assert g.axes[1, 0].get_title() == ""

        assert len(g.axes[1, 1].texts) == 1
        assert g.axes[1, 1].texts[0].get_text() == "b"

    def test_set_ticklabels(self):
        g = ag.FacetGrid(self.df, row="a", col="b")
        g.map(plt.plot, "x", "y")

        ax = g.axes[-1, 0]
        xlab = [l.get_text() + "h" for l in ax.get_xticklabels()]
        ylab = [l.get_text() + "i" for l in ax.get_yticklabels()]

        g.set_xticklabels(xlab)
        g.set_yticklabels(ylab)
        got_x = [l.get_text() for l in g.axes[-1, 1].get_xticklabels()]
        got_y = [l.get_text() for l in g.axes[0, 0].get_yticklabels()]
        npt.assert_array_equal(got_x, xlab)
        npt.assert_array_equal(got_y, ylab)

        x, y = np.arange(10), np.arange(10)
        df = pd.DataFrame(np.c_[x, y], columns=["x", "y"])
        g = ag.FacetGrid(df).map_dataframe(pointplot, x="x", y="y", order=x)
        g.set_xticklabels(step=2)
        got_x = [int(l.get_text()) for l in g.axes[0, 0].get_xticklabels()]
        npt.assert_array_equal(x[::2], got_x)

        g = ag.FacetGrid(self.df, col="d", col_wrap=5)
        g.map(plt.plot, "x", "y")
        g.set_xticklabels(rotation=45)
        g.set_yticklabels(rotation=75)
        for ax in g._bottom_axes:
            for l in ax.get_xticklabels():
                assert l.get_rotation() == 45
        for ax in g._left_axes:
            for l in ax.get_yticklabels():
                assert l.get_rotation() == 75

    def test_set_axis_labels(self):
        g = ag.FacetGrid(self.df, row="a", col="b")
        g.map(plt.plot, "x", "y")
        xlab = 'xx'
        ylab = 'yy'

        g.set_axis_labels(xlab, ylab)

        got_x = [ax.get_xlabel() for ax in g.axes[-1, :]]
        got_y = [ax.get_ylabel() for ax in g.axes[:, 0]]
        npt.assert_array_equal(got_x, xlab)
        npt.assert_array_equal(got_y, ylab)

        for ax in g.axes.flat:
            ax.set(xlabel="x", ylabel="y")

        g.set_axis_labels(xlab, ylab)
        for ax in g._not_bottom_axes:
            assert not ax.get_xlabel()
        for ax in g._not_left_axes:
            assert not ax.get_ylabel()

    def test_axis_lims(self):
        g = ag.FacetGrid(self.df, row="a", col="b", xlim=(0, 4), ylim=(-2, 3))
        assert g.axes[0, 0].get_xlim() == (0, 4)
        assert g.axes[0, 0].get_ylim() == (-2, 3)

    def test_data_orders(self):
        g = ag.FacetGrid(self.df, row="a", col="b", hue="c")

        assert g.row_names == list("abc")
        assert g.col_names == list("mn")
        assert g.hue_names == list("tuv")
        assert g.axes.shape == (3, 2)

        g = ag.FacetGrid(self.df, row="a", col="b", hue="c",
                         row_order=list("bca"),
                         col_order=list("nm"),
                         hue_order=list("vtu"))

        assert g.row_names == list("bca")
        assert g.col_names == list("nm")
        assert g.hue_names == list("vtu")
        assert g.axes.shape == (3, 2)

        g = ag.FacetGrid(self.df, row="a", col="b", hue="c",
                         row_order=list("bcda"),
                         col_order=list("nom"),
                         hue_order=list("qvtu"))

        assert g.row_names == list("bcda")
        assert g.col_names == list("nom")
        assert g.hue_names == list("qvtu")
        assert g.axes.shape == (4, 3)

    def test_palette(self):
        rcmod.set()

        g = ag.FacetGrid(self.df, hue="c")
        assert g._colors == color_palette(n_colors=len(self.df.c.unique()))

        g = ag.FacetGrid(self.df, hue="d")
        assert g._colors == color_palette("husl", len(self.df.d.unique()))

        g = ag.FacetGrid(self.df, hue="c", palette="Set2")
        assert g._colors == color_palette("Set2", len(self.df.c.unique()))

        dict_pal = dict(t="red", u="green", v="blue")
        list_pal = color_palette(["red", "green", "blue"], 3)
        g = ag.FacetGrid(self.df, hue="c", palette=dict_pal)
        assert g._colors == list_pal

        list_pal = color_palette(["green", "blue", "red"], 3)
        g = ag.FacetGrid(self.df, hue="c", hue_order=list("uvt"),
                         palette=dict_pal)
        assert g._colors == list_pal

    def test_hue_kws(self):
        kws = dict(marker=["o", "s", "D"])
        g = ag.FacetGrid(self.df, hue="c", hue_kws=kws)
        g.map(plt.plot, "x", "y")

        for line, marker in zip(g.axes[0, 0].lines, kws["marker"]):
            assert line.get_marker() == marker

    def test_dropna(self):
        df = self.df.copy()
        hasna = pd.Series(np.tile(np.arange(6), 10), dtype=float)
        hasna[hasna == 5] = np.nan
        df["hasna"] = hasna
        g = ag.FacetGrid(df, dropna=False, row="hasna")
        assert g._not_na.sum() == 60

        g = ag.FacetGrid(df, dropna=True, row="hasna")
        assert g._not_na.sum() == 50

    def test_categorical_column_missing_categories(self):
        df = self.df.copy()
        df['a'] = df['a'].astype('category')

        g = ag.FacetGrid(df[df['a'] == 'a'], col="a", col_wrap=1)

        assert g.axes.shape == (len(df['a'].cat.categories),)

    def test_categorical_warning(self):
        g = ag.FacetGrid(self.df, col="b")
        with pytest.warns(UserWarning):
            g.map(pointplot, "b", "x")

    def test_refline(self):
        g = ag.FacetGrid(self.df, row="a", col="b")
        g.refline()
        for ax in g.axes.ravel():
            assert not ax.lines

        refx = refy = 0.5
        hline = np.array([[0, refy], [1, refy]])
        vline = np.array([[refx, 0], [refx, 1]])
        g.refline(x=refx, y=refy)
        for ax in g.axes.ravel():
            assert ax.lines[0].get_color() == '.5'
            assert ax.lines[0].get_linestyle() == '--'
            assert len(ax.lines) == 2
            npt.assert_array_equal(ax.lines[0].get_xydata(), vline)
            npt.assert_array_equal(ax.lines[1].get_xydata(), hline)

        color, linestyle = 'red', '-'
        g.refline(x=refx, color=color, linestyle=linestyle)
        npt.assert_array_equal(g.axes[0, 0].lines[-1].get_xydata(), vline)
        assert g.axes[0, 0].lines[-1].get_color() == color
        assert g.axes[0, 0].lines[-1].get_linestyle() == linestyle


class TestPairGrid:

    rs = np.random.RandomState(sum(map(ord, "PairGrid")))
    df = pd.DataFrame(dict(x=rs.normal(size=60),
                           y=rs.randint(0, 4, size=(60)),
                           z=rs.gamma(3, size=60),
                           a=np.repeat(list("abc"), 20),
                           b=np.repeat(list("abcdefghijkl"), 5)))

    def test_self_data(self):
        g = ag.PairGrid(self.df)
        assert g.data is self.df

    def test_ignore_datelike_data(self):
        df = self.df.copy()
        df['date'] = pd.date_range('2010-01-01', periods=len(df), freq='d')
        result = ag.PairGrid(self.df).data
        expected = df.drop('date', axis=1)
        tm.assert_frame_equal(result, expected)

    def test_self_figure(self):
        g = ag.PairGrid(self.df)
        assert isinstance(g.figure, plt.Figure)
        assert g.figure is g._figure

    def test_self_axes(self):
        g = ag.PairGrid(self.df)
        for ax in g.axes.flat:
            assert isinstance(ax, plt.Axes)

    def test_default_axes(self):
        g = ag.PairGrid(self.df)
        assert g.axes.shape == (3, 3)
        assert g.x_vars == ["x", "y", "z"]
        assert g.y_vars == ["x", "y", "z"]
        assert g.square_grid

    @pytest.mark.parametrize("vars", [["z", "x"], np.array(["z", "x"])])
    def test_specific_square_axes(self, vars):
        g = ag.PairGrid(self.df, vars=vars)
        assert g.axes.shape == (len(vars), len(vars))
        assert g.x_vars == list(vars)
        assert g.y_vars == list(vars)
        assert g.square_grid

    def test_remove_hue_from_default(self):
        hue = "z"
        g = ag.PairGrid(self.df, hue=hue)
        assert hue not in g.x_vars
        assert hue not in g.y_vars

        vars = ["x", "y", "z"]
        g = ag.PairGrid(self.df, hue=hue, vars=vars)
        assert hue in g.x_vars
        assert hue in g.y_vars

    @pytest.mark.parametrize(
        "x_vars, y_vars",
        [
            (["x", "y"], ["z", "y", "x"]),
            (["x", "y"], "z"),
            (np.array(["x", "y"]), np.array(["z", "y", "x"])),
        ],
    )
    def test_specific_nonsquare_axes(self, x_vars, y_vars):
        g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars)
        assert g.axes.shape == (len(y_vars), len(x_vars))
        assert g.x_vars == list(x_vars)
        assert g.y_vars == list(y_vars)
        assert not g.square_grid

    def test_corner(self):
        plot_vars = ["x", "y", "z"]
        g = ag.PairGrid(self.df, vars=plot_vars, corner=True)
        corner_size = sum([i + 1 for i in range(len(plot_vars))])
        assert len(g.figure.axes) == corner_size

        g.map_diag(plt.hist)
        assert len(g.figure.axes) == (corner_size + len(plot_vars))

        for ax in np.diag(g.axes):
            assert not ax.yaxis.get_visible()
        assert not g.axes[0, 0].get_ylabel()

        plot_vars = ["x", "y", "z"]
        g = ag.PairGrid(self.df, vars=plot_vars, corner=True)
        g.map(scatterplot)
        assert
len(g.figure.axes) == corner_size def test_size(self): g1 = ag.PairGrid(self.df, height=3) npt.assert_array_equal(g1.fig.get_size_inches(), (9, 9)) g2 = ag.PairGrid(self.df, height=4, aspect=.5) npt.assert_array_equal(g2.fig.get_size_inches(), (6, 12)) g3 = ag.PairGrid(self.df, y_vars=["z"], x_vars=["x", "y"], height=2, aspect=2) npt.assert_array_equal(g3.fig.get_size_inches(), (8, 2)) def test_empty_grid(self): with pytest.raises(ValueError, match="No variables found"): ag.PairGrid(self.df[["a", "b"]]) def test_map(self): vars = ["x", "y", "z"] g1 = ag.PairGrid(self.df) g1.map(plt.scatter) for i, axes_i in enumerate(g1.axes): for j, ax in enumerate(axes_i): x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) g2 = ag.PairGrid(self.df, hue="a") g2.map(plt.scatter) for i, axes_i in enumerate(g2.axes): for j, ax in enumerate(axes_i): x_in = self.df[vars[j]] y_in = self.df[vars[i]] for k, k_level in enumerate(self.df.a.unique()): x_in_k = x_in[self.df.a == k_level] y_in_k = y_in[self.df.a == k_level] x_out, y_out = ax.collections[k].get_offsets().T npt.assert_array_equal(x_in_k, x_out) npt.assert_array_equal(y_in_k, y_out) def test_map_nonsquare(self): x_vars = ["x"] y_vars = ["y", "z"] g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars) g.map(plt.scatter) x_in = self.df.x for i, i_var in enumerate(y_vars): ax = g.axes[i, 0] y_in = self.df[i_var] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) def test_map_lower(self): vars = ["x", "y", "z"] g = ag.PairGrid(self.df) g.map_lower(plt.scatter) for i, j in zip(*np.tril_indices_from(g.axes, -1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.triu_indices_from(g.axes)): ax = 
g.axes[i, j] assert len(ax.collections) == 0 def test_map_upper(self): vars = ["x", "y", "z"] g = ag.PairGrid(self.df) g.map_upper(plt.scatter) for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.tril_indices_from(g.axes)): ax = g.axes[i, j] assert len(ax.collections) == 0 def test_map_mixed_funcsig(self): vars = ["x", "y", "z"] g = ag.PairGrid(self.df, vars=vars) g.map_lower(scatterplot) g.map_upper(plt.scatter) for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) def test_map_diag(self): g = ag.PairGrid(self.df) g.map_diag(plt.hist) for var, ax in zip(g.diag_vars, g.diag_axes): assert len(ax.patches) == 10 assert pytest.approx(ax.patches[0].get_x()) == self.df[var].min() g = ag.PairGrid(self.df, hue="a") g.map_diag(plt.hist) for ax in g.diag_axes: assert len(ax.patches) == 30 g = ag.PairGrid(self.df, hue="a") g.map_diag(plt.hist, histtype='step') for ax in g.diag_axes: for ptch in ax.patches: assert not ptch.fill def test_map_diag_rectangular(self): x_vars = ["x", "y"] y_vars = ["x", "z", "y"] g1 = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars) g1.map_diag(plt.hist) g1.map_offdiag(plt.scatter) assert set(g1.diag_vars) == (set(x_vars) & set(y_vars)) for var, ax in zip(g1.diag_vars, g1.diag_axes): assert len(ax.patches) == 10 assert pytest.approx(ax.patches[0].get_x()) == self.df[var].min() for j, x_var in enumerate(x_vars): for i, y_var in enumerate(y_vars): ax = g1.axes[i, j] if x_var == y_var: diag_ax = g1.diag_axes[j] # because fewer x than y vars assert ax.bbox.bounds == diag_ax.bbox.bounds else: x, y = ax.collections[0].get_offsets().T assert_array_equal(x, 
self.df[x_var]) assert_array_equal(y, self.df[y_var]) g2 = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars, hue="a") g2.map_diag(plt.hist) g2.map_offdiag(plt.scatter) assert set(g2.diag_vars) == (set(x_vars) & set(y_vars)) for ax in g2.diag_axes: assert len(ax.patches) == 30 x_vars = ["x", "y", "z"] y_vars = ["x", "z"] g3 = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars) g3.map_diag(plt.hist) g3.map_offdiag(plt.scatter) assert set(g3.diag_vars) == (set(x_vars) & set(y_vars)) for var, ax in zip(g3.diag_vars, g3.diag_axes): assert len(ax.patches) == 10 assert pytest.approx(ax.patches[0].get_x()) == self.df[var].min() for j, x_var in enumerate(x_vars): for i, y_var in enumerate(y_vars): ax = g3.axes[i, j] if x_var == y_var: diag_ax = g3.diag_axes[i] # because fewer y than x vars assert ax.bbox.bounds == diag_ax.bbox.bounds else: x, y = ax.collections[0].get_offsets().T assert_array_equal(x, self.df[x_var]) assert_array_equal(y, self.df[y_var]) def test_map_diag_color(self): color = "red" g1 = ag.PairGrid(self.df) g1.map_diag(plt.hist, color=color) for ax in g1.diag_axes: for patch in ax.patches: assert_colors_equal(patch.get_facecolor(), color) g2 = ag.PairGrid(self.df) g2.map_diag(kdeplot, color='red') for ax in g2.diag_axes: for line in ax.lines: assert_colors_equal(line.get_color(), color) def test_map_diag_palette(self): palette = "muted" pal = color_palette(palette, n_colors=len(self.df.a.unique())) g = ag.PairGrid(self.df, hue="a", palette=palette) g.map_diag(kdeplot) for ax in g.diag_axes: for line, color in zip(ax.lines[::-1], pal): assert_colors_equal(line.get_color(), color) def test_map_diag_and_offdiag(self): vars = ["x", "y", "z"] g = ag.PairGrid(self.df) g.map_offdiag(plt.scatter) g.map_diag(plt.hist) for ax in g.diag_axes: assert len(ax.patches) == 10 for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, 
x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.tril_indices_from(g.axes, -1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.diag_indices_from(g.axes)): ax = g.axes[i, j] assert len(ax.collections) == 0 def test_diag_sharey(self): g = ag.PairGrid(self.df, diag_sharey=True) g.map_diag(kdeplot) for ax in g.diag_axes[1:]: assert ax.get_ylim() == g.diag_axes[0].get_ylim() def test_map_diag_matplotlib(self): bins = 10 g = ag.PairGrid(self.df) g.map_diag(plt.hist, bins=bins) for ax in g.diag_axes: assert len(ax.patches) == bins levels = len(self.df["a"].unique()) g = ag.PairGrid(self.df, hue="a") g.map_diag(plt.hist, bins=bins) for ax in g.diag_axes: assert len(ax.patches) == (bins * levels) def test_palette(self): rcmod.set() g = ag.PairGrid(self.df, hue="a") assert g.palette == color_palette(n_colors=len(self.df.a.unique())) g = ag.PairGrid(self.df, hue="b") assert g.palette == color_palette("husl", len(self.df.b.unique())) g = ag.PairGrid(self.df, hue="a", palette="Set2") assert g.palette == color_palette("Set2", len(self.df.a.unique())) dict_pal = dict(a="red", b="green", c="blue") list_pal = color_palette(["red", "green", "blue"]) g = ag.PairGrid(self.df, hue="a", palette=dict_pal) assert g.palette == list_pal list_pal = color_palette(["blue", "red", "green"]) g = ag.PairGrid(self.df, hue="a", hue_order=list("cab"), palette=dict_pal) assert g.palette == list_pal def test_hue_kws(self): kws = dict(marker=["o", "s", "d", "+"]) g = ag.PairGrid(self.df, hue="a", hue_kws=kws) g.map(plt.plot) for line, marker in zip(g.axes[0, 0].lines, kws["marker"]): assert line.get_marker() == marker g = ag.PairGrid(self.df, hue="a", hue_kws=kws, hue_order=list("dcab")) g.map(plt.plot) for line, marker in zip(g.axes[0, 0].lines, kws["marker"]): assert line.get_marker() == marker def test_hue_order(self): order = 
list("dcab") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map(plt.plot) for line, level in zip(g.axes[1, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_diag(plt.plot) for line, level in zip(g.axes[0, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_lower(plt.plot) for line, level in zip(g.axes[1, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_upper(plt.plot) for line, level in zip(g.axes[0, 1].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "y"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"]) plt.close("all") def test_hue_order_missing_level(self): order = list("dcaeb") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map(plt.plot) for line, level in zip(g.axes[1, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_diag(plt.plot) for line, level in zip(g.axes[0, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_lower(plt.plot) for line, level in zip(g.axes[1, 0].lines, order): x, y = line.get_xydata().T 
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_upper(plt.plot) for line, level in zip(g.axes[0, 1].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "y"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"]) plt.close("all") def test_nondefault_index(self): df = self.df.copy().set_index("b") plot_vars = ["x", "y", "z"] g1 = ag.PairGrid(df) g1.map(plt.scatter) for i, axes_i in enumerate(g1.axes): for j, ax in enumerate(axes_i): x_in = self.df[plot_vars[j]] y_in = self.df[plot_vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) g2 = ag.PairGrid(df, hue="a") g2.map(plt.scatter) for i, axes_i in enumerate(g2.axes): for j, ax in enumerate(axes_i): x_in = self.df[plot_vars[j]] y_in = self.df[plot_vars[i]] for k, k_level in enumerate(self.df.a.unique()): x_in_k = x_in[self.df.a == k_level] y_in_k = y_in[self.df.a == k_level] x_out, y_out = ax.collections[k].get_offsets().T npt.assert_array_equal(x_in_k, x_out) npt.assert_array_equal(y_in_k, y_out) @pytest.mark.parametrize("func", [scatterplot, plt.scatter]) def test_dropna(self, func): df = self.df.copy() n_null = 20 df.loc[np.arange(n_null), "x"] = np.nan plot_vars = ["x", "y", "z"] g1 = ag.PairGrid(df, vars=plot_vars, dropna=True) g1.map(func) for i, axes_i in enumerate(g1.axes): for j, ax in enumerate(axes_i): x_in = df[plot_vars[j]] y_in = df[plot_vars[i]] x_out, y_out = ax.collections[0].get_offsets().T n_valid = (x_in * y_in).notnull().sum() assert n_valid == len(x_out) assert n_valid == len(y_out) g1.map_diag(histplot) for i, ax in enumerate(g1.diag_axes): var = plot_vars[i] count = sum([p.get_height() for p in ax.patches]) assert count == df[var].notna().sum() def test_histplot_legend(self): # Tests _extract_legend_handles g 
= ag.PairGrid(self.df, vars=["x", "y"], hue="a") g.map_offdiag(histplot) g.add_legend() assert len(g._legend.legendHandles) == len(self.df["a"].unique()) def test_pairplot(self): vars = ["x", "y", "z"] g = ag.pairplot(self.df) for ax in g.diag_axes: assert len(ax.patches) > 1 for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.tril_indices_from(g.axes, -1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.diag_indices_from(g.axes)): ax = g.axes[i, j] assert len(ax.collections) == 0 g = ag.pairplot(self.df, hue="a") n = len(self.df.a.unique()) for ax in g.diag_axes: assert len(ax.collections) == n def test_pairplot_reg(self): vars = ["x", "y", "z"] g = ag.pairplot(self.df, diag_kind="hist", kind="reg") for ax in g.diag_axes: assert len(ax.patches) for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) assert len(ax.lines) == 1 assert len(ax.collections) == 2 for i, j in zip(*np.tril_indices_from(g.axes, -1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) assert len(ax.lines) == 1 assert len(ax.collections) == 2 for i, j in zip(*np.diag_indices_from(g.axes)): ax = g.axes[i, j] assert len(ax.collections) == 0 def test_pairplot_reg_hue(self): markers = ["o", "s", "d"] g = ag.pairplot(self.df, kind="reg", hue="a", markers=markers) ax = g.axes[-1, 0] c1 = ax.collections[0] c2 = ax.collections[2] 
assert not np.array_equal(c1.get_facecolor(), c2.get_facecolor()) assert not np.array_equal( c1.get_paths()[0].vertices, c2.get_paths()[0].vertices, ) def test_pairplot_diag_kde(self): vars = ["x", "y", "z"] g = ag.pairplot(self.df, diag_kind="kde") for ax in g.diag_axes: assert len(ax.collections) == 1 for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.tril_indices_from(g.axes, -1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.diag_indices_from(g.axes)): ax = g.axes[i, j] assert len(ax.collections) == 0 def test_pairplot_kde(self): f, ax1 = plt.subplots() kdeplot(data=self.df, x="x", y="y", ax=ax1) g = ag.pairplot(self.df, kind="kde") ax2 = g.axes[1, 0] assert_plots_equal(ax1, ax2, labels=False) def test_pairplot_hist(self): f, ax1 = plt.subplots() histplot(data=self.df, x="x", y="y", ax=ax1) g = ag.pairplot(self.df, kind="hist") ax2 = g.axes[1, 0] assert_plots_equal(ax1, ax2, labels=False) def test_pairplot_markers(self): vars = ["x", "y", "z"] markers = ["o", "X", "s"] g = ag.pairplot(self.df, hue="a", vars=vars, markers=markers) m1 = g._legend.legendHandles[0].get_paths()[0] m2 = g._legend.legendHandles[1].get_paths()[0] assert m1 != m2 with pytest.raises(ValueError): g = ag.pairplot(self.df, hue="a", vars=vars, markers=markers[:-2]) def test_corner_despine(self): g = ag.PairGrid(self.df, corner=True, despine=False) g.map_diag(histplot) assert g.axes[0, 0].spines["top"].get_visible() def test_corner_set(self): g = ag.PairGrid(self.df, corner=True, despine=False) g.set(xlim=(0, 10)) assert g.axes[-1, 0].get_xlim() == (0, 10) def test_legend(self): g1 = ag.pairplot(self.df, hue="a") assert 
isinstance(g1.legend, mpl.legend.Legend) g2 = ag.pairplot(self.df) assert g2.legend is None class TestJointGrid: rs = np.random.RandomState(sum(map(ord, "JointGrid"))) x = rs.randn(100) y = rs.randn(100) x_na = x.copy() x_na[10] = np.nan x_na[20] = np.nan data = pd.DataFrame(dict(x=x, y=y, x_na=x_na)) def test_margin_grid_from_lists(self): g = ag.JointGrid(x=self.x.tolist(), y=self.y.tolist()) npt.assert_array_equal(g.x, self.x) npt.assert_array_equal(g.y, self.y) def test_margin_grid_from_arrays(self): g = ag.JointGrid(x=self.x, y=self.y) npt.assert_array_equal(g.x, self.x) npt.assert_array_equal(g.y, self.y) def test_margin_grid_from_series(self): g = ag.JointGrid(x=self.data.x, y=self.data.y) npt.assert_array_equal(g.x, self.x) npt.assert_array_equal(g.y, self.y) def test_margin_grid_from_dataframe(self): g = ag.JointGrid(x="x", y="y", data=self.data) npt.assert_array_equal(g.x, self.x) npt.assert_array_equal(g.y, self.y) def test_margin_grid_from_dataframe_bad_variable(self): with pytest.raises(ValueError): ag.JointGrid(x="x", y="bad_column", data=self.data) def test_margin_grid_axis_labels(self): g = ag.JointGrid(x="x", y="y", data=self.data) xlabel, ylabel = g.ax_joint.get_xlabel(), g.ax_joint.get_ylabel() assert xlabel == "x" assert ylabel == "y" g.set_axis_labels("x variable", "y variable") xlabel, ylabel = g.ax_joint.get_xlabel(), g.ax_joint.get_ylabel() assert xlabel == "x variable" assert ylabel == "y variable" def test_dropna(self): g = ag.JointGrid(x="x_na", y="y", data=self.data, dropna=False) assert len(g.x) == len(self.x_na) g = ag.JointGrid(x="x_na", y="y", data=self.data, dropna=True) assert len(g.x) == pd.notnull(self.x_na).sum() def test_axlims(self): lim = (-3, 3) g = ag.JointGrid(x="x", y="y", data=self.data, xlim=lim, ylim=lim) assert g.ax_joint.get_xlim() == lim assert g.ax_joint.get_ylim() == lim assert g.ax_marg_x.get_xlim() == lim assert g.ax_marg_y.get_ylim() == lim def test_marginal_ticks(self): g = ag.JointGrid(marginal_ticks=False) 
assert not sum(t.get_visible() for t in g.ax_marg_x.get_yticklabels()) assert not sum(t.get_visible() for t in g.ax_marg_y.get_xticklabels()) g = ag.JointGrid(marginal_ticks=True) assert sum(t.get_visible() for t in g.ax_marg_x.get_yticklabels()) assert sum(t.get_visible() for t in g.ax_marg_y.get_xticklabels()) def test_bivariate_plot(self): g = ag.JointGrid(x="x", y="y", data=self.data) g.plot_joint(plt.plot) x, y = g.ax_joint.lines[0].get_xydata().T npt.assert_array_equal(x, self.x) npt.assert_array_equal(y, self.y) def test_univariate_plot(self): g = ag.JointGrid(x="x", y="x", data=self.data) g.plot_marginals(kdeplot) _, y1 = g.ax_marg_x.lines[0].get_xydata().T y2, _ = g.ax_marg_y.lines[0].get_xydata().T npt.assert_array_equal(y1, y2) def test_univariate_plot_distplot(self): bins = 10 g = ag.JointGrid(x="x", y="x", data=self.data) with pytest.warns(FutureWarning): g.plot_marginals(distplot, bins=bins) assert len(g.ax_marg_x.patches) == bins assert len(g.ax_marg_y.patches) == bins for x, y in zip(g.ax_marg_x.patches, g.ax_marg_y.patches): assert x.get_height() == y.get_width() def test_univariate_plot_matplotlib(self): bins = 10 g = ag.JointGrid(x="x", y="x", data=self.data) g.plot_marginals(plt.hist, bins=bins) assert len(g.ax_marg_x.patches) == bins assert len(g.ax_marg_y.patches) == bins def test_plot(self): g = ag.JointGrid(x="x", y="x", data=self.data) g.plot(plt.plot, kdeplot) x, y = g.ax_joint.lines[0].get_xydata().T npt.assert_array_equal(x, self.x) npt.assert_array_equal(y, self.x) _, y1 = g.ax_marg_x.lines[0].get_xydata().T y2, _ = g.ax_marg_y.lines[0].get_xydata().T npt.assert_array_equal(y1, y2) def test_space(self): g = ag.JointGrid(x="x", y="y", data=self.data, space=0) joint_bounds = g.ax_joint.bbox.bounds marg_x_bounds = g.ax_marg_x.bbox.bounds marg_y_bounds = g.ax_marg_y.bbox.bounds assert joint_bounds[2] == marg_x_bounds[2] assert joint_bounds[3] == marg_y_bounds[3] @pytest.mark.parametrize( "as_vector", [True, False], ) def test_hue(self, 
long_df, as_vector): if as_vector: data = None x, y, hue = long_df["x"], long_df["y"], long_df["a"] else: data = long_df x, y, hue = "x", "y", "a" g = ag.JointGrid(data=data, x=x, y=y, hue=hue) g.plot_joint(scatterplot) g.plot_marginals(histplot) g2 = ag.JointGrid() scatterplot(data=long_df, x=x, y=y, hue=hue, ax=g2.ax_joint) histplot(data=long_df, x=x, hue=hue, ax=g2.ax_marg_x) histplot(data=long_df, y=y, hue=hue, ax=g2.ax_marg_y) assert_plots_equal(g.ax_joint, g2.ax_joint) assert_plots_equal(g.ax_marg_x, g2.ax_marg_x, labels=False) assert_plots_equal(g.ax_marg_y, g2.ax_marg_y, labels=False) def test_refline(self): g = ag.JointGrid(x="x", y="y", data=self.data) g.plot(scatterplot, histplot) g.refline() assert not g.ax_joint.lines and not g.ax_marg_x.lines and not g.ax_marg_y.lines refx = refy = 0.5 hline = np.array([[0, refy], [1, refy]]) vline = np.array([[refx, 0], [refx, 1]]) g.refline(x=refx, y=refy, joint=False, marginal=False) assert not g.ax_joint.lines and not g.ax_marg_x.lines and not g.ax_marg_y.lines g.refline(x=refx, y=refy) assert g.ax_joint.lines[0].get_color() == '.5' assert g.ax_joint.lines[0].get_linestyle() == '--' assert len(g.ax_joint.lines) == 2 assert len(g.ax_marg_x.lines) == 1 assert len(g.ax_marg_y.lines) == 1 npt.assert_array_equal(g.ax_joint.lines[0].get_xydata(), vline) npt.assert_array_equal(g.ax_joint.lines[1].get_xydata(), hline) npt.assert_array_equal(g.ax_marg_x.lines[0].get_xydata(), vline) npt.assert_array_equal(g.ax_marg_y.lines[0].get_xydata(), hline) color, linestyle = 'red', '-' g.refline(x=refx, marginal=False, color=color, linestyle=linestyle) npt.assert_array_equal(g.ax_joint.lines[-1].get_xydata(), vline) assert g.ax_joint.lines[-1].get_color() == color assert g.ax_joint.lines[-1].get_linestyle() == linestyle assert len(g.ax_marg_x.lines) == len(g.ax_marg_y.lines) g.refline(x=refx, joint=False) npt.assert_array_equal(g.ax_marg_x.lines[-1].get_xydata(), vline) assert len(g.ax_marg_x.lines) == len(g.ax_marg_y.lines) + 1 
g.refline(y=refy, joint=False) npt.assert_array_equal(g.ax_marg_y.lines[-1].get_xydata(), hline) assert len(g.ax_marg_x.lines) == len(g.ax_marg_y.lines) g.refline(y=refy, marginal=False) npt.assert_array_equal(g.ax_joint.lines[-1].get_xydata(), hline) assert len(g.ax_marg_x.lines) == len(g.ax_marg_y.lines) class TestJointPlot: rs = np.random.RandomState(sum(map(ord, "jointplot"))) x = rs.randn(100) y = rs.randn(100) data = pd.DataFrame(dict(x=x, y=y)) def test_scatter(self): g = ag.jointplot(x="x", y="y", data=self.data) assert len(g.ax_joint.collections) == 1 x, y = g.ax_joint.collections[0].get_offsets().T assert_array_equal(self.x, x) assert_array_equal(self.y, y) assert_array_equal( [b.get_x() for b in g.ax_marg_x.patches], np.histogram_bin_edges(self.x, "auto")[:-1], ) assert_array_equal( [b.get_y() for b in g.ax_marg_y.patches], np.histogram_bin_edges(self.y, "auto")[:-1], ) def test_scatter_hue(self, long_df): g1 = ag.jointplot(data=long_df, x="x", y="y", hue="a") g2 = ag.JointGrid() scatterplot(data=long_df, x="x", y="y", hue="a", ax=g2.ax_joint) kdeplot(data=long_df, x="x", hue="a", ax=g2.ax_marg_x, fill=True) kdeplot(data=long_df, y="y", hue="a", ax=g2.ax_marg_y, fill=True) assert_plots_equal(g1.ax_joint, g2.ax_joint) assert_plots_equal(g1.ax_marg_x, g2.ax_marg_x, labels=False) assert_plots_equal(g1.ax_marg_y, g2.ax_marg_y, labels=False) def test_reg(self): g = ag.jointplot(x="x", y="y", data=self.data, kind="reg") assert len(g.ax_joint.collections) == 2 x, y = g.ax_joint.collections[0].get_offsets().T assert_array_equal(self.x, x) assert_array_equal(self.y, y) assert g.ax_marg_x.patches assert g.ax_marg_y.patches assert g.ax_marg_x.lines assert g.ax_marg_y.lines def test_resid(self): g = ag.jointplot(x="x", y="y", data=self.data, kind="resid") assert g.ax_joint.collections assert g.ax_joint.lines assert not g.ax_marg_x.lines assert not g.ax_marg_y.lines def test_hist(self, long_df): bins = 3, 6 g1 = ag.jointplot(data=long_df, x="x", y="y", kind="hist", 
bins=bins) g2 = ag.JointGrid() histplot(data=long_df, x="x", y="y", ax=g2.ax_joint, bins=bins) histplot(data=long_df, x="x", ax=g2.ax_marg_x, bins=bins[0]) histplot(data=long_df, y="y", ax=g2.ax_marg_y, bins=bins[1]) assert_plots_equal(g1.ax_joint, g2.ax_joint) assert_plots_equal(g1.ax_marg_x, g2.ax_marg_x, labels=False) assert_plots_equal(g1.ax_marg_y, g2.ax_marg_y, labels=False) def test_hex(self): g = ag.jointplot(x="x", y="y", data=self.data, kind="hex") assert g.ax_joint.collections assert g.ax_marg_x.patches assert g.ax_marg_y.patches def test_kde(self, long_df): g1 = ag.jointplot(data=long_df, x="x", y="y", kind="kde") g2 = ag.JointGrid() kdeplot(data=long_df, x="x", y="y", ax=g2.ax_joint) kdeplot(data=long_df, x="x", ax=g2.ax_marg_x) kdeplot(data=long_df, y="y", ax=g2.ax_marg_y) assert_plots_equal(g1.ax_joint, g2.ax_joint) assert_plots_equal(g1.ax_marg_x, g2.ax_marg_x, labels=False) assert_plots_equal(g1.ax_marg_y, g2.ax_marg_y, labels=False) def test_kde_hue(self, long_df): g1 = ag.jointplot(data=long_df, x="x", y="y", hue="a", kind="kde") g2 = ag.JointGrid() kdeplot(data=long_df, x="x", y="y", hue="a", ax=g2.ax_joint) kdeplot(data=long_df, x="x", hue="a", ax=g2.ax_marg_x) kdeplot(data=long_df, y="y", hue="a", ax=g2.ax_marg_y) assert_plots_equal(g1.ax_joint, g2.ax_joint) assert_plots_equal(g1.ax_marg_x, g2.ax_marg_x, labels=False) assert_plots_equal(g1.ax_marg_y, g2.ax_marg_y, labels=False) def test_color(self): g = ag.jointplot(x="x", y="y", data=self.data, color="purple") purple = mpl.colors.colorConverter.to_rgb("purple") scatter_color = g.ax_joint.collections[0].get_facecolor()[0, :3] assert tuple(scatter_color) == purple hist_color = g.ax_marg_x.patches[0].get_facecolor()[:3] assert hist_color == purple def test_palette(self, long_df): kws = dict(data=long_df, hue="a", palette="Set2") g1 = ag.jointplot(x="x", y="y", **kws) g2 = ag.JointGrid() scatterplot(x="x", y="y", ax=g2.ax_joint, **kws) kdeplot(x="x", ax=g2.ax_marg_x, fill=True, **kws) 
kdeplot(y="y", ax=g2.ax_marg_y, fill=True, **kws) assert_plots_equal(g1.ax_joint, g2.ax_joint) assert_plots_equal(g1.ax_marg_x, g2.ax_marg_x, labels=False) assert_plots_equal(g1.ax_marg_y, g2.ax_marg_y, labels=False) def test_hex_customise(self): # test that default gridsize can be overridden g = ag.jointplot(x="x", y="y", data=self.data, kind="hex", joint_kws=dict(gridsize=5)) assert len(g.ax_joint.collections) == 1 a = g.ax_joint.collections[0].get_array() assert a.shape[0] == 28 # 28 hexagons expected for gridsize 5 def test_bad_kind(self): with pytest.raises(ValueError): ag.jointplot(x="x", y="y", data=self.data, kind="not_a_kind") def test_unsupported_hue_kind(self): for kind in ["reg", "resid", "hex"]: with pytest.raises(ValueError): ag.jointplot(x="x", y="y", hue="a", data=self.data, kind=kind) def test_leaky_dict(self): # Validate input dicts are unchanged by jointplot plotting function for kwarg in ("joint_kws", "marginal_kws"): for kind in ("hex", "kde", "resid", "reg", "scatter"): empty_dict = {} ag.jointplot(x="x", y="y", data=self.data, kind=kind, **{kwarg: empty_dict}) assert empty_dict == {} def test_distplot_kwarg_warning(self, long_df): with pytest.warns(UserWarning): g = ag.jointplot(data=long_df, x="x", y="y", marginal_kws=dict(rug=True)) assert g.ax_marg_x.patches seaborn-0.11.2/seaborn/tests/test_categorical.py000066400000000000000000003143211410631356500217150ustar00rootroot00000000000000import numpy as np import pandas as pd from scipy import stats, spatial import matplotlib as mpl import matplotlib.pyplot as plt from matplotlib.colors import rgb2hex import pytest from pytest import approx import numpy.testing as npt from distutils.version import LooseVersion from .. import categorical as cat from .. 
import palettes class CategoricalFixture: """Test boxplot (also base class for things like violinplots).""" rs = np.random.RandomState(30) n_total = 60 x = rs.randn(int(n_total / 3), 3) x_df = pd.DataFrame(x, columns=pd.Series(list("XYZ"), name="big")) y = pd.Series(rs.randn(n_total), name="y_data") y_perm = y.reindex(rs.choice(y.index, y.size, replace=False)) g = pd.Series(np.repeat(list("abc"), int(n_total / 3)), name="small") h = pd.Series(np.tile(list("mn"), int(n_total / 2)), name="medium") u = pd.Series(np.tile(list("jkh"), int(n_total / 3))) df = pd.DataFrame(dict(y=y, g=g, h=h, u=u)) x_df["W"] = g class TestCategoricalPlotter(CategoricalFixture): def test_wide_df_data(self): p = cat._CategoricalPlotter() # Test basic wide DataFrame p.establish_variables(data=self.x_df) # Check data attribute for x, y, in zip(p.plot_data, self.x_df[["X", "Y", "Z"]].values.T): npt.assert_array_equal(x, y) # Check semantic attributes assert p.orient == "v" assert p.plot_hues is None assert p.group_label == "big" assert p.value_label is None # Test wide dataframe with forced horizontal orientation p.establish_variables(data=self.x_df, orient="horiz") assert p.orient == "h" # Test exception by trying to hue-group with a wide dataframe with pytest.raises(ValueError): p.establish_variables(hue="d", data=self.x_df) def test_1d_input_data(self): p = cat._CategoricalPlotter() # Test basic vector data x_1d_array = self.x.ravel() p.establish_variables(data=x_1d_array) assert len(p.plot_data) == 1 assert len(p.plot_data[0]) == self.n_total assert p.group_label is None assert p.value_label is None # Test basic vector data in list form x_1d_list = x_1d_array.tolist() p.establish_variables(data=x_1d_list) assert len(p.plot_data) == 1 assert len(p.plot_data[0]) == self.n_total assert p.group_label is None assert p.value_label is None # Test an object array that looks 1D but isn't x_notreally_1d = np.array([self.x.ravel(), self.x.ravel()[:int(self.n_total / 2)]], dtype=object) 
        p.establish_variables(data=x_notreally_1d)
        assert len(p.plot_data) == 2
        assert len(p.plot_data[0]) == self.n_total
        assert len(p.plot_data[1]) == self.n_total / 2
        assert p.group_label is None
        assert p.value_label is None

    def test_2d_input_data(self):
        p = cat._CategoricalPlotter()

        x = self.x[:, 0]

        # Test vector data that looks 2D but doesn't really have columns
        p.establish_variables(data=x[:, np.newaxis])
        assert len(p.plot_data) == 1
        assert len(p.plot_data[0]) == self.x.shape[0]
        assert p.group_label is None
        assert p.value_label is None

        # Test vector data that looks 2D but doesn't really have rows
        p.establish_variables(data=x[np.newaxis, :])
        assert len(p.plot_data) == 1
        assert len(p.plot_data[0]) == self.x.shape[0]
        assert p.group_label is None
        assert p.value_label is None

    def test_3d_input_data(self):
        p = cat._CategoricalPlotter()

        # Test that passing actually 3D data raises
        x = np.zeros((5, 5, 5))
        with pytest.raises(ValueError):
            p.establish_variables(data=x)

    def test_list_of_array_input_data(self):
        p = cat._CategoricalPlotter()

        # Test 2D input in list form
        x_list = self.x.T.tolist()
        p.establish_variables(data=x_list)
        assert len(p.plot_data) == 3
        lengths = [len(v_i) for v_i in p.plot_data]
        assert lengths == [self.n_total / 3] * 3
        assert p.group_label is None
        assert p.value_label is None

    def test_wide_array_input_data(self):
        p = cat._CategoricalPlotter()

        # Test 2D input in array form
        p.establish_variables(data=self.x)
        assert np.shape(p.plot_data) == (3, self.n_total / 3)
        npt.assert_array_equal(p.plot_data, self.x.T)
        assert p.group_label is None
        assert p.value_label is None

    def test_single_long_direct_inputs(self):
        p = cat._CategoricalPlotter()

        # Test passing a series to the x variable
        p.establish_variables(x=self.y)
        npt.assert_equal(p.plot_data, [self.y])
        assert p.orient == "h"
        assert p.value_label == "y_data"
        assert p.group_label is None

        # Test passing a series to the y variable
        p.establish_variables(y=self.y)
        npt.assert_equal(p.plot_data, [self.y])
        assert p.orient == "v"
        assert p.value_label == "y_data"
        assert p.group_label is None

        # Test passing an array to the y variable
        p.establish_variables(y=self.y.values)
        npt.assert_equal(p.plot_data, [self.y])
        assert p.orient == "v"
        assert p.group_label is None
        assert p.value_label is None

        # Test array and series with non-default index
        x = pd.Series([1, 1, 1, 1], index=[0, 2, 4, 6])
        y = np.array([1, 2, 3, 4])
        p.establish_variables(x, y)
        assert len(p.plot_data[0]) == 4

    def test_single_long_indirect_inputs(self):
        p = cat._CategoricalPlotter()

        # Test referencing a DataFrame series in the x variable
        p.establish_variables(x="y", data=self.df)
        npt.assert_equal(p.plot_data, [self.y])
        assert p.orient == "h"
        assert p.value_label == "y"
        assert p.group_label is None

        # Test referencing a DataFrame series in the y variable
        p.establish_variables(y="y", data=self.df)
        npt.assert_equal(p.plot_data, [self.y])
        assert p.orient == "v"
        assert p.value_label == "y"
        assert p.group_label is None

    def test_longform_groupby(self):
        p = cat._CategoricalPlotter()

        # Test a vertically oriented grouped and nested plot
        p.establish_variables("g", "y", hue="h", data=self.df)
        assert len(p.plot_data) == 3
        assert len(p.plot_hues) == 3
        assert p.orient == "v"
        assert p.value_label == "y"
        assert p.group_label == "g"
        assert p.hue_title == "h"

        for group, vals in zip(["a", "b", "c"], p.plot_data):
            npt.assert_array_equal(vals, self.y[self.g == group])

        for group, hues in zip(["a", "b", "c"], p.plot_hues):
            npt.assert_array_equal(hues, self.h[self.g == group])

        # Test a grouped and nested plot with direct array value data
        p.establish_variables("g", self.y.values, "h", self.df)
        assert p.value_label is None
        assert p.group_label == "g"

        for group, vals in zip(["a", "b", "c"], p.plot_data):
            npt.assert_array_equal(vals, self.y[self.g == group])

        # Test a grouped and nested plot with direct array hue data
        p.establish_variables("g", "y", self.h.values, self.df)

        for group, hues in zip(["a", "b", "c"], p.plot_hues):
            npt.assert_array_equal(hues, self.h[self.g == group])

        # Test categorical grouping data
        df = self.df.copy()
        df.g = df.g.astype("category")

        # Test that horizontal orientation is automatically detected
        p.establish_variables("y", "g", hue="h", data=df)
        assert len(p.plot_data) == 3
        assert len(p.plot_hues) == 3
        assert p.orient == "h"
        assert p.value_label == "y"
        assert p.group_label == "g"
        assert p.hue_title == "h"

        for group, vals in zip(["a", "b", "c"], p.plot_data):
            npt.assert_array_equal(vals, self.y[self.g == group])

        for group, hues in zip(["a", "b", "c"], p.plot_hues):
            npt.assert_array_equal(hues, self.h[self.g == group])

        # Test grouped data that matches on index
        p1 = cat._CategoricalPlotter()
        p1.establish_variables(self.g, self.y, hue=self.h)
        p2 = cat._CategoricalPlotter()
        p2.establish_variables(self.g, self.y[::-1], self.h)
        for i, (d1, d2) in enumerate(zip(p1.plot_data, p2.plot_data)):
            assert np.array_equal(d1.sort_index(), d2.sort_index())

    def test_input_validation(self):
        p = cat._CategoricalPlotter()

        kws = dict(x="g", y="y", hue="h", units="u", data=self.df)
        for var in ["x", "y", "hue", "units"]:
            input_kws = kws.copy()
            input_kws[var] = "bad_input"
            with pytest.raises(ValueError):
                p.establish_variables(**input_kws)

    def test_order(self):
        p = cat._CategoricalPlotter()

        # Test inferred order from a wide dataframe input
        p.establish_variables(data=self.x_df)
        assert p.group_names == ["X", "Y", "Z"]

        # Test specified order with a wide dataframe input
        p.establish_variables(data=self.x_df, order=["Y", "Z", "X"])
        assert p.group_names == ["Y", "Z", "X"]

        for group, vals in zip(["Y", "Z", "X"], p.plot_data):
            npt.assert_array_equal(vals, self.x_df[group])

        with pytest.raises(ValueError):
            p.establish_variables(data=self.x, order=[1, 2, 0])

        # Test inferred order from a grouped longform input
        p.establish_variables("g", "y", data=self.df)
        assert p.group_names == ["a", "b", "c"]

        # Test specified order from a grouped longform input
        p.establish_variables("g", "y", data=self.df, order=["b", "a", "c"])
        assert p.group_names == ["b", "a", "c"]

        for group, vals in zip(["b", "a", "c"], p.plot_data):
            npt.assert_array_equal(vals, self.y[self.g == group])

        # Test inferred order from a grouped input with categorical groups
        df = self.df.copy()
        df.g = df.g.astype("category")
        df.g = df.g.cat.reorder_categories(["c", "b", "a"])
        p.establish_variables("g", "y", data=df)
        assert p.group_names == ["c", "b", "a"]

        for group, vals in zip(["c", "b", "a"], p.plot_data):
            npt.assert_array_equal(vals, self.y[self.g == group])

        df.g = (df.g.cat.add_categories("d")
                .cat.reorder_categories(["c", "b", "d", "a"]))
        p.establish_variables("g", "y", data=df)
        assert p.group_names == ["c", "b", "d", "a"]

    def test_hue_order(self):
        p = cat._CategoricalPlotter()

        # Test inferred hue order
        p.establish_variables("g", "y", hue="h", data=self.df)
        assert p.hue_names == ["m", "n"]

        # Test specified hue order
        p.establish_variables("g", "y", hue="h", data=self.df,
                              hue_order=["n", "m"])
        assert p.hue_names == ["n", "m"]

        # Test inferred hue order from a categorical hue input
        df = self.df.copy()
        df.h = df.h.astype("category")
        df.h = df.h.cat.reorder_categories(["n", "m"])
        p.establish_variables("g", "y", hue="h", data=df)
        assert p.hue_names == ["n", "m"]

        df.h = (df.h.cat.add_categories("o")
                .cat.reorder_categories(["o", "m", "n"]))
        p.establish_variables("g", "y", hue="h", data=df)
        assert p.hue_names == ["o", "m", "n"]

    def test_plot_units(self):
        p = cat._CategoricalPlotter()
        p.establish_variables("g", "y", hue="h", data=self.df)
        assert p.plot_units is None

        p.establish_variables("g", "y", hue="h", data=self.df, units="u")
        for group, units in zip(["a", "b", "c"], p.plot_units):
            npt.assert_array_equal(units, self.u[self.g == group])

    def test_default_palettes(self):
        p = cat._CategoricalPlotter()

        # Test palette mapping the x position
        p.establish_variables("g", "y", data=self.df)
        p.establish_colors(None, None, 1)
        assert p.colors == palettes.color_palette(n_colors=3)

        # Test palette mapping the hue position
        p.establish_variables("g", "y", hue="h", data=self.df)
        p.establish_colors(None, None, 1)
        assert p.colors == palettes.color_palette(n_colors=2)

    def test_default_palette_with_many_levels(self):
        with palettes.color_palette(["blue", "red"], 2):
            p = cat._CategoricalPlotter()
            p.establish_variables("g", "y", data=self.df)
            p.establish_colors(None, None, 1)
            npt.assert_array_equal(p.colors,
                                   palettes.husl_palette(3, l=.7))  # noqa

    def test_specific_color(self):
        p = cat._CategoricalPlotter()

        # Test the same color for each x position
        p.establish_variables("g", "y", data=self.df)
        p.establish_colors("blue", None, 1)
        blue_rgb = mpl.colors.colorConverter.to_rgb("blue")
        assert p.colors == [blue_rgb] * 3

        # Test a color-based blend for the hue mapping
        p.establish_variables("g", "y", hue="h", data=self.df)
        p.establish_colors("#ff0022", None, 1)
        rgba_array = np.array(palettes.light_palette("#ff0022", 2))
        npt.assert_array_almost_equal(p.colors, rgba_array[:, :3])

    def test_specific_palette(self):
        p = cat._CategoricalPlotter()

        # Test palette mapping the x position
        p.establish_variables("g", "y", data=self.df)
        p.establish_colors(None, "dark", 1)
        assert p.colors == palettes.color_palette("dark", 3)

        # Test palette mapping the hue position
        p.establish_variables("g", "y", hue="h", data=self.df)
        p.establish_colors(None, "muted", 1)
        assert p.colors == palettes.color_palette("muted", 2)

        # Test that specified palette overrides specified color
        p = cat._CategoricalPlotter()
        p.establish_variables("g", "y", data=self.df)
        p.establish_colors("blue", "deep", 1)
        assert p.colors == palettes.color_palette("deep", 3)

    def test_dict_as_palette(self):
        p = cat._CategoricalPlotter()
        p.establish_variables("g", "y", hue="h", data=self.df)
        pal = {"m": (0, 0, 1), "n": (1, 0, 0)}
        p.establish_colors(None, pal, 1)
        assert p.colors == [(0, 0, 1), (1, 0, 0)]

    def test_palette_desaturation(self):
        p = cat._CategoricalPlotter()
        p.establish_variables("g", "y", data=self.df)

        p.establish_colors((0, 0, 1), None, .5)
        assert p.colors == [(.25, .25, .75)] * 3

        p.establish_colors(None, [(0, 0, 1), (1, 0, 0), "w"], .5)
        assert p.colors == [(.25, .25, .75), (.75, .25, .25), (1, 1, 1)]


class TestCategoricalStatPlotter(CategoricalFixture):

    def test_no_bootstrapping(self):
        p = cat._CategoricalStatPlotter()
        p.establish_variables("g", "y", data=self.df)
        p.estimate_statistic(np.mean, None, 100, None)
        npt.assert_array_equal(p.confint, np.array([]))

        p.establish_variables("g", "y", hue="h", data=self.df)
        p.estimate_statistic(np.mean, None, 100, None)
        npt.assert_array_equal(p.confint, np.array([[], [], []]))

    def test_single_layer_stats(self):
        p = cat._CategoricalStatPlotter()

        g = pd.Series(np.repeat(list("abc"), 100))
        y = pd.Series(np.random.RandomState(0).randn(300))

        p.establish_variables(g, y)
        p.estimate_statistic(np.mean, 95, 10000, None)

        assert p.statistic.shape == (3,)
        assert p.confint.shape == (3, 2)

        npt.assert_array_almost_equal(p.statistic, y.groupby(g).mean())

        for ci, (_, grp_y) in zip(p.confint, y.groupby(g)):
            sem = stats.sem(grp_y)
            mean = grp_y.mean()
            half_ci = stats.norm.ppf(.975) * sem
            ci_want = mean - half_ci, mean + half_ci
            npt.assert_array_almost_equal(ci_want, ci, 2)

    def test_single_layer_stats_with_units(self):
        p = cat._CategoricalStatPlotter()

        g = pd.Series(np.repeat(list("abc"), 90))
        y = pd.Series(np.random.RandomState(0).randn(270))
        u = pd.Series(np.repeat(np.tile(list("xyz"), 30), 3))
        y[u == "x"] -= 3
        y[u == "y"] += 3

        p.establish_variables(g, y)
        p.estimate_statistic(np.mean, 95, 10000, None)
        stat1, ci1 = p.statistic, p.confint

        p.establish_variables(g, y, units=u)
        p.estimate_statistic(np.mean, 95, 10000, None)
        stat2, ci2 = p.statistic, p.confint

        npt.assert_array_equal(stat1, stat2)
        ci1_size = ci1[:, 1] - ci1[:, 0]
        ci2_size = ci2[:, 1] - ci2[:, 0]
        npt.assert_array_less(ci1_size, ci2_size)

    def test_single_layer_stats_with_missing_data(self):
        p = cat._CategoricalStatPlotter()

        g = pd.Series(np.repeat(list("abc"), 100))
        y = pd.Series(np.random.RandomState(0).randn(300))
        p.establish_variables(g, y, order=list("abdc"))
        p.estimate_statistic(np.mean, 95, 10000, None)

        assert p.statistic.shape == (4,)
        assert p.confint.shape == (4, 2)

        mean = y[g == "b"].mean()
        sem = stats.sem(y[g == "b"])
        half_ci = stats.norm.ppf(.975) * sem
        ci = mean - half_ci, mean + half_ci
        npt.assert_almost_equal(p.statistic[1], mean)
        npt.assert_array_almost_equal(p.confint[1], ci, 2)

        npt.assert_equal(p.statistic[2], np.nan)
        npt.assert_array_equal(p.confint[2], (np.nan, np.nan))

    def test_nested_stats(self):
        p = cat._CategoricalStatPlotter()

        g = pd.Series(np.repeat(list("abc"), 100))
        h = pd.Series(np.tile(list("xy"), 150))
        y = pd.Series(np.random.RandomState(0).randn(300))

        p.establish_variables(g, y, h)
        p.estimate_statistic(np.mean, 95, 50000, None)

        assert p.statistic.shape == (3, 2)
        assert p.confint.shape == (3, 2, 2)

        npt.assert_array_almost_equal(p.statistic,
                                      y.groupby([g, h]).mean().unstack())

        for ci_g, (_, grp_y) in zip(p.confint, y.groupby(g)):
            for ci, hue_y in zip(ci_g, [grp_y[::2], grp_y[1::2]]):
                sem = stats.sem(hue_y)
                mean = hue_y.mean()
                half_ci = stats.norm.ppf(.975) * sem
                ci_want = mean - half_ci, mean + half_ci
                npt.assert_array_almost_equal(ci_want, ci, 2)

    def test_bootstrap_seed(self):
        p = cat._CategoricalStatPlotter()

        g = pd.Series(np.repeat(list("abc"), 100))
        h = pd.Series(np.tile(list("xy"), 150))
        y = pd.Series(np.random.RandomState(0).randn(300))

        p.establish_variables(g, y, h)
        p.estimate_statistic(np.mean, 95, 1000, 0)
        confint_1 = p.confint
        p.estimate_statistic(np.mean, 95, 1000, 0)
        confint_2 = p.confint

        npt.assert_array_equal(confint_1, confint_2)

    def test_nested_stats_with_units(self):
        p = cat._CategoricalStatPlotter()

        g = pd.Series(np.repeat(list("abc"), 90))
        h = pd.Series(np.tile(list("xy"), 135))
        u = pd.Series(np.repeat(list("ijkijk"), 45))
        y = pd.Series(np.random.RandomState(0).randn(270))
        y[u == "i"] -= 3
        y[u == "k"] += 3

        p.establish_variables(g, y, h)
        p.estimate_statistic(np.mean, 95, 10000, None)
        stat1, ci1 = p.statistic, p.confint

        p.establish_variables(g, y, h, units=u)
        p.estimate_statistic(np.mean, 95, 10000, None)
        stat2, ci2 = p.statistic, p.confint

        npt.assert_array_equal(stat1, stat2)
        ci1_size = ci1[:, 0, 1] - ci1[:, 0, 0]
        ci2_size = ci2[:, 0, 1] - ci2[:, 0, 0]
        npt.assert_array_less(ci1_size, ci2_size)

    def test_nested_stats_with_missing_data(self):
        p = cat._CategoricalStatPlotter()

        g = pd.Series(np.repeat(list("abc"), 100))
        y = pd.Series(np.random.RandomState(0).randn(300))
        h = pd.Series(np.tile(list("xy"), 150))

        p.establish_variables(g, y, h,
                              order=list("abdc"), hue_order=list("zyx"))
        p.estimate_statistic(np.mean, 95, 50000, None)

        assert p.statistic.shape == (4, 3)
        assert p.confint.shape == (4, 3, 2)

        mean = y[(g == "b") & (h == "x")].mean()
        sem = stats.sem(y[(g == "b") & (h == "x")])
        half_ci = stats.norm.ppf(.975) * sem
        ci = mean - half_ci, mean + half_ci
        npt.assert_almost_equal(p.statistic[1, 2], mean)
        npt.assert_array_almost_equal(p.confint[1, 2], ci, 2)

        npt.assert_array_equal(p.statistic[:, 0], [np.nan] * 4)
        npt.assert_array_equal(p.statistic[2], [np.nan] * 3)
        npt.assert_array_equal(p.confint[:, 0], np.zeros((4, 2)) * np.nan)
        npt.assert_array_equal(p.confint[2], np.zeros((3, 2)) * np.nan)

    def test_sd_error_bars(self):
        p = cat._CategoricalStatPlotter()

        g = pd.Series(np.repeat(list("abc"), 100))
        y = pd.Series(np.random.RandomState(0).randn(300))

        p.establish_variables(g, y)
        p.estimate_statistic(np.mean, "sd", None, None)

        assert p.statistic.shape == (3,)
        assert p.confint.shape == (3, 2)

        npt.assert_array_almost_equal(p.statistic, y.groupby(g).mean())

        for ci, (_, grp_y) in zip(p.confint, y.groupby(g)):
            mean = grp_y.mean()
            half_ci = np.std(grp_y)
            ci_want = mean - half_ci, mean + half_ci
            npt.assert_array_almost_equal(ci_want, ci, 2)

    def test_nested_sd_error_bars(self):
        p = cat._CategoricalStatPlotter()

        g = pd.Series(np.repeat(list("abc"), 100))
        h = pd.Series(np.tile(list("xy"), 150))
        y = pd.Series(np.random.RandomState(0).randn(300))

        p.establish_variables(g, y, h)
        p.estimate_statistic(np.mean, "sd", None, None)

        assert p.statistic.shape == (3, 2)
        assert p.confint.shape == (3, 2, 2)

        npt.assert_array_almost_equal(p.statistic,
                                      y.groupby([g, h]).mean().unstack())

        for ci_g, (_, grp_y) in zip(p.confint, y.groupby(g)):
            for ci, hue_y in zip(ci_g, [grp_y[::2], grp_y[1::2]]):
                mean = hue_y.mean()
                half_ci = np.std(hue_y)
                ci_want = mean - half_ci, mean + half_ci
                npt.assert_array_almost_equal(ci_want, ci, 2)

    def test_draw_cis(self):
        p = cat._CategoricalStatPlotter()

        # Test vertical CIs
        p.orient = "v"

        f, ax = plt.subplots()
        at_group = [0, 1]
        confints = [(.5, 1.5), (.25, .8)]
        colors = [".2", ".3"]

        p.draw_confints(ax, at_group, confints, colors)

        lines = ax.lines
        for line, at, ci, c in zip(lines, at_group, confints, colors):
            x, y = line.get_xydata().T
            npt.assert_array_equal(x, [at, at])
            npt.assert_array_equal(y, ci)
            assert line.get_color() == c

        plt.close("all")

        # Test horizontal CIs
        p.orient = "h"

        f, ax = plt.subplots()
        p.draw_confints(ax, at_group, confints, colors)

        lines = ax.lines
        for line, at, ci, c in zip(lines, at_group, confints, colors):
            x, y = line.get_xydata().T
            npt.assert_array_equal(x, ci)
            npt.assert_array_equal(y, [at, at])
            assert line.get_color() == c

        plt.close("all")

        # Test vertical CIs with endcaps
        p.orient = "v"

        f, ax = plt.subplots()
        p.draw_confints(ax, at_group, confints, colors, capsize=0.3)
        capline = ax.lines[len(ax.lines) - 1]
        caplinestart = capline.get_xdata()[0]
        caplineend = capline.get_xdata()[1]
        caplinelength = abs(caplineend - caplinestart)
        assert caplinelength == approx(0.3)
        assert len(ax.lines) == 6

        plt.close("all")

        # Test horizontal CIs with endcaps
        p.orient = "h"

        f, ax = plt.subplots()
        p.draw_confints(ax, at_group, confints, colors, capsize=0.3)
        capline = ax.lines[len(ax.lines) - 1]
        caplinestart = capline.get_ydata()[0]
        caplineend = capline.get_ydata()[1]
        caplinelength = abs(caplineend - caplinestart)
        assert caplinelength == approx(0.3)
        assert len(ax.lines) == 6

        # Test extra keyword arguments
        f, ax = plt.subplots()
        p.draw_confints(ax, at_group, confints, colors, lw=4)
        line = ax.lines[0]
        assert line.get_linewidth() == 4

        plt.close("all")

        # Test errwidth is set appropriately
        f, ax = plt.subplots()
        p.draw_confints(ax, at_group, confints, colors, errwidth=2)
        capline = ax.lines[len(ax.lines) - 1]
        assert capline._linewidth == 2
        assert len(ax.lines) == 2

        plt.close("all")


class TestBoxPlotter(CategoricalFixture):

    default_kws = dict(x=None, y=None, hue=None, data=None,
                       order=None, hue_order=None,
                       orient=None, color=None, palette=None,
                       saturation=.75, width=.8, dodge=True,
                       fliersize=5, linewidth=None)

    def test_nested_width(self):
        kws = self.default_kws.copy()
        p = cat._BoxPlotter(**kws)
        p.establish_variables("g", "y", hue="h", data=self.df)
        assert p.nested_width == .4 * .98

        kws = self.default_kws.copy()
        kws["width"] = .6
        p = cat._BoxPlotter(**kws)
        p.establish_variables("g", "y", hue="h", data=self.df)
        assert p.nested_width == .3 * .98

        kws = self.default_kws.copy()
        kws["dodge"] = False
        p = cat._BoxPlotter(**kws)
        p.establish_variables("g", "y", hue="h", data=self.df)
        assert p.nested_width == .8

    def test_hue_offsets(self):
        p = cat._BoxPlotter(**self.default_kws)
        p.establish_variables("g", "y", hue="h", data=self.df)
        npt.assert_array_equal(p.hue_offsets, [-.2, .2])

        kws = self.default_kws.copy()
        kws["width"] = .6
        p = cat._BoxPlotter(**kws)
        p.establish_variables("g", "y", hue="h", data=self.df)
        npt.assert_array_equal(p.hue_offsets, [-.15, .15])

        p = cat._BoxPlotter(**kws)
        p.establish_variables("h", "y", "g", data=self.df)
        npt.assert_array_almost_equal(p.hue_offsets, [-.2, 0, .2])

    def test_axes_data(self):
        ax = cat.boxplot(x="g", y="y", data=self.df)
        assert len(ax.artists) == 3

        plt.close("all")

        ax = cat.boxplot(x="g", y="y", hue="h", data=self.df)
        assert len(ax.artists) == 6

        plt.close("all")

    def test_box_colors(self):
        ax = cat.boxplot(x="g", y="y", data=self.df, saturation=1)
        pal = palettes.color_palette(n_colors=3)
        for patch, color in zip(ax.artists, pal):
            assert patch.get_facecolor()[:3] == color

        plt.close("all")

        ax = cat.boxplot(x="g", y="y", hue="h", data=self.df, saturation=1)
        pal = palettes.color_palette(n_colors=2)
        for patch, color in zip(ax.artists, pal * 2):
            assert patch.get_facecolor()[:3] == color

        plt.close("all")

    def test_draw_missing_boxes(self):
        ax = cat.boxplot(x="g", y="y", data=self.df,
                         order=["a", "b", "c", "d"])
        assert len(ax.artists) == 3

    def test_missing_data(self):
        x = ["a", "a", "b", "b", "c", "c", "d", "d"]
        h = ["x", "y", "x", "y", "x", "y", "x", "y"]
        y = self.rs.randn(8)
        y[-2:] = np.nan

        ax = cat.boxplot(x=x, y=y)
        assert len(ax.artists) == 3

        plt.close("all")

        y[-1] = 0
        ax = cat.boxplot(x=x, y=y, hue=h)
        assert len(ax.artists) == 7

        plt.close("all")

    def test_unaligned_index(self):
        f, (ax1, ax2) = plt.subplots(2)
        cat.boxplot(x=self.g, y=self.y, ax=ax1)
        cat.boxplot(x=self.g, y=self.y_perm, ax=ax2)
        for l1, l2 in zip(ax1.lines, ax2.lines):
            assert np.array_equal(l1.get_xydata(), l2.get_xydata())

        f, (ax1, ax2) = plt.subplots(2)
        hue_order = self.h.unique()
        cat.boxplot(x=self.g, y=self.y, hue=self.h,
                    hue_order=hue_order, ax=ax1)
        cat.boxplot(x=self.g, y=self.y_perm, hue=self.h,
                    hue_order=hue_order, ax=ax2)
        for l1, l2 in zip(ax1.lines, ax2.lines):
            assert np.array_equal(l1.get_xydata(), l2.get_xydata())

    def test_boxplots(self):
        # Smoke test the high level boxplot options
        cat.boxplot(x="y", data=self.df)
        plt.close("all")

        cat.boxplot(y="y", data=self.df)
        plt.close("all")

        cat.boxplot(x="g", y="y", data=self.df)
        plt.close("all")

        cat.boxplot(x="y", y="g", data=self.df, orient="h")
        plt.close("all")

        cat.boxplot(x="g", y="y", hue="h", data=self.df)
        plt.close("all")

        cat.boxplot(x="g", y="y", hue="h", order=list("nabc"), data=self.df)
        plt.close("all")

        cat.boxplot(x="g", y="y", hue="h", hue_order=list("omn"), data=self.df)
        plt.close("all")

        cat.boxplot(x="y", y="g", hue="h", data=self.df, orient="h")
        plt.close("all")

    def test_axes_annotation(self):
        ax = cat.boxplot(x="g", y="y", data=self.df)
        assert ax.get_xlabel() == "g"
        assert ax.get_ylabel() == "y"
        assert ax.get_xlim() == (-.5, 2.5)
        npt.assert_array_equal(ax.get_xticks(), [0, 1, 2])
        npt.assert_array_equal([l.get_text() for l in ax.get_xticklabels()],
                               ["a", "b", "c"])

        plt.close("all")

        ax = cat.boxplot(x="g", y="y", hue="h", data=self.df)
        assert ax.get_xlabel() == "g"
        assert ax.get_ylabel() == "y"
        npt.assert_array_equal(ax.get_xticks(), [0, 1, 2])
        npt.assert_array_equal([l.get_text() for l in ax.get_xticklabels()],
                               ["a", "b", "c"])
        npt.assert_array_equal([l.get_text() for l in ax.legend_.get_texts()],
                               ["m", "n"])

        plt.close("all")

        ax = cat.boxplot(x="y", y="g", data=self.df, orient="h")
        assert ax.get_xlabel() == "y"
        assert ax.get_ylabel() == "g"
        assert ax.get_ylim() == (2.5, -.5)
        npt.assert_array_equal(ax.get_yticks(), [0, 1, 2])
        npt.assert_array_equal([l.get_text() for l in ax.get_yticklabels()],
                               ["a", "b", "c"])

        plt.close("all")


class TestViolinPlotter(CategoricalFixture):

    default_kws = dict(x=None, y=None, hue=None, data=None,
                       order=None, hue_order=None,
                       bw="scott", cut=2, scale="area", scale_hue=True,
                       gridsize=100, width=.8, inner="box", split=False,
                       dodge=True, orient=None, linewidth=None,
                       color=None, palette=None, saturation=.75)

    def test_split_error(self):
        kws = self.default_kws.copy()
        kws.update(dict(x="h", y="y", hue="g", data=self.df, split=True))

        with pytest.raises(ValueError):
            cat._ViolinPlotter(**kws)

    def test_no_observations(self):
        p = cat._ViolinPlotter(**self.default_kws)

        x = ["a", "a", "b"]
        y = self.rs.randn(3)
        y[-1] = np.nan
        p.establish_variables(x, y)
        p.estimate_densities("scott", 2, "area", True, 20)

        assert len(p.support[0]) == 20
        assert len(p.support[1]) == 0

        assert len(p.density[0]) == 20
        assert len(p.density[1]) == 1

        assert p.density[1].item() == 1

        p.estimate_densities("scott", 2, "count", True, 20)
        assert p.density[1].item() == 0

        x = ["a"] * 4 + ["b"] * 2
        y = self.rs.randn(6)
        h = ["m", "n"] * 2 + ["m"] * 2

        p.establish_variables(x, y, hue=h)
        p.estimate_densities("scott", 2, "area", True, 20)

        assert len(p.support[1][0]) == 20
        assert len(p.support[1][1]) == 0

        assert len(p.density[1][0]) == 20
        assert len(p.density[1][1]) == 1

        assert p.density[1][1].item() == 1

        p.estimate_densities("scott", 2, "count", False, 20)
        assert p.density[1][1].item() == 0

    def test_single_observation(self):
        p = cat._ViolinPlotter(**self.default_kws)

        x = ["a", "a", "b"]
        y = self.rs.randn(3)
        p.establish_variables(x, y)
        p.estimate_densities("scott", 2, "area", True, 20)

        assert len(p.support[0]) == 20
        assert len(p.support[1]) == 1

        assert len(p.density[0]) == 20
        assert len(p.density[1]) == 1

        assert p.density[1].item() == 1

        p.estimate_densities("scott", 2, "count", True, 20)
        assert p.density[1].item() == .5

        x = ["b"] * 4 + ["a"] * 3
        y = self.rs.randn(7)
        h = (["m", "n"] * 4)[:-1]

        p.establish_variables(x, y, hue=h)
        p.estimate_densities("scott", 2, "area", True, 20)

        assert len(p.support[1][0]) == 20
        assert len(p.support[1][1]) == 1

        assert len(p.density[1][0]) == 20
        assert len(p.density[1][1]) == 1

        assert p.density[1][1].item() == 1

        p.estimate_densities("scott", 2, "count", False, 20)
        assert p.density[1][1].item() == .5

    def test_dwidth(self):
        kws = self.default_kws.copy()
        kws.update(dict(x="g", y="y", data=self.df))

        p = cat._ViolinPlotter(**kws)
        assert p.dwidth == .4

        kws.update(dict(width=.4))
        p = cat._ViolinPlotter(**kws)
        assert p.dwidth == .2

        kws.update(dict(hue="h", width=.8))
        p = cat._ViolinPlotter(**kws)
        assert p.dwidth == .2

        kws.update(dict(split=True))
        p = cat._ViolinPlotter(**kws)
        assert p.dwidth == .4

    def test_scale_area(self):
        kws = self.default_kws.copy()
        kws["scale"] = "area"
        p = cat._ViolinPlotter(**kws)

        # Test single layer of grouping
        p.hue_names = None
        density = [self.rs.uniform(0, .8, 50), self.rs.uniform(0, .2, 50)]
        max_before = np.array([d.max() for d in density])
        p.scale_area(density, max_before, False)
        max_after = np.array([d.max() for d in density])
        assert max_after[0] == 1

        before_ratio = max_before[1] / max_before[0]
        after_ratio = max_after[1] / max_after[0]
        assert before_ratio == after_ratio

        # Test nested grouping scaling across all densities
        p.hue_names = ["foo", "bar"]
        density = [[self.rs.uniform(0, .8, 50), self.rs.uniform(0, .2, 50)],
                   [self.rs.uniform(0, .1, 50), self.rs.uniform(0, .02, 50)]]

        max_before = np.array([[r.max() for r in row] for row in density])
        p.scale_area(density, max_before, False)
        max_after = np.array([[r.max() for r in row] for row in density])
        assert max_after[0, 0] == 1

        before_ratio = max_before[1, 1] / max_before[0, 0]
        after_ratio = max_after[1, 1] / max_after[0, 0]
        assert before_ratio == after_ratio

        # Test nested grouping scaling within hue
        p.hue_names = ["foo", "bar"]
        density = [[self.rs.uniform(0, .8, 50), self.rs.uniform(0, .2, 50)],
                   [self.rs.uniform(0, .1, 50), self.rs.uniform(0, .02, 50)]]

        max_before = np.array([[r.max() for r in row] for row in density])
        p.scale_area(density, max_before, True)
        max_after = np.array([[r.max() for r in row] for row in density])
        assert max_after[0, 0] == 1
        assert max_after[1, 0] == 1

        before_ratio = max_before[1, 1] / max_before[1, 0]
        after_ratio = max_after[1, 1] / max_after[1, 0]
        assert before_ratio == after_ratio

    def test_scale_width(self):
        kws = self.default_kws.copy()
        kws["scale"] = "width"
        p = cat._ViolinPlotter(**kws)

        # Test single layer of grouping
        p.hue_names = None
        density = [self.rs.uniform(0, .8, 50), self.rs.uniform(0, .2, 50)]
        p.scale_width(density)
        max_after = np.array([d.max() for d in density])
        npt.assert_array_equal(max_after, [1, 1])

        # Test nested grouping
        p.hue_names = ["foo", "bar"]
        density = [[self.rs.uniform(0, .8, 50), self.rs.uniform(0, .2, 50)],
                   [self.rs.uniform(0, .1, 50), self.rs.uniform(0, .02, 50)]]
        p.scale_width(density)
        max_after = np.array([[r.max() for r in row] for row in density])
        npt.assert_array_equal(max_after, [[1, 1], [1, 1]])

    def test_scale_count(self):
        kws = self.default_kws.copy()
        kws["scale"] = "count"
        p = cat._ViolinPlotter(**kws)

        # Test single layer of grouping
        p.hue_names = None
        density = [self.rs.uniform(0, .8, 20), self.rs.uniform(0, .2, 40)]
        counts = np.array([20, 40])
        p.scale_count(density, counts, False)
        max_after = np.array([d.max() for d in density])
        npt.assert_array_equal(max_after, [.5, 1])

        # Test nested grouping scaling across all densities
        p.hue_names = ["foo", "bar"]
        density = [[self.rs.uniform(0, .8, 5), self.rs.uniform(0, .2, 40)],
                   [self.rs.uniform(0, .1, 100), self.rs.uniform(0, .02, 50)]]
        counts = np.array([[5, 40], [100, 50]])
        p.scale_count(density, counts, False)
        max_after = np.array([[r.max() for r in row] for row in density])
        npt.assert_array_equal(max_after, [[.05, .4], [1, .5]])

        # Test nested grouping scaling within hue
        p.hue_names = ["foo", "bar"]
        density = [[self.rs.uniform(0, .8, 5), self.rs.uniform(0, .2, 40)],
                   [self.rs.uniform(0, .1, 100), self.rs.uniform(0, .02, 50)]]
        counts = np.array([[5, 40], [100, 50]])
        p.scale_count(density, counts, True)
        max_after = np.array([[r.max() for r in row] for row in density])
        npt.assert_array_equal(max_after, [[.125, 1], [1, .5]])

    def test_bad_scale(self):
        kws = self.default_kws.copy()
        kws["scale"] = "not_a_scale_type"
        with pytest.raises(ValueError):
            cat._ViolinPlotter(**kws)

    def test_kde_fit(self):
        p = cat._ViolinPlotter(**self.default_kws)
        data = self.y
        data_std = data.std(ddof=1)

        # Test reference rule bandwidth
        kde, bw = p.fit_kde(data, "scott")
        assert isinstance(kde, stats.gaussian_kde)
        assert kde.factor == kde.scotts_factor()
        assert bw == kde.scotts_factor() * data_std

        # Test numeric scale factor
        kde, bw = p.fit_kde(self.y, .2)
        assert isinstance(kde, stats.gaussian_kde)
        assert kde.factor == .2
        assert bw == .2 * data_std

    def test_draw_to_density(self):
        p = cat._ViolinPlotter(**self.default_kws)
        # p.dwidth will be 1 for easier testing
        p.width = 2

        # Test vertical plots
        support = np.array([.2, .6])
        density = np.array([.1, .4])

        # Test full vertical plot
        _, ax = plt.subplots()
        p.draw_to_density(ax, 0, .5, support, density, False)
        x, y = ax.lines[0].get_xydata().T
        npt.assert_array_equal(x, [.99 * -.4, .99 * .4])
        npt.assert_array_equal(y, [.5, .5])
        plt.close("all")

        # Test left vertical plot
        _, ax = plt.subplots()
        p.draw_to_density(ax, 0, .5, support, density, "left")
        x, y = ax.lines[0].get_xydata().T
        npt.assert_array_equal(x, [.99 * -.4, 0])
        npt.assert_array_equal(y, [.5, .5])
        plt.close("all")

        # Test right vertical plot
        _, ax = plt.subplots()
        p.draw_to_density(ax, 0, .5, support, density, "right")
        x, y = ax.lines[0].get_xydata().T
        npt.assert_array_equal(x, [0, .99 * .4])
        npt.assert_array_equal(y, [.5, .5])
        plt.close("all")

        # Switch orientation to test horizontal plots
        p.orient = "h"
        support = np.array([.2, .5])
        density = np.array([.3, .7])

        # Test full horizontal plot
        _, ax = plt.subplots()
        p.draw_to_density(ax, 0, .6, support, density, False)
        x, y = ax.lines[0].get_xydata().T
        npt.assert_array_equal(x, [.6, .6])
        npt.assert_array_equal(y, [.99 * -.7, .99 * .7])
        plt.close("all")

        # Test left horizontal plot
        _, ax = plt.subplots()
        p.draw_to_density(ax, 0, .6, support, density, "left")
        x, y = ax.lines[0].get_xydata().T
        npt.assert_array_equal(x, [.6, .6])
        npt.assert_array_equal(y, [.99 * -.7, 0])
        plt.close("all")

        # Test right horizontal plot
        _, ax = plt.subplots()
        p.draw_to_density(ax, 0, .6, support, density, "right")
        x, y = ax.lines[0].get_xydata().T
        npt.assert_array_equal(x, [.6, .6])
        npt.assert_array_equal(y, [0, .99 * .7])
        plt.close("all")

    def test_draw_single_observations(self):
        p = cat._ViolinPlotter(**self.default_kws)
        p.width = 2

        # Test vertical plot
        _, ax = plt.subplots()
        p.draw_single_observation(ax, 1, 1.5, 1)
        x, y = ax.lines[0].get_xydata().T
        npt.assert_array_equal(x, [0, 2])
        npt.assert_array_equal(y, [1.5, 1.5])
        plt.close("all")

        # Test horizontal plot
        p.orient = "h"
        _, ax = plt.subplots()
        p.draw_single_observation(ax, 2, 2.2, .5)
        x, y = ax.lines[0].get_xydata().T
        npt.assert_array_equal(x, [2.2, 2.2])
        npt.assert_array_equal(y, [1.5, 2.5])
        plt.close("all")

    def test_draw_box_lines(self):
        # Test vertical plot
        kws = self.default_kws.copy()
        kws.update(dict(y="y", data=self.df, inner=None))
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_box_lines(ax, self.y, p.support[0], p.density[0], 0)
        assert len(ax.lines) == 2

        q25, q50, q75 = np.percentile(self.y, [25, 50, 75])
        _, y = ax.lines[1].get_xydata().T
        npt.assert_array_equal(y, [q25, q75])

        _, y = ax.collections[0].get_offsets().T
        assert y == q50

        plt.close("all")

        # Test horizontal plot
        kws = self.default_kws.copy()
        kws.update(dict(x="y", data=self.df, inner=None))
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_box_lines(ax, self.y, p.support[0], p.density[0], 0)
        assert len(ax.lines) == 2

        q25, q50, q75 = np.percentile(self.y, [25, 50, 75])
        x, _ = ax.lines[1].get_xydata().T
        npt.assert_array_equal(x, [q25, q75])

        x, _ = ax.collections[0].get_offsets().T
        assert x == q50

        plt.close("all")

    def test_draw_quartiles(self):
        kws = self.default_kws.copy()
        kws.update(dict(y="y", data=self.df, inner=None))
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_quartiles(ax, self.y, p.support[0], p.density[0], 0)
        for val, line in zip(np.percentile(self.y, [25, 50, 75]), ax.lines):
            _, y = line.get_xydata().T
            npt.assert_array_equal(y, [val, val])

    def test_draw_points(self):
        p = cat._ViolinPlotter(**self.default_kws)

        # Test vertical plot
        _, ax = plt.subplots()
        p.draw_points(ax, self.y, 0)
        x, y = ax.collections[0].get_offsets().T
        npt.assert_array_equal(x, np.zeros_like(self.y))
        npt.assert_array_equal(y, self.y)
        plt.close("all")

        # Test horizontal plot
        p.orient = "h"
        _, ax = plt.subplots()
        p.draw_points(ax, self.y, 0)
        x, y = ax.collections[0].get_offsets().T
        npt.assert_array_equal(x, self.y)
        npt.assert_array_equal(y, np.zeros_like(self.y))
        plt.close("all")

    def test_draw_sticks(self):
        kws = self.default_kws.copy()
        kws.update(dict(y="y", data=self.df, inner=None))
        p = cat._ViolinPlotter(**kws)

        # Test vertical plot
        _, ax = plt.subplots()
        p.draw_stick_lines(ax, self.y, p.support[0], p.density[0], 0)
        for val, line in zip(self.y, ax.lines):
            _, y = line.get_xydata().T
            npt.assert_array_equal(y, [val, val])
        plt.close("all")

        # Test horizontal plot
        p.orient = "h"
        _, ax = plt.subplots()
        p.draw_stick_lines(ax, self.y, p.support[0], p.density[0], 0)
        for val, line in zip(self.y, ax.lines):
            x, _ = line.get_xydata().T
            npt.assert_array_equal(x, [val, val])
        plt.close("all")

    def test_validate_inner(self):
        kws = self.default_kws.copy()
        kws.update(dict(inner="bad_inner"))
        with pytest.raises(ValueError):
            cat._ViolinPlotter(**kws)

    def test_draw_violinplots(self):
        kws = self.default_kws.copy()

        # Test single vertical violin
        kws.update(dict(y="y", data=self.df, inner=None,
                        saturation=1, color=(1, 0, 0, 1)))
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_violins(ax)
        assert len(ax.collections) == 1
        npt.assert_array_equal(ax.collections[0].get_facecolors(),
                               [(1, 0, 0, 1)])
        plt.close("all")

        # Test single horizontal violin
        kws.update(dict(x="y", y=None, color=(0, 1, 0, 1)))
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_violins(ax)
        assert len(ax.collections) == 1
        npt.assert_array_equal(ax.collections[0].get_facecolors(),
                               [(0, 1, 0, 1)])
        plt.close("all")

        # Test multiple vertical violins
        kws.update(dict(x="g", y="y", color=None,))
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_violins(ax)
        assert len(ax.collections) == 3
        for violin, color in zip(ax.collections, palettes.color_palette()):
            npt.assert_array_equal(violin.get_facecolors()[0, :-1], color)
        plt.close("all")

        # Test multiple violins with hue nesting
        kws.update(dict(hue="h"))
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_violins(ax)
        assert len(ax.collections) == 6
        for violin, color in zip(ax.collections,
                                 palettes.color_palette(n_colors=2) * 3):
            npt.assert_array_equal(violin.get_facecolors()[0, :-1], color)
        plt.close("all")

        # Test multiple split violins
        kws.update(dict(split=True, palette="muted"))
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_violins(ax)
        assert len(ax.collections) == 6
        for violin, color in zip(ax.collections,
                                 palettes.color_palette("muted",
                                                        n_colors=2) * 3):
            npt.assert_array_equal(violin.get_facecolors()[0, :-1], color)
        plt.close("all")

    def test_draw_violinplots_no_observations(self):
        kws = self.default_kws.copy()
        kws["inner"] = None

        # Test single layer of grouping
        x = ["a", "a", "b"]
        y = self.rs.randn(3)
        y[-1] = np.nan
        kws.update(x=x, y=y)
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_violins(ax)
        assert len(ax.collections) == 1
        assert len(ax.lines) == 0
        plt.close("all")

        # Test nested hue grouping
        x = ["a"] * 4 + ["b"] * 2
        y = self.rs.randn(6)
        h = ["m", "n"] * 2 + ["m"] * 2
        kws.update(x=x, y=y, hue=h)
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_violins(ax)
        assert len(ax.collections) == 3
        assert len(ax.lines) == 0
        plt.close("all")

    def test_draw_violinplots_single_observations(self):
        kws = self.default_kws.copy()
        kws["inner"] = None

        # Test single layer of grouping
        x = ["a", "a", "b"]
        y = self.rs.randn(3)
        kws.update(x=x, y=y)
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_violins(ax)
        assert len(ax.collections) == 1
        assert len(ax.lines) == 1
        plt.close("all")

        # Test nested hue grouping
        x = ["b"] * 4 + ["a"] * 3
        y = self.rs.randn(7)
        h = (["m", "n"] * 4)[:-1]
        kws.update(x=x, y=y, hue=h)
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_violins(ax)
        assert len(ax.collections) == 3
        assert len(ax.lines) == 1
        plt.close("all")

        # Test nested hue grouping with split
        kws["split"] = True
        p = cat._ViolinPlotter(**kws)

        _, ax = plt.subplots()
        p.draw_violins(ax)
        assert len(ax.collections) == 3
        assert len(ax.lines) == 1
        plt.close("all")

    def test_violinplots(self):
        # Smoke test the high level violinplot options
        cat.violinplot(x="y", data=self.df)
        plt.close("all")

        cat.violinplot(y="y", data=self.df)
        plt.close("all")

        cat.violinplot(x="g", y="y", data=self.df)
        plt.close("all")

        cat.violinplot(x="y", y="g", data=self.df, orient="h")
        plt.close("all")

        cat.violinplot(x="g", y="y", hue="h", data=self.df)
        plt.close("all")

        order = list("nabc")
        cat.violinplot(x="g", y="y", hue="h", order=order, data=self.df)
        plt.close("all")

        order = list("omn")
        cat.violinplot(x="g", y="y", hue="h", hue_order=order, data=self.df)
        plt.close("all")

        cat.violinplot(x="y", y="g", hue="h", data=self.df, orient="h")
        plt.close("all")

        for inner in ["box", "quart", "point", "stick", None]:
            cat.violinplot(x="g", y="y", data=self.df, inner=inner)
            plt.close("all")

            cat.violinplot(x="g", y="y", hue="h", data=self.df, inner=inner)
            plt.close("all")

            cat.violinplot(x="g", y="y", hue="h", data=self.df,
                           inner=inner, split=True)
            plt.close("all")


class TestCategoricalScatterPlotter(CategoricalFixture):

    def test_group_point_colors(self):

        p = cat._CategoricalScatterPlotter()

        p.establish_variables(x="g", y="y", data=self.df)
        p.establish_colors(None, "deep", 1)
        point_colors = p.point_colors

        n_colors = self.g.unique().size
        assert len(point_colors) == n_colors

        for i, group_colors in enumerate(point_colors):
            for color in group_colors:
                assert color == i

    def test_hue_point_colors(self):

        p = cat._CategoricalScatterPlotter()

        hue_order = self.h.unique().tolist()
        p.establish_variables(x="g", y="y", hue="h",
                              hue_order=hue_order, data=self.df)
        p.establish_colors(None, "deep", 1)
        point_colors = p.point_colors

        assert len(point_colors) == self.g.unique().size

        for i, group_colors in enumerate(point_colors):
            group_hues = np.asarray(p.plot_hues[i])
            for point_hue, point_color in zip(group_hues, group_colors):
                assert point_color == p.hue_names.index(point_hue)

    def test_scatterplot_legend(self):

        p = cat._CategoricalScatterPlotter()

        hue_order = ["m", "n"]
        p.establish_variables(x="g", y="y", hue="h",
                              hue_order=hue_order, data=self.df)
        p.establish_colors(None, "deep", 1)
        deep_colors = palettes.color_palette("deep", self.h.unique().size)

        f, ax = plt.subplots()
        p.add_legend_data(ax)
        leg = ax.legend()

        for i, t in enumerate(leg.get_texts()):
            assert t.get_text() == hue_order[i]

        for i, h in enumerate(leg.legendHandles):
            rgb = h.get_facecolor()[0, :3]
            assert tuple(rgb) == tuple(deep_colors[i])


class TestStripPlotter(CategoricalFixture):

    def test_stripplot_vertical(self):

        pal = palettes.color_palette()

        ax = cat.stripplot(x="g", y="y", jitter=False, data=self.df)
        for i, (_, vals) in enumerate(self.y.groupby(self.g)):

            x, y = ax.collections[i].get_offsets().T

            npt.assert_array_equal(x, np.ones(len(x)) * i)
            npt.assert_array_equal(y, vals)

            npt.assert_equal(ax.collections[i].get_facecolors()[0, :3], pal[i])

    def test_stripplot_horizontal(self):

        df = self.df.copy()
        df.g = df.g.astype("category")

        ax = cat.stripplot(x="y", y="g", jitter=False, data=df)
        for i, (_, vals) in enumerate(self.y.groupby(self.g)):

            x, y = ax.collections[i].get_offsets().T

            npt.assert_array_equal(x, vals)
            npt.assert_array_equal(y, np.ones(len(x)) * i)

    def test_stripplot_jitter(self):

        pal = palettes.color_palette()

        ax = cat.stripplot(x="g", y="y", data=self.df, jitter=True)
        for i, (_, vals) in enumerate(self.y.groupby(self.g)):

            x, y = ax.collections[i].get_offsets().T

            npt.assert_array_less(np.ones(len(x)) * i - .1, x)
            npt.assert_array_less(x, np.ones(len(x)) * i + .1)
            npt.assert_array_equal(y, vals)

            npt.assert_equal(ax.collections[i].get_facecolors()[0, :3], pal[i])

    def test_dodge_nested_stripplot_vertical(self):

        pal = palettes.color_palette()

        ax = cat.stripplot(x="g", y="y", hue="h", data=self.df,
                           jitter=False, dodge=True)
        for i, (_, group_vals) in enumerate(self.y.groupby(self.g)):
            for j, (_, vals) in enumerate(group_vals.groupby(self.h)):

                x, y = ax.collections[i * 2 + j].get_offsets().T

                npt.assert_array_equal(x, np.ones(len(x)) * i + [-.2, .2][j])
                npt.assert_array_equal(y, vals)

                fc = ax.collections[i * 2 + j].get_facecolors()[0, :3]
                assert tuple(fc) == pal[j]

    def test_dodge_nested_stripplot_horizontal(self):

        df = self.df.copy()
        df.g = df.g.astype("category")

        ax = cat.stripplot(x="y", y="g", hue="h", data=df,
jitter=False, dodge=True) for i, (_, group_vals) in enumerate(self.y.groupby(self.g)): for j, (_, vals) in enumerate(group_vals.groupby(self.h)): x, y = ax.collections[i * 2 + j].get_offsets().T npt.assert_array_equal(x, vals) npt.assert_array_equal(y, np.ones(len(x)) * i + [-.2, .2][j]) def test_nested_stripplot_vertical(self): # Test a simple vertical strip plot ax = cat.stripplot(x="g", y="y", hue="h", data=self.df, jitter=False, dodge=False) for i, (_, group_vals) in enumerate(self.y.groupby(self.g)): x, y = ax.collections[i].get_offsets().T npt.assert_array_equal(x, np.ones(len(x)) * i) npt.assert_array_equal(y, group_vals) def test_nested_stripplot_horizontal(self): df = self.df.copy() df.g = df.g.astype("category") ax = cat.stripplot(x="y", y="g", hue="h", data=df, jitter=False, dodge=False) for i, (_, group_vals) in enumerate(self.y.groupby(self.g)): x, y = ax.collections[i].get_offsets().T npt.assert_array_equal(x, group_vals) npt.assert_array_equal(y, np.ones(len(x)) * i) def test_three_strip_points(self): x = np.arange(3) ax = cat.stripplot(x=x) facecolors = ax.collections[0].get_facecolor() assert facecolors.shape == (3, 4) npt.assert_array_equal(facecolors[0], facecolors[1]) def test_unaligned_index(self): f, (ax1, ax2) = plt.subplots(2) cat.stripplot(x=self.g, y=self.y, ax=ax1) cat.stripplot(x=self.g, y=self.y_perm, ax=ax2) for p1, p2 in zip(ax1.collections, ax2.collections): y1, y2 = p1.get_offsets()[:, 1], p2.get_offsets()[:, 1] assert np.array_equal(np.sort(y1), np.sort(y2)) assert np.array_equal(p1.get_facecolors()[np.argsort(y1)], p2.get_facecolors()[np.argsort(y2)]) f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.stripplot(x=self.g, y=self.y, hue=self.h, hue_order=hue_order, ax=ax1) cat.stripplot(x=self.g, y=self.y_perm, hue=self.h, hue_order=hue_order, ax=ax2) for p1, p2 in zip(ax1.collections, ax2.collections): y1, y2 = p1.get_offsets()[:, 1], p2.get_offsets()[:, 1] assert np.array_equal(np.sort(y1), np.sort(y2)) assert 
np.array_equal(p1.get_facecolors()[np.argsort(y1)], p2.get_facecolors()[np.argsort(y2)]) f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.stripplot(x=self.g, y=self.y, hue=self.h, dodge=True, hue_order=hue_order, ax=ax1) cat.stripplot(x=self.g, y=self.y_perm, hue=self.h, dodge=True, hue_order=hue_order, ax=ax2) for p1, p2 in zip(ax1.collections, ax2.collections): y1, y2 = p1.get_offsets()[:, 1], p2.get_offsets()[:, 1] assert np.array_equal(np.sort(y1), np.sort(y2)) assert np.array_equal(p1.get_facecolors()[np.argsort(y1)], p2.get_facecolors()[np.argsort(y2)]) class TestSwarmPlotter(CategoricalFixture): default_kws = dict(x=None, y=None, hue=None, data=None, order=None, hue_order=None, dodge=False, orient=None, color=None, palette=None) def test_could_overlap(self): p = cat._SwarmPlotter(**self.default_kws) neighbors = p.could_overlap((1, 1), [(0, 0), (1, .5), (.5, .5)], 1) npt.assert_array_equal(neighbors, [(1, .5), (.5, .5)]) def test_position_candidates(self): p = cat._SwarmPlotter(**self.default_kws) xy_i = (0, 1) neighbors = [(0, 1), (0, 1.5)] candidates = p.position_candidates(xy_i, neighbors, 1) dx1 = 1.05 dx2 = np.sqrt(1 - .5 ** 2) * 1.05 npt.assert_array_equal(candidates, [(0, 1), (-dx1, 1), (dx1, 1), (dx2, 1), (-dx2, 1)]) def test_find_first_non_overlapping_candidate(self): p = cat._SwarmPlotter(**self.default_kws) candidates = [(.5, 1), (1, 1), (1.5, 1)] neighbors = np.array([(0, 1)]) first = p.first_non_overlapping_candidate(candidates, neighbors, 1) npt.assert_array_equal(first, (1, 1)) def test_beeswarm(self): p = cat._SwarmPlotter(**self.default_kws) d = self.y.diff().mean() * 1.5 x = np.zeros(self.y.size) y = np.sort(self.y) orig_xy = np.c_[x, y] swarm = p.beeswarm(orig_xy, d) dmat = spatial.distance.cdist(swarm, swarm) triu = dmat[np.triu_indices_from(dmat, 1)] npt.assert_array_less(d, triu) npt.assert_array_equal(y, swarm[:, 1]) def test_add_gutters(self): p = cat._SwarmPlotter(**self.default_kws) points = np.zeros(10) assert 
np.array_equal(points, p.add_gutters(points, 0, 1))

        points = np.array([0, -1, .4, .8])
        msg = r"50.0% of the points cannot be placed.+$"
        with pytest.warns(UserWarning, match=msg):
            new_points = p.add_gutters(points, 0, 1)
        assert np.array_equal(new_points, np.array([0, -.5, .4, .5]))

    def test_swarmplot_vertical(self):

        pal = palettes.color_palette()

        ax = cat.swarmplot(x="g", y="y", data=self.df)
        for i, (_, vals) in enumerate(self.y.groupby(self.g)):

            x, y = ax.collections[i].get_offsets().T
            npt.assert_array_almost_equal(y, np.sort(vals))

            fc = ax.collections[i].get_facecolors()[0, :3]
            npt.assert_equal(fc, pal[i])

    def test_swarmplot_horizontal(self):

        pal = palettes.color_palette()

        ax = cat.swarmplot(x="y", y="g", data=self.df, orient="h")
        for i, (_, vals) in enumerate(self.y.groupby(self.g)):

            x, y = ax.collections[i].get_offsets().T
            npt.assert_array_almost_equal(x, np.sort(vals))

            fc = ax.collections[i].get_facecolors()[0, :3]
            npt.assert_equal(fc, pal[i])

    def test_dodge_nested_swarmplot_vertical(self):

        pal = palettes.color_palette()

        ax = cat.swarmplot(x="g", y="y", hue="h", data=self.df, dodge=True)
        for i, (_, group_vals) in enumerate(self.y.groupby(self.g)):
            for j, (_, vals) in enumerate(group_vals.groupby(self.h)):

                x, y = ax.collections[i * 2 + j].get_offsets().T
                npt.assert_array_almost_equal(y, np.sort(vals))

                fc = ax.collections[i * 2 + j].get_facecolors()[0, :3]
                assert tuple(fc) == pal[j]

    def test_dodge_nested_swarmplot_horizontal(self):

        pal = palettes.color_palette()

        ax = cat.swarmplot(x="y", y="g", hue="h", data=self.df,
                           orient="h", dodge=True)
        for i, (_, group_vals) in enumerate(self.y.groupby(self.g)):
            for j, (_, vals) in enumerate(group_vals.groupby(self.h)):

                x, y = ax.collections[i * 2 + j].get_offsets().T
                npt.assert_array_almost_equal(x, np.sort(vals))

                fc = ax.collections[i * 2 + j].get_facecolors()[0, :3]
                assert tuple(fc) == pal[j]

    def test_nested_swarmplot_vertical(self):

        ax = cat.swarmplot(x="g", y="y", hue="h", data=self.df)

        pal = palettes.color_palette()
        hue_names = self.h.unique().tolist()
        grouped_hues = list(self.h.groupby(self.g))

        for i, (_, vals) in enumerate(self.y.groupby(self.g)):

            points = ax.collections[i]
            x, y = points.get_offsets().T
            sorter = np.argsort(vals)

            npt.assert_array_almost_equal(y, vals.iloc[sorter])

            _, hue_vals = grouped_hues[i]
            for hue, fc in zip(hue_vals.values[sorter.values],
                               points.get_facecolors()):
                assert tuple(fc[:3]) == pal[hue_names.index(hue)]

    def test_nested_swarmplot_horizontal(self):

        ax = cat.swarmplot(x="y", y="g", hue="h", data=self.df, orient="h")

        pal = palettes.color_palette()
        hue_names = self.h.unique().tolist()
        grouped_hues = list(self.h.groupby(self.g))

        for i, (_, vals) in enumerate(self.y.groupby(self.g)):

            points = ax.collections[i]
            x, y = points.get_offsets().T
            sorter = np.argsort(vals)

            npt.assert_array_almost_equal(x, vals.iloc[sorter])

            _, hue_vals = grouped_hues[i]
            for hue, fc in zip(hue_vals.values[sorter.values],
                               points.get_facecolors()):
                assert tuple(fc[:3]) == pal[hue_names.index(hue)]

    def test_unaligned_index(self):

        f, (ax1, ax2) = plt.subplots(2)
        cat.swarmplot(x=self.g, y=self.y, ax=ax1)
        cat.swarmplot(x=self.g, y=self.y_perm, ax=ax2)
        for p1, p2 in zip(ax1.collections, ax2.collections):
            assert np.allclose(p1.get_offsets()[:, 1],
                               p2.get_offsets()[:, 1])
            assert np.array_equal(p1.get_facecolors(),
                                  p2.get_facecolors())

        f, (ax1, ax2) = plt.subplots(2)
        hue_order = self.h.unique()
        cat.swarmplot(x=self.g, y=self.y, hue=self.h,
                      hue_order=hue_order, ax=ax1)
        cat.swarmplot(x=self.g, y=self.y_perm, hue=self.h,
                      hue_order=hue_order, ax=ax2)
        for p1, p2 in zip(ax1.collections, ax2.collections):
            assert np.allclose(p1.get_offsets()[:, 1],
                               p2.get_offsets()[:, 1])
            assert np.array_equal(p1.get_facecolors(),
                                  p2.get_facecolors())

        f, (ax1, ax2) = plt.subplots(2)
        hue_order = self.h.unique()
        cat.swarmplot(x=self.g, y=self.y, hue=self.h, dodge=True,
                      hue_order=hue_order, ax=ax1)
        cat.swarmplot(x=self.g, y=self.y_perm, hue=self.h, dodge=True,
                      hue_order=hue_order, ax=ax2)
        for p1, p2 in
zip(ax1.collections, ax2.collections): assert np.allclose(p1.get_offsets()[:, 1], p2.get_offsets()[:, 1]) assert np.array_equal(p1.get_facecolors(), p2.get_facecolors()) class TestBarPlotter(CategoricalFixture): default_kws = dict( x=None, y=None, hue=None, data=None, estimator=np.mean, ci=95, n_boot=100, units=None, seed=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, errcolor=".26", errwidth=None, capsize=None, dodge=True ) def test_nested_width(self): kws = self.default_kws.copy() p = cat._BarPlotter(**kws) p.establish_variables("g", "y", hue="h", data=self.df) assert p.nested_width == .8 / 2 p = cat._BarPlotter(**kws) p.establish_variables("h", "y", "g", data=self.df) assert p.nested_width == .8 / 3 kws["dodge"] = False p = cat._BarPlotter(**kws) p.establish_variables("h", "y", "g", data=self.df) assert p.nested_width == .8 def test_draw_vertical_bars(self): kws = self.default_kws.copy() kws.update(x="g", y="y", data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) assert len(ax.patches) == len(p.plot_data) assert len(ax.lines) == len(p.plot_data) for bar, color in zip(ax.patches, p.colors): assert bar.get_facecolor()[:-1] == color positions = np.arange(len(p.plot_data)) - p.width / 2 for bar, pos, stat in zip(ax.patches, positions, p.statistic): assert bar.get_x() == pos assert bar.get_width() == p.width assert bar.get_y() == 0 assert bar.get_height() == stat def test_draw_horizontal_bars(self): kws = self.default_kws.copy() kws.update(x="y", y="g", orient="h", data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) assert len(ax.patches) == len(p.plot_data) assert len(ax.lines) == len(p.plot_data) for bar, color in zip(ax.patches, p.colors): assert bar.get_facecolor()[:-1] == color positions = np.arange(len(p.plot_data)) - p.width / 2 for bar, pos, stat in zip(ax.patches, positions, p.statistic): assert bar.get_y() == pos assert bar.get_height() == p.width 
assert bar.get_x() == 0 assert bar.get_width() == stat def test_draw_nested_vertical_bars(self): kws = self.default_kws.copy() kws.update(x="g", y="y", hue="h", data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) n_groups, n_hues = len(p.plot_data), len(p.hue_names) assert len(ax.patches) == n_groups * n_hues assert len(ax.lines) == n_groups * n_hues for bar in ax.patches[:n_groups]: assert bar.get_facecolor()[:-1] == p.colors[0] for bar in ax.patches[n_groups:]: assert bar.get_facecolor()[:-1] == p.colors[1] positions = np.arange(len(p.plot_data)) for bar, pos in zip(ax.patches[:n_groups], positions): assert bar.get_x() == approx(pos - p.width / 2) assert bar.get_width() == approx(p.nested_width) for bar, stat in zip(ax.patches, p.statistic.T.flat): assert bar.get_y() == approx(0) assert bar.get_height() == approx(stat) def test_draw_nested_horizontal_bars(self): kws = self.default_kws.copy() kws.update(x="y", y="g", hue="h", orient="h", data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) n_groups, n_hues = len(p.plot_data), len(p.hue_names) assert len(ax.patches) == n_groups * n_hues assert len(ax.lines) == n_groups * n_hues for bar in ax.patches[:n_groups]: assert bar.get_facecolor()[:-1] == p.colors[0] for bar in ax.patches[n_groups:]: assert bar.get_facecolor()[:-1] == p.colors[1] positions = np.arange(len(p.plot_data)) for bar, pos in zip(ax.patches[:n_groups], positions): assert bar.get_y() == approx(pos - p.width / 2) assert bar.get_height() == approx(p.nested_width) for bar, stat in zip(ax.patches, p.statistic.T.flat): assert bar.get_x() == approx(0) assert bar.get_width() == approx(stat) def test_draw_missing_bars(self): kws = self.default_kws.copy() order = list("abcd") kws.update(x="g", y="y", order=order, data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) assert len(ax.patches) == len(order) assert len(ax.lines) == len(order) plt.close("all") hue_order = 
list("mno") kws.update(x="g", y="y", hue="h", hue_order=hue_order, data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) assert len(ax.patches) == len(p.plot_data) * len(hue_order) assert len(ax.lines) == len(p.plot_data) * len(hue_order) plt.close("all") def test_unaligned_index(self): f, (ax1, ax2) = plt.subplots(2) cat.barplot(x=self.g, y=self.y, ci="sd", ax=ax1) cat.barplot(x=self.g, y=self.y_perm, ci="sd", ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert approx(l1.get_xydata()) == l2.get_xydata() for p1, p2 in zip(ax1.patches, ax2.patches): assert approx(p1.get_xy()) == p2.get_xy() assert approx(p1.get_height()) == p2.get_height() assert approx(p1.get_width()) == p2.get_width() f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.barplot(x=self.g, y=self.y, hue=self.h, hue_order=hue_order, ci="sd", ax=ax1) cat.barplot(x=self.g, y=self.y_perm, hue=self.h, hue_order=hue_order, ci="sd", ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert approx(l1.get_xydata()) == l2.get_xydata() for p1, p2 in zip(ax1.patches, ax2.patches): assert approx(p1.get_xy()) == p2.get_xy() assert approx(p1.get_height()) == p2.get_height() assert approx(p1.get_width()) == p2.get_width() def test_barplot_colors(self): # Test unnested palette colors kws = self.default_kws.copy() kws.update(x="g", y="y", data=self.df, saturation=1, palette="muted") p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) palette = palettes.color_palette("muted", len(self.g.unique())) for patch, pal_color in zip(ax.patches, palette): assert patch.get_facecolor()[:-1] == pal_color plt.close("all") # Test single color color = (.2, .2, .3, 1) kws = self.default_kws.copy() kws.update(x="g", y="y", data=self.df, saturation=1, color=color) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) for patch in ax.patches: assert patch.get_facecolor() == color plt.close("all") # Test nested palette colors kws = self.default_kws.copy() 
        kws.update(x="g", y="y", hue="h", data=self.df,
                   saturation=1, palette="Set2")
        p = cat._BarPlotter(**kws)

        f, ax = plt.subplots()
        p.draw_bars(ax, {})

        palette = palettes.color_palette("Set2", len(self.h.unique()))
        for patch in ax.patches[:len(self.g.unique())]:
            assert patch.get_facecolor()[:-1] == palette[0]
        for patch in ax.patches[len(self.g.unique()):]:
            assert patch.get_facecolor()[:-1] == palette[1]

        plt.close("all")

    def test_simple_barplots(self):

        ax = cat.barplot(x="g", y="y", data=self.df)
        assert len(ax.patches) == len(self.g.unique())
        assert ax.get_xlabel() == "g"
        assert ax.get_ylabel() == "y"
        plt.close("all")

        ax = cat.barplot(x="y", y="g", orient="h", data=self.df)
        assert len(ax.patches) == len(self.g.unique())
        assert ax.get_xlabel() == "y"
        assert ax.get_ylabel() == "g"
        plt.close("all")

        ax = cat.barplot(x="g", y="y", hue="h", data=self.df)
        assert len(ax.patches) == len(self.g.unique()) * len(self.h.unique())
        assert ax.get_xlabel() == "g"
        assert ax.get_ylabel() == "y"
        plt.close("all")

        ax = cat.barplot(x="y", y="g", hue="h", orient="h", data=self.df)
        assert len(ax.patches) == len(self.g.unique()) * len(self.h.unique())
        assert ax.get_xlabel() == "y"
        assert ax.get_ylabel() == "g"
        plt.close("all")


class TestPointPlotter(CategoricalFixture):

    default_kws = dict(
        x=None, y=None, hue=None, data=None,
        estimator=np.mean, ci=95, n_boot=100, units=None, seed=None,
        order=None, hue_order=None,
        markers="o", linestyles="-", dodge=0,
        join=True, scale=1,
        orient=None, color=None, palette=None,
    )

    def test_different_default_colors(self):

        kws = self.default_kws.copy()
        kws.update(dict(x="g", y="y", data=self.df))
        p = cat._PointPlotter(**kws)
        color = palettes.color_palette()[0]
        npt.assert_array_equal(p.colors, [color, color, color])

    def test_hue_offsets(self):

        kws = self.default_kws.copy()
        kws.update(dict(x="g", y="y", hue="h", data=self.df))
        p = cat._PointPlotter(**kws)
        npt.assert_array_equal(p.hue_offsets, [0, 0])

        kws.update(dict(dodge=.5))
        p = cat._PointPlotter(**kws)
npt.assert_array_equal(p.hue_offsets, [-.25, .25]) kws.update(dict(x="h", hue="g", dodge=0)) p = cat._PointPlotter(**kws) npt.assert_array_equal(p.hue_offsets, [0, 0, 0]) kws.update(dict(dodge=.3)) p = cat._PointPlotter(**kws) npt.assert_array_equal(p.hue_offsets, [-.15, 0, .15]) def test_draw_vertical_points(self): kws = self.default_kws.copy() kws.update(x="g", y="y", data=self.df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) assert len(ax.collections) == 1 assert len(ax.lines) == len(p.plot_data) + 1 points = ax.collections[0] assert len(points.get_offsets()) == len(p.plot_data) x, y = points.get_offsets().T npt.assert_array_equal(x, np.arange(len(p.plot_data))) npt.assert_array_equal(y, p.statistic) for got_color, want_color in zip(points.get_facecolors(), p.colors): npt.assert_array_equal(got_color[:-1], want_color) def test_draw_horizontal_points(self): kws = self.default_kws.copy() kws.update(x="y", y="g", orient="h", data=self.df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) assert len(ax.collections) == 1 assert len(ax.lines) == len(p.plot_data) + 1 points = ax.collections[0] assert len(points.get_offsets()) == len(p.plot_data) x, y = points.get_offsets().T npt.assert_array_equal(x, p.statistic) npt.assert_array_equal(y, np.arange(len(p.plot_data))) for got_color, want_color in zip(points.get_facecolors(), p.colors): npt.assert_array_equal(got_color[:-1], want_color) def test_draw_vertical_nested_points(self): kws = self.default_kws.copy() kws.update(x="g", y="y", hue="h", data=self.df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) assert len(ax.collections) == 2 assert len(ax.lines) == len(p.plot_data) * len(p.hue_names) + len(p.hue_names) for points, numbers, color in zip(ax.collections, p.statistic.T, p.colors): assert len(points.get_offsets()) == len(p.plot_data) x, y = points.get_offsets().T npt.assert_array_equal(x, np.arange(len(p.plot_data))) npt.assert_array_equal(y, 
numbers) for got_color in points.get_facecolors(): npt.assert_array_equal(got_color[:-1], color) def test_draw_horizontal_nested_points(self): kws = self.default_kws.copy() kws.update(x="y", y="g", hue="h", orient="h", data=self.df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) assert len(ax.collections) == 2 assert len(ax.lines) == len(p.plot_data) * len(p.hue_names) + len(p.hue_names) for points, numbers, color in zip(ax.collections, p.statistic.T, p.colors): assert len(points.get_offsets()) == len(p.plot_data) x, y = points.get_offsets().T npt.assert_array_equal(x, numbers) npt.assert_array_equal(y, np.arange(len(p.plot_data))) for got_color in points.get_facecolors(): npt.assert_array_equal(got_color[:-1], color) def test_draw_missing_points(self): kws = self.default_kws.copy() df = self.df.copy() kws.update(x="g", y="y", hue="h", hue_order=["x", "y"], data=df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) df.loc[df["h"] == "m", "y"] = np.nan kws.update(x="g", y="y", hue="h", data=df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) def test_unaligned_index(self): f, (ax1, ax2) = plt.subplots(2) cat.pointplot(x=self.g, y=self.y, ci="sd", ax=ax1) cat.pointplot(x=self.g, y=self.y_perm, ci="sd", ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert approx(l1.get_xydata()) == l2.get_xydata() for p1, p2 in zip(ax1.collections, ax2.collections): assert approx(p1.get_offsets()) == p2.get_offsets() f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.pointplot(x=self.g, y=self.y, hue=self.h, hue_order=hue_order, ci="sd", ax=ax1) cat.pointplot(x=self.g, y=self.y_perm, hue=self.h, hue_order=hue_order, ci="sd", ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert approx(l1.get_xydata()) == l2.get_xydata() for p1, p2 in zip(ax1.collections, ax2.collections): assert approx(p1.get_offsets()) == p2.get_offsets() def test_pointplot_colors(self): # Test a single-color unnested plot color = (.2, 
.2, .3, 1) kws = self.default_kws.copy() kws.update(x="g", y="y", data=self.df, color=color) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) for line in ax.lines: assert line.get_color() == color[:-1] for got_color in ax.collections[0].get_facecolors(): npt.assert_array_equal(rgb2hex(got_color), rgb2hex(color)) plt.close("all") # Test a multi-color unnested plot palette = palettes.color_palette("Set1", 3) kws.update(x="g", y="y", data=self.df, palette="Set1") p = cat._PointPlotter(**kws) assert not p.join f, ax = plt.subplots() p.draw_points(ax) for line, pal_color in zip(ax.lines, palette): npt.assert_array_equal(line.get_color(), pal_color) for point_color, pal_color in zip(ax.collections[0].get_facecolors(), palette): npt.assert_array_equal(rgb2hex(point_color), rgb2hex(pal_color)) plt.close("all") # Test a multi-colored nested plot palette = palettes.color_palette("dark", 2) kws.update(x="g", y="y", hue="h", data=self.df, palette="dark") p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) for line in ax.lines[:(len(p.plot_data) + 1)]: assert line.get_color() == palette[0] for line in ax.lines[(len(p.plot_data) + 1):]: assert line.get_color() == palette[1] for i, pal_color in enumerate(palette): for point_color in ax.collections[i].get_facecolors(): npt.assert_array_equal(point_color[:-1], pal_color) plt.close("all") def test_simple_pointplots(self): ax = cat.pointplot(x="g", y="y", data=self.df) assert len(ax.collections) == 1 assert len(ax.lines) == len(self.g.unique()) + 1 assert ax.get_xlabel() == "g" assert ax.get_ylabel() == "y" plt.close("all") ax = cat.pointplot(x="y", y="g", orient="h", data=self.df) assert len(ax.collections) == 1 assert len(ax.lines) == len(self.g.unique()) + 1 assert ax.get_xlabel() == "y" assert ax.get_ylabel() == "g" plt.close("all") ax = cat.pointplot(x="g", y="y", hue="h", data=self.df) assert len(ax.collections) == len(self.h.unique()) assert len(ax.lines) == ( len(self.g.unique()) * 
len(self.h.unique()) + len(self.h.unique()) ) assert ax.get_xlabel() == "g" assert ax.get_ylabel() == "y" plt.close("all") ax = cat.pointplot(x="y", y="g", hue="h", orient="h", data=self.df) assert len(ax.collections) == len(self.h.unique()) assert len(ax.lines) == ( len(self.g.unique()) * len(self.h.unique()) + len(self.h.unique()) ) assert ax.get_xlabel() == "y" assert ax.get_ylabel() == "g" plt.close("all") class TestCountPlot(CategoricalFixture): def test_plot_elements(self): ax = cat.countplot(x="g", data=self.df) assert len(ax.patches) == self.g.unique().size for p in ax.patches: assert p.get_y() == 0 assert p.get_height() == self.g.size / self.g.unique().size plt.close("all") ax = cat.countplot(y="g", data=self.df) assert len(ax.patches) == self.g.unique().size for p in ax.patches: assert p.get_x() == 0 assert p.get_width() == self.g.size / self.g.unique().size plt.close("all") ax = cat.countplot(x="g", hue="h", data=self.df) assert len(ax.patches) == self.g.unique().size * self.h.unique().size plt.close("all") ax = cat.countplot(y="g", hue="h", data=self.df) assert len(ax.patches) == self.g.unique().size * self.h.unique().size plt.close("all") def test_input_error(self): with pytest.raises(ValueError): cat.countplot(x="g", y="h", data=self.df) class TestCatPlot(CategoricalFixture): def test_facet_organization(self): g = cat.catplot(x="g", y="y", data=self.df) assert g.axes.shape == (1, 1) g = cat.catplot(x="g", y="y", col="h", data=self.df) assert g.axes.shape == (1, 2) g = cat.catplot(x="g", y="y", row="h", data=self.df) assert g.axes.shape == (2, 1) g = cat.catplot(x="g", y="y", col="u", row="h", data=self.df) assert g.axes.shape == (2, 3) def test_plot_elements(self): g = cat.catplot(x="g", y="y", data=self.df, kind="point") assert len(g.ax.collections) == 1 want_lines = self.g.unique().size + 1 assert len(g.ax.lines) == want_lines g = cat.catplot(x="g", y="y", hue="h", data=self.df, kind="point") want_collections = self.h.unique().size assert 
len(g.ax.collections) == want_collections want_lines = (self.g.unique().size + 1) * self.h.unique().size assert len(g.ax.lines) == want_lines g = cat.catplot(x="g", y="y", data=self.df, kind="bar") want_elements = self.g.unique().size assert len(g.ax.patches) == want_elements assert len(g.ax.lines) == want_elements g = cat.catplot(x="g", y="y", hue="h", data=self.df, kind="bar") want_elements = self.g.unique().size * self.h.unique().size assert len(g.ax.patches) == want_elements assert len(g.ax.lines) == want_elements g = cat.catplot(x="g", data=self.df, kind="count") want_elements = self.g.unique().size assert len(g.ax.patches) == want_elements assert len(g.ax.lines) == 0 g = cat.catplot(x="g", hue="h", data=self.df, kind="count") want_elements = self.g.unique().size * self.h.unique().size assert len(g.ax.patches) == want_elements assert len(g.ax.lines) == 0 g = cat.catplot(x="g", y="y", data=self.df, kind="box") want_artists = self.g.unique().size assert len(g.ax.artists) == want_artists g = cat.catplot(x="g", y="y", hue="h", data=self.df, kind="box") want_artists = self.g.unique().size * self.h.unique().size assert len(g.ax.artists) == want_artists g = cat.catplot(x="g", y="y", data=self.df, kind="violin", inner=None) want_elements = self.g.unique().size assert len(g.ax.collections) == want_elements g = cat.catplot(x="g", y="y", hue="h", data=self.df, kind="violin", inner=None) want_elements = self.g.unique().size * self.h.unique().size assert len(g.ax.collections) == want_elements g = cat.catplot(x="g", y="y", data=self.df, kind="strip") want_elements = self.g.unique().size assert len(g.ax.collections) == want_elements g = cat.catplot(x="g", y="y", hue="h", data=self.df, kind="strip") want_elements = self.g.unique().size + self.h.unique().size assert len(g.ax.collections) == want_elements def test_bad_plot_kind_error(self): with pytest.raises(ValueError): cat.catplot(x="g", y="y", data=self.df, kind="not_a_kind") def test_count_x_and_y(self): with 
pytest.raises(ValueError): cat.catplot(x="g", y="y", data=self.df, kind="count") def test_plot_colors(self): ax = cat.barplot(x="g", y="y", data=self.df) g = cat.catplot(x="g", y="y", data=self.df, kind="bar") for p1, p2 in zip(ax.patches, g.ax.patches): assert p1.get_facecolor() == p2.get_facecolor() plt.close("all") ax = cat.barplot(x="g", y="y", data=self.df, color="purple") g = cat.catplot(x="g", y="y", data=self.df, kind="bar", color="purple") for p1, p2 in zip(ax.patches, g.ax.patches): assert p1.get_facecolor() == p2.get_facecolor() plt.close("all") ax = cat.barplot(x="g", y="y", data=self.df, palette="Set2") g = cat.catplot(x="g", y="y", data=self.df, kind="bar", palette="Set2") for p1, p2 in zip(ax.patches, g.ax.patches): assert p1.get_facecolor() == p2.get_facecolor() plt.close("all") ax = cat.pointplot(x="g", y="y", data=self.df) g = cat.catplot(x="g", y="y", data=self.df) for l1, l2 in zip(ax.lines, g.ax.lines): assert l1.get_color() == l2.get_color() plt.close("all") ax = cat.pointplot(x="g", y="y", data=self.df, color="purple") g = cat.catplot(x="g", y="y", data=self.df, color="purple") for l1, l2 in zip(ax.lines, g.ax.lines): assert l1.get_color() == l2.get_color() plt.close("all") ax = cat.pointplot(x="g", y="y", data=self.df, palette="Set2") g = cat.catplot(x="g", y="y", data=self.df, palette="Set2") for l1, l2 in zip(ax.lines, g.ax.lines): assert l1.get_color() == l2.get_color() plt.close("all") def test_ax_kwarg_removal(self): f, ax = plt.subplots() with pytest.warns(UserWarning): g = cat.catplot(x="g", y="y", data=self.df, ax=ax) assert len(ax.collections) == 0 assert len(g.ax.collections) > 0 def test_factorplot(self): with pytest.warns(UserWarning): g = cat.factorplot(x="g", y="y", data=self.df) assert len(g.ax.collections) == 1 want_lines = self.g.unique().size + 1 assert len(g.ax.lines) == want_lines def test_share_xy(self): # Test default behavior works g = cat.catplot(x="g", y="y", col="g", data=self.df, sharex=True) for ax in g.axes.flat: 
assert len(ax.collections) == len(self.df.g.unique()) g = cat.catplot(x="y", y="g", col="g", data=self.df, sharey=True) for ax in g.axes.flat: assert len(ax.collections) == len(self.df.g.unique()) # Test unsharing works with pytest.warns(UserWarning): g = cat.catplot(x="g", y="y", col="g", data=self.df, sharex=False) for ax in g.axes.flat: assert len(ax.collections) == 1 with pytest.warns(UserWarning): g = cat.catplot(x="y", y="g", col="g", data=self.df, sharey=False) for ax in g.axes.flat: assert len(ax.collections) == 1 # Make sure no warning is raised if color is provided on unshared plot with pytest.warns(None) as record: g = cat.catplot( x="g", y="y", col="g", data=self.df, sharex=False, color="b" ) assert not len(record) with pytest.warns(None) as record: g = cat.catplot( x="y", y="g", col="g", data=self.df, sharey=False, color="r" ) assert not len(record) # Make sure order is used if given, regardless of sharex value order = self.df.g.unique() g = cat.catplot(x="g", y="y", col="g", data=self.df, sharex=False, order=order) for ax in g.axes.flat: assert len(ax.collections) == len(self.df.g.unique()) g = cat.catplot(x="y", y="g", col="g", data=self.df, sharey=False, order=order) for ax in g.axes.flat: assert len(ax.collections) == len(self.df.g.unique()) class TestBoxenPlotter(CategoricalFixture): default_kws = dict(x=None, y=None, hue=None, data=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, width=.8, dodge=True, k_depth='tukey', linewidth=None, scale='exponential', outlier_prop=0.007, trust_alpha=0.05, showfliers=True) def ispatch(self, c): return isinstance(c, mpl.collections.PatchCollection) def ispath(self, c): return isinstance(c, mpl.collections.PathCollection) def edge_calc(self, n, data): q = np.asanyarray([0.5 ** n, 1 - 0.5 ** n]) * 100 q = list(np.unique(q)) return np.percentile(data, q) def test_box_ends_finite(self): p = cat._LVPlotter(**self.default_kws) p.establish_variables("g", "y", data=self.df) 
box_ends = [] k_vals = [] for s in p.plot_data: b, k = p._lv_box_ends(s) box_ends.append(b) k_vals.append(k) # Check that all the box ends are finite and are within # the bounds of the data b_e = map(lambda a: np.all(np.isfinite(a)), box_ends) assert np.sum(list(b_e)) == len(box_ends) def within(t): a, d = t return ((np.ravel(a) <= d.max()) & (np.ravel(a) >= d.min())).all() b_w = map(within, zip(box_ends, p.plot_data)) assert np.sum(list(b_w)) == len(box_ends) k_f = map(lambda k: (k > 0.) & np.isfinite(k), k_vals) assert np.sum(list(k_f)) == len(k_vals) def test_box_ends_correct_tukey(self): n = 100 linear_data = np.arange(n) expected_k = max(int(np.log2(n)) - 3, 1) expected_edges = [self.edge_calc(i, linear_data) for i in range(expected_k + 1, 1, -1)] p = cat._LVPlotter(**self.default_kws) calc_edges, calc_k = p._lv_box_ends(linear_data) npt.assert_array_equal(expected_edges, calc_edges) assert expected_k == calc_k def test_box_ends_correct_proportion(self): n = 100 linear_data = np.arange(n) expected_k = int(np.log2(n)) - int(np.log2(n * 0.007)) + 1 expected_edges = [self.edge_calc(i, linear_data) for i in range(expected_k + 1, 1, -1)] kws = self.default_kws.copy() kws["k_depth"] = "proportion" p = cat._LVPlotter(**kws) calc_edges, calc_k = p._lv_box_ends(linear_data) npt.assert_array_equal(expected_edges, calc_edges) assert expected_k == calc_k @pytest.mark.parametrize( "n,exp_k", [(491, 6), (492, 7), (983, 7), (984, 8), (1966, 8), (1967, 9)], ) def test_box_ends_correct_trustworthy(self, n, exp_k): linear_data = np.arange(n) kws = self.default_kws.copy() kws["k_depth"] = "trustworthy" p = cat._LVPlotter(**kws) _, calc_k = p._lv_box_ends(linear_data) assert exp_k == calc_k def test_outliers(self): n = 100 outlier_data = np.append(np.arange(n - 1), 2 * n) expected_k = max(int(np.log2(n)) - 3, 1) expected_edges = [self.edge_calc(i, outlier_data) for i in range(expected_k + 1, 1, -1)] p = cat._LVPlotter(**self.default_kws) calc_edges, calc_k = 
p._lv_box_ends(outlier_data) npt.assert_array_equal(calc_edges, expected_edges) assert calc_k == expected_k out_calc = p._lv_outliers(outlier_data, calc_k) out_exp = p._lv_outliers(outlier_data, expected_k) npt.assert_equal(out_calc, out_exp) def test_showfliers(self): ax = cat.boxenplot(x="g", y="y", data=self.df, k_depth="proportion", showfliers=True) ax_collections = list(filter(self.ispath, ax.collections)) for c in ax_collections: assert len(c.get_offsets()) == 2 # Test that all data points are in the plot assert ax.get_ylim()[0] < self.df["y"].min() assert ax.get_ylim()[1] > self.df["y"].max() plt.close("all") ax = cat.boxenplot(x="g", y="y", data=self.df, showfliers=False) assert len(list(filter(self.ispath, ax.collections))) == 0 plt.close("all") def test_invalid_depths(self): kws = self.default_kws.copy() # Make sure illegal depth raises kws["k_depth"] = "nosuchdepth" with pytest.raises(ValueError): cat._LVPlotter(**kws) # Make sure illegal outlier_prop raises kws["k_depth"] = "proportion" for p in (-13, 37): kws["outlier_prop"] = p with pytest.raises(ValueError): cat._LVPlotter(**kws) kws["k_depth"] = "trustworthy" for alpha in (-13, 37): kws["trust_alpha"] = alpha with pytest.raises(ValueError): cat._LVPlotter(**kws) @pytest.mark.parametrize("power", [1, 3, 7, 11, 13, 17]) def test_valid_depths(self, power): x = np.random.standard_t(10, 2 ** power) valid_depths = ["proportion", "tukey", "trustworthy", "full"] kws = self.default_kws.copy() for depth in valid_depths + [4]: kws["k_depth"] = depth box_ends, k = cat._LVPlotter(**kws)._lv_box_ends(x) if depth == "full": assert k == int(np.log2(len(x))) + 1 def test_valid_scales(self): valid_scales = ["linear", "exponential", "area"] kws = self.default_kws.copy() for scale in valid_scales + ["unknown_scale"]: kws["scale"] = scale if scale not in valid_scales: with pytest.raises(ValueError): cat._LVPlotter(**kws) else: cat._LVPlotter(**kws) def test_hue_offsets(self): p = cat._LVPlotter(**self.default_kws) 
p.establish_variables("g", "y", hue="h", data=self.df) npt.assert_array_equal(p.hue_offsets, [-.2, .2]) kws = self.default_kws.copy() kws["width"] = .6 p = cat._LVPlotter(**kws) p.establish_variables("g", "y", hue="h", data=self.df) npt.assert_array_equal(p.hue_offsets, [-.15, .15]) p = cat._LVPlotter(**kws) p.establish_variables("h", "y", "g", data=self.df) npt.assert_array_almost_equal(p.hue_offsets, [-.2, 0, .2]) def test_axes_data(self): ax = cat.boxenplot(x="g", y="y", data=self.df) patches = filter(self.ispatch, ax.collections) assert len(list(patches)) == 3 plt.close("all") ax = cat.boxenplot(x="g", y="y", hue="h", data=self.df) patches = filter(self.ispatch, ax.collections) assert len(list(patches)) == 6 plt.close("all") def test_box_colors(self): ax = cat.boxenplot(x="g", y="y", data=self.df, saturation=1) pal = palettes.color_palette(n_colors=3) for patch, color in zip(ax.artists, pal): assert patch.get_facecolor()[:3] == color plt.close("all") ax = cat.boxenplot(x="g", y="y", hue="h", data=self.df, saturation=1) pal = palettes.color_palette(n_colors=2) for patch, color in zip(ax.artists, pal * 2): assert patch.get_facecolor()[:3] == color plt.close("all") def test_draw_missing_boxes(self): ax = cat.boxenplot(x="g", y="y", data=self.df, order=["a", "b", "c", "d"]) patches = filter(self.ispatch, ax.collections) assert len(list(patches)) == 3 plt.close("all") def test_unaligned_index(self): f, (ax1, ax2) = plt.subplots(2) cat.boxenplot(x=self.g, y=self.y, ax=ax1) cat.boxenplot(x=self.g, y=self.y_perm, ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert np.array_equal(l1.get_xydata(), l2.get_xydata()) f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.boxenplot(x=self.g, y=self.y, hue=self.h, hue_order=hue_order, ax=ax1) cat.boxenplot(x=self.g, y=self.y_perm, hue=self.h, hue_order=hue_order, ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert np.array_equal(l1.get_xydata(), l2.get_xydata()) def test_missing_data(self): x = ["a", "a", 
"b", "b", "c", "c", "d", "d"] h = ["x", "y", "x", "y", "x", "y", "x", "y"] y = self.rs.randn(8) y[-2:] = np.nan ax = cat.boxenplot(x=x, y=y) assert len(ax.lines) == 3 plt.close("all") y[-1] = 0 ax = cat.boxenplot(x=x, y=y, hue=h) assert len(ax.lines) == 7 plt.close("all") def test_boxenplots(self): # Smoke test the high level boxenplot options cat.boxenplot(x="y", data=self.df) plt.close("all") cat.boxenplot(y="y", data=self.df) plt.close("all") cat.boxenplot(x="g", y="y", data=self.df) plt.close("all") cat.boxenplot(x="y", y="g", data=self.df, orient="h") plt.close("all") cat.boxenplot(x="g", y="y", hue="h", data=self.df) plt.close("all") for scale in ("linear", "area", "exponential"): cat.boxenplot(x="g", y="y", hue="h", scale=scale, data=self.df) plt.close("all") for depth in ("proportion", "tukey", "trustworthy"): cat.boxenplot(x="g", y="y", hue="h", k_depth=depth, data=self.df) plt.close("all") order = list("nabc") cat.boxenplot(x="g", y="y", hue="h", order=order, data=self.df) plt.close("all") order = list("omn") cat.boxenplot(x="g", y="y", hue="h", hue_order=order, data=self.df) plt.close("all") cat.boxenplot(x="y", y="g", hue="h", data=self.df, orient="h") plt.close("all") cat.boxenplot(x="y", y="g", hue="h", data=self.df, orient="h", palette="Set2") plt.close("all") cat.boxenplot(x="y", y="g", hue="h", data=self.df, orient="h", color="b") plt.close("all") def test_axes_annotation(self): ax = cat.boxenplot(x="g", y="y", data=self.df) assert ax.get_xlabel() == "g" assert ax.get_ylabel() == "y" assert ax.get_xlim() == (-.5, 2.5) npt.assert_array_equal(ax.get_xticks(), [0, 1, 2]) npt.assert_array_equal([l.get_text() for l in ax.get_xticklabels()], ["a", "b", "c"]) plt.close("all") ax = cat.boxenplot(x="g", y="y", hue="h", data=self.df) assert ax.get_xlabel() == "g" assert ax.get_ylabel() == "y" npt.assert_array_equal(ax.get_xticks(), [0, 1, 2]) npt.assert_array_equal([l.get_text() for l in ax.get_xticklabels()], ["a", "b", "c"]) 
npt.assert_array_equal([l.get_text() for l in ax.legend_.get_texts()], ["m", "n"]) plt.close("all") ax = cat.boxenplot(x="y", y="g", data=self.df, orient="h") assert ax.get_xlabel() == "y" assert ax.get_ylabel() == "g" assert ax.get_ylim() == (2.5, -.5) npt.assert_array_equal(ax.get_yticks(), [0, 1, 2]) npt.assert_array_equal([l.get_text() for l in ax.get_yticklabels()], ["a", "b", "c"]) plt.close("all") @pytest.mark.parametrize("size", ["large", "medium", "small", 22, 12]) def test_legend_titlesize(self, size): if LooseVersion(mpl.__version__) >= LooseVersion("3.0"): rc_ctx = {"legend.title_fontsize": size} else: # Old matplotlib doesn't have legend.title_fontsize rcparam rc_ctx = {"axes.labelsize": size} if isinstance(size, int): size = size * .85 exp = mpl.font_manager.FontProperties(size=size).get_size() with plt.rc_context(rc=rc_ctx): ax = cat.boxenplot(x="g", y="y", hue="h", data=self.df) obs = ax.get_legend().get_title().get_fontproperties().get_size() assert obs == exp plt.close("all") @pytest.mark.skipif( LooseVersion(pd.__version__) < "1.2", reason="Test requires pandas>=1.2") def test_Float64_input(self): data = pd.DataFrame( {"x": np.random.choice(["a", "b"], 20), "y": np.random.random(20)} ) data['y'] = data['y'].astype(pd.Float64Dtype()) _ = cat.boxenplot(x="x", y="y", data=data) plt.close("all") seaborn-0.11.2/seaborn/tests/test_core.py000066400000000000000000001230571410631356500203740ustar00rootroot00000000000000import itertools import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt import pytest from numpy.testing import assert_array_equal from pandas.testing import assert_frame_equal from ..axisgrid import FacetGrid from .._core import ( SemanticMapping, HueMapping, SizeMapping, StyleMapping, VectorPlotter, variable_type, infer_orient, unique_dashes, unique_markers, categorical_order, ) from ..palettes import color_palette try: from pandas import NA as PD_NA except ImportError: PD_NA = None class 
TestSemanticMapping: def test_call_lookup(self): m = SemanticMapping(VectorPlotter()) lookup_table = dict(zip("abc", (1, 2, 3))) m.lookup_table = lookup_table for key, val in lookup_table.items(): assert m(key) == val class TestHueMapping: def test_init_from_map(self, long_df): p_orig = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue="a") ) palette = "Set2" p = HueMapping.map(p_orig, palette=palette) assert p is p_orig assert isinstance(p._hue_map, HueMapping) assert p._hue_map.palette == palette def test_plotter_default_init(self, long_df): p = VectorPlotter( data=long_df, variables=dict(x="x", y="y"), ) assert isinstance(p._hue_map, HueMapping) assert p._hue_map.map_type is None p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue="a"), ) assert isinstance(p._hue_map, HueMapping) assert p._hue_map.map_type == p.var_types["hue"] def test_plotter_reinit(self, long_df): p_orig = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue="a"), ) palette = "muted" hue_order = ["b", "a", "c"] p = p_orig.map_hue(palette=palette, order=hue_order) assert p is p_orig assert p._hue_map.palette == palette assert p._hue_map.levels == hue_order def test_hue_map_null(self, flat_series, null_series): p = VectorPlotter(variables=dict(x=flat_series, hue=null_series)) m = HueMapping(p) assert m.levels is None assert m.map_type is None assert m.palette is None assert m.cmap is None assert m.norm is None assert m.lookup_table is None def test_hue_map_categorical(self, wide_df, long_df): p = VectorPlotter(data=wide_df) m = HueMapping(p) assert m.levels == wide_df.columns.tolist() assert m.map_type == "categorical" assert m.cmap is None # Test named palette palette = "Blues" expected_colors = color_palette(palette, wide_df.shape[1]) expected_lookup_table = dict(zip(wide_df.columns, expected_colors)) m = HueMapping(p, palette=palette) assert m.palette == "Blues" assert m.lookup_table == expected_lookup_table # Test list palette palette = 
color_palette("Reds", wide_df.shape[1]) expected_lookup_table = dict(zip(wide_df.columns, palette)) m = HueMapping(p, palette=palette) assert m.palette == palette assert m.lookup_table == expected_lookup_table # Test dict palette colors = color_palette("Set1", 8) palette = dict(zip(wide_df.columns, colors)) m = HueMapping(p, palette=palette) assert m.palette == palette assert m.lookup_table == palette # Test dict with missing keys palette = dict(zip(wide_df.columns[:-1], colors)) with pytest.raises(ValueError): HueMapping(p, palette=palette) # Test list with wrong number of colors palette = colors[:-1] with pytest.raises(ValueError): HueMapping(p, palette=palette) # Test hue order hue_order = ["a", "c", "d"] m = HueMapping(p, order=hue_order) assert m.levels == hue_order # Test long data p = VectorPlotter(data=long_df, variables=dict(x="x", y="y", hue="a")) m = HueMapping(p) assert m.levels == categorical_order(long_df["a"]) assert m.map_type == "categorical" assert m.cmap is None # Test default palette m = HueMapping(p) hue_levels = categorical_order(long_df["a"]) expected_colors = color_palette(n_colors=len(hue_levels)) expected_lookup_table = dict(zip(hue_levels, expected_colors)) assert m.lookup_table == expected_lookup_table # Test missing data m = HueMapping(p) assert m(np.nan) == (0, 0, 0, 0) # Test default palette with many levels x = y = np.arange(26) hue = pd.Series(list("abcdefghijklmnopqrstuvwxyz")) p = VectorPlotter(variables=dict(x=x, y=y, hue=hue)) m = HueMapping(p) expected_colors = color_palette("husl", n_colors=len(hue)) expected_lookup_table = dict(zip(hue, expected_colors)) assert m.lookup_table == expected_lookup_table # Test binary data p = VectorPlotter(data=long_df, variables=dict(x="x", y="y", hue="c")) m = HueMapping(p) assert m.levels == [0, 1] assert m.map_type == "categorical" for val in [0, 1]:
p = VectorPlotter( data=long_df[long_df["c"] == val], variables=dict(x="x", y="y", hue="c"), ) m = HueMapping(p) assert m.levels == [val] assert m.map_type == "categorical" # Test Timestamp data p = VectorPlotter(data=long_df, variables=dict(x="x", y="y", hue="t")) m = HueMapping(p) assert m.levels == [pd.Timestamp(t) for t in long_df["t"].unique()] assert m.map_type == "datetime" # Test explicit categories p = VectorPlotter(data=long_df, variables=dict(x="x", hue="a_cat")) m = HueMapping(p) assert m.levels == long_df["a_cat"].cat.categories.tolist() assert m.map_type == "categorical" # Test numeric data with category type p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue="s_cat") ) m = HueMapping(p) assert m.levels == categorical_order(long_df["s_cat"]) assert m.map_type == "categorical" assert m.cmap is None # Test categorical palette specified for numeric data p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue="s") ) palette = "deep" levels = categorical_order(long_df["s"]) expected_colors = color_palette(palette, n_colors=len(levels)) expected_lookup_table = dict(zip(levels, expected_colors)) m = HueMapping(p, palette=palette) assert m.lookup_table == expected_lookup_table assert m.map_type == "categorical" def test_hue_map_numeric(self, long_df): # Test default colormap p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue="s") ) hue_levels = list(np.sort(long_df["s"].unique())) m = HueMapping(p) assert m.levels == hue_levels assert m.map_type == "numeric" assert m.cmap.name == "seaborn_cubehelix" # Test named colormap palette = "Purples" m = HueMapping(p, palette=palette) assert m.cmap is mpl.cm.get_cmap(palette) # Test colormap object palette = mpl.cm.get_cmap("Greens") m = HueMapping(p, palette=palette) assert m.cmap is mpl.cm.get_cmap(palette) # Test cubehelix shorthand palette = "ch:2,0,light=.2" m = HueMapping(p, palette=palette) assert isinstance(m.cmap, mpl.colors.ListedColormap) # Test specified hue limits
hue_norm = 1, 4 m = HueMapping(p, norm=hue_norm) assert isinstance(m.norm, mpl.colors.Normalize) assert m.norm.vmin == hue_norm[0] assert m.norm.vmax == hue_norm[1] # Test Normalize object hue_norm = mpl.colors.PowerNorm(2, vmin=1, vmax=10) m = HueMapping(p, norm=hue_norm) assert m.norm is hue_norm # Test default colormap values hmin, hmax = p.plot_data["hue"].min(), p.plot_data["hue"].max() m = HueMapping(p) assert m.lookup_table[hmin] == pytest.approx(m.cmap(0.0)) assert m.lookup_table[hmax] == pytest.approx(m.cmap(1.0)) # Test specified colormap values hue_norm = hmin - 1, hmax - 1 m = HueMapping(p, norm=hue_norm) norm_min = (hmin - hue_norm[0]) / (hue_norm[1] - hue_norm[0]) assert m.lookup_table[hmin] == pytest.approx(m.cmap(norm_min)) assert m.lookup_table[hmax] == pytest.approx(m.cmap(1.0)) # Test list of colors hue_levels = list(np.sort(long_df["s"].unique())) palette = color_palette("Blues", len(hue_levels)) m = HueMapping(p, palette=palette) assert m.lookup_table == dict(zip(hue_levels, palette)) palette = color_palette("Blues", len(hue_levels) + 1) with pytest.raises(ValueError): HueMapping(p, palette=palette) # Test dictionary of colors palette = dict(zip(hue_levels, color_palette("Reds"))) m = HueMapping(p, palette=palette) assert m.lookup_table == palette palette.pop(hue_levels[0]) with pytest.raises(ValueError): HueMapping(p, palette=palette) # Test invalid palette with pytest.raises(ValueError): HueMapping(p, palette="not a valid palette") # Test bad norm argument with pytest.raises(ValueError): HueMapping(p, norm="not a norm") class TestSizeMapping: def test_init_from_map(self, long_df): p_orig = VectorPlotter( data=long_df, variables=dict(x="x", y="y", size="a") ) sizes = 1, 6 p = SizeMapping.map(p_orig, sizes=sizes) assert p is p_orig assert isinstance(p._size_map, SizeMapping) assert min(p._size_map.lookup_table.values()) == sizes[0] assert max(p._size_map.lookup_table.values()) == sizes[1] def test_plotter_default_init(self, long_df): p = 
VectorPlotter( data=long_df, variables=dict(x="x", y="y"), ) assert isinstance(p._size_map, SizeMapping) assert p._size_map.map_type is None p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", size="a"), ) assert isinstance(p._size_map, SizeMapping) assert p._size_map.map_type == p.var_types["size"] def test_plotter_reinit(self, long_df): p_orig = VectorPlotter( data=long_df, variables=dict(x="x", y="y", size="a"), ) sizes = [1, 4, 2] size_order = ["b", "a", "c"] p = p_orig.map_size(sizes=sizes, order=size_order) assert p is p_orig assert p._size_map.lookup_table == dict(zip(size_order, sizes)) assert p._size_map.levels == size_order def test_size_map_null(self, flat_series, null_series): p = VectorPlotter(variables=dict(x=flat_series, size=null_series)) m = SizeMapping(p) assert m.levels is None assert m.map_type is None assert m.norm is None assert m.lookup_table is None def test_map_size_numeric(self, long_df): p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", size="s"), ) # Test default range of keys in the lookup table values m = SizeMapping(p) size_values = m.lookup_table.values() value_range = min(size_values), max(size_values) assert value_range == p._default_size_range # Test specified range of size values sizes = 1, 5 m = SizeMapping(p, sizes=sizes) size_values = m.lookup_table.values() assert (min(size_values), max(size_values)) == sizes # Test size values with normalization range norm = 1, 10 m = SizeMapping(p, sizes=sizes, norm=norm) normalize = mpl.colors.Normalize(*norm, clip=True) for key, val in m.lookup_table.items(): assert val == sizes[0] + (sizes[1] - sizes[0]) * normalize(key) # Test size values with normalization object norm = mpl.colors.LogNorm(1, 10, clip=False) m = SizeMapping(p, sizes=sizes, norm=norm) assert m.norm.clip for key, val in m.lookup_table.items(): assert val == sizes[0] + (sizes[1] - sizes[0]) * norm(key) # Test bad sizes argument with pytest.raises(ValueError): SizeMapping(p, sizes="bad_sizes") # Test bad
sizes argument with pytest.raises(ValueError): SizeMapping(p, sizes=(1, 2, 3)) # Test bad norm argument with pytest.raises(ValueError): SizeMapping(p, norm="bad_norm") def test_map_size_categorical(self, long_df): p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", size="a"), ) # Test specified size order levels = p.plot_data["size"].unique() sizes = [1, 4, 6] order = [levels[1], levels[2], levels[0]] m = SizeMapping(p, sizes=sizes, order=order) assert m.lookup_table == dict(zip(order, sizes)) # Test list of sizes order = categorical_order(p.plot_data["size"]) sizes = list(np.random.rand(len(levels))) m = SizeMapping(p, sizes=sizes) assert m.lookup_table == dict(zip(order, sizes)) # Test dict of sizes sizes = dict(zip(levels, np.random.rand(len(levels)))) m = SizeMapping(p, sizes=sizes) assert m.lookup_table == sizes # Test specified size range sizes = (2, 5) m = SizeMapping(p, sizes=sizes) values = np.linspace(*sizes, len(m.levels))[::-1] assert m.lookup_table == dict(zip(m.levels, values)) # Test explicit categories p = VectorPlotter(data=long_df, variables=dict(x="x", size="a_cat")) m = SizeMapping(p) assert m.levels == long_df["a_cat"].cat.categories.tolist() assert m.map_type == "categorical" # Test sizes list with wrong length sizes = list(np.random.rand(len(levels) + 1)) with pytest.raises(ValueError): SizeMapping(p, sizes=sizes) # Test sizes dict with missing levels sizes = dict(zip(levels, np.random.rand(len(levels) - 1))) with pytest.raises(ValueError): SizeMapping(p, sizes=sizes) # Test bad sizes argument with pytest.raises(ValueError): SizeMapping(p, sizes="bad_size") class TestStyleMapping: def test_init_from_map(self, long_df): p_orig = VectorPlotter( data=long_df, variables=dict(x="x", y="y", style="a") ) markers = ["s", "p", "h"] p = StyleMapping.map(p_orig, markers=markers) assert p is p_orig assert isinstance(p._style_map, StyleMapping) assert p._style_map(p._style_map.levels, "marker") == markers def test_plotter_default_init(self, 
long_df): p = VectorPlotter( data=long_df, variables=dict(x="x", y="y"), ) assert isinstance(p._style_map, StyleMapping) p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", style="a"), ) assert isinstance(p._style_map, StyleMapping) def test_plotter_reinit(self, long_df): p_orig = VectorPlotter( data=long_df, variables=dict(x="x", y="y", style="a"), ) markers = ["s", "p", "h"] style_order = ["b", "a", "c"] p = p_orig.map_style(markers=markers, order=style_order) assert p is p_orig assert p._style_map.levels == style_order assert p._style_map(style_order, "marker") == markers def test_style_map_null(self, flat_series, null_series): p = VectorPlotter(variables=dict(x=flat_series, style=null_series)) m = HueMapping(p) assert m.levels is None assert m.map_type is None assert m.lookup_table is None def test_map_style(self, long_df): p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", style="a"), ) # Test defaults m = StyleMapping(p, markers=True, dashes=True) n = len(m.levels) for key, dashes in zip(m.levels, unique_dashes(n)): assert m(key, "dashes") == dashes actual_marker_paths = { k: mpl.markers.MarkerStyle(m(k, "marker")).get_path() for k in m.levels } expected_marker_paths = { k: mpl.markers.MarkerStyle(m).get_path() for k, m in zip(m.levels, unique_markers(n)) } assert actual_marker_paths == expected_marker_paths # Test lists markers, dashes = ["o", "s", "d"], [(1, 0), (1, 1), (2, 1, 3, 1)] m = StyleMapping(p, markers=markers, dashes=dashes) for key, mark, dash in zip(m.levels, markers, dashes): assert m(key, "marker") == mark assert m(key, "dashes") == dash # Test dicts markers = dict(zip(p.plot_data["style"].unique(), markers)) dashes = dict(zip(p.plot_data["style"].unique(), dashes)) m = StyleMapping(p, markers=markers, dashes=dashes) for key in m.levels: assert m(key, "marker") == markers[key] assert m(key, "dashes") == dashes[key] # Test explicit categories p = VectorPlotter(data=long_df, variables=dict(x="x", style="a_cat")) m =
StyleMapping(p) assert m.levels == long_df["a_cat"].cat.categories.tolist() # Test style order with defaults order = p.plot_data["style"].unique()[[1, 2, 0]] m = StyleMapping(p, markers=True, dashes=True, order=order) n = len(order) for key, mark, dash in zip(order, unique_markers(n), unique_dashes(n)): assert m(key, "dashes") == dash assert m(key, "marker") == mark obj = mpl.markers.MarkerStyle(mark) path = obj.get_path().transformed(obj.get_transform()) assert_array_equal(m(key, "path").vertices, path.vertices) # Test too many levels with style lists with pytest.raises(ValueError): StyleMapping(p, markers=["o", "s"], dashes=False) with pytest.raises(ValueError): StyleMapping(p, markers=False, dashes=[(2, 1)]) # Test too many levels with style dicts markers, dashes = {"a": "o", "b": "s"}, False with pytest.raises(ValueError): StyleMapping(p, markers=markers, dashes=dashes) markers, dashes = False, {"a": (1, 0), "b": (2, 1)} with pytest.raises(ValueError): StyleMapping(p, markers=markers, dashes=dashes) # Test mixture of filled and unfilled markers markers, dashes = ["o", "x", "s"], None with pytest.raises(ValueError): StyleMapping(p, markers=markers, dashes=dashes) class TestVectorPlotter: def test_flat_variables(self, flat_data): p = VectorPlotter() p.assign_variables(data=flat_data) assert p.input_format == "wide" assert list(p.variables) == ["x", "y"] assert len(p.plot_data) == len(flat_data) try: expected_x = flat_data.index expected_x_name = flat_data.index.name except AttributeError: expected_x = np.arange(len(flat_data)) expected_x_name = None x = p.plot_data["x"] assert_array_equal(x, expected_x) expected_y = flat_data expected_y_name = getattr(flat_data, "name", None) y = p.plot_data["y"] assert_array_equal(y, expected_y) assert p.variables["x"] == expected_x_name assert p.variables["y"] == expected_y_name # TODO note that most of the other tests that exercise the core # variable assignment code still live in test_relational 
@pytest.mark.parametrize("name", [3, 4.5]) def test_long_numeric_name(self, long_df, name): long_df[name] = long_df["x"] p = VectorPlotter() p.assign_variables(data=long_df, variables={"x": name}) assert_array_equal(p.plot_data["x"], long_df[name]) assert p.variables["x"] == name def test_long_hierarchical_index(self, rng): cols = pd.MultiIndex.from_product([["a"], ["x", "y"]]) data = rng.uniform(size=(50, 2)) df = pd.DataFrame(data, columns=cols) name = ("a", "y") var = "y" p = VectorPlotter() p.assign_variables(data=df, variables={var: name}) assert_array_equal(p.plot_data[var], df[name]) assert p.variables[var] == name def test_long_scalar_and_data(self, long_df): val = 22 p = VectorPlotter(data=long_df, variables={"x": "x", "y": val}) assert (p.plot_data["y"] == val).all() assert p.variables["y"] is None def test_wide_semantic_error(self, wide_df): err = "The following variable cannot be assigned with wide-form data: `hue`" with pytest.raises(ValueError, match=err): VectorPlotter(data=wide_df, variables={"hue": "a"}) def test_long_unknown_error(self, long_df): err = "Could not interpret value `what` for parameter `hue`" with pytest.raises(ValueError, match=err): VectorPlotter(data=long_df, variables={"x": "x", "hue": "what"}) def test_long_unmatched_size_error(self, long_df, flat_array): err = "Length of ndarray vectors must match length of `data`" with pytest.raises(ValueError, match=err): VectorPlotter(data=long_df, variables={"x": "x", "hue": flat_array}) def test_wide_categorical_columns(self, wide_df): wide_df.columns = pd.CategoricalIndex(wide_df.columns) p = VectorPlotter(data=wide_df) assert_array_equal(p.plot_data["hue"].unique(), ["a", "b", "c"]) def test_iter_data_quantitites(self, long_df): p = VectorPlotter( data=long_df, variables=dict(x="x", y="y"), ) out = p.iter_data("hue") assert len(list(out)) == 1 var = "a" n_subsets = len(long_df[var].unique()) semantics = ["hue", "size", "style"] for semantic in semantics: p = VectorPlotter( data=long_df, 
variables={"x": "x", "y": "y", semantic: var}, ) out = p.iter_data(semantics) assert len(list(out)) == n_subsets var = "a" n_subsets = len(long_df[var].unique()) p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue=var, style=var), ) out = p.iter_data(semantics) assert len(list(out)) == n_subsets # -- out = p.iter_data(semantics, reverse=True) assert len(list(out)) == n_subsets # -- var1, var2 = "a", "s" n_subsets = len(long_df[var1].unique()) p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue=var1, style=var2), ) out = p.iter_data(["hue"]) assert len(list(out)) == n_subsets n_subsets = len(set(list(map(tuple, long_df[[var1, var2]].values)))) p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue=var1, style=var2), ) out = p.iter_data(semantics) assert len(list(out)) == n_subsets p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue=var1, size=var2, style=var1), ) out = p.iter_data(semantics) assert len(list(out)) == n_subsets # -- var1, var2, var3 = "a", "s", "b" cols = [var1, var2, var3] n_subsets = len(set(list(map(tuple, long_df[cols].values)))) p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue=var1, size=var2, style=var3), ) out = p.iter_data(semantics) assert len(list(out)) == n_subsets def test_iter_data_keys(self, long_df): semantics = ["hue", "size", "style"] p = VectorPlotter( data=long_df, variables=dict(x="x", y="y"), ) for sub_vars, _ in p.iter_data("hue"): assert sub_vars == {} # -- var = "a" p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue=var), ) for sub_vars, _ in p.iter_data("hue"): assert list(sub_vars) == ["hue"] assert sub_vars["hue"] in long_df[var].values p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", size=var), ) for sub_vars, _ in p.iter_data("size"): assert list(sub_vars) == ["size"] assert sub_vars["size"] in long_df[var].values p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue=var, style=var), ) for sub_vars, _ in 
p.iter_data(semantics): assert list(sub_vars) == ["hue", "style"] assert sub_vars["hue"] in long_df[var].values assert sub_vars["style"] in long_df[var].values assert sub_vars["hue"] == sub_vars["style"] var1, var2 = "a", "s" p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue=var1, size=var2), ) for sub_vars, _ in p.iter_data(semantics): assert list(sub_vars) == ["hue", "size"] assert sub_vars["hue"] in long_df[var1].values assert sub_vars["size"] in long_df[var2].values semantics = ["hue", "col", "row"] p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue=var1, col=var2), ) for sub_vars, _ in p.iter_data("hue"): assert list(sub_vars) == ["hue", "col"] assert sub_vars["hue"] in long_df[var1].values assert sub_vars["col"] in long_df[var2].values def test_iter_data_values(self, long_df): p = VectorPlotter( data=long_df, variables=dict(x="x", y="y"), ) p.sort = True _, sub_data = next(p.iter_data("hue")) assert_frame_equal(sub_data, p.plot_data) p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue="a"), ) for sub_vars, sub_data in p.iter_data("hue"): rows = p.plot_data["hue"] == sub_vars["hue"] assert_frame_equal(sub_data, p.plot_data[rows]) p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue="a", size="s"), ) for sub_vars, sub_data in p.iter_data(["hue", "size"]): rows = p.plot_data["hue"] == sub_vars["hue"] rows &= p.plot_data["size"] == sub_vars["size"] assert_frame_equal(sub_data, p.plot_data[rows]) def test_iter_data_reverse(self, long_df): reversed_order = categorical_order(long_df["a"])[::-1] p = VectorPlotter( data=long_df, variables=dict(x="x", y="y", hue="a") ) iterator = p.iter_data("hue", reverse=True) for i, (sub_vars, _) in enumerate(iterator): assert sub_vars["hue"] == reversed_order[i] def test_axis_labels(self, long_df): f, ax = plt.subplots() p = VectorPlotter(data=long_df, variables=dict(x="a")) p._add_axis_labels(ax) assert ax.get_xlabel() == "a" assert ax.get_ylabel() == "" ax.clear() p = 
VectorPlotter(data=long_df, variables=dict(y="a")) p._add_axis_labels(ax) assert ax.get_xlabel() == "" assert ax.get_ylabel() == "a" ax.clear() p = VectorPlotter(data=long_df, variables=dict(x="a")) p._add_axis_labels(ax, default_y="default") assert ax.get_xlabel() == "a" assert ax.get_ylabel() == "default" ax.clear() p = VectorPlotter(data=long_df, variables=dict(y="a")) p._add_axis_labels(ax, default_x="default", default_y="default") assert ax.get_xlabel() == "default" assert ax.get_ylabel() == "a" ax.clear() p = VectorPlotter(data=long_df, variables=dict(x="x", y="a")) ax.set(xlabel="existing", ylabel="also existing") p._add_axis_labels(ax) assert ax.get_xlabel() == "existing" assert ax.get_ylabel() == "also existing" f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) p = VectorPlotter(data=long_df, variables=dict(x="x", y="y")) p._add_axis_labels(ax1) p._add_axis_labels(ax2) assert ax1.get_xlabel() == "x" assert ax1.get_ylabel() == "y" assert ax1.yaxis.label.get_visible() assert ax2.get_xlabel() == "x" assert ax2.get_ylabel() == "y" assert not ax2.yaxis.label.get_visible() @pytest.mark.parametrize( "variables", [ dict(x="x", y="y"), dict(x="x"), dict(y="y"), dict(x="t", y="y"), dict(x="x", y="a"), ] ) def test_attach_basics(self, long_df, variables): _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables=variables) p._attach(ax) assert p.ax is ax def test_attach_disallowed(self, long_df): _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"x": "a"}) with pytest.raises(TypeError): p._attach(ax, allowed_types="numeric") with pytest.raises(TypeError): p._attach(ax, allowed_types=["datetime", "numeric"]) _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"x": "x"}) with pytest.raises(TypeError): p._attach(ax, allowed_types="categorical") _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"x": "x", "y": "t"}) with pytest.raises(TypeError): p._attach(ax, allowed_types=["numeric", "categorical"]) def 
test_attach_log_scale(self, long_df): _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"x": "x"}) p._attach(ax, log_scale=True) assert ax.xaxis.get_scale() == "log" assert ax.yaxis.get_scale() == "linear" assert p._log_scaled("x") assert not p._log_scaled("y") _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"x": "x"}) p._attach(ax, log_scale=2) assert ax.xaxis.get_scale() == "log" assert ax.yaxis.get_scale() == "linear" assert p._log_scaled("x") assert not p._log_scaled("y") _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"y": "y"}) p._attach(ax, log_scale=True) assert ax.xaxis.get_scale() == "linear" assert ax.yaxis.get_scale() == "log" assert not p._log_scaled("x") assert p._log_scaled("y") _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"x": "x", "y": "y"}) p._attach(ax, log_scale=True) assert ax.xaxis.get_scale() == "log" assert ax.yaxis.get_scale() == "log" assert p._log_scaled("x") assert p._log_scaled("y") _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"x": "x", "y": "y"}) p._attach(ax, log_scale=(True, False)) assert ax.xaxis.get_scale() == "log" assert ax.yaxis.get_scale() == "linear" assert p._log_scaled("x") assert not p._log_scaled("y") _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"x": "x", "y": "y"}) p._attach(ax, log_scale=(False, 2)) assert ax.xaxis.get_scale() == "linear" assert ax.yaxis.get_scale() == "log" assert not p._log_scaled("x") assert p._log_scaled("y") def test_attach_converters(self, long_df): _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"x": "x", "y": "t"}) p._attach(ax) assert ax.xaxis.converter is None assert isinstance(ax.yaxis.converter, mpl.dates.DateConverter) _, ax = plt.subplots() p = VectorPlotter(data=long_df, variables={"x": "a", "y": "y"}) p._attach(ax) assert isinstance(ax.xaxis.converter, mpl.category.StrCategoryConverter) assert ax.yaxis.converter is None def test_attach_facets(self, 
long_df): g = FacetGrid(long_df, col="a") p = VectorPlotter(data=long_df, variables={"x": "x", "col": "a"}) p._attach(g) assert p.ax is None assert p.facets == g def test_get_axes_single(self, long_df): ax = plt.figure().subplots() p = VectorPlotter(data=long_df, variables={"x": "x", "hue": "a"}) p._attach(ax) assert p._get_axes({"hue": "a"}) is ax def test_get_axes_facets(self, long_df): g = FacetGrid(long_df, col="a") p = VectorPlotter(data=long_df, variables={"x": "x", "col": "a"}) p._attach(g) assert p._get_axes({"col": "b"}) is g.axes_dict["b"] g = FacetGrid(long_df, col="a", row="c") p = VectorPlotter( data=long_df, variables={"x": "x", "col": "a", "row": "c"} ) p._attach(g) assert p._get_axes({"row": 1, "col": "b"}) is g.axes_dict[(1, "b")] def test_comp_data(self, long_df): p = VectorPlotter(data=long_df, variables={"x": "x", "y": "t"}) # We have disabled this check for now, while it remains part of # the internal API, because it will require updating a number of tests # with pytest.raises(AttributeError): # p.comp_data _, ax = plt.subplots() p._attach(ax) assert_array_equal(p.comp_data["x"], p.plot_data["x"]) assert_array_equal( p.comp_data["y"], ax.yaxis.convert_units(p.plot_data["y"]) ) p = VectorPlotter(data=long_df, variables={"x": "a"}) _, ax = plt.subplots() p._attach(ax) assert_array_equal( p.comp_data["x"], ax.xaxis.convert_units(p.plot_data["x"]) ) def test_comp_data_log(self, long_df): p = VectorPlotter(data=long_df, variables={"x": "z", "y": "y"}) _, ax = plt.subplots() p._attach(ax, log_scale=(True, False)) assert_array_equal( p.comp_data["x"], np.log10(p.plot_data["x"]) ) assert_array_equal(p.comp_data["y"], p.plot_data["y"]) def test_comp_data_category_order(self): s = (pd.Series(["a", "b", "c", "a"], dtype="category") .cat.set_categories(["b", "c", "a"], ordered=True)) p = VectorPlotter(variables={"x": s}) _, ax = plt.subplots() p._attach(ax) assert_array_equal( p.comp_data["x"], [2, 0, 1, 2], ) @pytest.fixture( params=itertools.product( 
            [None, np.nan, PD_NA],
            ["numeric", "category", "datetime"]
        )
    )
    def comp_data_missing_fixture(self, request):
        # This fixture holds the logic for parametrizing
        # the following test (test_comp_data_missing)
        NA, var_type = request.param
        if NA is None:
            pytest.skip("No pandas.NA available")
        comp_data = [0, 1, np.nan, 2, np.nan, 1]
        if var_type == "numeric":
            orig_data = [0, 1, NA, 2, np.inf, 1]
        elif var_type == "category":
            orig_data = ["a", "b", NA, "c", NA, "b"]
        elif var_type == "datetime":
            # Use 1-based numbers to avoid issue on matplotlib<3.2
            # Could simplify the test a bit when we roll off that version
            comp_data = [1, 2, np.nan, 3, np.nan, 2]
            numbers = [1, 2, 3, 2]
            orig_data = mpl.dates.num2date(numbers)
            orig_data.insert(2, NA)
            orig_data.insert(4, np.inf)
        return orig_data, comp_data

    def test_comp_data_missing(self, comp_data_missing_fixture):
        orig_data, comp_data = comp_data_missing_fixture
        p = VectorPlotter(variables={"x": orig_data})
        ax = plt.figure().subplots()
        p._attach(ax)
        assert_array_equal(p.comp_data["x"], comp_data)

    def test_var_order(self, long_df):
        order = ["c", "b", "a"]
        for var in ["hue", "size", "style"]:
            p = VectorPlotter(data=long_df, variables={"x": "x", var: "a"})
            mapper = getattr(p, f"map_{var}")
            mapper(order=order)
            assert p.var_levels[var] == order


class TestCoreFunc:

    def test_unique_dashes(self):
        n = 24
        dashes = unique_dashes(n)
        assert len(dashes) == n
        assert len(set(dashes)) == n
        assert dashes[0] == ""
        for spec in dashes[1:]:
            assert isinstance(spec, tuple)
            assert not len(spec) % 2

    def test_unique_markers(self):
        n = 24
        markers = unique_markers(n)
        assert len(markers) == n
        assert len(set(markers)) == n
        for m in markers:
            assert mpl.markers.MarkerStyle(m).is_filled()

    def test_variable_type(self):
        s = pd.Series([1., 2., 3.])
        assert variable_type(s) == "numeric"
        assert variable_type(s.astype(int)) == "numeric"
        assert variable_type(s.astype(object)) == "numeric"
        # assert variable_type(s.to_numpy()) == "numeric"
        assert variable_type(s.values) == "numeric"
        # assert variable_type(s.to_list()) == "numeric"
        assert variable_type(s.tolist()) == "numeric"

        s = pd.Series([1, 2, 3, np.nan], dtype=object)
        assert variable_type(s) == "numeric"

        s = pd.Series([np.nan, np.nan])
        # s = pd.Series([pd.NA, pd.NA])
        assert variable_type(s) == "numeric"

        s = pd.Series(["1", "2", "3"])
        assert variable_type(s) == "categorical"
        # assert variable_type(s.to_numpy()) == "categorical"
        assert variable_type(s.values) == "categorical"
        # assert variable_type(s.to_list()) == "categorical"
        assert variable_type(s.tolist()) == "categorical"

        s = pd.Series([True, False, False])
        assert variable_type(s) == "numeric"
        assert variable_type(s, boolean_type="categorical") == "categorical"
        s_cat = s.astype("category")
        assert variable_type(s_cat, boolean_type="categorical") == "categorical"
        assert variable_type(s_cat, boolean_type="numeric") == "categorical"

        s = pd.Series([pd.Timestamp(1), pd.Timestamp(2)])
        assert variable_type(s) == "datetime"
        assert variable_type(s.astype(object)) == "datetime"
        # assert variable_type(s.to_numpy()) == "datetime"
        assert variable_type(s.values) == "datetime"
        # assert variable_type(s.to_list()) == "datetime"
        assert variable_type(s.tolist()) == "datetime"

    def test_infer_orient(self):

        nums = pd.Series(np.arange(6))
        cats = pd.Series(["a", "b"] * 3)

        assert infer_orient(cats, nums) == "v"
        assert infer_orient(nums, cats) == "h"

        assert infer_orient(nums, None) == "h"
        with pytest.warns(UserWarning, match="Vertical .+ `x`"):
            assert infer_orient(nums, None, "v") == "h"

        assert infer_orient(None, nums) == "v"
        with pytest.warns(UserWarning, match="Horizontal .+ `y`"):
            assert infer_orient(None, nums, "h") == "v"

        assert infer_orient(cats, None, require_numeric=False) == "h"
        with pytest.raises(TypeError, match="Horizontal .+ `x`"):
            infer_orient(cats, None)

        assert infer_orient(None, cats, require_numeric=False) == "v"
        with pytest.raises(TypeError, match="Vertical .+ `y`"):
            infer_orient(None, cats)

        assert infer_orient(nums, nums, "vert") == "v"
        assert infer_orient(nums, nums, "hori") == "h"

        assert infer_orient(cats, cats, "h", require_numeric=False) == "h"
        assert infer_orient(cats, cats, "v", require_numeric=False) == "v"
        assert infer_orient(cats, cats, require_numeric=False) == "v"

        with pytest.raises(TypeError, match="Vertical .+ `y`"):
            infer_orient(cats, cats, "v")
        with pytest.raises(TypeError, match="Horizontal .+ `x`"):
            infer_orient(cats, cats, "h")
        with pytest.raises(TypeError, match="Neither"):
            infer_orient(cats, cats)

    def test_categorical_order(self):

        x = ["a", "c", "c", "b", "a", "d"]
        y = [3, 2, 5, 1, 4]
        order = ["a", "b", "c", "d"]

        out = categorical_order(x)
        assert out == ["a", "c", "b", "d"]

        out = categorical_order(x, order)
        assert out == order

        out = categorical_order(x, ["b", "a"])
        assert out == ["b", "a"]

        out = categorical_order(np.array(x))
        assert out == ["a", "c", "b", "d"]

        out = categorical_order(pd.Series(x))
        assert out == ["a", "c", "b", "d"]

        out = categorical_order(y)
        assert out == [1, 2, 3, 4, 5]

        out = categorical_order(np.array(y))
        assert out == [1, 2, 3, 4, 5]

        out = categorical_order(pd.Series(y))
        assert out == [1, 2, 3, 4, 5]

        x = pd.Categorical(x, order)
        out = categorical_order(x)
        assert out == list(x.categories)

        x = pd.Series(x)
        out = categorical_order(x)
        assert out == list(x.cat.categories)

        out = categorical_order(x, ["b", "a"])
        assert out == ["b", "a"]

        x = ["a", np.nan, "c", "c", "b", "a", "d"]
        out = categorical_order(x)
        assert out == ["a", "c", "b", "d"]

seaborn-0.11.2/seaborn/tests/test_decorators.py000066400000000000000000000055031410631356500216040ustar00rootroot00000000000000
import inspect
import pytest

from .._decorators import (
    _deprecate_positional_args,
    share_init_params_with_map,
)


# This test was adapted from scikit-learn
# github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/tests/test_validation.py
def test_deprecate_positional_args_warns_for_function():

    @_deprecate_positional_args
    def f1(a, b, *, c=1, d=1):
        return a,
b, c, d with pytest.warns( FutureWarning, match=r"Pass the following variable as a keyword arg: c\." ): assert f1(1, 2, 3) == (1, 2, 3, 1) with pytest.warns( FutureWarning, match=r"Pass the following variables as keyword args: c, d\." ): assert f1(1, 2, 3, 4) == (1, 2, 3, 4) @_deprecate_positional_args def f2(a=1, *, b=1, c=1, d=1): return a, b, c, d with pytest.warns( FutureWarning, match=r"Pass the following variable as a keyword arg: b\.", ): assert f2(1, 2) == (1, 2, 1, 1) # The * is placed before a keyword only argument without a default value @_deprecate_positional_args def f3(a, *, b, c=1, d=1): return a, b, c, d with pytest.warns( FutureWarning, match=r"Pass the following variable as a keyword arg: b\.", ): assert f3(1, 2) == (1, 2, 1, 1) def test_deprecate_positional_args_warns_for_class(): class A1: @_deprecate_positional_args def __init__(self, a, b, *, c=1, d=1): self.a = a, b, c, d with pytest.warns( FutureWarning, match=r"Pass the following variable as a keyword arg: c\." ): assert A1(1, 2, 3).a == (1, 2, 3, 1) with pytest.warns( FutureWarning, match=r"Pass the following variables as keyword args: c, d\." 
    ):
        assert A1(1, 2, 3, 4).a == (1, 2, 3, 4)

    class A2:
        @_deprecate_positional_args
        def __init__(self, a=1, b=1, *, c=1, d=1):
            self.a = a, b, c, d

    with pytest.warns(
        FutureWarning,
        match=r"Pass the following variable as a keyword arg: c\.",
    ):
        assert A2(1, 2, 3).a == (1, 2, 3, 1)

    with pytest.warns(
        FutureWarning,
        match=r"Pass the following variables as keyword args: c, d\.",
    ):
        assert A2(1, 2, 3, 4).a == (1, 2, 3, 4)


def test_share_init_params_with_map():

    @share_init_params_with_map
    class Thingie:

        def map(cls, *args, **kwargs):
            return cls(*args, **kwargs)

        def __init__(self, a, b=1):
            """Make a new thingie."""
            self.a = a
            self.b = b

    thingie = Thingie.map(1, b=2)
    assert thingie.a == 1
    assert thingie.b == 2

    assert "a" in inspect.signature(Thingie.map).parameters
    assert "b" in inspect.signature(Thingie.map).parameters

    assert Thingie.map.__doc__ == Thingie.__init__.__doc__

seaborn-0.11.2/seaborn/tests/test_distributions.py000066400000000000000000002161741410631356500223510ustar00rootroot00000000000000
import itertools
from distutils.version import LooseVersion

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import to_rgb, to_rgba
import scipy
from scipy import stats, integrate

import pytest
from numpy.testing import assert_array_equal, assert_array_almost_equal

from ..
import distributions as dist from ..palettes import ( color_palette, light_palette, ) from .._core import ( categorical_order, ) from .._statistics import ( KDE, Histogram, ) from ..distributions import ( _DistributionPlotter, displot, distplot, histplot, ecdfplot, kdeplot, rugplot, ) from ..axisgrid import FacetGrid from .._testing import ( assert_plots_equal, assert_legends_equal, ) class TestDistPlot(object): rs = np.random.RandomState(0) x = rs.randn(100) def test_hist_bins(self): fd_edges = np.histogram_bin_edges(self.x, "fd") with pytest.warns(FutureWarning): ax = distplot(self.x) for edge, bar in zip(fd_edges, ax.patches): assert pytest.approx(edge) == bar.get_x() plt.close(ax.figure) n = 25 n_edges = np.histogram_bin_edges(self.x, n) with pytest.warns(FutureWarning): ax = distplot(self.x, bins=n) for edge, bar in zip(n_edges, ax.patches): assert pytest.approx(edge) == bar.get_x() def test_elements(self): with pytest.warns(FutureWarning): n = 10 ax = distplot(self.x, bins=n, hist=True, kde=False, rug=False, fit=None) assert len(ax.patches) == 10 assert len(ax.lines) == 0 assert len(ax.collections) == 0 plt.close(ax.figure) ax = distplot(self.x, hist=False, kde=True, rug=False, fit=None) assert len(ax.patches) == 0 assert len(ax.lines) == 1 assert len(ax.collections) == 0 plt.close(ax.figure) ax = distplot(self.x, hist=False, kde=False, rug=True, fit=None) assert len(ax.patches) == 0 assert len(ax.lines) == 0 assert len(ax.collections) == 1 plt.close(ax.figure) ax = distplot(self.x, hist=False, kde=False, rug=False, fit=stats.norm) assert len(ax.patches) == 0 assert len(ax.lines) == 1 assert len(ax.collections) == 0 def test_distplot_with_nans(self): f, (ax1, ax2) = plt.subplots(2) x_null = np.append(self.x, [np.nan]) with pytest.warns(FutureWarning): distplot(self.x, ax=ax1) distplot(x_null, ax=ax2) line1 = ax1.lines[0] line2 = ax2.lines[0] assert np.array_equal(line1.get_xydata(), line2.get_xydata()) for bar1, bar2 in zip(ax1.patches, ax2.patches): assert 
bar1.get_xy() == bar2.get_xy() assert bar1.get_height() == bar2.get_height() class TestRugPlot: def assert_rug_equal(self, a, b): assert_array_equal(a.get_segments(), b.get_segments()) @pytest.mark.parametrize("variable", ["x", "y"]) def test_long_data(self, long_df, variable): vector = long_df[variable] vectors = [ variable, vector, np.asarray(vector), vector.tolist(), ] f, ax = plt.subplots() for vector in vectors: rugplot(data=long_df, **{variable: vector}) for a, b in itertools.product(ax.collections, ax.collections): self.assert_rug_equal(a, b) def test_bivariate_data(self, long_df): f, (ax1, ax2) = plt.subplots(ncols=2) rugplot(data=long_df, x="x", y="y", ax=ax1) rugplot(data=long_df, x="x", ax=ax2) rugplot(data=long_df, y="y", ax=ax2) self.assert_rug_equal(ax1.collections[0], ax2.collections[0]) self.assert_rug_equal(ax1.collections[1], ax2.collections[1]) def test_wide_vs_long_data(self, wide_df): f, (ax1, ax2) = plt.subplots(ncols=2) rugplot(data=wide_df, ax=ax1) for col in wide_df: rugplot(data=wide_df, x=col, ax=ax2) wide_segments = np.sort( np.array(ax1.collections[0].get_segments()) ) long_segments = np.sort( np.concatenate([c.get_segments() for c in ax2.collections]) ) assert_array_equal(wide_segments, long_segments) def test_flat_vector(self, long_df): f, ax = plt.subplots() rugplot(data=long_df["x"]) rugplot(x=long_df["x"]) self.assert_rug_equal(*ax.collections) def test_datetime_data(self, long_df): ax = rugplot(data=long_df["t"]) vals = np.stack(ax.collections[0].get_segments())[:, 0, 0] assert_array_equal(vals, mpl.dates.date2num(long_df["t"])) def test_empty_data(self): ax = rugplot(x=[]) assert not ax.collections def test_a_deprecation(self, flat_series): f, ax = plt.subplots() with pytest.warns(FutureWarning): rugplot(a=flat_series) rugplot(x=flat_series) self.assert_rug_equal(*ax.collections) @pytest.mark.parametrize("variable", ["x", "y"]) def test_axis_deprecation(self, flat_series, variable): f, ax = plt.subplots() with 
pytest.warns(FutureWarning): rugplot(flat_series, axis=variable) rugplot(**{variable: flat_series}) self.assert_rug_equal(*ax.collections) def test_vertical_deprecation(self, flat_series): f, ax = plt.subplots() with pytest.warns(FutureWarning): rugplot(flat_series, vertical=True) rugplot(y=flat_series) self.assert_rug_equal(*ax.collections) def test_rug_data(self, flat_array): height = .05 ax = rugplot(x=flat_array, height=height) segments = np.stack(ax.collections[0].get_segments()) n = flat_array.size assert_array_equal(segments[:, 0, 1], np.zeros(n)) assert_array_equal(segments[:, 1, 1], np.full(n, height)) assert_array_equal(segments[:, 1, 0], flat_array) def test_rug_colors(self, long_df): ax = rugplot(data=long_df, x="x", hue="a") order = categorical_order(long_df["a"]) palette = color_palette() expected_colors = np.ones((len(long_df), 4)) for i, val in enumerate(long_df["a"]): expected_colors[i, :3] = palette[order.index(val)] assert_array_equal(ax.collections[0].get_color(), expected_colors) def test_expand_margins(self, flat_array): f, ax = plt.subplots() x1, y1 = ax.margins() rugplot(x=flat_array, expand_margins=False) x2, y2 = ax.margins() assert x1 == x2 assert y1 == y2 f, ax = plt.subplots() x1, y1 = ax.margins() height = .05 rugplot(x=flat_array, height=height) x2, y2 = ax.margins() assert x1 == x2 assert y1 + height * 2 == pytest.approx(y2) def test_matplotlib_kwargs(self, flat_series): lw = 2 alpha = .2 ax = rugplot(y=flat_series, linewidth=lw, alpha=alpha) rug = ax.collections[0] assert np.all(rug.get_alpha() == alpha) assert np.all(rug.get_linewidth() == lw) def test_axis_labels(self, flat_series): ax = rugplot(x=flat_series) assert ax.get_xlabel() == flat_series.name assert not ax.get_ylabel() def test_log_scale(self, long_df): ax1, ax2 = plt.figure().subplots(2) ax2.set_xscale("log") rugplot(data=long_df, x="z", ax=ax1) rugplot(data=long_df, x="z", ax=ax2) rug1 = np.stack(ax1.collections[0].get_segments()) rug2 = 
np.stack(ax2.collections[0].get_segments()) assert_array_almost_equal(rug1, rug2) class TestKDEPlotUnivariate: @pytest.mark.parametrize( "variable", ["x", "y"], ) def test_long_vectors(self, long_df, variable): vector = long_df[variable] vectors = [ variable, vector, np.asarray(vector), vector.tolist(), ] f, ax = plt.subplots() for vector in vectors: kdeplot(data=long_df, **{variable: vector}) xdata = [l.get_xdata() for l in ax.lines] for a, b in itertools.product(xdata, xdata): assert_array_equal(a, b) ydata = [l.get_ydata() for l in ax.lines] for a, b in itertools.product(ydata, ydata): assert_array_equal(a, b) def test_wide_vs_long_data(self, wide_df): f, (ax1, ax2) = plt.subplots(ncols=2) kdeplot(data=wide_df, ax=ax1, common_norm=False, common_grid=False) for col in wide_df: kdeplot(data=wide_df, x=col, ax=ax2) for l1, l2 in zip(ax1.lines[::-1], ax2.lines): assert_array_equal(l1.get_xydata(), l2.get_xydata()) def test_flat_vector(self, long_df): f, ax = plt.subplots() kdeplot(data=long_df["x"]) kdeplot(x=long_df["x"]) assert_array_equal(ax.lines[0].get_xydata(), ax.lines[1].get_xydata()) def test_empty_data(self): ax = kdeplot(x=[]) assert not ax.lines def test_singular_data(self): with pytest.warns(UserWarning): ax = kdeplot(x=np.ones(10)) assert not ax.lines with pytest.warns(UserWarning): ax = kdeplot(x=[5]) assert not ax.lines with pytest.warns(None) as record: ax = kdeplot(x=[5], warn_singular=False) assert not record def test_variable_assignment(self, long_df): f, ax = plt.subplots() kdeplot(data=long_df, x="x", fill=True) kdeplot(data=long_df, y="x", fill=True) v0 = ax.collections[0].get_paths()[0].vertices v1 = ax.collections[1].get_paths()[0].vertices[:, [1, 0]] assert_array_equal(v0, v1) def test_vertical_deprecation(self, long_df): f, ax = plt.subplots() kdeplot(data=long_df, y="x") with pytest.warns(FutureWarning): kdeplot(data=long_df, x="x", vertical=True) assert_array_equal(ax.lines[0].get_xydata(), ax.lines[1].get_xydata()) def 
test_bw_deprecation(self, long_df): f, ax = plt.subplots() kdeplot(data=long_df, x="x", bw_method="silverman") with pytest.warns(FutureWarning): kdeplot(data=long_df, x="x", bw="silverman") assert_array_equal(ax.lines[0].get_xydata(), ax.lines[1].get_xydata()) def test_kernel_deprecation(self, long_df): f, ax = plt.subplots() kdeplot(data=long_df, x="x") with pytest.warns(UserWarning): kdeplot(data=long_df, x="x", kernel="epi") assert_array_equal(ax.lines[0].get_xydata(), ax.lines[1].get_xydata()) def test_shade_deprecation(self, long_df): f, ax = plt.subplots() kdeplot(data=long_df, x="x", shade=True) kdeplot(data=long_df, x="x", fill=True) fill1, fill2 = ax.collections assert_array_equal( fill1.get_paths()[0].vertices, fill2.get_paths()[0].vertices ) @pytest.mark.parametrize("multiple", ["layer", "stack", "fill"]) def test_hue_colors(self, long_df, multiple): ax = kdeplot( data=long_df, x="x", hue="a", multiple=multiple, fill=True, legend=False ) # Note that hue order is reversed in the plot lines = ax.lines[::-1] fills = ax.collections[::-1] palette = color_palette() for line, fill, color in zip(lines, fills, palette): assert line.get_color() == color assert tuple(fill.get_facecolor().squeeze()) == color + (.25,) def test_hue_stacking(self, long_df): f, (ax1, ax2) = plt.subplots(ncols=2) kdeplot( data=long_df, x="x", hue="a", multiple="layer", common_grid=True, legend=False, ax=ax1, ) kdeplot( data=long_df, x="x", hue="a", multiple="stack", fill=False, legend=False, ax=ax2, ) layered_densities = np.stack([ l.get_ydata() for l in ax1.lines ]) stacked_densities = np.stack([ l.get_ydata() for l in ax2.lines ]) assert_array_equal(layered_densities.cumsum(axis=0), stacked_densities) def test_hue_filling(self, long_df): f, (ax1, ax2) = plt.subplots(ncols=2) kdeplot( data=long_df, x="x", hue="a", multiple="layer", common_grid=True, legend=False, ax=ax1, ) kdeplot( data=long_df, x="x", hue="a", multiple="fill", fill=False, legend=False, ax=ax2, ) layered = 
np.stack([l.get_ydata() for l in ax1.lines]) filled = np.stack([l.get_ydata() for l in ax2.lines]) assert_array_almost_equal( (layered / layered.sum(axis=0)).cumsum(axis=0), filled, ) @pytest.mark.parametrize("multiple", ["stack", "fill"]) def test_fill_default(self, long_df, multiple): ax = kdeplot( data=long_df, x="x", hue="a", multiple=multiple, fill=None ) assert len(ax.collections) > 0 @pytest.mark.parametrize("multiple", ["layer", "stack", "fill"]) def test_fill_nondefault(self, long_df, multiple): f, (ax1, ax2) = plt.subplots(ncols=2) kws = dict(data=long_df, x="x", hue="a") kdeplot(**kws, multiple=multiple, fill=False, ax=ax1) kdeplot(**kws, multiple=multiple, fill=True, ax=ax2) assert len(ax1.collections) == 0 assert len(ax2.collections) > 0 def test_color_cycle_interaction(self, flat_series): color = (.2, 1, .6) C0, C1 = to_rgb("C0"), to_rgb("C1") f, ax = plt.subplots() kdeplot(flat_series) kdeplot(flat_series) assert to_rgb(ax.lines[0].get_color()) == C0 assert to_rgb(ax.lines[1].get_color()) == C1 plt.close(f) f, ax = plt.subplots() kdeplot(flat_series, color=color) kdeplot(flat_series) assert to_rgb(ax.lines[0].get_color()) == color assert to_rgb(ax.lines[1].get_color()) == C0 plt.close(f) f, ax = plt.subplots() kdeplot(flat_series, fill=True) kdeplot(flat_series, fill=True) assert ( to_rgba(ax.collections[0].get_facecolor().squeeze()) == to_rgba(C0, .25) ) assert ( to_rgba(ax.collections[1].get_facecolor().squeeze()) == to_rgba(C1, .25) ) plt.close(f) @pytest.mark.parametrize("fill", [True, False]) def test_color(self, long_df, fill): color = (.2, 1, .6) alpha = .5 f, ax = plt.subplots() kdeplot(long_df["x"], fill=fill, color=color) if fill: artist_color = ax.collections[-1].get_facecolor().squeeze() else: artist_color = ax.lines[-1].get_color() default_alpha = .25 if fill else 1 assert to_rgba(artist_color) == to_rgba(color, default_alpha) kdeplot(long_df["x"], fill=fill, color=color, alpha=alpha) if fill: artist_color = 
ax.collections[-1].get_facecolor().squeeze() else: artist_color = ax.lines[-1].get_color() assert to_rgba(artist_color) == to_rgba(color, alpha) @pytest.mark.skipif( LooseVersion(np.__version__) < "1.17", reason="Histogram over datetime64 requires numpy >= 1.17", ) def test_datetime_scale(self, long_df): f, (ax1, ax2) = plt.subplots(2) kdeplot(x=long_df["t"], fill=True, ax=ax1) kdeplot(x=long_df["t"], fill=False, ax=ax2) assert ax1.get_xlim() == ax2.get_xlim() def test_multiple_argument_check(self, long_df): with pytest.raises(ValueError, match="`multiple` must be"): kdeplot(data=long_df, x="x", hue="a", multiple="bad_input") def test_cut(self, rng): x = rng.normal(0, 3, 1000) f, ax = plt.subplots() kdeplot(x=x, cut=0, legend=False) xdata_0 = ax.lines[0].get_xdata() assert xdata_0.min() == x.min() assert xdata_0.max() == x.max() kdeplot(x=x, cut=2, legend=False) xdata_2 = ax.lines[1].get_xdata() assert xdata_2.min() < xdata_0.min() assert xdata_2.max() > xdata_0.max() assert len(xdata_0) == len(xdata_2) def test_clip(self, rng): x = rng.normal(0, 3, 1000) clip = -1, 1 ax = kdeplot(x=x, clip=clip) xdata = ax.lines[0].get_xdata() assert xdata.min() >= clip[0] assert xdata.max() <= clip[1] def test_line_is_density(self, long_df): ax = kdeplot(data=long_df, x="x", cut=5) x, y = ax.lines[0].get_xydata().T assert integrate.trapz(y, x) == pytest.approx(1) def test_cumulative(self, long_df): ax = kdeplot(data=long_df, x="x", cut=5, cumulative=True) y = ax.lines[0].get_ydata() assert y[0] == pytest.approx(0) assert y[-1] == pytest.approx(1) def test_common_norm(self, long_df): f, (ax1, ax2) = plt.subplots(ncols=2) kdeplot( data=long_df, x="x", hue="c", common_norm=True, cut=10, ax=ax1 ) kdeplot( data=long_df, x="x", hue="c", common_norm=False, cut=10, ax=ax2 ) total_area = 0 for line in ax1.lines: xdata, ydata = line.get_xydata().T total_area += integrate.trapz(ydata, xdata) assert total_area == pytest.approx(1) for line in ax2.lines: xdata, ydata = line.get_xydata().T 
assert integrate.trapz(ydata, xdata) == pytest.approx(1) def test_common_grid(self, long_df): f, (ax1, ax2) = plt.subplots(ncols=2) order = "a", "b", "c" kdeplot( data=long_df, x="x", hue="a", hue_order=order, common_grid=False, cut=0, ax=ax1, ) kdeplot( data=long_df, x="x", hue="a", hue_order=order, common_grid=True, cut=0, ax=ax2, ) for line, level in zip(ax1.lines[::-1], order): xdata = line.get_xdata() assert xdata.min() == long_df.loc[long_df["a"] == level, "x"].min() assert xdata.max() == long_df.loc[long_df["a"] == level, "x"].max() for line in ax2.lines: xdata = line.get_xdata().T assert xdata.min() == long_df["x"].min() assert xdata.max() == long_df["x"].max() def test_bw_method(self, long_df): f, ax = plt.subplots() kdeplot(data=long_df, x="x", bw_method=0.2, legend=False) kdeplot(data=long_df, x="x", bw_method=1.0, legend=False) kdeplot(data=long_df, x="x", bw_method=3.0, legend=False) l1, l2, l3 = ax.lines assert ( np.abs(np.diff(l1.get_ydata())).mean() > np.abs(np.diff(l2.get_ydata())).mean() ) assert ( np.abs(np.diff(l2.get_ydata())).mean() > np.abs(np.diff(l3.get_ydata())).mean() ) def test_bw_adjust(self, long_df): f, ax = plt.subplots() kdeplot(data=long_df, x="x", bw_adjust=0.2, legend=False) kdeplot(data=long_df, x="x", bw_adjust=1.0, legend=False) kdeplot(data=long_df, x="x", bw_adjust=3.0, legend=False) l1, l2, l3 = ax.lines assert ( np.abs(np.diff(l1.get_ydata())).mean() > np.abs(np.diff(l2.get_ydata())).mean() ) assert ( np.abs(np.diff(l2.get_ydata())).mean() > np.abs(np.diff(l3.get_ydata())).mean() ) def test_log_scale_implicit(self, rng): x = rng.lognormal(0, 1, 100) f, (ax1, ax2) = plt.subplots(ncols=2) ax1.set_xscale("log") kdeplot(x=x, ax=ax1) kdeplot(x=x, ax=ax1) xdata_log = ax1.lines[0].get_xdata() assert (xdata_log > 0).all() assert (np.diff(xdata_log, 2) > 0).all() assert np.allclose(np.diff(np.log(xdata_log), 2), 0) f, ax = plt.subplots() ax.set_yscale("log") kdeplot(y=x, ax=ax) assert_array_equal(ax.lines[0].get_xdata(), 
                           ax1.lines[0].get_ydata())

    def test_log_scale_explicit(self, rng):

        x = rng.lognormal(0, 1, 100)

        f, (ax1, ax2, ax3) = plt.subplots(ncols=3)

        ax1.set_xscale("log")
        kdeplot(x=x, ax=ax1)
        kdeplot(x=x, log_scale=True, ax=ax2)
        kdeplot(x=x, log_scale=10, ax=ax3)

        for ax in f.axes:
            assert ax.get_xscale() == "log"

        supports = [ax.lines[0].get_xdata() for ax in f.axes]
        for a, b in itertools.product(supports, supports):
            assert_array_equal(a, b)

        densities = [ax.lines[0].get_ydata() for ax in f.axes]
        for a, b in itertools.product(densities, densities):
            assert_array_equal(a, b)

        f, ax = plt.subplots()
        kdeplot(y=x, log_scale=True, ax=ax)
        assert ax.get_yscale() == "log"

    def test_log_scale_with_hue(self, rng):

        data = rng.lognormal(0, 1, 50), rng.lognormal(0, 2, 100)
        ax = kdeplot(data=data, log_scale=True, common_grid=True)
        assert_array_equal(ax.lines[0].get_xdata(), ax.lines[1].get_xdata())

    def test_log_scale_normalization(self, rng):

        x = rng.lognormal(0, 1, 100)
        ax = kdeplot(x=x, log_scale=True, cut=10)

        xdata, ydata = ax.lines[0].get_xydata().T
        integral = integrate.trapz(ydata, np.log10(xdata))
        assert integral == pytest.approx(1)

    @pytest.mark.skipif(
        LooseVersion(scipy.__version__) < "1.2.0",
        reason="Weights require scipy >= 1.2.0"
    )
    def test_weights(self):

        x = [1, 2]
        weights = [2, 1]

        ax = kdeplot(x=x, weights=weights)

        xdata, ydata = ax.lines[0].get_xydata().T

        y1 = ydata[np.argwhere(np.abs(xdata - 1).min())]
        y2 = ydata[np.argwhere(np.abs(xdata - 2).min())]

        assert y1 == pytest.approx(2 * y2)

    def test_sticky_edges(self, long_df):

        f, (ax1, ax2) = plt.subplots(ncols=2)

        kdeplot(data=long_df, x="x", fill=True, ax=ax1)
        assert ax1.collections[0].sticky_edges.y[:] == [0, np.inf]

        kdeplot(
            data=long_df, x="x", hue="a", multiple="fill", fill=True, ax=ax2
        )
        assert ax2.collections[0].sticky_edges.y[:] == [0, 1]

    def test_line_kws(self, flat_array):

        lw = 3
        color = (.2, .5, .8)
        ax = kdeplot(x=flat_array, linewidth=lw, color=color)
        line, = ax.lines
        assert line.get_linewidth() == lw
        assert \
            to_rgb(line.get_color()) == color

    def test_input_checking(self, long_df):

        err = "The x variable is categorical,"
        with pytest.raises(TypeError, match=err):
            kdeplot(data=long_df, x="a")

    def test_axis_labels(self, long_df):

        f, (ax1, ax2) = plt.subplots(ncols=2)

        kdeplot(data=long_df, x="x", ax=ax1)
        assert ax1.get_xlabel() == "x"
        assert ax1.get_ylabel() == "Density"

        kdeplot(data=long_df, y="y", ax=ax2)
        assert ax2.get_xlabel() == "Density"
        assert ax2.get_ylabel() == "y"

    def test_legend(self, long_df):

        ax = kdeplot(data=long_df, x="x", hue="a")

        assert ax.legend_.get_title().get_text() == "a"

        legend_labels = ax.legend_.get_texts()
        order = categorical_order(long_df["a"])
        for label, level in zip(legend_labels, order):
            assert label.get_text() == level

        legend_artists = ax.legend_.findobj(mpl.lines.Line2D)[::2]
        palette = color_palette()
        for artist, color in zip(legend_artists, palette):
            assert to_rgb(artist.get_color()) == to_rgb(color)

        ax.clear()
        kdeplot(data=long_df, x="x", hue="a", legend=False)
        assert ax.legend_ is None


class TestKDEPlotBivariate:

    def test_long_vectors(self, long_df):

        ax1 = kdeplot(data=long_df, x="x", y="y")

        x = long_df["x"]
        x_values = [x, np.asarray(x), x.tolist()]
        y = long_df["y"]
        y_values = [y, np.asarray(y), y.tolist()]

        for x, y in zip(x_values, y_values):
            f, ax2 = plt.subplots()
            kdeplot(x=x, y=y, ax=ax2)
            for c1, c2 in zip(ax1.collections, ax2.collections):
                assert_array_equal(c1.get_offsets(), c2.get_offsets())

    def test_singular_data(self):

        with pytest.warns(UserWarning):
            ax = dist.kdeplot(x=np.ones(10), y=np.arange(10))
        assert not ax.lines

        with pytest.warns(UserWarning):
            ax = dist.kdeplot(x=[5], y=[6])
        assert not ax.lines

        with pytest.warns(None) as record:
            ax = kdeplot(x=[5], y=[7], warn_singular=False)
        assert not record

    def test_fill_artists(self, long_df):

        for fill in [True, False]:
            f, ax = plt.subplots()
            kdeplot(data=long_df, x="x", y="y", hue="c", fill=fill)
            for c in ax.collections:
                if fill:
                    assert isinstance(c, mpl.collections.PathCollection)
                else:
                    assert isinstance(c, mpl.collections.LineCollection)

    def test_common_norm(self, rng):

        hue = np.repeat(["a", "a", "a", "b"], 40)
        x, y = rng.multivariate_normal([0, 0], [(.2, .5), (.5, 2)], len(hue)).T
        x[hue == "a"] -= 2
        x[hue == "b"] += 2

        f, (ax1, ax2) = plt.subplots(ncols=2)
        kdeplot(x=x, y=y, hue=hue, common_norm=True, ax=ax1)
        kdeplot(x=x, y=y, hue=hue, common_norm=False, ax=ax2)

        n_seg_1 = sum([len(c.get_segments()) > 0 for c in ax1.collections])
        n_seg_2 = sum([len(c.get_segments()) > 0 for c in ax2.collections])
        assert n_seg_2 > n_seg_1

    def test_log_scale(self, rng):

        x = rng.lognormal(0, 1, 100)
        y = rng.uniform(0, 1, 100)

        levels = .2, .5, 1

        f, ax = plt.subplots()
        kdeplot(x=x, y=y, log_scale=True, levels=levels, ax=ax)
        assert ax.get_xscale() == "log"
        assert ax.get_yscale() == "log"

        f, (ax1, ax2) = plt.subplots(ncols=2)
        kdeplot(x=x, y=y, log_scale=(10, False), levels=levels, ax=ax1)
        assert ax1.get_xscale() == "log"
        assert ax1.get_yscale() == "linear"

        p = _DistributionPlotter()
        kde = KDE()
        density, (xx, yy) = kde(np.log10(x), y)
        levels = p._quantile_to_level(density, levels)
        ax2.contour(10 ** xx, yy, density, levels=levels)

        for c1, c2 in zip(ax1.collections, ax2.collections):
            assert_array_equal(c1.get_segments(), c2.get_segments())

    def test_bandwidth(self, rng):

        n = 100
        x, y = rng.multivariate_normal([0, 0], [(.2, .5), (.5, 2)], n).T

        f, (ax1, ax2) = plt.subplots(ncols=2)

        kdeplot(x=x, y=y, ax=ax1)
        kdeplot(x=x, y=y, bw_adjust=2, ax=ax2)

        for c1, c2 in zip(ax1.collections, ax2.collections):
            seg1, seg2 = c1.get_segments(), c2.get_segments()
            if seg1 + seg2:
                x1 = seg1[0][:, 0]
                x2 = seg2[0][:, 0]
                assert np.abs(x2).max() > np.abs(x1).max()

    @pytest.mark.skipif(
        LooseVersion(scipy.__version__) < "1.2.0",
        reason="Weights require scipy >= 1.2.0"
    )
    def test_weights(self, rng):

        import warnings
        warnings.simplefilter("error", np.VisibleDeprecationWarning)

        n = 100
        x, y = rng.multivariate_normal([1, 3], [(.2, .5), (.5, 2)], n).T
        hue = np.repeat([0, 1], n // 2)
        weights = \
            rng.uniform(0, 1, n)

        f, (ax1, ax2) = plt.subplots(ncols=2)
        kdeplot(x=x, y=y, hue=hue, ax=ax1)
        kdeplot(x=x, y=y, hue=hue, weights=weights, ax=ax2)

        for c1, c2 in zip(ax1.collections, ax2.collections):
            if c1.get_segments() and c2.get_segments():
                seg1 = np.concatenate(c1.get_segments(), axis=0)
                seg2 = np.concatenate(c2.get_segments(), axis=0)
                assert not np.array_equal(seg1, seg2)

    def test_hue_ignores_cmap(self, long_df):

        with pytest.warns(UserWarning, match="cmap parameter ignored"):
            ax = kdeplot(data=long_df, x="x", y="y", hue="c", cmap="viridis")

        color = tuple(ax.collections[0].get_color().squeeze())
        assert color == mpl.colors.colorConverter.to_rgba("C0")

    def test_contour_line_colors(self, long_df):

        color = (.2, .9, .8, 1)
        ax = kdeplot(data=long_df, x="x", y="y", color=color)

        for c in ax.collections:
            assert tuple(c.get_color().squeeze()) == color

    def test_contour_fill_colors(self, long_df):

        n = 6
        color = (.2, .9, .8, 1)
        ax = kdeplot(
            data=long_df, x="x", y="y", fill=True, color=color, levels=n,
        )

        cmap = light_palette(color, reverse=True, as_cmap=True)
        lut = cmap(np.linspace(0, 1, 256))
        for c in ax.collections:
            color = c.get_facecolor().squeeze()
            assert color in lut

    def test_colorbar(self, long_df):

        ax = kdeplot(data=long_df, x="x", y="y", fill=True, cbar=True)
        assert len(ax.figure.axes) == 2

    def test_levels_and_thresh(self, long_df):

        f, (ax1, ax2) = plt.subplots(ncols=2)

        n = 8
        thresh = .1
        plot_kws = dict(data=long_df, x="x", y="y")
        kdeplot(**plot_kws, levels=n, thresh=thresh, ax=ax1)
        kdeplot(**plot_kws, levels=np.linspace(thresh, 1, n), ax=ax2)

        for c1, c2 in zip(ax1.collections, ax2.collections):
            assert_array_equal(c1.get_segments(), c2.get_segments())

        with pytest.raises(ValueError):
            kdeplot(**plot_kws, levels=[0, 1, 2])

        ax1.clear()
        ax2.clear()

        kdeplot(**plot_kws, levels=n, thresh=None, ax=ax1)
        kdeplot(**plot_kws, levels=n, thresh=0, ax=ax2)

        for c1, c2 in zip(ax1.collections, ax2.collections):
            assert_array_equal(c1.get_segments(), c2.get_segments())
        for c1, c2 in \
                zip(ax1.collections, ax2.collections):
            assert_array_equal(c1.get_facecolors(), c2.get_facecolors())

    def test_quantile_to_level(self, rng):

        x = rng.uniform(0, 1, 100000)
        isoprop = np.linspace(.1, 1, 6)

        levels = _DistributionPlotter()._quantile_to_level(x, isoprop)
        for h, p in zip(levels, isoprop):
            assert (x[x <= h].sum() / x.sum()) == pytest.approx(p, abs=1e-4)

    def test_input_checking(self, long_df):

        with pytest.raises(TypeError, match="The x variable is categorical,"):
            kdeplot(data=long_df, x="a", y="y")


class TestHistPlotUnivariate:

    @pytest.mark.parametrize(
        "variable", ["x", "y"],
    )
    def test_long_vectors(self, long_df, variable):

        vector = long_df[variable]
        vectors = [
            variable, vector, np.asarray(vector), vector.tolist(),
        ]
        f, axs = plt.subplots(3)
        for vector, ax in zip(vectors, axs):
            histplot(data=long_df, ax=ax, **{variable: vector})

        bars = [ax.patches for ax in axs]
        for a_bars, b_bars in itertools.product(bars, bars):
            for a, b in zip(a_bars, b_bars):
                assert_array_equal(a.get_height(), b.get_height())
                assert_array_equal(a.get_xy(), b.get_xy())

    def test_wide_vs_long_data(self, wide_df):

        f, (ax1, ax2) = plt.subplots(2)

        histplot(data=wide_df, ax=ax1, common_bins=False)

        for col in wide_df.columns[::-1]:
            histplot(data=wide_df, x=col, ax=ax2)

        for a, b in zip(ax1.patches, ax2.patches):
            assert a.get_height() == b.get_height()
            assert a.get_xy() == b.get_xy()

    def test_flat_vector(self, long_df):

        f, (ax1, ax2) = plt.subplots(2)

        histplot(data=long_df["x"], ax=ax1)
        histplot(data=long_df, x="x", ax=ax2)

        for a, b in zip(ax1.patches, ax2.patches):
            assert a.get_height() == b.get_height()
            assert a.get_xy() == b.get_xy()

    def test_empty_data(self):

        ax = histplot(x=[])
        assert not ax.patches

    def test_variable_assignment(self, long_df):

        f, (ax1, ax2) = plt.subplots(2)

        histplot(data=long_df, x="x", ax=ax1)
        histplot(data=long_df, y="x", ax=ax2)

        for a, b in zip(ax1.patches, ax2.patches):
            assert a.get_height() == b.get_width()

    @pytest.mark.parametrize("element", ["bars", "step", "poly"])
    @pytest.mark.parametrize("multiple", ["layer", "dodge", "stack", "fill"])
    def test_hue_fill_colors(self, long_df, multiple, element):

        ax = histplot(
            data=long_df, x="x", hue="a",
            multiple=multiple, bins=1, fill=True,
            element=element, legend=False,
        )

        palette = color_palette()

        if multiple == "layer":
            if element == "bars":
                a = .5
            else:
                a = .25
        else:
            a = .75

        for bar, color in zip(ax.patches[::-1], palette):
            assert bar.get_facecolor() == to_rgba(color, a)

        for poly, color in zip(ax.collections[::-1], palette):
            assert tuple(poly.get_facecolor().squeeze()) == to_rgba(color, a)

    def test_hue_stack(self, long_df):

        f, (ax1, ax2) = plt.subplots(2)

        n = 10

        kws = dict(data=long_df, x="x", hue="a", bins=n, element="bars")

        histplot(**kws, multiple="layer", ax=ax1)
        histplot(**kws, multiple="stack", ax=ax2)

        layer_heights = np.reshape([b.get_height() for b in ax1.patches], (-1, n))
        stack_heights = np.reshape([b.get_height() for b in ax2.patches], (-1, n))
        assert_array_equal(layer_heights, stack_heights)

        stack_xys = np.reshape([b.get_xy() for b in ax2.patches], (-1, n, 2))
        assert_array_equal(
            stack_xys[..., 1] + stack_heights,
            stack_heights.cumsum(axis=0),
        )

    def test_hue_fill(self, long_df):

        f, (ax1, ax2) = plt.subplots(2)

        n = 10

        kws = dict(data=long_df, x="x", hue="a", bins=n, element="bars")

        histplot(**kws, multiple="layer", ax=ax1)
        histplot(**kws, multiple="fill", ax=ax2)

        layer_heights = np.reshape([b.get_height() for b in ax1.patches], (-1, n))
        stack_heights = np.reshape([b.get_height() for b in ax2.patches], (-1, n))
        assert_array_almost_equal(
            layer_heights / layer_heights.sum(axis=0), stack_heights
        )

        stack_xys = np.reshape([b.get_xy() for b in ax2.patches], (-1, n, 2))
        assert_array_almost_equal(
            (stack_xys[..., 1] + stack_heights) / stack_heights.sum(axis=0),
            stack_heights.cumsum(axis=0),
        )

    def test_hue_dodge(self, long_df):

        f, (ax1, ax2) = plt.subplots(2)

        bw = 2

        kws = dict(data=long_df, x="x", hue="c", binwidth=bw, element="bars")
        histplot(**kws, multiple="layer", ax=ax1)
        histplot(**kws, multiple="dodge", ax=ax2)

        layer_heights = [b.get_height() for b in ax1.patches]
        dodge_heights = [b.get_height() for b in ax2.patches]
        assert_array_equal(layer_heights, dodge_heights)

        layer_xs = np.reshape([b.get_x() for b in ax1.patches], (2, -1))
        dodge_xs = np.reshape([b.get_x() for b in ax2.patches], (2, -1))
        assert_array_almost_equal(layer_xs[1], dodge_xs[1])
        assert_array_almost_equal(layer_xs[0], dodge_xs[0] - bw / 2)

    def test_hue_as_numpy_dodged(self, long_df):
        # https://github.com/mwaskom/seaborn/issues/2452
        ax = histplot(
            long_df,
            x="y", hue=np.asarray(long_df["a"]),
            multiple="dodge", bins=1,
        )
        # Note hue order reversal
        assert ax.patches[1].get_x() < ax.patches[0].get_x()

    def test_multiple_input_check(self, flat_series):

        with pytest.raises(ValueError, match="`multiple` must be"):
            histplot(flat_series, multiple="invalid")

    def test_element_input_check(self, flat_series):

        with pytest.raises(ValueError, match="`element` must be"):
            histplot(flat_series, element="invalid")

    def test_count_stat(self, flat_series):

        ax = histplot(flat_series, stat="count")
        bar_heights = [b.get_height() for b in ax.patches]
        assert sum(bar_heights) == len(flat_series)

    def test_density_stat(self, flat_series):

        ax = histplot(flat_series, stat="density")
        bar_heights = [b.get_height() for b in ax.patches]
        bar_widths = [b.get_width() for b in ax.patches]
        assert np.multiply(bar_heights, bar_widths).sum() == pytest.approx(1)

    def test_density_stat_common_norm(self, long_df):

        ax = histplot(
            data=long_df, x="x", hue="a",
            stat="density", common_norm=True, element="bars",
        )
        bar_heights = [b.get_height() for b in ax.patches]
        bar_widths = [b.get_width() for b in ax.patches]
        assert np.multiply(bar_heights, bar_widths).sum() == pytest.approx(1)

    def test_density_stat_unique_norm(self, long_df):

        n = 10
        ax = histplot(
            data=long_df, x="x", hue="a",
            stat="density", bins=n, common_norm=False, element="bars",
        )

        bar_groups = ax.patches[:n], ax.patches[-n:]

        for bars in bar_groups:
            bar_heights = \
                [b.get_height() for b in bars]
            bar_widths = [b.get_width() for b in bars]
            bar_areas = np.multiply(bar_heights, bar_widths)
            assert bar_areas.sum() == pytest.approx(1)

    @pytest.fixture(params=["probability", "proportion"])
    def height_norm_arg(self, request):
        return request.param

    def test_probability_stat(self, flat_series, height_norm_arg):

        ax = histplot(flat_series, stat=height_norm_arg)
        bar_heights = [b.get_height() for b in ax.patches]
        assert sum(bar_heights) == pytest.approx(1)

    def test_probability_stat_common_norm(self, long_df, height_norm_arg):

        ax = histplot(
            data=long_df, x="x", hue="a",
            stat=height_norm_arg, common_norm=True, element="bars",
        )
        bar_heights = [b.get_height() for b in ax.patches]
        assert sum(bar_heights) == pytest.approx(1)

    def test_probability_stat_unique_norm(self, long_df, height_norm_arg):

        n = 10
        ax = histplot(
            data=long_df, x="x", hue="a",
            stat=height_norm_arg, bins=n, common_norm=False, element="bars",
        )

        bar_groups = ax.patches[:n], ax.patches[-n:]

        for bars in bar_groups:
            bar_heights = [b.get_height() for b in bars]
            assert sum(bar_heights) == pytest.approx(1)

    def test_percent_stat(self, flat_series):

        ax = histplot(flat_series, stat="percent")
        bar_heights = [b.get_height() for b in ax.patches]
        assert sum(bar_heights) == 100

    def test_common_bins(self, long_df):

        n = 10
        ax = histplot(
            long_df, x="x", hue="a", common_bins=True, bins=n, element="bars",
        )

        bar_groups = ax.patches[:n], ax.patches[-n:]
        assert_array_equal(
            [b.get_xy() for b in bar_groups[0]],
            [b.get_xy() for b in bar_groups[1]]
        )

    def test_unique_bins(self, wide_df):

        ax = histplot(wide_df, common_bins=False, bins=10, element="bars")

        bar_groups = np.split(np.array(ax.patches), len(wide_df.columns))

        for i, col in enumerate(wide_df.columns[::-1]):
            bars = bar_groups[i]
            start = bars[0].get_x()
            stop = bars[-1].get_x() + bars[-1].get_width()
            assert start == wide_df[col].min()
            assert stop == wide_df[col].max()

    def test_weights_with_missing(self, missing_df):

        ax = histplot(missing_df, x="x",
                      weights="s", bins=5)

        bar_heights = [bar.get_height() for bar in ax.patches]
        total_weight = missing_df[["x", "s"]].dropna()["s"].sum()
        assert sum(bar_heights) == pytest.approx(total_weight)

    def test_discrete(self, long_df):

        ax = histplot(long_df, x="s", discrete=True)

        data_min = long_df["s"].min()
        data_max = long_df["s"].max()
        assert len(ax.patches) == (data_max - data_min + 1)

        for i, bar in enumerate(ax.patches):
            assert bar.get_width() == 1
            assert bar.get_x() == (data_min + i - .5)

    def test_discrete_categorical_default(self, long_df):

        ax = histplot(long_df, x="a")
        for i, bar in enumerate(ax.patches):
            assert bar.get_width() == 1

    def test_categorical_yaxis_inversion(self, long_df):

        ax = histplot(long_df, y="a")
        ymax, ymin = ax.get_ylim()
        assert ymax > ymin

    def test_discrete_requires_bars(self, long_df):

        with pytest.raises(ValueError, match="`element` must be 'bars'"):
            histplot(long_df, x="s", discrete=True, element="poly")

    @pytest.mark.skipif(
        LooseVersion(np.__version__) < "1.17",
        reason="Histogram over datetime64 requires numpy >= 1.17",
    )
    def test_datetime_scale(self, long_df):

        f, (ax1, ax2) = plt.subplots(2)
        histplot(x=long_df["t"], fill=True, ax=ax1)
        histplot(x=long_df["t"], fill=False, ax=ax2)
        assert ax1.get_xlim() == ax2.get_xlim()

    @pytest.mark.parametrize("stat", ["count", "density", "probability"])
    def test_kde(self, flat_series, stat):

        ax = histplot(
            flat_series, kde=True, stat=stat, kde_kws={"cut": 10}
        )

        bar_widths = [b.get_width() for b in ax.patches]
        bar_heights = [b.get_height() for b in ax.patches]
        hist_area = np.multiply(bar_widths, bar_heights).sum()

        density, = ax.lines
        kde_area = integrate.trapz(density.get_ydata(), density.get_xdata())

        assert kde_area == pytest.approx(hist_area)

    @pytest.mark.parametrize("multiple", ["layer", "dodge"])
    @pytest.mark.parametrize("stat", ["count", "density", "probability"])
    def test_kde_with_hue(self, long_df, stat, multiple):

        n = 10
        ax = histplot(
            long_df, x="x", hue="c", multiple=multiple,
            kde=True, stat=stat,
            element="bars", kde_kws={"cut": 10}, bins=n,
        )

        bar_groups = ax.patches[:n], ax.patches[-n:]

        for i, bars in enumerate(bar_groups):
            bar_widths = [b.get_width() for b in bars]
            bar_heights = [b.get_height() for b in bars]
            hist_area = np.multiply(bar_widths, bar_heights).sum()

            x, y = ax.lines[i].get_xydata().T
            kde_area = integrate.trapz(y, x)

            if multiple == "layer":
                assert kde_area == pytest.approx(hist_area)
            elif multiple == "dodge":
                assert kde_area == pytest.approx(hist_area * 2)

    def test_kde_default_cut(self, flat_series):

        ax = histplot(flat_series, kde=True)
        support = ax.lines[0].get_xdata()
        assert support.min() == flat_series.min()
        assert support.max() == flat_series.max()

    def test_kde_hue(self, long_df):

        n = 10
        ax = histplot(data=long_df, x="x", hue="a", kde=True, bins=n)

        for bar, line in zip(ax.patches[::n], ax.lines):
            assert to_rgba(bar.get_facecolor(), 1) == line.get_color()

    def test_kde_yaxis(self, flat_series):

        f, ax = plt.subplots()
        histplot(x=flat_series, kde=True)
        histplot(y=flat_series, kde=True)

        x, y = ax.lines
        assert_array_equal(x.get_xdata(), y.get_ydata())
        assert_array_equal(x.get_ydata(), y.get_xdata())

    def test_kde_line_kws(self, flat_series):

        lw = 5
        ax = histplot(flat_series, kde=True, line_kws=dict(lw=lw))
        assert ax.lines[0].get_linewidth() == lw

    def test_kde_singular_data(self):

        with pytest.warns(None) as record:
            ax = histplot(x=np.ones(10), kde=True)
        assert not record
        assert not ax.lines

        with pytest.warns(None) as record:
            ax = histplot(x=[5], kde=True)
        assert not record
        assert not ax.lines

    def test_element_default(self, long_df):

        f, (ax1, ax2) = plt.subplots(2)
        histplot(long_df, x="x", ax=ax1)
        histplot(long_df, x="x", ax=ax2, element="bars")
        assert len(ax1.patches) == len(ax2.patches)

        f, (ax1, ax2) = plt.subplots(2)
        histplot(long_df, x="x", hue="a", ax=ax1)
        histplot(long_df, x="x", hue="a", ax=ax2, element="bars")
        assert len(ax1.patches) == len(ax2.patches)

    def test_bars_no_fill(self, flat_series):

        alpha = .5
        ax = histplot(flat_series,
                      element="bars", fill=False, alpha=alpha)
        for bar in ax.patches:
            assert bar.get_facecolor() == (0, 0, 0, 0)
            assert bar.get_edgecolor()[-1] == alpha

    def test_step_fill(self, flat_series):

        f, (ax1, ax2) = plt.subplots(2)

        n = 10
        histplot(flat_series, element="bars", fill=True, bins=n, ax=ax1)
        histplot(flat_series, element="step", fill=True, bins=n, ax=ax2)

        bar_heights = [b.get_height() for b in ax1.patches]
        bar_widths = [b.get_width() for b in ax1.patches]
        bar_edges = [b.get_x() for b in ax1.patches]

        fill = ax2.collections[0]
        x, y = fill.get_paths()[0].vertices[::-1].T

        assert_array_equal(x[1:2 * n:2], bar_edges)
        assert_array_equal(y[1:2 * n:2], bar_heights)

        assert x[n * 2] == bar_edges[-1] + bar_widths[-1]
        assert y[n * 2] == bar_heights[-1]

    def test_poly_fill(self, flat_series):

        f, (ax1, ax2) = plt.subplots(2)

        n = 10
        histplot(flat_series, element="bars", fill=True, bins=n, ax=ax1)
        histplot(flat_series, element="poly", fill=True, bins=n, ax=ax2)

        bar_heights = np.array([b.get_height() for b in ax1.patches])
        bar_widths = np.array([b.get_width() for b in ax1.patches])
        bar_edges = np.array([b.get_x() for b in ax1.patches])

        fill = ax2.collections[0]
        x, y = fill.get_paths()[0].vertices[::-1].T

        assert_array_equal(x[1:n + 1], bar_edges + bar_widths / 2)
        assert_array_equal(y[1:n + 1], bar_heights)

    def test_poly_no_fill(self, flat_series):

        f, (ax1, ax2) = plt.subplots(2)

        n = 10
        histplot(flat_series, element="bars", fill=False, bins=n, ax=ax1)
        histplot(flat_series, element="poly", fill=False, bins=n, ax=ax2)

        bar_heights = np.array([b.get_height() for b in ax1.patches])
        bar_widths = np.array([b.get_width() for b in ax1.patches])
        bar_edges = np.array([b.get_x() for b in ax1.patches])

        x, y = ax2.lines[0].get_xydata().T

        assert_array_equal(x, bar_edges + bar_widths / 2)
        assert_array_equal(y, bar_heights)

    def test_step_no_fill(self, flat_series):

        f, (ax1, ax2) = plt.subplots(2)

        histplot(flat_series, element="bars", fill=False, ax=ax1)
        histplot(flat_series, element="step", fill=False,
                 ax=ax2)

        bar_heights = [b.get_height() for b in ax1.patches]
        bar_widths = [b.get_width() for b in ax1.patches]
        bar_edges = [b.get_x() for b in ax1.patches]

        x, y = ax2.lines[0].get_xydata().T

        assert_array_equal(x[:-1], bar_edges)
        assert_array_equal(y[:-1], bar_heights)
        assert x[-1] == bar_edges[-1] + bar_widths[-1]
        assert y[-1] == y[-2]

    def test_step_fill_xy(self, flat_series):

        f, ax = plt.subplots()

        histplot(x=flat_series, element="step", fill=True)
        histplot(y=flat_series, element="step", fill=True)

        xverts = ax.collections[0].get_paths()[0].vertices
        yverts = ax.collections[1].get_paths()[0].vertices

        assert_array_equal(xverts, yverts[:, ::-1])

    def test_step_no_fill_xy(self, flat_series):

        f, ax = plt.subplots()

        histplot(x=flat_series, element="step", fill=False)
        histplot(y=flat_series, element="step", fill=False)

        xline, yline = ax.lines

        assert_array_equal(xline.get_xdata(), yline.get_ydata())
        assert_array_equal(xline.get_ydata(), yline.get_xdata())

    def test_weighted_histogram(self):

        ax = histplot(x=[0, 1, 2], weights=[1, 2, 3], discrete=True)

        bar_heights = [b.get_height() for b in ax.patches]
        assert bar_heights == [1, 2, 3]

    def test_weights_with_auto_bins(self, long_df):

        with pytest.warns(UserWarning):
            ax = histplot(long_df, x="x", weights="f")
        assert len(ax.patches) == 10

    def test_shrink(self, long_df):

        f, (ax1, ax2) = plt.subplots(2)

        bw = 2
        shrink = .4

        histplot(long_df, x="x", binwidth=bw, ax=ax1)
        histplot(long_df, x="x", binwidth=bw, shrink=shrink, ax=ax2)

        for p1, p2 in zip(ax1.patches, ax2.patches):

            w1, w2 = p1.get_width(), p2.get_width()
            assert w2 == pytest.approx(shrink * w1)

            x1, x2 = p1.get_x(), p2.get_x()
            assert (x2 + w2 / 2) == pytest.approx(x1 + w1 / 2)

    def test_log_scale_explicit(self, rng):

        x = rng.lognormal(0, 2, 1000)
        ax = histplot(x, log_scale=True, binwidth=1)

        bar_widths = [b.get_width() for b in ax.patches]
        steps = np.divide(bar_widths[1:], bar_widths[:-1])
        assert np.allclose(steps, 10)

    def test_log_scale_implicit(self, rng):

        x = rng.lognormal(0, 2,
                          1000)

        f, ax = plt.subplots()
        ax.set_xscale("log")
        histplot(x, binwidth=1, ax=ax)

        bar_widths = [b.get_width() for b in ax.patches]
        steps = np.divide(bar_widths[1:], bar_widths[:-1])
        assert np.allclose(steps, 10)

    @pytest.mark.parametrize(
        "fill", [True, False],
    )
    def test_auto_linewidth(self, flat_series, fill):

        get_lw = lambda ax: ax.patches[0].get_linewidth()  # noqa: E731

        kws = dict(element="bars", fill=fill)

        f, (ax1, ax2) = plt.subplots(2)
        histplot(flat_series, **kws, bins=10, ax=ax1)
        histplot(flat_series, **kws, bins=100, ax=ax2)
        assert get_lw(ax1) > get_lw(ax2)

        f, ax1 = plt.subplots(figsize=(10, 5))
        f, ax2 = plt.subplots(figsize=(2, 5))
        histplot(flat_series, **kws, bins=30, ax=ax1)
        histplot(flat_series, **kws, bins=30, ax=ax2)
        assert get_lw(ax1) > get_lw(ax2)

        f, ax1 = plt.subplots(figsize=(4, 5))
        f, ax2 = plt.subplots(figsize=(4, 5))
        histplot(flat_series, **kws, bins=30, ax=ax1)
        histplot(10 ** flat_series, **kws, bins=30, log_scale=True, ax=ax2)
        assert get_lw(ax1) == pytest.approx(get_lw(ax2))

        f, ax1 = plt.subplots(figsize=(4, 5))
        f, ax2 = plt.subplots(figsize=(4, 5))
        histplot(y=[0, 1, 1], **kws, discrete=True, ax=ax1)
        histplot(y=["a", "b", "b"], **kws, ax=ax2)
        assert get_lw(ax1) == pytest.approx(get_lw(ax2))

    def test_bar_kwargs(self, flat_series):

        lw = 2
        ec = (1, .2, .9, .5)
        ax = histplot(flat_series, binwidth=1, ec=ec, lw=lw)
        for bar in ax.patches:
            assert bar.get_edgecolor() == ec
            assert bar.get_linewidth() == lw

    def test_step_fill_kwargs(self, flat_series):

        lw = 2
        ec = (1, .2, .9, .5)
        ax = histplot(flat_series, element="step", ec=ec, lw=lw)
        poly = ax.collections[0]
        assert tuple(poly.get_edgecolor().squeeze()) == ec
        assert poly.get_linewidth() == lw

    def test_step_line_kwargs(self, flat_series):

        lw = 2
        ls = "--"
        ax = histplot(flat_series, element="step", fill=False, lw=lw, ls=ls)
        line = ax.lines[0]
        assert line.get_linewidth() == lw
        assert line.get_linestyle() == ls


class TestHistPlotBivariate:

    def test_mesh(self, long_df):

        hist = Histogram()
        counts, (x_edges,
                 y_edges) = hist(long_df["x"], long_df["y"])

        ax = histplot(long_df, x="x", y="y")
        mesh = ax.collections[0]
        mesh_data = mesh.get_array()

        assert_array_equal(mesh_data.data, counts.T.flat)
        assert_array_equal(mesh_data.mask, counts.T.flat == 0)

        edges = itertools.product(y_edges[:-1], x_edges[:-1])
        for i, (y, x) in enumerate(edges):
            path = mesh.get_paths()[i]
            assert path.vertices[0, 0] == x
            assert path.vertices[0, 1] == y

    def test_mesh_with_hue(self, long_df):

        ax = histplot(long_df, x="x", y="y", hue="c")

        hist = Histogram()
        hist.define_bin_params(long_df["x"], long_df["y"])

        for i, sub_df in long_df.groupby("c"):

            mesh = ax.collections[i]
            mesh_data = mesh.get_array()

            counts, (x_edges, y_edges) = hist(sub_df["x"], sub_df["y"])

            assert_array_equal(mesh_data.data, counts.T.flat)
            assert_array_equal(mesh_data.mask, counts.T.flat == 0)

            edges = itertools.product(y_edges[:-1], x_edges[:-1])
            for i, (y, x) in enumerate(edges):
                path = mesh.get_paths()[i]
                assert path.vertices[0, 0] == x
                assert path.vertices[0, 1] == y

    def test_mesh_with_hue_unique_bins(self, long_df):

        ax = histplot(long_df, x="x", y="y", hue="c", common_bins=False)

        for i, sub_df in long_df.groupby("c"):

            hist = Histogram()

            mesh = ax.collections[i]
            mesh_data = mesh.get_array()

            counts, (x_edges, y_edges) = hist(sub_df["x"], sub_df["y"])

            assert_array_equal(mesh_data.data, counts.T.flat)
            assert_array_equal(mesh_data.mask, counts.T.flat == 0)

            edges = itertools.product(y_edges[:-1], x_edges[:-1])
            for i, (y, x) in enumerate(edges):
                path = mesh.get_paths()[i]
                assert path.vertices[0, 0] == x
                assert path.vertices[0, 1] == y

    def test_mesh_with_col_unique_bins(self, long_df):

        g = displot(long_df, x="x", y="y", col="c", common_bins=False)

        for i, sub_df in long_df.groupby("c"):

            hist = Histogram()

            mesh = g.axes.flat[i].collections[0]
            mesh_data = mesh.get_array()

            counts, (x_edges, y_edges) = hist(sub_df["x"], sub_df["y"])

            assert_array_equal(mesh_data.data, counts.T.flat)
            assert_array_equal(mesh_data.mask, counts.T.flat == 0)

            edges = \
                itertools.product(y_edges[:-1], x_edges[:-1])
            for i, (y, x) in enumerate(edges):
                path = mesh.get_paths()[i]
                assert path.vertices[0, 0] == x
                assert path.vertices[0, 1] == y

    def test_mesh_log_scale(self, rng):

        x, y = rng.lognormal(0, 1, (2, 1000))
        hist = Histogram()
        counts, (x_edges, y_edges) = hist(np.log10(x), np.log10(y))

        ax = histplot(x=x, y=y, log_scale=True)

        mesh = ax.collections[0]
        mesh_data = mesh.get_array()

        assert_array_equal(mesh_data.data, counts.T.flat)

        edges = itertools.product(y_edges[:-1], x_edges[:-1])
        for i, (y_i, x_i) in enumerate(edges):
            path = mesh.get_paths()[i]
            assert path.vertices[0, 0] == 10 ** x_i
            assert path.vertices[0, 1] == 10 ** y_i

    def test_mesh_thresh(self, long_df):

        hist = Histogram()
        counts, (x_edges, y_edges) = hist(long_df["x"], long_df["y"])

        thresh = 5
        ax = histplot(long_df, x="x", y="y", thresh=thresh)
        mesh = ax.collections[0]
        mesh_data = mesh.get_array()

        assert_array_equal(mesh_data.data, counts.T.flat)
        assert_array_equal(mesh_data.mask, (counts <= thresh).T.flat)

    def test_mesh_sticky_edges(self, long_df):

        ax = histplot(long_df, x="x", y="y", thresh=None)
        mesh = ax.collections[0]
        assert mesh.sticky_edges.x == [long_df["x"].min(), long_df["x"].max()]
        assert mesh.sticky_edges.y == [long_df["y"].min(), long_df["y"].max()]

        ax.clear()
        ax = histplot(long_df, x="x", y="y")
        mesh = ax.collections[0]
        assert not mesh.sticky_edges.x
        assert not mesh.sticky_edges.y

    def test_mesh_common_norm(self, long_df):

        stat = "density"
        ax = histplot(
            long_df, x="x", y="y", hue="c", common_norm=True, stat=stat,
        )

        hist = Histogram(stat="density")
        hist.define_bin_params(long_df["x"], long_df["y"])

        for i, sub_df in long_df.groupby("c"):

            mesh = ax.collections[i]
            mesh_data = mesh.get_array()

            density, (x_edges, y_edges) = hist(sub_df["x"], sub_df["y"])

            scale = len(sub_df) / len(long_df)
            assert_array_equal(mesh_data.data, (density * scale).T.flat)

    def test_mesh_unique_norm(self, long_df):

        stat = "density"
        ax = histplot(
            long_df, x="x", y="y", hue="c",
            common_norm=False, stat=stat,
        )

        hist = Histogram()
        bin_kws = hist.define_bin_params(long_df["x"], long_df["y"])

        for i, sub_df in long_df.groupby("c"):

            sub_hist = Histogram(bins=bin_kws["bins"], stat=stat)

            mesh = ax.collections[i]
            mesh_data = mesh.get_array()

            density, (x_edges, y_edges) = sub_hist(sub_df["x"], sub_df["y"])
            assert_array_equal(mesh_data.data, density.T.flat)

    @pytest.mark.parametrize("stat", ["probability", "proportion", "percent"])
    def test_mesh_normalization(self, long_df, stat):

        ax = histplot(
            long_df, x="x", y="y", stat=stat,
        )

        mesh_data = ax.collections[0].get_array()
        expected_sum = {"percent": 100}.get(stat, 1)
        assert mesh_data.data.sum() == expected_sum

    def test_mesh_colors(self, long_df):

        color = "r"
        f, ax = plt.subplots()
        histplot(
            long_df, x="x", y="y", color=color,
        )
        mesh = ax.collections[0]
        assert_array_equal(
            mesh.get_cmap().colors,
            _DistributionPlotter()._cmap_from_color(color).colors,
        )

        f, ax = plt.subplots()
        histplot(
            long_df, x="x", y="y", hue="c",
        )
        colors = color_palette()
        for i, mesh in enumerate(ax.collections):
            assert_array_equal(
                mesh.get_cmap().colors,
                _DistributionPlotter()._cmap_from_color(colors[i]).colors,
            )

    def test_color_limits(self, long_df):

        f, (ax1, ax2, ax3) = plt.subplots(3)
        kws = dict(data=long_df, x="x", y="y")
        hist = Histogram()
        counts, _ = hist(long_df["x"], long_df["y"])

        histplot(**kws, ax=ax1)
        assert ax1.collections[0].get_clim() == (0, counts.max())

        vmax = 10
        histplot(**kws, vmax=vmax, ax=ax2)
        counts, _ = hist(long_df["x"], long_df["y"])
        assert ax2.collections[0].get_clim() == (0, vmax)

        pmax = .8
        pthresh = .1
        f = _DistributionPlotter()._quantile_to_level

        histplot(**kws, pmax=pmax, pthresh=pthresh, ax=ax3)
        counts, _ = hist(long_df["x"], long_df["y"])
        mesh = ax3.collections[0]
        assert mesh.get_clim() == (0, f(counts, pmax))
        assert_array_equal(
            mesh.get_array().mask,
            (counts <= f(counts, pthresh)).T.flat,
        )

    def test_hue_color_limits(self, long_df):

        _, (ax1, ax2, ax3, ax4) = plt.subplots(4)
        kws = dict(data=long_df,
                   x="x", y="y", hue="c", bins=4)
        hist = Histogram(bins=kws["bins"])
        hist.define_bin_params(long_df["x"], long_df["y"])
        full_counts, _ = hist(long_df["x"], long_df["y"])

        sub_counts = []
        for _, sub_df in long_df.groupby(kws["hue"]):
            c, _ = hist(sub_df["x"], sub_df["y"])
            sub_counts.append(c)

        pmax = .8
        pthresh = .05
        f = _DistributionPlotter()._quantile_to_level

        histplot(**kws, common_norm=True, ax=ax1)
        for i, mesh in enumerate(ax1.collections):
            assert mesh.get_clim() == (0, full_counts.max())

        histplot(**kws, common_norm=False, ax=ax2)
        for i, mesh in enumerate(ax2.collections):
            assert mesh.get_clim() == (0, sub_counts[i].max())

        histplot(**kws, common_norm=True, pmax=pmax, pthresh=pthresh, ax=ax3)
        for i, mesh in enumerate(ax3.collections):
            assert mesh.get_clim() == (0, f(full_counts, pmax))
            assert_array_equal(
                mesh.get_array().mask,
                (sub_counts[i] <= f(full_counts, pthresh)).T.flat,
            )

        histplot(**kws, common_norm=False, pmax=pmax, pthresh=pthresh, ax=ax4)
        for i, mesh in enumerate(ax4.collections):
            assert mesh.get_clim() == (0, f(sub_counts[i], pmax))
            assert_array_equal(
                mesh.get_array().mask,
                (sub_counts[i] <= f(sub_counts[i], pthresh)).T.flat,
            )

    def test_colorbar(self, long_df):

        f, ax = plt.subplots()
        histplot(long_df, x="x", y="y", cbar=True, ax=ax)
        assert len(ax.figure.axes) == 2

        f, (ax, cax) = plt.subplots(2)
        histplot(long_df, x="x", y="y", cbar=True, cbar_ax=cax, ax=ax)
        assert len(ax.figure.axes) == 2


class TestECDFPlotUnivariate:

    @pytest.mark.parametrize("variable", ["x", "y"])
    def test_long_vectors(self, long_df, variable):

        vector = long_df[variable]
        vectors = [
            variable, vector, np.asarray(vector), vector.tolist(),
        ]
        f, ax = plt.subplots()
        for vector in vectors:
            ecdfplot(data=long_df, ax=ax, **{variable: vector})

        xdata = [l.get_xdata() for l in ax.lines]
        for a, b in itertools.product(xdata, xdata):
            assert_array_equal(a, b)

        ydata = [l.get_ydata() for l in ax.lines]
        for a, b in itertools.product(ydata, ydata):
            assert_array_equal(a, b)

    def test_hue(self, long_df):

        ax = \
ecdfplot(long_df, x="x", hue="a") for line, color in zip(ax.lines[::-1], color_palette()): assert line.get_color() == color def test_line_kwargs(self, long_df): color = "r" ls = "--" lw = 3 ax = ecdfplot(long_df, x="x", color=color, ls=ls, lw=lw) for line in ax.lines: assert to_rgb(line.get_color()) == to_rgb(color) assert line.get_linestyle() == ls assert line.get_linewidth() == lw @pytest.mark.parametrize("data_var", ["x", "y"]) def test_drawstyle(self, flat_series, data_var): ax = ecdfplot(**{data_var: flat_series}) drawstyles = dict(x="steps-post", y="steps-pre") assert ax.lines[0].get_drawstyle() == drawstyles[data_var] @pytest.mark.parametrize( "data_var,stat_var", [["x", "y"], ["y", "x"]], ) def test_proportion_limits(self, flat_series, data_var, stat_var): ax = ecdfplot(**{data_var: flat_series}) data = getattr(ax.lines[0], f"get_{stat_var}data")() assert data[0] == 0 assert data[-1] == 1 sticky_edges = getattr(ax.lines[0].sticky_edges, stat_var) assert sticky_edges[:] == [0, 1] @pytest.mark.parametrize( "data_var,stat_var", [["x", "y"], ["y", "x"]], ) def test_proportion_limits_complementary(self, flat_series, data_var, stat_var): ax = ecdfplot(**{data_var: flat_series}, complementary=True) data = getattr(ax.lines[0], f"get_{stat_var}data")() assert data[0] == 1 assert data[-1] == 0 sticky_edges = getattr(ax.lines[0].sticky_edges, stat_var) assert sticky_edges[:] == [0, 1] @pytest.mark.parametrize( "data_var,stat_var", [["x", "y"], ["y", "x"]], ) def test_proportion_count(self, flat_series, data_var, stat_var): n = len(flat_series) ax = ecdfplot(**{data_var: flat_series}, stat="count") data = getattr(ax.lines[0], f"get_{stat_var}data")() assert data[0] == 0 assert data[-1] == n sticky_edges = getattr(ax.lines[0].sticky_edges, stat_var) assert sticky_edges[:] == [0, n] def test_weights(self): ax = ecdfplot(x=[1, 2, 3], weights=[1, 1, 2]) y = ax.lines[0].get_ydata() assert_array_equal(y, [0, .25, .5, 1]) def test_bivariate_error(self, long_df): with 
pytest.raises(NotImplementedError, match="Bivariate ECDF plots"): ecdfplot(data=long_df, x="x", y="y") def test_log_scale(self, long_df): ax1, ax2 = plt.figure().subplots(2) ecdfplot(data=long_df, x="z", ax=ax1) ecdfplot(data=long_df, x="z", log_scale=True, ax=ax2) # Ignore first point, which either -inf (in linear) or 0 (in log) line1 = ax1.lines[0].get_xydata()[1:] line2 = ax2.lines[0].get_xydata()[1:] assert_array_almost_equal(line1, line2) class TestDisPlot: # TODO probably good to move these utility attributes/methods somewhere else @pytest.mark.parametrize( "kwargs", [ dict(), dict(x="x"), dict(x="t"), dict(x="a"), dict(x="z", log_scale=True), dict(x="x", binwidth=4), dict(x="x", weights="f", bins=5), dict(x="x", color="green", linewidth=2, binwidth=4), dict(x="x", hue="a", fill=False), dict(x="y", hue="a", fill=False), dict(x="x", hue="a", multiple="stack"), dict(x="x", hue="a", element="step"), dict(x="x", hue="a", palette="muted"), dict(x="x", hue="a", kde=True), dict(x="x", hue="a", stat="density", common_norm=False), dict(x="x", y="y"), ], ) def test_versus_single_histplot(self, long_df, kwargs): ax = histplot(long_df, **kwargs) g = displot(long_df, **kwargs) assert_plots_equal(ax, g.ax) if ax.legend_ is not None: assert_legends_equal(ax.legend_, g._legend) if kwargs: long_df["_"] = "_" g2 = displot(long_df, col="_", **kwargs) assert_plots_equal(ax, g2.ax) @pytest.mark.parametrize( "kwargs", [ dict(), dict(x="x"), dict(x="t"), dict(x="z", log_scale=True), dict(x="x", bw_adjust=.5), dict(x="x", weights="f"), dict(x="x", color="green", linewidth=2), dict(x="x", hue="a", multiple="stack"), dict(x="x", hue="a", fill=True), dict(x="y", hue="a", fill=False), dict(x="x", hue="a", palette="muted"), dict(x="x", y="y"), ], ) def test_versus_single_kdeplot(self, long_df, kwargs): if "weights" in kwargs and LooseVersion(scipy.__version__) < "1.2": pytest.skip("Weights require scipy >= 1.2") ax = kdeplot(data=long_df, **kwargs) g = displot(long_df, kind="kde", 
**kwargs) assert_plots_equal(ax, g.ax) if ax.legend_ is not None: assert_legends_equal(ax.legend_, g._legend) if kwargs: long_df["_"] = "_" g2 = displot(long_df, kind="kde", col="_", **kwargs) assert_plots_equal(ax, g2.ax) @pytest.mark.parametrize( "kwargs", [ dict(), dict(x="x"), dict(x="t"), dict(x="z", log_scale=True), dict(x="x", weights="f"), dict(y="x"), dict(x="x", color="green", linewidth=2), dict(x="x", hue="a", complementary=True), dict(x="x", hue="a", stat="count"), dict(x="x", hue="a", palette="muted"), ], ) def test_versus_single_ecdfplot(self, long_df, kwargs): ax = ecdfplot(data=long_df, **kwargs) g = displot(long_df, kind="ecdf", **kwargs) assert_plots_equal(ax, g.ax) if ax.legend_ is not None: assert_legends_equal(ax.legend_, g._legend) if kwargs: long_df["_"] = "_" g2 = displot(long_df, kind="ecdf", col="_", **kwargs) assert_plots_equal(ax, g2.ax) @pytest.mark.parametrize( "kwargs", [ dict(x="x"), dict(x="x", y="y"), dict(x="x", hue="a"), ] ) def test_with_rug(self, long_df, kwargs): ax = rugplot(data=long_df, **kwargs) g = displot(long_df, rug=True, **kwargs) g.ax.patches = [] assert_plots_equal(ax, g.ax, labels=False) long_df["_"] = "_" g2 = displot(long_df, col="_", rug=True, **kwargs) g2.ax.patches = [] assert_plots_equal(ax, g2.ax, labels=False) @pytest.mark.parametrize( "facet_var", ["col", "row"], ) def test_facets(self, long_df, facet_var): kwargs = {facet_var: "a"} ax = kdeplot(data=long_df, x="x", hue="a") g = displot(long_df, x="x", kind="kde", **kwargs) legend_texts = ax.legend_.get_texts() for i, line in enumerate(ax.lines[::-1]): facet_ax = g.axes.flat[i] facet_line = facet_ax.lines[0] assert_array_equal(line.get_xydata(), facet_line.get_xydata()) text = legend_texts[i].get_text() assert text in facet_ax.get_title() @pytest.mark.parametrize("multiple", ["dodge", "stack", "fill"]) def test_facet_multiple(self, long_df, multiple): bins = np.linspace(0, 20, 5) ax = histplot( data=long_df[long_df["c"] == 0], x="x", hue="a", 
hue_order=["a", "b", "c"], multiple=multiple, bins=bins, ) g = displot( data=long_df, x="x", hue="a", col="c", hue_order=["a", "b", "c"], multiple=multiple, bins=bins, ) assert_plots_equal(ax, g.axes_dict[0]) def test_ax_warning(self, long_df): ax = plt.figure().subplots() with pytest.warns(UserWarning, match="`displot` is a figure-level"): displot(long_df, x="x", ax=ax) @pytest.mark.parametrize("key", ["col", "row"]) def test_array_faceting(self, long_df, key): a = np.asarray(long_df["a"]) # .to_numpy on pandas 0.24 vals = categorical_order(a) g = displot(long_df, x="x", **{key: a}) assert len(g.axes.flat) == len(vals) for ax, val in zip(g.axes.flat, vals): assert val in ax.get_title() def test_legend(self, long_df): g = displot(long_df, x="x", hue="a") assert g._legend is not None def test_empty(self): g = displot(x=[], y=[]) assert isinstance(g, FacetGrid) def test_bivariate_ecdf_error(self, long_df): with pytest.raises(NotImplementedError): displot(long_df, x="x", y="y", kind="ecdf") def test_bivariate_kde_norm(self, rng): x, y = rng.normal(0, 1, (2, 100)) z = [0] * 80 + [1] * 20 g = displot(x=x, y=y, col=z, kind="kde", levels=10) l1 = sum(bool(c.get_segments()) for c in g.axes.flat[0].collections) l2 = sum(bool(c.get_segments()) for c in g.axes.flat[1].collections) assert l1 > l2 g = displot(x=x, y=y, col=z, kind="kde", levels=10, common_norm=False) l1 = sum(bool(c.get_segments()) for c in g.axes.flat[0].collections) l2 = sum(bool(c.get_segments()) for c in g.axes.flat[1].collections) assert l1 == l2 def test_bivariate_hist_norm(self, rng): x, y = rng.normal(0, 1, (2, 100)) z = [0] * 80 + [1] * 20 g = displot(x=x, y=y, col=z, kind="hist") clim1 = g.axes.flat[0].collections[0].get_clim() clim2 = g.axes.flat[1].collections[0].get_clim() assert clim1 == clim2 g = displot(x=x, y=y, col=z, kind="hist", common_norm=False) clim1 = g.axes.flat[0].collections[0].get_clim() clim2 = g.axes.flat[1].collections[0].get_clim() assert clim1[1] > clim2[1] def 
test_facetgrid_data(self, long_df): g = displot( data=long_df.to_dict(orient="list"), x="z", hue=long_df["a"].rename("hue_var"), col=np.asarray(long_df["c"]), ) expected_cols = set(long_df.columns.tolist() + ["hue_var", "_col_"]) assert set(g.data.columns) == expected_cols assert_array_equal(g.data["hue_var"], long_df["a"]) assert_array_equal(g.data["_col_"], long_df["c"]) seaborn-0.11.2/seaborn/tests/test_docstrings.py000066400000000000000000000023331410631356500216140ustar00rootroot00000000000000from .._docstrings import DocstringComponents EXAMPLE_DICT = dict( param_a=""" a : str The first parameter. """, ) class ExampleClass: def example_method(self): """An example method. Parameters ---------- a : str A method parameter. """ def example_func(): """An example function. Parameters ---------- a : str A function parameter. """ class TestDocstringComponents: def test_from_dict(self): obj = DocstringComponents(EXAMPLE_DICT) assert obj.param_a == "a : str\n The first parameter." def test_from_nested_components(self): obj_inner = DocstringComponents(EXAMPLE_DICT) obj_outer = DocstringComponents.from_nested_components(inner=obj_inner) assert obj_outer.inner.param_a == "a : str\n The first parameter." def test_from_function(self): obj = DocstringComponents.from_function_params(example_func) assert obj.a == "a : str\n A function parameter." def test_from_method(self): obj = DocstringComponents.from_function_params( ExampleClass.example_method ) assert obj.a == "a : str\n A method parameter." seaborn-0.11.2/seaborn/tests/test_matrix.py000066400000000000000000001345551410631356500207550ustar00rootroot00000000000000import tempfile import copy import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import pandas as pd from scipy.spatial import distance from scipy.cluster import hierarchy import numpy.testing as npt try: import pandas.testing as pdt except ImportError: import pandas.util.testing as pdt import pytest from .. import matrix as mat from .. 
import color_palette from .._testing import assert_colors_equal try: import fastcluster assert fastcluster _no_fastcluster = False except ImportError: _no_fastcluster = True # Copied from master onto v0.11 here to fix break introduced by # cherry pick commit 49fbd353 class TestHeatmap: rs = np.random.RandomState(sum(map(ord, "heatmap"))) x_norm = rs.randn(4, 8) letters = pd.Series(["A", "B", "C", "D"], name="letters") df_norm = pd.DataFrame(x_norm, index=letters) x_unif = rs.rand(20, 13) df_unif = pd.DataFrame(x_unif) default_kws = dict(vmin=None, vmax=None, cmap=None, center=None, robust=False, annot=False, fmt=".2f", annot_kws=None, cbar=True, cbar_kws=None, mask=None) def test_ndarray_input(self): p = mat._HeatMapper(self.x_norm, **self.default_kws) npt.assert_array_equal(p.plot_data, self.x_norm) pdt.assert_frame_equal(p.data, pd.DataFrame(self.x_norm)) npt.assert_array_equal(p.xticklabels, np.arange(8)) npt.assert_array_equal(p.yticklabels, np.arange(4)) assert p.xlabel == "" assert p.ylabel == "" def test_df_input(self): p = mat._HeatMapper(self.df_norm, **self.default_kws) npt.assert_array_equal(p.plot_data, self.x_norm) pdt.assert_frame_equal(p.data, self.df_norm) npt.assert_array_equal(p.xticklabels, np.arange(8)) npt.assert_array_equal(p.yticklabels, self.letters.values) assert p.xlabel == "" assert p.ylabel == "letters" def test_df_multindex_input(self): df = self.df_norm.copy() index = pd.MultiIndex.from_tuples([("A", 1), ("B", 2), ("C", 3), ("D", 4)], names=["letter", "number"]) index.name = "letter-number" df.index = index p = mat._HeatMapper(df, **self.default_kws) combined_tick_labels = ["A-1", "B-2", "C-3", "D-4"] npt.assert_array_equal(p.yticklabels, combined_tick_labels) assert p.ylabel == "letter-number" p = mat._HeatMapper(df.T, **self.default_kws) npt.assert_array_equal(p.xticklabels, combined_tick_labels) assert p.xlabel == "letter-number" @pytest.mark.parametrize("dtype", [float, np.int64, object]) def test_mask_input(self, dtype): kws = 
self.default_kws.copy() mask = self.x_norm > 0 kws['mask'] = mask data = self.x_norm.astype(dtype) p = mat._HeatMapper(data, **kws) plot_data = np.ma.masked_where(mask, data) npt.assert_array_equal(p.plot_data, plot_data) def test_mask_limits(self): """Make sure masked cells are not used to calculate extremes""" kws = self.default_kws.copy() mask = self.x_norm > 0 kws['mask'] = mask p = mat._HeatMapper(self.x_norm, **kws) assert p.vmax == np.ma.array(self.x_norm, mask=mask).max() assert p.vmin == np.ma.array(self.x_norm, mask=mask).min() mask = self.x_norm < 0 kws['mask'] = mask p = mat._HeatMapper(self.x_norm, **kws) assert p.vmin == np.ma.array(self.x_norm, mask=mask).min() assert p.vmax == np.ma.array(self.x_norm, mask=mask).max() def test_default_vlims(self): p = mat._HeatMapper(self.df_unif, **self.default_kws) assert p.vmin == self.x_unif.min() assert p.vmax == self.x_unif.max() def test_robust_vlims(self): kws = self.default_kws.copy() kws["robust"] = True p = mat._HeatMapper(self.df_unif, **kws) assert p.vmin == np.percentile(self.x_unif, 2) assert p.vmax == np.percentile(self.x_unif, 98) def test_custom_sequential_vlims(self): kws = self.default_kws.copy() kws["vmin"] = 0 kws["vmax"] = 1 p = mat._HeatMapper(self.df_unif, **kws) assert p.vmin == 0 assert p.vmax == 1 def test_custom_diverging_vlims(self): kws = self.default_kws.copy() kws["vmin"] = -4 kws["vmax"] = 5 kws["center"] = 0 p = mat._HeatMapper(self.df_norm, **kws) assert p.vmin == -4 assert p.vmax == 5 def test_array_with_nans(self): x1 = self.rs.rand(10, 10) nulls = np.zeros(10) * np.nan x2 = np.c_[x1, nulls] m1 = mat._HeatMapper(x1, **self.default_kws) m2 = mat._HeatMapper(x2, **self.default_kws) assert m1.vmin == m2.vmin assert m1.vmax == m2.vmax def test_mask(self): df = pd.DataFrame(data={'a': [1, 1, 1], 'b': [2, np.nan, 2], 'c': [3, 3, np.nan]}) kws = self.default_kws.copy() kws["mask"] = np.isnan(df.values) m = mat._HeatMapper(df, **kws) npt.assert_array_equal(np.isnan(m.plot_data.data), 
m.plot_data.mask) def test_custom_cmap(self): kws = self.default_kws.copy() kws["cmap"] = "BuGn" p = mat._HeatMapper(self.df_unif, **kws) assert p.cmap == mpl.cm.BuGn def test_centered_vlims(self): kws = self.default_kws.copy() kws["center"] = .5 p = mat._HeatMapper(self.df_unif, **kws) assert p.vmin == self.df_unif.values.min() assert p.vmax == self.df_unif.values.max() def test_default_colors(self): vals = np.linspace(.2, 1, 9) cmap = mpl.cm.binary ax = mat.heatmap([vals], cmap=cmap) fc = ax.collections[0].get_facecolors() cvals = np.linspace(0, 1, 9) npt.assert_array_almost_equal(fc, cmap(cvals), 2) def test_custom_vlim_colors(self): vals = np.linspace(.2, 1, 9) cmap = mpl.cm.binary ax = mat.heatmap([vals], vmin=0, cmap=cmap) fc = ax.collections[0].get_facecolors() npt.assert_array_almost_equal(fc, cmap(vals), 2) def test_custom_center_colors(self): vals = np.linspace(.2, 1, 9) cmap = mpl.cm.binary ax = mat.heatmap([vals], center=.5, cmap=cmap) fc = ax.collections[0].get_facecolors() npt.assert_array_almost_equal(fc, cmap(vals), 2) def test_cmap_with_properties(self): kws = self.default_kws.copy() cmap = copy.copy(mpl.cm.get_cmap("BrBG")) cmap.set_bad("red") kws["cmap"] = cmap hm = mat._HeatMapper(self.df_unif, **kws) npt.assert_array_equal( cmap(np.ma.masked_invalid([np.nan])), hm.cmap(np.ma.masked_invalid([np.nan]))) kws["center"] = 0.5 hm = mat._HeatMapper(self.df_unif, **kws) npt.assert_array_equal( cmap(np.ma.masked_invalid([np.nan])), hm.cmap(np.ma.masked_invalid([np.nan]))) kws = self.default_kws.copy() cmap = copy.copy(mpl.cm.get_cmap("BrBG")) cmap.set_under("red") kws["cmap"] = cmap hm = mat._HeatMapper(self.df_unif, **kws) npt.assert_array_equal(cmap(-np.inf), hm.cmap(-np.inf)) kws["center"] = .5 hm = mat._HeatMapper(self.df_unif, **kws) npt.assert_array_equal(cmap(-np.inf), hm.cmap(-np.inf)) kws = self.default_kws.copy() cmap = copy.copy(mpl.cm.get_cmap("BrBG")) cmap.set_over("red") kws["cmap"] = cmap hm = mat._HeatMapper(self.df_unif, **kws) 
npt.assert_array_equal(cmap(-np.inf), hm.cmap(-np.inf)) kws["center"] = .5 hm = mat._HeatMapper(self.df_unif, **kws) npt.assert_array_equal(cmap(np.inf), hm.cmap(np.inf)) def test_tickabels_off(self): kws = self.default_kws.copy() kws['xticklabels'] = False kws['yticklabels'] = False p = mat._HeatMapper(self.df_norm, **kws) assert p.xticklabels == [] assert p.yticklabels == [] def test_custom_ticklabels(self): kws = self.default_kws.copy() xticklabels = list('iheartheatmaps'[:self.df_norm.shape[1]]) yticklabels = list('heatmapsarecool'[:self.df_norm.shape[0]]) kws['xticklabels'] = xticklabels kws['yticklabels'] = yticklabels p = mat._HeatMapper(self.df_norm, **kws) assert p.xticklabels == xticklabels assert p.yticklabels == yticklabels def test_custom_ticklabel_interval(self): kws = self.default_kws.copy() xstep, ystep = 2, 3 kws['xticklabels'] = xstep kws['yticklabels'] = ystep p = mat._HeatMapper(self.df_norm, **kws) nx, ny = self.df_norm.T.shape npt.assert_array_equal(p.xticks, np.arange(0, nx, xstep) + .5) npt.assert_array_equal(p.yticks, np.arange(0, ny, ystep) + .5) npt.assert_array_equal(p.xticklabels, self.df_norm.columns[0:nx:xstep]) npt.assert_array_equal(p.yticklabels, self.df_norm.index[0:ny:ystep]) def test_heatmap_annotation(self): ax = mat.heatmap(self.df_norm, annot=True, fmt=".1f", annot_kws={"fontsize": 14}) for val, text in zip(self.x_norm.flat, ax.texts): assert text.get_text() == "{:.1f}".format(val) assert text.get_fontsize() == 14 def test_heatmap_annotation_overwrite_kws(self): annot_kws = dict(color="0.3", va="bottom", ha="left") ax = mat.heatmap(self.df_norm, annot=True, fmt=".1f", annot_kws=annot_kws) for text in ax.texts: assert text.get_color() == "0.3" assert text.get_ha() == "left" assert text.get_va() == "bottom" def test_heatmap_annotation_with_mask(self): df = pd.DataFrame(data={'a': [1, 1, 1], 'b': [2, np.nan, 2], 'c': [3, 3, np.nan]}) mask = np.isnan(df.values) df_masked = np.ma.masked_where(mask, df) ax = mat.heatmap(df, 
annot=True, fmt='.1f', mask=mask) assert len(df_masked.compressed()) == len(ax.texts) for val, text in zip(df_masked.compressed(), ax.texts): assert "{:.1f}".format(val) == text.get_text() def test_heatmap_annotation_mesh_colors(self): ax = mat.heatmap(self.df_norm, annot=True) mesh = ax.collections[0] assert len(mesh.get_facecolors()) == self.df_norm.values.size plt.close("all") def test_heatmap_annotation_other_data(self): annot_data = self.df_norm + 10 ax = mat.heatmap(self.df_norm, annot=annot_data, fmt=".1f", annot_kws={"fontsize": 14}) for val, text in zip(annot_data.values.flat, ax.texts): assert text.get_text() == "{:.1f}".format(val) assert text.get_fontsize() == 14 def test_heatmap_annotation_with_limited_ticklabels(self): ax = mat.heatmap(self.df_norm, fmt=".2f", annot=True, xticklabels=False, yticklabels=False) for val, text in zip(self.x_norm.flat, ax.texts): assert text.get_text() == "{:.2f}".format(val) def test_heatmap_cbar(self): f = plt.figure() mat.heatmap(self.df_norm) assert len(f.axes) == 2 plt.close(f) f = plt.figure() mat.heatmap(self.df_norm, cbar=False) assert len(f.axes) == 1 plt.close(f) f, (ax1, ax2) = plt.subplots(2) mat.heatmap(self.df_norm, ax=ax1, cbar_ax=ax2) assert len(f.axes) == 2 plt.close(f) @pytest.mark.xfail(mpl.__version__ == "3.1.1", reason="matplotlib 3.1.1 bug") def test_heatmap_axes(self): ax = mat.heatmap(self.df_norm) xtl = [int(l.get_text()) for l in ax.get_xticklabels()] assert xtl == list(self.df_norm.columns) ytl = [l.get_text() for l in ax.get_yticklabels()] assert ytl == list(self.df_norm.index) assert ax.get_xlabel() == "" assert ax.get_ylabel() == "letters" assert ax.get_xlim() == (0, 8) assert ax.get_ylim() == (4, 0) def test_heatmap_ticklabel_rotation(self): f, ax = plt.subplots(figsize=(2, 2)) mat.heatmap(self.df_norm, xticklabels=1, yticklabels=1, ax=ax) for t in ax.get_xticklabels(): assert t.get_rotation() == 0 for t in ax.get_yticklabels(): assert t.get_rotation() == 90 plt.close(f) df = 
self.df_norm.copy() df.columns = [str(c) * 10 for c in df.columns] df.index = [i * 10 for i in df.index] f, ax = plt.subplots(figsize=(2, 2)) mat.heatmap(df, xticklabels=1, yticklabels=1, ax=ax) for t in ax.get_xticklabels(): assert t.get_rotation() == 90 for t in ax.get_yticklabels(): assert t.get_rotation() == 0 plt.close(f) def test_heatmap_inner_lines(self): c = (0, 0, 1, 1) ax = mat.heatmap(self.df_norm, linewidths=2, linecolor=c) mesh = ax.collections[0] assert mesh.get_linewidths()[0] == 2 assert tuple(mesh.get_edgecolor()[0]) == c def test_square_aspect(self): ax = mat.heatmap(self.df_norm, square=True) obs_aspect = ax.get_aspect() # mpl>3.3 returns 1 for setting "equal" aspect # so test for the two possible equal outcomes assert obs_aspect == "equal" or obs_aspect == 1 def test_mask_validation(self): mask = mat._matrix_mask(self.df_norm, None) assert mask.shape == self.df_norm.shape assert mask.values.sum() == 0 with pytest.raises(ValueError): bad_array_mask = self.rs.randn(3, 6) > 0 mat._matrix_mask(self.df_norm, bad_array_mask) with pytest.raises(ValueError): bad_df_mask = pd.DataFrame(self.rs.randn(4, 8) > 0) mat._matrix_mask(self.df_norm, bad_df_mask) def test_missing_data_mask(self): data = pd.DataFrame(np.arange(4, dtype=float).reshape(2, 2)) data.loc[0, 0] = np.nan mask = mat._matrix_mask(data, None) npt.assert_array_equal(mask, [[True, False], [False, False]]) mask_in = np.array([[False, True], [False, False]]) mask_out = mat._matrix_mask(data, mask_in) npt.assert_array_equal(mask_out, [[True, True], [False, False]]) def test_cbar_ticks(self): f, (ax1, ax2) = plt.subplots(2) mat.heatmap(self.df_norm, ax=ax1, cbar_ax=ax2, cbar_kws=dict(drawedges=True)) assert len(ax2.collections) == 2 class TestDendrogram: rs = np.random.RandomState(sum(map(ord, "dendrogram"))) x_norm = rs.randn(4, 8) + np.arange(8) x_norm = (x_norm.T + np.arange(4)).T letters = pd.Series(["A", "B", "C", "D", "E", "F", "G", "H"], name="letters") df_norm = pd.DataFrame(x_norm, 
columns=letters) try: import fastcluster x_norm_linkage = fastcluster.linkage_vector(x_norm.T, metric='euclidean', method='single') except ImportError: x_norm_distances = distance.pdist(x_norm.T, metric='euclidean') x_norm_linkage = hierarchy.linkage(x_norm_distances, method='single') x_norm_dendrogram = hierarchy.dendrogram(x_norm_linkage, no_plot=True, color_threshold=-np.inf) x_norm_leaves = x_norm_dendrogram['leaves'] df_norm_leaves = np.asarray(df_norm.columns[x_norm_leaves]) default_kws = dict(linkage=None, metric='euclidean', method='single', axis=1, label=True, rotate=False) def test_ndarray_input(self): p = mat._DendrogramPlotter(self.x_norm, **self.default_kws) npt.assert_array_equal(p.array.T, self.x_norm) pdt.assert_frame_equal(p.data.T, pd.DataFrame(self.x_norm)) npt.assert_array_equal(p.linkage, self.x_norm_linkage) assert p.dendrogram == self.x_norm_dendrogram npt.assert_array_equal(p.reordered_ind, self.x_norm_leaves) npt.assert_array_equal(p.xticklabels, self.x_norm_leaves) npt.assert_array_equal(p.yticklabels, []) assert p.xlabel is None assert p.ylabel == '' def test_df_input(self): p = mat._DendrogramPlotter(self.df_norm, **self.default_kws) npt.assert_array_equal(p.array.T, np.asarray(self.df_norm)) pdt.assert_frame_equal(p.data.T, self.df_norm) npt.assert_array_equal(p.linkage, self.x_norm_linkage) assert p.dendrogram == self.x_norm_dendrogram npt.assert_array_equal(p.xticklabels, np.asarray(self.df_norm.columns)[ self.x_norm_leaves]) npt.assert_array_equal(p.yticklabels, []) assert p.xlabel == 'letters' assert p.ylabel == '' def test_df_multindex_input(self): df = self.df_norm.copy() index = pd.MultiIndex.from_tuples([("A", 1), ("B", 2), ("C", 3), ("D", 4)], names=["letter", "number"]) index.name = "letter-number" df.index = index kws = self.default_kws.copy() kws['label'] = True p = mat._DendrogramPlotter(df.T, **kws) xticklabels = ["A-1", "B-2", "C-3", "D-4"] xticklabels = [xticklabels[i] for i in p.reordered_ind] 
npt.assert_array_equal(p.xticklabels, xticklabels) npt.assert_array_equal(p.yticklabels, []) assert p.xlabel == "letter-number" def test_axis0_input(self): kws = self.default_kws.copy() kws['axis'] = 0 p = mat._DendrogramPlotter(self.df_norm.T, **kws) npt.assert_array_equal(p.array, np.asarray(self.df_norm.T)) pdt.assert_frame_equal(p.data, self.df_norm.T) npt.assert_array_equal(p.linkage, self.x_norm_linkage) assert p.dendrogram == self.x_norm_dendrogram npt.assert_array_equal(p.xticklabels, self.df_norm_leaves) npt.assert_array_equal(p.yticklabels, []) assert p.xlabel == 'letters' assert p.ylabel == '' def test_rotate_input(self): kws = self.default_kws.copy() kws['rotate'] = True p = mat._DendrogramPlotter(self.df_norm, **kws) npt.assert_array_equal(p.array.T, np.asarray(self.df_norm)) pdt.assert_frame_equal(p.data.T, self.df_norm) npt.assert_array_equal(p.xticklabels, []) npt.assert_array_equal(p.yticklabels, self.df_norm_leaves) assert p.xlabel == '' assert p.ylabel == 'letters' def test_rotate_axis0_input(self): kws = self.default_kws.copy() kws['rotate'] = True kws['axis'] = 0 p = mat._DendrogramPlotter(self.df_norm.T, **kws) npt.assert_array_equal(p.reordered_ind, self.x_norm_leaves) def test_custom_linkage(self): kws = self.default_kws.copy() try: import fastcluster linkage = fastcluster.linkage_vector(self.x_norm, method='single', metric='euclidean') except ImportError: d = distance.pdist(self.x_norm, metric='euclidean') linkage = hierarchy.linkage(d, method='single') dendrogram = hierarchy.dendrogram(linkage, no_plot=True, color_threshold=-np.inf) kws['linkage'] = linkage p = mat._DendrogramPlotter(self.df_norm, **kws) npt.assert_array_equal(p.linkage, linkage) assert p.dendrogram == dendrogram def test_label_false(self): kws = self.default_kws.copy() kws['label'] = False p = mat._DendrogramPlotter(self.df_norm, **kws) assert p.xticks == [] assert p.yticks == [] assert p.xticklabels == [] assert p.yticklabels == [] assert p.xlabel == "" assert p.ylabel 
== "" def test_linkage_scipy(self): p = mat._DendrogramPlotter(self.x_norm, **self.default_kws) scipy_linkage = p._calculate_linkage_scipy() from scipy.spatial import distance from scipy.cluster import hierarchy dists = distance.pdist(self.x_norm.T, metric=self.default_kws['metric']) linkage = hierarchy.linkage(dists, method=self.default_kws['method']) npt.assert_array_equal(scipy_linkage, linkage) @pytest.mark.skipif(_no_fastcluster, reason="fastcluster not installed") def test_fastcluster_other_method(self): import fastcluster kws = self.default_kws.copy() kws['method'] = 'average' linkage = fastcluster.linkage(self.x_norm.T, method='average', metric='euclidean') p = mat._DendrogramPlotter(self.x_norm, **kws) npt.assert_array_equal(p.linkage, linkage) @pytest.mark.skipif(_no_fastcluster, reason="fastcluster not installed") def test_fastcluster_non_euclidean(self): import fastcluster kws = self.default_kws.copy() kws['metric'] = 'cosine' kws['method'] = 'average' linkage = fastcluster.linkage(self.x_norm.T, method=kws['method'], metric=kws['metric']) p = mat._DendrogramPlotter(self.x_norm, **kws) npt.assert_array_equal(p.linkage, linkage) def test_dendrogram_plot(self): d = mat.dendrogram(self.x_norm, **self.default_kws) ax = plt.gca() xlim = ax.get_xlim() # 10 comes from _plot_dendrogram in scipy.cluster.hierarchy xmax = len(d.reordered_ind) * 10 assert xlim[0] == 0 assert xlim[1] == xmax assert len(ax.collections[0].get_paths()) == len(d.dependent_coord) @pytest.mark.xfail(mpl.__version__ == "3.1.1", reason="matplotlib 3.1.1 bug") def test_dendrogram_rotate(self): kws = self.default_kws.copy() kws['rotate'] = True d = mat.dendrogram(self.x_norm, **kws) ax = plt.gca() ylim = ax.get_ylim() # 10 comes from _plot_dendrogram in scipy.cluster.hierarchy ymax = len(d.reordered_ind) * 10 # Since y axis is inverted, ylim is (80, 0) # and therefore not (0, 80) as usual: assert ylim[1] == 0 assert ylim[0] == ymax def test_dendrogram_ticklabel_rotation(self): f, ax = 
plt.subplots(figsize=(2, 2)) mat.dendrogram(self.df_norm, ax=ax) for t in ax.get_xticklabels(): assert t.get_rotation() == 0 plt.close(f) df = self.df_norm.copy() df.columns = [str(c) * 10 for c in df.columns] df.index = [i * 10 for i in df.index] f, ax = plt.subplots(figsize=(2, 2)) mat.dendrogram(df, ax=ax) for t in ax.get_xticklabels(): assert t.get_rotation() == 90 plt.close(f) f, ax = plt.subplots(figsize=(2, 2)) mat.dendrogram(df.T, axis=0, rotate=True) for t in ax.get_yticklabels(): assert t.get_rotation() == 0 plt.close(f) class TestClustermap: rs = np.random.RandomState(sum(map(ord, "clustermap"))) x_norm = rs.randn(4, 8) + np.arange(8) x_norm = (x_norm.T + np.arange(4)).T letters = pd.Series(["A", "B", "C", "D", "E", "F", "G", "H"], name="letters") df_norm = pd.DataFrame(x_norm, columns=letters) try: import fastcluster x_norm_linkage = fastcluster.linkage_vector(x_norm.T, metric='euclidean', method='single') except ImportError: x_norm_distances = distance.pdist(x_norm.T, metric='euclidean') x_norm_linkage = hierarchy.linkage(x_norm_distances, method='single') x_norm_dendrogram = hierarchy.dendrogram(x_norm_linkage, no_plot=True, color_threshold=-np.inf) x_norm_leaves = x_norm_dendrogram['leaves'] df_norm_leaves = np.asarray(df_norm.columns[x_norm_leaves]) default_kws = dict(pivot_kws=None, z_score=None, standard_scale=None, figsize=(10, 10), row_colors=None, col_colors=None, dendrogram_ratio=.2, colors_ratio=.03, cbar_pos=(0, .8, .05, .2)) default_plot_kws = dict(metric='euclidean', method='average', colorbar_kws=None, row_cluster=True, col_cluster=True, row_linkage=None, col_linkage=None, tree_kws=None) row_colors = color_palette('Set2', df_norm.shape[0]) col_colors = color_palette('Dark2', df_norm.shape[1]) def test_ndarray_input(self): cg = mat.ClusterGrid(self.x_norm, **self.default_kws) pdt.assert_frame_equal(cg.data, pd.DataFrame(self.x_norm)) assert len(cg.fig.axes) == 4 assert cg.ax_row_colors is None assert cg.ax_col_colors is None def 
test_df_input(self): cg = mat.ClusterGrid(self.df_norm, **self.default_kws) pdt.assert_frame_equal(cg.data, self.df_norm) def test_corr_df_input(self): df = self.df_norm.corr() cg = mat.ClusterGrid(df, **self.default_kws) cg.plot(**self.default_plot_kws) diag = cg.data2d.values[np.diag_indices_from(cg.data2d)] npt.assert_array_equal(diag, np.ones(cg.data2d.shape[0])) def test_pivot_input(self): df_norm = self.df_norm.copy() df_norm.index.name = 'numbers' df_long = pd.melt(df_norm.reset_index(), var_name='letters', id_vars='numbers') kws = self.default_kws.copy() kws['pivot_kws'] = dict(index='numbers', columns='letters', values='value') cg = mat.ClusterGrid(df_long, **kws) pdt.assert_frame_equal(cg.data2d, df_norm) def test_colors_input(self): kws = self.default_kws.copy() kws['row_colors'] = self.row_colors kws['col_colors'] = self.col_colors cg = mat.ClusterGrid(self.df_norm, **kws) npt.assert_array_equal(cg.row_colors, self.row_colors) npt.assert_array_equal(cg.col_colors, self.col_colors) assert len(cg.fig.axes) == 6 def test_categorical_colors_input(self): kws = self.default_kws.copy() row_colors = pd.Series(self.row_colors, dtype="category") col_colors = pd.Series( self.col_colors, dtype="category", index=self.df_norm.columns ) kws['row_colors'] = row_colors kws['col_colors'] = col_colors exp_row_colors = list(map(mpl.colors.to_rgb, row_colors)) exp_col_colors = list(map(mpl.colors.to_rgb, col_colors)) cg = mat.ClusterGrid(self.df_norm, **kws) npt.assert_array_equal(cg.row_colors, exp_row_colors) npt.assert_array_equal(cg.col_colors, exp_col_colors) assert len(cg.fig.axes) == 6 def test_nested_colors_input(self): kws = self.default_kws.copy() row_colors = [self.row_colors, self.row_colors] col_colors = [self.col_colors, self.col_colors] kws['row_colors'] = row_colors kws['col_colors'] = col_colors cm = mat.ClusterGrid(self.df_norm, **kws) npt.assert_array_equal(cm.row_colors, row_colors) npt.assert_array_equal(cm.col_colors, col_colors) assert 
len(cm.fig.axes) == 6 def test_colors_input_custom_cmap(self): kws = self.default_kws.copy() kws['cmap'] = mpl.cm.PRGn kws['row_colors'] = self.row_colors kws['col_colors'] = self.col_colors cg = mat.clustermap(self.df_norm, **kws) npt.assert_array_equal(cg.row_colors, self.row_colors) npt.assert_array_equal(cg.col_colors, self.col_colors) assert len(cg.fig.axes) == 6 def test_z_score(self): df = self.df_norm.copy() df = (df - df.mean()) / df.std() kws = self.default_kws.copy() kws['z_score'] = 1 cg = mat.ClusterGrid(self.df_norm, **kws) pdt.assert_frame_equal(cg.data2d, df) def test_z_score_axis0(self): df = self.df_norm.copy() df = df.T df = (df - df.mean()) / df.std() df = df.T kws = self.default_kws.copy() kws['z_score'] = 0 cg = mat.ClusterGrid(self.df_norm, **kws) pdt.assert_frame_equal(cg.data2d, df) def test_standard_scale(self): df = self.df_norm.copy() df = (df - df.min()) / (df.max() - df.min()) kws = self.default_kws.copy() kws['standard_scale'] = 1 cg = mat.ClusterGrid(self.df_norm, **kws) pdt.assert_frame_equal(cg.data2d, df) def test_standard_scale_axis0(self): df = self.df_norm.copy() df = df.T df = (df - df.min()) / (df.max() - df.min()) df = df.T kws = self.default_kws.copy() kws['standard_scale'] = 0 cg = mat.ClusterGrid(self.df_norm, **kws) pdt.assert_frame_equal(cg.data2d, df) def test_z_score_standard_scale(self): kws = self.default_kws.copy() kws['z_score'] = True kws['standard_scale'] = True with pytest.raises(ValueError): mat.ClusterGrid(self.df_norm, **kws) def test_color_list_to_matrix_and_cmap(self): # Note this uses the attribute named col_colors but tests row colors matrix, cmap = mat.ClusterGrid.color_list_to_matrix_and_cmap( self.col_colors, self.x_norm_leaves, axis=0) for i, leaf in enumerate(self.x_norm_leaves): color = self.col_colors[leaf] assert_colors_equal(cmap(matrix[i, 0]), color) def test_nested_color_list_to_matrix_and_cmap(self): # Note this uses the attribute named col_colors but tests row colors colors = 
[self.col_colors, self.col_colors[::-1]] matrix, cmap = mat.ClusterGrid.color_list_to_matrix_and_cmap( colors, self.x_norm_leaves, axis=0) for i, leaf in enumerate(self.x_norm_leaves): for j, color_row in enumerate(colors): color = color_row[leaf] assert_colors_equal(cmap(matrix[i, j]), color) def test_color_list_to_matrix_and_cmap_axis1(self): matrix, cmap = mat.ClusterGrid.color_list_to_matrix_and_cmap( self.col_colors, self.x_norm_leaves, axis=1) for j, leaf in enumerate(self.x_norm_leaves): color = self.col_colors[leaf] assert_colors_equal(cmap(matrix[0, j]), color) def test_color_list_to_matrix_and_cmap_different_sizes(self): colors = [self.col_colors, self.col_colors * 2] with pytest.raises(ValueError): matrix, cmap = mat.ClusterGrid.color_list_to_matrix_and_cmap( colors, self.x_norm_leaves, axis=1) def test_savefig(self): # Not sure if this is the right way to test.... cg = mat.ClusterGrid(self.df_norm, **self.default_kws) cg.plot(**self.default_plot_kws) cg.savefig(tempfile.NamedTemporaryFile(), format='png') def test_plot_dendrograms(self): cm = mat.clustermap(self.df_norm, **self.default_kws) assert len(cm.ax_row_dendrogram.collections[0].get_paths()) == len( cm.dendrogram_row.independent_coord ) assert len(cm.ax_col_dendrogram.collections[0].get_paths()) == len( cm.dendrogram_col.independent_coord ) data2d = self.df_norm.iloc[cm.dendrogram_row.reordered_ind, cm.dendrogram_col.reordered_ind] pdt.assert_frame_equal(cm.data2d, data2d) def test_cluster_false(self): kws = self.default_kws.copy() kws['row_cluster'] = False kws['col_cluster'] = False cm = mat.clustermap(self.df_norm, **kws) assert len(cm.ax_row_dendrogram.lines) == 0 assert len(cm.ax_col_dendrogram.lines) == 0 assert len(cm.ax_row_dendrogram.get_xticks()) == 0 assert len(cm.ax_row_dendrogram.get_yticks()) == 0 assert len(cm.ax_col_dendrogram.get_xticks()) == 0 assert len(cm.ax_col_dendrogram.get_yticks()) == 0 pdt.assert_frame_equal(cm.data2d, self.df_norm) def test_row_col_colors(self): kws = 
self.default_kws.copy() kws['row_colors'] = self.row_colors kws['col_colors'] = self.col_colors cm = mat.clustermap(self.df_norm, **kws) assert len(cm.ax_row_colors.collections) == 1 assert len(cm.ax_col_colors.collections) == 1 def test_cluster_false_row_col_colors(self): kws = self.default_kws.copy() kws['row_cluster'] = False kws['col_cluster'] = False kws['row_colors'] = self.row_colors kws['col_colors'] = self.col_colors cm = mat.clustermap(self.df_norm, **kws) assert len(cm.ax_row_dendrogram.lines) == 0 assert len(cm.ax_col_dendrogram.lines) == 0 assert len(cm.ax_row_dendrogram.get_xticks()) == 0 assert len(cm.ax_row_dendrogram.get_yticks()) == 0 assert len(cm.ax_col_dendrogram.get_xticks()) == 0 assert len(cm.ax_col_dendrogram.get_yticks()) == 0 assert len(cm.ax_row_colors.collections) == 1 assert len(cm.ax_col_colors.collections) == 1 pdt.assert_frame_equal(cm.data2d, self.df_norm) def test_row_col_colors_df(self): kws = self.default_kws.copy() kws['row_colors'] = pd.DataFrame({'row_1': list(self.row_colors), 'row_2': list(self.row_colors)}, index=self.df_norm.index, columns=['row_1', 'row_2']) kws['col_colors'] = pd.DataFrame({'col_1': list(self.col_colors), 'col_2': list(self.col_colors)}, index=self.df_norm.columns, columns=['col_1', 'col_2']) cm = mat.clustermap(self.df_norm, **kws) row_labels = [l.get_text() for l in cm.ax_row_colors.get_xticklabels()] assert cm.row_color_labels == ['row_1', 'row_2'] assert row_labels == cm.row_color_labels col_labels = [l.get_text() for l in cm.ax_col_colors.get_yticklabels()] assert cm.col_color_labels == ['col_1', 'col_2'] assert col_labels == cm.col_color_labels def test_row_col_colors_df_shuffled(self): # Tests if colors are properly matched, even if given in wrong order m, n = self.df_norm.shape shuffled_inds = [self.df_norm.index[i] for i in list(range(0, m, 2)) + list(range(1, m, 2))] shuffled_cols = [self.df_norm.columns[i] for i in list(range(0, n, 2)) + list(range(1, n, 2))] kws = self.default_kws.copy() 
row_colors = pd.DataFrame({'row_annot': list(self.row_colors)}, index=self.df_norm.index) kws['row_colors'] = row_colors.loc[shuffled_inds] col_colors = pd.DataFrame({'col_annot': list(self.col_colors)}, index=self.df_norm.columns) kws['col_colors'] = col_colors.loc[shuffled_cols] cm = mat.clustermap(self.df_norm, **kws) assert list(cm.col_colors)[0] == list(self.col_colors) assert list(cm.row_colors)[0] == list(self.row_colors) def test_row_col_colors_df_missing(self): kws = self.default_kws.copy() row_colors = pd.DataFrame({'row_annot': list(self.row_colors)}, index=self.df_norm.index) kws['row_colors'] = row_colors.drop(self.df_norm.index[0]) col_colors = pd.DataFrame({'col_annot': list(self.col_colors)}, index=self.df_norm.columns) kws['col_colors'] = col_colors.drop(self.df_norm.columns[0]) cm = mat.clustermap(self.df_norm, **kws) assert list(cm.col_colors)[0] == [(1.0, 1.0, 1.0)] + list(self.col_colors[1:]) assert list(cm.row_colors)[0] == [(1.0, 1.0, 1.0)] + list(self.row_colors[1:]) def test_row_col_colors_df_one_axis(self): # Test case with only row annotation. kws1 = self.default_kws.copy() kws1['row_colors'] = pd.DataFrame({'row_1': list(self.row_colors), 'row_2': list(self.row_colors)}, index=self.df_norm.index, columns=['row_1', 'row_2']) cm1 = mat.clustermap(self.df_norm, **kws1) row_labels = [l.get_text() for l in cm1.ax_row_colors.get_xticklabels()] assert cm1.row_color_labels == ['row_1', 'row_2'] assert row_labels == cm1.row_color_labels # Test case with only col annotation. 
kws2 = self.default_kws.copy() kws2['col_colors'] = pd.DataFrame({'col_1': list(self.col_colors), 'col_2': list(self.col_colors)}, index=self.df_norm.columns, columns=['col_1', 'col_2']) cm2 = mat.clustermap(self.df_norm, **kws2) col_labels = [l.get_text() for l in cm2.ax_col_colors.get_yticklabels()] assert cm2.col_color_labels == ['col_1', 'col_2'] assert col_labels == cm2.col_color_labels def test_row_col_colors_series(self): kws = self.default_kws.copy() kws['row_colors'] = pd.Series(list(self.row_colors), name='row_annot', index=self.df_norm.index) kws['col_colors'] = pd.Series(list(self.col_colors), name='col_annot', index=self.df_norm.columns) cm = mat.clustermap(self.df_norm, **kws) row_labels = [l.get_text() for l in cm.ax_row_colors.get_xticklabels()] assert cm.row_color_labels == ['row_annot'] assert row_labels == cm.row_color_labels col_labels = [l.get_text() for l in cm.ax_col_colors.get_yticklabels()] assert cm.col_color_labels == ['col_annot'] assert col_labels == cm.col_color_labels def test_row_col_colors_series_shuffled(self): # Tests if colors are properly matched, even if given in wrong order m, n = self.df_norm.shape shuffled_inds = [self.df_norm.index[i] for i in list(range(0, m, 2)) + list(range(1, m, 2))] shuffled_cols = [self.df_norm.columns[i] for i in list(range(0, n, 2)) + list(range(1, n, 2))] kws = self.default_kws.copy() row_colors = pd.Series(list(self.row_colors), name='row_annot', index=self.df_norm.index) kws['row_colors'] = row_colors.loc[shuffled_inds] col_colors = pd.Series(list(self.col_colors), name='col_annot', index=self.df_norm.columns) kws['col_colors'] = col_colors.loc[shuffled_cols] cm = mat.clustermap(self.df_norm, **kws) assert list(cm.col_colors) == list(self.col_colors) assert list(cm.row_colors) == list(self.row_colors) def test_row_col_colors_series_missing(self): kws = self.default_kws.copy() row_colors = pd.Series(list(self.row_colors), name='row_annot', index=self.df_norm.index) kws['row_colors'] = 
row_colors.drop(self.df_norm.index[0]) col_colors = pd.Series(list(self.col_colors), name='col_annot', index=self.df_norm.columns) kws['col_colors'] = col_colors.drop(self.df_norm.columns[0]) cm = mat.clustermap(self.df_norm, **kws) assert list(cm.col_colors) == [(1.0, 1.0, 1.0)] + list(self.col_colors[1:]) assert list(cm.row_colors) == [(1.0, 1.0, 1.0)] + list(self.row_colors[1:]) def test_row_col_colors_ignore_heatmap_kwargs(self): g = mat.clustermap(self.rs.uniform(0, 200, self.df_norm.shape), row_colors=self.row_colors, col_colors=self.col_colors, cmap="Spectral", norm=mpl.colors.LogNorm(), vmax=100) assert np.array_equal( np.array(self.row_colors)[g.dendrogram_row.reordered_ind], g.ax_row_colors.collections[0].get_facecolors()[:, :3] ) assert np.array_equal( np.array(self.col_colors)[g.dendrogram_col.reordered_ind], g.ax_col_colors.collections[0].get_facecolors()[:, :3] ) def test_row_col_colors_raise_on_mixed_index_types(self): row_colors = pd.Series( list(self.row_colors), name="row_annot", index=self.df_norm.index ) col_colors = pd.Series( list(self.col_colors), name="col_annot", index=self.df_norm.columns ) with pytest.raises(TypeError): mat.clustermap(self.x_norm, row_colors=row_colors) with pytest.raises(TypeError): mat.clustermap(self.x_norm, col_colors=col_colors) def test_mask_reorganization(self): kws = self.default_kws.copy() kws["mask"] = self.df_norm > 0 g = mat.clustermap(self.df_norm, **kws) npt.assert_array_equal(g.data2d.index, g.mask.index) npt.assert_array_equal(g.data2d.columns, g.mask.columns) npt.assert_array_equal(g.mask.index, self.df_norm.index[ g.dendrogram_row.reordered_ind]) npt.assert_array_equal(g.mask.columns, self.df_norm.columns[ g.dendrogram_col.reordered_ind]) def test_ticklabel_reorganization(self): kws = self.default_kws.copy() xtl = np.arange(self.df_norm.shape[1]) kws["xticklabels"] = list(xtl) ytl = self.letters.loc[:self.df_norm.shape[0]] kws["yticklabels"] = ytl g = mat.clustermap(self.df_norm, **kws) xtl_actual = 
[t.get_text() for t in g.ax_heatmap.get_xticklabels()]
        ytl_actual = [t.get_text() for t in g.ax_heatmap.get_yticklabels()]

        xtl_want = xtl[g.dendrogram_col.reordered_ind].astype("<U1")
        ytl_want = ytl[g.dendrogram_row.reordered_ind].astype("<U1")

        npt.assert_array_equal(xtl_actual, xtl_want)
        npt.assert_array_equal(ytl_actual, ytl_want)

    def test_noticklabels(self):

        kws = self.default_kws.copy()
        kws["xticklabels"] = False
        kws["yticklabels"] = False

        g = mat.clustermap(self.df_norm, **kws)

        xtl_actual = [t.get_text() for t in g.ax_heatmap.get_xticklabels()]
        ytl_actual = [t.get_text() for t in g.ax_heatmap.get_yticklabels()]
        assert xtl_actual == []
        assert ytl_actual == []

    def test_size_ratios(self):

        kws1 = self.default_kws.copy()
        kws1.update(dendrogram_ratio=.2, colors_ratio=.03,
                    col_colors=self.col_colors, row_colors=self.row_colors)

        kws2 = kws1.copy()
        kws2.update(dendrogram_ratio=.3, colors_ratio=.05)

        g1 = mat.clustermap(self.df_norm, **kws1)
        g2 = mat.clustermap(self.df_norm, **kws2)

        assert (g2.ax_col_dendrogram.get_position().height
                > g1.ax_col_dendrogram.get_position().height)

        assert (g2.ax_col_colors.get_position().height
                > g1.ax_col_colors.get_position().height)

        assert (g2.ax_heatmap.get_position().height
                < g1.ax_heatmap.get_position().height)

        assert (g2.ax_row_dendrogram.get_position().width
                > g1.ax_row_dendrogram.get_position().width)

        assert (g2.ax_row_colors.get_position().width
                > g1.ax_row_colors.get_position().width)

        assert (g2.ax_heatmap.get_position().width
                < g1.ax_heatmap.get_position().width)

        kws1 = self.default_kws.copy()
        kws1.update(col_colors=self.col_colors)
        kws2 = kws1.copy()
        kws2.update(col_colors=[self.col_colors, self.col_colors])

        g1 = mat.clustermap(self.df_norm, **kws1)
        g2 = mat.clustermap(self.df_norm, **kws2)

        assert (g2.ax_col_colors.get_position().height
                > g1.ax_col_colors.get_position().height)

        kws1 = self.default_kws.copy()
        kws1.update(dendrogram_ratio=(.2, .2))

        kws2 = kws1.copy()
        kws2.update(dendrogram_ratio=(.2, .3))

        g1 = mat.clustermap(self.df_norm, **kws1)
        g2 = mat.clustermap(self.df_norm, **kws2)

        assert (g2.ax_row_dendrogram.get_position().width
                == g1.ax_row_dendrogram.get_position().width)

        assert (g2.ax_col_dendrogram.get_position().height
                > g1.ax_col_dendrogram.get_position().height)

    def test_cbar_pos(self):

        kws = self.default_kws.copy()
        kws["cbar_pos"] = (.2, .1, .4, .3)

        g = mat.clustermap(self.df_norm, **kws)
        pos = g.ax_cbar.get_position()
        assert pytest.approx(tuple(pos.p0)) == kws["cbar_pos"][:2]
        assert pytest.approx(pos.width) == kws["cbar_pos"][2]
        assert pytest.approx(pos.height) == kws["cbar_pos"][3]

        kws["cbar_pos"] = None
        g = mat.clustermap(self.df_norm, **kws)
        assert g.ax_cbar is None

    def test_square_warning(self):

        kws = self.default_kws.copy()
        g1 = mat.clustermap(self.df_norm, **kws)

        with pytest.warns(UserWarning):
            kws["square"] = True
            g2 = mat.clustermap(self.df_norm, **kws)
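The `z_score` and `standard_scale` transforms exercised by the tests above reduce to simple column-wise arithmetic. A minimal standalone sketch of the same normalizations, using plain pandas rather than seaborn (the frame `df` here is a stand-in for the `df_norm` fixture):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the df_norm fixture used by the tests
rs = np.random.RandomState(0)
df = pd.DataFrame(rs.randn(8, 3), columns=list("abc"))

# z_score=1 standardizes each column: zero mean, unit (sample) std
z = (df - df.mean()) / df.std()

# standard_scale=1 rescales each column onto the [0, 1] range
s = (df - df.min()) / (df.max() - df.min())

assert np.allclose(z.mean(), 0) and np.allclose(z.std(), 1)
assert np.allclose(s.min(), 0) and np.allclose(s.max(), 1)
```

Applying the same expressions to `df.T` and transposing back gives the `axis=0` variants, exactly as `test_z_score_axis0` and `test_standard_scale_axis0` construct their expected values.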
        g1_shape = g1.ax_heatmap.get_position().get_points()
        g2_shape = g2.ax_heatmap.get_position().get_points()
        assert np.array_equal(g1_shape, g2_shape)

    def test_clustermap_annotation(self):

        g = mat.clustermap(self.df_norm, annot=True, fmt=".1f")
        for val, text in zip(np.asarray(g.data2d).flat, g.ax_heatmap.texts):
            assert text.get_text() == "{:.1f}".format(val)

        g = mat.clustermap(self.df_norm, annot=self.df_norm, fmt=".1f")
        for val, text in zip(np.asarray(g.data2d).flat, g.ax_heatmap.texts):
            assert text.get_text() == "{:.1f}".format(val)

    def test_tree_kws(self):

        rgb = (1, .5, .2)
        g = mat.clustermap(self.df_norm, tree_kws=dict(color=rgb))
        for ax in [g.ax_col_dendrogram, g.ax_row_dendrogram]:
            tree, = ax.collections
            assert tuple(tree.get_color().squeeze())[:3] == rgb


# seaborn-0.11.2/seaborn/tests/test_miscplot.py

import matplotlib.pyplot as plt

from .. import miscplot as misc
from ..palettes import color_palette
from .test_utils import _network


class TestPalPlot:
    """Test the function that visualizes a color palette."""

    def test_palplot_size(self):

        pal4 = color_palette("husl", 4)
        misc.palplot(pal4)
        size4 = plt.gcf().get_size_inches()
        assert tuple(size4) == (4, 1)

        pal5 = color_palette("husl", 5)
        misc.palplot(pal5)
        size5 = plt.gcf().get_size_inches()
        assert tuple(size5) == (5, 1)

        palbig = color_palette("husl", 3)
        misc.palplot(palbig, 2)
        sizebig = plt.gcf().get_size_inches()
        assert tuple(sizebig) == (6, 2)


class TestDogPlot:

    @_network(url="https://github.com/mwaskom/seaborn-data")
    def test_dogplot(self):
        misc.dogplot()
        ax = plt.gca()
        assert len(ax.images) == 1


# seaborn-0.11.2/seaborn/tests/test_palettes.py

import colorsys

import numpy as np
import matplotlib as mpl
import pytest
import numpy.testing as npt

from ..
import palettes, utils, rcmod from ..external import husl from ..colors import xkcd_rgb, crayons class TestColorPalettes: def test_current_palette(self): pal = palettes.color_palette(["red", "blue", "green"]) rcmod.set_palette(pal) assert pal == utils.get_color_cycle() rcmod.set() def test_palette_context(self): default_pal = palettes.color_palette() context_pal = palettes.color_palette("muted") with palettes.color_palette(context_pal): assert utils.get_color_cycle() == context_pal assert utils.get_color_cycle() == default_pal def test_big_palette_context(self): original_pal = palettes.color_palette("deep", n_colors=8) context_pal = palettes.color_palette("husl", 10) rcmod.set_palette(original_pal) with palettes.color_palette(context_pal, 10): assert utils.get_color_cycle() == context_pal assert utils.get_color_cycle() == original_pal # Reset default rcmod.set() def test_palette_size(self): pal = palettes.color_palette("deep") assert len(pal) == palettes.QUAL_PALETTE_SIZES["deep"] pal = palettes.color_palette("pastel6") assert len(pal) == palettes.QUAL_PALETTE_SIZES["pastel6"] pal = palettes.color_palette("Set3") assert len(pal) == palettes.QUAL_PALETTE_SIZES["Set3"] pal = palettes.color_palette("husl") assert len(pal) == 6 pal = palettes.color_palette("Greens") assert len(pal) == 6 def test_seaborn_palettes(self): pals = "deep", "muted", "pastel", "bright", "dark", "colorblind" for name in pals: full = palettes.color_palette(name, 10).as_hex() short = palettes.color_palette(name + "6", 6).as_hex() b, _, g, r, m, _, _, _, y, c = full assert [b, g, r, m, y, c] == list(short) def test_hls_palette(self): pal1 = palettes.hls_palette() pal2 = palettes.color_palette("hls") npt.assert_array_equal(pal1, pal2) cmap1 = palettes.hls_palette(as_cmap=True) cmap2 = palettes.color_palette("hls", as_cmap=True) npt.assert_array_equal(cmap1([.2, .8]), cmap2([.2, .8])) def test_husl_palette(self): pal1 = palettes.husl_palette() pal2 = palettes.color_palette("husl") 
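The `hls_palette` tested above picks evenly spaced hues and converts them to RGB through HLS space. A rough stdlib-only sketch of the idea, using `colorsys` — seaborn's implementation is more careful (it offsets the hue endpoints and exposes `h`, `l`, `s` parameters), so treat the function below as an approximation, not the library's code:

```python
import colorsys

def simple_hls_palette(n, h=.01, l=.6, s=.65):
    # n hues evenly spaced around the color wheel, shifted by h
    hues = [(h + i / n) % 1 for i in range(n)]
    # HLS -> RGB conversion from the standard library
    return [colorsys.hls_to_rgb(hue, l, s) for hue in hues]

pal = simple_hls_palette(6)
assert len(pal) == 6
assert all(0 <= channel <= 1 for rgb in pal for channel in rgb)
```

Raising or lowering `l` brightens or darkens every color at once, which is what `test_hls_values` checks by comparing channel sums between `l=.2` and `l=.8` palettes.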
npt.assert_array_equal(pal1, pal2) cmap1 = palettes.husl_palette(as_cmap=True) cmap2 = palettes.color_palette("husl", as_cmap=True) npt.assert_array_equal(cmap1([.2, .8]), cmap2([.2, .8])) def test_mpl_palette(self): pal1 = palettes.mpl_palette("Reds") pal2 = palettes.color_palette("Reds") npt.assert_array_equal(pal1, pal2) cmap1 = mpl.cm.get_cmap("Reds") cmap2 = palettes.mpl_palette("Reds", as_cmap=True) cmap3 = palettes.color_palette("Reds", as_cmap=True) npt.assert_array_equal(cmap1, cmap2) npt.assert_array_equal(cmap1, cmap3) def test_mpl_dark_palette(self): mpl_pal1 = palettes.mpl_palette("Blues_d") mpl_pal2 = palettes.color_palette("Blues_d") npt.assert_array_equal(mpl_pal1, mpl_pal2) mpl_pal1 = palettes.mpl_palette("Blues_r_d") mpl_pal2 = palettes.color_palette("Blues_r_d") npt.assert_array_equal(mpl_pal1, mpl_pal2) def test_bad_palette_name(self): with pytest.raises(ValueError): palettes.color_palette("IAmNotAPalette") def test_terrible_palette_name(self): with pytest.raises(ValueError): palettes.color_palette("jet") def test_bad_palette_colors(self): pal = ["red", "blue", "iamnotacolor"] with pytest.raises(ValueError): palettes.color_palette(pal) def test_palette_desat(self): pal1 = palettes.husl_palette(6) pal1 = [utils.desaturate(c, .5) for c in pal1] pal2 = palettes.color_palette("husl", desat=.5) npt.assert_array_equal(pal1, pal2) def test_palette_is_list_of_tuples(self): pal_in = np.array(["red", "blue", "green"]) pal_out = palettes.color_palette(pal_in, 3) assert isinstance(pal_out, list) assert isinstance(pal_out[0], tuple) assert isinstance(pal_out[0][0], float) assert len(pal_out[0]) == 3 def test_palette_cycles(self): deep = palettes.color_palette("deep6") double_deep = palettes.color_palette("deep6", 12) assert double_deep == deep + deep def test_hls_values(self): pal1 = palettes.hls_palette(6, h=0) pal2 = palettes.hls_palette(6, h=.5) pal2 = pal2[3:] + pal2[:3] npt.assert_array_almost_equal(pal1, pal2) pal_dark = palettes.hls_palette(5, l=.2) # 
noqa pal_bright = palettes.hls_palette(5, l=.8) # noqa npt.assert_array_less(list(map(sum, pal_dark)), list(map(sum, pal_bright))) pal_flat = palettes.hls_palette(5, s=.1) pal_bold = palettes.hls_palette(5, s=.9) npt.assert_array_less(list(map(np.std, pal_flat)), list(map(np.std, pal_bold))) def test_husl_values(self): pal1 = palettes.husl_palette(6, h=0) pal2 = palettes.husl_palette(6, h=.5) pal2 = pal2[3:] + pal2[:3] npt.assert_array_almost_equal(pal1, pal2) pal_dark = palettes.husl_palette(5, l=.2) # noqa pal_bright = palettes.husl_palette(5, l=.8) # noqa npt.assert_array_less(list(map(sum, pal_dark)), list(map(sum, pal_bright))) pal_flat = palettes.husl_palette(5, s=.1) pal_bold = palettes.husl_palette(5, s=.9) npt.assert_array_less(list(map(np.std, pal_flat)), list(map(np.std, pal_bold))) def test_cbrewer_qual(self): pal_short = palettes.mpl_palette("Set1", 4) pal_long = palettes.mpl_palette("Set1", 6) assert pal_short == pal_long[:4] pal_full = palettes.mpl_palette("Set2", 8) pal_long = palettes.mpl_palette("Set2", 10) assert pal_full == pal_long[:8] def test_mpl_reversal(self): pal_forward = palettes.mpl_palette("BuPu", 6) pal_reverse = palettes.mpl_palette("BuPu_r", 6) npt.assert_array_almost_equal(pal_forward, pal_reverse[::-1]) def test_rgb_from_hls(self): color = .5, .8, .4 rgb_got = palettes._color_to_rgb(color, "hls") rgb_want = colorsys.hls_to_rgb(*color) assert rgb_got == rgb_want def test_rgb_from_husl(self): color = 120, 50, 40 rgb_got = palettes._color_to_rgb(color, "husl") rgb_want = tuple(husl.husl_to_rgb(*color)) assert rgb_got == rgb_want for h in range(0, 360): color = h, 100, 100 rgb = palettes._color_to_rgb(color, "husl") assert min(rgb) >= 0 assert max(rgb) <= 1 def test_rgb_from_xkcd(self): color = "dull red" rgb_got = palettes._color_to_rgb(color, "xkcd") rgb_want = mpl.colors.to_rgb(xkcd_rgb[color]) assert rgb_got == rgb_want def test_light_palette(self): n = 4 pal_forward = palettes.light_palette("red", n) pal_reverse = 
palettes.light_palette("red", n, reverse=True) assert np.allclose(pal_forward, pal_reverse[::-1]) red = mpl.colors.colorConverter.to_rgb("red") assert pal_forward[-1] == red pal_f_from_string = palettes.color_palette("light:red", n) assert pal_forward[3] == pal_f_from_string[3] pal_r_from_string = palettes.color_palette("light:red_r", n) assert pal_reverse[3] == pal_r_from_string[3] pal_cmap = palettes.light_palette("blue", as_cmap=True) assert isinstance(pal_cmap, mpl.colors.LinearSegmentedColormap) pal_cmap_from_string = palettes.color_palette("light:blue", as_cmap=True) assert pal_cmap(.8) == pal_cmap_from_string(.8) pal_cmap = palettes.light_palette("blue", as_cmap=True, reverse=True) pal_cmap_from_string = palettes.color_palette("light:blue_r", as_cmap=True) assert pal_cmap(.8) == pal_cmap_from_string(.8) def test_dark_palette(self): n = 4 pal_forward = palettes.dark_palette("red", n) pal_reverse = palettes.dark_palette("red", n, reverse=True) assert np.allclose(pal_forward, pal_reverse[::-1]) red = mpl.colors.colorConverter.to_rgb("red") assert pal_forward[-1] == red pal_f_from_string = palettes.color_palette("dark:red", n) assert pal_forward[3] == pal_f_from_string[3] pal_r_from_string = palettes.color_palette("dark:red_r", n) assert pal_reverse[3] == pal_r_from_string[3] pal_cmap = palettes.dark_palette("blue", as_cmap=True) assert isinstance(pal_cmap, mpl.colors.LinearSegmentedColormap) pal_cmap_from_string = palettes.color_palette("dark:blue", as_cmap=True) assert pal_cmap(.8) == pal_cmap_from_string(.8) pal_cmap = palettes.dark_palette("blue", as_cmap=True, reverse=True) pal_cmap_from_string = palettes.color_palette("dark:blue_r", as_cmap=True) assert pal_cmap(.8) == pal_cmap_from_string(.8) def test_diverging_palette(self): h_neg, h_pos = 100, 200 sat, lum = 70, 50 args = h_neg, h_pos, sat, lum n = 12 pal = palettes.diverging_palette(*args, n=n) neg_pal = palettes.light_palette((h_neg, sat, lum), int(n // 2), input="husl") pos_pal = 
palettes.light_palette((h_pos, sat, lum), int(n // 2), input="husl") assert len(pal) == n assert pal[0] == neg_pal[-1] assert pal[-1] == pos_pal[-1] pal_dark = palettes.diverging_palette(*args, n=n, center="dark") assert np.mean(pal[int(n / 2)]) > np.mean(pal_dark[int(n / 2)]) pal_cmap = palettes.diverging_palette(*args, as_cmap=True) assert isinstance(pal_cmap, mpl.colors.LinearSegmentedColormap) def test_blend_palette(self): colors = ["red", "yellow", "white"] pal_cmap = palettes.blend_palette(colors, as_cmap=True) assert isinstance(pal_cmap, mpl.colors.LinearSegmentedColormap) colors = ["red", "blue"] pal = palettes.blend_palette(colors) pal_str = "blend:" + ",".join(colors) pal_from_str = palettes.color_palette(pal_str) assert pal == pal_from_str def test_cubehelix_against_matplotlib(self): x = np.linspace(0, 1, 8) mpl_pal = mpl.cm.cubehelix(x)[:, :3].tolist() sns_pal = palettes.cubehelix_palette(8, start=0.5, rot=-1.5, hue=1, dark=0, light=1, reverse=True) assert sns_pal == mpl_pal def test_cubehelix_n_colors(self): for n in [3, 5, 8]: pal = palettes.cubehelix_palette(n) assert len(pal) == n def test_cubehelix_reverse(self): pal_forward = palettes.cubehelix_palette() pal_reverse = palettes.cubehelix_palette(reverse=True) assert pal_forward == pal_reverse[::-1] def test_cubehelix_cmap(self): cmap = palettes.cubehelix_palette(as_cmap=True) assert isinstance(cmap, mpl.colors.ListedColormap) pal = palettes.cubehelix_palette() x = np.linspace(0, 1, 6) npt.assert_array_equal(cmap(x)[:, :3], pal) cmap_rev = palettes.cubehelix_palette(as_cmap=True, reverse=True) x = np.linspace(0, 1, 6) pal_forward = cmap(x).tolist() pal_reverse = cmap_rev(x[::-1]).tolist() assert pal_forward == pal_reverse def test_cubehelix_code(self): color_palette = palettes.color_palette cubehelix_palette = palettes.cubehelix_palette pal1 = color_palette("ch:", 8) pal2 = color_palette(cubehelix_palette(8)) assert pal1 == pal2 pal1 = color_palette("ch:.5, -.25,hue = .5,light=.75", 8) pal2 = 
color_palette(cubehelix_palette(8, .5, -.25, hue=.5, light=.75))
        assert pal1 == pal2

        pal1 = color_palette("ch:h=1,r=.5", 9)
        pal2 = color_palette(cubehelix_palette(9, hue=1, rot=.5))
        assert pal1 == pal2

        pal1 = color_palette("ch:_r", 6)
        pal2 = color_palette(cubehelix_palette(6, reverse=True))
        assert pal1 == pal2

        pal1 = color_palette("ch:_r", as_cmap=True)
        pal2 = cubehelix_palette(6, reverse=True, as_cmap=True)
        assert pal1(.5) == pal2(.5)

    def test_xkcd_palette(self):
        names = list(xkcd_rgb.keys())[10:15]
        colors = palettes.xkcd_palette(names)
        for name, color in zip(names, colors):
            as_hex = mpl.colors.rgb2hex(color)
            assert as_hex == xkcd_rgb[name]

    def test_crayon_palette(self):
        names = list(crayons.keys())[10:15]
        colors = palettes.crayon_palette(names)
        for name, color in zip(names, colors):
            as_hex = mpl.colors.rgb2hex(color)
            assert as_hex == crayons[name].lower()

    def test_color_codes(self):
        palettes.set_color_codes("deep")
        colors = palettes.color_palette("deep6") + [".1"]
        for code, color in zip("bgrmyck", colors):
            rgb_want = mpl.colors.colorConverter.to_rgb(color)
            rgb_got = mpl.colors.colorConverter.to_rgb(code)
            assert rgb_want == rgb_got
        palettes.set_color_codes("reset")

        with pytest.raises(ValueError):
            palettes.set_color_codes("Set1")

    def test_as_hex(self):
        pal = palettes.color_palette("deep")
        for rgb, hex in zip(pal, pal.as_hex()):
            assert mpl.colors.rgb2hex(rgb) == hex

    def test_preserved_palette_length(self):
        pal_in = palettes.color_palette("Set1", 10)
        pal_out = palettes.color_palette(pal_in)
        assert pal_in == pal_out

    def test_html_rep(self):
        pal = palettes.color_palette()
        html = pal._repr_html_()
        for color in pal.as_hex():
            assert color in html


# seaborn-0.11.2/seaborn/tests/test_rcmod.py

from distutils.version import LooseVersion

import pytest
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy.testing as npt

from ..
import rcmod, palettes, utils from ..conftest import has_verdana class RCParamTester: def flatten_list(self, orig_list): iter_list = map(np.atleast_1d, orig_list) flat_list = [item for sublist in iter_list for item in sublist] return flat_list def assert_rc_params(self, params): for k, v in params.items(): # Various subtle issues in matplotlib lead to unexpected # values for the backend rcParam, which isn't relevant here if k == "backend": continue if isinstance(v, np.ndarray): npt.assert_array_equal(mpl.rcParams[k], v) else: assert mpl.rcParams[k] == v def assert_rc_params_equal(self, params1, params2): for key, v1 in params1.items(): # Various subtle issues in matplotlib lead to unexpected # values for the backend rcParam, which isn't relevant here if key == "backend": continue v2 = params2[key] if isinstance(v1, np.ndarray): npt.assert_array_equal(v1, v2) else: assert v1 == v2 class TestAxesStyle(RCParamTester): styles = ["white", "dark", "whitegrid", "darkgrid", "ticks"] def test_default_return(self): current = rcmod.axes_style() self.assert_rc_params(current) def test_key_usage(self): _style_keys = set(rcmod._style_keys) for style in self.styles: assert not set(rcmod.axes_style(style)) ^ _style_keys def test_bad_style(self): with pytest.raises(ValueError): rcmod.axes_style("i_am_not_a_style") def test_rc_override(self): rc = {"axes.facecolor": "blue", "foo.notaparam": "bar"} out = rcmod.axes_style("darkgrid", rc) assert out["axes.facecolor"] == "blue" assert "foo.notaparam" not in out def test_set_style(self): for style in self.styles: style_dict = rcmod.axes_style(style) rcmod.set_style(style) self.assert_rc_params(style_dict) def test_style_context_manager(self): rcmod.set_style("darkgrid") orig_params = rcmod.axes_style() context_params = rcmod.axes_style("whitegrid") with rcmod.axes_style("whitegrid"): self.assert_rc_params(context_params) self.assert_rc_params(orig_params) @rcmod.axes_style("whitegrid") def func(): self.assert_rc_params(context_params) 
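The context-manager behavior tested above (`with rcmod.axes_style("whitegrid"): ...` restoring the previous parameters on exit) follows a snapshot-and-restore pattern. A stripped-down, self-contained model of that pattern — `PARAMS` here is a hypothetical stand-in for matplotlib's `rcParams`, not seaborn's actual code:

```python
import contextlib

# Stand-in for a global parameter dict like mpl.rcParams
PARAMS = {"axes.facecolor": "white", "grid.linewidth": 1.0}

@contextlib.contextmanager
def param_context(overrides):
    # Snapshot current state, apply overrides, restore on exit
    saved = PARAMS.copy()
    PARAMS.update(overrides)
    try:
        yield
    finally:
        PARAMS.clear()
        PARAMS.update(saved)

with param_context({"axes.facecolor": "black"}):
    assert PARAMS["axes.facecolor"] == "black"
assert PARAMS["axes.facecolor"] == "white"
```

The `try/finally` is what guarantees the restore that `test_style_context_manager` asserts after the `with` block, even if the body raises.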
func() self.assert_rc_params(orig_params) def test_style_context_independence(self): assert set(rcmod._style_keys) ^ set(rcmod._context_keys) def test_set_rc(self): rcmod.set_theme(rc={"lines.linewidth": 4}) assert mpl.rcParams["lines.linewidth"] == 4 rcmod.set_theme() def test_set_with_palette(self): rcmod.reset_orig() rcmod.set_theme(palette="deep") assert utils.get_color_cycle() == palettes.color_palette("deep", 10) rcmod.reset_orig() rcmod.set_theme(palette="deep", color_codes=False) assert utils.get_color_cycle() == palettes.color_palette("deep", 10) rcmod.reset_orig() pal = palettes.color_palette("deep") rcmod.set_theme(palette=pal) assert utils.get_color_cycle() == palettes.color_palette("deep", 10) rcmod.reset_orig() rcmod.set_theme(palette=pal, color_codes=False) assert utils.get_color_cycle() == palettes.color_palette("deep", 10) rcmod.reset_orig() rcmod.set_theme() def test_reset_defaults(self): rcmod.reset_defaults() self.assert_rc_params(mpl.rcParamsDefault) rcmod.set_theme() def test_reset_orig(self): rcmod.reset_orig() self.assert_rc_params(mpl.rcParamsOrig) rcmod.set_theme() def test_set_is_alias(self): rcmod.set_theme(context="paper", style="white") params1 = mpl.rcParams.copy() rcmod.reset_orig() rcmod.set_theme(context="paper", style="white") params2 = mpl.rcParams.copy() self.assert_rc_params_equal(params1, params2) rcmod.set_theme() class TestPlottingContext(RCParamTester): contexts = ["paper", "notebook", "talk", "poster"] def test_default_return(self): current = rcmod.plotting_context() self.assert_rc_params(current) def test_key_usage(self): _context_keys = set(rcmod._context_keys) for context in self.contexts: missing = set(rcmod.plotting_context(context)) ^ _context_keys assert not missing def test_bad_context(self): with pytest.raises(ValueError): rcmod.plotting_context("i_am_not_a_context") def test_font_scale(self): notebook_ref = rcmod.plotting_context("notebook") notebook_big = rcmod.plotting_context("notebook", 2) font_keys = 
["axes.labelsize", "axes.titlesize", "legend.fontsize", "xtick.labelsize", "ytick.labelsize", "font.size"] if LooseVersion(mpl.__version__) >= "3.0": font_keys.append("legend.title_fontsize") for k in font_keys: assert notebook_ref[k] * 2 == notebook_big[k] def test_rc_override(self): key, val = "grid.linewidth", 5 rc = {key: val, "foo": "bar"} out = rcmod.plotting_context("talk", rc=rc) assert out[key] == val assert "foo" not in out def test_set_context(self): for context in self.contexts: context_dict = rcmod.plotting_context(context) rcmod.set_context(context) self.assert_rc_params(context_dict) def test_context_context_manager(self): rcmod.set_context("notebook") orig_params = rcmod.plotting_context() context_params = rcmod.plotting_context("paper") with rcmod.plotting_context("paper"): self.assert_rc_params(context_params) self.assert_rc_params(orig_params) @rcmod.plotting_context("paper") def func(): self.assert_rc_params(context_params) func() self.assert_rc_params(orig_params) class TestPalette: def test_set_palette(self): rcmod.set_palette("deep") assert utils.get_color_cycle() == palettes.color_palette("deep", 10) rcmod.set_palette("pastel6") assert utils.get_color_cycle() == palettes.color_palette("pastel6", 6) rcmod.set_palette("dark", 4) assert utils.get_color_cycle() == palettes.color_palette("dark", 4) rcmod.set_palette("Set2", color_codes=True) assert utils.get_color_cycle() == palettes.color_palette("Set2", 8) class TestFonts: _no_verdana = not has_verdana() @pytest.mark.skipif(_no_verdana, reason="Verdana font is not present") def test_set_font(self): rcmod.set_theme(font="Verdana") _, ax = plt.subplots() ax.set_xlabel("foo") assert ax.xaxis.label.get_fontname() == "Verdana" rcmod.set_theme() def test_set_serif_font(self): rcmod.set_theme(font="serif") _, ax = plt.subplots() ax.set_xlabel("foo") assert ax.xaxis.label.get_fontname() in mpl.rcParams["font.serif"] rcmod.set_theme() @pytest.mark.skipif(_no_verdana, reason="Verdana font is not 
present")
    def test_different_sans_serif(self):

        rcmod.set_theme()
        rcmod.set_style(rc={"font.sans-serif": ["Verdana"]})

        _, ax = plt.subplots()
        ax.set_xlabel("foo")

        assert ax.xaxis.label.get_fontname() == "Verdana"

        rcmod.set_theme()


# seaborn-0.11.2/seaborn/tests/test_regression.py

from distutils.version import LooseVersion

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd

import pytest
import numpy.testing as npt
try:
    import pandas.testing as pdt
except ImportError:
    import pandas.util.testing as pdt

try:
    import statsmodels.regression.linear_model as smlm
    _no_statsmodels = False
except ImportError:
    _no_statsmodels = True

from .. import regression as lm
from ..palettes import color_palette

rs = np.random.RandomState(0)


class TestLinearPlotter:

    rs = np.random.RandomState(77)
    df = pd.DataFrame(dict(x=rs.normal(size=60),
                           d=rs.randint(-2, 3, 60),
                           y=rs.gamma(4, size=60),
                           s=np.tile(list("abcdefghij"), 6)))
    df["z"] = df.y + rs.randn(60)
    df["y_na"] = df.y.copy()
    df.loc[[10, 20, 30], 'y_na'] = np.nan

    def test_establish_variables_from_frame(self):
        p = lm._LinearPlotter()
        p.establish_variables(self.df, x="x", y="y")
        pdt.assert_series_equal(p.x, self.df.x)
        pdt.assert_series_equal(p.y, self.df.y)
        pdt.assert_frame_equal(p.data, self.df)

    def test_establish_variables_from_series(self):
        p = lm._LinearPlotter()
        p.establish_variables(None, x=self.df.x, y=self.df.y)
        pdt.assert_series_equal(p.x, self.df.x)
        pdt.assert_series_equal(p.y, self.df.y)
        assert p.data is None

    def test_establish_variables_from_array(self):
        p = lm._LinearPlotter()
        p.establish_variables(None,
                              x=self.df.x.values,
                              y=self.df.y.values)
        npt.assert_array_equal(p.x, self.df.x)
        npt.assert_array_equal(p.y, self.df.y)
        assert p.data is None

    def test_establish_variables_from_lists(self):
        p = lm._LinearPlotter()
        p.establish_variables(None,
                              x=self.df.x.values.tolist(),
                              y=self.df.y.values.tolist())
        npt.assert_array_equal(p.x, self.df.x)
        npt.assert_array_equal(p.y, self.df.y)
        assert p.data is None

    def test_establish_variables_from_mix(self):

        p = lm._LinearPlotter()
        p.establish_variables(self.df, x="x", y=self.df.y)
        pdt.assert_series_equal(p.x, self.df.x)
        pdt.assert_series_equal(p.y, self.df.y)
        pdt.assert_frame_equal(p.data, self.df)

    def test_establish_variables_from_bad(self):

        p = lm._LinearPlotter()
        with pytest.raises(ValueError):
            p.establish_variables(None, x="x", y=self.df.y)

    def test_dropna(self):

        p = lm._LinearPlotter()
        p.establish_variables(self.df, x="x", y_na="y_na")
        pdt.assert_series_equal(p.x, self.df.x)
        pdt.assert_series_equal(p.y_na, self.df.y_na)

        p.dropna("x", "y_na")
        mask = self.df.y_na.notnull()
        pdt.assert_series_equal(p.x, self.df.x[mask])
        pdt.assert_series_equal(p.y_na, self.df.y_na[mask])


class TestRegressionPlotter:

    rs = np.random.RandomState(49)

    grid = np.linspace(-3, 3, 30)
    n_boot = 100
    bins_numeric = 3
    bins_given = [-1, 0, 1]

    df = pd.DataFrame(dict(x=rs.normal(size=60),
                           d=rs.randint(-2, 3, 60),
                           y=rs.gamma(4, size=60),
                           s=np.tile(list(range(6)), 10)))
    df["z"] = df.y + rs.randn(60)
    df["y_na"] = df.y.copy()

    bw_err = rs.randn(6)[df.s.values] * 2
    df.y += bw_err

    p = 1 / (1 + np.exp(-(df.x * 2 + rs.randn(60))))
    df["c"] = [rs.binomial(1, p_i) for p_i in p]

    df.loc[[10, 20, 30], 'y_na'] = np.nan

    def test_variables_from_frame(self):

        p = lm._RegressionPlotter("x", "y", data=self.df, units="s")
        pdt.assert_series_equal(p.x, self.df.x)
        pdt.assert_series_equal(p.y, self.df.y)
        pdt.assert_series_equal(p.units, self.df.s)
        pdt.assert_frame_equal(p.data, self.df)

    def test_variables_from_series(self):

        p = lm._RegressionPlotter(self.df.x, self.df.y, units=self.df.s)
        npt.assert_array_equal(p.x, self.df.x)
        npt.assert_array_equal(p.y, self.df.y)
        npt.assert_array_equal(p.units, self.df.s)
        assert p.data is None

    def test_variables_from_mix(self):

        p = lm._RegressionPlotter("x", self.df.y + 1, data=self.df)
        npt.assert_array_equal(p.x, self.df.x)
        npt.assert_array_equal(p.y, self.df.y + 1)
        pdt.assert_frame_equal(p.data, self.df)

    def test_variables_must_be_1d(self):

        array_2d = np.random.randn(20, 2)
        array_1d = np.random.randn(20)
        with pytest.raises(ValueError):
            lm._RegressionPlotter(array_2d, array_1d)
        with pytest.raises(ValueError):
            lm._RegressionPlotter(array_1d, array_2d)

    def test_dropna(self):

        p = lm._RegressionPlotter("x", "y_na", data=self.df)
        assert len(p.x) == pd.notnull(self.df.y_na).sum()

        p = lm._RegressionPlotter("x", "y_na", data=self.df, dropna=False)
        assert len(p.x) == len(self.df.y_na)

    @pytest.mark.parametrize("x,y",
                             [([1.5], [2]),
                              (np.array([1.5]), np.array([2])),
                              (pd.Series(1.5), pd.Series(2))])
    def test_singleton(self, x, y):
        p = lm._RegressionPlotter(x, y)
        assert not p.fit_reg

    def test_ci(self):

        p = lm._RegressionPlotter("x", "y", data=self.df, ci=95)
        assert p.ci == 95
        assert p.x_ci == 95

        p = lm._RegressionPlotter("x", "y", data=self.df, ci=95, x_ci=68)
        assert p.ci == 95
        assert p.x_ci == 68

        p = lm._RegressionPlotter("x", "y", data=self.df, ci=95, x_ci="sd")
        assert p.ci == 95
        assert p.x_ci == "sd"

    @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels")
    def test_fast_regression(self):

        p = lm._RegressionPlotter("x", "y", data=self.df, n_boot=self.n_boot)

        # Fit with the "fast" function, which just does linear algebra
        yhat_fast, _ = p.fit_fast(self.grid)

        # Fit using the statsmodels function with an OLS model
        yhat_smod, _ = p.fit_statsmodels(self.grid, smlm.OLS)

        # Compare the vector of y_hat values
        npt.assert_array_almost_equal(yhat_fast, yhat_smod)

    @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels")
    def test_regress_poly(self):

        p = lm._RegressionPlotter("x", "y", data=self.df, n_boot=self.n_boot)

        # Fit a first-order polynomial
        yhat_poly, _ = p.fit_poly(self.grid, 1)

        # Fit using the statsmodels function with an OLS model
        yhat_smod, _ = p.fit_statsmodels(self.grid, smlm.OLS)

        # Compare the vector of y_hat values
        npt.assert_array_almost_equal(yhat_poly, yhat_smod)

    def test_regress_logx(self):

        x = np.arange(1, 10)
        y = np.arange(1, 10)
        grid = np.linspace(1, 10, 100)
        p = lm._RegressionPlotter(x, y, n_boot=self.n_boot)

        yhat_lin, _ = p.fit_fast(grid)
        yhat_log, _ = p.fit_logx(grid)

        assert yhat_lin[0] > yhat_log[0]
        assert yhat_log[20] > yhat_lin[20]
        assert yhat_lin[90] > yhat_log[90]

    @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels")
    def test_regress_n_boot(self):

        p = lm._RegressionPlotter("x", "y", data=self.df, n_boot=self.n_boot)

        # Fast (linear algebra) version
        _, boots_fast = p.fit_fast(self.grid)
        npt.assert_equal(boots_fast.shape, (self.n_boot, self.grid.size))

        # Slower (np.polyfit) version
        _, boots_poly = p.fit_poly(self.grid, 1)
        npt.assert_equal(boots_poly.shape, (self.n_boot, self.grid.size))

        # Slowest (statsmodels) version
        _, boots_smod = p.fit_statsmodels(self.grid, smlm.OLS)
        npt.assert_equal(boots_smod.shape, (self.n_boot, self.grid.size))

    @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels")
    def test_regress_without_bootstrap(self):

        p = lm._RegressionPlotter("x", "y", data=self.df,
                                  n_boot=self.n_boot, ci=None)

        # Fast (linear algebra) version
        _, boots_fast = p.fit_fast(self.grid)
        assert boots_fast is None

        # Slower (np.polyfit) version
        _, boots_poly = p.fit_poly(self.grid, 1)
        assert boots_poly is None

        # Slowest (statsmodels) version
        _, boots_smod = p.fit_statsmodels(self.grid, smlm.OLS)
        assert boots_smod is None

    def test_regress_bootstrap_seed(self):

        seed = 200
        p1 = lm._RegressionPlotter("x", "y", data=self.df,
                                   n_boot=self.n_boot, seed=seed)
        p2 = lm._RegressionPlotter("x", "y", data=self.df,
                                   n_boot=self.n_boot, seed=seed)

        _, boots1 = p1.fit_fast(self.grid)
        _, boots2 = p2.fit_fast(self.grid)
        npt.assert_array_equal(boots1, boots2)

    def test_numeric_bins(self):

        p = lm._RegressionPlotter(self.df.x, self.df.y)
        x_binned, bins = p.bin_predictor(self.bins_numeric)
        npt.assert_equal(len(bins), self.bins_numeric)
        npt.assert_array_equal(np.unique(x_binned), bins)

    def test_provided_bins(self):

        p = lm._RegressionPlotter(self.df.x, self.df.y)
        x_binned, bins = p.bin_predictor(self.bins_given)
        npt.assert_array_equal(np.unique(x_binned), self.bins_given)

    def test_bin_results(self):

        p = lm._RegressionPlotter(self.df.x, self.df.y)
        x_binned, bins = p.bin_predictor(self.bins_given)
        assert self.df.x[x_binned == 0].min() > self.df.x[x_binned == -1].max()
        assert self.df.x[x_binned == 1].min() > self.df.x[x_binned == 0].max()

    def test_scatter_data(self):

        p = lm._RegressionPlotter(self.df.x, self.df.y)
        x, y = p.scatter_data
        npt.assert_array_equal(x, self.df.x)
        npt.assert_array_equal(y, self.df.y)

        p = lm._RegressionPlotter(self.df.d, self.df.y)
        x, y = p.scatter_data
        npt.assert_array_equal(x, self.df.d)
        npt.assert_array_equal(y, self.df.y)

        p = lm._RegressionPlotter(self.df.d, self.df.y, x_jitter=.1)
        x, y = p.scatter_data
        assert (x != self.df.d).any()
        npt.assert_array_less(np.abs(self.df.d - x),
                              np.repeat(.1, len(x)))
        npt.assert_array_equal(y, self.df.y)

        p = lm._RegressionPlotter(self.df.d, self.df.y, y_jitter=.05)
        x, y = p.scatter_data
        npt.assert_array_equal(x, self.df.d)
        npt.assert_array_less(np.abs(self.df.y - y),
                              np.repeat(.1, len(y)))

    def test_estimate_data(self):

        p = lm._RegressionPlotter(self.df.d, self.df.y, x_estimator=np.mean)

        x, y, ci = p.estimate_data

        npt.assert_array_equal(x, np.sort(np.unique(self.df.d)))
        npt.assert_array_almost_equal(y, self.df.groupby("d").y.mean())
        npt.assert_array_less(np.array(ci)[:, 0], y)
        npt.assert_array_less(y, np.array(ci)[:, 1])

    def test_estimate_cis(self):

        seed = 123

        p = lm._RegressionPlotter(self.df.d, self.df.y,
                                  x_estimator=np.mean, ci=95, seed=seed)
        _, _, ci_big = p.estimate_data

        p = lm._RegressionPlotter(self.df.d, self.df.y,
                                  x_estimator=np.mean, ci=50, seed=seed)
        _, _, ci_wee = p.estimate_data
        npt.assert_array_less(np.diff(ci_wee), np.diff(ci_big))

        p = lm._RegressionPlotter(self.df.d, self.df.y,
                                  x_estimator=np.mean, ci=None)
        _, _, ci_nil = p.estimate_data
        npt.assert_array_equal(ci_nil, [None] * len(ci_nil))

    def test_estimate_units(self):

        # Seed the RNG locally
        seed = 345

        p = lm._RegressionPlotter("x", "y", data=self.df,
                                  units="s", seed=seed, x_bins=3)
        _, _, ci_big = p.estimate_data
        ci_big = np.diff(ci_big, axis=1)

        p = lm._RegressionPlotter("x", "y", data=self.df, seed=seed, x_bins=3)
        _, _, ci_wee = p.estimate_data
        ci_wee = np.diff(ci_wee, axis=1)

        npt.assert_array_less(ci_wee, ci_big)

    def test_partial(self):

        x = self.rs.randn(100)
        y = x + self.rs.randn(100)
        z = x + self.rs.randn(100)

        p = lm._RegressionPlotter(y, z)
        _, r_orig = np.corrcoef(p.x, p.y)[0]

        p = lm._RegressionPlotter(y, z, y_partial=x)
        _, r_semipartial = np.corrcoef(p.x, p.y)[0]
        assert r_semipartial < r_orig

        p = lm._RegressionPlotter(y, z, x_partial=x, y_partial=x)
        _, r_partial = np.corrcoef(p.x, p.y)[0]
        assert r_partial < r_orig

        x = pd.Series(x)
        y = pd.Series(y)
        p = lm._RegressionPlotter(y, z, x_partial=x, y_partial=x)
        _, r_partial = np.corrcoef(p.x, p.y)[0]
        assert r_partial < r_orig

    @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels")
    def test_logistic_regression(self):

        p = lm._RegressionPlotter("x", "c", data=self.df,
                                  logistic=True, n_boot=self.n_boot)
        _, yhat, _ = p.fit_regression(x_range=(-3, 3))
        npt.assert_array_less(yhat, 1)
        npt.assert_array_less(0, yhat)

    @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels")
    def test_logistic_perfect_separation(self):

        y = self.df.x > self.df.x.mean()
        p = lm._RegressionPlotter("x", y, data=self.df,
                                  logistic=True, n_boot=10)
        with np.errstate(all="ignore"):
            _, yhat, _ = p.fit_regression(x_range=(-3, 3))
        assert np.isnan(yhat).all()

    @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels")
    def test_robust_regression(self):

        p_ols = lm._RegressionPlotter("x", "y", data=self.df,
                                      n_boot=self.n_boot)
        _, ols_yhat, _ = p_ols.fit_regression(x_range=(-3, 3))

        p_robust = lm._RegressionPlotter("x", "y", data=self.df,
                                         robust=True, n_boot=self.n_boot)
        _, robust_yhat, _ = p_robust.fit_regression(x_range=(-3, 3))

        assert len(ols_yhat) == len(robust_yhat)
    @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels")
    def test_lowess_regression(self):

        p = lm._RegressionPlotter("x", "y", data=self.df, lowess=True)
        grid, yhat, err_bands = p.fit_regression(x_range=(-3, 3))

        assert len(grid) == len(yhat)
        assert err_bands is None

    def test_regression_options(self):

        with pytest.raises(ValueError):
            lm._RegressionPlotter("x", "y", data=self.df,
                                  lowess=True, order=2)

        with pytest.raises(ValueError):
            lm._RegressionPlotter("x", "y", data=self.df,
                                  lowess=True, logistic=True)

    def test_regression_limits(self):

        f, ax = plt.subplots()
        ax.scatter(self.df.x, self.df.y)
        p = lm._RegressionPlotter("x", "y", data=self.df)
        grid, _, _ = p.fit_regression(ax)
        xlim = ax.get_xlim()
        assert grid.min() == xlim[0]
        assert grid.max() == xlim[1]

        p = lm._RegressionPlotter("x", "y", data=self.df, truncate=True)
        grid, _, _ = p.fit_regression()
        assert grid.min() == self.df.x.min()
        assert grid.max() == self.df.x.max()


class TestRegressionPlots:

    rs = np.random.RandomState(56)
    df = pd.DataFrame(dict(x=rs.randn(90),
                           y=rs.randn(90) + 5,
                           z=rs.randint(0, 1, 90),
                           g=np.repeat(list("abc"), 30),
                           h=np.tile(list("xy"), 45),
                           u=np.tile(np.arange(6), 15)))
    bw_err = rs.randn(6)[df.u.values]
    df.y += bw_err

    def test_regplot_basic(self):

        f, ax = plt.subplots()
        lm.regplot(x="x", y="y", data=self.df)
        assert len(ax.lines) == 1
        assert len(ax.collections) == 2

        x, y = ax.collections[0].get_offsets().T
        npt.assert_array_equal(x, self.df.x)
        npt.assert_array_equal(y, self.df.y)

    def test_regplot_selective(self):

        f, ax = plt.subplots()
        ax = lm.regplot(x="x", y="y", data=self.df, scatter=False, ax=ax)
        assert len(ax.lines) == 1
        assert len(ax.collections) == 1
        ax.clear()

        f, ax = plt.subplots()
        ax = lm.regplot(x="x", y="y", data=self.df, fit_reg=False)
        assert len(ax.lines) == 0
        assert len(ax.collections) == 1
        ax.clear()

        f, ax = plt.subplots()
        ax = lm.regplot(x="x", y="y", data=self.df, ci=None)
        assert len(ax.lines) == 1
        assert len(ax.collections) == 1
        ax.clear()

    def test_regplot_scatter_kws_alpha(self):

        f, ax = plt.subplots()
        color = np.array([[0.3, 0.8, 0.5, 0.5]])
        ax = lm.regplot(x="x", y="y", data=self.df,
                        scatter_kws={'color': color})
        assert ax.collections[0]._alpha is None
        assert ax.collections[0]._facecolors[0, 3] == 0.5

        f, ax = plt.subplots()
        color = np.array([[0.3, 0.8, 0.5]])
        ax = lm.regplot(x="x", y="y", data=self.df,
                        scatter_kws={'color': color})
        assert ax.collections[0]._alpha == 0.8

        f, ax = plt.subplots()
        color = np.array([[0.3, 0.8, 0.5]])
        ax = lm.regplot(x="x", y="y", data=self.df,
                        scatter_kws={'color': color, 'alpha': 0.4})
        assert ax.collections[0]._alpha == 0.4

        f, ax = plt.subplots()
        color = 'r'
        ax = lm.regplot(x="x", y="y", data=self.df,
                        scatter_kws={'color': color})
        assert ax.collections[0]._alpha == 0.8

    def test_regplot_binned(self):

        ax = lm.regplot(x="x", y="y", data=self.df, x_bins=5)
        assert len(ax.lines) == 6
        assert len(ax.collections) == 2

    def test_lmplot_no_data(self):

        with pytest.raises(TypeError):
            # keyword argument `data` is required
            lm.lmplot(x="x", y="y")

    def test_lmplot_basic(self):

        g = lm.lmplot(x="x", y="y", data=self.df)
        ax = g.axes[0, 0]
        assert len(ax.lines) == 1
        assert len(ax.collections) == 2

        x, y = ax.collections[0].get_offsets().T
        npt.assert_array_equal(x, self.df.x)
        npt.assert_array_equal(y, self.df.y)

    def test_lmplot_hue(self):

        g = lm.lmplot(x="x", y="y", data=self.df, hue="h")
        ax = g.axes[0, 0]

        assert len(ax.lines) == 2
        assert len(ax.collections) == 4

    def test_lmplot_markers(self):

        g1 = lm.lmplot(x="x", y="y", data=self.df, hue="h", markers="s")
        assert g1.hue_kws == {"marker": ["s", "s"]}

        g2 = lm.lmplot(x="x", y="y", data=self.df, hue="h",
                       markers=["o", "s"])
        assert g2.hue_kws == {"marker": ["o", "s"]}

        with pytest.raises(ValueError):
            lm.lmplot(x="x", y="y", data=self.df, hue="h",
                      markers=["o", "s", "d"])

    def test_lmplot_marker_linewidths(self):

        g = lm.lmplot(x="x", y="y", data=self.df, hue="h",
                      fit_reg=False, markers=["o", "+"])
        c = g.axes[0, 0].collections
        assert c[1].get_linewidths()[0] == mpl.rcParams["lines.linewidth"]

    def test_lmplot_facets(self):

        g = lm.lmplot(x="x", y="y", data=self.df, row="g", col="h")
        assert g.axes.shape == (3, 2)

        g = lm.lmplot(x="x", y="y", data=self.df, col="u", col_wrap=4)
        assert g.axes.shape == (6,)

        g = lm.lmplot(x="x", y="y", data=self.df, hue="h", col="u")
        assert g.axes.shape == (1, 6)

    def test_lmplot_hue_col_nolegend(self):

        g = lm.lmplot(x="x", y="y", data=self.df, col="h", hue="h")
        assert g._legend is None

    def test_lmplot_scatter_kws(self):

        g = lm.lmplot(x="x", y="y", hue="h", data=self.df, ci=None)
        red_scatter, blue_scatter = g.axes[0, 0].collections

        red, blue = color_palette(n_colors=2)
        npt.assert_array_equal(red, red_scatter.get_facecolors()[0, :3])
        npt.assert_array_equal(blue, blue_scatter.get_facecolors()[0, :3])

    @pytest.mark.skipif(LooseVersion(mpl.__version__) < "3.4",
                        reason="MPL bug #15967")
    @pytest.mark.parametrize("sharex", [True, False])
    def test_lmplot_facet_truncate(self, sharex):

        g = lm.lmplot(
            data=self.df, x="x", y="y", hue="g", col="h",
            truncate=False, facet_kws=dict(sharex=sharex),
        )

        for ax in g.axes.flat:
            for line in ax.lines:
                xdata = line.get_xdata()
                assert ax.get_xlim() == tuple(xdata[[0, -1]])

    def test_lmplot_sharey(self):

        df = pd.DataFrame(dict(
            x=[0, 1, 2, 0, 1, 2],
            y=[1, -1, 0, -100, 200, 0],
            z=["a", "a", "a", "b", "b", "b"],
        ))

        with pytest.warns(UserWarning):
            g = lm.lmplot(data=df, x="x", y="y", col="z", sharey=False)
        ax1, ax2 = g.axes.flat
        assert ax1.get_ylim()[0] > ax2.get_ylim()[0]
        assert ax1.get_ylim()[1] < ax2.get_ylim()[1]

    def test_lmplot_facet_kws(self):

        xlim = -4, 20
        g = lm.lmplot(
            data=self.df, x="x", y="y", col="h", facet_kws={"xlim": xlim}
        )
        for ax in g.axes.flat:
            assert ax.get_xlim() == xlim

    def test_residplot(self):

        x, y = self.df.x, self.df.y
        ax = lm.residplot(x=x, y=y)

        resid = y - np.polyval(np.polyfit(x, y, 1), x)
        x_plot, y_plot = ax.collections[0].get_offsets().T

        npt.assert_array_equal(x, x_plot)
        npt.assert_array_almost_equal(resid, y_plot)
    @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels")
    def test_residplot_lowess(self):

        ax = lm.residplot(x="x", y="y", data=self.df, lowess=True)
        assert len(ax.lines) == 2

        x, y = ax.lines[1].get_xydata().T
        npt.assert_array_equal(x, np.sort(self.df.x))

    def test_three_point_colors(self):

        x, y = np.random.randn(2, 3)
        ax = lm.regplot(x=x, y=y, color=(1, 0, 0))
        color = ax.collections[0].get_facecolors()
        npt.assert_almost_equal(color[0, :3], (1, 0, 0))

    def test_regplot_xlim(self):

        f, ax = plt.subplots()
        x, y1, y2 = np.random.randn(3, 50)
        lm.regplot(x=x, y=y1, truncate=False)
        lm.regplot(x=x, y=y2, truncate=False)
        line1, line2 = ax.lines
        assert np.array_equal(line1.get_xdata(), line2.get_xdata())

seaborn-0.11.2/seaborn/tests/test_relational.py

from itertools import product
import warnings

import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import same_color

import pytest
from numpy.testing import assert_array_equal

from ..palettes import color_palette

from ..relational import (
    _RelationalPlotter,
    _LinePlotter,
    _ScatterPlotter,
    relplot,
    lineplot,
    scatterplot
)


@pytest.fixture(params=[
    dict(x="x", y="y"),
    dict(x="t", y="y"),
    dict(x="a", y="y"),
    dict(x="x", y="y", hue="y"),
    dict(x="x", y="y", hue="a"),
    dict(x="x", y="y", size="a"),
    dict(x="x", y="y", style="a"),
    dict(x="x", y="y", hue="s"),
    dict(x="x", y="y", size="s"),
    dict(x="x", y="y", style="s"),
    dict(x="x", y="y", hue="a", style="a"),
    dict(x="x", y="y", hue="a", size="b", style="b"),
])
def long_semantics(request):
    return request.param


class Helpers:

    # TODO Better place for these?
    def scatter_rgbs(self, collections):
        rgbs = []
        for col in collections:
            rgb = tuple(col.get_facecolor().squeeze()[:3])
            rgbs.append(rgb)
        return rgbs

    def paths_equal(self, *args):

        equal = all([len(a) == len(args[0]) for a in args])

        for p1, p2 in zip(*args):
            equal &= np.array_equal(p1.vertices, p2.vertices)
            equal &= np.array_equal(p1.codes, p2.codes)
        return equal


class TestRelationalPlotter(Helpers):

    def test_wide_df_variables(self, wide_df):

        p = _RelationalPlotter()
        p.assign_variables(data=wide_df)
        assert p.input_format == "wide"
        assert list(p.variables) == ["x", "y", "hue", "style"]
        assert len(p.plot_data) == np.product(wide_df.shape)

        x = p.plot_data["x"]
        expected_x = np.tile(wide_df.index, wide_df.shape[1])
        assert_array_equal(x, expected_x)

        y = p.plot_data["y"]
        expected_y = wide_df.values.ravel(order="f")
        assert_array_equal(y, expected_y)

        hue = p.plot_data["hue"]
        expected_hue = np.repeat(wide_df.columns.values, wide_df.shape[0])
        assert_array_equal(hue, expected_hue)

        style = p.plot_data["style"]
        expected_style = expected_hue
        assert_array_equal(style, expected_style)

        assert p.variables["x"] == wide_df.index.name
        assert p.variables["y"] is None
        assert p.variables["hue"] == wide_df.columns.name
        assert p.variables["style"] == wide_df.columns.name

    def test_wide_df_with_nonnumeric_variables(self, long_df):

        p = _RelationalPlotter()
        p.assign_variables(data=long_df)
        assert p.input_format == "wide"
        assert list(p.variables) == ["x", "y", "hue", "style"]

        numeric_df = long_df.select_dtypes("number")

        assert len(p.plot_data) == np.product(numeric_df.shape)

        x = p.plot_data["x"]
        expected_x = np.tile(numeric_df.index, numeric_df.shape[1])
        assert_array_equal(x, expected_x)

        y = p.plot_data["y"]
        expected_y = numeric_df.values.ravel(order="f")
        assert_array_equal(y, expected_y)

        hue = p.plot_data["hue"]
        expected_hue = np.repeat(
            numeric_df.columns.values, numeric_df.shape[0]
        )
        assert_array_equal(hue, expected_hue)

        style = p.plot_data["style"]
        expected_style = expected_hue
        assert_array_equal(style, expected_style)

        assert p.variables["x"] == numeric_df.index.name
        assert p.variables["y"] is None
        assert p.variables["hue"] == numeric_df.columns.name
        assert p.variables["style"] == numeric_df.columns.name

    def test_wide_array_variables(self, wide_array):

        p = _RelationalPlotter()
        p.assign_variables(data=wide_array)
        assert p.input_format == "wide"
        assert list(p.variables) == ["x", "y", "hue", "style"]
        assert len(p.plot_data) == np.product(wide_array.shape)

        nrow, ncol = wide_array.shape

        x = p.plot_data["x"]
        expected_x = np.tile(np.arange(nrow), ncol)
        assert_array_equal(x, expected_x)

        y = p.plot_data["y"]
        expected_y = wide_array.ravel(order="f")
        assert_array_equal(y, expected_y)

        hue = p.plot_data["hue"]
        expected_hue = np.repeat(np.arange(ncol), nrow)
        assert_array_equal(hue, expected_hue)

        style = p.plot_data["style"]
        expected_style = expected_hue
        assert_array_equal(style, expected_style)

        assert p.variables["x"] is None
        assert p.variables["y"] is None
        assert p.variables["hue"] is None
        assert p.variables["style"] is None

    def test_flat_array_variables(self, flat_array):

        p = _RelationalPlotter()
        p.assign_variables(data=flat_array)
        assert p.input_format == "wide"
        assert list(p.variables) == ["x", "y"]
        assert len(p.plot_data) == np.product(flat_array.shape)

        x = p.plot_data["x"]
        expected_x = np.arange(flat_array.shape[0])
        assert_array_equal(x, expected_x)

        y = p.plot_data["y"]
        expected_y = flat_array
        assert_array_equal(y, expected_y)

        assert p.variables["x"] is None
        assert p.variables["y"] is None

    def test_flat_list_variables(self, flat_list):

        p = _RelationalPlotter()
        p.assign_variables(data=flat_list)
        assert p.input_format == "wide"
        assert list(p.variables) == ["x", "y"]
        assert len(p.plot_data) == len(flat_list)

        x = p.plot_data["x"]
        expected_x = np.arange(len(flat_list))
        assert_array_equal(x, expected_x)

        y = p.plot_data["y"]
        expected_y = flat_list
        assert_array_equal(y, expected_y)

        assert p.variables["x"] is None
        assert p.variables["y"] is None

    def test_flat_series_variables(self, flat_series):

        p = _RelationalPlotter()
        p.assign_variables(data=flat_series)
        assert p.input_format == "wide"
        assert list(p.variables) == ["x", "y"]
        assert len(p.plot_data) == len(flat_series)

        x = p.plot_data["x"]
        expected_x = flat_series.index
        assert_array_equal(x, expected_x)

        y = p.plot_data["y"]
        expected_y = flat_series
        assert_array_equal(y, expected_y)

        assert p.variables["x"] is flat_series.index.name
        assert p.variables["y"] is flat_series.name

    def test_wide_list_of_series_variables(self, wide_list_of_series):

        p = _RelationalPlotter()
        p.assign_variables(data=wide_list_of_series)
        assert p.input_format == "wide"
        assert list(p.variables) == ["x", "y", "hue", "style"]

        chunks = len(wide_list_of_series)
        chunk_size = max(len(l) for l in wide_list_of_series)

        assert len(p.plot_data) == chunks * chunk_size

        index_union = np.unique(
            np.concatenate([s.index for s in wide_list_of_series])
        )

        x = p.plot_data["x"]
        expected_x = np.tile(index_union, chunks)
        assert_array_equal(x, expected_x)

        y = p.plot_data["y"]
        expected_y = np.concatenate([
            s.reindex(index_union) for s in wide_list_of_series
        ])
        assert_array_equal(y, expected_y)

        hue = p.plot_data["hue"]
        series_names = [s.name for s in wide_list_of_series]
        expected_hue = np.repeat(series_names, chunk_size)
        assert_array_equal(hue, expected_hue)

        style = p.plot_data["style"]
        expected_style = expected_hue
        assert_array_equal(style, expected_style)

        assert p.variables["x"] is None
        assert p.variables["y"] is None
        assert p.variables["hue"] is None
        assert p.variables["style"] is None

    def test_wide_list_of_arrays_variables(self, wide_list_of_arrays):

        p = _RelationalPlotter()
        p.assign_variables(data=wide_list_of_arrays)
        assert p.input_format == "wide"
        assert list(p.variables) == ["x", "y", "hue", "style"]

        chunks = len(wide_list_of_arrays)
        chunk_size = max(len(l) for l in wide_list_of_arrays)

        assert len(p.plot_data) == chunks * chunk_size

        x = p.plot_data["x"]
        expected_x = np.tile(np.arange(chunk_size), chunks)
        assert_array_equal(x, expected_x)

        y = p.plot_data["y"].dropna()
        expected_y = np.concatenate(wide_list_of_arrays)
        assert_array_equal(y, expected_y)

        hue = p.plot_data["hue"]
        expected_hue = np.repeat(np.arange(chunks), chunk_size)
        assert_array_equal(hue, expected_hue)

        style = p.plot_data["style"]
        expected_style = expected_hue
        assert_array_equal(style, expected_style)

        assert p.variables["x"] is None
        assert p.variables["y"] is None
        assert p.variables["hue"] is None
        assert p.variables["style"] is None

    def test_wide_list_of_list_variables(self, wide_list_of_lists):

        p = _RelationalPlotter()
        p.assign_variables(data=wide_list_of_lists)
        assert p.input_format == "wide"
        assert list(p.variables) == ["x", "y", "hue", "style"]

        chunks = len(wide_list_of_lists)
        chunk_size = max(len(l) for l in wide_list_of_lists)

        assert len(p.plot_data) == chunks * chunk_size

        x = p.plot_data["x"]
        expected_x = np.tile(np.arange(chunk_size), chunks)
        assert_array_equal(x, expected_x)

        y = p.plot_data["y"].dropna()
        expected_y = np.concatenate(wide_list_of_lists)
        assert_array_equal(y, expected_y)

        hue = p.plot_data["hue"]
        expected_hue = np.repeat(np.arange(chunks), chunk_size)
        assert_array_equal(hue, expected_hue)

        style = p.plot_data["style"]
        expected_style = expected_hue
        assert_array_equal(style, expected_style)

        assert p.variables["x"] is None
        assert p.variables["y"] is None
        assert p.variables["hue"] is None
        assert p.variables["style"] is None

    def test_wide_dict_of_series_variables(self, wide_dict_of_series):

        p = _RelationalPlotter()
        p.assign_variables(data=wide_dict_of_series)
        assert p.input_format == "wide"
        assert list(p.variables) == ["x", "y", "hue", "style"]

        chunks = len(wide_dict_of_series)
        chunk_size = max(len(l) for l in wide_dict_of_series.values())

        assert len(p.plot_data) == chunks * chunk_size

        x = p.plot_data["x"]
        expected_x = np.tile(np.arange(chunk_size), chunks)
        assert_array_equal(x, expected_x)

        y = p.plot_data["y"].dropna()
        expected_y = np.concatenate(list(wide_dict_of_series.values()))
assert_array_equal(y, expected_y) hue = p.plot_data["hue"] expected_hue = np.repeat(list(wide_dict_of_series), chunk_size) assert_array_equal(hue, expected_hue) style = p.plot_data["style"] expected_style = expected_hue assert_array_equal(style, expected_style) assert p.variables["x"] is None assert p.variables["y"] is None assert p.variables["hue"] is None assert p.variables["style"] is None def test_wide_dict_of_arrays_variables(self, wide_dict_of_arrays): p = _RelationalPlotter() p.assign_variables(data=wide_dict_of_arrays) assert p.input_format == "wide" assert list(p.variables) == ["x", "y", "hue", "style"] chunks = len(wide_dict_of_arrays) chunk_size = max(len(l) for l in wide_dict_of_arrays.values()) assert len(p.plot_data) == chunks * chunk_size x = p.plot_data["x"] expected_x = np.tile(np.arange(chunk_size), chunks) assert_array_equal(x, expected_x) y = p.plot_data["y"].dropna() expected_y = np.concatenate(list(wide_dict_of_arrays.values())) assert_array_equal(y, expected_y) hue = p.plot_data["hue"] expected_hue = np.repeat(list(wide_dict_of_arrays), chunk_size) assert_array_equal(hue, expected_hue) style = p.plot_data["style"] expected_style = expected_hue assert_array_equal(style, expected_style) assert p.variables["x"] is None assert p.variables["y"] is None assert p.variables["hue"] is None assert p.variables["style"] is None def test_wide_dict_of_lists_variables(self, wide_dict_of_lists): p = _RelationalPlotter() p.assign_variables(data=wide_dict_of_lists) assert p.input_format == "wide" assert list(p.variables) == ["x", "y", "hue", "style"] chunks = len(wide_dict_of_lists) chunk_size = max(len(l) for l in wide_dict_of_lists.values()) assert len(p.plot_data) == chunks * chunk_size x = p.plot_data["x"] expected_x = np.tile(np.arange(chunk_size), chunks) assert_array_equal(x, expected_x) y = p.plot_data["y"].dropna() expected_y = np.concatenate(list(wide_dict_of_lists.values())) assert_array_equal(y, expected_y) hue = p.plot_data["hue"] expected_hue = 
np.repeat(list(wide_dict_of_lists), chunk_size) assert_array_equal(hue, expected_hue) style = p.plot_data["style"] expected_style = expected_hue assert_array_equal(style, expected_style) assert p.variables["x"] is None assert p.variables["y"] is None assert p.variables["hue"] is None assert p.variables["style"] is None def test_long_df(self, long_df, long_semantics): p = _RelationalPlotter(data=long_df, variables=long_semantics) assert p.input_format == "long" assert p.variables == long_semantics for key, val in long_semantics.items(): assert_array_equal(p.plot_data[key], long_df[val]) def test_long_df_with_index(self, long_df, long_semantics): p = _RelationalPlotter( data=long_df.set_index("a"), variables=long_semantics, ) assert p.input_format == "long" assert p.variables == long_semantics for key, val in long_semantics.items(): assert_array_equal(p.plot_data[key], long_df[val]) def test_long_df_with_multiindex(self, long_df, long_semantics): p = _RelationalPlotter( data=long_df.set_index(["a", "x"]), variables=long_semantics, ) assert p.input_format == "long" assert p.variables == long_semantics for key, val in long_semantics.items(): assert_array_equal(p.plot_data[key], long_df[val]) def test_long_dict(self, long_dict, long_semantics): p = _RelationalPlotter( data=long_dict, variables=long_semantics, ) assert p.input_format == "long" assert p.variables == long_semantics for key, val in long_semantics.items(): assert_array_equal(p.plot_data[key], pd.Series(long_dict[val])) @pytest.mark.parametrize( "vector_type", ["series", "numpy", "list"], ) def test_long_vectors(self, long_df, long_semantics, vector_type): variables = {key: long_df[val] for key, val in long_semantics.items()} if vector_type == "numpy": # Requires pandas >= 0.24 # {key: val.to_numpy() for key, val in variables.items()} variables = { key: np.asarray(val) for key, val in variables.items() } elif vector_type == "list": # Requires pandas >= 0.24 # {key: val.to_list() for key, val in 
variables.items()} variables = { key: val.tolist() for key, val in variables.items() } p = _RelationalPlotter(variables=variables) assert p.input_format == "long" assert list(p.variables) == list(long_semantics) if vector_type == "series": assert p.variables == long_semantics for key, val in long_semantics.items(): assert_array_equal(p.plot_data[key], long_df[val]) def test_long_undefined_variables(self, long_df): p = _RelationalPlotter() with pytest.raises(ValueError): p.assign_variables( data=long_df, variables=dict(x="not_in_df"), ) with pytest.raises(ValueError): p.assign_variables( data=long_df, variables=dict(x="x", y="not_in_df"), ) with pytest.raises(ValueError): p.assign_variables( data=long_df, variables=dict(x="x", y="y", hue="not_in_df"), ) @pytest.mark.parametrize( "arg", [[], np.array([]), pd.DataFrame()], ) def test_empty_data_input(self, arg): p = _RelationalPlotter(data=arg) assert not p.variables if not isinstance(arg, pd.DataFrame): p = _RelationalPlotter(variables=dict(x=arg, y=arg)) assert not p.variables def test_units(self, repeated_df): p = _RelationalPlotter( data=repeated_df, variables=dict(x="x", y="y", units="u"), ) assert_array_equal(p.plot_data["units"], repeated_df["u"]) def test_relplot_simple(self, long_df): g = relplot(data=long_df, x="x", y="y", kind="scatter") x, y = g.ax.collections[0].get_offsets().T assert_array_equal(x, long_df["x"]) assert_array_equal(y, long_df["y"]) g = relplot(data=long_df, x="x", y="y", kind="line") x, y = g.ax.lines[0].get_xydata().T expected = long_df.groupby("x").y.mean() assert_array_equal(x, expected.index) assert y == pytest.approx(expected.values) with pytest.raises(ValueError): g = relplot(data=long_df, x="x", y="y", kind="not_a_kind") def test_relplot_complex(self, long_df): for sem in ["hue", "size", "style"]: g = relplot(data=long_df, x="x", y="y", **{sem: "a"}) x, y = g.ax.collections[0].get_offsets().T assert_array_equal(x, long_df["x"]) assert_array_equal(y, long_df["y"]) for sem in ["hue", 
"size", "style"]: g = relplot( data=long_df, x="x", y="y", col="c", **{sem: "a"} ) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): x, y = ax.collections[0].get_offsets().T assert_array_equal(x, grp_df["x"]) assert_array_equal(y, grp_df["y"]) for sem in ["size", "style"]: g = relplot( data=long_df, x="x", y="y", hue="b", col="c", **{sem: "a"} ) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): x, y = ax.collections[0].get_offsets().T assert_array_equal(x, grp_df["x"]) assert_array_equal(y, grp_df["y"]) for sem in ["hue", "size", "style"]: g = relplot( data=long_df.sort_values(["c", "b"]), x="x", y="y", col="b", row="c", **{sem: "a"} ) grouped = long_df.groupby(["c", "b"]) for (_, grp_df), ax in zip(grouped, g.axes.flat): x, y = ax.collections[0].get_offsets().T assert_array_equal(x, grp_df["x"]) assert_array_equal(y, grp_df["y"]) @pytest.mark.parametrize( "vector_type", ["series", "numpy", "list"], ) def test_relplot_vectors(self, long_df, vector_type): semantics = dict(x="x", y="y", hue="f", col="c") kws = {key: long_df[val] for key, val in semantics.items()} g = relplot(data=long_df, **kws) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): x, y = ax.collections[0].get_offsets().T assert_array_equal(x, grp_df["x"]) assert_array_equal(y, grp_df["y"]) def test_relplot_wide(self, wide_df): g = relplot(data=wide_df) x, y = g.ax.collections[0].get_offsets().T assert_array_equal(y, wide_df.values.T.ravel()) def test_relplot_hues(self, long_df): palette = ["r", "b", "g"] g = relplot( x="x", y="y", hue="a", style="b", col="c", palette=palette, data=long_df ) palette = dict(zip(long_df["a"].unique(), palette)) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): points = ax.collections[0] expected_hues = [palette[val] for val in grp_df["a"]] assert same_color(points.get_facecolors(), expected_hues) def test_relplot_sizes(self, long_df): sizes = [5, 12, 
7] g = relplot( data=long_df, x="x", y="y", size="a", hue="b", col="c", sizes=sizes, ) sizes = dict(zip(long_df["a"].unique(), sizes)) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): points = ax.collections[0] expected_sizes = [sizes[val] for val in grp_df["a"]] assert_array_equal(points.get_sizes(), expected_sizes) def test_relplot_styles(self, long_df): markers = ["o", "d", "s"] g = relplot( data=long_df, x="x", y="y", style="a", hue="b", col="c", markers=markers, ) paths = [] for m in markers: m = mpl.markers.MarkerStyle(m) paths.append(m.get_path().transformed(m.get_transform())) paths = dict(zip(long_df["a"].unique(), paths)) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): points = ax.collections[0] expected_paths = [paths[val] for val in grp_df["a"]] assert self.paths_equal(points.get_paths(), expected_paths) def test_relplot_stringy_numerics(self, long_df): long_df["x_str"] = long_df["x"].astype(str) g = relplot(data=long_df, x="x", y="y", hue="x_str") points = g.ax.collections[0] xys = points.get_offsets() mask = np.ma.getmask(xys) assert not mask.any() assert_array_equal(xys, long_df[["x", "y"]]) g = relplot(data=long_df, x="x", y="y", size="x_str") points = g.ax.collections[0] xys = points.get_offsets() mask = np.ma.getmask(xys) assert not mask.any() assert_array_equal(xys, long_df[["x", "y"]]) def test_relplot_legend(self, long_df): g = relplot(data=long_df, x="x", y="y") assert g._legend is None g = relplot(data=long_df, x="x", y="y", hue="a") texts = [t.get_text() for t in g._legend.texts] expected_texts = long_df["a"].unique() assert_array_equal(texts, expected_texts) g = relplot(data=long_df, x="x", y="y", hue="s", size="s") texts = [t.get_text() for t in g._legend.texts] assert_array_equal(texts, np.sort(texts)) g = relplot(data=long_df, x="x", y="y", hue="a", legend=False) assert g._legend is None palette = color_palette("deep", len(long_df["b"].unique())) a_like_b = 
dict(zip(long_df["a"].unique(), long_df["b"].unique())) long_df["a_like_b"] = long_df["a"].map(a_like_b) g = relplot( data=long_df, x="x", y="y", hue="b", style="a_like_b", palette=palette, kind="line", estimator=None, ) lines = g._legend.get_lines()[1:] # Chop off title dummy for line, color in zip(lines, palette): assert line.get_color() == color def test_relplot_data(self, long_df): g = relplot( data=long_df.to_dict(orient="list"), x="x", y=long_df["y"].rename("y_var"), hue=np.asarray(long_df["a"]), col="c", ) expected_cols = set(long_df.columns.tolist() + ["_hue_", "y_var"]) assert set(g.data.columns) == expected_cols assert_array_equal(g.data["y_var"], long_df["y"]) assert_array_equal(g.data["_hue_"], long_df["a"]) def test_facet_variable_collision(self, long_df): # https://github.com/mwaskom/seaborn/issues/2488 col_data = long_df["c"] long_df = long_df.assign(size=col_data) g = relplot( data=long_df, x="x", y="y", col="size", ) assert g.axes.shape == (1, len(col_data.unique())) def test_ax_kwarg_removal(self, long_df): f, ax = plt.subplots() with pytest.warns(UserWarning): g = relplot(data=long_df, x="x", y="y", ax=ax) assert len(ax.collections) == 0 assert len(g.ax.collections) > 0 class TestLinePlotter(Helpers): def test_aggregate(self, long_df): p = _LinePlotter(data=long_df, variables=dict(x="x", y="y")) p.n_boot = 10000 p.sort = False x = pd.Series(np.tile([1, 2], 100)) y = pd.Series(np.random.randn(200)) y_mean = y.groupby(x).mean() def sem(x): return np.std(x) / np.sqrt(len(x)) y_sem = y.groupby(x).apply(sem) y_cis = pd.DataFrame(dict(low=y_mean - y_sem, high=y_mean + y_sem), columns=["low", "high"]) p.ci = 68 p.estimator = "mean" index, est, cis = p.aggregate(y, x) assert_array_equal(index.values, x.unique()) assert est.index.equals(index) assert est.values == pytest.approx(y_mean.values) assert cis.values == pytest.approx(y_cis.values, 4) assert list(cis.columns) == ["low", "high"] p.estimator = np.mean index, est, cis = p.aggregate(y, x) 
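A minimal numpy/pandas sketch of the estimate-plus-SEM band that `test_aggregate` compares against when `ci=68` (the helper name and return layout are illustrative, not seaborn API):

```python
import numpy as np
import pandas as pd

# Sketch (not seaborn's API): a 68% CI around a mean is ~= mean +/- 1 SEM,
# computed per unique x value, mirroring the `sem` helper in the test above.
def mean_and_sem_band(x, y):
    grouped = pd.Series(y).groupby(pd.Series(x))
    mean = grouped.mean()
    sem = grouped.apply(lambda g: np.std(g) / np.sqrt(len(g)))
    return pd.DataFrame({"est": mean, "low": mean - sem, "high": mean + sem})

band = mean_and_sem_band([1, 1, 2, 2], [1.0, 3.0, 4.0, 8.0])
```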
assert_array_equal(index.values, x.unique()) assert est.index.equals(index) assert est.values == pytest.approx(y_mean.values) assert cis.values == pytest.approx(y_cis.values, 4) assert list(cis.columns) == ["low", "high"] p.seed = 0 _, _, ci1 = p.aggregate(y, x) _, _, ci2 = p.aggregate(y, x) assert_array_equal(ci1, ci2) y_std = y.groupby(x).std() y_cis = pd.DataFrame(dict(low=y_mean - y_std, high=y_mean + y_std), columns=["low", "high"]) p.ci = "sd" index, est, cis = p.aggregate(y, x) assert_array_equal(index.values, x.unique()) assert est.index.equals(index) assert est.values == pytest.approx(y_mean.values) assert cis.values == pytest.approx(y_cis.values) assert list(cis.columns) == ["low", "high"] p.ci = None index, est, cis = p.aggregate(y, x) assert cis is None p.ci = 68 x, y = pd.Series([1, 2, 3]), pd.Series([4, 3, 2]) index, est, cis = p.aggregate(y, x) assert_array_equal(index.values, x) assert_array_equal(est.values, y) assert cis is None x, y = pd.Series([1, 1, 2]), pd.Series([2, 3, 4]) index, est, cis = p.aggregate(y, x) assert cis.loc[2].isnull().all() p = _LinePlotter(data=long_df, variables=dict(x="x", y="y")) p.estimator = "mean" p.n_boot = 100 p.ci = 95 x = pd.Categorical(["a", "b", "a", "b"], ["a", "b", "c"]) y = pd.Series([1, 1, 2, 2]) with warnings.catch_warnings(): warnings.simplefilter("error", RuntimeWarning) index, est, cis = p.aggregate(y, x) assert cis.loc[["c"]].isnull().all().all() def test_legend_data(self, long_df): f, ax = plt.subplots() p = _LinePlotter( data=long_df, variables=dict(x="x", y="y"), legend="full" ) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert handles == [] # -- ax.clear() p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue="a"), legend="full", ) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_color() for h in handles] assert labels == p._hue_map.levels assert colors == p._hue_map(p._hue_map.levels) # -- ax.clear() p = _LinePlotter( 
data=long_df, variables=dict(x="x", y="y", hue="a", style="a"), legend="full", ) p.map_style(markers=True) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_color() for h in handles] markers = [h.get_marker() for h in handles] assert labels == p._hue_map.levels assert labels == p._style_map.levels assert colors == p._hue_map(p._hue_map.levels) assert markers == p._style_map(p._style_map.levels, "marker") # -- ax.clear() p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue="a", style="b"), legend="full", ) p.map_style(markers=True) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_color() for h in handles] markers = [h.get_marker() for h in handles] expected_labels = ( ["a"] + p._hue_map.levels + ["b"] + p._style_map.levels ) expected_colors = ( ["w"] + p._hue_map(p._hue_map.levels) + ["w"] + [".2" for _ in p._style_map.levels] ) expected_markers = ( [""] + ["None" for _ in p._hue_map.levels] + [""] + p._style_map(p._style_map.levels, "marker") ) assert labels == expected_labels assert colors == expected_colors assert markers == expected_markers # -- ax.clear() p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue="a", size="a"), legend="full" ) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_color() for h in handles] widths = [h.get_linewidth() for h in handles] assert labels == p._hue_map.levels assert labels == p._size_map.levels assert colors == p._hue_map(p._hue_map.levels) assert widths == p._size_map(p._size_map.levels) # -- x, y = np.random.randn(2, 40) z = np.tile(np.arange(20), 2) p = _LinePlotter(variables=dict(x=x, y=y, hue=z)) ax.clear() p.legend = "full" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert labels == [str(l) for l in p._hue_map.levels] ax.clear() p.legend = "brief" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert len(labels) < len(p._hue_map.levels) 
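The "brief" legends asserted on above keep only a handful of round values out of many numeric levels; seaborn delegates that choice to a matplotlib locator. A standalone sketch of the idea (not seaborn's exact code path; the helper name is illustrative):

```python
import numpy as np
import matplotlib.ticker as mticker

# Sketch: subsample many numeric levels into a few "nice" legend entries
# using MaxNLocator, clipped to the observed data range.
def brief_legend_values(levels, nbins=6):
    levels = np.asarray(levels, dtype=float)
    ticks = mticker.MaxNLocator(nbins=nbins).tick_values(levels.min(), levels.max())
    # Keep only ticks that fall inside the observed data range
    return ticks[(ticks >= levels.min()) & (ticks <= levels.max())]

vals = brief_legend_values(np.arange(20))
```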
p = _LinePlotter(variables=dict(x=x, y=y, size=z)) ax.clear() p.legend = "full" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert labels == [str(l) for l in p._size_map.levels] ax.clear() p.legend = "brief" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert len(labels) < len(p._size_map.levels) ax.clear() p.legend = "auto" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert len(labels) < len(p._size_map.levels) ax.clear() p.legend = True p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert len(labels) < len(p._size_map.levels) ax.clear() p.legend = "bad_value" with pytest.raises(ValueError): p.add_legend_data(ax) ax.clear() p = _LinePlotter( variables=dict(x=x, y=y, hue=z + 1), legend="brief" ) p.map_hue(norm=mpl.colors.LogNorm()), p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert float(labels[1]) / float(labels[0]) == 10 ax.clear() p = _LinePlotter( variables=dict(x=x, y=y, hue=z % 2), legend="auto" ) p.map_hue(norm=mpl.colors.LogNorm()), p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert labels == ["0", "1"] ax.clear() p = _LinePlotter( variables=dict(x=x, y=y, size=z + 1), legend="brief" ) p.map_size(norm=mpl.colors.LogNorm()) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert float(labels[1]) / float(labels[0]) == 10 ax.clear() p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue="f"), legend="brief", ) p.add_legend_data(ax) expected_labels = ['0.20', '0.22', '0.24', '0.26', '0.28'] handles, labels = ax.get_legend_handles_labels() assert labels == expected_labels ax.clear() p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", size="f"), legend="brief", ) p.add_legend_data(ax) expected_levels = ['0.20', '0.22', '0.24', '0.26', '0.28'] handles, labels = ax.get_legend_handles_labels() assert labels == expected_levels def test_plot(self, long_df, repeated_df): 
f, ax = plt.subplots() p = _LinePlotter( data=long_df, variables=dict(x="x", y="y"), sort=False, estimator=None ) p.plot(ax, {}) line, = ax.lines assert_array_equal(line.get_xdata(), long_df.x.values) assert_array_equal(line.get_ydata(), long_df.y.values) ax.clear() p.plot(ax, {"color": "k", "label": "test"}) line, = ax.lines assert line.get_color() == "k" assert line.get_label() == "test" p = _LinePlotter( data=long_df, variables=dict(x="x", y="y"), sort=True, estimator=None ) ax.clear() p.plot(ax, {}) line, = ax.lines sorted_data = long_df.sort_values(["x", "y"]) assert_array_equal(line.get_xdata(), sorted_data.x.values) assert_array_equal(line.get_ydata(), sorted_data.y.values) p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue="a"), ) ax.clear() p.plot(ax, {}) assert len(ax.lines) == len(p._hue_map.levels) for line, level in zip(ax.lines, p._hue_map.levels): assert line.get_color() == p._hue_map(level) p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", size="a"), ) ax.clear() p.plot(ax, {}) assert len(ax.lines) == len(p._size_map.levels) for line, level in zip(ax.lines, p._size_map.levels): assert line.get_linewidth() == p._size_map(level) p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue="a", style="a"), ) p.map_style(markers=True) ax.clear() p.plot(ax, {}) assert len(ax.lines) == len(p._hue_map.levels) assert len(ax.lines) == len(p._style_map.levels) for line, level in zip(ax.lines, p._hue_map.levels): assert line.get_color() == p._hue_map(level) assert line.get_marker() == p._style_map(level, "marker") p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue="a", style="b"), ) p.map_style(markers=True) ax.clear() p.plot(ax, {}) levels = product(p._hue_map.levels, p._style_map.levels) expected_line_count = len(p._hue_map.levels) * len(p._style_map.levels) assert len(ax.lines) == expected_line_count for line, (hue, style) in zip(ax.lines, levels): assert line.get_color() == p._hue_map(hue) assert line.get_marker() 
== p._style_map(style, "marker") p = _LinePlotter( data=long_df, variables=dict(x="x", y="y"), estimator="mean", err_style="band", ci="sd", sort=True ) ax.clear() p.plot(ax, {}) line, = ax.lines expected_data = long_df.groupby("x").y.mean() assert_array_equal(line.get_xdata(), expected_data.index.values) assert np.allclose(line.get_ydata(), expected_data.values) assert len(ax.collections) == 1 # Test that nans do not propagate to means or CIs p = _LinePlotter( variables=dict( x=[1, 1, 1, 2, 2, 2, 3, 3, 3], y=[1, 2, 3, 3, np.nan, 5, 4, 5, 6], ), estimator="mean", err_style="band", ci=95, n_boot=100, sort=True, ) ax.clear() p.plot(ax, {}) line, = ax.lines assert line.get_xdata().tolist() == [1, 2, 3] err_band = ax.collections[0].get_paths() assert len(err_band) == 1 assert len(err_band[0].vertices) == 9 p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue="a"), estimator="mean", err_style="band", ci="sd" ) ax.clear() p.plot(ax, {}) assert len(ax.lines) == len(ax.collections) == len(p._hue_map.levels) for c in ax.collections: assert isinstance(c, mpl.collections.PolyCollection) p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue="a"), estimator="mean", err_style="bars", ci="sd" ) ax.clear() p.plot(ax, {}) n_lines = len(ax.lines) assert n_lines / 2 == len(ax.collections) == len(p._hue_map.levels) assert len(ax.collections) == len(p._hue_map.levels) for c in ax.collections: assert isinstance(c, mpl.collections.LineCollection) p = _LinePlotter( data=repeated_df, variables=dict(x="x", y="y", units="u"), estimator=None ) ax.clear() p.plot(ax, {}) n_units = len(repeated_df["u"].unique()) assert len(ax.lines) == n_units p = _LinePlotter( data=repeated_df, variables=dict(x="x", y="y", hue="a", units="u"), estimator=None ) ax.clear() p.plot(ax, {}) n_units *= len(repeated_df["a"].unique()) assert len(ax.lines) == n_units p.estimator = "mean" with pytest.raises(ValueError): p.plot(ax, {}) p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", 
hue="a"), err_style="band", err_kws={"alpha": .5}, ) ax.clear() p.plot(ax, {}) for band in ax.collections: assert band.get_alpha() == .5 p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue="a"), err_style="bars", err_kws={"elinewidth": 2}, ) ax.clear() p.plot(ax, {}) for lines in ax.collections: assert lines.get_linewidths() == 2 p.err_style = "invalid" with pytest.raises(ValueError): p.plot(ax, {}) x_str = long_df["x"].astype(str) p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", hue=x_str), ) ax.clear() p.plot(ax, {}) p = _LinePlotter( data=long_df, variables=dict(x="x", y="y", size=x_str), ) ax.clear() p.plot(ax, {}) def test_axis_labels(self, long_df): f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) p = _LinePlotter( data=long_df, variables=dict(x="x", y="y"), ) p.plot(ax1, {}) assert ax1.get_xlabel() == "x" assert ax1.get_ylabel() == "y" p.plot(ax2, {}) assert ax2.get_xlabel() == "x" assert ax2.get_ylabel() == "y" assert not ax2.yaxis.label.get_visible() def test_matplotlib_kwargs(self, long_df): kws = { "linestyle": "--", "linewidth": 3, "color": (1, .5, .2), "markeredgecolor": (.2, .5, .2), "markeredgewidth": 1, } ax = lineplot(data=long_df, x="x", y="y", **kws) line, *_ = ax.lines for key, val in kws.items(): plot_val = getattr(line, f"get_{key}")() assert plot_val == val def test_lineplot_axes(self, wide_df): f1, ax1 = plt.subplots() f2, ax2 = plt.subplots() ax = lineplot(data=wide_df) assert ax is ax2 ax = lineplot(data=wide_df, ax=ax1) assert ax is ax1 def test_lineplot_vs_relplot(self, long_df, long_semantics): ax = lineplot(data=long_df, **long_semantics) g = relplot(data=long_df, kind="line", **long_semantics) lin_lines = ax.lines rel_lines = g.ax.lines for l1, l2 in zip(lin_lines, rel_lines): assert_array_equal(l1.get_xydata(), l2.get_xydata()) assert same_color(l1.get_color(), l2.get_color()) assert l1.get_linewidth() == l2.get_linewidth() assert l1.get_linestyle() == l2.get_linestyle() def test_lineplot_smoke( self, wide_df, 
wide_array, wide_list_of_series, wide_list_of_arrays, wide_list_of_lists, flat_array, flat_series, flat_list, long_df, missing_df, object_df ): f, ax = plt.subplots() lineplot(x=[], y=[]) ax.clear() lineplot(data=wide_df) ax.clear() lineplot(data=wide_array) ax.clear() lineplot(data=wide_list_of_series) ax.clear() lineplot(data=wide_list_of_arrays) ax.clear() lineplot(data=wide_list_of_lists) ax.clear() lineplot(data=flat_series) ax.clear() lineplot(data=flat_array) ax.clear() lineplot(data=flat_list) ax.clear() lineplot(x="x", y="y", data=long_df) ax.clear() lineplot(x=long_df.x, y=long_df.y) ax.clear() lineplot(x=long_df.x, y="y", data=long_df) ax.clear() lineplot(x="x", y=long_df.y.values, data=long_df) ax.clear() lineplot(x="x", y="t", data=long_df) ax.clear() lineplot(x="x", y="y", hue="a", data=long_df) ax.clear() lineplot(x="x", y="y", hue="a", style="a", data=long_df) ax.clear() lineplot(x="x", y="y", hue="a", style="b", data=long_df) ax.clear() lineplot(x="x", y="y", hue="a", style="a", data=missing_df) ax.clear() lineplot(x="x", y="y", hue="a", style="b", data=missing_df) ax.clear() lineplot(x="x", y="y", hue="a", size="a", data=long_df) ax.clear() lineplot(x="x", y="y", hue="a", size="s", data=long_df) ax.clear() lineplot(x="x", y="y", hue="a", size="a", data=missing_df) ax.clear() lineplot(x="x", y="y", hue="a", size="s", data=missing_df) ax.clear() lineplot(x="x", y="y", hue="f", data=object_df) ax.clear() lineplot(x="x", y="y", hue="c", size="f", data=object_df) ax.clear() lineplot(x="x", y="y", hue="f", size="s", data=object_df) ax.clear() class TestScatterPlotter(Helpers): def test_legend_data(self, long_df): m = mpl.markers.MarkerStyle("o") default_mark = m.get_path().transformed(m.get_transform()) m = mpl.markers.MarkerStyle("") null = m.get_path().transformed(m.get_transform()) f, ax = plt.subplots() p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y"), legend="full", ) p.add_legend_data(ax) handles, labels = 
ax.get_legend_handles_labels() assert handles == [] # -- ax.clear() p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", hue="a"), legend="full", ) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_facecolors()[0] for h in handles] expected_colors = p._hue_map(p._hue_map.levels) assert labels == p._hue_map.levels assert same_color(colors, expected_colors) # -- ax.clear() p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", hue="a", style="a"), legend="full", ) p.map_style(markers=True) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_facecolors()[0] for h in handles] expected_colors = p._hue_map(p._hue_map.levels) paths = [h.get_paths()[0] for h in handles] expected_paths = p._style_map(p._style_map.levels, "path") assert labels == p._hue_map.levels assert labels == p._style_map.levels assert same_color(colors, expected_colors) assert self.paths_equal(paths, expected_paths) # -- ax.clear() p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", hue="a", style="b"), legend="full", ) p.map_style(markers=True) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_facecolors()[0] for h in handles] paths = [h.get_paths()[0] for h in handles] expected_colors = ( ["w"] + p._hue_map(p._hue_map.levels) + ["w"] + [".2" for _ in p._style_map.levels] ) expected_paths = ( [null] + [default_mark for _ in p._hue_map.levels] + [null] + p._style_map(p._style_map.levels, "path") ) assert labels == ( ["a"] + p._hue_map.levels + ["b"] + p._style_map.levels ) assert same_color(colors, expected_colors) assert self.paths_equal(paths, expected_paths) # -- ax.clear() p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", hue="a", size="a"), legend="full" ) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_facecolors()[0] for h in handles] expected_colors = p._hue_map(p._hue_map.levels) sizes = 
[h.get_sizes()[0] for h in handles] expected_sizes = p._size_map(p._size_map.levels) assert labels == p._hue_map.levels assert labels == p._size_map.levels assert same_color(colors, expected_colors) assert sizes == expected_sizes # -- ax.clear() sizes_list = [10, 100, 200] p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", size="s"), legend="full", ) p.map_size(sizes=sizes_list) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() sizes = [h.get_sizes()[0] for h in handles] expected_sizes = p._size_map(p._size_map.levels) assert labels == [str(l) for l in p._size_map.levels] assert sizes == expected_sizes # -- ax.clear() sizes_dict = {2: 10, 4: 100, 8: 200} p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", size="s"), legend="full" ) p.map_size(sizes=sizes_dict) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() sizes = [h.get_sizes()[0] for h in handles] expected_sizes = p._size_map(p._size_map.levels) assert labels == [str(l) for l in p._size_map.levels] assert sizes == expected_sizes # -- x, y = np.random.randn(2, 40) z = np.tile(np.arange(20), 2) p = _ScatterPlotter( variables=dict(x=x, y=y, hue=z), ) ax.clear() p.legend = "full" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert labels == [str(l) for l in p._hue_map.levels] ax.clear() p.legend = "brief" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert len(labels) < len(p._hue_map.levels) p = _ScatterPlotter( variables=dict(x=x, y=y, size=z), ) ax.clear() p.legend = "full" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert labels == [str(l) for l in p._size_map.levels] ax.clear() p.legend = "brief" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert len(labels) < len(p._size_map.levels) ax.clear() p.legend = "bad_value" with pytest.raises(ValueError): p.add_legend_data(ax) def test_plot(self, long_df, repeated_df): f, ax = plt.subplots() p = 
_ScatterPlotter(data=long_df, variables=dict(x="x", y="y")) p.plot(ax, {}) points = ax.collections[0] assert_array_equal(points.get_offsets(), long_df[["x", "y"]].values) ax.clear() p.plot(ax, {"color": "k", "label": "test"}) points = ax.collections[0] assert same_color(points.get_facecolor(), "k") assert points.get_label() == "test" p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", hue="a") ) ax.clear() p.plot(ax, {}) points = ax.collections[0] expected_colors = p._hue_map(p.plot_data["hue"]) assert same_color(points.get_facecolors(), expected_colors) p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", style="c"), ) p.map_style(markers=["+", "x"]) ax.clear() color = (1, .3, .8) p.plot(ax, {"color": color}) points = ax.collections[0] assert same_color(points.get_edgecolors(), [color]) p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", size="a"), ) ax.clear() p.plot(ax, {}) points = ax.collections[0] expected_sizes = p._size_map(p.plot_data["size"]) assert_array_equal(points.get_sizes(), expected_sizes) p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", hue="a", style="a"), ) p.map_style(markers=True) ax.clear() p.plot(ax, {}) points = ax.collections[0] expected_colors = p._hue_map(p.plot_data["hue"]) expected_paths = p._style_map(p.plot_data["style"], "path") assert same_color(points.get_facecolors(), expected_colors) assert self.paths_equal(points.get_paths(), expected_paths) p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", hue="a", style="b"), ) p.map_style(markers=True) ax.clear() p.plot(ax, {}) points = ax.collections[0] expected_colors = p._hue_map(p.plot_data["hue"]) expected_paths = p._style_map(p.plot_data["style"], "path") assert same_color(points.get_facecolors(), expected_colors) assert self.paths_equal(points.get_paths(), expected_paths) x_str = long_df["x"].astype(str) p = _ScatterPlotter( data=long_df, variables=dict(x="x", y="y", hue=x_str), ) ax.clear() p.plot(ax, {}) p = 
_ScatterPlotter( data=long_df, variables=dict(x="x", y="y", size=x_str), ) ax.clear() p.plot(ax, {}) def test_axis_labels(self, long_df): f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) p = _ScatterPlotter(data=long_df, variables=dict(x="x", y="y")) p.plot(ax1, {}) assert ax1.get_xlabel() == "x" assert ax1.get_ylabel() == "y" p.plot(ax2, {}) assert ax2.get_xlabel() == "x" assert ax2.get_ylabel() == "y" assert not ax2.yaxis.label.get_visible() def test_scatterplot_axes(self, wide_df): f1, ax1 = plt.subplots() f2, ax2 = plt.subplots() ax = scatterplot(data=wide_df) assert ax is ax2 ax = scatterplot(data=wide_df, ax=ax1) assert ax is ax1 def test_literal_attribute_vectors(self): f, ax = plt.subplots() x = y = [1, 2, 3] s = [5, 10, 15] c = [(1, 1, 0, 1), (1, 0, 1, .5), (.5, 1, 0, 1)] scatterplot(x=x, y=y, c=c, s=s, ax=ax) points, = ax.collections assert_array_equal(points.get_sizes().squeeze(), s) assert_array_equal(points.get_facecolors(), c) def test_linewidths(self, long_df): f, ax = plt.subplots() scatterplot(data=long_df, x="x", y="y", s=10) scatterplot(data=long_df, x="x", y="y", s=20) points1, points2 = ax.collections assert ( points1.get_linewidths().item() < points2.get_linewidths().item() ) ax.clear() scatterplot(data=long_df, x="x", y="y", s=long_df["x"]) scatterplot(data=long_df, x="x", y="y", s=long_df["x"] * 2) points1, points2 = ax.collections assert ( points1.get_linewidths().item() < points2.get_linewidths().item() ) ax.clear() scatterplot(data=long_df, x="x", y="y", size=long_df["x"]) scatterplot(data=long_df, x="x", y="y", size=long_df["x"] * 2) points1, points2, *_ = ax.collections assert ( points1.get_linewidths().item() < points2.get_linewidths().item() ) ax.clear() lw = 2 scatterplot(data=long_df, x="x", y="y", linewidth=lw) assert ax.collections[0].get_linewidths().item() == lw def test_size_norm_extrapolation(self): # https://github.com/mwaskom/seaborn/issues/2539 x = np.arange(0, 20, 2) f, axs = plt.subplots(1, 2, sharex=True, sharey=True) slc 
= 5 kws = dict(sizes=(50, 200), size_norm=(0, x.max()), legend="brief") scatterplot(x=x, y=x, size=x, ax=axs[0], **kws) scatterplot(x=x[:slc], y=x[:slc], size=x[:slc], ax=axs[1], **kws) assert np.allclose( axs[0].collections[0].get_sizes()[:slc], axs[1].collections[0].get_sizes() ) legends = [ax.legend_ for ax in axs] legend_data = [ { label.get_text(): handle.get_sizes().item() for label, handle in zip(legend.get_texts(), legend.legendHandles) } for legend in legends ] for key in set(legend_data[0]) & set(legend_data[1]): if key == "y": # At some point (circa 3.0) matplotlib auto-added pandas series # with a valid name into the legend, which messes up this test. # I can't track down when that was added (or removed), so let's # just anticipate and ignore it here. continue assert legend_data[0][key] == legend_data[1][key] def test_datetime_scale(self, long_df): ax = scatterplot(data=long_df, x="t", y="y") # Check that we avoid weird matplotlib default auto scaling # https://github.com/matplotlib/matplotlib/issues/17586 assert ax.get_xlim()[0] > ax.xaxis.convert_units(np.datetime64("2002-01-01")) def test_unfilled_marker_edgecolor_warning(self, long_df): # GH2636 with pytest.warns(None) as record: scatterplot(data=long_df, x="x", y="y", marker="+") assert not record def test_scatterplot_vs_relplot(self, long_df, long_semantics): ax = scatterplot(data=long_df, **long_semantics) g = relplot(data=long_df, kind="scatter", **long_semantics) for s_pts, r_pts in zip(ax.collections, g.ax.collections): assert_array_equal(s_pts.get_offsets(), r_pts.get_offsets()) assert_array_equal(s_pts.get_sizes(), r_pts.get_sizes()) assert_array_equal(s_pts.get_facecolors(), r_pts.get_facecolors()) assert self.paths_equal(s_pts.get_paths(), r_pts.get_paths()) def test_scatterplot_smoke( self, wide_df, wide_array, flat_series, flat_array, flat_list, wide_list_of_series, wide_list_of_arrays, wide_list_of_lists, long_df, missing_df, object_df ): f, ax = plt.subplots() scatterplot(x=[], y=[]) 
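A numpy sketch of the size-normalization behavior `test_size_norm_extrapolation` relies on: with a fixed `size_norm`, a data value maps to the same point size no matter which subset of the data is plotted (helper name and linear-mapping details are illustrative, not seaborn's implementation):

```python
import numpy as np

# Sketch: linearly map data values through a fixed norm into a size interval,
# so full-data and subset plots assign identical sizes to shared values.
def map_sizes(values, sizes=(50, 200), size_norm=(0, 18)):
    lo, hi = size_norm
    frac = (np.asarray(values, dtype=float) - lo) / (hi - lo)
    return sizes[0] + frac * (sizes[1] - sizes[0])

full = map_sizes(np.arange(0, 20, 2))
subset = map_sizes(np.arange(0, 10, 2))
```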
ax.clear() scatterplot(data=wide_df) ax.clear() scatterplot(data=wide_array) ax.clear() scatterplot(data=wide_list_of_series) ax.clear() scatterplot(data=wide_list_of_arrays) ax.clear() scatterplot(data=wide_list_of_lists) ax.clear() scatterplot(data=flat_series) ax.clear() scatterplot(data=flat_array) ax.clear() scatterplot(data=flat_list) ax.clear() scatterplot(x="x", y="y", data=long_df) ax.clear() scatterplot(x=long_df.x, y=long_df.y) ax.clear() scatterplot(x=long_df.x, y="y", data=long_df) ax.clear() scatterplot(x="x", y=long_df.y.values, data=long_df) ax.clear() scatterplot(x="x", y="y", hue="a", data=long_df) ax.clear() scatterplot(x="x", y="y", hue="a", style="a", data=long_df) ax.clear() scatterplot(x="x", y="y", hue="a", style="b", data=long_df) ax.clear() scatterplot(x="x", y="y", hue="a", style="a", data=missing_df) ax.clear() scatterplot(x="x", y="y", hue="a", style="b", data=missing_df) ax.clear() scatterplot(x="x", y="y", hue="a", size="a", data=long_df) ax.clear() scatterplot(x="x", y="y", hue="a", size="s", data=long_df) ax.clear() scatterplot(x="x", y="y", hue="a", size="a", data=missing_df) ax.clear() scatterplot(x="x", y="y", hue="a", size="s", data=missing_df) ax.clear() scatterplot(x="x", y="y", hue="f", data=object_df) ax.clear() scatterplot(x="x", y="y", hue="c", size="f", data=object_df) ax.clear() scatterplot(x="x", y="y", hue="f", size="s", data=object_df) ax.clear()
# seaborn-0.11.2/seaborn/tests/test_statistics.py
import numpy as np from scipy import integrate try: import statsmodels.distributions as smdist except ImportError: smdist = None import pytest from numpy.testing import assert_array_equal, assert_array_almost_equal from .._statistics import ( KDE, Histogram, ECDF, ) class DistributionFixtures: @pytest.fixture def x(self, rng): return rng.normal(0, 1, 100) @pytest.fixture def y(self, rng): return rng.normal(0, 5, 100) @pytest.fixture def weights(self, 
rng): return rng.uniform(0, 5, 100) class TestKDE: def test_gridsize(self, rng): x = rng.normal(0, 3, 1000) n = 200 kde = KDE(gridsize=n) density, support = kde(x) assert density.size == n assert support.size == n def test_cut(self, rng): x = rng.normal(0, 3, 1000) kde = KDE(cut=0) _, support = kde(x) assert support.min() == x.min() assert support.max() == x.max() cut = 2 bw_scale = .5 bw = x.std() * bw_scale kde = KDE(cut=cut, bw_method=bw_scale, gridsize=1000) _, support = kde(x) assert support.min() == pytest.approx(x.min() - bw * cut, abs=1e-2) assert support.max() == pytest.approx(x.max() + bw * cut, abs=1e-2) def test_clip(self, rng): x = rng.normal(0, 3, 100) clip = -1, 1 kde = KDE(clip=clip) _, support = kde(x) assert support.min() >= clip[0] assert support.max() <= clip[1] def test_density_normalization(self, rng): x = rng.normal(0, 3, 1000) kde = KDE() density, support = kde(x) assert integrate.trapz(density, support) == pytest.approx(1, abs=1e-5) def test_cumulative(self, rng): x = rng.normal(0, 3, 1000) kde = KDE(cumulative=True) density, _ = kde(x) assert density[0] == pytest.approx(0, abs=1e-5) assert density[-1] == pytest.approx(1, abs=1e-5) def test_cached_support(self, rng): x = rng.normal(0, 3, 100) kde = KDE() kde.define_support(x) _, support = kde(x[(x > -1) & (x < 1)]) assert_array_equal(support, kde.support) def test_bw_method(self, rng): x = rng.normal(0, 3, 100) kde1 = KDE(bw_method=.2) kde2 = KDE(bw_method=2) d1, _ = kde1(x) d2, _ = kde2(x) assert np.abs(np.diff(d1)).mean() > np.abs(np.diff(d2)).mean() def test_bw_adjust(self, rng): x = rng.normal(0, 3, 100) kde1 = KDE(bw_adjust=.2) kde2 = KDE(bw_adjust=2) d1, _ = kde1(x) d2, _ = kde2(x) assert np.abs(np.diff(d1)).mean() > np.abs(np.diff(d2)).mean() def test_bivariate_grid(self, rng): n = 100 x, y = rng.normal(0, 3, (2, 50)) kde = KDE(gridsize=n) density, (xx, yy) = kde(x, y) assert density.shape == (n, n) assert xx.size == n assert yy.size == n def test_bivariate_normalization(self, rng): 
x, y = rng.normal(0, 3, (2, 50)) kde = KDE(gridsize=100) density, (xx, yy) = kde(x, y) dx = xx[1] - xx[0] dy = yy[1] - yy[0] total = density.sum() * (dx * dy) assert total == pytest.approx(1, abs=1e-2) def test_bivariate_cumulative(self, rng): x, y = rng.normal(0, 3, (2, 50)) kde = KDE(gridsize=100, cumulative=True) density, _ = kde(x, y) assert density[0, 0] == pytest.approx(0, abs=1e-2) assert density[-1, -1] == pytest.approx(1, abs=1e-2) class TestHistogram(DistributionFixtures): def test_string_bins(self, x): h = Histogram(bins="sqrt") bin_kws = h.define_bin_params(x) assert bin_kws["range"] == (x.min(), x.max()) assert bin_kws["bins"] == int(np.sqrt(len(x))) def test_int_bins(self, x): n = 24 h = Histogram(bins=n) bin_kws = h.define_bin_params(x) assert bin_kws["range"] == (x.min(), x.max()) assert bin_kws["bins"] == n def test_array_bins(self, x): bins = [-3, -2, 1, 2, 3] h = Histogram(bins=bins) bin_kws = h.define_bin_params(x) assert_array_equal(bin_kws["bins"], bins) def test_bivariate_string_bins(self, x, y): s1, s2 = "sqrt", "fd" h = Histogram(bins=s1) e1, e2 = h.define_bin_params(x, y)["bins"] assert_array_equal(e1, np.histogram_bin_edges(x, s1)) assert_array_equal(e2, np.histogram_bin_edges(y, s1)) h = Histogram(bins=(s1, s2)) e1, e2 = h.define_bin_params(x, y)["bins"] assert_array_equal(e1, np.histogram_bin_edges(x, s1)) assert_array_equal(e2, np.histogram_bin_edges(y, s2)) def test_bivariate_int_bins(self, x, y): b1, b2 = 5, 10 h = Histogram(bins=b1) e1, e2 = h.define_bin_params(x, y)["bins"] assert len(e1) == b1 + 1 assert len(e2) == b1 + 1 h = Histogram(bins=(b1, b2)) e1, e2 = h.define_bin_params(x, y)["bins"] assert len(e1) == b1 + 1 assert len(e2) == b2 + 1 def test_bivariate_array_bins(self, x, y): b1 = [-3, -2, 1, 2, 3] b2 = [-5, -2, 3, 6] h = Histogram(bins=b1) e1, e2 = h.define_bin_params(x, y)["bins"] assert_array_equal(e1, b1) assert_array_equal(e2, b1) h = Histogram(bins=(b1, b2)) e1, e2 = h.define_bin_params(x, y)["bins"] 
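The bivariate bin tests above all follow one convention: a single bin spec applies to both axes, while a pair applies per axis. A standalone numpy sketch of that expansion (the helper is illustrative, not the `Histogram` implementation):

```python
import numpy as np

# Sketch: expand a bivariate bin spec into per-axis edge arrays, using
# np.histogram_bin_edges the way the surrounding assertions do.
def expand_bivariate_bins(x, y, bins):
    if not isinstance(bins, tuple):
        bins = (bins, bins)  # one spec -> shared by both axes
    return (np.histogram_bin_edges(x, bins[0]),
            np.histogram_bin_edges(y, bins[1]))

rng = np.random.default_rng(0)
x, y = rng.normal(0, 3, (2, 50))
e1, e2 = expand_bivariate_bins(x, y, 5)
f1, f2 = expand_bivariate_bins(x, y, (5, 10))
```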
assert_array_equal(e1, b1) assert_array_equal(e2, b2) def test_binwidth(self, x): binwidth = .5 h = Histogram(binwidth=binwidth) bin_kws = h.define_bin_params(x) n_bins = bin_kws["bins"] left, right = bin_kws["range"] assert (right - left) / n_bins == pytest.approx(binwidth) def test_bivariate_binwidth(self, x, y): w1, w2 = .5, 1 h = Histogram(binwidth=w1) e1, e2 = h.define_bin_params(x, y)["bins"] assert np.all(np.diff(e1) == w1) assert np.all(np.diff(e2) == w1) h = Histogram(binwidth=(w1, w2)) e1, e2 = h.define_bin_params(x, y)["bins"] assert np.all(np.diff(e1) == w1) assert np.all(np.diff(e2) == w2) def test_binrange(self, x): binrange = (-4, 4) h = Histogram(binrange=binrange) bin_kws = h.define_bin_params(x) assert bin_kws["range"] == binrange def test_bivariate_binrange(self, x, y): r1, r2 = (-4, 4), (-10, 10) h = Histogram(binrange=r1) e1, e2 = h.define_bin_params(x, y)["bins"] assert e1.min() == r1[0] assert e1.max() == r1[1] assert e2.min() == r1[0] assert e2.max() == r1[1] h = Histogram(binrange=(r1, r2)) e1, e2 = h.define_bin_params(x, y)["bins"] assert e1.min() == r1[0] assert e1.max() == r1[1] assert e2.min() == r2[0] assert e2.max() == r2[1] def test_discrete_bins(self, rng): x = rng.binomial(20, .5, 100) h = Histogram(discrete=True) bin_kws = h.define_bin_params(x) assert bin_kws["range"] == (x.min() - .5, x.max() + .5) assert bin_kws["bins"] == (x.max() - x.min() + 1) def test_histogram(self, x): h = Histogram() heights, edges = h(x) heights_mpl, edges_mpl = np.histogram(x, bins="auto") assert_array_equal(heights, heights_mpl) assert_array_equal(edges, edges_mpl) def test_count_stat(self, x): h = Histogram(stat="count") heights, _ = h(x) assert heights.sum() == len(x) def test_density_stat(self, x): h = Histogram(stat="density") heights, edges = h(x) assert (heights * np.diff(edges)).sum() == 1 def test_probability_stat(self, x): h = Histogram(stat="probability") heights, _ = h(x) assert heights.sum() == 1 def test_frequency_stat(self, x): h = 
Histogram(stat="frequency") heights, edges = h(x) assert (heights * np.diff(edges)).sum() == len(x) def test_cumulative_count(self, x): h = Histogram(stat="count", cumulative=True) heights, _ = h(x) assert heights[-1] == len(x) def test_cumulative_density(self, x): h = Histogram(stat="density", cumulative=True) heights, _ = h(x) assert heights[-1] == 1 def test_cumulative_probability(self, x): h = Histogram(stat="probability", cumulative=True) heights, _ = h(x) assert heights[-1] == 1 def test_cumulative_frequency(self, x): h = Histogram(stat="frequency", cumulative=True) heights, _ = h(x) assert heights[-1] == len(x) def test_bivariate_histogram(self, x, y): h = Histogram() heights, edges = h(x, y) bins_mpl = ( np.histogram_bin_edges(x, "auto"), np.histogram_bin_edges(y, "auto"), ) heights_mpl, *edges_mpl = np.histogram2d(x, y, bins_mpl) assert_array_equal(heights, heights_mpl) assert_array_equal(edges[0], edges_mpl[0]) assert_array_equal(edges[1], edges_mpl[1]) def test_bivariate_count_stat(self, x, y): h = Histogram(stat="count") heights, _ = h(x, y) assert heights.sum() == len(x) def test_bivariate_density_stat(self, x, y): h = Histogram(stat="density") heights, (edges_x, edges_y) = h(x, y) areas = np.outer(np.diff(edges_x), np.diff(edges_y)) assert (heights * areas).sum() == pytest.approx(1) def test_bivariate_probability_stat(self, x, y): h = Histogram(stat="probability") heights, _ = h(x, y) assert heights.sum() == 1 def test_bivariate_frequency_stat(self, x, y): h = Histogram(stat="frequency") heights, (x_edges, y_edges) = h(x, y) area = np.outer(np.diff(x_edges), np.diff(y_edges)) assert (heights * area).sum() == len(x) def test_bivariate_cumulative_count(self, x, y): h = Histogram(stat="count", cumulative=True) heights, _ = h(x, y) assert heights[-1, -1] == len(x) def test_bivariate_cumulative_density(self, x, y): h = Histogram(stat="density", cumulative=True) heights, _ = h(x, y) assert heights[-1, -1] == pytest.approx(1) def 
test_bivariate_cumulative_frequency(self, x, y): h = Histogram(stat="frequency", cumulative=True) heights, _ = h(x, y) assert heights[-1, -1] == len(x) def test_bivariate_cumulative_probability(self, x, y): h = Histogram(stat="probability", cumulative=True) heights, _ = h(x, y) assert heights[-1, -1] == pytest.approx(1) def test_bad_stat(self): with pytest.raises(ValueError): Histogram(stat="invalid") class TestECDF(DistributionFixtures): def test_univariate_proportion(self, x): ecdf = ECDF() stat, vals = ecdf(x) assert_array_equal(vals[1:], np.sort(x)) assert_array_almost_equal(stat[1:], np.linspace(0, 1, len(x) + 1)[1:]) assert stat[0] == 0 def test_univariate_count(self, x): ecdf = ECDF(stat="count") stat, vals = ecdf(x) assert_array_equal(vals[1:], np.sort(x)) assert_array_almost_equal(stat[1:], np.arange(len(x)) + 1) assert stat[0] == 0 def test_univariate_proportion_weights(self, x, weights): ecdf = ECDF() stat, vals = ecdf(x, weights=weights) assert_array_equal(vals[1:], np.sort(x)) expected_stats = weights[x.argsort()].cumsum() / weights.sum() assert_array_almost_equal(stat[1:], expected_stats) assert stat[0] == 0 def test_univariate_count_weights(self, x, weights): ecdf = ECDF(stat="count") stat, vals = ecdf(x, weights=weights) assert_array_equal(vals[1:], np.sort(x)) assert_array_almost_equal(stat[1:], weights[x.argsort()].cumsum()) assert stat[0] == 0 @pytest.mark.skipif(smdist is None, reason="Requires statsmodels") def test_against_statsmodels(self, x): sm_ecdf = smdist.empirical_distribution.ECDF(x) ecdf = ECDF() stat, vals = ecdf(x) assert_array_equal(vals, sm_ecdf.x) assert_array_almost_equal(stat, sm_ecdf.y) ecdf = ECDF(complementary=True) stat, vals = ecdf(x) assert_array_equal(vals, sm_ecdf.x) assert_array_almost_equal(stat, sm_ecdf.y[::-1]) def test_invalid_stat(self, x): with pytest.raises(ValueError, match="`stat` must be one of"): ECDF(stat="density") def test_bivariate_error(self, x, y): with pytest.raises(NotImplementedError, 
match="Bivariate ECDF"): ecdf = ECDF() ecdf(x, y) seaborn-0.11.2/seaborn/tests/test_utils.py000066400000000000000000000404411410631356500205770ustar00rootroot00000000000000"""Tests for seaborn utility functions.""" import tempfile from urllib.request import urlopen from http.client import HTTPException import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt from cycler import cycler import pytest from numpy.testing import ( assert_array_equal, ) from pandas.testing import ( assert_series_equal, assert_frame_equal, ) from distutils.version import LooseVersion from .. import utils, rcmod from ..utils import ( get_dataset_names, get_color_cycle, remove_na, load_dataset, _assign_default_kwargs, _draw_figure, ) a_norm = np.random.randn(100) def _network(t=None, url="https://github.com"): """ Decorator that will skip a test if `url` is unreachable. Parameters ---------- t : function, optional url : str, optional """ if t is None: return lambda x: _network(x, url=url) def wrapper(*args, **kwargs): # attempt to connect try: f = urlopen(url) except (IOError, HTTPException): pytest.skip("No internet connection") else: f.close() return t(*args, **kwargs) return wrapper def test_pmf_hist_basics(): """Test the function to return barplot args for pmf hist.""" with pytest.warns(FutureWarning): out = utils.pmf_hist(a_norm) assert len(out) == 3 x, h, w = out assert len(x) == len(h) # Test simple case a = np.arange(10) with pytest.warns(FutureWarning): x, h, w = utils.pmf_hist(a, 10) assert np.all(h == h[0]) # Test width with pytest.warns(FutureWarning): x, h, w = utils.pmf_hist(a_norm) assert x[1] - x[0] == w # Test normalization with pytest.warns(FutureWarning): x, h, w = utils.pmf_hist(a_norm) assert sum(h) == pytest.approx(1) assert h.max() <= 1 # Test bins with pytest.warns(FutureWarning): x, h, w = utils.pmf_hist(a_norm, 20) assert len(x) == 20 def test_ci_to_errsize(): """Test behavior of ci_to_errsize.""" cis = [[.5, .5], [1.25, 1.5]] 
    heights = [1, 1.5]

    actual_errsize = np.array([[.5, 1],
                               [.25, 0]])

    test_errsize = utils.ci_to_errsize(cis, heights)
    assert_array_equal(actual_errsize, test_errsize)


def test_desaturate():
    """Test color desaturation."""
    out1 = utils.desaturate("red", .5)
    assert out1 == (.75, .25, .25)

    out2 = utils.desaturate("#00FF00", .5)
    assert out2 == (.25, .75, .25)

    out3 = utils.desaturate((0, 0, 1), .5)
    assert out3 == (.25, .25, .75)

    out4 = utils.desaturate("red", .5)
    assert out4 == (.75, .25, .25)


def test_desaturation_prop():
    """Test that pct outside of [0, 1] raises exception."""
    with pytest.raises(ValueError):
        utils.desaturate("blue", 50)


def test_saturate():
    """Test performance of saturation function."""
    out = utils.saturate((.75, .25, .25))
    assert out == (1, 0, 0)


@pytest.mark.parametrize(
    "p,annot", [(.0001, "***"), (.001, "**"), (.01, "*"), (.09, "."), (1, "")]
)
def test_sig_stars(p, annot):
    """Test the sig stars function."""
    with pytest.warns(FutureWarning):
        stars = utils.sig_stars(p)
    assert stars == annot


def test_iqr():
    """Test the IQR function."""
    a = np.arange(5)
    with pytest.warns(FutureWarning):
        iqr = utils.iqr(a)
    assert iqr == 2


@pytest.mark.parametrize(
    "s,exp",
    [
        ("a", "a"),
        ("abc", "abc"),
        (b"a", "a"),
        (b"abc", "abc"),
        (bytearray("abc", "utf-8"), "abc"),
        (bytearray(), ""),
        (1, "1"),
        (0, "0"),
        ([], str([])),
    ],
)
def test_to_utf8(s, exp):
    """Test the to_utf8 function: object to string"""
    u = utils.to_utf8(s)
    assert type(u) == str
    assert u == exp


class TestSpineUtils(object):

    sides = ["left", "right", "bottom", "top"]
    outer_sides = ["top", "right"]
    inner_sides = ["left", "bottom"]

    offset = 10
    original_position = ("outward", 0)
    offset_position = ("outward", offset)

    def test_despine(self):
        f, ax = plt.subplots()
        for side in self.sides:
            assert ax.spines[side].get_visible()

        utils.despine()
        for side in self.outer_sides:
            assert ~ax.spines[side].get_visible()
        for side in self.inner_sides:
            assert ax.spines[side].get_visible()

        utils.despine(**dict(zip(self.sides, [True] * 4)))
        for side in self.sides:
            assert ~ax.spines[side].get_visible()

    def test_despine_specific_axes(self):
        f, (ax1, ax2) = plt.subplots(2, 1)

        utils.despine(ax=ax2)
        for side in self.sides:
            assert ax1.spines[side].get_visible()
        for side in self.outer_sides:
            assert ~ax2.spines[side].get_visible()
        for side in self.inner_sides:
            assert ax2.spines[side].get_visible()

    def test_despine_with_offset(self):
        f, ax = plt.subplots()

        for side in self.sides:
            pos = ax.spines[side].get_position()
            assert pos == self.original_position

        utils.despine(ax=ax, offset=self.offset)

        for side in self.sides:
            is_visible = ax.spines[side].get_visible()
            new_position = ax.spines[side].get_position()
            if is_visible:
                assert new_position == self.offset_position
            else:
                assert new_position == self.original_position

    def test_despine_side_specific_offset(self):
        f, ax = plt.subplots()
        utils.despine(ax=ax, offset=dict(left=self.offset))

        for side in self.sides:
            is_visible = ax.spines[side].get_visible()
            new_position = ax.spines[side].get_position()
            if is_visible and side == "left":
                assert new_position == self.offset_position
            else:
                assert new_position == self.original_position

    def test_despine_with_offset_specific_axes(self):
        f, (ax1, ax2) = plt.subplots(2, 1)

        utils.despine(offset=self.offset, ax=ax2)

        for side in self.sides:
            pos1 = ax1.spines[side].get_position()
            pos2 = ax2.spines[side].get_position()
            assert pos1 == self.original_position
            if ax2.spines[side].get_visible():
                assert pos2 == self.offset_position
            else:
                assert pos2 == self.original_position

    def test_despine_trim_spines(self):
        f, ax = plt.subplots()
        ax.plot([1, 2, 3], [1, 2, 3])
        ax.set_xlim(.75, 3.25)

        utils.despine(trim=True)
        for side in self.inner_sides:
            bounds = ax.spines[side].get_bounds()
            assert bounds == (1, 3)

    def test_despine_trim_inverted(self):
        f, ax = plt.subplots()
        ax.plot([1, 2, 3], [1, 2, 3])
        ax.set_ylim(.85, 3.15)
        ax.invert_yaxis()

        utils.despine(trim=True)
        for side in self.inner_sides:
            bounds = ax.spines[side].get_bounds()
            assert bounds == (1, 3)

    def test_despine_trim_noticks(self):
        f, ax = plt.subplots()
        ax.plot([1, 2, 3], [1, 2, 3])
        ax.set_yticks([])

        utils.despine(trim=True)
        assert ax.get_yticks().size == 0

    def test_despine_trim_categorical(self):
        f, ax = plt.subplots()
        ax.plot(["a", "b", "c"], [1, 2, 3])

        utils.despine(trim=True)

        bounds = ax.spines["left"].get_bounds()
        assert bounds == (1, 3)

        bounds = ax.spines["bottom"].get_bounds()
        assert bounds == (0, 2)

    def test_despine_moved_ticks(self):
        f, ax = plt.subplots()
        for t in ax.yaxis.majorTicks:
            t.tick1line.set_visible(True)
        utils.despine(ax=ax, left=True, right=False)
        for t in ax.yaxis.majorTicks:
            assert t.tick2line.get_visible()
        plt.close(f)

        f, ax = plt.subplots()
        for t in ax.yaxis.majorTicks:
            t.tick1line.set_visible(False)
        utils.despine(ax=ax, left=True, right=False)
        for t in ax.yaxis.majorTicks:
            assert not t.tick2line.get_visible()
        plt.close(f)

        f, ax = plt.subplots()
        for t in ax.xaxis.majorTicks:
            t.tick1line.set_visible(True)
        utils.despine(ax=ax, bottom=True, top=False)
        for t in ax.xaxis.majorTicks:
            assert t.tick2line.get_visible()
        plt.close(f)

        f, ax = plt.subplots()
        for t in ax.xaxis.majorTicks:
            t.tick1line.set_visible(False)
        utils.despine(ax=ax, bottom=True, top=False)
        for t in ax.xaxis.majorTicks:
            assert not t.tick2line.get_visible()
        plt.close(f)


def test_ticklabels_overlap():

    rcmod.set()
    f, ax = plt.subplots(figsize=(2, 2))
    f.tight_layout()  # This gets the Agg renderer working

    assert not utils.axis_ticklabels_overlap(ax.get_xticklabels())

    big_strings = "abcdefgh", "ijklmnop"
    ax.set_xlim(-.5, 1.5)
    ax.set_xticks([0, 1])
    ax.set_xticklabels(big_strings)

    assert utils.axis_ticklabels_overlap(ax.get_xticklabels())

    x, y = utils.axes_ticklabels_overlap(ax)
    assert x
    assert not y


def test_locator_to_legend_entries():

    locator = mpl.ticker.MaxNLocator(nbins=3)
    limits = (0.09, 0.4)
    levels, str_levels = utils.locator_to_legend_entries(
        locator, limits, float
    )
    assert str_levels == ["0.15", "0.30"]

    limits = (0.8, 0.9)
    levels, str_levels = utils.locator_to_legend_entries(
        locator, limits, float
    )
    assert str_levels == ["0.80", "0.84", "0.88"]

    limits = (1, 6)
    levels, str_levels = utils.locator_to_legend_entries(locator, limits, int)
    assert str_levels == ["2", "4", "6"]

    locator = mpl.ticker.LogLocator(numticks=5)
    limits = (5, 1425)
    levels, str_levels = utils.locator_to_legend_entries(locator, limits, int)
    if LooseVersion(mpl.__version__) >= "3.1":
        assert str_levels == ['10', '100', '1000']

    limits = (0.00003, 0.02)
    levels, str_levels = utils.locator_to_legend_entries(
        locator, limits, float
    )
    if LooseVersion(mpl.__version__) >= "3.1":
        assert str_levels == ['1e-04', '1e-03', '1e-02']


def test_move_legend_matplotlib_objects():

    fig, ax = plt.subplots()

    colors = "C2", "C5"
    labels = "first label", "second label"
    title = "the legend"

    for color, label in zip(colors, labels):
        ax.plot([0, 1], color=color, label=label)
    ax.legend(loc="upper right", title=title)
    utils._draw_figure(fig)
    xfm = ax.transAxes.inverted().transform

    # --- Test axes legend

    old_pos = xfm(ax.legend_.legendPatch.get_extents())

    new_fontsize = 14
    utils.move_legend(ax, "lower left", title_fontsize=new_fontsize)
    utils._draw_figure(fig)
    new_pos = xfm(ax.legend_.legendPatch.get_extents())

    assert (new_pos < old_pos).all()
    assert ax.legend_.get_title().get_text() == title
    assert ax.legend_.get_title().get_size() == new_fontsize

    # --- Test title replacement

    new_title = "new title"
    utils.move_legend(ax, "lower left", title=new_title)
    utils._draw_figure(fig)
    assert ax.legend_.get_title().get_text() == new_title

    # --- Test figure legend

    fig.legend(loc="upper right", title=title)
    _draw_figure(fig)
    xfm = fig.transFigure.inverted().transform
    old_pos = xfm(fig.legends[0].legendPatch.get_extents())

    utils.move_legend(fig, "lower left", title=new_title)
    _draw_figure(fig)

    new_pos = xfm(fig.legends[0].legendPatch.get_extents())
    assert (new_pos < old_pos).all()
    assert fig.legends[0].get_title().get_text() == new_title


def test_move_legend_grid_object(long_df):

    from seaborn.axisgrid import FacetGrid

    hue_var = "a"
    g = FacetGrid(long_df, hue=hue_var)
    g.map(plt.plot, "x", "y")
    g.add_legend()
    _draw_figure(g.figure)

    xfm = g.figure.transFigure.inverted().transform
    old_pos = xfm(g.legend.legendPatch.get_extents())

    fontsize = 20
    utils.move_legend(g, "lower left", title_fontsize=fontsize)
    _draw_figure(g.figure)

    new_pos = xfm(g.legend.legendPatch.get_extents())
    assert (new_pos < old_pos).all()
    assert g.legend.get_title().get_text() == hue_var
    assert g.legend.get_title().get_size() == fontsize

    assert g.legend.legendHandles
    for i, h in enumerate(g.legend.legendHandles):
        assert mpl.colors.to_rgb(h.get_color()) == mpl.colors.to_rgb(f"C{i}")


def test_move_legend_input_checks():

    ax = plt.figure().subplots()
    with pytest.raises(TypeError):
        utils.move_legend(ax.xaxis, "best")

    with pytest.raises(ValueError):
        utils.move_legend(ax, "best")

    with pytest.raises(ValueError):
        utils.move_legend(ax.figure, "best")


def check_load_dataset(name):
    ds = load_dataset(name, cache=False)
    assert isinstance(ds, pd.DataFrame)


def check_load_cached_dataset(name):
    # Test the caching using a temporary file.
    with tempfile.TemporaryDirectory() as tmpdir:
        # download and cache
        ds = load_dataset(name, cache=True, data_home=tmpdir)

        # use cached version
        ds2 = load_dataset(name, cache=True, data_home=tmpdir)
        assert_frame_equal(ds, ds2)


@_network(url="https://github.com/mwaskom/seaborn-data")
def test_get_dataset_names():
    names = get_dataset_names()
    assert names
    assert "tips" in names


@_network(url="https://github.com/mwaskom/seaborn-data")
def test_load_datasets():
    # Heavy test to verify that we can load all available datasets
    for name in get_dataset_names():
        # unfortunately @network somehow obscures this generator so it
        # does not get in effect, so we need to call explicitly
        # yield check_load_dataset, name
        check_load_dataset(name)


@_network(url="https://github.com/mwaskom/seaborn-data")
def test_load_dataset_string_error():
    name = "bad_name"
    err = f"'{name}' is not one of the example datasets."
    with pytest.raises(ValueError, match=err):
        load_dataset(name)


def test_load_dataset_passed_data_error():
    df = pd.DataFrame()
    err = "This function accepts only strings"
    with pytest.raises(TypeError, match=err):
        load_dataset(df)


@_network(url="https://github.com/mwaskom/seaborn-data")
def test_load_cached_datasets():
    # Heavy test to verify that we can load all available datasets
    for name in get_dataset_names():
        # unfortunately @network somehow obscures this generator so it
        # does not get in effect, so we need to call explicitly
        # yield check_load_dataset, name
        check_load_cached_dataset(name)


def test_relative_luminance():
    """Test relative luminance."""
    out1 = utils.relative_luminance("white")
    assert out1 == 1

    out2 = utils.relative_luminance("#000000")
    assert out2 == 0

    out3 = utils.relative_luminance((.25, .5, .75))
    assert out3 == pytest.approx(0.201624536)

    rgbs = mpl.cm.RdBu(np.linspace(0, 1, 10))
    lums1 = [utils.relative_luminance(rgb) for rgb in rgbs]
    lums2 = utils.relative_luminance(rgbs)

    for lum1, lum2 in zip(lums1, lums2):
        assert lum1 == pytest.approx(lum2)
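As a standalone illustration of the color behavior the tests above exercise (not part of the original suite): `utils.desaturate` converts a color to HLS, scales only the saturation channel, and converts back. A minimal sketch of that round-trip using `colorsys` directly — the helper name `desaturate_rgb` is hypothetical:

```python
import colorsys


def desaturate_rgb(rgb, prop):
    # Convert RGB -> HLS, scale the saturation channel, convert back.
    # Mirrors what seaborn's utils.desaturate does after color parsing.
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, l, s * prop)


# Pure red at half saturation becomes (.75, .25, .25),
# matching the expectation in test_desaturate above.
print(desaturate_rgb((1.0, 0.0, 0.0), 0.5))  # -> (0.75, 0.25, 0.25)
```

Because hue and lightness are untouched, applying `prop=1` is a no-op and `prop=0` yields a pure gray at the same lightness.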
@pytest.mark.parametrize(
    "cycler,result",
    [
        (cycler(color=["y"]), ["y"]),
        (cycler(color=["k"]), ["k"]),
        (cycler(color=["k", "y"]), ["k", "y"]),
        (cycler(color=["y", "k"]), ["y", "k"]),
        (cycler(color=["b", "r"]), ["b", "r"]),
        (cycler(color=["r", "b"]), ["r", "b"]),
        (cycler(lw=[1, 2]), [".15"]),  # no color in cycle
    ],
)
def test_get_color_cycle(cycler, result):
    with mpl.rc_context(rc={"axes.prop_cycle": cycler}):
        assert get_color_cycle() == result


def test_remove_na():

    a_array = np.array([1, 2, np.nan, 3])
    a_array_rm = remove_na(a_array)
    assert_array_equal(a_array_rm, np.array([1, 2, 3]))

    a_series = pd.Series([1, 2, np.nan, 3])
    a_series_rm = remove_na(a_series)
    assert_series_equal(a_series_rm, pd.Series([1., 2, 3], [0, 1, 3]))


def test_assign_default_kwargs():

    def f(a, b, c, d):
        pass

    def g(c=1, d=2):
        pass

    kws = {"c": 3}

    kws = _assign_default_kwargs(kws, f, g)
    assert kws == {"c": 3, "d": 2}


def test_draw_figure():

    f, ax = plt.subplots()
    ax.plot(["a", "b", "c"], [1, 2, 3])
    _draw_figure(f)

    assert not f.stale
    # ticklabels are not populated until a draw, but this may change
    assert ax.get_xticklabels()[0].get_text() == "a"


# --- seaborn-0.11.2/seaborn/utils.py ---

"""Utility functions, mostly for internal use."""
import os
import re
import inspect
import warnings
import colorsys
from urllib.request import urlopen, urlretrieve

import numpy as np
from scipy import stats
import pandas as pd
import matplotlib as mpl
import matplotlib.colors as mplcol
import matplotlib.pyplot as plt
from matplotlib.cbook import normalize_kwargs


__all__ = ["desaturate", "saturate", "set_hls_values", "move_legend",
           "despine", "get_dataset_names", "get_data_home", "load_dataset"]


def sort_df(df, *args, **kwargs):
    """Wrapper to handle different pandas sorting API pre/post 0.17."""
    msg = "This function is deprecated and will be removed in a future version"
    warnings.warn(msg)
    try:
        return df.sort_values(*args, **kwargs)
    except AttributeError:
        return df.sort(*args, **kwargs)


def ci_to_errsize(cis, heights):
    """Convert intervals to error arguments relative to plot heights.

    Parameters
    ----------
    cis : 2 x n sequence
        sequence of confidence interval limits
    heights : n sequence
        sequence of plot heights

    Returns
    -------
    errsize : 2 x n array
        sequence of error size relative to height values in correct
        format as argument for plt.bar

    """
    cis = np.atleast_2d(cis).reshape(2, -1)
    heights = np.atleast_1d(heights)
    errsize = []
    for i, (low, high) in enumerate(np.transpose(cis)):
        h = heights[i]
        elow = h - low
        ehigh = high - h
        errsize.append([elow, ehigh])

    errsize = np.asarray(errsize).T
    return errsize


def pmf_hist(a, bins=10):
    """Return arguments to plt.bar for pmf-like histogram of an array.

    DEPRECATED: will be removed in a future version.

    Parameters
    ----------
    a: array-like
        array to make histogram of
    bins: int
        number of bins

    Returns
    -------
    x: array
        left x position of bars
    h: array
        height of bars
    w: float
        width of bars

    """
    msg = "This function is deprecated and will be removed in a future version"
    warnings.warn(msg, FutureWarning)
    n, x = np.histogram(a, bins)
    h = n / n.sum()
    w = x[1] - x[0]
    return x[:-1], h, w


def _draw_figure(fig):
    """Force draw of a matplotlib figure, accounting for back-compat."""
    # See https://github.com/matplotlib/matplotlib/issues/19197 for context
    fig.canvas.draw()
    if fig.stale:
        try:
            fig.draw(fig.canvas.get_renderer())
        except AttributeError:
            pass


def desaturate(color, prop):
    """Decrease the saturation channel of a color by some percent.

    Parameters
    ----------
    color : matplotlib color
        hex, rgb-tuple, or html color name
    prop : float
        saturation channel of color will be multiplied by this value

    Returns
    -------
    new_color : rgb tuple
        desaturated color code in RGB tuple representation

    """
    # Check inputs
    if not 0 <= prop <= 1:
        raise ValueError("prop must be between 0 and 1")

    # Get rgb tuple rep
    rgb = mplcol.colorConverter.to_rgb(color)

    # Convert to hls
    h, l, s = colorsys.rgb_to_hls(*rgb)

    # Desaturate the saturation channel
    s *= prop

    # Convert back to rgb
    new_color = colorsys.hls_to_rgb(h, l, s)

    return new_color


def saturate(color):
    """Return a fully saturated color with the same hue.

    Parameters
    ----------
    color : matplotlib color
        hex, rgb-tuple, or html color name

    Returns
    -------
    new_color : rgb tuple
        saturated color code in RGB tuple representation

    """
    return set_hls_values(color, s=1)


def set_hls_values(color, h=None, l=None, s=None):  # noqa
    """Independently manipulate the h, l, or s channels of a color.

    Parameters
    ----------
    color : matplotlib color
        hex, rgb-tuple, or html color name
    h, l, s : floats between 0 and 1, or None
        new values for each channel in hls space

    Returns
    -------
    new_color : rgb tuple
        new color code in RGB tuple representation

    """
    # Get an RGB tuple representation
    rgb = mplcol.colorConverter.to_rgb(color)
    vals = list(colorsys.rgb_to_hls(*rgb))
    for i, val in enumerate([h, l, s]):
        if val is not None:
            vals[i] = val

    rgb = colorsys.hls_to_rgb(*vals)
    return rgb


def axlabel(xlabel, ylabel, **kwargs):
    """Grab current axis and label it.

    DEPRECATED: will be removed in a future version.

    """
    msg = "This function is deprecated and will be removed in a future version"
    warnings.warn(msg, FutureWarning)
    ax = plt.gca()
    ax.set_xlabel(xlabel, **kwargs)
    ax.set_ylabel(ylabel, **kwargs)


def remove_na(vector):
    """Helper method for removing null values from data vectors.

    Parameters
    ----------
    vector : vector object
        Must implement boolean masking with [] subscript syntax.

    Returns
    -------
    clean_clean : same type as ``vector``
        Vector of data with null values removed. May be a copy or a view.

    """
    return vector[pd.notnull(vector)]


def get_color_cycle():
    """Return the list of colors in the current matplotlib color cycle

    Parameters
    ----------
    None

    Returns
    -------
    colors : list
        List of matplotlib colors in the current cycle, or dark gray if
        the current color cycle is empty.
    """
    cycler = mpl.rcParams['axes.prop_cycle']
    return cycler.by_key()['color'] if 'color' in cycler.keys else [".15"]


def despine(fig=None, ax=None, top=True, right=True, left=False,
            bottom=False, offset=None, trim=False):
    """Remove the top and right spines from plot(s).

    fig : matplotlib figure, optional
        Figure to despine all axes of, defaults to the current figure.
    ax : matplotlib axes, optional
        Specific axes object to despine. Ignored if fig is provided.
    top, right, left, bottom : boolean, optional
        If True, remove that spine.
    offset : int or dict, optional
        Absolute distance, in points, spines should be moved away from
        the axes (negative values move spines inward). A single value
        applies to all spines; a dict can be used to set offset values
        per side.
    trim : bool, optional
        If True, limit spines to the smallest and largest major tick
        on each non-despined axis.

    Returns
    -------
    None

    """
    # Get references to the axes we want
    if fig is None and ax is None:
        axes = plt.gcf().axes
    elif fig is not None:
        axes = fig.axes
    elif ax is not None:
        axes = [ax]

    for ax_i in axes:
        for side in ["top", "right", "left", "bottom"]:
            # Toggle the spine objects
            is_visible = not locals()[side]
            ax_i.spines[side].set_visible(is_visible)
            if offset is not None and is_visible:
                try:
                    val = offset.get(side, 0)
                except AttributeError:
                    val = offset
                ax_i.spines[side].set_position(('outward', val))

        # Potentially move the ticks
        if left and not right:
            maj_on = any(
                t.tick1line.get_visible()
                for t in ax_i.yaxis.majorTicks
            )
            min_on = any(
                t.tick1line.get_visible()
                for t in ax_i.yaxis.minorTicks
            )
            ax_i.yaxis.set_ticks_position("right")
            for t in ax_i.yaxis.majorTicks:
                t.tick2line.set_visible(maj_on)
            for t in ax_i.yaxis.minorTicks:
                t.tick2line.set_visible(min_on)

        if bottom and not top:
            maj_on = any(
                t.tick1line.get_visible()
                for t in ax_i.xaxis.majorTicks
            )
            min_on = any(
                t.tick1line.get_visible()
                for t in ax_i.xaxis.minorTicks
            )
            ax_i.xaxis.set_ticks_position("top")
            for t in ax_i.xaxis.majorTicks:
                t.tick2line.set_visible(maj_on)
            for t in ax_i.xaxis.minorTicks:
                t.tick2line.set_visible(min_on)

        if trim:
            # clip off the parts of the spines that extend past major ticks
            xticks = np.asarray(ax_i.get_xticks())
            if xticks.size:
                firsttick = np.compress(xticks >= min(ax_i.get_xlim()),
                                        xticks)[0]
                lasttick = np.compress(xticks <= max(ax_i.get_xlim()),
                                       xticks)[-1]
                ax_i.spines['bottom'].set_bounds(firsttick, lasttick)
                ax_i.spines['top'].set_bounds(firsttick, lasttick)
                newticks = xticks.compress(xticks <= lasttick)
                newticks = newticks.compress(newticks >= firsttick)
                ax_i.set_xticks(newticks)

            yticks = np.asarray(ax_i.get_yticks())
            if yticks.size:
                firsttick = np.compress(yticks >= min(ax_i.get_ylim()),
                                        yticks)[0]
                lasttick = np.compress(yticks <= max(ax_i.get_ylim()),
                                       yticks)[-1]
                ax_i.spines['left'].set_bounds(firsttick, lasttick)
                ax_i.spines['right'].set_bounds(firsttick, lasttick)
                newticks = yticks.compress(yticks <= lasttick)
                newticks = newticks.compress(newticks >= firsttick)
                ax_i.set_yticks(newticks)


def move_legend(obj, loc, **kwargs):
    """
    Recreate a plot's legend at a new location.

    The name is a slight misnomer. Matplotlib legends do not expose public
    control over their position parameters. So this function creates a new
    legend, copying over the data from the original object, which is then
    removed.

    Parameters
    ----------
    obj : the object with the plot
        This argument can be either a seaborn or matplotlib object:

        - :class:`seaborn.FacetGrid` or :class:`seaborn.PairGrid`
        - :class:`matplotlib.axes.Axes` or :class:`matplotlib.figure.Figure`

    loc : str or int
        Location argument, as in :meth:`matplotlib.axes.Axes.legend`.

    kwargs
        Other keyword arguments are passed to
        :meth:`matplotlib.axes.Axes.legend`.

    Examples
    --------

    .. include:: ../docstrings/move_legend.rst

    """
    # This is a somewhat hackish solution that will hopefully be obviated by
    # upstream improvements to matplotlib legends that make them easier to
    # modify after creation.

    from seaborn.axisgrid import Grid  # Avoid circular import

    # Locate the legend object and a method to recreate the legend
    if isinstance(obj, Grid):
        old_legend = obj.legend
        legend_func = obj.figure.legend
    elif isinstance(obj, mpl.axes.Axes):
        old_legend = obj.legend_
        legend_func = obj.legend
    elif isinstance(obj, mpl.figure.Figure):
        if obj.legends:
            old_legend = obj.legends[-1]
        else:
            old_legend = None
        legend_func = obj.legend
    else:
        err = "`obj` must be a seaborn Grid or matplotlib Axes or Figure instance."
        raise TypeError(err)

    if old_legend is None:
        err = f"{obj} has no legend attached."
raise ValueError(err) # Extract the components of the legend we need to reuse handles = old_legend.legendHandles labels = [t.get_text() for t in old_legend.get_texts()] # Extract legend properties that can be passed to the recreation method # (Vexingly, these don't all round-trip) legend_kws = inspect.signature(mpl.legend.Legend).parameters props = {k: v for k, v in old_legend.properties().items() if k in legend_kws} # Delegate default bbox_to_anchor rules to matplotlib props.pop("bbox_to_anchor") # Try to propagate the existing title and font properties; respect new ones too title = props.pop("title") if "title" in kwargs: title.set_text(kwargs.pop("title")) title_kwargs = {k: v for k, v in kwargs.items() if k.startswith("title_")} for key, val in title_kwargs.items(): title.set(**{key[6:]: val}) kwargs.pop(key) # Try to respect the frame visibility kwargs.setdefault("frameon", old_legend.legendPatch.get_visible()) # Remove the old legend and create the new one props.update(kwargs) old_legend.remove() new_legend = legend_func(handles, labels, loc=loc, **props) new_legend.set_title(title.get_text(), title.get_fontproperties()) # Let the Grid object continue to track the correct legend object if isinstance(obj, Grid): obj._legend = new_legend def _kde_support(data, bw, gridsize, cut, clip): """Establish support for a kernel density estimate.""" support_min = max(data.min() - bw * cut, clip[0]) support_max = min(data.max() + bw * cut, clip[1]) support = np.linspace(support_min, support_max, gridsize) return support def percentiles(a, pcts, axis=None): """Like scoreatpercentile but can take and return array of percentiles. DEPRECATED: will be removed in a future version. 
Parameters ---------- a : array data pcts : sequence of percentile values percentile or percentiles to find score at axis : int or None if not None, computes scores over this axis Returns ------- scores: array array of scores at requested percentiles first dimension is length of object passed to ``pcts`` """ msg = "This function is deprecated and will be removed in a future version" warnings.warn(msg, FutureWarning) scores = [] try: n = len(pcts) except TypeError: pcts = [pcts] n = 0 for i, p in enumerate(pcts): if axis is None: score = stats.scoreatpercentile(a.ravel(), p) else: score = np.apply_along_axis(stats.scoreatpercentile, axis, a, p) scores.append(score) scores = np.asarray(scores) if not n: scores = scores.squeeze() return scores def ci(a, which=95, axis=None): """Return a percentile range from an array of values.""" p = 50 - which / 2, 50 + which / 2 return np.nanpercentile(a, p, axis) def sig_stars(p): """Return a R-style significance string corresponding to p values. DEPRECATED: will be removed in a future version. """ msg = "This function is deprecated and will be removed in a future version" warnings.warn(msg, FutureWarning) if p < 0.001: return "***" elif p < 0.01: return "**" elif p < 0.05: return "*" elif p < 0.1: return "." return "" def iqr(a): """Calculate the IQR for an array of numbers. DEPRECATED: will be removed in a future version. """ msg = "This function is deprecated and will be removed in a future version" warnings.warn(msg, FutureWarning) a = np.asarray(a) q1 = stats.scoreatpercentile(a, 25) q3 = stats.scoreatpercentile(a, 75) return q3 - q1 def get_dataset_names(): """Report available example datasets, useful for reporting issues. Requires an internet connection. 
""" url = "https://github.com/mwaskom/seaborn-data" with urlopen(url) as resp: html = resp.read() pat = r"/mwaskom/seaborn-data/blob/master/(\w*).csv" datasets = re.findall(pat, html.decode()) return datasets def get_data_home(data_home=None): """Return a path to the cache directory for example datasets. This directory is then used by :func:`load_dataset`. If the ``data_home`` argument is not specified, it tries to read from the ``SEABORN_DATA`` environment variable and defaults to ``~/seaborn-data``. """ if data_home is None: data_home = os.environ.get('SEABORN_DATA', os.path.join('~', 'seaborn-data')) data_home = os.path.expanduser(data_home) if not os.path.exists(data_home): os.makedirs(data_home) return data_home def load_dataset(name, cache=True, data_home=None, **kws): """Load an example dataset from the online repository (requires internet). This function provides quick access to a small number of example datasets that are useful for documenting seaborn or generating reproducible examples for bug reports. It is not necessary for normal usage. Note that some of the datasets have a small amount of preprocessing applied to define a proper ordering for categorical variables. Use :func:`get_dataset_names` to see a list of available datasets. Parameters ---------- name : str Name of the dataset (``{name}.csv`` on https://github.com/mwaskom/seaborn-data). cache : boolean, optional If True, try to load from the local cache first, and save to the cache if a download is required. data_home : string, optional The directory in which to cache data; see :func:`get_data_home`. kws : keys and values, optional Additional keyword arguments are passed to passed through to :func:`pandas.read_csv`. Returns ------- df : :class:`pandas.DataFrame` Tabular data, possibly with some preprocessing applied. """ # A common beginner mistake is to assume that one's personal data needs # to be passed through this function to be usable with seaborn. 
    # Let's provide a more helpful error than you would otherwise get.
    if isinstance(name, pd.DataFrame):
        err = (
            "This function accepts only strings (the name of an example dataset). "
            "You passed a pandas DataFrame. If you have your own dataset, "
            "it is not necessary to use this function before plotting."
        )
        raise TypeError(err)

    url = f"https://raw.githubusercontent.com/mwaskom/seaborn-data/master/{name}.csv"

    if cache:
        cache_path = os.path.join(get_data_home(data_home), os.path.basename(url))
        if not os.path.exists(cache_path):
            if name not in get_dataset_names():
                raise ValueError(f"'{name}' is not one of the example datasets.")
            urlretrieve(url, cache_path)
        full_path = cache_path
    else:
        full_path = url

    df = pd.read_csv(full_path, **kws)

    if df.iloc[-1].isnull().all():
        df = df.iloc[:-1]

    # Set some columns as a categorical type with ordered levels
    if name == "tips":
        df["day"] = pd.Categorical(df["day"], ["Thur", "Fri", "Sat", "Sun"])
        df["sex"] = pd.Categorical(df["sex"], ["Male", "Female"])
        df["time"] = pd.Categorical(df["time"], ["Lunch", "Dinner"])
        df["smoker"] = pd.Categorical(df["smoker"], ["Yes", "No"])

    if name == "flights":
        months = df["month"].str[:3]
        df["month"] = pd.Categorical(months, months.unique())

    if name == "exercise":
        df["time"] = pd.Categorical(df["time"], ["1 min", "15 min", "30 min"])
        df["kind"] = pd.Categorical(df["kind"], ["rest", "walking", "running"])
        df["diet"] = pd.Categorical(df["diet"], ["no fat", "low fat"])

    if name == "titanic":
        df["class"] = pd.Categorical(df["class"], ["First", "Second", "Third"])
        df["deck"] = pd.Categorical(df["deck"], list("ABCDEFG"))

    if name == "penguins":
        df["sex"] = df["sex"].str.title()

    if name == "diamonds":
        df["color"] = pd.Categorical(
            df["color"], ["D", "E", "F", "G", "H", "I", "J"],
        )
        df["clarity"] = pd.Categorical(
            df["clarity"], ["IF", "VVS1", "VVS2", "VS1", "VS2", "SI1", "SI2", "I1"],
        )
        df["cut"] = pd.Categorical(
            df["cut"], ["Ideal", "Premium", "Very Good", "Good", "Fair"],
        )

    return df


def axis_ticklabels_overlap(labels):
    """Return a boolean for whether the list of ticklabels has overlaps.

    Parameters
    ----------
    labels : list of matplotlib ticklabels

    Returns
    -------
    overlap : boolean
        True if any of the labels overlap.

    """
    if not labels:
        return False
    try:
        bboxes = [l.get_window_extent() for l in labels]
        overlaps = [b.count_overlaps(bboxes) for b in bboxes]
        return max(overlaps) > 1
    except RuntimeError:
        # Issue on macos backend raises an error in the above code
        return False


def axes_ticklabels_overlap(ax):
    """Return booleans for whether the x and y ticklabels on an Axes overlap.

    Parameters
    ----------
    ax : matplotlib Axes

    Returns
    -------
    x_overlap, y_overlap : booleans
        True when the labels on that axis overlap.

    """
    return (axis_ticklabels_overlap(ax.get_xticklabels()),
            axis_ticklabels_overlap(ax.get_yticklabels()))


def locator_to_legend_entries(locator, limits, dtype):
    """Return levels and formatted levels for brief numeric legends."""
    raw_levels = locator.tick_values(*limits).astype(dtype)

    # The locator can return ticks outside the limits, clip them here
    raw_levels = [l for l in raw_levels if l >= limits[0] and l <= limits[1]]

    class dummy_axis:
        def get_view_interval(self):
            return limits

    if isinstance(locator, mpl.ticker.LogLocator):
        formatter = mpl.ticker.LogFormatter()
    else:
        formatter = mpl.ticker.ScalarFormatter()
    formatter.axis = dummy_axis()

    # TODO: The following two lines should be replaced
    # once pinned matplotlib>=3.1.0 with:
    # formatted_levels = formatter.format_ticks(raw_levels)
    formatter.set_locs(raw_levels)
    formatted_levels = [formatter(x) for x in raw_levels]

    return raw_levels, formatted_levels


def relative_luminance(color):
    """Calculate the relative luminance of a color according to W3C standards

    Parameters
    ----------
    color : matplotlib color or sequence of matplotlib colors
        Hex code, rgb-tuple, or html color name.
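`axis_ticklabels_overlap` counts, for each label's bounding box, how many boxes in the list it intersects; since every box overlaps itself, `> 1` means a genuine collision. A one-dimensional sketch of the same counting idea, with made-up intervals standing in for matplotlib bboxes:

```python
def count_overlaps(interval, intervals):
    # Two half-open intervals overlap when neither lies entirely to one side
    lo, hi = interval
    return sum(1 for (a, b) in intervals if a < hi and lo < b)

# First two intervals collide; the third stands clear
intervals = [(0, 2), (1, 3), (5, 6)]
counts = [count_overlaps(iv, intervals) for iv in intervals]
overlapping = max(counts) > 1
```

Each interval counts itself, so `counts` is `[2, 2, 1]` and `overlapping` is `True`, mirroring how `count_overlaps` on bboxes always reports at least 1.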
    Returns
    -------
    luminance : float(s) between 0 and 1

    """
    rgb = mpl.colors.colorConverter.to_rgba_array(color)[:, :3]
    rgb = np.where(rgb <= .03928, rgb / 12.92, ((rgb + .055) / 1.055) ** 2.4)
    lum = rgb.dot([.2126, .7152, .0722])
    try:
        return lum.item()
    except ValueError:
        return lum


def to_utf8(obj):
    """Return a string representing a Python object.

    Strings (i.e. type ``str``) are returned unchanged.

    Byte strings (i.e. type ``bytes``) are returned as UTF-8-decoded strings.

    For other objects, the method ``__str__()`` is called, and the result is
    returned as a string.

    Parameters
    ----------
    obj : object
        Any Python object

    Returns
    -------
    s : str
        UTF-8-decoded string representation of ``obj``

    """
    if isinstance(obj, str):
        return obj
    try:
        return obj.decode(encoding="utf-8")
    except AttributeError:  # obj is not bytes-like
        return str(obj)


def _normalize_kwargs(kws, artist):
    """Wrapper for mpl.cbook.normalize_kwargs that supports <= 3.2.1."""
    _alias_map = {
        'color': ['c'],
        'linewidth': ['lw'],
        'linestyle': ['ls'],
        'facecolor': ['fc'],
        'edgecolor': ['ec'],
        'markerfacecolor': ['mfc'],
        'markeredgecolor': ['mec'],
        'markeredgewidth': ['mew'],
        'markersize': ['ms'],
    }
    try:
        kws = normalize_kwargs(kws, artist)
    except AttributeError:
        kws = normalize_kwargs(kws, _alias_map)
    return kws


def _check_argument(param, options, value):
    """Raise if value for param is not in options."""
    if value not in options:
        raise ValueError(
            f"`{param}` must be one of {options}, but {value} was passed."
        )


def _assign_default_kwargs(kws, call_func, source_func):
    """Assign default kwargs for call_func using values from source_func."""
    # This exists so that axes-level functions and figure-level functions can
    # both call a Plotter method while having the default kwargs be defined in
    # the signature of the axes-level function.
    # An alternative would be to have a decorator on the method that sets its
    # defaults based on those defined in the axes-level function.
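The array code in `relative_luminance` implements the W3C formula: each sRGB channel is linearized (a linear segment below the 0.03928 threshold, a gamma curve above it) and the channels are combined with the 0.2126/0.7152/0.0722 weights. A scalar, pure-Python version for a single color tuple (the name `rel_luminance` is invented for the sketch):

```python
def rel_luminance(rgb):
    # rgb: (r, g, b) floats in [0, 1]
    def linearize(c):
        # sRGB companding: linear near black, power curve elsewhere
        return c / 12.92 if c <= .03928 else ((c + .055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return .2126 * r + .7152 * g + .0722 * b

white = rel_luminance((1.0, 1.0, 1.0))  # weights sum to 1, so this is ~1.0
black = rel_luminance((0.0, 0.0, 0.0))  # 0.0
```

Seaborn uses this value in heatmap annotation to decide whether light or dark text will be readable on a given cell color.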
    # Then the figure-level function would not need to worry about defaults.
    # I am not sure which is better.
    needed = inspect.signature(call_func).parameters
    defaults = inspect.signature(source_func).parameters

    for param in needed:
        if param in defaults and param not in kws:
            kws[param] = defaults[param].default

    return kws


def adjust_legend_subtitles(legend):
    """Make invisible-handle "subtitles" entries look more like titles."""
    # Legend title not in rcParams until 3.0
    font_size = plt.rcParams.get("legend.title_fontsize", None)
    hpackers = legend.findobj(mpl.offsetbox.VPacker)[0].get_children()
    for hpack in hpackers:
        draw_area, text_area = hpack.get_children()
        handles = draw_area.get_children()
        if not all(artist.get_visible() for artist in handles):
            draw_area.set_width(0)
            for text in text_area.get_children():
                if font_size is not None:
                    text.set_size(font_size)


seaborn-0.11.2/seaborn/widgets.py

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# Lots of different places that widgets could come from...
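The signature-introspection trick in `_assign_default_kwargs` can be exercised with two toy functions (both names and the standalone `assign_defaults` helper are invented for the sketch):

```python
import inspect

def source_func(x, scale=2, shift=0):
    # The axes-level function: its signature defines the defaults
    return x * scale + shift

def call_func(x, scale=None, shift=None):
    # The figure-level entry point: defaults get filled in from source_func
    return x * scale + shift

def assign_defaults(kws, call_func, source_func):
    # Same logic as _assign_default_kwargs above
    needed = inspect.signature(call_func).parameters
    defaults = inspect.signature(source_func).parameters
    for param in needed:
        if param in defaults and param not in kws:
            kws[param] = defaults[param].default
    return kws

# Caller supplies x and shift; scale is pulled from source_func's signature
kws = assign_defaults({"x": 1, "shift": 5}, call_func, source_func)
```

Note that parameters without a default in `source_func` would be filled with `inspect.Parameter.empty`, so in practice the caller's `kws` must already cover the required arguments.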
try:
    from ipywidgets import interact, FloatSlider, IntSlider
except ImportError:
    import warnings
    # ignore ShimWarning raised by IPython, see GH #892
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        try:
            from IPython.html.widgets import interact, FloatSlider, IntSlider
        except ImportError:
            try:
                from IPython.html.widgets import (interact,
                                                  FloatSliderWidget,
                                                  IntSliderWidget)
                FloatSlider = FloatSliderWidget
                IntSlider = IntSliderWidget
            except ImportError:
                pass

from .miscplot import palplot
from .palettes import (color_palette, dark_palette, light_palette,
                       diverging_palette, cubehelix_palette)


__all__ = ["choose_colorbrewer_palette", "choose_cubehelix_palette",
           "choose_dark_palette", "choose_light_palette",
           "choose_diverging_palette"]


def _init_mutable_colormap():
    """Create a matplotlib colormap that will be updated by the widgets."""
    greys = color_palette("Greys", 256)
    cmap = LinearSegmentedColormap.from_list("interactive", greys)
    cmap._init()
    cmap._set_extremes()
    return cmap


def _update_lut(cmap, colors):
    """Change the LUT values in a matplotlib colormap in-place."""
    cmap._lut[:256] = colors
    cmap._set_extremes()


def _show_cmap(cmap):
    """Show a continuous matplotlib colormap."""
    from .rcmod import axes_style  # Avoid circular import
    with axes_style("white"):
        f, ax = plt.subplots(figsize=(8.25, .75))
    ax.set(xticks=[], yticks=[])
    x = np.linspace(0, 1, 256)[np.newaxis, :]
    ax.pcolormesh(x, cmap=cmap)


def choose_colorbrewer_palette(data_type, as_cmap=False):
    """Select a palette from the ColorBrewer set.

    These palettes are built into matplotlib and can be used by name in
    many seaborn functions, or by passing the object returned by this
    function.

    Parameters
    ----------
    data_type : {'sequential', 'diverging', 'qualitative'}
        This describes the kind of data you want to visualize. See the
        seaborn color palette docs for more information about how to
        choose this value. Note that you can pass substrings (e.g. 'q'
        for 'qualitative').
    as_cmap : bool
        If True, the return value is a matplotlib colormap rather than a
        list of discrete colors.

    Returns
    -------
    pal or cmap : list of colors or matplotlib colormap
        Object that can be passed to plotting functions.

    See Also
    --------
    dark_palette : Create a sequential palette with dark low values.
    light_palette : Create a sequential palette with bright low values.
    diverging_palette : Create a diverging palette from selected colors.
    cubehelix_palette : Create a sequential palette or colormap using the
                        cubehelix system.

    """
    if data_type.startswith("q") and as_cmap:
        raise ValueError("Qualitative palettes cannot be colormaps.")

    pal = []
    if as_cmap:
        cmap = _init_mutable_colormap()

    if data_type.startswith("s"):
        opts = ["Greys", "Reds", "Greens", "Blues", "Oranges", "Purples",
                "BuGn", "BuPu", "GnBu", "OrRd", "PuBu", "PuRd", "RdPu", "YlGn",
                "PuBuGn", "YlGnBu", "YlOrBr", "YlOrRd"]
        variants = ["regular", "reverse", "dark"]

        @interact
        def choose_sequential(name=opts, n=(2, 18),
                              desat=FloatSlider(min=0, max=1, value=1),
                              variant=variants):
            if variant == "reverse":
                name += "_r"
            elif variant == "dark":
                name += "_d"

            if as_cmap:
                colors = color_palette(name, 256, desat)
                _update_lut(cmap, np.c_[colors, np.ones(256)])
                _show_cmap(cmap)
            else:
                pal[:] = color_palette(name, n, desat)
                palplot(pal)

    elif data_type.startswith("d"):
        opts = ["RdBu", "RdGy", "PRGn", "PiYG", "BrBG",
                "RdYlBu", "RdYlGn", "Spectral"]
        variants = ["regular", "reverse"]

        @interact
        def choose_diverging(name=opts, n=(2, 16),
                             desat=FloatSlider(min=0, max=1, value=1),
                             variant=variants):
            if variant == "reverse":
                name += "_r"
            if as_cmap:
                colors = color_palette(name, 256, desat)
                _update_lut(cmap, np.c_[colors, np.ones(256)])
                _show_cmap(cmap)
            else:
                pal[:] = color_palette(name, n, desat)
                palplot(pal)

    elif data_type.startswith("q"):
        opts = ["Set1", "Set2", "Set3", "Paired", "Accent",
                "Pastel1", "Pastel2", "Dark2"]

        @interact
        def choose_qualitative(name=opts, n=(2, 16),
                               desat=FloatSlider(min=0, max=1, value=1)):
            pal[:] = color_palette(name, n, desat)
            palplot(pal)

    if as_cmap:
        return cmap
    return pal


def choose_dark_palette(input="husl", as_cmap=False):
    """Launch an interactive widget to create a dark sequential palette.

    This corresponds with the :func:`dark_palette` function. This kind
    of palette is good for data that range between relatively
    uninteresting low values and interesting high values.

    Requires IPython 2+ and must be used in the notebook.

    Parameters
    ----------
    input : {'husl', 'hls', 'rgb'}
        Color space for defining the seed value. Note that the default is
        different than the default input for :func:`dark_palette`.
    as_cmap : bool
        If True, the return value is a matplotlib colormap rather than a
        list of discrete colors.

    Returns
    -------
    pal or cmap : list of colors or matplotlib colormap
        Object that can be passed to plotting functions.

    See Also
    --------
    dark_palette : Create a sequential palette with dark low values.
    light_palette : Create a sequential palette with bright low values.
    cubehelix_palette : Create a sequential palette or colormap using the
                        cubehelix system.
""" pal = [] if as_cmap: cmap = _init_mutable_colormap() if input == "rgb": @interact def choose_dark_palette_rgb(r=(0., 1.), g=(0., 1.), b=(0., 1.), n=(3, 17)): color = r, g, b if as_cmap: colors = dark_palette(color, 256, input="rgb") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = dark_palette(color, n, input="rgb") palplot(pal) elif input == "hls": @interact def choose_dark_palette_hls(h=(0., 1.), l=(0., 1.), # noqa: E741 s=(0., 1.), n=(3, 17)): color = h, l, s if as_cmap: colors = dark_palette(color, 256, input="hls") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = dark_palette(color, n, input="hls") palplot(pal) elif input == "husl": @interact def choose_dark_palette_husl(h=(0, 359), s=(0, 99), l=(0, 99), # noqa: E741 n=(3, 17)): color = h, s, l if as_cmap: colors = dark_palette(color, 256, input="husl") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = dark_palette(color, n, input="husl") palplot(pal) if as_cmap: return cmap return pal def choose_light_palette(input="husl", as_cmap=False): """Launch an interactive widget to create a light sequential palette. This corresponds with the :func:`light_palette` function. This kind of palette is good for data that range between relatively uninteresting low values and interesting high values. Requires IPython 2+ and must be used in the notebook. Parameters ---------- input : {'husl', 'hls', 'rgb'} Color space for defining the seed value. Note that the default is different than the default input for :func:`light_palette`. as_cmap : bool If True, the return value is a matplotlib colormap rather than a list of discrete colors. Returns ------- pal or cmap : list of colors or matplotlib colormap Object that can be passed to plotting functions. See Also -------- light_palette : Create a sequential palette with bright low values. dark_palette : Create a sequential palette with dark low values. cubehelix_palette : Create a sequential palette or colormap using the cubehelix system. 
""" pal = [] if as_cmap: cmap = _init_mutable_colormap() if input == "rgb": @interact def choose_light_palette_rgb(r=(0., 1.), g=(0., 1.), b=(0., 1.), n=(3, 17)): color = r, g, b if as_cmap: colors = light_palette(color, 256, input="rgb") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = light_palette(color, n, input="rgb") palplot(pal) elif input == "hls": @interact def choose_light_palette_hls(h=(0., 1.), l=(0., 1.), # noqa: E741 s=(0., 1.), n=(3, 17)): color = h, l, s if as_cmap: colors = light_palette(color, 256, input="hls") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = light_palette(color, n, input="hls") palplot(pal) elif input == "husl": @interact def choose_light_palette_husl(h=(0, 359), s=(0, 99), l=(0, 99), # noqa: E741 n=(3, 17)): color = h, s, l if as_cmap: colors = light_palette(color, 256, input="husl") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = light_palette(color, n, input="husl") palplot(pal) if as_cmap: return cmap return pal def choose_diverging_palette(as_cmap=False): """Launch an interactive widget to choose a diverging color palette. This corresponds with the :func:`diverging_palette` function. This kind of palette is good for data that range between interesting low values and interesting high values with a meaningful midpoint. (For example, change scores relative to some baseline value). Requires IPython 2+ and must be used in the notebook. Parameters ---------- as_cmap : bool If True, the return value is a matplotlib colormap rather than a list of discrete colors. Returns ------- pal or cmap : list of colors or matplotlib colormap Object that can be passed to plotting functions. See Also -------- diverging_palette : Create a diverging color palette or colormap. choose_colorbrewer_palette : Interactively choose palettes from the colorbrewer set, including diverging palettes. 
""" pal = [] if as_cmap: cmap = _init_mutable_colormap() @interact def choose_diverging_palette( h_neg=IntSlider(min=0, max=359, value=220), h_pos=IntSlider(min=0, max=359, value=10), s=IntSlider(min=0, max=99, value=74), l=IntSlider(min=0, max=99, value=50), # noqa: E741 sep=IntSlider(min=1, max=50, value=10), n=(2, 16), center=["light", "dark"] ): if as_cmap: colors = diverging_palette(h_neg, h_pos, s, l, sep, 256, center) _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = diverging_palette(h_neg, h_pos, s, l, sep, n, center) palplot(pal) if as_cmap: return cmap return pal def choose_cubehelix_palette(as_cmap=False): """Launch an interactive widget to create a sequential cubehelix palette. This corresponds with the :func:`cubehelix_palette` function. This kind of palette is good for data that range between relatively uninteresting low values and interesting high values. The cubehelix system allows the palette to have more hue variance across the range, which can be helpful for distinguishing a wider range of values. Requires IPython 2+ and must be used in the notebook. Parameters ---------- as_cmap : bool If True, the return value is a matplotlib colormap rather than a list of discrete colors. Returns ------- pal or cmap : list of colors or matplotlib colormap Object that can be passed to plotting functions. See Also -------- cubehelix_palette : Create a sequential palette or colormap using the cubehelix system. 
""" pal = [] if as_cmap: cmap = _init_mutable_colormap() @interact def choose_cubehelix(n_colors=IntSlider(min=2, max=16, value=9), start=FloatSlider(min=0, max=3, value=0), rot=FloatSlider(min=-1, max=1, value=.4), gamma=FloatSlider(min=0, max=5, value=1), hue=FloatSlider(min=0, max=1, value=.8), light=FloatSlider(min=0, max=1, value=.85), dark=FloatSlider(min=0, max=1, value=.15), reverse=False): if as_cmap: colors = cubehelix_palette(256, start, rot, gamma, hue, light, dark, reverse) _update_lut(cmap, np.c_[colors, np.ones(256)]) _show_cmap(cmap) else: pal[:] = cubehelix_palette(n_colors, start, rot, gamma, hue, light, dark, reverse) palplot(pal) if as_cmap: return cmap return pal seaborn-0.11.2/setup.cfg000066400000000000000000000002021410631356500150430ustar00rootroot00000000000000[metadata] license_file = LICENSE [flake8] max-line-length = 88 exclude = seaborn/cm.py,seaborn/external ignore = E741,F522,W503 seaborn-0.11.2/setup.py000066400000000000000000000057421410631356500147520ustar00rootroot00000000000000#! /usr/bin/env python # # Copyright (C) 2012-2020 Michael Waskom DESCRIPTION = "seaborn: statistical data visualization" LONG_DESCRIPTION = """\ Seaborn is a library for making statistical graphics in Python. It is built on top of `matplotlib `_ and closely integrated with `pandas `_ data structures. 
Here is some of the functionality that seaborn offers:

- A dataset-oriented API for examining relationships between multiple variables
- Convenient views onto the overall structure of complex datasets
- Specialized support for using categorical variables to show observations or
  aggregate statistics
- Options for visualizing univariate or bivariate distributions and for
  comparing them between subsets of data
- Automatic estimation and plotting of linear regression models for different
  kinds of dependent variables
- High-level abstractions for structuring multi-plot grids that let you easily
  build complex visualizations
- Concise control over matplotlib figure styling with several built-in themes
- Tools for choosing color palettes that faithfully reveal patterns in your
  data

Seaborn aims to make visualization a central part of exploring and
understanding data. Its dataset-oriented plotting functions operate on
dataframes and arrays containing whole datasets and internally perform the
necessary semantic mappings and statistical aggregations to produce
informative plots.
""" DISTNAME = 'seaborn' MAINTAINER = 'Michael Waskom' MAINTAINER_EMAIL = 'mwaskom@gmail.com' URL = 'https://seaborn.pydata.org' LICENSE = 'BSD (3-clause)' DOWNLOAD_URL = 'https://github.com/mwaskom/seaborn/' VERSION = '0.11.2' PYTHON_REQUIRES = ">=3.6" INSTALL_REQUIRES = [ 'numpy>=1.15', 'scipy>=1.0', 'pandas>=0.23', 'matplotlib>=2.2', ] PACKAGES = [ 'seaborn', 'seaborn.colors', 'seaborn.external', 'seaborn.tests', ] CLASSIFIERS = [ 'Intended Audience :: Science/Research', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'License :: OSI Approved :: BSD License', 'Topic :: Scientific/Engineering :: Visualization', 'Topic :: Multimedia :: Graphics', 'Operating System :: OS Independent', 'Framework :: Matplotlib', ] if __name__ == "__main__": from setuptools import setup import sys if sys.version_info[:2] < (3, 6): raise RuntimeError("seaborn requires python >= 3.6.") setup( name=DISTNAME, author=MAINTAINER, author_email=MAINTAINER_EMAIL, maintainer=MAINTAINER, maintainer_email=MAINTAINER_EMAIL, description=DESCRIPTION, long_description=LONG_DESCRIPTION, license=LICENSE, url=URL, version=VERSION, download_url=DOWNLOAD_URL, python_requires=PYTHON_REQUIRES, install_requires=INSTALL_REQUIRES, packages=PACKAGES, classifiers=CLASSIFIERS )