pyfilesystem2-2.4.12/.github/FUNDING.yml

# These are supported funding model platforms

custom: willmcgugan

pyfilesystem2-2.4.12/.github/PULL_REQUEST_TEMPLATE.md

## Type of changes

- Bug fix
- New feature
- Documentation / docstrings
- Tests
- Other

## Checklist

- [ ] I've run the latest [black](https://github.com/ambv/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [ ] I accept that @PyFilesystem/maintainers may be pedantic in the code review.

## Description

Please describe your changes here. If this fixes a bug, please link to the issue, if possible.

pyfilesystem2-2.4.12/.gitignore

.DS_Store

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# IPython Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# dotenv
.env

# virtualenv
venv/
ENV/

# Spyder project settings
.spyderproject

# Rope project settings
.ropeproject

# PyCharm
.idea/

# MyPy cache
.mypy_cache
.vscode

pyfilesystem2-2.4.12/.travis.yml

dist: xenial
sudo: false
language: python

python:
  - "2.7"
  - "3.4"
  - "3.5"
  - "3.6"
  - "3.7"
  - "3.8"
  - "3.9"
  - "pypy"
  - "pypy3.5-7.0"  # Need 7.0+ due to a bug in earlier versions that broke our tests.

matrix:
  include:
    - name: "Type checking"
      python: "3.7"
      env: TOXENV=typecheck
    - name: "Lint"
      python: "3.7"
      env: TOXENV=lint
  # Temporary bandaid for https://github.com/PyFilesystem/pyfilesystem2/issues/342
  allow_failures:
    - python: pypy
    - python: pypy3.5-7.0

before_install:
  - pip install -U tox tox-travis
  - pip --version
  - pip install -r testrequirements.txt
  - pip freeze

install:
  - pip install -e .

# command to run tests
script: tox

after_success:
  - coveralls

before_deploy:
  - pip install -U twine wheel
  - python setup.py sdist bdist_wheel

deploy:
  provider: script
  script: twine upload dist/*
  skip_cleanup: true
  on:
    python: 3.9
    tags: true
    repo: PyFilesystem/pyfilesystem2

pyfilesystem2-2.4.12/CHANGELOG.md

# Change Log

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/)
and this project adheres to [Semantic Versioning](http://semver.org/).
## [2.4.12] - 2021-01-14

### Added

- Missing `mode` attribute to `_MemoryFile` objects returned by `MemoryFS.openbin`.
- Missing `readinto` method for `MemoryFS` and `FTPFS` file objects. Closes [#380](https://github.com/PyFilesystem/pyfilesystem2/issues/380).
- Added compatibility with Windows FTP servers that return file information to the `LIST` command with 24-hour times. Closes [#438](https://github.com/PyFilesystem/pyfilesystem2/issues/438).

### Changed

- Start testing on PyPy. Due to [#342](https://github.com/PyFilesystem/pyfilesystem2/issues/342) we have to treat PyPy builds specially and allow them to fail, but at least we'll be able to see if we break something aside from known issues with FTP tests.
- Include docs in source distributions as well as the whole tests folder, ensuring `conftest.py` is present. Fixes [#364](https://github.com/PyFilesystem/pyfilesystem2/issues/364).
- Stop patching copy with Python 3.8+ because it already uses `sendfile`.

### Fixed

- Fixed crash when CPython's -OO flag is used.
- Fixed error when parsing timestamps from an FTP directory served from a WindowsNT FTP server. Fixes [#395](https://github.com/PyFilesystem/pyfilesystem2/issues/395).
- Fixed documentation of `Mode.to_platform_bin`. Closes [#382](https://github.com/PyFilesystem/pyfilesystem2/issues/382).
- Fixed the code example in the "Testing Filesystems" section of the "Implementing Filesystems" guide. Closes [#407](https://github.com/PyFilesystem/pyfilesystem2/issues/407).
- Fixed `FTPFS.openbin` not implicitly opening files in binary mode, as expected from `openbin`. Closes [#406](https://github.com/PyFilesystem/pyfilesystem2/issues/406).

## [2.4.11] - 2019-09-07

### Added

- Added `geturl` for TarFS and ZipFS for the 'fs' purpose. `NoURL` is raised for the 'download' purpose.
- Added helpful root path in `CreateFailed` exception [#340](https://github.com/PyFilesystem/pyfilesystem2/issues/340)
- Added Python 3.8 support

### Fixed

- Fixed tests leaving tmp files
- Fixed typing issues
- Fixed link namespace returning bytes
- Fixed broken FS URL on Windows [#329](https://github.com/PyFilesystem/pyfilesystem2/issues/329)
- Fixed hidden exception at fs.close() when opening an absent zip/tar file URL [#333](https://github.com/PyFilesystem/pyfilesystem2/issues/333)
- Fixed abstract class import from `collections` which would break on Python 3.8
- Fixed incorrect imports of `mock` on Python 3
- Removed some unused imports and unused `requirements.txt` file
- Added mypy checks to Travis. Closes [#332](https://github.com/PyFilesystem/pyfilesystem2/issues/332).
- Fixed missing `errno.ENOTSUP` on PyPy. Closes [#338](https://github.com/PyFilesystem/pyfilesystem2/issues/338).
- Fixed bug in a decorator that would trigger an `AttributeError` when a class was created that implemented a deprecated method and had no docstring of its own.

### Changed

- Entire test suite has been migrated to [pytest](https://docs.pytest.org/en/latest/). Closes [#327](https://github.com/PyFilesystem/pyfilesystem2/issues/327).
- Style checking is now enforced using `flake8`; this involved some code cleanup, such as removing unused imports.

## [2.4.10] - 2019-07-29

### Fixed

- Fixed broken WrapFS.movedir [#322](https://github.com/PyFilesystem/pyfilesystem2/issues/322)

## [2.4.9] - 2019-07-28

### Fixed

- Restored fs.path import
- Fixed potential race condition in makedirs. Fixes [#310](https://github.com/PyFilesystem/pyfilesystem2/issues/310)
- Added missing methods to WrapFS. Fixed [#294](https://github.com/PyFilesystem/pyfilesystem2/issues/294)

### Changed

- `MemFS` now immediately releases all memory it holds when `close()` is called, rather than when it gets garbage collected. Closes [issue #308](https://github.com/PyFilesystem/pyfilesystem2/issues/308).
- `FTPFS` now translates `EOFError` into `RemoteConnectionError`. Closes [#292](https://github.com/PyFilesystem/pyfilesystem2/issues/292)
- Added automatic close for filesystems that go out of scope. Fixes [#298](https://github.com/PyFilesystem/pyfilesystem2/issues/298)

## [2.4.8] - 2019-06-12

### Changed

- `geturl` will return a URL with user/password if needed @zmej-serow

## [2.4.7] - 2019-06-08

### Added

- Flag to OSFS to disable env var expansion

## [2.4.6] - 2019-06-08

### Added

- Implemented `geturl` in FTPFS @zmej-serow

### Fixed

- Fixed FTP test suite when time is not UTC-0 @mrg0029
- Fixed issues with paths in tarfs https://github.com/PyFilesystem/pyfilesystem2/issues/284

### Changed

- Dropped Python 3.3 support

## [2.4.5] - 2019-05-05

### Fixed

- Restored deprecated `setfile` method with a deprecation warning to change to `writefile`
- Fixed exception when a tarfile contains a path called '.' https://github.com/PyFilesystem/pyfilesystem2/issues/275
- Made TarFS directory loading lazy

### Changed

- Detect case insensitivity by writing a temp file

## [2.4.4] - 2019-02-23

### Fixed

- Fixed OSFS failure on NFS mounts

## [2.4.3] - 2019-02-23

### Fixed

- Fixed broken "case_insensitive" check
- Fixed Windows test failures

## [2.4.2] - 2019-02-22

### Fixed

- Fixed exception when Python runs with -OO

## [2.4.1] - 2019-02-20

### Fixed

- Fixed hash method missing from WrapFS

## [2.4.0] - 2019-02-15

### Added

- Added `exclude` and `filter_dirs` arguments to walk
- Micro-optimizations to walk

## [2.3.1] - 2019-02-10

### Fixed

- Add encoding check in OSFS.validatepath

## [2.3.0] - 2019-01-30

### Fixed

- IllegalBackReference had a mangled error message

### Added

- FS.hash method

## [2.2.1] - 2019-01-06

### Fixed

- `Registry.install` returns its argument.

## [2.2.0] - 2019-01-01

A few methods have been renamed for greater clarity (but functionality remains the same).
The old methods are now aliases and will continue to work, but will issue a deprecation warning via the `warnings` module. Please update your code accordingly.

- `getbytes` -> `readbytes`
- `getfile` -> `download`
- `gettext` -> `readtext`
- `setbytes` -> `writebytes`
- `setbinfile` -> `upload`
- `settext` -> `writetext`

### Changed

- Changed default chunk size in `copy_file_data` to 1MB
- Added `chunk_size` and `options` to `FS.upload`

## [2.1.3] - 2018-12-24

### Fixed

- Incomplete FTPFile.write when using `workers` @geoffjukes
- Fixed AppFS not creating directory

### Added

- Added load_extern switch to opener, fixes #228 @althanos

## [2.1.2] - 2018-11-10

### Added

- Support for Windows NT FTP servers @sspross

### Fixed

- Root dir of MemoryFS accessible as a file
- Packaging issues @televi
- Deprecation warning re collections.Mapping

## [2.1.1] - 2018-10-03

### Added

- Added PEP 561 py.typed files
- Use sendfile for faster copies @althonos
- Atomic exclusive mode in Py2.7 @sqwishy

### Fixed

- Fixed lstat @kamomil

## [2.1.0] - 2018-08-12

### Added

- fs.glob support

## [2.0.27] - 2018-08-05

### Fixed

- Fix for Windows paths #152
- Fixed ftp dir parsing (@dhirschfeld)

## [2.0.26] - 2018-07-26

### Fixed

- fs.copy and fs.move disable workers if not thread-safe
- fs.match detects case insensitivity
- Open in exclusive mode is atomic (@squishy)
- Exceptions can be pickleable (@Spacerat)

## [2.0.25] - 2018-07-20

### Added

- workers parameter to fs.copy, fs.move, and fs.mirror for concurrent copies

## [2.0.24] - 2018-06-28

### Added

- timeout to FTP opener

## [2.0.23] - 2018-05-02

- Fix for Markdown on PyPI, no code changes

## [2.0.22] - 2018-05-02

### Fixed

- Handling of broken unicode on Python 2.7

### Added

- Added fs.getospath

## [2.0.21] - 2018-05-02

### Added

- Typing information
- Added Info.suffix, Info.suffixes, Info.stem attributes

### Fixed

- Fixed issue with implied directories in TarFS

### Changed

- Changed path.splitext so that 'leading periods on the basename
are ignored', which is the behaviour of os.path.splitext

## [2.0.20] - 2018-03-13

### Fixed

- MultiFS.listdir now correctly filters out duplicates

## [2.0.19] - 2018-03-11

### Fixed

- encoding issue with TarFS
- CreateFailed now contains the original exception in `exc` attribute

## [2.0.18] - 2018-01-31

### Added

- fs.getfile function

### Changed

- Modified walk to use iterators internally (for more efficient walking)
- Modified fs.copy to use getfile

## [2.0.17] - 2017-11-20

### Fixed

- Issue with ZipFS files missing a byte

## [2.0.16] - 2017-11-11

### Added

- fs.parts

### Fixed

- Walk now yields Step named tuples as advertised

### Added

- Added max_depth parameter to fs.walk

## [2.0.15] - 2017-11-05

### Changed

- ZipFS files are now seekable (Martin Larralde)

## [2.0.14] - 2016-11-05

No changes, pushed wrong branch to PyPI.

## [2.0.13] - 2017-10-17

### Fixed

- Fixed ignore_errors in walk.py

## [2.0.12] - 2017-10-15

### Fixed

- settext, appendtext, appendbytes, setbytes now raise a TypeError if the type is wrong, rather than ValueError
- More efficient feature detection for FTPFS
- Fixes for `fs.filesize`
- Major documentation refactor (Martin Larralde)

## [2.0.11]

### Added

- fs.mirror

## [2.0.10]

### Added

- Added params support to FS URLs

### Fixed

- Many fixes to FTPFS contributed by Martin Larralde.

## [2.0.9]

### Changed

- MountFS and MultiFS now accept FS URLs
- Add openers for AppFS

## [2.0.8] - 2017-08-13

### Added

- Lstat info namespace
- Link info namespace
- FS.islink method
- Info.is_link method

## [2.0.7] - 2017-08-06

### Fixed

- Fixed entry point breaking pip

## [2.0.6] - 2017-08-05

### Fixed

- Opener refinements

## [2.0.5] - 2017-08-02

### Fixed

- Fixed potential for deadlock in MemoryFS

### Added

- Added factory parameter to opendir.
- ClosingSubFS.
- File objects are all derived from io.IOBase.

### Fixed

- Fix closing for FTP opener.

## [2.0.4] - 2017-06-11

### Added

- Opener extension mechanism contributed by Martin Larralde.
- Support for pathlike objects.

### Fixed

- Stat information was missing from info.

### Changed

- More specific error when `validatepath` throws an error about the path argument being the wrong type; changed from a ValueError to a TypeError.
- Deprecated `encoding` parameter in OSFS.

## [2.0.3] - 2017-04-22

### Added

- New `copy_if_newer` functionality in `copy` module.

### Fixed

- Improved `FTPFS` support for non-strict servers.

## [2.0.2] - 2017-03-12

### Changed

- Improved FTP support for non-compliant servers
- Fix for ZipFS implied directories

## [2.0.1] - 2017-03-11

### Added

- TarFS contributed by Martin Larralde

### Fixed

- FTPFS bugs.

## [2.0.0] - 2016-12-07

New version of the PyFilesystem API.

pyfilesystem2-2.4.12/CONTRIBUTING.md

# Contributing to PyFilesystem

Pull requests are very welcome for this project! For bug fixes or new features, please file an issue before submitting a pull request. If the change isn't trivial, it may be best to wait for feedback. For a quicker response, contact [Will McGugan](mailto:willmcgugan+pyfs@gmail.com) directly.

## Coding Guidelines

This project runs on Python 2.7 and Python 3.X. Python 2.7 will be dropped at some point, but for now, please maintain compatibility. Please format new code with [black](https://github.com/ambv/black), using the default settings.

## Tests

New code should have unit tests. We strive for near 100% coverage. Get in touch if you need assistance with the tests.

pyfilesystem2-2.4.12/CONTRIBUTORS.md

# Contributors (sorted alphabetically)

Many thanks to the following developers for contributing to this project:

- [Andreas Tollkötter](https://github.com/atollk)
- [C. W.](https://github.com/chfw)
- [Diego Argueta](https://github.com/dargueta)
- [Geoff Jukes](https://github.com/geoffjukes)
- [Giampaolo Cimino](https://github.com/gpcimino)
- [Justin Charlong](https://github.com/jcharlong)
- [Louis Sautier](https://github.com/sbraz)
- [Martin Larralde](https://github.com/althonos)
- [Morten Engelhardt Olsen](https://github.com/xoriath)
- [Nick Henderson](https://github.com/nwh)
- [Will McGugan](https://github.com/willmcgugan)
- [Zmej Serow](https://github.com/zmej-serow)

pyfilesystem2-2.4.12/LICENSE

MIT License

Copyright (c) 2017-2021 The PyFilesystem2 contributors
Copyright (c) 2016-2019 Will McGugan

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
pyfilesystem2-2.4.12/MANIFEST.in

include LICENSE
graft tests
graft docs
global-exclude __pycache__
global-exclude *.py[co]

pyfilesystem2-2.4.12/Makefile

.PHONY: release
release: cleandist
	python3 setup.py sdist bdist_wheel
	twine upload dist/*.whl dist/*.tar.gz

.PHONY: cleandist
cleandist:
	rm -f dist/*.whl dist/*.tar.gz

.PHONY: cleandocs
cleandocs:
	$(MAKE) -C docs clean

.PHONY: clean
clean: cleandist cleandocs

.PHONY: test
test:
	nosetests --with-coverage --cover-package=fs -a "!slow" tests

.PHONY: slowtest
slowtest:
	nosetests --with-coverage --cover-erase --cover-package=fs tests

.PHONY: testall
testall:
	tox

.PHONY: docs
docs:
	$(MAKE) -C docs html
	python -c "import os, webbrowser; webbrowser.open('file://' + os.path.abspath('./docs/build/html/index.html'))"

.PHONY: typecheck
typecheck:
	mypy -p fs --config setup.cfg

pyfilesystem2-2.4.12/README.md

# PyFilesystem2

Python's Filesystem abstraction layer.
[![PyPI version](https://badge.fury.io/py/fs.svg)](https://badge.fury.io/py/fs)
[![PyPI](https://img.shields.io/pypi/pyversions/fs.svg)](https://pypi.org/project/fs/)
[![Downloads](https://pepy.tech/badge/fs/month)](https://pepy.tech/project/fs/month)
[![Build Status](https://travis-ci.org/PyFilesystem/pyfilesystem2.svg?branch=master)](https://travis-ci.org/PyFilesystem/pyfilesystem2)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/pyfilesystem/pyfilesystem2?branch=master&svg=true)](https://ci.appveyor.com/project/willmcgugan/pyfilesystem2)
[![Coverage Status](https://coveralls.io/repos/github/PyFilesystem/pyfilesystem2/badge.svg)](https://coveralls.io/github/PyFilesystem/pyfilesystem2)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/30ad6445427349218425d93886ade9ee)](https://www.codacy.com/app/will-mcgugan/pyfilesystem2?utm_source=github.com&utm_medium=referral&utm_content=PyFilesystem/pyfilesystem2&utm_campaign=Badge_Grade)

## Documentation

- [Wiki](https://www.pyfilesystem.org)
- [API Documentation](https://docs.pyfilesystem.org/)
- [GitHub Repository](https://github.com/PyFilesystem/pyfilesystem2)
- [Blog](https://www.willmcgugan.com/tag/fs/)

## Introduction

Think of PyFilesystem's `FS` objects as the next logical step to Python's `file` objects. In the same way that file objects abstract a single file, FS objects abstract an entire filesystem.

Let's look at a simple piece of code as an example. The following function uses the PyFilesystem API to count the number of non-blank lines of Python code in a directory. It works _recursively_, so it will find `.py` files in all sub-directories.
```python
def count_python_loc(fs):
    """Count non-blank lines of Python code."""
    count = 0
    for path in fs.walk.files(filter=['*.py']):
        with fs.open(path) as python_file:
            count += sum(1 for line in python_file if line.strip())
    return count
```

We can call `count_python_loc` as follows:

```python
from fs import open_fs

projects_fs = open_fs('~/projects')
print(count_python_loc(projects_fs))
```

The line `projects_fs = open_fs('~/projects')` opens an FS object that maps to the `projects` directory in your home folder. That object is used by `count_python_loc` when counting lines of code.

To count the lines of Python code in a _zip file_, we can make the following change:

```python
projects_fs = open_fs('zip://projects.zip')
```

Or to count the Python lines on an FTP server:

```python
projects_fs = open_fs('ftp://ftp.example.org/projects')
```

No changes to `count_python_loc` are necessary, because PyFilesystem provides a simple, consistent interface to anything that resembles a collection of files and directories. Essentially, it allows you to write code that is independent of where and how the files are physically stored.

Contrast that with a version that purely uses the standard library:

```python
import os

def count_py_loc(path):
    count = 0
    for root, dirs, files in os.walk(path):
        for name in files:
            if name.endswith('.py'):
                with open(os.path.join(root, name), 'rt') as python_file:
                    count += sum(1 for line in python_file if line.strip())
    return count
```

This version is similar to the PyFilesystem code above, but would only work with the OS filesystem. Any other filesystem would require an entirely different API, and you would likely have to re-implement the directory walking functionality of
## Credits The following developers have contributed code and their time to this projects: - [Will McGugan](https://github.com/willmcgugan) - [Martin Larralde](https://github.com/althonos) - [Giampaolo Cimino](https://github.com/gpcimino) - [Geoff Jukes](https://github.com/geoffjukes) See [CONTRIBUTORS.md](https://github.com/PyFilesystem/pyfilesystem2/blob/master/CONTRIBUTORS.md) for a full list of contributors. PyFilesystem2 owes a massive debt of gratitude to the following developers who contributed code and ideas to the original version. - Ryan Kelly - Andrew Scheller - Ben Timby Apologies if I missed anyone, feel free to prompt me if your name is missing here. ## Support If commercial support is required, please contact [Will McGugan](mailto:willmcgugan@gmail.com). pyfilesystem2-2.4.12/appveyor.yml000066400000000000000000000016401400005060600167770ustar00rootroot00000000000000environment: matrix: # For Python versions available on Appveyor, see # https://www.appveyor.com/docs/windows-images-software/#python # The list here is complete (excluding Python 2.6, which # isn't covered by this document) at the time of writing. 
    # - PYTHON: "C:\\Python27"
    # - PYTHON: "C:\\Python33"
    # - PYTHON: "C:\\Python34"
    # - PYTHON: "C:\\Python35"
    # - PYTHON: "C:\\Python27-x64"
    # - PYTHON: "C:\\Python33-x64"
    #   DISTUTILS_USE_SDK: "1"
    # - PYTHON: "C:\\Python34-x64"
    #   DISTUTILS_USE_SDK: "1"
    # - PYTHON: "C:\\Python35-x64"
    - PYTHON: "C:\\Python36-x64"
    - PYTHON: "C:\\Python37-x64"

install:
  # We need wheel installed to build wheels
  - "%PYTHON%\\python.exe -m pip install pytest pytest-randomly pytest-cov psutil pyftpdlib mock"
  - "%PYTHON%\\python.exe setup.py install"

build: off

test_script:
  - "%PYTHON%\\python.exe -m pytest -v tests"

pyfilesystem2-2.4.12/docs/Makefile

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don\'t have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source .PHONY: help help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " applehelp to make an Apple Help Book" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " epub3 to make an epub3" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @echo " coverage to run coverage check of the documentation (if enabled)" @echo " dummy to check syntax errors of document sources" .PHONY: clean clean: rm -rf $(BUILDDIR)/* .PHONY: html html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 
.PHONY: dirhtml dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." .PHONY: singlehtml singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." .PHONY: pickle pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." .PHONY: json json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." .PHONY: htmlhelp htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." .PHONY: qthelp qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/PyFilesystem2.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/PyFilesystem2.qhc" .PHONY: applehelp applehelp: $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp @echo @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." @echo "N.B. You won't be able to view it unless you put it in" \ "~/Library/Documentation/Help or install it in your application" \ "bundle." .PHONY: devhelp devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/PyFilesystem2" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/PyFilesystem2" @echo "# devhelp" .PHONY: epub epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 
.PHONY: epub3 epub3: $(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3 @echo @echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3." .PHONY: latex latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." .PHONY: latexpdf latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." .PHONY: latexpdfja latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." .PHONY: text text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." .PHONY: man man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." .PHONY: texinfo texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." .PHONY: info info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." .PHONY: gettext gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 
.PHONY: changes changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." .PHONY: linkcheck linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." .PHONY: doctest doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." .PHONY: coverage coverage: $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage @echo "Testing of coverage in the sources finished, look at the " \ "results in $(BUILDDIR)/coverage/python.txt." .PHONY: xml xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." .PHONY: pseudoxml pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." .PHONY: dummy dummy: $(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy @echo @echo "Build finished. Dummy builder generates no files." 
pyfilesystem2-2.4.12/docs/editdocs.sh000077500000000000000000000003431400005060600174730ustar00rootroot00000000000000#!/bin/sh make html python -c "import os, webbrowser; webbrowser.open('file://' + os.path.abspath('./build/html/index.html'))" watchmedo shell-command ../ --patterns "*.rst;*.py" --recursive --command="rm -rf build;make html;" pyfilesystem2-2.4.12/docs/make.bat000066400000000000000000000171131400005060600167460ustar00rootroot00000000000000@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source set I18NSPHINXOPTS=%SPHINXOPTS% source if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. epub to make an epub echo. epub3 to make an epub3 echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. xml to make Docutils-native XML files echo. pseudoxml to make pseudoxml-XML files for display purposes echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled echo. coverage to run coverage check of the documentation if enabled echo. 
dummy to check syntax errors of document sources goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) REM Check if sphinx-build is available and fallback to Python version if any %SPHINXBUILD% 1>NUL 2>NUL if errorlevel 9009 goto sphinx_python goto sphinx_ok :sphinx_python set SPHINXBUILD=python -m sphinx.__init__ %SPHINXBUILD% 2> nul if errorlevel 9009 ( echo. echo.The 'sphinx-build' command was not found. Make sure you have Sphinx echo.installed, then set the SPHINXBUILD environment variable to point echo.to the full path of the 'sphinx-build' executable. Alternatively you echo.may add the Sphinx directory to PATH. echo. echo.If you don't have Sphinx installed, grab it from echo.http://sphinx-doc.org/ exit /b 1 ) :sphinx_ok if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. 
goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\PyFilesystem2.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\PyFilesystem2.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "epub3" ( %SPHINXBUILD% -b epub3 %ALLSPHINXOPTS% %BUILDDIR%/epub3 if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub3 file is in %BUILDDIR%/epub3. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "latexpdf" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf cd %~dp0 echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "latexpdfja" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf-ja cd %~dp0 echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. 
The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) if "%1" == "coverage" ( %SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage if errorlevel 1 exit /b 1 echo. echo.Testing of coverage in the sources finished, look at the ^ results in %BUILDDIR%/coverage/python.txt. goto end ) if "%1" == "xml" ( %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml if errorlevel 1 exit /b 1 echo. echo.Build finished. The XML files are in %BUILDDIR%/xml. goto end ) if "%1" == "pseudoxml" ( %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml if errorlevel 1 exit /b 1 echo. echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. goto end ) if "%1" == "dummy" ( %SPHINXBUILD% -b dummy %ALLSPHINXOPTS% %BUILDDIR%/dummy if errorlevel 1 exit /b 1 echo. echo.Build finished. Dummy builder generates no files. goto end ) :end pyfilesystem2-2.4.12/docs/source/000077500000000000000000000000001400005060600166365ustar00rootroot00000000000000pyfilesystem2-2.4.12/docs/source/builtin.rst000066400000000000000000000004641400005060600210420ustar00rootroot00000000000000Builtin Filesystems =================== .. 
toctree:: :maxdepth: 3 reference/appfs.rst reference/ftpfs.rst reference/memoryfs.rst reference/mountfs.rst reference/multifs.rst reference/osfs.rst reference/subfs.rst reference/tarfs.rst reference/tempfs.rst reference/zipfs.rst pyfilesystem2-2.4.12/docs/source/concepts.rst000066400000000000000000000103031400005060600212030ustar00rootroot00000000000000.. _concepts: Concepts ======== The following describes some core concepts when working with PyFilesystem. If you are skimming this documentation, pay particular attention to the first section on paths. .. _paths: Paths ----- With the possible exception of the constructor, all paths in a filesystem are *PyFilesystem paths*, which have the following properties: * Paths are ``str`` type in Python3, and ``unicode`` in Python2 * Path components are separated by a forward slash (``/``) * Paths beginning with a ``/`` are *absolute* * Paths not beginning with a forward slash are *relative* * A single dot (``.``) means 'current directory' * A double dot (``..``) means 'previous directory' Note that paths used by the FS interface will use this format, but the constructor may not. Notably the :class:`~fs.osfs.OSFS` constructor which requires an OS path -- the format of which is platform-dependent. .. note:: There are many helpful functions for working with paths in the :mod:`~fs.path` module. PyFilesystem paths are platform-independent, and will be automatically converted to the format expected by your operating system -- so you won't need to make any modifications to your filesystem code to make it run on other platforms. System Paths ------------ Not all Python modules can use file-like objects, especially those which interface with C libraries. For these situations you will need to retrieve the *system path*. You can do this with the :meth:`~fs.base.FS.getsyspath` method which converts a valid path in the context of the FS object to an absolute path that would be understood by your OS. 
For example:: >>> from fs.osfs import OSFS >>> home_fs = OSFS('~/') >>> home_fs.getsyspath('test.txt') '/home/will/test.txt' Not all filesystems map to a system path (for example, files in a :class:`~fs.memoryfs.MemoryFS` will only ever exist in memory). If you call ``getsyspath`` on a filesystem which doesn't map to a system path, it will raise a :class:`~fs.errors.NoSysPath` exception. If you prefer a *look before you leap* approach, you can check if a resource has a system path by calling :meth:`~fs.base.FS.hassyspath`. Sandboxing ---------- FS objects are not permitted to work with any files outside of their *root*. If you attempt to open a file or directory outside the filesystem instance (with a backref such as ``"../foo.txt"``), an :class:`~fs.errors.IllegalBackReference` exception will be thrown. This ensures that any code using an FS object won't be able to read or modify anything you didn't intend it to, thus limiting the scope of any bugs. Unlike your OS, there is no concept of a current working directory in PyFilesystem. If you want to work with a sub-directory of an FS object, you can use the :meth:`~fs.base.FS.opendir` method which returns another FS object representing the contents of that sub-directory. For example, consider the following directory structure. The directory ``foo`` contains two sub-directories; ``bar`` and ``baz``:: --foo |--bar | |--readme.txt | `--photo.jpg `--baz |--private.txt `--dontopen.jpg We can open the ``foo`` directory with the following code:: from fs.osfs import OSFS foo_fs = OSFS('foo') The ``foo_fs`` object can work with any of the contents of ``bar`` and ``baz``, which may not be desirable if we are passing ``foo_fs`` to a function that has the potential to delete files. Fortunately we can isolate a single sub-directory with the :meth:`~fs.base.FS.opendir` method:: bar_fs = foo_fs.opendir('bar')
The root directory of ``bar_fs`` has been re-positioned, so that from ``bar_fs``'s point of view, the readme.txt and photo.jpg files are in the root:: --bar |--readme.txt `--photo.jpg .. note:: This *sandboxing* only works if your code uses the filesystem interface exclusively. It won't prevent code using standard OS level file manipulation. Errors ------ PyFilesystem converts errors into a common exception hierarchy. This ensures that error handling code can be written once, regardless of the filesystem being used. See :mod:`~fs.errors` for details. pyfilesystem2-2.4.12/docs/source/conf.py000066400000000000000000000231361400005060600201420ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # PyFilesystem2 documentation build configuration file, created by # sphinx-quickstart on Tue May 10 16:45:12 2016. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys import os import sphinx_rtd_theme html_theme = "sphinx_rtd_theme" html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../..')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones.
extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinx.ext.intersphinx' ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # intersphinx domain mapping intersphinx_mapping = { 'python': ('https://docs.python.org/3.6', None) } # The suffix(es) of source filenames. # You can specify multiple suffix as a list of string: # source_suffix = ['.rst', '.md'] source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'PyFilesystem' copyright = u'2016-2017, Will McGugan' author = u'Will McGugan' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # from fs import __version__ # The short X.Y version. version = '.'.join(__version__.split('.')[:2]) # The full version, including alpha/beta/rc tags. release = __version__ # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. language = 'en' # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This patterns also effect to html_static_path and html_extra_path exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. default_role = 'py:obj' # If true, '()' will be appended to :func: etc. cross-reference text. 
#add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. #keep_warnings = False # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. # " v documentation" by default. #html_title = u'PyFilesystem2 v0.1.0' # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (relative to this directory) to use as a favicon of # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". 
html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. #html_extra_path = [] # If not None, a 'Last updated on:' timestamp is inserted at every page # bottom, using the given strftime format. # The empty string is equivalent to '%b %d, %Y'. #html_last_updated_fmt = None # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Language to be used for generating the HTML full-text search index. # Sphinx supports the following languages: # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh' #html_search_language = 'en' # A dictionary with options for the search language support, empty by default. # 'ja' uses this config value. 
# 'zh' user can custom change `jieba` dictionary path. #html_search_options = {'type': 'default'} # The name of a javascript file (relative to the configuration directory) that # implements a search results scorer. If empty, the default will be used. #html_search_scorer = 'scorer.js' # Output file base name for HTML help builder. htmlhelp_basename = 'PyFilesystem2doc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', # Latex figure (float) alignment #'figure_align': 'htbp', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ (master_doc, 'PyFilesystem2.tex', u'PyFilesystem Documentation', u'Will McGugan', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ (master_doc, 'pyfilesystem', u'PyFilesystem Documentation', [author], 1) ] # If true, show URL addresses after external links. 
#man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ (master_doc, 'PyFilesystem', u'PyFilesystem Documentation', author, 'PyFilesystem', 'Filesystem interface.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. #texinfo_no_detailmenu = False napoleon_include_special_with_doc = True pyfilesystem2-2.4.12/docs/source/extension.rst000066400000000000000000000067421400005060600214130ustar00rootroot00000000000000.. _extension: Creating an extension ===================== Once a filesystem has been implemented, it can be integrated with other applications and projects using PyFilesystem. Naming Convention ----------------- For visibility in PyPI, we recommend that your package be prefixed with ``fs-``. For instance, if you have implemented an ``AwesomeFS`` PyFilesystem class, your package could be named ``fs-awesome`` or ``fs-awesomefs``. Opener ------ In order for your filesystem to be opened with an :ref:`FS URL ` you should define an :class:`~fs.opener.base.Opener` class.
Here's an example taken from an Amazon S3 Filesystem:: """Defines the S3FSOpener.""" __all__ = ['S3FSOpener'] from fs.opener import Opener, OpenerError from ._s3fs import S3FS class S3FSOpener(Opener): protocols = ['s3'] def open_fs(self, fs_url, parse_result, writeable, create, cwd): bucket_name, _, dir_path = parse_result.resource.partition('/') if not bucket_name: raise OpenerError( "invalid bucket name in '{}'".format(fs_url) ) s3fs = S3FS( bucket_name, dir_path=dir_path or '/', aws_access_key_id=parse_result.username or None, aws_secret_access_key=parse_result.password or None, ) return s3fs By convention this would be defined in ``opener.py``. To register the opener you will need to define an `entry point `_ in your setup.py. See below for an example. The setup.py file ----------------- Refer to the `setuptools documentation `_ to see how to write a ``setup.py`` file. There are only a few things that should be kept in mind when creating a PyFilesystem2 extension. Make sure that: * ``fs`` is in the ``install_requires`` list. You should reference the version number with the ``~=`` operator which ensures that the install will get any bugfix releases of PyFilesystem but not any potentially breaking changes. * If you created an opener, include it as an ``fs.opener`` entry point, using the name of the entry point as the protocol to be used. Here is a minimal ``setup.py`` for our project: .. code:: python from setuptools import setup setup( name='fs-awesomefs', # Name in PyPI author="You !", author_email="your.email@domain.ext", description="An awesome filesystem for pyfilesystem2 !", install_requires=[ "fs~=2.0.5" ], entry_points = { 'fs.opener': [ 'awe = awesomefs.opener:AwesomeFSOpener', ] }, license="MY LICENSE", packages=['awesomefs'], version="X.Y.Z", ) Good Practices -------------- Keep track of your achievements!
Add the following values to your ``__init__.py``: * ``__version__`` The version of the extension (we recommend following `Semantic Versioning `_), * ``__author__`` Your name(s). * ``__author_email__`` Your email(s). * ``__license__`` The module's license. Let us Know ----------- Contact us to add your filesystem to the `PyFilesystem Wiki `_. Live Example ------------ See `fs.sshfs `_ for a functioning PyFilesystem2 extension implementing a PyFilesystem2 filesystem over SSH. pyfilesystem2-2.4.12/docs/source/external.rst000066400000000000000000000006221400005060600212120ustar00rootroot00000000000000External Filesystems ==================== See the following wiki page for a list of filesystems not in the core library, and community contributed filesystems. https://www.pyfilesystem.org/page/index-of-filesystems/ If you have developed a filesystem that you would like added to the above page, please let us know by opening a `Github issue `_.pyfilesystem2-2.4.12/docs/source/globbing.rst000066400000000000000000000043521400005060600211570ustar00rootroot00000000000000.. _globbing: Globbing ======== Globbing is the process of matching paths according to the rules used by the Unix shell. Generally speaking, you can think of a glob pattern as a path containing one or more wildcard patterns, separated by forward slashes. Matching Files and Directories ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In a glob pattern, a ``*`` matches any text in a filename. A ``?`` matches any single character. A ``**`` matches any number of subdirectories, making the glob *recursive*. If the glob pattern ends in a ``/``, it will only match directory paths, otherwise it will match files and directories. .. note:: A recursive glob requires that PyFilesystem scan a lot of files, and can potentially be slow for large (or network based) filesystems. Here's a summary of glob patterns: ``*`` Matches all files in the current directory. ``*.py`` Matches all .py files in the current directory.
``*.py?`` Matches all .py files and .pyi, .pyc etc in the current directory. ``project/*.py`` Matches all .py files in a directory called ``project``. ``*/*.py`` Matches all .py files in any sub-directory. ``**/*.py`` Recursively matches all .py files. ``**/.git/`` Recursively matches all the git directories. Interface ~~~~~~~~~ PyFilesystem supports globbing via the ``glob`` attribute on every FS instance, which is an instance of :class:`~fs.glob.BoundGlobber`. Here's how you might use it to find all the Python files in your filesystem:: for match in my_fs.glob("**/*.py"): print(f"{match.path} is {match.info.size} bytes long") Calling ``.glob`` with a pattern will return an iterator of :class:`~fs.glob.GlobMatch` named tuples for each matching file or directory. A glob match contains two attributes; ``path`` which is the full path in the filesystem, and ``info`` which is an :class:`fs.info.Info` info object for the matched resource. Batch Methods ~~~~~~~~~~~~~ In addition to iterating over the results, you can also call methods on the :class:`~fs.glob.Globber` which apply to every matched path. For instance, here is how you can use glob to remove all ``.pyc`` files from a project directory:: >>> import fs >>> fs.open_fs('~/projects/my_project').glob('**/*.pyc').remove() 29 pyfilesystem2-2.4.12/docs/source/guide.rst000066400000000000000000000313561400005060600204740ustar00rootroot00000000000000Guide ===== The PyFilesystem interface simplifies most aspects of working with files and directories. This guide covers what you need to know about working with FS objects. Why use PyFilesystem? ~~~~~~~~~~~~~~~~~~~~~ If you are comfortable using the Python standard library, you may be wondering; *why learn another API for working with files?* The :ref:`interface` is generally simpler than the ``os`` and ``io`` modules -- there are fewer edge cases and fewer ways to shoot yourself in the foot.
This may be reason alone to use it, but there are other compelling reasons you should use ``import fs`` for even straightforward filesystem code. The abstraction offered by FS objects means that you can write code that is agnostic to where your files are physically located. For instance, if you wrote a function that searches a directory for duplicate files, it will work unaltered with a directory on your hard-drive, or in a zip file, on an FTP server, on Amazon S3, etc. As long as an FS object exists for your chosen filesystem (or any data store that resembles a filesystem), you can use the same API. This means that you can defer the decision regarding where you store data to later. If you decide to store configuration in the *cloud*, it could be a single line change and not a major refactor. PyFilesystem can also be beneficial for unit-testing; by swapping the OS filesystem with an in-memory filesystem, you can write tests without having to manage (or mock) file IO. And you can be sure that your code will work on Linux, MacOS, and Windows. Opening Filesystems ~~~~~~~~~~~~~~~~~~~ There are two ways you can open a filesystem. The first and most natural way is to import the appropriate filesystem class and construct it. Here's how you would open a :class:`~fs.osfs.OSFS` (Operating System File System), which maps to the files and directories of your hard-drive:: >>> from fs.osfs import OSFS >>> home_fs = OSFS("~/") This constructs an FS object which manages the files and directories under a given system path. In this case, ``'~/'``, which is a shortcut for your home directory. Here's how you would list the files/directories in your home directory:: >>> home_fs.listdir('/') ['world domination.doc', 'paella-recipe.txt', 'jokes.txt', 'projects'] Notice that the parameter to ``listdir`` is a single forward slash, indicating that we want to list the *root* of the filesystem.
This is because from the point of view of ``home_fs``, the root is the directory we used to construct the ``OSFS``. Also note that it is a forward slash, even on Windows. This is because FS paths are in a consistent format regardless of the platform. Details such as the separator and encoding are abstracted away. See :ref:`paths` for details. Other filesystem interfaces may have other requirements for their constructor. For instance, here is how you would open an FTP filesystem:: >>> from fs.ftpfs import FTPFS >>> debian_fs = FTPFS('ftp.mirror.nl') >>> debian_fs.listdir('/') ['debian-archive', 'debian-backports', 'debian', 'pub', 'robots.txt'] The second, and more general way of opening filesystem objects, is via an *opener* which opens a filesystem from a URL-like syntax. Here's an alternative way of opening your home directory:: >>> from fs import open_fs >>> home_fs = open_fs('osfs://~/') >>> home_fs.listdir('/') ['world domination.doc', 'paella-recipe.txt', 'jokes.txt', 'projects'] The opener system is particularly useful when you want to store the physical location of your application's files in a configuration file. If you don't specify the protocol in the FS URL, then PyFilesystem will assume you want an OSFS relative to the current working directory. So the following would be an equivalent way of opening your home directory:: >>> from fs import open_fs >>> home_fs = open_fs('.') >>> home_fs.listdir('/') ['world domination.doc', 'paella-recipe.txt', 'jokes.txt', 'projects'] Tree Printing ~~~~~~~~~~~~~ Calling :meth:`~fs.base.FS.tree` on an FS object will print an ascii tree view of your filesystem. Here's an example:: >>> from fs import open_fs >>> my_fs = open_fs('.') >>> my_fs.tree() ├── locale │ └── readme.txt ├── logic │ ├── content.xml │ ├── data.xml │ ├── mountpoints.xml │ └── readme.txt ├── lib.ini └── readme.txt This can be a useful debugging aid!
Closing ~~~~~~~ FS objects have a :meth:`~fs.base.FS.close` method which will perform any required clean-up actions. For many filesystems (notably :class:`~fs.osfs.OSFS`), the ``close`` method does very little. Other filesystems may only finalize files or release resources once ``close()`` is called. You can call ``close`` explicitly once you are finished using a filesystem. For example:: >>> home_fs = open_fs('osfs://~/') >>> home_fs.writetext('reminder.txt', 'buy coffee') >>> home_fs.close() If you use FS objects as a context manager, ``close`` will be called automatically. The following is equivalent to the previous example:: >>> with open_fs('osfs://~/') as home_fs: ... home_fs.writetext('reminder.txt', 'buy coffee') Using FS objects as a context manager is recommended as it will ensure every FS is closed. Directory Information ~~~~~~~~~~~~~~~~~~~~~ Filesystem objects have a :meth:`~fs.base.FS.listdir` method which is similar to ``os.listdir``; it takes a path to a directory and returns a list of file names. Here's an example:: >>> home_fs.listdir('/projects') ['fs', 'moya', 'README.md'] An alternative method exists for listing directories; :meth:`~fs.base.FS.scandir` returns an *iterable* of :ref:`info` objects. Here's an example:: >>> directory = list(home_fs.scandir('/projects')) >>> directory [<dir 'fs'>, <dir 'moya'>, <file 'README.md'>] Info objects have a number of advantages over just a filename. For instance you can tell if an info object references a file or a directory with the :attr:`~fs.info.Info.is_dir` attribute, without an additional system call. Info objects may also contain information such as size, modified time, etc. if you request it in the ``namespaces`` parameter. .. note:: The reason that ``scandir`` returns an iterable rather than a list, is that it can be more efficient to retrieve directory information in chunks if the directory is very large, or if the information must be retrieved over a network.
Additionally, FS objects have a :meth:`~fs.base.FS.filterdir` method which extends ``scandir`` with the ability to filter directory contents by wildcard(s). Here's how you might find all the Python files in a directory: >>> code_fs = OSFS('~/projects/src') >>> directory = list(code_fs.filterdir('/', files=['*.py'])) By default, the resource information objects returned by ``scandir`` and ``listdir`` will contain only the file name and the ``is_dir`` flag. You can request additional information with the ``namespaces`` parameter. Here's how you can request additional details (such as file size and file modified times):: >>> directory = code_fs.filterdir('/', files=['*.py'], namespaces=['details']) This will add a ``size`` and ``modified`` property (and others) to the resource info objects. Which makes code such as this work:: >>> sum(info.size for info in directory) See :ref:`info` for more information. Sub Directories ~~~~~~~~~~~~~~~ PyFilesystem has no notion of a *current working directory*, so you won't find a ``chdir`` method on FS objects. Fortunately you won't miss it; working with sub-directories is a breeze with PyFilesystem. You can always specify a directory with methods which accept a path. For instance, ``home_fs.listdir('/projects')`` would get the directory listing for the `projects` directory. Alternatively, you can call :meth:`~fs.base.FS.opendir` which returns a new FS object for the sub-directory. For example, here's how you could list the directory contents of a `projects` folder in your home directory:: >>> home_fs = open_fs('~/') >>> projects_fs = home_fs.opendir('/projects') >>> projects_fs.listdir('/') ['fs', 'moya', 'README.md'] When you call ``opendir``, the FS object returns an instance of a :class:`~fs.subfs.SubFS`. If you call any of the methods on a ``SubFS`` object, it will be as though you called the same method on the parent filesystem with a path relative to the sub-directory. 
The :class:`~fs.base.FS.makedir` and :class:`~fs.base.FS.makedirs` methods also return ``SubFS`` objects for the newly created directory. Here's how you might create a new directory in ``~/projects`` and initialize it with a couple of files::

    >>> home_fs = open_fs('~/')
    >>> game_fs = home_fs.makedirs('projects/game')
    >>> game_fs.touch('__init__.py')
    >>> game_fs.writetext('README.md', "Tetris clone")
    >>> game_fs.listdir('/')
    ['__init__.py', 'README.md']

Working with ``SubFS`` objects means that you can generally avoid writing much path manipulation code, which tends to be error prone.

Working with Files
~~~~~~~~~~~~~~~~~~

You can open a file from a FS object with :meth:`~fs.base.FS.open`, which is very similar to ``io.open`` in the standard library. Here's how you might open a file called "reminder.txt" in your home directory::

    >>> with open_fs('~/') as home_fs:
    ...     with home_fs.open('reminder.txt') as reminder_file:
    ...         print(reminder_file.read())
    buy coffee

In the case of an ``OSFS``, a standard file-like object will be returned. Other filesystems may return a different object supporting the same methods. For instance, :class:`~fs.memoryfs.MemoryFS` will return a ``io.BytesIO`` object.

PyFilesystem also offers a number of shortcuts for common file related operations. For instance, :meth:`~fs.base.FS.readbytes` will return the file contents as bytes, and :meth:`~fs.base.FS.readtext` will read unicode text. These methods are generally preferable to explicitly opening files, as the FS object may have an optimized implementation.

Other *shortcut* methods are :meth:`~fs.base.FS.download`, :meth:`~fs.base.FS.upload`, :meth:`~fs.base.FS.writebytes`, :meth:`~fs.base.FS.writetext`.

Walking
~~~~~~~

Often you will need to scan the files in a given directory, and any sub-directories. This is known as *walking* the filesystem.
Here's how you would print the paths to all your Python files in your home directory::

    >>> from fs import open_fs
    >>> home_fs = open_fs('~/')
    >>> for path in home_fs.walk.files(filter=['*.py']):
    ...     print(path)

The ``walk`` attribute on FS objects is an instance of a :class:`~fs.walk.BoundWalker`, which should be able to handle most directory walking requirements.

See :ref:`walking` for more information on walking directories.

Globbing
~~~~~~~~

Closely related to walking a filesystem is *globbing*, which is a slightly higher level way of scanning filesystems. Paths can be filtered by a *glob* pattern, which is similar to a wildcard (such as ``*.py``), but can match multiple levels of a directory structure.

Here's an example of globbing, which removes all the ``.pyc`` files in your project directory::

    >>> from fs import open_fs
    >>> open_fs('~/project').glob('**/*.pyc').remove()
    62

See :ref:`globbing` for more information.

Moving and Copying
~~~~~~~~~~~~~~~~~~

You can move and copy file contents with the :meth:`~fs.base.FS.move` and :meth:`~fs.base.FS.copy` methods, and the equivalent :meth:`~fs.base.FS.movedir` and :meth:`~fs.base.FS.copydir` methods which operate on directories rather than files.

These move and copy methods are optimized where possible, and depending on the implementation, they may be more performant than reading and writing files.

To move and/or copy files *between* filesystems (as opposed to within the same filesystem), use the :mod:`~fs.move` and :mod:`~fs.copy` modules. The methods in these modules accept both FS objects and FS URLs.
For instance, the following will compress the contents of your projects folder::

    >>> from fs.copy import copy_fs
    >>> copy_fs('~/projects', 'zip://projects.zip')

Which is the equivalent to this, more verbose, code::

    >>> from fs.copy import copy_fs
    >>> from fs.osfs import OSFS
    >>> from fs.zipfs import ZipFS
    >>> copy_fs(OSFS('~/projects'), ZipFS('projects.zip'))

The :func:`~fs.copy.copy_fs` and :func:`~fs.copy.copy_dir` functions also accept a :class:`~fs.walk.Walker` parameter, which you can use to filter the files that will be copied. For instance, if you only wanted to back up your Python files, you could use something like this::

    >>> from fs.copy import copy_fs
    >>> from fs.walk import Walker
    >>> copy_fs('~/projects', 'zip://projects.zip', walker=Walker(filter=['*.py']))

An alternative to copying is *mirroring*, which will copy a filesystem then keep it up to date by copying only changed files / directories. See :func:`~fs.mirror.mirror`.

.. _implementers:

Implementing Filesystems
========================

With a little care, you can implement a PyFilesystem interface for any filesystem, which will allow it to work interchangeably with any of the built-in FS classes and tools.

To create a PyFilesystem interface, derive a class from :class:`~fs.base.FS` and implement the :ref:`essential-methods`. This should give you a working FS class.

Take care to copy the method signatures *exactly*, including default values. It is also essential that you follow the same logic with regards to exceptions, and only raise exceptions in :mod:`~fs.errors`.

Constructor
-----------

There are no particular requirements regarding how a PyFilesystem class is constructed, but be sure to call the base class ``__init__`` method with no parameters.

Thread Safety
-------------

All Filesystems should be *thread-safe*.
The simplest way to achieve that is by using the ``_lock`` attribute supplied by the :class:`~fs.base.FS` constructor. This is a ``RLock`` object from the standard library, which you can use as a context manager, so methods you implement will start something like this:: with self._lock: do_something() You aren't *required* to use ``_lock``. Just as long as calling methods on the FS object from multiple threads doesn't break anything. Python Versions --------------- PyFilesystem supports Python2.7 and Python3.X. The differences between the two major Python versions are largely managed by the ``six`` library. You aren't obligated to support the same versions of Python that PyFilesystem itself supports, but it is recommended if your project is for general use. Testing Filesystems ------------------- To test your implementation, you can borrow the test suite used to test the built in filesystems. If your code passes these tests, then you can be confident your implementation will work seamlessly. Here's the simplest possible example to test a filesystem class called ``MyFS``:: import unittest from fs.test import FSTestCases class TestMyFS(FSTestCases, unittest.TestCase): def make_fs(self): # Return an instance of your FS object here return MyFS() You may also want to override some of the methods in the test suite for more targeted testing: .. autoclass:: fs.test.FSTestCases :members: .. note:: As of version 2.4.11 this project uses `pytest `_ to run its tests. While it's completely compatible with ``unittest``-style tests, it's much more powerful and feature-rich. We suggest you take advantage of it and its plugins in new tests you write, rather than sticking to strict ``unittest`` features. For benefits and limitations, see `here `_. .. _essential-methods: Essential Methods ----------------- The following methods MUST be implemented in a PyFilesystem interface. * :meth:`~fs.base.FS.getinfo` Get info regarding a file or directory. 
* :meth:`~fs.base.FS.listdir` Get a list of resources in a directory. * :meth:`~fs.base.FS.makedir` Make a directory. * :meth:`~fs.base.FS.openbin` Open a binary file. * :meth:`~fs.base.FS.remove` Remove a file. * :meth:`~fs.base.FS.removedir` Remove a directory. * :meth:`~fs.base.FS.setinfo` Set resource information. .. _non-essential-methods: Non - Essential Methods ----------------------- The following methods MAY be implemented in a PyFilesystem interface. These methods have a default implementation in the base class, but may be overridden if you can supply a more optimal version. Exactly which methods you should implement depends on how and where the data is stored. For network filesystems, a good candidate to implement, is the ``scandir`` method which would otherwise call a combination of ``listdir`` and ``getinfo`` for each file. In the general case, it is a good idea to look at how these methods are implemented in :class:`~fs.base.FS`, and only write a custom version if it would be more efficient than the default. 
* :meth:`~fs.base.FS.appendbytes` * :meth:`~fs.base.FS.appendtext` * :meth:`~fs.base.FS.close` * :meth:`~fs.base.FS.copy` * :meth:`~fs.base.FS.copydir` * :meth:`~fs.base.FS.create` * :meth:`~fs.base.FS.desc` * :meth:`~fs.base.FS.download` * :meth:`~fs.base.FS.exists` * :meth:`~fs.base.FS.filterdir` * :meth:`~fs.base.FS.getmeta` * :meth:`~fs.base.FS.getospath` * :meth:`~fs.base.FS.getsize` * :meth:`~fs.base.FS.getsyspath` * :meth:`~fs.base.FS.gettype` * :meth:`~fs.base.FS.geturl` * :meth:`~fs.base.FS.hassyspath` * :meth:`~fs.base.FS.hasurl` * :meth:`~fs.base.FS.isclosed` * :meth:`~fs.base.FS.isempty` * :meth:`~fs.base.FS.isdir` * :meth:`~fs.base.FS.isfile` * :meth:`~fs.base.FS.islink` * :meth:`~fs.base.FS.lock` * :meth:`~fs.base.FS.makedirs` * :meth:`~fs.base.FS.move` * :meth:`~fs.base.FS.movedir` * :meth:`~fs.base.FS.open` * :meth:`~fs.base.FS.opendir` * :meth:`~fs.base.FS.readbytes` * :meth:`~fs.base.FS.readtext` * :meth:`~fs.base.FS.removetree` * :meth:`~fs.base.FS.scandir` * :meth:`~fs.base.FS.settimes` * :meth:`~fs.base.FS.touch` * :meth:`~fs.base.FS.upload` * :meth:`~fs.base.FS.validatepath` * :meth:`~fs.base.FS.writebytes` * :meth:`~fs.base.FS.writefile` * :meth:`~fs.base.FS.writetext` .. _helper-methods: Helper Methods -------------- These methods SHOULD NOT be implemented. Implementing these is highly unlikely to be worthwhile. * :meth:`~fs.base.FS.check` * :meth:`~fs.base.FS.getbasic` * :meth:`~fs.base.FS.getdetails` * :meth:`~fs.base.FS.hash` * :meth:`~fs.base.FS.match` * :meth:`~fs.base.FS.tree` pyfilesystem2-2.4.12/docs/source/index.rst000066400000000000000000000012161400005060600204770ustar00rootroot00000000000000.. PyFilesystem2 documentation master file, created by sphinx-quickstart on Tue May 10 16:45:12 2016. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Welcome to PyFilesystem2's documentation! ========================================= Contents: .. 
toctree::
    :maxdepth: 2

    introduction.rst
    guide.rst
    concepts.rst
    info.rst
    openers.rst
    walking.rst
    globbing.rst
    builtin.rst
    implementers.rst
    extension.rst
    external.rst
    interface.rst
    reference.rst

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

.. _info:

Resource Info
=============

Resource information (or *info*) describes standard file details such as name, type, size, etc., and potentially other less-common information associated with a file or directory.

You can retrieve resource info for a single resource by calling :meth:`~fs.base.FS.getinfo`, or by calling :meth:`~fs.base.FS.scandir` which returns an iterator of resource information for the contents of a directory. Additionally, :meth:`~fs.base.FS.filterdir` can filter the resources in a directory by type and wildcard.

Here's an example of retrieving file information::

    >>> from fs.osfs import OSFS
    >>> fs = OSFS('.')
    >>> fs.writetext('example.txt', 'Hello, World!')
    >>> info = fs.getinfo('example.txt', namespaces=['details'])
    >>> info.name
    'example.txt'
    >>> info.is_dir
    False
    >>> info.size
    13

Info Objects
------------

PyFilesystem exposes the resource information via properties of :class:`~fs.info.Info` objects.

Namespaces
----------

All resource information is contained within one of a number of potential *namespaces*, which are logical key/value groups.

You can specify which namespace(s) you are interested in with the ``namespaces`` argument to :meth:`~fs.base.FS.getinfo`. For example, the following retrieves the ``details`` and ``access`` namespaces for a file::

    resource_info = fs.getinfo('myfile.txt', namespaces=['details', 'access'])

In addition to the specified namespaces, the filesystem will also return the ``basic`` namespace, which contains the name of the resource, and a flag which indicates if the resource is a directory.
Basic Namespace
~~~~~~~~~~~~~~~

The ``basic`` namespace is always returned. It contains the following keys:

=============== =================== ===========================================
Name            Type                Description
--------------- ------------------- -------------------------------------------
name            str                 Name of the resource.
is_dir          bool                A boolean that indicates if the resource
                                    is a directory.
=============== =================== ===========================================

The keys in this namespace can generally be retrieved very quickly. In the case of :class:`~fs.osfs.OSFS` the namespace can be retrieved without a potentially expensive system call.

Details Namespace
~~~~~~~~~~~~~~~~~

The ``details`` namespace contains the following keys.

================ =================== ==========================================
Name             Type                Description
---------------- ------------------- ------------------------------------------
accessed         datetime            The time the file was last accessed.
created          datetime            The time the file was created.
metadata_changed datetime            The time of the last *metadata* (e.g.
                                     owner, group) change.
modified         datetime            The time file data was last changed.
size             int                 Number of bytes used to store the
                                     resource. In the case of files, this is
                                     the number of bytes in the file. For
                                     directories, the *size* is the overhead
                                     (in bytes) used to store the directory
                                     entry.
type             ResourceType        Resource type, one of the values defined
                                     in :class:`~fs.enums.ResourceType`.
================ =================== ==========================================

The time values (``accessed``, ``created`` etc.) may be ``None`` if the filesystem doesn't store that information. The ``size`` and ``type`` keys are guaranteed to be available, although ``type`` may be :attr:`~fs.enums.ResourceType.unknown` if the filesystem is unable to retrieve the resource type.

Access Namespace
~~~~~~~~~~~~~~~~

The ``access`` namespace reports permission and ownership information, and contains the following keys.
================ =================== ========================================== Name type Description ---------------- ------------------- ------------------------------------------ gid int The group ID. group str The group name. permissions Permissions An instance of :class:`~fs.permissions.Permissions`, which contains the permissions for the resource. uid int The user ID. user str The user name of the owner. ================ =================== ========================================== This namespace is optional, as not all filesystems have a concept of ownership or permissions. It is supported by :class:`~fs.osfs.OSFS`. Some values may be ``None`` if they aren't supported by the filesystem. Stat Namespace ~~~~~~~~~~~~~~ The ``stat`` namespace contains information reported by a call to `os.stat `_. This namespace is supported by :class:`~fs.osfs.OSFS` and potentially other filesystems which map directly to the OS filesystem. Most other filesystems will not support this namespace. LStat Namespace ~~~~~~~~~~~~~~~ The ``lstat`` namespace contains information reported by a call to `os.lstat `_. This namespace is supported by :class:`~fs.osfs.OSFS` and potentially other filesystems which map directly to the OS filesystem. Most other filesystems will not support this namespace. Link Namespace ~~~~~~~~~~~~~~ The ``link`` namespace contains information about a symlink. =================== ======= ============================================ Name type Description ------------------- ------- -------------------------------------------- target str A path to the symlink target, or ``None`` if this path is not a symlink. Note, the meaning of this target is somewhat filesystem dependent, and may not be a valid path on the FS object. =================== ======= ============================================ Other Namespaces ~~~~~~~~~~~~~~~~ Some filesystems may support other namespaces not covered here. 
See the documentation for the specific filesystem for information on what namespaces are supported. You can retrieve such implementation specific resource information with the :meth:`~fs.info.Info.get` method.

.. note::
    It is not an error to request a namespace (or namespaces) that the filesystem does *not* support. Any unknown namespaces will be ignored.

Missing Namespaces
------------------

Some attributes on the Info objects require that a given namespace be present. If you attempt to reference them without the namespace being present (because you didn't request it, or the filesystem doesn't support it) then a :class:`~fs.errors.MissingInfoNamespace` exception will be thrown. Here's how you might handle such exceptions::

    try:
        print('user is {}'.format(info.user))
    except errors.MissingInfoNamespace:
        # No 'access' namespace
        pass

If you prefer a *look before you leap* approach, you can use the :meth:`~fs.info.Info.has_namespace` method. Here's an example::

    if info.has_namespace('access'):
        print('user is {}'.format(info.user))

See :class:`~fs.info.Info` for details regarding info attributes.

Raw Info
--------

The :class:`~fs.info.Info` class is a wrapper around a simple data structure containing the *raw* info. You can access this raw info with the ``info.raw`` property.

.. note::
    The following is probably only of interest if you intend to implement a filesystem yourself.

Raw info data consists of a dictionary that maps the namespace name on to a dictionary of information. Here's an example::

    {
        'access': {
            'group': 'staff',
            'permissions': ['g_r', 'o_r', 'u_r', 'u_w'],
            'user': 'will'
        },
        'basic': {
            'is_dir': False,
            'name': 'README.txt'
        },
        'details': {
            'accessed': 1474979730.0,
            'created': 1462266356.0,
            'metadata_changed': 1473071537.0,
            'modified': 1462266356.0,
            'size': 79,
            'type': 2
        }
    }

Raw resource information contains basic types only (strings, numbers, lists, dict, None).
This makes the resource information simple to send over a network as it can be trivially serialized as JSON or another data format.

Because of this requirement, times are stored as `epoch times `_. The Info object will convert these to datetime objects from the standard library. Additionally, the Info object will convert permissions from a list of strings into a :class:`~fs.permissions.Permissions` object.

.. _interface:

PyFilesystem API
----------------

The following is a complete list of methods on PyFilesystem objects.

* :meth:`~fs.base.FS.appendbytes` Append bytes to a file.
* :meth:`~fs.base.FS.appendtext` Append text to a file.
* :meth:`~fs.base.FS.check` Check if a filesystem is open or raise error.
* :meth:`~fs.base.FS.close` Close the filesystem.
* :meth:`~fs.base.FS.copy` Copy a file to another location.
* :meth:`~fs.base.FS.copydir` Copy a directory to another location.
* :meth:`~fs.base.FS.create` Create or truncate a file.
* :meth:`~fs.base.FS.desc` Get a description of a resource.
* :meth:`~fs.base.FS.download` Copy a file on the filesystem to a file-like object.
* :meth:`~fs.base.FS.exists` Check if a path exists.
* :meth:`~fs.base.FS.filterdir` Iterate resources, filtering by wildcard(s).
* :meth:`~fs.base.FS.getbasic` Get basic info namespace for a resource.
* :meth:`~fs.base.FS.getdetails` Get details info namespace for a resource.
* :meth:`~fs.base.FS.getinfo` Get info regarding a file or directory.
* :meth:`~fs.base.FS.getmeta` Get meta information for a resource.
* :meth:`~fs.base.FS.getospath` Get path with encoding expected by the OS.
* :meth:`~fs.base.FS.getsize` Get the size of a file.
* :meth:`~fs.base.FS.getsyspath` Get the system path of a resource, if one exists.
* :meth:`~fs.base.FS.gettype` Get the type of a resource.
* :meth:`~fs.base.FS.geturl` Get a URL to a resource, if one exists.
* :meth:`~fs.base.FS.hassyspath` Check if a resource maps to the OS filesystem. * :meth:`~fs.base.FS.hash` Get the hash of a file's contents. * :meth:`~fs.base.FS.hasurl` Check if a resource has a URL. * :meth:`~fs.base.FS.isclosed` Check if the filesystem is closed. * :meth:`~fs.base.FS.isempty` Check if a directory is empty. * :meth:`~fs.base.FS.isdir` Check if path maps to a directory. * :meth:`~fs.base.FS.isfile` Check if path maps to a file. * :meth:`~fs.base.FS.islink` Check if path is a link. * :meth:`~fs.base.FS.listdir` Get a list of resources in a directory. * :meth:`~fs.base.FS.lock` Get a thread lock context manager. * :meth:`~fs.base.FS.makedir` Make a directory. * :meth:`~fs.base.FS.makedirs` Make a directory and intermediate directories. * :meth:`~fs.base.FS.match` Match one or more wildcard patterns against a path. * :meth:`~fs.base.FS.move` Move a file to another location. * :meth:`~fs.base.FS.movedir` Move a directory to another location. * :meth:`~fs.base.FS.open` Open a file on the filesystem. * :meth:`~fs.base.FS.openbin` Open a binary file. * :meth:`~fs.base.FS.opendir` Get a filesystem object for a directory. * :meth:`~fs.base.FS.readbytes` Read file as bytes. * :meth:`~fs.base.FS.readtext` Read file as text. * :meth:`~fs.base.FS.remove` Remove a file. * :meth:`~fs.base.FS.removedir` Remove a directory. * :meth:`~fs.base.FS.removetree` Recursively remove file and directories. * :meth:`~fs.base.FS.scandir` Scan files and directories. * :meth:`~fs.base.FS.setinfo` Set resource information. * :meth:`~fs.base.FS.settimes` Set modified times for a resource. * :meth:`~fs.base.FS.touch` Create a file or update times. * :meth:`~fs.base.FS.tree` Render a tree view of the filesystem. * :meth:`~fs.base.FS.upload` Copy a binary file to the filesystem. * :meth:`~fs.base.FS.validatepath` Check a path is valid and return normalized path. * :meth:`~fs.base.FS.writebytes` Write a file as bytes. 
* :meth:`~fs.base.FS.writefile` Write a file-like object to the filesystem.
* :meth:`~fs.base.FS.writetext` Write a file as text.

Introduction
============

PyFilesystem is a Python module that provides a common interface to any filesystem.

Think of PyFilesystem ``FS`` objects as the next logical step to Python's ``file`` objects. In the same way that file objects abstract a single file, FS objects abstract an entire filesystem.

Installing
----------

You can install PyFilesystem with ``pip`` as follows::

    pip install fs

Or to upgrade to the most recent version::

    pip install fs --upgrade

PyFilesystem is also available on conda_::

    conda install fs -c conda-forge

Alternatively, if you would like to install from source, you can check out `the code from Github `_.

.. _conda: https://conda.io/docs/

.. _fs-urls:

FS URLs
=======

PyFilesystem can open a filesystem via an *FS URL*, which is similar to a URL you might enter into a browser. FS URLs are useful if you want to specify a filesystem dynamically, such as in a conf file or from the command line.

Format
------

FS URLs are formatted in the following way::

    <protocol>://<username>:<password>@<resource>

The components are as follows:

* ``<protocol>`` Identifies the type of filesystem to create. e.g. ``osfs``, ``ftp``.
* ``<username>`` Optional username.
* ``<password>`` Optional password.
* ``<resource>`` A *resource*, which may be a domain, path, or both.

Here are a few examples::

    osfs://~/projects
    osfs://c://system32
    ftp://ftp.example.org/pub
    mem://
    ftp://will:daffodil@ftp.example.org/private

If ``<protocol>`` is not specified then it is assumed to be an :class:`~fs.osfs.OSFS`, i.e. the following FS URLs are equivalent::

    osfs://~/projects
    ~/projects

.. note::
    The ``<username>`` and ``<password>`` fields may not contain a colon (``:``) or an ``@`` symbol.
If you need these symbols they may be `percent encoded `_.

URL Parameters
--------------

FS URLs may also be appended with a ``?`` symbol followed by a url-encoded query string. For example::

    myprotocol://example.org?key1=value1&key2

The query string would be decoded as ``{"key1": "value1", "key2": ""}``. Query strings are used to provide additional filesystem-specific information used when opening. See the filesystem documentation for information on what query string parameters are supported.

Opening FS URLs
---------------

To open a filesystem with a FS URL, you can use :meth:`~fs.opener.registry.Registry.open_fs`, which may be imported and used as follows::

    from fs import open_fs
    projects_fs = open_fs('osfs://~/projects')
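As a sketch of the percent encoding mentioned above, Python's standard library can do the escaping before you build an FS URL. The credentials and host below are invented for illustration::

```python
from urllib.parse import quote

# Percent-encode reserved characters (':' and '@') so they can be
# embedded safely in the username/password fields of an FS URL.
username = quote('will', safe='')
password = quote('da:ff@dil', safe='')
fs_url = 'ftp://{}:{}@ftp.example.org/private'.format(username, password)
print(fs_url)  # ftp://will:da%3Aff%40dil@ftp.example.org/private
```

Passing ``safe=''`` ensures every reserved character is escaped, not just the default subset.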
automodule:: fs.compress :members:pyfilesystem2-2.4.12/docs/source/reference/copy.rst000066400000000000000000000000671400005060600223030ustar00rootroot00000000000000fs.copy ======= .. automodule:: fs.copy :members: pyfilesystem2-2.4.12/docs/source/reference/enums.rst000066400000000000000000000001471400005060600224570ustar00rootroot00000000000000fs.enums ======== .. automodule:: fs.enums :members: ResourceType, Seek :member-order: bysource pyfilesystem2-2.4.12/docs/source/reference/errors.rst000066400000000000000000000001241400005060600226370ustar00rootroot00000000000000fs.errors ========= .. automodule:: fs.errors :members: :show-inheritance: pyfilesystem2-2.4.12/docs/source/reference/filesize.rst000066400000000000000000000001021400005060600231310ustar00rootroot00000000000000fs.filesize =========== .. automodule:: fs.filesize :members: pyfilesystem2-2.4.12/docs/source/reference/ftpfs.rst000066400000000000000000000001061400005060600224450ustar00rootroot00000000000000FTP Filesystem ============== .. automodule:: fs.ftpfs :members: pyfilesystem2-2.4.12/docs/source/reference/glob.rst000066400000000000000000000000671400005060600222540ustar00rootroot00000000000000fs.glob ======= .. automodule:: fs.glob :members: pyfilesystem2-2.4.12/docs/source/reference/info_objects.rst000066400000000000000000000000671400005060600237750ustar00rootroot00000000000000fs.info ======= .. automodule:: fs.info :members: pyfilesystem2-2.4.12/docs/source/reference/memoryfs.rst000066400000000000000000000001171400005060600231660ustar00rootroot00000000000000Memory Filesystem ================= .. automodule:: fs.memoryfs :members: pyfilesystem2-2.4.12/docs/source/reference/mirror.rst000066400000000000000000000001031400005060600226320ustar00rootroot00000000000000fs.mirror ========= .. automodule:: fs.mirror :members: mirror pyfilesystem2-2.4.12/docs/source/reference/mode.rst000066400000000000000000000000671400005060600222550ustar00rootroot00000000000000fs.mode ======= .. 
automodule:: fs.mode
    :members:

Mount Filesystem
================

A Mount FS is a *virtual* filesystem which can seamlessly map sub-directories on to other filesystems.

For example, let's say we have two filesystems containing config files and resources respectively::

    [config_fs]
    |-- config.cfg
    `-- defaults.cfg

    [resources_fs]
    |-- images
    |   |-- logo.jpg
    |   `-- photo.jpg
    `-- data.dat

We can combine these filesystems into a single filesystem with the following code::

    from fs.mountfs import MountFS
    combined_fs = MountFS()
    combined_fs.mount('config', config_fs)
    combined_fs.mount('resources', resources_fs)

This will create a filesystem where paths under ``config/`` map to ``config_fs``, and paths under ``resources/`` map to ``resources_fs``::

    [combined_fs]
    |-- config
    |   |-- config.cfg
    |   `-- defaults.cfg
    `-- resources
        |-- images
        |   |-- logo.jpg
        |   `-- photo.jpg
        `-- data.dat

Now both filesystems may be accessed with the same path structure::

    print(combined_fs.gettext('/config/defaults.cfg'))
    read_jpg(combined_fs.open('/resources/images/logo.jpg', 'rb'))

.. autoclass:: fs.mountfs.MountFS
    :members:

.. autoexception:: fs.mountfs.MountError
    :show-inheritance:

fs.move
=======

.. automodule:: fs.move
    :members:

Multi Filesystem
================

A MultiFS is a filesystem composed of a sequence of other filesystems, where the directory structure of each overlays the previous filesystem in the sequence.

One use for such a filesystem would be to selectively override a set of files, to customize behavior. For example, to create a filesystem that could be used to *theme* a web application.
We start with the following directories:: `-- templates |-- snippets | `-- panel.html |-- index.html |-- profile.html `-- base.html `-- theme |-- snippets | |-- widget.html | `-- extra.html |-- index.html `-- theme.html And we want to create a single filesystem that will load a file from ``templates/`` only if it isn't found in ``theme/``. Here's how we could do that:: from fs.osfs import OSFS from fs.multifs import MultiFS theme_fs = MultiFS() theme_fs.add_fs('templates', OSFS('templates')) theme_fs.add_fs('theme', OSFS('theme')) Now we have a ``theme_fs`` filesystem that presents a single view of both directories:: |-- snippets | |-- panel.html | |-- widget.html | `-- extra.html |-- index.html |-- profile.html |-- base.html `-- theme.html .. autoclass:: fs.multifs.MultiFS :members: pyfilesystem2-2.4.12/docs/source/reference/opener.rst000066400000000000000000000006161400005060600226210ustar00rootroot00000000000000fs.opener ========= Open filesystems from a URL. fs.opener.base -------------- .. automodule:: fs.opener.base :members: fs.opener.parse --------------- .. automodule:: fs.opener.parse :members: fs.opener.registry ------------------ .. automodule:: fs.opener.registry :members: fs.opener.errors ---------------- .. automodule:: fs.opener.errors :members: :show-inheritance: pyfilesystem2-2.4.12/docs/source/reference/osfs.rst000066400000000000000000000001031400005060600222720ustar00rootroot00000000000000OS Filesystem ============= .. automodule:: fs.osfs :members: pyfilesystem2-2.4.12/docs/source/reference/path.rst000066400000000000000000000000671400005060600222650ustar00rootroot00000000000000fs.path ======= .. automodule:: fs.path :members: pyfilesystem2-2.4.12/docs/source/reference/permissions.rst000066400000000000000000000001131400005060600236740ustar00rootroot00000000000000fs.permissions ============== .. 
automodule:: fs.permissions :members:pyfilesystem2-2.4.12/docs/source/reference/subfs.rst000066400000000000000000000001421400005060600224450ustar00rootroot00000000000000Sub Filesystem ============== .. automodule:: fs.subfs :members: :member-order: bysource pyfilesystem2-2.4.12/docs/source/reference/tarfs.rst000066400000000000000000000001421400005060600224420ustar00rootroot00000000000000Tar Filesystem ============== .. automodule:: fs.tarfs :members: :member-order: bysource pyfilesystem2-2.4.12/docs/source/reference/tempfs.rst000066400000000000000000000001231400005060600226200ustar00rootroot00000000000000Temporary Filesystem ==================== .. automodule:: fs.tempfs :members: pyfilesystem2-2.4.12/docs/source/reference/tools.rst000066400000000000000000000000721400005060600224650ustar00rootroot00000000000000fs.tools ======== .. automodule:: fs.tools :members: pyfilesystem2-2.4.12/docs/source/reference/tree.rst000066400000000000000000000000671400005060600222700ustar00rootroot00000000000000fs.tree ======= .. automodule:: fs.tree :members: pyfilesystem2-2.4.12/docs/source/reference/walk.rst000066400000000000000000000000671400005060600222670ustar00rootroot00000000000000fs.walk ======= .. automodule:: fs.walk :members: pyfilesystem2-2.4.12/docs/source/reference/wildcard.rst000066400000000000000000000001031400005060600231110ustar00rootroot00000000000000fs.wildcard =========== .. automodule:: fs.wildcard :members: pyfilesystem2-2.4.12/docs/source/reference/wrap.rst000066400000000000000000000000671400005060600223020ustar00rootroot00000000000000fs.wrap ======= .. automodule:: fs.wrap :members: pyfilesystem2-2.4.12/docs/source/reference/wrapfs.rst000066400000000000000000000000751400005060600226320ustar00rootroot00000000000000fs.wrapfs ========= .. automodule:: fs.wrapfs :members: pyfilesystem2-2.4.12/docs/source/reference/zipfs.rst000066400000000000000000000001421400005060600224560ustar00rootroot00000000000000Zip Filesystem ============== .. 
automodule:: fs.zipfs :members: :member-order: bysource pyfilesystem2-2.4.12/docs/source/walking.rst000066400000000000000000000075461400005060600210400ustar00rootroot00000000000000.. _walking: Walking ======= *Walking* a filesystem means recursively visiting a directory and any sub-directories. It is a fairly common requirement for copying, searching etc. To walk a filesystem (or directory) you can construct a :class:`~fs.walk.Walker` object and use its methods to do the walking. Here's an example that prints the path to every Python file in your projects directory:: >>> from fs import open_fs >>> from fs.walk import Walker >>> home_fs = open_fs('~/projects') >>> walker = Walker(filter=['*.py']) >>> for path in walker.files(home_fs): ... print(path) Generally speaking, however, you will only need to construct a Walker object if you want to customize some behavior of the walking algorithm. This is because you can access the functionality of a Walker object via the ``walk`` attribute on FS objects. Here's an example:: >>> from fs import open_fs >>> home_fs = open_fs('~/projects') >>> for path in home_fs.walk.files(filter=['*.py']): ... print(path) Note that the ``files`` method above doesn't require a ``fs`` parameter. This is because the ``walk`` attribute is a property which returns a :class:`~fs.walk.BoundWalker` object, which associates the filesystem with a walker. Walk Methods ~~~~~~~~~~~~ If you call the ``walk`` attribute on a :class:`~fs.walk.BoundWalker` it will return an iterable of :class:`~fs.walk.Step` named tuples with three values; a path to the directory, a list of :class:`~fs.info.Info` objects for directories, and a list of :class:`~fs.info.Info` objects for the files. Here's an example:: for step in home_fs.walk(filter=['*.py']): print('In dir {}'.format(step.path)) print('sub-directories: {!r}'.format(step.dirs)) print('files: {!r}'.format(step.files)) .. 
note:: Methods of :class:`~fs.walk.BoundWalker` invoke a corresponding method on a :class:`~fs.walk.Walker` object, with the *bound* filesystem. The ``walk`` attribute may appear to be a method, but is in fact a callable object. It supports other convenient methods that supply different information from the walk. For instance, :meth:`~fs.walk.BoundWalker.files`, which returns an iterable of file paths. Here's an example:: for path in home_fs.walk.files(filter=['*.py']): print('Python file: {}'.format(path)) The complement to ``files`` is :meth:`~fs.walk.BoundWalker.dirs` which returns paths to just the directories (ignoring the files). Here's an example:: for dir_path in home_fs.walk.dirs(): print("{!r} contains sub-directory {}".format(home_fs, dir_path)) The :meth:`~fs.walk.BoundWalker.info` method returns a generator of tuples containing a path and an :class:`~fs.info.Info` object. You can use the ``is_dir`` attribute to know if the path refers to a directory or file. Here's an example:: for path, info in home_fs.walk.info(): if info.is_dir: print("[dir] {}".format(path)) else: print("[file] {}".format(path)) Finally, here's a nice example that counts the number of bytes of Python code in your home directory:: bytes_of_python = sum( info.size for info in home_fs.walk.info(namespaces=['details']) if not info.is_dir ) Search Algorithms ~~~~~~~~~~~~~~~~~ There are two general algorithms for searching a directory tree. The first method is `"breadth"`, which yields resources in the top of the directory tree first, before moving on to sub-directories. The second is `"depth"`, which yields the most deeply nested resources first and works backwards to the top-most directory. Generally speaking, you will only need a *depth* search if you will be deleting resources as you walk through them. The default *breadth* search is generally a more efficient way of looking through a filesystem.
You can specify which method you want with the ``search`` parameter on most ``Walker`` methods. pyfilesystem2-2.4.12/docs/tree.html000066400000000000000000000456321400005060600171750ustar00rootroot00000000000000 Directory Tree

Directory Tree

.
├── Makefile
├── build
│   ├── doctrees
│   │   ├── builtin.doctree
│   │   ├── concepts.doctree
│   │   ├── environment.pickle
│   │   ├── guide.doctree
│   │   ├── implementers.doctree
│   │   ├── index.doctree
│   │   ├── info.doctree
│   │   ├── introduction.doctree
│   │   ├── reference
│   │   │   ├── base.doctree
│   │   │   ├── copy.doctree
│   │   │   ├── enums.doctree
│   │   │   ├── errors.doctree
│   │   │   ├── ftpfs.doctree
│   │   │   ├── info_objects.doctree
│   │   │   ├── memoryfs.doctree
│   │   │   ├── mountfs.doctree
│   │   │   ├── move.doctree
│   │   │   ├── multifs.doctree
│   │   │   ├── opener.doctree
│   │   │   ├── osfs.doctree
│   │   │   ├── path.doctree
│   │   │   ├── subfs.doctree
│   │   │   ├── tempfs.doctree
│   │   │   ├── tree.doctree
│   │   │   ├── walk.doctree
│   │   │   └── wildcard.doctree
│   │   ├── reference.doctree
│   │   └── walking.doctree
│   └── html
│       ├── _sources
│       │   ├── builtin.txt
│       │   ├── concepts.txt
│       │   ├── guide.txt
│       │   ├── implementers.txt
│       │   ├── index.txt
│       │   ├── info.txt
│       │   ├── introduction.txt
│       │   ├── reference
│       │   │   ├── base.txt
│       │   │   ├── copy.txt
│       │   │   ├── enums.txt
│       │   │   ├── errors.txt
│       │   │   ├── ftpfs.txt
│       │   │   ├── info_objects.txt
│       │   │   ├── memoryfs.txt
│       │   │   ├── mountfs.txt
│       │   │   ├── move.txt
│       │   │   ├── multifs.txt
│       │   │   ├── opener.txt
│       │   │   ├── osfs.txt
│       │   │   ├── path.txt
│       │   │   ├── subfs.txt
│       │   │   ├── tempfs.txt
│       │   │   ├── tree.txt
│       │   │   ├── walk.txt
│       │   │   └── wildcard.txt
│       │   ├── reference.txt
│       │   └── walking.txt
│       ├── _static
│       │   ├── ajax-loader.gif
│       │   ├── basic.css
│       │   ├── comment-bright.png
│       │   ├── comment-close.png
│       │   ├── comment.png
│       │   ├── css
│       │   │   ├── badge_only.css
│       │   │   └── theme.css
│       │   ├── doctools.js
│       │   ├── down-pressed.png
│       │   ├── down.png
│       │   ├── file.png
│       │   ├── fonts
│       │   │   ├── Inconsolata-Bold.ttf
│       │   │   ├── Inconsolata-Regular.ttf
│       │   │   ├── Lato-Bold.ttf
│       │   │   ├── Lato-Regular.ttf
│       │   │   ├── RobotoSlab-Bold.ttf
│       │   │   ├── RobotoSlab-Regular.ttf
│       │   │   ├── fontawesome-webfont.eot
│       │   │   ├── fontawesome-webfont.svg
│       │   │   ├── fontawesome-webfont.ttf
│       │   │   └── fontawesome-webfont.woff
│       │   ├── jquery-1.11.1.js
│       │   ├── jquery.js
│       │   ├── js
│       │   │   ├── modernizr.min.js
│       │   │   └── theme.js
│       │   ├── minus.png
│       │   ├── plus.png
│       │   ├── pygments.css
│       │   ├── searchtools.js
│       │   ├── underscore-1.3.1.js
│       │   ├── underscore.js
│       │   ├── up-pressed.png
│       │   ├── up.png
│       │   └── websupport.js
│       ├── builtin.html
│       ├── concepts.html
│       ├── genindex.html
│       ├── guide.html
│       ├── implementers.html
│       ├── index.html
│       ├── info.html
│       ├── introduction.html
│       ├── objects.inv
│       ├── py-modindex.html
│       ├── reference
│       │   ├── base.html
│       │   ├── copy.html
│       │   ├── enums.html
│       │   ├── errors.html
│       │   ├── ftpfs.html
│       │   ├── info_objects.html
│       │   ├── memoryfs.html
│       │   ├── mountfs.html
│       │   ├── move.html
│       │   ├── multifs.html
│       │   ├── opener.html
│       │   ├── osfs.html
│       │   ├── path.html
│       │   ├── subfs.html
│       │   ├── tempfs.html
│       │   ├── tree.html
│       │   ├── walk.html
│       │   └── wildcard.html
│       ├── reference.html
│       ├── search.html
│       ├── searchindex.js
│       └── walking.html
├── builddocs.sh
├── make.bat
├── source
│   ├── builtin.rst
│   ├── concepts.rst
│   ├── conf.py
│   ├── guide.rst
│   ├── implementers.rst
│   ├── index.rst
│   ├── info.rst
│   ├── introduction.rst
│   ├── reference
│   │   ├── base.rst
│   │   ├── copy.rst
│   │   ├── enums.rst
│   │   ├── errors.rst
│   │   ├── ftpfs.rst
│   │   ├── info_objects.rst
│   │   ├── memoryfs.rst
│   │   ├── mountfs.rst
│   │   ├── move.rst
│   │   ├── multifs.rst
│   │   ├── opener.rst
│   │   ├── osfs.rst
│   │   ├── path.rst
│   │   ├── subfs.rst
│   │   ├── tempfs.rst
│   │   ├── tree.rst
│   │   ├── walk.rst
│   │   └── wildcard.rst
│   ├── reference.rst
│   └── walking.rst
└── tree.html


13 directories, 153 files


tree v1.7.0 © 1996 - 2014 by Steve Baker and Thomas Moore
HTML output hacked and copyleft © 1998 by Francesc Rocher
JSON output hacked and copyleft © 2014 by Florian Sesser
Charsets / OS/2 support © 2001 by Kyosuke Tokoro

pyfilesystem2-2.4.12/examples/000077500000000000000000000000001400005060600162245ustar00rootroot00000000000000pyfilesystem2-2.4.12/examples/README.txt000066400000000000000000000003161400005060600177220ustar00rootroot00000000000000This directory contains a number of example command line apps using PyFilesystem. They are intended to be a learning aid and not exactly finished products, but all these examples are completely functional.pyfilesystem2-2.4.12/examples/count_py.py000066400000000000000000000006401400005060600204360ustar00rootroot00000000000000""" Display how much storage is used in your Python files. Usage: python count_py.py """ import sys from fs import open_fs from fs.filesize import traditional fs_url = sys.argv[1] count = 0 with open_fs(fs_url) as fs: for _path, info in fs.walk.info(filter=["*.py"], namespaces=["details"]): count += info.size print(f'There is {traditional(count)} of Python in "{fs_url}"') pyfilesystem2-2.4.12/examples/find_dups.py000066400000000000000000000007311400005060600205520ustar00rootroot00000000000000""" Find paths to files with identical contents. Usage: python find_dups.py """ from collections import defaultdict import sys from fs import open_fs hashes = defaultdict(list) with open_fs(sys.argv[1]) as fs: for path in fs.walk.files(): file_hash = fs.hash(path, "md5") hashes[file_hash].append(path) for paths in hashes.values(): if len(paths) > 1: for path in paths: print(path) print() pyfilesystem2-2.4.12/examples/rm_pyc.py000066400000000000000000000003611400005060600200670ustar00rootroot00000000000000""" Remove all pyc files in a directory. 
Usage: python rm_pyc.py """ import sys from fs import open_fs with open_fs(sys.argv[1]) as fs: count = fs.glob("**/*.pyc").remove() print(f"{count} .pyc files removed") pyfilesystem2-2.4.12/examples/upload.py000066400000000000000000000010131400005060600200570ustar00rootroot00000000000000""" Upload a file to a server (or other filesystem) Usage: python upload.py FILENAME example: python upload.py foo.txt ftp://example.org/uploads/ """ import os import sys from fs import open_fs _, file_path, fs_url = sys.argv filename = os.path.basename(file_path) with open_fs(fs_url) as fs: if fs.exists(filename): print("destination exists! aborting.") else: with open(file_path, "rb") as bin_file: fs.upload(filename, bin_file) print("upload successful!") pyfilesystem2-2.4.12/fs/000077500000000000000000000000001400005060600150165ustar00rootroot00000000000000pyfilesystem2-2.4.12/fs/__init__.py000066400000000000000000000005251400005060600171310ustar00rootroot00000000000000"""Python filesystem abstraction layer. """ __import__("pkg_resources").declare_namespace(__name__) # type: ignore from ._version import __version__ from .enums import ResourceType, Seek from .opener import open_fs from ._fscompat import fsencode, fsdecode from . import path __all__ = ["__version__", "ResourceType", "Seek", "open_fs"] pyfilesystem2-2.4.12/fs/_bulk.py000066400000000000000000000077001400005060600164700ustar00rootroot00000000000000""" Implements a thread pool for parallel copying of files.
""" from __future__ import unicode_literals import threading import typing from six.moves.queue import Queue from .copy import copy_file_internal from .errors import BulkCopyFailed from .tools import copy_file_data if typing.TYPE_CHECKING: from .base import FS from types import TracebackType from typing import IO, List, Optional, Text, Type class _Worker(threading.Thread): """Worker thread that pulls tasks from a queue.""" def __init__(self, copier): # type (Copier) -> None self.copier = copier super(_Worker, self).__init__() self.daemon = True def run(self): # type () -> None queue = self.copier.queue while True: task = queue.get(block=True) try: if task is None: break # Sentinel to exit thread task() except Exception as error: self.copier.add_error(error) finally: queue.task_done() class _Task(object): """Base class for a task.""" def __call__(self): # type: () -> None """Task implementation.""" class _CopyTask(_Task): """A callable that copies from one file another.""" def __init__(self, src_file, dst_file): # type: (IO, IO) -> None self.src_file = src_file self.dst_file = dst_file def __call__(self): # type: () -> None try: copy_file_data(self.src_file, self.dst_file, chunk_size=1024 * 1024) finally: try: self.src_file.close() finally: self.dst_file.close() class Copier(object): """Copy files in worker threads.""" def __init__(self, num_workers=4): # type: (int) -> None if num_workers < 0: raise ValueError("num_workers must be >= 0") self.num_workers = num_workers self.queue = None # type: Optional[Queue[_Task]] self.workers = [] # type: List[_Worker] self.errors = [] # type: List[Exception] self.running = False def start(self): """Start the workers.""" if self.num_workers: self.queue = Queue(maxsize=self.num_workers) self.workers = [_Worker(self) for _ in range(self.num_workers)] for worker in self.workers: worker.start() self.running = True def stop(self): """Stop the workers (will block until they are finished).""" if self.running and self.num_workers: for 
_worker in self.workers: self.queue.put(None) for worker in self.workers: worker.join() # Free up references held by workers del self.workers[:] self.queue.join() self.running = False def add_error(self, error): """Add an exception raised by a task.""" self.errors.append(error) def __enter__(self): self.start() return self def __exit__( self, exc_type, # type: Optional[Type[BaseException]] exc_value, # type: Optional[BaseException] traceback, # type: Optional[TracebackType] ): self.stop() if traceback is None and self.errors: raise BulkCopyFailed(self.errors) def copy(self, src_fs, src_path, dst_fs, dst_path): # type: (FS, Text, FS, Text) -> None """Copy a file from one fs to another.""" if self.queue is None: # This should be the most performant for a single-thread copy_file_internal(src_fs, src_path, dst_fs, dst_path) else: src_file = src_fs.openbin(src_path, "r") try: dst_file = dst_fs.openbin(dst_path, "w") except Exception: src_file.close() raise task = _CopyTask(src_file, dst_file) self.queue.put(task) pyfilesystem2-2.4.12/fs/_fscompat.py000066400000000000000000000026701400005060600173500ustar00rootroot00000000000000import six try: from os import fsencode, fsdecode except ImportError: from backports.os import fsencode, fsdecode # type: ignore try: from os import fspath except ImportError: def fspath(path): # type: ignore """Return the path representation of a path-like object. If str or bytes is passed in, it is returned unchanged. Otherwise the os.PathLike interface is used to get the path representation. If the path representation is not str or bytes, TypeError is raised. If the provided path is not str, bytes, or os.PathLike, TypeError is raised. """ if isinstance(path, (six.text_type, bytes)): return path # Work from the object's type to match method resolution of other magic # methods. 
path_type = type(path) try: path_repr = path_type.__fspath__(path) except AttributeError: if hasattr(path_type, "__fspath__"): raise else: raise TypeError( "expected string type or os.PathLike object, " "not " + path_type.__name__ ) if isinstance(path_repr, (six.text_type, bytes)): return path_repr else: raise TypeError( "expected {}.__fspath__() to return string type " "not {}".format(path_type.__name__, type(path_repr).__name__) ) pyfilesystem2-2.4.12/fs/_ftp_parse.py000066400000000000000000000077161400005060600175250ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import print_function from __future__ import unicode_literals import unicodedata import datetime import re import time from pytz import UTC from .enums import ResourceType from .permissions import Permissions EPOCH_DT = datetime.datetime.fromtimestamp(0, UTC) RE_LINUX = re.compile( r""" ^ ([ldrwx-]{10}) \s+? (\d+) \s+? ([\w\-]+) \s+? ([\w\-]+) \s+? (\d+) \s+? (\w{3}\s+\d{1,2}\s+[\w:]+) \s+ (.*?) $ """, re.VERBOSE, ) RE_WINDOWSNT = re.compile( r""" ^ (?P\S+) \s+ (?P\S+(AM|PM)?) \s+ (?P(|\d+)) \s+ (?P.*) $ """, re.VERBOSE, ) def get_decoders(): """ Returns all available FTP LIST line decoders with their matching regexes. 
""" decoders = [ (RE_LINUX, decode_linux), (RE_WINDOWSNT, decode_windowsnt), ] return decoders def parse(lines): info = [] for line in lines: if not line.strip(): continue raw_info = parse_line(line) if raw_info is not None: info.append(raw_info) return info def parse_line(line): for line_re, decode_callable in get_decoders(): match = line_re.match(line) if match is not None: return decode_callable(line, match) return None def _parse_time(t, formats): for frmt in formats: try: _t = time.strptime(t, frmt) break except ValueError: continue else: return None year = _t.tm_year if _t.tm_year != 1900 else time.localtime().tm_year month = _t.tm_mon day = _t.tm_mday hour = _t.tm_hour minutes = _t.tm_min dt = datetime.datetime(year, month, day, hour, minutes, tzinfo=UTC) epoch_time = (dt - EPOCH_DT).total_seconds() return epoch_time def _decode_linux_time(mtime): return _parse_time(mtime, formats=["%b %d %Y", "%b %d %H:%M"]) def decode_linux(line, match): perms, links, uid, gid, size, mtime, name = match.groups() is_link = perms.startswith("l") is_dir = perms.startswith("d") or is_link if is_link: name, _, _link_name = name.partition("->") name = name.strip() _link_name = _link_name.strip() permissions = Permissions.parse(perms[1:]) mtime_epoch = _decode_linux_time(mtime) name = unicodedata.normalize("NFC", name) raw_info = { "basic": {"name": name, "is_dir": is_dir}, "details": { "size": int(size), "type": int(ResourceType.directory if is_dir else ResourceType.file), }, "access": {"permissions": permissions.dump()}, "ftp": {"ls": line}, } access = raw_info["access"] details = raw_info["details"] if mtime_epoch is not None: details["modified"] = mtime_epoch access["user"] = uid access["group"] = gid return raw_info def _decode_windowsnt_time(mtime): return _parse_time(mtime, formats=["%d-%m-%y %I:%M%p", "%d-%m-%y %H:%M"]) def decode_windowsnt(line, match): """ Decodes a Windows NT FTP LIST line like one of these: `11-02-18 02:12PM images` `11-02-18 03:33PM 9276 logo.gif` 
Alternatively, the time (02:12PM) might also be present in 24-hour format (14:12). """ is_dir = match.group("size") == "" raw_info = { "basic": { "name": match.group("name"), "is_dir": is_dir, }, "details": { "type": int(ResourceType.directory if is_dir else ResourceType.file), }, "ftp": {"ls": line}, } if not is_dir: raw_info["details"]["size"] = int(match.group("size")) modified = _decode_windowsnt_time( match.group("modified_date") + " " + match.group("modified_time") ) if modified is not None: raw_info["details"]["modified"] = modified return raw_info pyfilesystem2-2.4.12/fs/_repr.py000066400000000000000000000022661400005060600165050ustar00rootroot00000000000000"""Tools to generate __repr__ strings. """ from __future__ import unicode_literals import typing if typing.TYPE_CHECKING: from typing import Text, Tuple def make_repr(class_name, *args, **kwargs): # type: (Text, *object, **Tuple[object, object]) -> Text """Generate a repr string. Positional arguments should be the positional arguments used to construct the class. Keyword arguments should consist of tuples of the attribute value and default. If the value is the default, then it won't be rendered in the output. Example: >>> class MyClass(object): ... def __init__(self, name=None): ... self.name = name ... def __repr__(self): ... 
return make_repr('MyClass', 'foo', name=(self.name, None)) >>> MyClass('Will') MyClass('foo', name='Will') >>> MyClass(None) MyClass('foo') """ arguments = [repr(arg) for arg in args] arguments.extend( [ "{}={!r}".format(name, value) for name, (value, default) in sorted(kwargs.items()) if value != default ] ) return "{}({})".format(class_name, ", ".join(arguments)) pyfilesystem2-2.4.12/fs/_typing.py000066400000000000000000000006171400005060600170450ustar00rootroot00000000000000""" Typing objects missing from Python 3.5.1 """ import sys import six _PY = sys.version_info from typing import overload # type: ignore if _PY.major == 3 and _PY.minor == 5 and _PY.micro in (0, 1): def overload(func): # pragma: no cover # noqa: F811 return func try: from typing import Text except ImportError: # pragma: no cover Text = six.text_type # type: ignore pyfilesystem2-2.4.12/fs/_url_tools.py000066400000000000000000000024361400005060600175560ustar00rootroot00000000000000import re import six import platform import typing if typing.TYPE_CHECKING: from typing import Text _WINDOWS_PLATFORM = platform.system() == "Windows" def url_quote(path_snippet): # type: (Text) -> Text """ On Windows, it will separate the drive letter and quote the Windows path alone. No magic on a Unix-like path, just the pythonic `pathname2url`. Arguments: path_snippet: a file path, relative or absolute. """ if _WINDOWS_PLATFORM and _has_drive_letter(path_snippet): drive_letter, path = path_snippet.split(":", 1) if six.PY2: path = path.encode("utf-8") path = six.moves.urllib.request.pathname2url(path) path_snippet = "{}:{}".format(drive_letter, path) else: if six.PY2: path_snippet = path_snippet.encode("utf-8") path_snippet = six.moves.urllib.request.pathname2url(path_snippet) return path_snippet def _has_drive_letter(path_snippet): # type: (Text) -> bool """ The following path will get True D:/Data C:\\My Documents\\ test And will get False /tmp/abc:test Arguments: path_snippet: a file path, relative or absolute.
""" windows_drive_pattern = ".:[/\\\\].*$" return re.match(windows_drive_pattern, path_snippet) is not None pyfilesystem2-2.4.12/fs/_version.py000066400000000000000000000001041400005060600172070ustar00rootroot00000000000000"""Version, used in module and setup.py. """ __version__ = "2.4.12" pyfilesystem2-2.4.12/fs/appfs.py000066400000000000000000000133251400005060600165050ustar00rootroot00000000000000"""Manage filesystems in platform-specific application directories. These classes abstract away the different requirements for user data across platforms, which vary in their conventions. They are all subclasses of `~fs.osfs.OSFS`. """ # Thanks to authors of https://pypi.org/project/appdirs # see http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx import typing from .osfs import OSFS from ._repr import make_repr from appdirs import AppDirs if typing.TYPE_CHECKING: from typing import Optional, Text __all__ = [ "UserDataFS", "UserConfigFS", "SiteDataFS", "SiteConfigFS", "UserCacheFS", "UserLogFS", ] class _AppFS(OSFS): """Abstract base class for an app FS. """ # FIXME(@althonos): replace by ClassVar[Text] once # https://github.com/python/mypy/pull/4718 is accepted # (subclass override will raise errors until then) app_dir = None # type: Text def __init__( self, appname, # type: Text author=None, # type: Optional[Text] version=None, # type: Optional[Text] roaming=False, # type: bool create=True, # type: bool ): # type: (...) 
-> None self.app_dirs = AppDirs(appname, author, version, roaming) self._create = create super(_AppFS, self).__init__( getattr(self.app_dirs, self.app_dir), create=create ) def __repr__(self): # type: () -> Text return make_repr( self.__class__.__name__, self.app_dirs.appname, author=(self.app_dirs.appauthor, None), version=(self.app_dirs.version, None), roaming=(self.app_dirs.roaming, False), create=(self._create, True), ) def __str__(self): # type: () -> Text return "<{} '{}'>".format( self.__class__.__name__.lower(), self.app_dirs.appname ) class UserDataFS(_AppFS): """A filesystem for per-user application data. May also be opened with ``open_fs('userdata://appname:author:version')``. Arguments: appname (str): The name of the application. author (str): The name of the author (used on Windows). version (str): Optional version string, if a unique location per version of the application is required. roaming (bool): If `True`, use a *roaming* profile on Windows. create (bool): If `True` (the default) the directory will be created if it does not exist. """ app_dir = "user_data_dir" class UserConfigFS(_AppFS): """A filesystem for per-user config data. May also be opened with ``open_fs('userconf://appname:author:version')``. Arguments: appname (str): The name of the application. author (str): The name of the author (used on Windows). version (str): Optional version string, if a unique location per version of the application is required. roaming (bool): If `True`, use a *roaming* profile on Windows. create (bool): If `True` (the default) the directory will be created if it does not exist. """ app_dir = "user_config_dir" class UserCacheFS(_AppFS): """A filesystem for per-user application cache data. May also be opened with ``open_fs('usercache://appname:author:version')``. Arguments: appname (str): The name of the application. author (str): The name of the author (used on Windows). 
version (str): Optional version string, if a unique location per version of the application is required. roaming (bool): If `True`, use a *roaming* profile on Windows. create (bool): If `True` (the default) the directory will be created if it does not exist. """ app_dir = "user_cache_dir" class SiteDataFS(_AppFS): """A filesystem for application site data. May also be opened with ``open_fs('sitedata://appname:author:version')``. Arguments: appname (str): The name of the application. author (str): The name of the author (used on Windows). version (str): Optional version string, if a unique location per version of the application is required. roaming (bool): If `True`, use a *roaming* profile on Windows. create (bool): If `True` (the default) the directory will be created if it does not exist. """ app_dir = "site_data_dir" class SiteConfigFS(_AppFS): """A filesystem for application config data. May also be opened with ``open_fs('siteconf://appname:author:version')``. Arguments: appname (str): The name of the application. author (str): The name of the author (used on Windows). version (str): Optional version string, if a unique location per version of the application is required. roaming (bool): If `True`, use a *roaming* profile on Windows. create (bool): If `True` (the default) the directory will be created if it does not exist. """ app_dir = "site_config_dir" class UserLogFS(_AppFS): """A filesystem for per-user application log data. May also be opened with ``open_fs('userlog://appname:author:version')``. Arguments: appname (str): The name of the application. author (str): The name of the author (used on Windows). version (str): Optional version string, if a unique location per version of the application is required. roaming (bool): If `True`, use a *roaming* profile on Windows. create (bool): If `True` (the default) the directory will be created if it does not exist. 
""" app_dir = "user_log_dir" pyfilesystem2-2.4.12/fs/base.py000066400000000000000000001555451400005060600163210ustar00rootroot00000000000000"""PyFilesystem base class. The filesystem base class is common to all filesystems. If you familiarize yourself with this (rather straightforward) API, you can work with any of the supported filesystems. """ from __future__ import absolute_import, print_function, unicode_literals import abc import hashlib import itertools import os import threading import time import typing from contextlib import closing from functools import partial, wraps import warnings import six from . import copy, errors, fsencode, iotools, move, tools, walk, wildcard from .glob import BoundGlobber from .mode import validate_open_mode from .path import abspath, join, normpath from .time import datetime_to_epoch from .walk import Walker if typing.TYPE_CHECKING: from datetime import datetime from threading import RLock from typing import ( Any, BinaryIO, Callable, Collection, Dict, IO, Iterable, Iterator, List, Mapping, Optional, Text, Tuple, Type, Union, ) from types import TracebackType from .enums import ResourceType from .info import Info, RawInfo from .subfs import SubFS from .permissions import Permissions from .walk import BoundWalker _F = typing.TypeVar("_F", bound="FS") _T = typing.TypeVar("_T", bound="FS") _OpendirFactory = Callable[[_T, Text], SubFS[_T]] __all__ = ["FS"] def _new_name(method, old_name): """Return a method with a deprecation warning.""" # Looks suspiciously like a decorator, but isn't! @wraps(method) def _method(*args, **kwargs): warnings.warn( "method '{}' has been deprecated, please rename to '{}'".format( old_name, method.__name__ ), DeprecationWarning, ) return method(*args, **kwargs) deprecated_msg = """ Note: .. 
deprecated:: 2.2.0 Please use `~{}` """.format( method.__name__ ) if getattr(_method, "__doc__", None) is not None: _method.__doc__ += deprecated_msg return _method @six.add_metaclass(abc.ABCMeta) class FS(object): """Base class for FS objects. """ # This is the "standard" meta namespace. _meta = {} # type: Dict[Text, Union[Text, int, bool, None]] # most FS will use default walking algorithms walker_class = Walker # default to SubFS, used by opendir and should be returned by makedir(s) subfs_class = None def __init__(self): # type: (...) -> None """Create a filesystem. See help(type(self)) for accurate signature. """ self._closed = False self._lock = threading.RLock() super(FS, self).__init__() def __del__(self): """Auto-close the filesystem on exit.""" self.close() def __enter__(self): # type: (...) -> FS """Allow use of filesystem as a context manager. """ return self def __exit__( self, exc_type, # type: Optional[Type[BaseException]] exc_value, # type: Optional[BaseException] traceback, # type: Optional[TracebackType] ): # type: (...) -> None """Close filesystem on exit. """ self.close() @property def glob(self): """`~fs.glob.BoundGlobber`: a globber object.. """ return BoundGlobber(self) @property def walk(self): # type: (_F) -> BoundWalker[_F] """`~fs.walk.BoundWalker`: a walker bound to this filesystem. """ return self.walker_class.bind(self) # ---------------------------------------------------------------- # # Required methods # # Filesystems must implement these methods. # # ---------------------------------------------------------------- # @abc.abstractmethod def getinfo(self, path, namespaces=None): # type: (Text, Optional[Collection[Text]]) -> Info """Get information about a resource on a filesystem. Arguments: path (str): A path to a resource on the filesystem. namespaces (list, optional): Info namespaces to query (defaults to *[basic]*). Returns: ~fs.info.Info: resource information object. 

        For more information regarding resource information, see :ref:`info`.

        """

    @abc.abstractmethod
    def listdir(self, path):
        # type: (Text) -> List[Text]
        """Get a list of the resource names in a directory.

        This method will return a list of the resources in a directory.
        A *resource* is a file, directory, or one of the other types
        defined in `~fs.enums.ResourceType`.

        Arguments:
            path (str): A path to a directory on the filesystem.

        Returns:
            list: list of names, relative to ``path``.

        Raises:
            fs.errors.DirectoryExpected: If ``path`` is not a directory.
            fs.errors.ResourceNotFound: If ``path`` does not exist.

        """

    @abc.abstractmethod
    def makedir(
        self,
        path,  # type: Text
        permissions=None,  # type: Optional[Permissions]
        recreate=False,  # type: bool
    ):
        # type: (...) -> SubFS[FS]
        """Make a directory.

        Arguments:
            path (str): Path to directory from root.
            permissions (~fs.permissions.Permissions, optional): a
                `Permissions` instance, or `None` to use default.
            recreate (bool): Set to `True` to avoid raising an error if
                the directory already exists (defaults to `False`).

        Returns:
            ~fs.subfs.SubFS: a filesystem whose root is the new directory.

        Raises:
            fs.errors.DirectoryExists: If the path already exists.
            fs.errors.ResourceNotFound: If the path is not found.

        """

    @abc.abstractmethod
    def openbin(
        self,
        path,  # type: Text
        mode="r",  # type: Text
        buffering=-1,  # type: int
        **options  # type: Any
    ):
        # type: (...) -> BinaryIO
        """Open a binary file-like object.

        Arguments:
            path (str): A path on the filesystem.
            mode (str): Mode to open file (must be a valid non-text mode,
                defaults to *r*). Since this method only opens binary files,
                the ``b`` in the mode string is implied.
            buffering (int): Buffering policy (-1 to use default buffering,
                0 to disable buffering, or any positive integer to indicate
                a buffer size).
            **options: keyword arguments for any additional information
                required by the filesystem (if any).

        Returns:
            io.IOBase: a *file-like* object.

        Raises:
            fs.errors.FileExpected: If the path is not a file.
            fs.errors.FileExists: If the file exists, and *exclusive
                mode* is specified (``x`` in the mode).
            fs.errors.ResourceNotFound: If the path does not exist.

        """

    @abc.abstractmethod
    def remove(self, path):
        # type: (Text) -> None
        """Remove a file from the filesystem.

        Arguments:
            path (str): Path of the file to remove.

        Raises:
            fs.errors.FileExpected: If the path is a directory.
            fs.errors.ResourceNotFound: If the path does not exist.

        """

    @abc.abstractmethod
    def removedir(self, path):
        # type: (Text) -> None
        """Remove a directory from the filesystem.

        Arguments:
            path (str): Path of the directory to remove.

        Raises:
            fs.errors.DirectoryNotEmpty: If the directory is not empty
                (see `~fs.base.FS.removetree` for a way to remove the
                directory contents).
            fs.errors.DirectoryExpected: If the path does not refer to
                a directory.
            fs.errors.ResourceNotFound: If no resource exists at the
                given path.
            fs.errors.RemoveRootError: If an attempt is made to remove
                the root directory (i.e. ``'/'``).

        """

    @abc.abstractmethod
    def setinfo(self, path, info):
        # type: (Text, RawInfo) -> None
        """Set info on a resource.

        This method is the complement to `~fs.base.FS.getinfo`
        and is used to set info values on a resource.

        Arguments:
            path (str): Path to a resource on the filesystem.
            info (dict): Dictionary of resource info.

        Raises:
            fs.errors.ResourceNotFound: If ``path`` does not exist
                on the filesystem.

        The ``info`` dict should be in the same format as the raw
        info returned by ``getinfo(file).raw``.

        Example:
            >>> details_info = {"details": {
            ...     "modified": time.time()
            ... }}
            >>> my_fs.setinfo('file.txt', details_info)

        """

    # ---------------------------------------------------------------- #
    # Optional methods                                                 #
    # Filesystems *may* implement these methods.                       #
    # ---------------------------------------------------------------- #

    def appendbytes(self, path, data):
        # type: (Text, bytes) -> None
        # FIXME(@althonos): accept bytearray and memoryview as well ?
        """Append bytes to the end of a file, creating it if needed.

        Arguments:
            path (str): Path to a file.
            data (bytes): Bytes to append.

        Raises:
            TypeError: If ``data`` is not a `bytes` instance.
            fs.errors.ResourceNotFound: If a parent directory of
                ``path`` does not exist.

        """
        if not isinstance(data, bytes):
            raise TypeError("must be bytes")
        with self._lock:
            with self.open(path, "ab") as append_file:
                append_file.write(data)

    def appendtext(
        self,
        path,  # type: Text
        text,  # type: Text
        encoding="utf-8",  # type: Text
        errors=None,  # type: Optional[Text]
        newline="",  # type: Text
    ):
        # type: (...) -> None
        """Append text to the end of a file, creating it if needed.

        Arguments:
            path (str): Path to a file.
            text (str): Text to append.
            encoding (str): Encoding for text files (defaults to ``utf-8``).
            errors (str, optional): What to do with unicode decode errors
                (see `codecs` module for more information).
            newline (str): Newline parameter.

        Raises:
            TypeError: If ``text`` is not a unicode string.
            fs.errors.ResourceNotFound: If a parent directory of
                ``path`` does not exist.

        """
        if not isinstance(text, six.text_type):
            raise TypeError("must be unicode string")
        with self._lock:
            with self.open(
                path, "at", encoding=encoding, errors=errors, newline=newline
            ) as append_file:
                append_file.write(text)

    def close(self):
        # type: () -> None
        """Close the filesystem and release any resources.

        It is important to call this method when you have finished
        working with the filesystem. Some filesystems may not finalize
        changes until they are closed (archives for example). You may
        call this method explicitly (it is safe to call close multiple
        times), or you can use the filesystem as a context manager to
        automatically close.

        Example:
            >>> with OSFS('~/Desktop') as desktop_fs:
            ...     desktop_fs.writetext(
            ...         'note.txt',
            ...         "Don't forget to tape Game of Thrones"
            ...     )

        If you attempt to use a filesystem that has been closed, a
        `~fs.errors.FilesystemClosed` exception will be thrown.
""" self._closed = True def copy(self, src_path, dst_path, overwrite=False): # type: (Text, Text, bool) -> None """Copy file contents from ``src_path`` to ``dst_path``. Arguments: src_path (str): Path of source file. dst_path (str): Path to destination file. overwrite (bool): If `True`, overwrite the destination file if it exists (defaults to `False`). Raises: fs.errors.DestinationExists: If ``dst_path`` exists, and ``overwrite`` is `False`. fs.errors.ResourceNotFound: If a parent directory of ``dst_path`` does not exist. """ with self._lock: if not overwrite and self.exists(dst_path): raise errors.DestinationExists(dst_path) with closing(self.open(src_path, "rb")) as read_file: # FIXME(@althonos): typing complains because open return IO self.upload(dst_path, read_file) # type: ignore def copydir(self, src_path, dst_path, create=False): # type: (Text, Text, bool) -> None """Copy the contents of ``src_path`` to ``dst_path``. Arguments: src_path (str): Path of source directory. dst_path (str): Path to destination directory. create (bool): If `True`, then ``dst_path`` will be created if it doesn't exist already (defaults to `False`). Raises: fs.errors.ResourceNotFound: If the ``dst_path`` does not exist, and ``create`` is not `True`. """ with self._lock: if not create and not self.exists(dst_path): raise errors.ResourceNotFound(dst_path) if not self.getinfo(src_path).is_dir: raise errors.DirectoryExpected(src_path) copy.copy_dir(self, src_path, self, dst_path) def create(self, path, wipe=False): # type: (Text, bool) -> bool """Create an empty file. The default behavior is to create a new file if one doesn't already exist. If ``wipe`` is `True`, any existing file will be truncated. Arguments: path (str): Path to a new file in the filesystem. wipe (bool): If `True`, truncate any existing file to 0 bytes (defaults to `False`). Returns: bool: `True` if a new file had to be created. 
""" with self._lock: if not wipe and self.exists(path): return False with closing(self.open(path, "wb")): pass return True def desc(self, path): # type: (Text) -> Text """Return a short descriptive text regarding a path. Arguments: path (str): A path to a resource on the filesystem. Returns: str: a short description of the path. """ if not self.exists(path): raise errors.ResourceNotFound(path) try: syspath = self.getsyspath(path) except (errors.ResourceNotFound, errors.NoSysPath): return "{} on {}".format(path, self) else: return syspath def exists(self, path): # type: (Text) -> bool """Check if a path maps to a resource. Arguments: path (str): Path to a resource. Returns: bool: `True` if a resource exists at the given path. """ try: self.getinfo(path) except errors.ResourceNotFound: return False else: return True def filterdir( self, path, # type: Text files=None, # type: Optional[Iterable[Text]] dirs=None, # type: Optional[Iterable[Text]] exclude_dirs=None, # type: Optional[Iterable[Text]] exclude_files=None, # type: Optional[Iterable[Text]] namespaces=None, # type: Optional[Collection[Text]] page=None, # type: Optional[Tuple[int, int]] ): # type: (...) -> Iterator[Info] """Get an iterator of resource info, filtered by patterns. This method enhances `~fs.base.FS.scandir` with additional filtering functionality. Arguments: path (str): A path to a directory on the filesystem. files (list, optional): A list of UNIX shell-style patterns to filter file names, e.g. ``['*.py']``. dirs (list, optional): A list of UNIX shell-style patterns to filter directory names. exclude_dirs (list, optional): A list of patterns used to exclude directories. exclude_files (list, optional): A list of patterns used to exclude files. namespaces (list, optional): A list of namespaces to include in the resource information, e.g. ``['basic', 'access']``. 
            page (tuple, optional): May be a tuple of ``(<start>, <end>)``
                indexes to return an iterator of a subset of the resource
                info, or `None` to iterate over the entire directory.
                Paging a directory scan may be necessary for very large
                directories.

        Returns:
            ~collections.abc.Iterator: an iterator of `Info` objects.

        """
        resources = self.scandir(path, namespaces=namespaces)
        filters = []

        def match_dir(patterns, info):
            # type: (Optional[Iterable[Text]], Info) -> bool
            """Pattern match info.name."""
            return info.is_file or self.match(patterns, info.name)

        def match_file(patterns, info):
            # type: (Optional[Iterable[Text]], Info) -> bool
            """Pattern match info.name."""
            return info.is_dir or self.match(patterns, info.name)

        def exclude_dir(patterns, info):
            # type: (Optional[Iterable[Text]], Info) -> bool
            """Pattern match info.name."""
            return info.is_file or not self.match(patterns, info.name)

        def exclude_file(patterns, info):
            # type: (Optional[Iterable[Text]], Info) -> bool
            """Pattern match info.name."""
            return info.is_dir or not self.match(patterns, info.name)

        if files:
            filters.append(partial(match_file, files))
        if dirs:
            filters.append(partial(match_dir, dirs))
        if exclude_dirs:
            filters.append(partial(exclude_dir, exclude_dirs))
        if exclude_files:
            filters.append(partial(exclude_file, exclude_files))

        if filters:
            resources = (
                info for info in resources if all(_filter(info) for _filter in filters)
            )

        iter_info = iter(resources)
        if page is not None:
            start, end = page
            iter_info = itertools.islice(iter_info, start, end)
        return iter_info

    def readbytes(self, path):
        # type: (Text) -> bytes
        """Get the contents of a file as bytes.

        Arguments:
            path (str): A path to a readable file on the filesystem.

        Returns:
            bytes: the file contents.

        Raises:
            fs.errors.ResourceNotFound: if ``path`` does not exist.
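
        For example, a round-trip sketch with `MemoryFS`:

        ```python
        # Sketch: write raw bytes, then read them back in one call.
        from fs.memoryfs import MemoryFS

        mem = MemoryFS()
        mem.writebytes("data.bin", b"\x00\x01\x02")
        data = mem.readbytes("data.bin")
        assert data == b"\x00\x01\x02"
        ```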
""" with closing(self.open(path, mode="rb")) as read_file: contents = read_file.read() return contents getbytes = _new_name(readbytes, "getbytes") def download(self, path, file, chunk_size=None, **options): # type: (Text, BinaryIO, Optional[int], **Any) -> None """Copies a file from the filesystem to a file-like object. This may be more efficient that opening and copying files manually if the filesystem supplies an optimized method. Arguments: path (str): Path to a resource. file (file-like): A file-like object open for writing in binary mode. chunk_size (int, optional): Number of bytes to read at a time, if a simple copy is used, or `None` to use sensible default. **options: Implementation specific options required to open the source file. Note that the file object ``file`` will *not* be closed by this method. Take care to close it after this method completes (ideally with a context manager). Example: >>> with open('starwars.mov', 'wb') as write_file: ... my_fs.download('/movies/starwars.mov', write_file) """ with self._lock: with self.openbin(path, **options) as read_file: tools.copy_file_data(read_file, file, chunk_size=chunk_size) getfile = _new_name(download, "getfile") def readtext( self, path, # type: Text encoding=None, # type: Optional[Text] errors=None, # type: Optional[Text] newline="", # type: Text ): # type: (...) -> Text """Get the contents of a file as a string. Arguments: path (str): A path to a readable file on the filesystem. encoding (str, optional): Encoding to use when reading contents in text mode (defaults to `None`, reading in binary mode). errors (str, optional): Unicode errors parameter. newline (str): Newlines parameter. Returns: str: file contents. Raises: fs.errors.ResourceNotFound: If ``path`` does not exist. 
""" with closing( self.open( path, mode="rt", encoding=encoding, errors=errors, newline=newline ) ) as read_file: contents = read_file.read() return contents gettext = _new_name(readtext, "gettext") def getmeta(self, namespace="standard"): # type: (Text) -> Mapping[Text, object] """Get meta information regarding a filesystem. Arguments: namespace (str): The meta namespace (defaults to ``"standard"``). Returns: dict: the meta information. Meta information is associated with a *namespace* which may be specified with the ``namespace`` parameter. The default namespace, ``"standard"``, contains common information regarding the filesystem's capabilities. Some filesystems may provide other namespaces which expose less common or implementation specific information. If a requested namespace is not supported by a filesystem, then an empty dictionary will be returned. The ``"standard"`` namespace supports the following keys: =================== ============================================ key Description ------------------- -------------------------------------------- case_insensitive `True` if this filesystem is case insensitive. invalid_path_chars A string containing the characters that may not be used on this filesystem. max_path_length Maximum number of characters permitted in a path, or `None` for no limit. max_sys_path_length Maximum number of characters permitted in a sys path, or `None` for no limit. network `True` if this filesystem requires a network. read_only `True` if this filesystem is read only. supports_rename `True` if this filesystem supports an `os.rename` operation. =================== ============================================ Most builtin filesystems will provide all these keys, and third- party filesystems should do so whenever possible, but a key may not be present if there is no way to know the value. Note: Meta information is constant for the lifetime of the filesystem, and may be cached. 
""" if namespace == "standard": meta = self._meta.copy() else: meta = {} return meta def getsize(self, path): # type: (Text) -> int """Get the size (in bytes) of a resource. Arguments: path (str): A path to a resource. Returns: int: the *size* of the resource. The *size* of a file is the total number of readable bytes, which may not reflect the exact number of bytes of reserved disk space (or other storage medium). The size of a directory is the number of bytes of overhead use to store the directory entry. """ size = self.getdetails(path).size return size def getsyspath(self, path): # type: (Text) -> Text """Get the *system path* of a resource. Parameters: path (str): A path on the filesystem. Returns: str: the *system path* of the resource, if any. Raises: fs.errors.NoSysPath: If there is no corresponding system path. A system path is one recognized by the OS, that may be used outside of PyFilesystem (in an application or a shell for example). This method will get the corresponding system path that would be referenced by ``path``. Not all filesystems have associated system paths. Network and memory based filesystems, for example, may not physically store data anywhere the OS knows about. It is also possible for some paths to have a system path, whereas others don't. This method will always return a str on Py3.* and unicode on Py2.7. See `~getospath` if you need to encode the path as bytes. If ``path`` doesn't have a system path, a `~fs.errors.NoSysPath` exception will be thrown. Note: A filesystem may return a system path even if no resource is referenced by that path -- as long as it can be certain what that system path would be. """ raise errors.NoSysPath(path=path) def getospath(self, path): # type: (Text) -> bytes """Get a *system path* to a resource, encoded in the operating system's prefered encoding. Parameters: path (str): A path on the filesystem. Returns: str: the *system path* of the resource, if any. 

        Raises:
            fs.errors.NoSysPath: If there is no corresponding system path.

        This method takes the output of `~getsyspath` and encodes it to
        the filesystem's preferred encoding. In Python3 this step is
        not required, as the `os` module will do it automatically. In
        Python2.7, the encoding step is required to support filenames
        on the filesystem that don't encode correctly.

        Note:
            If you want your code to work in Python2.7 and Python3, then
            use this method if you want to work with the OS filesystem
            outside of the OSFS interface.

        """
        syspath = self.getsyspath(path)
        ospath = fsencode(syspath)
        return ospath

    def gettype(self, path):
        # type: (Text) -> ResourceType
        """Get the type of a resource.

        Parameters:
            path (str): A path on the filesystem.

        Returns:
            ~fs.enums.ResourceType: the type of the resource.

        The type of a resource is an integer that identifies what the
        resource references. The standard type integers may be one of
        the values in the `~fs.enums.ResourceType` enumerations.

        The most common resource types, supported by virtually all
        filesystems are ``directory`` (1) and ``file`` (2), but the
        following types are also possible:

        ===================   ======
        ResourceType          value
        -------------------   ------
        unknown               0
        directory             1
        file                  2
        character             3
        block_special_file    4
        fifo                  5
        socket                6
        symlink               7
        ===================   ======

        Standard resource types are positive integers, negative values
        are reserved for implementation specific resource types.

        """
        resource_type = self.getdetails(path).type
        return resource_type

    def geturl(self, path, purpose="download"):
        # type: (Text, Text) -> Text
        """Get the URL to a given resource.

        Parameters:
            path (str): A path on the filesystem.
            purpose (str): A short string that indicates which URL
                to retrieve for the given path (if there is more than
                one). The default is ``'download'``, which should return
                a URL that serves the file. Other filesystems may support
                other values for ``purpose``.

        Returns:
            str: a URL.

        Raises:
            fs.errors.NoURL: If the path does not map to a URL.

        """
        raise errors.NoURL(path, purpose)

    def hassyspath(self, path):
        # type: (Text) -> bool
        """Check if a path maps to a system path.

        Parameters:
            path (str): A path on the filesystem.

        Returns:
            bool: `True` if the resource at ``path`` has a *syspath*.

        """
        has_sys_path = True
        try:
            self.getsyspath(path)
        except errors.NoSysPath:
            has_sys_path = False
        return has_sys_path

    def hasurl(self, path, purpose="download"):
        # type: (Text, Text) -> bool
        """Check if a path has a corresponding URL.

        Parameters:
            path (str): A path on the filesystem.
            purpose (str): A purpose parameter, as given in
                `~fs.base.FS.geturl`.

        Returns:
            bool: `True` if an URL for the given purpose exists.

        """
        has_url = True
        try:
            self.geturl(path, purpose=purpose)
        except errors.NoURL:
            has_url = False
        return has_url

    def isclosed(self):
        # type: () -> bool
        """Check if the filesystem is closed."""
        return getattr(self, "_closed", False)

    def isdir(self, path):
        # type: (Text) -> bool
        """Check if a path maps to an existing directory.

        Parameters:
            path (str): A path on the filesystem.

        Returns:
            bool: `True` if ``path`` maps to a directory.

        """
        try:
            return self.getinfo(path).is_dir
        except errors.ResourceNotFound:
            return False

    def isempty(self, path):
        # type: (Text) -> bool
        """Check if a directory is empty.

        A directory is considered empty when it does not contain
        any file or any directory.

        Parameters:
            path (str): A path to a directory on the filesystem.

        Returns:
            bool: `True` if the directory is empty.

        Raises:
            errors.DirectoryExpected: If ``path`` is not a directory.
            errors.ResourceNotFound: If ``path`` does not exist.

        """
        return next(iter(self.scandir(path)), None) is None

    def isfile(self, path):
        # type: (Text) -> bool
        """Check if a path maps to an existing file.

        Parameters:
            path (str): A path on the filesystem.

        Returns:
            bool: `True` if ``path`` maps to a file.
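
        A sketch contrasting `isdir` and `isfile` on `MemoryFS`:

        ```python
        # Sketch: isfile/isdir are mutually exclusive per path, and
        # both return False for paths that do not exist.
        from fs.memoryfs import MemoryFS

        mem = MemoryFS()
        mem.makedir("subdir")
        mem.writetext("file.txt", "x")
        assert mem.isdir("subdir") and not mem.isfile("subdir")
        assert mem.isfile("file.txt") and not mem.isdir("file.txt")
        assert not mem.isfile("missing.txt")  # no exception raised
        ```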
""" try: return not self.getinfo(path).is_dir except errors.ResourceNotFound: return False def islink(self, path): # type: (Text) -> bool """Check if a path maps to a symlink. Parameters: path (str): A path on the filesystem. Returns: bool: `True` if ``path`` maps to a symlink. """ self.getinfo(path) return False def lock(self): # type: () -> RLock """Get a context manager that *locks* the filesystem. Locking a filesystem gives a thread exclusive access to it. Other threads will block until the threads with the lock has left the context manager. Returns: threading.RLock: a lock specific to the filesystem instance. Example: >>> with my_fs.lock(): # May block ... # code here has exclusive access to the filesystem It is a good idea to put a lock around any operations that you would like to be *atomic*. For instance if you are copying files, and you don't want another thread to delete or modify anything while the copy is in progress. Locking with this method is only required for code that calls multiple filesystem methods. Individual methods are thread safe already, and don't need to be locked. Note: This only locks at the Python level. There is nothing to prevent other processes from modifying the filesystem outside of the filesystem instance. """ return self._lock def movedir(self, src_path, dst_path, create=False): # type: (Text, Text, bool) -> None """Move directory ``src_path`` to ``dst_path``. Parameters: src_path (str): Path of source directory on the filesystem. dst_path (str): Path to destination directory. create (bool): If `True`, then ``dst_path`` will be created if it doesn't exist already (defaults to `False`). Raises: fs.errors.ResourceNotFound: if ``dst_path`` does not exist, and ``create`` is `False`. 
""" with self._lock: if not create and not self.exists(dst_path): raise errors.ResourceNotFound(dst_path) move.move_dir(self, src_path, self, dst_path) def makedirs( self, path, # type: Text permissions=None, # type: Optional[Permissions] recreate=False, # type: bool ): # type: (...) -> SubFS[FS] """Make a directory, and any missing intermediate directories. Arguments: path (str): Path to directory from root. permissions (~fs.permissions.Permissions, optional): Initial permissions, or `None` to use defaults. recreate (bool): If `False` (the default), attempting to create an existing directory will raise an error. Set to `True` to ignore existing directories. Returns: ~fs.subfs.SubFS: A sub-directory filesystem. Raises: fs.errors.DirectoryExists: if the path is already a directory, and ``recreate`` is `False`. fs.errors.DirectoryExpected: if one of the ancestors in the path is not a directory. """ self.check() with self._lock: dir_paths = tools.get_intermediate_dirs(self, path) for dir_path in dir_paths: try: self.makedir(dir_path, permissions=permissions) except errors.DirectoryExists: if not recreate: raise try: self.makedir(path, permissions=permissions) except errors.DirectoryExists: if not recreate: raise return self.opendir(path) def move(self, src_path, dst_path, overwrite=False): # type: (Text, Text, bool) -> None """Move a file from ``src_path`` to ``dst_path``. Arguments: src_path (str): A path on the filesystem to move. dst_path (str): A path on the filesystem where the source file will be written to. overwrite (bool): If `True`, destination path will be overwritten if it exists. Raises: fs.errors.FileExpected: If ``src_path`` maps to a directory instead of a file. fs.errors.DestinationExists: If ``dst_path`` exists, and ``overwrite`` is `False`. fs.errors.ResourceNotFound: If a parent directory of ``dst_path`` does not exist. 
""" if not overwrite and self.exists(dst_path): raise errors.DestinationExists(dst_path) if self.getinfo(src_path).is_dir: raise errors.FileExpected(src_path) if self.getmeta().get("supports_rename", False): try: src_sys_path = self.getsyspath(src_path) dst_sys_path = self.getsyspath(dst_path) except errors.NoSysPath: # pragma: no cover pass else: try: os.rename(src_sys_path, dst_sys_path) except OSError: pass else: return with self._lock: with self.open(src_path, "rb") as read_file: # FIXME(@althonos): typing complains because open return IO self.upload(dst_path, read_file) # type: ignore self.remove(src_path) def open( self, path, # type: Text mode="r", # type: Text buffering=-1, # type: int encoding=None, # type: Optional[Text] errors=None, # type: Optional[Text] newline="", # type: Text **options # type: Any ): # type: (...) -> IO """Open a file. Arguments: path (str): A path to a file on the filesystem. mode (str): Mode to open the file object with (defaults to *r*). buffering (int): Buffering policy (-1 to use default buffering, 0 to disable buffering, 1 to select line buffering, of any positive integer to indicate a buffer size). encoding (str): Encoding for text files (defaults to ``utf-8``) errors (str, optional): What to do with unicode decode errors (see `codecs` module for more information). newline (str): Newline parameter. **options: keyword arguments for any additional information required by the filesystem (if any). Returns: io.IOBase: a *file-like* object. Raises: fs.errors.FileExpected: If the path is not a file. fs.errors.FileExists: If the file exists, and *exclusive mode* is specified (``x`` in the mode). fs.errors.ResourceNotFound: If the path does not exist. 
""" validate_open_mode(mode) bin_mode = mode.replace("t", "") bin_file = self.openbin(path, mode=bin_mode, buffering=buffering) io_stream = iotools.make_stream( path, bin_file, mode=mode, buffering=buffering, encoding=encoding or "utf-8", errors=errors, newline=newline, **options ) return io_stream def opendir( self, # type: _F path, # type: Text factory=None, # type: Optional[_OpendirFactory] ): # type: (...) -> SubFS[FS] # FIXME(@althonos): use generics here if possible """Get a filesystem object for a sub-directory. Arguments: path (str): Path to a directory on the filesystem. factory (callable, optional): A callable that when invoked with an FS instance and ``path`` will return a new FS object representing the sub-directory contents. If no ``factory`` is supplied then `~fs.subfs_class` will be used. Returns: ~fs.subfs.SubFS: A filesystem representing a sub-directory. Raises: fs.errors.DirectoryExpected: If ``dst_path`` does not exist or is not a directory. """ from .subfs import SubFS _factory = factory or self.subfs_class or SubFS if not self.getbasic(path).is_dir: raise errors.DirectoryExpected(path=path) return _factory(self, path) def removetree(self, dir_path): # type: (Text) -> None """Recursively remove the contents of a directory. This method is similar to `~fs.base.removedir`, but will remove the contents of the directory if it is not empty. Arguments: dir_path (str): Path to a directory on the filesystem. """ _dir_path = abspath(normpath(dir_path)) with self._lock: walker = walk.Walker(search="depth") gen_info = walker.info(self, _dir_path) for _path, info in gen_info: if info.is_dir: self.removedir(_path) else: self.remove(_path) if _dir_path != "/": self.removedir(dir_path) def scandir( self, path, # type: Text namespaces=None, # type: Optional[Collection[Text]] page=None, # type: Optional[Tuple[int, int]] ): # type: (...) -> Iterator[Info] """Get an iterator of resource info. Arguments: path (str): A path to a directory on the filesystem. 
            namespaces (list, optional): A list of namespaces to include
                in the resource information, e.g. ``['basic', 'access']``.
            page (tuple, optional): May be a tuple of ``(<start>, <end>)``
                indexes to return an iterator of a subset of the resource
                info, or `None` to iterate over the entire directory.
                Paging a directory scan may be necessary for very large
                directories.

        Returns:
            ~collections.abc.Iterator: an iterator of `Info` objects.

        Raises:
            fs.errors.DirectoryExpected: If ``path`` is not a directory.
            fs.errors.ResourceNotFound: If ``path`` does not exist.

        """
        namespaces = namespaces or ()
        _path = abspath(normpath(path))

        info = (
            self.getinfo(join(_path, name), namespaces=namespaces)
            for name in self.listdir(path)
        )
        iter_info = iter(info)
        if page is not None:
            start, end = page
            iter_info = itertools.islice(iter_info, start, end)
        return iter_info

    def writebytes(self, path, contents):
        # type: (Text, bytes) -> None
        # FIXME(@althonos): accept bytearray and memoryview as well ?
        """Copy binary data to a file.

        Arguments:
            path (str): Destination path on the filesystem.
            contents (bytes): Data to be written.

        Raises:
            TypeError: if contents is not bytes.

        """
        if not isinstance(contents, bytes):
            raise TypeError("contents must be bytes")
        with closing(self.open(path, mode="wb")) as write_file:
            write_file.write(contents)

    setbytes = _new_name(writebytes, "setbytes")

    def upload(self, path, file, chunk_size=None, **options):
        # type: (Text, BinaryIO, Optional[int], **Any) -> None
        """Set a file to the contents of a binary file object.

        This method copies bytes from an open binary file to a file on
        the filesystem. If the destination exists, it will first be
        truncated.

        Arguments:
            path (str): A path on the filesystem.
            file (io.IOBase): a file object open for reading in
                binary mode.
            chunk_size (int, optional): Number of bytes to read at a
                time, if a simple copy is used, or `None` to use
                sensible default.
            **options: Implementation specific options required to open
                the source file.

        Note that the file object ``file`` will *not* be closed by this
        method. Take care to close it after this method completes
        (ideally with a context manager).

        Example:
            >>> with open('~/movies/starwars.mov', 'rb') as read_file:
            ...     my_fs.upload('starwars.mov', read_file)

        """
        with self._lock:
            with self.openbin(path, mode="wb", **options) as dst_file:
                tools.copy_file_data(file, dst_file, chunk_size=chunk_size)

    setbinfile = _new_name(upload, "setbinfile")

    def writefile(
        self,
        path,  # type: Text
        file,  # type: IO
        encoding=None,  # type: Optional[Text]
        errors=None,  # type: Optional[Text]
        newline="",  # type: Text
    ):
        # type: (...) -> None
        """Set a file to the contents of a file object.

        Arguments:
            path (str): A path on the filesystem.
            file (io.IOBase): A file object open for reading.
            encoding (str, optional): Encoding of destination file,
                defaults to `None` for binary.
            errors (str, optional): How encoding errors should be treated
                (same as `io.open`).
            newline (str): Newline parameter (same as `io.open`).

        This method is similar to `~FS.upload`, in that it copies data from a
        file-like object to a resource on the filesystem, but unlike
        ``upload``, this method also supports creating files in
        text-mode (if the ``encoding`` argument is supplied).

        Note that the file object ``file`` will *not* be closed by this
        method. Take care to close it after this method completes
        (ideally with a context manager).

        Example:
            >>> with open('myfile.txt') as read_file:
            ...     my_fs.writefile('myfile.txt', read_file)

        """
        mode = "wb" if encoding is None else "wt"

        with self._lock:
            with self.open(
                path, mode=mode, encoding=encoding, errors=errors, newline=newline
            ) as dst_file:
                tools.copy_file_data(file, dst_file)

    setfile = _new_name(writefile, "setfile")

    def settimes(self, path, accessed=None, modified=None):
        # type: (Text, Optional[datetime], Optional[datetime]) -> None
        """Set the accessed and modified time on a resource.

        Arguments:
            path: A path to a resource on the filesystem.
accessed (datetime, optional): The accessed time, or `None` (the default) to use the current time. modified (datetime, optional): The modified time, or `None` (the default) to use the same time as the ``accessed`` parameter. """ details = {} # type: dict raw_info = {"details": details} details["accessed"] = ( time.time() if accessed is None else datetime_to_epoch(accessed) ) details["modified"] = ( details["accessed"] if modified is None else datetime_to_epoch(modified) ) self.setinfo(path, raw_info) def writetext( self, path, # type: Text contents, # type: Text encoding="utf-8", # type: Text errors=None, # type: Optional[Text] newline="", # type: Text ): # type: (...) -> None """Create or replace a file with text. Arguments: path (str): Destination path on the filesystem. contents (str): Text to be written. encoding (str, optional): Encoding of destination file (defaults to ``'utf-8'``). errors (str, optional): How encoding errors should be treated (same as `io.open`). newline (str): Newline parameter (same as `io.open`). Raises: TypeError: if ``contents`` is not a unicode string. """ if not isinstance(contents, six.text_type): raise TypeError("contents must be unicode") with closing( self.open( path, mode="wt", encoding=encoding, errors=errors, newline=newline ) ) as write_file: write_file.write(contents) settext = _new_name(writetext, "settext") def touch(self, path): # type: (Text) -> None """Touch a file on the filesystem. Touching a file means creating a new file if ``path`` doesn't exist, or update accessed and modified times if the path does exist. This method is similar to the linux command of the same name. Arguments: path (str): A path to a file on the filesystem. """ with self._lock: now = time.time() if not self.create(path): raw_info = {"details": {"accessed": now, "modified": now}} self.setinfo(path, raw_info) def validatepath(self, path): # type: (Text) -> Text """Check if a path is valid, returning a normalized absolute path. 
Many filesystems have restrictions on the format of paths they support. This method will check that ``path`` is valid on the underlaying storage mechanism and throw a `~fs.errors.InvalidPath` exception if it is not. Arguments: path (str): A path. Returns: str: A normalized, absolute path. Raises: fs.errors.InvalidCharsInPath: If the path contains invalid characters. fs.errors.InvalidPath: If the path is invalid. fs.errors.FilesystemClosed: if the filesystem is closed. """ self.check() if isinstance(path, bytes): raise TypeError( "paths must be unicode (not str)" if six.PY2 else "paths must be str (not bytes)" ) meta = self.getmeta() invalid_chars = typing.cast(six.text_type, meta.get("invalid_path_chars")) if invalid_chars: if set(path).intersection(invalid_chars): raise errors.InvalidCharsInPath(path) max_sys_path_length = typing.cast(int, meta.get("max_sys_path_length", -1)) if max_sys_path_length != -1: try: sys_path = self.getsyspath(path) except errors.NoSysPath: # pragma: no cover pass else: if len(sys_path) > max_sys_path_length: _msg = "path too long (max {max_chars} characters in sys path)" msg = _msg.format(max_chars=max_sys_path_length) raise errors.InvalidPath(path, msg=msg) path = abspath(normpath(path)) return path # ---------------------------------------------------------------- # # Helper methods # # Filesystems should not implement these methods. # # ---------------------------------------------------------------- # def getbasic(self, path): # type: (Text) -> Info """Get the *basic* resource info. This method is shorthand for the following:: fs.getinfo(path, namespaces=['basic']) Arguments: path (str): A path on the filesystem. Returns: ~fs.info.Info: Resource information object for ``path``. """ return self.getinfo(path, namespaces=["basic"]) def getdetails(self, path): # type: (Text) -> Info """Get the *details* resource info. 
This method is shorthand for the following:: fs.getinfo(path, namespaces=['details']) Arguments: path (str): A path on the filesystem. Returns: ~fs.info.Info: Resource information object for ``path``. """ return self.getinfo(path, namespaces=["details"]) def check(self): # type: () -> None """Check if a filesystem may be used. Raises: fs.errors.FilesystemClosed: if the filesystem is closed. """ if self.isclosed(): raise errors.FilesystemClosed() def match(self, patterns, name): # type: (Optional[Iterable[Text]], Text) -> bool """Check if a name matches any of a list of wildcards. Arguments: patterns (list): A list of patterns, e.g. ``['*.py']`` name (str): A file or directory name (not a path) Returns: bool: `True` if ``name`` matches any of the patterns. If a filesystem is case *insensitive* (such as Windows) then this method will perform a case insensitive match (i.e. ``*.py`` will match the same names as ``*.PY``). Otherwise the match will be case sensitive (``*.py`` and ``*.PY`` will match different names). Example: >>> home_fs.match(['*.py'], '__init__.py') True >>> home_fs.match(['*.jpg', '*.png'], 'foo.gif') False Note: If ``patterns`` is `None` (or ``['*']``), then this method will always return `True`. """ if patterns is None: return True if isinstance(patterns, six.text_type): raise TypeError("patterns must be a list or sequence") case_sensitive = not typing.cast( bool, self.getmeta().get("case_insensitive", False) ) matcher = wildcard.get_matcher(patterns, case_sensitive) return matcher(name) def tree(self, **kwargs): # type: (**Any) -> None """Render a tree view of the filesystem to stdout or a file. The parameters are passed to :func:`~fs.tree.render`. Keyword Arguments: path (str): The path of the directory to start rendering from (defaults to root folder, i.e. ``'/'``). file (io.IOBase): An open file-like object to render the tree, or `None` for stdout. encoding (str): Unicode encoding, or `None` to auto-detect. 
max_levels (int): Maximum number of levels to display, or `None` for no maximum. with_color (bool): Enable terminal color output, or `None` to auto-detect terminal. dirs_first (bool): Show directories first. exclude (list): Option list of directory patterns to exclude from the tree render. filter (list): Optional list of files patterns to match in the tree render. """ from .tree import render render(self, **kwargs) def hash(self, path, name): # type: (Text, Text) -> Text """Get the hash of a file's contents. Arguments: path(str): A path on the filesystem. name(str): One of the algorithms supported by the hashlib module, e.g. `"md5"` Returns: str: The hex digest of the hash. Raises: fs.errors.UnsupportedHash: If the requested hash is not supported. """ self.validatepath(path) try: hash_object = hashlib.new(name) except ValueError: raise errors.UnsupportedHash("hash '{}' is not supported".format(name)) with self.openbin(path) as binary_file: while True: chunk = binary_file.read(1024 * 1024) if not chunk: break hash_object.update(chunk) return hash_object.hexdigest() pyfilesystem2-2.4.12/fs/compress.py000066400000000000000000000155641400005060600172360ustar00rootroot00000000000000"""Functions to compress the contents of a filesystem. Currently zip and tar are supported, using the `zipfile` and `tarfile` modules from the standard library. 
""" from __future__ import absolute_import from __future__ import print_function from __future__ import unicode_literals import time import tarfile import typing import zipfile from datetime import datetime import six from .enums import ResourceType from .path import relpath from .time import datetime_to_epoch from .errors import NoSysPath, MissingInfoNamespace from .walk import Walker if typing.TYPE_CHECKING: from typing import BinaryIO, Optional, Text, Tuple, Union from .base import FS ZipTime = Tuple[int, int, int, int, int, int] def write_zip( src_fs, # type: FS file, # type: Union[Text, BinaryIO] compression=zipfile.ZIP_DEFLATED, # type: int encoding="utf-8", # type: Text walker=None, # type: Optional[Walker] ): # type: (...) -> None """Write the contents of a filesystem to a zip file. Arguments: src_fs (~fs.base.FS): The source filesystem to compress. file (str or io.IOBase): Destination file, may be a file name or an open file object. compression (int): Compression to use (one of the constants defined in the `zipfile` module in the stdlib). Defaults to `zipfile.ZIP_DEFLATED`. encoding (str): The encoding to use for filenames. The default is ``"utf-8"``, use ``"CP437"`` if compatibility with WinZip is desired. walker (~fs.walk.Walker, optional): A `Walker` instance, or `None` to use default walker. You can use this to specify which files you want to compress. """ _zip = zipfile.ZipFile(file, mode="w", compression=compression, allowZip64=True) walker = walker or Walker() with _zip: gen_walk = walker.info(src_fs, namespaces=["details", "stat", "access"]) for path, info in gen_walk: # Zip names must be relative, directory names must end # with a slash. 
zip_name = relpath(path + "/" if info.is_dir else path) if not six.PY3: # Python2 expects bytes filenames zip_name = zip_name.encode(encoding, "replace") if info.has_namespace("stat"): # If the file has a stat namespace, get the # zip time directory from the stat structure st_mtime = info.get("stat", "st_mtime", None) _mtime = time.localtime(st_mtime) zip_time = _mtime[0:6] # type: ZipTime else: # Otherwise, use the modified time from details # namespace. mt = info.modified or datetime.utcnow() zip_time = (mt.year, mt.month, mt.day, mt.hour, mt.minute, mt.second) # NOTE(@althonos): typeshed's `zipfile.py` on declares # ZipInfo.__init__ for Python < 3 ?! zip_info = zipfile.ZipInfo(zip_name, zip_time) # type: ignore try: if info.permissions is not None: zip_info.external_attr = info.permissions.mode << 16 except MissingInfoNamespace: pass if info.is_dir: zip_info.external_attr |= 0x10 # This is how to record directories with zipfile _zip.writestr(zip_info, b"") else: # Get a syspath if possible try: sys_path = src_fs.getsyspath(path) except NoSysPath: # Write from bytes _zip.writestr(zip_info, src_fs.readbytes(path)) else: # Write from a file which is (presumably) # more memory efficient _zip.write(sys_path, zip_name) def write_tar( src_fs, # type: FS file, # type: Union[Text, BinaryIO] compression=None, # type: Optional[Text] encoding="utf-8", # type: Text walker=None, # type: Optional[Walker] ): # type: (...) -> None """Write the contents of a filesystem to a tar file. Arguments: file (str or io.IOBase): Destination file, may be a file name or an open file object. compression (str, optional): Compression to use, or `None` for a plain Tar archive without compression. encoding(str): The encoding to use for filenames. The default is ``"utf-8"``. walker (~fs.walk.Walker, optional): A `Walker` instance, or `None` to use default walker. You can use this to specify which files you want to compress. 
""" type_map = { ResourceType.block_special_file: tarfile.BLKTYPE, ResourceType.character: tarfile.CHRTYPE, ResourceType.directory: tarfile.DIRTYPE, ResourceType.fifo: tarfile.FIFOTYPE, ResourceType.file: tarfile.REGTYPE, ResourceType.socket: tarfile.AREGTYPE, # no type for socket ResourceType.symlink: tarfile.SYMTYPE, ResourceType.unknown: tarfile.AREGTYPE, # no type for unknown } tar_attr = [("uid", "uid"), ("gid", "gid"), ("uname", "user"), ("gname", "group")] mode = "w:{}".format(compression or "") if isinstance(file, (six.text_type, six.binary_type)): _tar = tarfile.open(file, mode=mode) else: _tar = tarfile.open(fileobj=file, mode=mode) current_time = time.time() walker = walker or Walker() with _tar: gen_walk = walker.info(src_fs, namespaces=["details", "stat", "access"]) for path, info in gen_walk: # Tar names must be relative tar_name = relpath(path) if not six.PY3: # Python2 expects bytes filenames tar_name = tar_name.encode(encoding, "replace") tar_info = tarfile.TarInfo(tar_name) if info.has_namespace("stat"): mtime = info.get("stat", "st_mtime", current_time) else: mtime = info.modified or current_time if isinstance(mtime, datetime): mtime = datetime_to_epoch(mtime) if isinstance(mtime, float): mtime = int(mtime) tar_info.mtime = mtime for tarattr, infoattr in tar_attr: if getattr(info, infoattr, None) is not None: setattr(tar_info, tarattr, getattr(info, infoattr, None)) if info.has_namespace("access"): tar_info.mode = getattr(info.permissions, "mode", 0o420) if info.is_dir: tar_info.type = tarfile.DIRTYPE _tar.addfile(tar_info) else: tar_info.type = type_map.get(info.type, tarfile.REGTYPE) tar_info.size = info.size with src_fs.openbin(path) as bin_file: _tar.addfile(tar_info, bin_file) pyfilesystem2-2.4.12/fs/constants.py000066400000000000000000000002561400005060600174070ustar00rootroot00000000000000"""Constants used by PyFilesystem. 
""" import io DEFAULT_CHUNK_SIZE = io.DEFAULT_BUFFER_SIZE * 16 """`int`: the size of a single chunk read from or written to a file. """ pyfilesystem2-2.4.12/fs/copy.py000066400000000000000000000353071400005060600163520ustar00rootroot00000000000000"""Functions for copying resources *between* filesystem. """ from __future__ import print_function, unicode_literals import typing from .errors import FSError from .opener import manage_fs from .path import abspath, combine, frombase, normpath from .tools import is_thread_safe from .walk import Walker if typing.TYPE_CHECKING: from typing import Callable, Optional, Text, Union from .base import FS _OnCopy = Callable[[FS, Text, FS, Text], object] def copy_fs( src_fs, # type: Union[FS, Text] dst_fs, # type: Union[FS, Text] walker=None, # type: Optional[Walker] on_copy=None, # type: Optional[_OnCopy] workers=0, # type: int ): # type: (...) -> None """Copy the contents of one filesystem to another. Arguments: src_fs (FS or str): Source filesystem (URL or instance). dst_fs (FS or str): Destination filesystem (URL or instance). walker (~fs.walk.Walker, optional): A walker object that will be used to scan for files in ``src_fs``. Set this if you only want to consider a sub-set of the resources in ``src_fs``. on_copy (callable): A function callback called after a single file copy is executed. Expected signature is ``(src_fs, src_path, dst_fs, dst_path)``. workers (int): Use `worker` threads to copy data, or ``0`` (default) for a single-threaded copy. """ return copy_dir( src_fs, "/", dst_fs, "/", walker=walker, on_copy=on_copy, workers=workers ) def copy_fs_if_newer( src_fs, # type: Union[FS, Text] dst_fs, # type: Union[FS, Text] walker=None, # type: Optional[Walker] on_copy=None, # type: Optional[_OnCopy] workers=0, # type: int ): # type: (...) -> None """Copy the contents of one filesystem to another, checking times. 
If both source and destination files exist, the copy is executed only if the source file is newer than the destination file. In case modification times of source or destination files are not available, copy file is always executed. Arguments: src_fs (FS or str): Source filesystem (URL or instance). dst_fs (FS or str): Destination filesystem (URL or instance). walker (~fs.walk.Walker, optional): A walker object that will be used to scan for files in ``src_fs``. Set this if you only want to consider a sub-set of the resources in ``src_fs``. on_copy (callable):A function callback called after a single file copy is executed. Expected signature is ``(src_fs, src_path, dst_fs, dst_path)``. workers (int): Use ``worker`` threads to copy data, or ``0`` (default) for a single-threaded copy. """ return copy_dir_if_newer( src_fs, "/", dst_fs, "/", walker=walker, on_copy=on_copy, workers=workers ) def _source_is_newer(src_fs, src_path, dst_fs, dst_path): # type: (FS, Text, FS, Text) -> bool """Determine if source file is newer than destination file. Arguments: src_fs (FS): Source filesystem (instance or URL). src_path (str): Path to a file on the source filesystem. dst_fs (FS): Destination filesystem (instance or URL). dst_path (str): Path to a file on the destination filesystem. Returns: bool: `True` if the source file is newer than the destination file or file modification time cannot be determined, `False` otherwise. """ try: if dst_fs.exists(dst_path): namespace = ("details", "modified") src_modified = src_fs.getinfo(src_path, namespace).modified if src_modified is not None: dst_modified = dst_fs.getinfo(dst_path, namespace).modified return dst_modified is None or src_modified > dst_modified return True except FSError: # pragma: no cover # todo: should log something here return True def copy_file( src_fs, # type: Union[FS, Text] src_path, # type: Text dst_fs, # type: Union[FS, Text] dst_path, # type: Text ): # type: (...) 
-> None """Copy a file from one filesystem to another. If the destination exists, and is a file, it will be first truncated. Arguments: src_fs (FS or str): Source filesystem (instance or URL). src_path (str): Path to a file on the source filesystem. dst_fs (FS or str): Destination filesystem (instance or URL). dst_path (str): Path to a file on the destination filesystem. """ with manage_fs(src_fs, writeable=False) as _src_fs: with manage_fs(dst_fs, create=True) as _dst_fs: if _src_fs is _dst_fs: # Same filesystem, so we can do a potentially optimized # copy _src_fs.copy(src_path, dst_path, overwrite=True) else: # Standard copy with _src_fs.lock(), _dst_fs.lock(): if _dst_fs.hassyspath(dst_path): with _dst_fs.openbin(dst_path, "w") as write_file: _src_fs.download(src_path, write_file) else: with _src_fs.openbin(src_path) as read_file: _dst_fs.upload(dst_path, read_file) def copy_file_internal( src_fs, # type: FS src_path, # type: Text dst_fs, # type: FS dst_path, # type: Text ): # type: (...) -> None """Low level copy, that doesn't call manage_fs or lock. If the destination exists, and is a file, it will be first truncated. This method exists to optimize copying in loops. In general you should prefer `copy_file`. Arguments: src_fs (FS): Source filesystem. src_path (str): Path to a file on the source filesystem. dst_fs (FS: Destination filesystem. dst_path (str): Path to a file on the destination filesystem. """ if src_fs is dst_fs: # Same filesystem, so we can do a potentially optimized # copy src_fs.copy(src_path, dst_path, overwrite=True) elif dst_fs.hassyspath(dst_path): with dst_fs.openbin(dst_path, "w") as write_file: src_fs.download(src_path, write_file) else: with src_fs.openbin(src_path) as read_file: dst_fs.upload(dst_path, read_file) def copy_file_if_newer( src_fs, # type: Union[FS, Text] src_path, # type: Text dst_fs, # type: Union[FS, Text] dst_path, # type: Text ): # type: (...) -> bool """Copy a file from one filesystem to another, checking times. 
If the destination exists, and is a file, it will be first truncated. If both source and destination files exist, the copy is executed only if the source file is newer than the destination file. In case modification times of source or destination files are not available, copy is always executed. Arguments: src_fs (FS or str): Source filesystem (instance or URL). src_path (str): Path to a file on the source filesystem. dst_fs (FS or str): Destination filesystem (instance or URL). dst_path (str): Path to a file on the destination filesystem. Returns: bool: `True` if the file copy was executed, `False` otherwise. """ with manage_fs(src_fs, writeable=False) as _src_fs: with manage_fs(dst_fs, create=True) as _dst_fs: if _src_fs is _dst_fs: # Same filesystem, so we can do a potentially optimized # copy if _source_is_newer(_src_fs, src_path, _dst_fs, dst_path): _src_fs.copy(src_path, dst_path, overwrite=True) return True else: return False else: # Standard copy with _src_fs.lock(), _dst_fs.lock(): if _source_is_newer(_src_fs, src_path, _dst_fs, dst_path): copy_file_internal(_src_fs, src_path, _dst_fs, dst_path) return True else: return False def copy_structure( src_fs, # type: Union[FS, Text] dst_fs, # type: Union[FS, Text] walker=None, # type: Optional[Walker] ): # type: (...) -> None """Copy directories (but not files) from ``src_fs`` to ``dst_fs``. Arguments: src_fs (FS or str): Source filesystem (instance or URL). dst_fs (FS or str): Destination filesystem (instance or URL). walker (~fs.walk.Walker, optional): A walker object that will be used to scan for files in ``src_fs``. Set this if you only want to consider a sub-set of the resources in ``src_fs``. 
""" walker = walker or Walker() with manage_fs(src_fs) as _src_fs: with manage_fs(dst_fs, create=True) as _dst_fs: with _src_fs.lock(), _dst_fs.lock(): for dir_path in walker.dirs(_src_fs): _dst_fs.makedir(dir_path, recreate=True) def copy_dir( src_fs, # type: Union[FS, Text] src_path, # type: Text dst_fs, # type: Union[FS, Text] dst_path, # type: Text walker=None, # type: Optional[Walker] on_copy=None, # type: Optional[_OnCopy] workers=0, # type: int ): # type: (...) -> None """Copy a directory from one filesystem to another. Arguments: src_fs (FS or str): Source filesystem (instance or URL). src_path (str): Path to a directory on the source filesystem. dst_fs (FS or str): Destination filesystem (instance or URL). dst_path (str): Path to a directory on the destination filesystem. walker (~fs.walk.Walker, optional): A walker object that will be used to scan for files in ``src_fs``. Set this if you only want to consider a sub-set of the resources in ``src_fs``. on_copy (callable, optional): A function callback called after a single file copy is executed. Expected signature is ``(src_fs, src_path, dst_fs, dst_path)``. workers (int): Use ``worker`` threads to copy data, or ``0`` (default) for a single-threaded copy. 
""" on_copy = on_copy or (lambda *args: None) walker = walker or Walker() _src_path = abspath(normpath(src_path)) _dst_path = abspath(normpath(dst_path)) def src(): return manage_fs(src_fs, writeable=False) def dst(): return manage_fs(dst_fs, create=True) from ._bulk import Copier with src() as _src_fs, dst() as _dst_fs: with _src_fs.lock(), _dst_fs.lock(): _thread_safe = is_thread_safe(_src_fs, _dst_fs) with Copier(num_workers=workers if _thread_safe else 0) as copier: _dst_fs.makedir(_dst_path, recreate=True) for dir_path, dirs, files in walker.walk(_src_fs, _src_path): copy_path = combine(_dst_path, frombase(_src_path, dir_path)) for info in dirs: _dst_fs.makedir(info.make_path(copy_path), recreate=True) for info in files: src_path = info.make_path(dir_path) dst_path = info.make_path(copy_path) copier.copy(_src_fs, src_path, _dst_fs, dst_path) on_copy(_src_fs, src_path, _dst_fs, dst_path) def copy_dir_if_newer( src_fs, # type: Union[FS, Text] src_path, # type: Text dst_fs, # type: Union[FS, Text] dst_path, # type: Text walker=None, # type: Optional[Walker] on_copy=None, # type: Optional[_OnCopy] workers=0, # type: int ): # type: (...) -> None """Copy a directory from one filesystem to another, checking times. If both source and destination files exist, the copy is executed only if the source file is newer than the destination file. In case modification times of source or destination files are not available, copy is always executed. Arguments: src_fs (FS or str): Source filesystem (instance or URL). src_path (str): Path to a directory on the source filesystem. dst_fs (FS or str): Destination filesystem (instance or URL). dst_path (str): Path to a directory on the destination filesystem. walker (~fs.walk.Walker, optional): A walker object that will be used to scan for files in ``src_fs``. Set this if you only want to consider a sub-set of the resources in ``src_fs``. on_copy (callable, optional): A function callback called after a single file copy is executed. 
Expected signature is ``(src_fs, src_path, dst_fs, dst_path)``. workers (int): Use ``worker`` threads to copy data, or ``0`` (default) for a single-threaded copy. """ on_copy = on_copy or (lambda *args: None) walker = walker or Walker() _src_path = abspath(normpath(src_path)) _dst_path = abspath(normpath(dst_path)) def src(): return manage_fs(src_fs, writeable=False) def dst(): return manage_fs(dst_fs, create=True) from ._bulk import Copier with src() as _src_fs, dst() as _dst_fs: with _src_fs.lock(), _dst_fs.lock(): _thread_safe = is_thread_safe(_src_fs, _dst_fs) with Copier(num_workers=workers if _thread_safe else 0) as copier: _dst_fs.makedir(_dst_path, recreate=True) namespace = ("details", "modified") dst_state = { path: info for path, info in walker.info(_dst_fs, _dst_path, namespace) if info.is_file } src_state = [ (path, info) for path, info in walker.info(_src_fs, _src_path, namespace) ] for dir_path, copy_info in src_state: copy_path = combine(_dst_path, frombase(_src_path, dir_path)) if copy_info.is_dir: _dst_fs.makedir(copy_path, recreate=True) elif copy_info.is_file: # dst file is present, try to figure out if copy # is necessary try: src_modified = copy_info.modified dst_modified = dst_state[dir_path].modified except KeyError: do_copy = True else: do_copy = ( src_modified is None or dst_modified is None or src_modified > dst_modified ) if do_copy: copier.copy(_src_fs, dir_path, _dst_fs, copy_path) on_copy(_src_fs, dir_path, _dst_fs, copy_path) pyfilesystem2-2.4.12/fs/enums.py000066400000000000000000000023441400005060600165220ustar00rootroot00000000000000"""Enums used by PyFilesystem. """ from __future__ import absolute_import from __future__ import unicode_literals import os from enum import IntEnum, unique @unique class ResourceType(IntEnum): """Resource Types. Positive values are reserved, negative values are implementation dependent. Most filesystems will support only directory(1) and file(2). 
Other types exist to identify more exotic resource types supported by Linux filesystems. """ #: Unknown resource type, used if the filesystem is unable to #: tell what the resource is. unknown = 0 #: A directory. directory = 1 #: A simple file. file = 2 #: A character file. character = 3 #: A block special file. block_special_file = 4 #: A first in first out file. fifo = 5 #: A socket. socket = 6 #: A symlink. symlink = 7 @unique class Seek(IntEnum): """Constants used by `io.IOBase.seek`. These match `os.SEEK_CUR`, `os.SEEK_END`, and `os.SEEK_SET` from the standard library. """ #: Seek from the current file position. current = os.SEEK_CUR #: Seek from the end of the file. end = os.SEEK_END #: Seek from the start of the file. set = os.SEEK_SET pyfilesystem2-2.4.12/fs/error_tools.py000066400000000000000000000074121400005060600177450ustar00rootroot00000000000000"""Tools for managing OS errors. """ from __future__ import print_function from __future__ import unicode_literals import errno import platform import sys import typing from contextlib import contextmanager from six import reraise from . import errors if typing.TYPE_CHECKING: from types import TracebackType from typing import Iterator, Optional, Text, Type, Union try: from collections.abc import Mapping except ImportError: from collections import Mapping # noqa: E811 _WINDOWS_PLATFORM = platform.system() == "Windows" class _ConvertOSErrors(object): """Context manager to convert OSErrors in to FS Errors. 
""" FILE_ERRORS = { 64: errors.RemoteConnectionError, # ENONET errno.EACCES: errors.PermissionDenied, errno.ENOENT: errors.ResourceNotFound, errno.EFAULT: errors.ResourceNotFound, errno.ESRCH: errors.ResourceNotFound, errno.ENOTEMPTY: errors.DirectoryNotEmpty, errno.EEXIST: errors.FileExists, 183: errors.DirectoryExists, # errno.ENOTDIR: errors.DirectoryExpected, errno.ENOTDIR: errors.ResourceNotFound, errno.EISDIR: errors.FileExpected, errno.EINVAL: errors.FileExpected, errno.ENOSPC: errors.InsufficientStorage, errno.EPERM: errors.PermissionDenied, errno.ENETDOWN: errors.RemoteConnectionError, errno.ECONNRESET: errors.RemoteConnectionError, errno.ENAMETOOLONG: errors.PathError, errno.EOPNOTSUPP: errors.Unsupported, errno.ENOSYS: errors.Unsupported, } DIR_ERRORS = FILE_ERRORS.copy() DIR_ERRORS[errno.ENOTDIR] = errors.DirectoryExpected DIR_ERRORS[errno.EEXIST] = errors.DirectoryExists DIR_ERRORS[errno.EINVAL] = errors.DirectoryExpected if _WINDOWS_PLATFORM: # pragma: no cover DIR_ERRORS[13] = errors.DirectoryExpected DIR_ERRORS[267] = errors.DirectoryExpected FILE_ERRORS[13] = errors.FileExpected def __init__(self, opname, path, directory=False): # type: (Text, Text, bool) -> None self._opname = opname self._path = path self._directory = directory def __enter__(self): # type: () -> _ConvertOSErrors return self def __exit__( self, exc_type, # type: Optional[Type[BaseException]] exc_value, # type: Optional[BaseException] traceback, # type: Optional[TracebackType] ): # type: (...) 
-> None os_errors = self.DIR_ERRORS if self._directory else self.FILE_ERRORS if exc_type and isinstance(exc_value, EnvironmentError): _errno = exc_value.errno fserror = os_errors.get(_errno, errors.OperationFailed) if _errno == errno.EACCES and sys.platform == "win32": if getattr(exc_value, "args", None) == 32: # pragma: no cover fserror = errors.ResourceLocked reraise(fserror, fserror(self._path, exc=exc_value), traceback) # Stops linter complaining about invalid class name convert_os_errors = _ConvertOSErrors @contextmanager def unwrap_errors(path_replace): # type: (Union[Text, Mapping[Text, Text]]) -> Iterator[None] """Get a context to map OS errors to their `fs.errors` counterpart. The context will re-write the paths in resource exceptions to be in the same context as the wrapped filesystem. The only parameter may be the path from the parent, if only one path is to be unwrapped. Or it may be a dictionary that maps wrapped paths on to unwrapped paths. """ try: yield except errors.ResourceError as e: if hasattr(e, "path"): if isinstance(path_replace, Mapping): e.path = path_replace.get(e.path, e.path) else: e.path = path_replace raise pyfilesystem2-2.4.12/fs/errors.py000066400000000000000000000217231400005060600167110ustar00rootroot00000000000000"""Exception classes thrown by filesystem operations. Errors relating to the underlying filesystem are translated in to one of the following exceptions. All Exception classes are derived from `~fs.errors.FSError` which may be used as a catch-all filesystem exception. 
""" from __future__ import unicode_literals from __future__ import print_function import functools import typing import six from six import text_type if typing.TYPE_CHECKING: from typing import Optional, Text __all__ = [ "BulkCopyFailed", "CreateFailed", "DestinationExists", "DirectoryExists", "DirectoryExpected", "DirectoryNotEmpty", "FileExists", "FileExpected", "FilesystemClosed", "FSError", "IllegalBackReference", "InsufficientStorage", "InvalidCharsInPath", "InvalidPath", "MissingInfoNamespace", "NoSysPath", "NoURL", "OperationFailed", "OperationTimeout", "PathError", "PermissionDenied", "RemoteConnectionError", "RemoveRootError", "ResourceError", "ResourceInvalid", "ResourceLocked", "ResourceNotFound", "ResourceReadOnly", "Unsupported", ] class MissingInfoNamespace(AttributeError): """An expected namespace is missing. """ def __init__(self, namespace): # type: (Text) -> None self.namespace = namespace msg = "namespace '{}' is required for this attribute" super(MissingInfoNamespace, self).__init__(msg.format(namespace)) def __reduce__(self): return type(self), (self.namespace,) @six.python_2_unicode_compatible class FSError(Exception): """Base exception for the `fs` module. """ default_message = "Unspecified error" def __init__(self, msg=None): # type: (Optional[Text]) -> None self._msg = msg or self.default_message super(FSError, self).__init__() def __str__(self): # type: () -> Text """Return the error message. """ msg = self._msg.format(**self.__dict__) return msg def __repr__(self): # type: () -> Text msg = self._msg.format(**self.__dict__) return "{}({!r})".format(self.__class__.__name__, msg) class FilesystemClosed(FSError): """Attempt to use a closed filesystem. 
""" default_message = "attempt to use closed filesystem" class BulkCopyFailed(FSError): """A copy operation failed in worker threads.""" default_message = "One or more copy operations failed (see errors attribute)" def __init__(self, errors): self.errors = errors super(BulkCopyFailed, self).__init__() class CreateFailed(FSError): """Filesystem could not be created. """ default_message = "unable to create filesystem, {details}" def __init__(self, msg=None, exc=None): # type: (Optional[Text], Optional[Exception]) -> None self._msg = msg or self.default_message self.details = "" if exc is None else text_type(exc) self.exc = exc @classmethod def catch_all(cls, func): @functools.wraps(func) def new_func(*args, **kwargs): try: return func(*args, **kwargs) except cls: raise except Exception as e: raise cls(exc=e) return new_func # type: ignore def __reduce__(self): return type(self), (self._msg, self.exc) class PathError(FSError): """Base exception for errors to do with a path string. """ default_message = "path '{path}' is invalid" def __init__(self, path, msg=None): # type: (Text, Optional[Text]) -> None self.path = path super(PathError, self).__init__(msg=msg) def __reduce__(self): return type(self), (self.path, self._msg) class NoSysPath(PathError): """The filesystem does not provide *sys paths* to the resource. """ default_message = "path '{path}' does not map to the local filesystem" class NoURL(PathError): """The filesystem does not provide an URL for the resource. """ default_message = "path '{path}' has no '{purpose}' URL" def __init__(self, path, purpose, msg=None): # type: (Text, Text, Optional[Text]) -> None self.purpose = purpose super(NoURL, self).__init__(path, msg=msg) def __reduce__(self): return type(self), (self.path, self.purpose, self._msg) class InvalidPath(PathError): """Path can't be mapped on to the underlaying filesystem. 
""" default_message = "path '{path}' is invalid on this filesystem " class InvalidCharsInPath(InvalidPath): """Path contains characters that are invalid on this filesystem. """ default_message = "path '{path}' contains invalid characters" class OperationFailed(FSError): """A specific operation failed. """ default_message = "operation failed, {details}" def __init__( self, path=None, # type: Optional[Text] exc=None, # type: Optional[Exception] msg=None, # type: Optional[Text] ): # type: (...) -> None self.path = path self.exc = exc self.details = "" if exc is None else text_type(exc) self.errno = getattr(exc, "errno", None) super(OperationFailed, self).__init__(msg=msg) def __reduce__(self): return type(self), (self.path, self.exc, self._msg) class Unsupported(OperationFailed): """Operation not supported by the filesystem. """ default_message = "not supported" class RemoteConnectionError(OperationFailed): """Operations encountered remote connection trouble. """ default_message = "remote connection error" class InsufficientStorage(OperationFailed): """Storage is insufficient for requested operation. """ default_message = "insufficient storage space" class PermissionDenied(OperationFailed): """Not enough permissions. """ default_message = "permission denied" class OperationTimeout(OperationFailed): """Filesystem took too long. """ default_message = "operation timed out" class RemoveRootError(OperationFailed): """Attempt to remove the root directory. """ default_message = "root directory may not be removed" class ResourceError(FSError): """Base exception class for error associated with a specific resource. 
""" default_message = "failed on path {path}" def __init__(self, path, exc=None, msg=None): # type: (Text, Optional[Exception], Optional[Text]) -> None self.path = path self.exc = exc super(ResourceError, self).__init__(msg=msg) def __reduce__(self): return type(self), (self.path, self.exc, self._msg) class ResourceNotFound(ResourceError): """Required resource not found. """ default_message = "resource '{path}' not found" class ResourceInvalid(ResourceError): """Resource has the wrong type. """ default_message = "resource '{path}' is invalid for this operation" class FileExists(ResourceError): """File already exists. """ default_message = "resource '{path}' exists" class FileExpected(ResourceInvalid): """Operation only works on files. """ default_message = "path '{path}' should be a file" class DirectoryExpected(ResourceInvalid): """Operation only works on directories. """ default_message = "path '{path}' should be a directory" class DestinationExists(ResourceError): """Target destination already exists. """ default_message = "destination '{path}' exists" class DirectoryExists(ResourceError): """Directory already exists. """ default_message = "directory '{path}' exists" class DirectoryNotEmpty(ResourceError): """Attempt to remove a non-empty directory. """ default_message = "directory '{path}' is not empty" class ResourceLocked(ResourceError): """Attempt to use a locked resource. """ default_message = "resource '{path}' is locked" class ResourceReadOnly(ResourceError): """Attempting to modify a read-only resource. """ default_message = "resource '{path}' is read only" class IllegalBackReference(ValueError): """Too many backrefs exist in a path. This error will occur if the back references in a path would be outside of the root. For example, ``"/foo/../../"``, contains two back references which would reference a directory above the root. Note: This exception is a subclass of `ValueError` as it is not strictly speaking an issue with a filesystem or resource. 
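The back-reference rule described above can be sketched with a small depth counter. ``escapes_root`` is a hypothetical helper written for this illustration (it is not part of the library); it reports whether the ``..`` components of a path would climb above the root:

```python
def escapes_root(path):
    # Hypothetical helper (illustration only): track directory depth while
    # walking components; a negative depth means ".." went above the root.
    depth = 0
    for component in path.strip("/").split("/"):
        if component == "..":
            depth -= 1
            if depth < 0:
                return True
        elif component not in ("", "."):
            depth += 1
    return False


print(escapes_root("/foo/../../"))  # -> True
print(escapes_root("/foo/../bar"))  # -> False
```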
""" def __init__(self, path): # type: (Text) -> None self.path = path msg = ("path '{path}' contains back-references outside of filesystem").format( path=path ) super(IllegalBackReference, self).__init__(msg) def __reduce__(self): return type(self), (self.path,) class UnsupportedHash(ValueError): """The requested hash algorithm is not supported. This exception will be thrown if a hash algorithm is requested that is not supported by hashlib. """ pyfilesystem2-2.4.12/fs/filesize.py000066400000000000000000000066661400005060600172200ustar00rootroot00000000000000# coding: utf-8 """Functions for reporting filesizes. The functions declared in this module should cover the different usecases needed to generate a string representation of a file size using several different units. Since there are many standards regarding file size units, three different functions have been implemented. See Also: * `Wikipedia: Binary prefix `_ """ from __future__ import division from __future__ import unicode_literals import typing if typing.TYPE_CHECKING: from typing import Iterable, SupportsInt, Text __all__ = ["traditional", "decimal", "binary"] def _to_str(size, suffixes, base): # type: (SupportsInt, Iterable[Text], int) -> Text try: size = int(size) except ValueError: raise TypeError("filesize requires a numeric value, not {!r}".format(size)) if size == 1: return "1 byte" elif size < base: return "{:,} bytes".format(size) # TODO (dargueta): Don't rely on unit or suffix being defined in the loop. for i, suffix in enumerate(suffixes, 2): # noqa: B007 unit = base ** i if size < unit: break return "{:,.1f} {}".format((base * size / unit), suffix) def traditional(size): # type: (SupportsInt) -> Text """Convert a filesize in to a string (powers of 1024, JDEC prefixes). In this convention, ``1024 B = 1 KB``. 
This is the format that was used to display the size of DVDs (*700 MB* meaning actually about *734 003 200 bytes*) before standardisation of IEC units among manufacturers, and still used by **Windows** to report the storage capacity of hard drives (*279.4 GB* meaning *279.4 × 1024³ bytes*). Arguments: size (int): A file size. Returns: `str`: A string containing an abbreviated file size and units. Example: >>> filesize.traditional(30000) '29.3 KB' """ return _to_str(size, ("KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"), 1024) def binary(size): # type: (SupportsInt) -> Text """Convert a filesize in to a string (powers of 1024, IEC prefixes). In this convention, ``1024 B = 1 KiB``. This is the format that has gained adoption among manufacturers to avoid ambiguity regarding size units, since it explicitly states using a binary base (*KiB = kibi bytes = kilo binary bytes*). This format is notably being used by the **Linux** kernel (see ``man 7 units``). Arguments: int (size): A file size. Returns: `str`: A string containing a abbreviated file size and units. Example: >>> filesize.binary(30000) '29.3 KiB' """ return _to_str(size, ("KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"), 1024) def decimal(size): # type: (SupportsInt) -> Text """Convert a filesize in to a string (powers of 1000, SI prefixes). In this convention, ``1000 B = 1 kB``. This is typically the format used to advertise the storage capacity of USB flash drives and the like (*256 MB* meaning actually a storage capacity of more than *256 000 000 B*), or used by **Mac OS X** since v10.6 to report file sizes. Arguments: int (size): A file size. Returns: `str`: A string containing a abbreviated file size and units. Example: >>> filesize.decimal(30000) '30.0 kB' """ return _to_str(size, ("kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"), 1000) pyfilesystem2-2.4.12/fs/ftpfs.py000066400000000000000000000630711400005060600165210ustar00rootroot00000000000000"""Manage filesystems on remote FTP servers. 
""" from __future__ import print_function from __future__ import unicode_literals import calendar import io import itertools import socket import threading import typing from collections import OrderedDict from contextlib import contextmanager from ftplib import FTP from ftplib import error_perm from ftplib import error_temp from typing import cast from six import PY2 from six import text_type from . import errors from .base import FS from .constants import DEFAULT_CHUNK_SIZE from .enums import ResourceType from .enums import Seek from .info import Info from .iotools import line_iterator from .mode import Mode from .path import abspath from .path import dirname from .path import basename from .path import normpath from .path import split from . import _ftp_parse as ftp_parse if typing.TYPE_CHECKING: import ftplib from typing import ( Any, BinaryIO, ByteString, ContextManager, Iterable, Iterator, Container, Dict, List, Optional, SupportsInt, Text, Tuple, Union, ) from .base import _OpendirFactory from .info import RawInfo from .permissions import Permissions from .subfs import SubFS _F = typing.TypeVar("_F", bound="FTPFS") __all__ = ["FTPFS"] @contextmanager def ftp_errors(fs, path=None): # type: (FTPFS, Optional[Text]) -> Iterator[None] try: with fs._lock: yield except socket.error: raise errors.RemoteConnectionError( msg="unable to connect to {}".format(fs.host) ) except EOFError: raise errors.RemoteConnectionError(msg="lost connection to {}".format(fs.host)) except error_temp as error: if path is not None: raise errors.ResourceError( path, msg="ftp error on resource '{}' ({})".format(path, error) ) else: raise errors.OperationFailed(msg="ftp error ({})".format(error)) except error_perm as error: code, message = _parse_ftp_error(error) if code == "552": raise errors.InsufficientStorage(path=path, msg=message) elif code in ("501", "550"): raise errors.ResourceNotFound(path=cast(str, path)) raise errors.PermissionDenied(msg=message) @contextmanager def 
manage_ftp(ftp): # type: (FTP) -> Iterator[FTP] try: yield ftp finally: try: ftp.quit() except Exception: # pragma: no cover pass def _parse_ftp_error(error): # type: (ftplib.Error) -> Tuple[Text, Text] """Extract code and message from ftp error.""" code, _, message = text_type(error).partition(" ") return code, message if PY2: def _encode(st, encoding): # type: (Union[Text, bytes], Text) -> str return st.encode(encoding) if isinstance(st, text_type) else st def _decode(st, encoding): # type: (Union[Text, bytes], Text) -> Text return st.decode(encoding, "replace") if isinstance(st, bytes) else st else: def _encode(st, _): # type: (str, str) -> str return st def _decode(st, _): # type: (str, str) -> str return st class FTPFile(io.RawIOBase): def __init__(self, ftpfs, path, mode): # type: (FTPFS, Text, Text) -> None super(FTPFile, self).__init__() self.fs = ftpfs self.path = path self.mode = Mode(mode) self.pos = 0 self._lock = threading.Lock() self.ftp = self._open_ftp() self._read_conn = None # type: Optional[socket.socket] self._write_conn = None # type: Optional[socket.socket] def _open_ftp(self): # type: () -> FTP """Open an ftp object for the file.""" ftp = self.fs._open_ftp() ftp.voidcmd(str("TYPE I")) return ftp @property def read_conn(self): # type: () -> socket.socket if self._read_conn is None: self._read_conn = self.ftp.transfercmd( str("RETR ") + _encode(self.path, self.ftp.encoding), self.pos ) return self._read_conn @property def write_conn(self): # type: () -> socket.socket if self._write_conn is None: if self.mode.appending: self._write_conn = self.ftp.transfercmd( str("APPE ") + _encode(self.path, self.ftp.encoding) ) else: self._write_conn = self.ftp.transfercmd( str("STOR ") + _encode(self.path, self.ftp.encoding), self.pos ) return self._write_conn def __repr__(self): # type: () -> str _repr = "" return _repr.format(self.fs.ftp_url, self.path, self.mode) def close(self): # type: () -> None if not self.closed: with self._lock: try: if 
self._write_conn is not None: self._write_conn.close() self._write_conn = None self.ftp.voidresp() # Ensure last write completed if self._read_conn is not None: self._read_conn.close() self._read_conn = None try: self.ftp.quit() except error_temp: # pragma: no cover pass finally: super(FTPFile, self).close() def tell(self): # type: () -> int return self.pos def readable(self): # type: () -> bool return self.mode.reading def read(self, size=-1): # type: (int) -> bytes if not self.mode.reading: raise IOError("File not open for reading") chunks = [] remaining = size conn = self.read_conn with self._lock: while remaining: if remaining < 0: read_size = DEFAULT_CHUNK_SIZE else: read_size = min(DEFAULT_CHUNK_SIZE, remaining) try: chunk = conn.recv(read_size) except socket.error: # pragma: no cover break if not chunk: break chunks.append(chunk) self.pos += len(chunk) remaining -= len(chunk) return b"".join(chunks) def readinto(self, buffer): # type: (bytearray) -> int data = self.read(len(buffer)) bytes_read = len(data) buffer[:bytes_read] = data return bytes_read def readline(self, size=-1): # type: (int) -> bytes return next(line_iterator(self, size)) # type: ignore def readlines(self, hint=-1): # type: (int) -> List[bytes] lines = [] size = 0 for line in line_iterator(self): # type: ignore lines.append(line) size += len(line) if hint != -1 and size > hint: break return lines def writable(self): # type: () -> bool return self.mode.writing def write(self, data): # type: (bytes) -> int if not self.mode.writing: raise IOError("File not open for writing") with self._lock: conn = self.write_conn data_pos = 0 remaining_data = len(data) while remaining_data: chunk_size = min(remaining_data, DEFAULT_CHUNK_SIZE) sent_size = conn.send(data[data_pos : data_pos + chunk_size]) data_pos += sent_size remaining_data -= sent_size self.pos += sent_size return data_pos def writelines(self, lines): # type: (Iterable[bytes]) -> None self.write(b"".join(lines)) def truncate(self, size=None): 
# type: (Optional[int]) -> int # Inefficient, but I don't know if truncate is possible with ftp with self._lock: if size is None: size = self.tell() with self.fs.openbin(self.path) as f: data = f.read(size) with self.fs.openbin(self.path, "w") as f: f.write(data) if len(data) < size: f.write(b"\0" * (size - len(data))) return size def seekable(self): # type: () -> bool return True def seek(self, pos, whence=Seek.set): # type: (int, SupportsInt) -> int _whence = int(whence) if _whence not in (Seek.set, Seek.current, Seek.end): raise ValueError("invalid value for whence") with self._lock: if _whence == Seek.set: new_pos = pos elif _whence == Seek.current: new_pos = self.pos + pos elif _whence == Seek.end: file_size = self.fs.getsize(self.path) new_pos = file_size + pos self.pos = max(0, new_pos) self.ftp.quit() self.ftp = self._open_ftp() if self._read_conn: self._read_conn.close() self._read_conn = None if self._write_conn: self._write_conn.close() self._write_conn = None return self.tell() class FTPFS(FS): """A FTP (File Transport Protocol) Filesystem. Arguments: host (str): A FTP host, e.g. ``'ftp.mirror.nl'``. user (str): A username (default is ``'anonymous'``). passwd (str): Password for the server, or `None` for anon. acct (str): FTP account. timeout (int): Timeout for contacting server (in seconds, defaults to 10). port (int): FTP port number (default 21). proxy (str, optional): An FTP proxy, or ``None`` (default) for no proxy. """ _meta = { "invalid_path_chars": "\0", "network": True, "read_only": False, "thread_safe": True, "unicode_paths": True, "virtual": False, } def __init__( self, host, # type: Text user="anonymous", # type: Text passwd="", # type: Text acct="", # type: Text timeout=10, # type: int port=21, # type: int proxy=None, # type: Optional[Text] ): # type: (...) 
-> None super(FTPFS, self).__init__() self._host = host self._user = user self.passwd = passwd self.acct = acct self.timeout = timeout self.port = port self.proxy = proxy self.encoding = "latin-1" self._ftp = None # type: Optional[FTP] self._welcome = None # type: Optional[Text] self._features = {} # type: Dict[Text, Text] def __repr__(self): # type: (...) -> Text return "FTPFS({!r}, port={!r})".format(self.host, self.port) def __str__(self): # type: (...) -> Text _fmt = "" if self.port == 21 else "" return _fmt.format(host=self.host, port=self.port) @property def user(self): # type: () -> Text return ( self._user if self.proxy is None else "{}@{}".format(self._user, self._host) ) @property def host(self): # type: () -> Text return self._host if self.proxy is None else self.proxy @classmethod def _parse_features(cls, feat_response): # type: (Text) -> Dict[Text, Text] """Parse a dict of features from FTP feat response. """ features = {} if feat_response.split("-")[0] == "211": for line in feat_response.splitlines(): if line.startswith(" "): key, _, value = line[1:].partition(" ") features[key] = value return features def _open_ftp(self): # type: () -> FTP """Open a new ftp object. 
""" _ftp = FTP() _ftp.set_debuglevel(0) with ftp_errors(self): _ftp.connect(self.host, self.port, self.timeout) _ftp.login(self.user, self.passwd, self.acct) self._features = {} try: feat_response = _decode(_ftp.sendcmd("FEAT"), "latin-1") except error_perm: # pragma: no cover self.encoding = "latin-1" else: self._features = self._parse_features(feat_response) self.encoding = "utf-8" if "UTF8" in self._features else "latin-1" if not PY2: _ftp.file = _ftp.sock.makefile( # type: ignore "r", encoding=self.encoding ) _ftp.encoding = self.encoding self._welcome = _ftp.welcome return _ftp def _manage_ftp(self): # type: () -> ContextManager[FTP] ftp = self._open_ftp() return manage_ftp(ftp) @property def ftp_url(self): # type: () -> Text """Get the FTP url this filesystem will open.""" if self.port == 21: _host_part = self.host else: _host_part = "{}:{}".format(self.host, self.port) if self.user == "anonymous" or self.user is None: _user_part = "" else: _user_part = "{}:{}@".format(self.user, self.passwd) url = "ftp://{}{}".format(_user_part, _host_part) return url @property def ftp(self): # type: () -> FTP """~ftplib.FTP: the underlying FTP client. """ return self._get_ftp() def geturl(self, path, purpose="download"): # type: (str, str) -> Text """Get FTP url for resource.""" _path = self.validatepath(path) if purpose != "download": raise errors.NoURL(_path, purpose) return "{}{}".format(self.ftp_url, _path) def _get_ftp(self): # type: () -> FTP if self._ftp is None: self._ftp = self._open_ftp() return self._ftp @property def features(self): # type: () -> Dict[Text, Text] """dict: features of the remote FTP server. 
""" self._get_ftp() return self._features def _read_dir(self, path): # type: (Text) -> Dict[Text, Info] _path = abspath(normpath(path)) lines = [] # type: List[Union[ByteString, Text]] with ftp_errors(self, path=path): self.ftp.retrlines( str("LIST ") + _encode(_path, self.ftp.encoding), lines.append ) lines = [ line.decode("utf-8") if isinstance(line, bytes) else line for line in lines ] _list = [Info(raw_info) for raw_info in ftp_parse.parse(lines)] dir_listing = OrderedDict({info.name: info for info in _list}) return dir_listing @property def supports_mlst(self): # type: () -> bool """bool: whether the server supports MLST feature. """ return "MLST" in self.features def create(self, path, wipe=False): # type: (Text, bool) -> bool _path = self.validatepath(path) with ftp_errors(self, path): if wipe or not self.isfile(path): empty_file = io.BytesIO() self.ftp.storbinary( str("STOR ") + _encode(_path, self.ftp.encoding), empty_file ) return True return False @classmethod def _parse_ftp_time(cls, time_text): # type: (Text) -> Optional[int] """Parse a time from an ftp directory listing. 
""" try: tm_year = int(time_text[0:4]) tm_month = int(time_text[4:6]) tm_day = int(time_text[6:8]) tm_hour = int(time_text[8:10]) tm_min = int(time_text[10:12]) tm_sec = int(time_text[12:14]) except ValueError: return None epoch_time = calendar.timegm( (tm_year, tm_month, tm_day, tm_hour, tm_min, tm_sec) ) return epoch_time @classmethod def _parse_facts(cls, line): # type: (Text) -> Tuple[Optional[Text], Dict[Text, Text]] name = None facts = {} for fact in line.split(";"): key, sep, value = fact.partition("=") if sep: key = key.strip().lower() value = value.strip() facts[key] = value else: name = basename(fact.rstrip("/").strip()) return name if name not in (".", "..") else None, facts @classmethod def _parse_mlsx(cls, lines): # type: (Iterable[Text]) -> Iterator[RawInfo] for line in lines: name, facts = cls._parse_facts(line.strip()) if name is None: continue _type = facts.get("type", "file") if _type not in {"dir", "file"}: continue is_dir = _type == "dir" raw_info = {} # type: Dict[Text, Dict[Text, object]] raw_info["basic"] = {"name": name, "is_dir": is_dir} raw_info["ftp"] = facts # type: ignore raw_info["details"] = { "type": (int(ResourceType.directory if is_dir else ResourceType.file)) } details = raw_info["details"] size_str = facts.get("size", facts.get("sizd", "0")) size = 0 if size_str.isdigit(): size = int(size_str) details["size"] = size if "modify" in facts: details["modified"] = cls._parse_ftp_time(facts["modify"]) if "create" in facts: details["created"] = cls._parse_ftp_time(facts["create"]) yield raw_info if typing.TYPE_CHECKING: def opendir(self, path, factory=None): # type: (_F, Text, Optional[_OpendirFactory]) -> SubFS[_F] pass def getinfo(self, path, namespaces=None): # type: (Text, Optional[Container[Text]]) -> Info _path = self.validatepath(path) namespaces = namespaces or () if _path == "/": return Info( { "basic": {"name": "", "is_dir": True}, "details": {"type": int(ResourceType.directory)}, } ) if self.supports_mlst: with self._lock: 
with ftp_errors(self, path=path): response = self.ftp.sendcmd( str("MLST ") + _encode(_path, self.ftp.encoding) ) lines = _decode(response, self.ftp.encoding).splitlines()[1:-1] for raw_info in self._parse_mlsx(lines): return Info(raw_info) with ftp_errors(self, path=path): dir_name, file_name = split(_path) directory = self._read_dir(dir_name) if file_name not in directory: raise errors.ResourceNotFound(path) info = directory[file_name] return info def getmeta(self, namespace="standard"): # type: (Text) -> Dict[Text, object] _meta = {} # type: Dict[Text, object] self._get_ftp() if namespace == "standard": _meta = self._meta.copy() _meta["unicode_paths"] = "UTF8" in self.features return _meta def listdir(self, path): # type: (Text) -> List[Text] _path = self.validatepath(path) with self._lock: dir_list = [info.name for info in self.scandir(_path)] return dir_list def makedir( self, # type: _F path, # type: Text permissions=None, # type: Optional[Permissions] recreate=False, # type: bool ): # type: (...) 
-> SubFS[_F] _path = self.validatepath(path) with ftp_errors(self, path=path): if _path == "/": if recreate: return self.opendir(path) else: raise errors.DirectoryExists(path) if not (recreate and self.isdir(path)): try: self.ftp.mkd(_encode(_path, self.ftp.encoding)) except error_perm as error: code, _ = _parse_ftp_error(error) if code == "550": if self.isdir(path): raise errors.DirectoryExists(path) else: if self.exists(path): raise errors.DirectoryExists(path) raise errors.ResourceNotFound(path) return self.opendir(path) def openbin(self, path, mode="r", buffering=-1, **options): # type: (Text, Text, int, **Any) -> BinaryIO _mode = Mode(mode) _mode.validate_bin() _path = self.validatepath(path) with self._lock: try: info = self.getinfo(_path) except errors.ResourceNotFound: if _mode.reading: raise errors.ResourceNotFound(path) if _mode.writing and not self.isdir(dirname(_path)): raise errors.ResourceNotFound(path) else: if info.is_dir: raise errors.FileExpected(path) if _mode.exclusive: raise errors.FileExists(path) ftp_file = FTPFile(self, _path, _mode.to_platform_bin()) return ftp_file # type: ignore def remove(self, path): # type: (Text) -> None self.check() _path = self.validatepath(path) with self._lock: if self.isdir(path): raise errors.FileExpected(path=path) with ftp_errors(self, path): self.ftp.delete(_encode(_path, self.ftp.encoding)) def removedir(self, path): # type: (Text) -> None _path = self.validatepath(path) if _path == "/": raise errors.RemoveRootError() with ftp_errors(self, path): try: self.ftp.rmd(_encode(_path, self.ftp.encoding)) except error_perm as error: code, _ = _parse_ftp_error(error) if code == "550": if self.isfile(path): raise errors.DirectoryExpected(path) if not self.isempty(path): raise errors.DirectoryNotEmpty(path) raise # pragma: no cover def _scandir(self, path, namespaces=None): # type: (Text, Optional[Container[Text]]) -> Iterator[Info] _path = self.validatepath(path) with self._lock: if self.supports_mlst: lines = [] 
with ftp_errors(self, path=path): try: self.ftp.retrlines( str("MLSD ") + _encode(_path, self.ftp.encoding), lambda l: lines.append(_decode(l, self.ftp.encoding)), ) except error_perm: if not self.getinfo(path).is_dir: raise errors.DirectoryExpected(path) raise # pragma: no cover if lines: for raw_info in self._parse_mlsx(lines): yield Info(raw_info) return for info in self._read_dir(_path).values(): yield info def scandir( self, path, # type: Text namespaces=None, # type: Optional[Container[Text]] page=None, # type: Optional[Tuple[int, int]] ): # type: (...) -> Iterator[Info] if not self.supports_mlst and not self.getinfo(path).is_dir: raise errors.DirectoryExpected(path) iter_info = self._scandir(path, namespaces=namespaces) if page is not None: start, end = page iter_info = itertools.islice(iter_info, start, end) return iter_info def upload(self, path, file, chunk_size=None, **options): # type: (Text, BinaryIO, Optional[int], **Any) -> None _path = self.validatepath(path) with self._lock: with self._manage_ftp() as ftp: with ftp_errors(self, path): ftp.storbinary( str("STOR ") + _encode(_path, self.ftp.encoding), file ) def writebytes(self, path, contents): # type: (Text, ByteString) -> None if not isinstance(contents, bytes): raise TypeError("contents must be bytes") self.upload(path, io.BytesIO(contents)) def setinfo(self, path, info): # type: (Text, RawInfo) -> None if not self.exists(path): raise errors.ResourceNotFound(path) def readbytes(self, path): # type: (Text) -> bytes _path = self.validatepath(path) data = io.BytesIO() with ftp_errors(self, path): with self._manage_ftp() as ftp: try: ftp.retrbinary( str("RETR ") + _encode(_path, self.ftp.encoding), data.write ) except error_perm as error: code, _ = _parse_ftp_error(error) if code == "550": if self.isdir(path): raise errors.FileExpected(path) raise data_bytes = data.getvalue() return data_bytes def close(self): # type: () -> None if not self.isclosed(): try: self.ftp.quit() except Exception: # pragma: 
no cover pass self._ftp = None super(FTPFS, self).close() pyfilesystem2-2.4.12/fs/glob.py000066400000000000000000000205011400005060600163110ustar00rootroot00000000000000from __future__ import unicode_literals from collections import namedtuple import re import typing from .lrucache import LRUCache from ._repr import make_repr from .path import iteratepath from . import wildcard GlobMatch = namedtuple("GlobMatch", ["path", "info"]) Counts = namedtuple("Counts", ["files", "directories", "data"]) LineCounts = namedtuple("LineCounts", ["lines", "non_blank"]) if typing.TYPE_CHECKING: from typing import Iterator, List, Optional, Pattern, Text, Tuple from .base import FS _PATTERN_CACHE = LRUCache( 1000 ) # type: LRUCache[Tuple[Text, bool], Tuple[int, bool, Pattern]] def _translate_glob(pattern, case_sensitive=True): levels = 0 recursive = False re_patterns = [""] for component in iteratepath(pattern): if component == "**": re_patterns.append(".*/?") recursive = True else: re_patterns.append( "/" + wildcard._translate(component, case_sensitive=case_sensitive) ) levels += 1 re_glob = "(?ms)^" + "".join(re_patterns) + ("/$" if pattern.endswith("/") else "$") return ( levels, recursive, re.compile(re_glob, 0 if case_sensitive else re.IGNORECASE), ) def match(pattern, path): # type: (str, str) -> bool """Compare a glob pattern with a path (case sensitive). Arguments: pattern (str): A glob pattern. path (str): A path. Returns: bool: ``True`` if the path matches the pattern. Example: >>> from fs.glob import match >>> match("**/*.py", "/fs/glob.py") True """ try: levels, recursive, re_pattern = _PATTERN_CACHE[(pattern, True)] except KeyError: levels, recursive, re_pattern = _translate_glob(pattern, case_sensitive=True) _PATTERN_CACHE[(pattern, True)] = (levels, recursive, re_pattern) return bool(re_pattern.match(path)) def imatch(pattern, path): # type: (str, str) -> bool """Compare a glob pattern with a path (case insensitive). Arguments: pattern (str): A glob pattern. 
path (str): A path. Returns: bool: ``True`` if the path matches the pattern. """ try: levels, recursive, re_pattern = _PATTERN_CACHE[(pattern, False)] except KeyError: levels, recursive, re_pattern = _translate_glob(pattern, case_sensitive=False) _PATTERN_CACHE[(pattern, False)] = (levels, recursive, re_pattern) return bool(re_pattern.match(path)) class Globber(object): """A generator of glob results. Arguments: fs (~fs.base.FS): A filesystem object pattern (str): A glob pattern, e.g. ``"**/*.py"`` path (str): A path to a directory in the filesystem. namespaces (list): A list of additional info namespaces. case_sensitive (bool): If ``True``, the path matching will be case *sensitive* i.e. ``"FOO.py"`` and ``"foo.py"`` will be different, otherwise path matching will be case *insensitive*. exclude_dirs (list): A list of patterns to exclude when searching, e.g. ``["*.git"]``. """ def __init__( self, fs, pattern, path="/", namespaces=None, case_sensitive=True, exclude_dirs=None, ): # type: (FS, str, str, Optional[List[str]], bool, Optional[List[str]]) -> None self.fs = fs self.pattern = pattern self.path = path self.namespaces = namespaces self.case_sensitive = case_sensitive self.exclude_dirs = exclude_dirs def __repr__(self): return make_repr( self.__class__.__name__, self.fs, self.pattern, path=(self.path, "/"), namespaces=(self.namespaces, None), case_sensitive=(self.case_sensitive, True), exclude_dirs=(self.exclude_dirs, None), ) def _make_iter(self, search="breadth", namespaces=None): # type: (str, List[str]) -> Iterator[GlobMatch] try: levels, recursive, re_pattern = _PATTERN_CACHE[ (self.pattern, self.case_sensitive) ] except KeyError: levels, recursive, re_pattern = _translate_glob( self.pattern, case_sensitive=self.case_sensitive ) for path, info in self.fs.walk.info( path=self.path, namespaces=namespaces or self.namespaces, max_depth=None if recursive else levels, search=search, exclude_dirs=self.exclude_dirs, ): if info.is_dir: path += "/" if
re_pattern.match(path): yield GlobMatch(path, info) def __iter__(self): # type: () -> Iterator[GlobMatch] """An iterator of :class:`fs.glob.GlobMatch` objects.""" return self._make_iter() def count(self): # type: () -> Counts """Count files / directories / data in matched paths. Example: >>> import fs >>> fs.open_fs('~/projects').glob('**/*.py').count() Counts(files=18519, directories=0, data=206690458) Returns: `~Counts`: A named tuple containing results. """ directories = 0 files = 0 data = 0 for _path, info in self._make_iter(namespaces=["details"]): if info.is_dir: directories += 1 else: files += 1 data += info.size return Counts(directories=directories, files=files, data=data) def count_lines(self): # type: () -> LineCounts """Count the lines in the matched files. Returns: `~LineCounts`: A named tuple containing line counts. Example: >>> import fs >>> fs.open_fs('~/projects').glob('**/*.py').count_lines() LineCounts(lines=5767102, non_blank=4915110) """ lines = 0 non_blank = 0 for path, info in self._make_iter(): if info.is_file: for line in self.fs.open(path, "rb"): lines += 1 if line.rstrip(): non_blank += 1 return LineCounts(lines=lines, non_blank=non_blank) def remove(self): # type: () -> int """Removed all matched paths. Returns: int: Number of file and directories removed. Example: >>> import fs >>> fs.open_fs('~/projects/my_project').glob('**/*.pyc').remove() 29 """ removes = 0 for path, info in self._make_iter(search="depth"): if info.is_dir: self.fs.removetree(path) else: self.fs.remove(path) removes += 1 return removes class BoundGlobber(object): """A :class:`~Globber` object bound to a filesystem. An instance of this object is available on every Filesystem object as ``.glob``. Arguments: fs (FS): A filesystem object. 
""" __slots__ = ["fs"] def __init__(self, fs): # type: (FS) -> None self.fs = fs def __repr__(self): return make_repr(self.__class__.__name__, self.fs) def __call__( self, pattern, path="/", namespaces=None, case_sensitive=True, exclude_dirs=None ): # type: (str, str, Optional[List[str]], bool, Optional[List[str]]) -> Globber """Match resources on the bound filesystem againsts a glob pattern. Arguments: pattern (str): A glob pattern, e.g. ``"**/*.py"`` namespaces (list): A list of additional info namespaces. case_sensitive (bool): If ``True``, the path matching will be case *sensitive* i.e. ``"FOO.py"`` and ``"foo.py"`` will be different, otherwise path matching will be case **insensitive**. exclude_dirs (list): A list of patterns to exclude when searching, e.g. ``["*.git"]``. Returns: `~Globber`: An object that may be queried for the glob matches. """ return Globber( self.fs, pattern, path, namespaces=namespaces, case_sensitive=case_sensitive, exclude_dirs=exclude_dirs, ) pyfilesystem2-2.4.12/fs/info.py000066400000000000000000000303371400005060600163310ustar00rootroot00000000000000"""Container for filesystem resource informations. """ from __future__ import absolute_import from __future__ import print_function from __future__ import unicode_literals import typing from typing import cast from copy import deepcopy import six from .path import join from .enums import ResourceType from .errors import MissingInfoNamespace from .permissions import Permissions from .time import epoch_to_datetime from ._typing import overload, Text if typing.TYPE_CHECKING: from datetime import datetime from typing import Any, Callable, List, Mapping, Optional, Union RawInfo = Mapping[Text, Mapping[Text, object]] ToDatetime = Callable[[int], datetime] T = typing.TypeVar("T") @six.python_2_unicode_compatible class Info(object): """Container for :ref:`info`. 
    Resource information is returned by the following methods:

         * `~fs.base.FS.getinfo`
         * `~fs.base.FS.scandir`
         * `~fs.base.FS.filterdir`

    Arguments:
        raw_info (dict): A dict containing resource info.
        to_datetime (callable): A callable that converts an epoch
            time to a datetime object. The default uses
            :func:`~fs.time.epoch_to_datetime`.

    """

    __slots__ = ["raw", "_to_datetime", "namespaces"]

    def __init__(self, raw_info, to_datetime=epoch_to_datetime):
        # type: (RawInfo, ToDatetime) -> None
        """Create a resource info object from a raw info dict.
        """
        self.raw = raw_info
        self._to_datetime = to_datetime
        self.namespaces = frozenset(self.raw.keys())

    def __str__(self):
        # type: () -> str
        if self.is_dir:
            return "<dir '{}'>".format(self.name)
        else:
            return "<file '{}'>".format(self.name)

    __repr__ = __str__

    def __eq__(self, other):
        # type: (object) -> bool
        return self.raw == getattr(other, "raw", None)

    @overload
    def _make_datetime(self, t):
        # type: (None) -> None
        pass

    @overload  # noqa: F811
    def _make_datetime(self, t):
        # type: (int) -> datetime
        pass

    def _make_datetime(self, t):  # noqa: F811
        # type: (Optional[int]) -> Optional[datetime]
        if t is not None:
            return self._to_datetime(t)
        else:
            return None

    @overload
    def get(self, namespace, key):
        # type: (Text, Text) -> Any
        pass

    @overload  # noqa: F811
    def get(self, namespace, key, default):
        # type: (Text, Text, T) -> Union[Any, T]
        pass

    def get(self, namespace, key, default=None):  # noqa: F811
        # type: (Text, Text, Optional[Any]) -> Optional[Any]
        """Get a raw info value.

        Arguments:
            namespace (str): A namespace identifier.
            key (str): A key within the namespace.
            default (object, optional): A default value to return
                if either the namespace or the key within the namespace
                is not found.

        Example:
            >>> info.get('access', 'permissions')
            ['u_r', 'u_w', '_wx']

        """
        try:
            return self.raw[namespace].get(key, default)  # type: ignore
        except KeyError:
            return default

    def _require_namespace(self, namespace):
        # type: (Text) -> None
        """Check if the given namespace is present in the info.
        Raises:
            ~fs.errors.MissingInfoNamespace: if the given namespace is
                not present in the info.

        """
        if namespace not in self.raw:
            raise MissingInfoNamespace(namespace)

    def is_writeable(self, namespace, key):
        # type: (Text, Text) -> bool
        """Check if a given key in a namespace is writable.

        Uses `~fs.base.FS.setinfo`.

        Arguments:
            namespace (str): A namespace identifier.
            key (str): A key within the namespace.

        Returns:
            bool: `True` if the key can be modified, `False` otherwise.

        """
        _writeable = self.get(namespace, "_write", ())
        return key in _writeable

    def has_namespace(self, namespace):
        # type: (Text) -> bool
        """Check if the resource info contains a given namespace.

        Arguments:
            namespace (str): A namespace identifier.

        Returns:
            bool: `True` if the namespace was found, `False` otherwise.

        """
        return namespace in self.raw

    def copy(self, to_datetime=None):
        # type: (Optional[ToDatetime]) -> Info
        """Create a copy of this resource info object.
        """
        return Info(deepcopy(self.raw), to_datetime=to_datetime or self._to_datetime)

    def make_path(self, dir_path):
        # type: (Text) -> Text
        """Make a path by joining ``dir_path`` with the resource name.

        Arguments:
            dir_path (str): A path to a directory.

        Returns:
            str: A path to the resource.

        """
        return join(dir_path, self.name)

    @property
    def name(self):
        # type: () -> Text
        """`str`: the resource name.
        """
        return cast(Text, self.get("basic", "name"))

    @property
    def suffix(self):
        # type: () -> Text
        """`str`: the last component of the name (including dot), or an
        empty string if there is no suffix.

        Example:
            >>> info.suffix
            '.py'

        """
        name = self.get("basic", "name")
        if name.startswith(".") and name.count(".") == 1:
            return ""
        basename, dot, ext = name.rpartition(".")
        return "." + ext if dot else ""

    @property
    def suffixes(self):
        # type: () -> List[Text]
        """`List`: a list of any suffixes in the name.
        Example:
            >>> info.suffixes
            ['.tar', '.gz']

        """
        name = self.get("basic", "name")
        if name.startswith(".") and name.count(".") == 1:
            return []
        return ["." + suffix for suffix in name.split(".")[1:]]

    @property
    def stem(self):
        # type: () -> Text
        """`str`: the name minus any suffixes.

        Example:
            >>> info.stem
            'foo'

        """
        name = self.get("basic", "name")
        if name.startswith("."):
            return name
        return name.split(".")[0]

    @property
    def is_dir(self):
        # type: () -> bool
        """`bool`: `True` if the resource references a directory.
        """
        return cast(bool, self.get("basic", "is_dir"))

    @property
    def is_file(self):
        # type: () -> bool
        """`bool`: `True` if the resource references a file.
        """
        return not cast(bool, self.get("basic", "is_dir"))

    @property
    def is_link(self):
        # type: () -> bool
        """`bool`: `True` if the resource is a symlink.
        """
        self._require_namespace("link")
        return self.get("link", "target", None) is not None

    @property
    def type(self):
        # type: () -> ResourceType
        """`~fs.enums.ResourceType`: the type of the resource.

        Requires the ``"details"`` namespace.

        Raises:
            ~fs.errors.MissingInfoNamespace: if the 'details'
                namespace is not in the Info.

        """
        self._require_namespace("details")
        return ResourceType(self.get("details", "type", 0))

    @property
    def accessed(self):
        # type: () -> Optional[datetime]
        """`~datetime.datetime`: the resource last access time, or `None`.

        Requires the ``"details"`` namespace.

        Raises:
            ~fs.errors.MissingInfoNamespace: if the ``"details"``
                namespace is not in the Info.

        """
        self._require_namespace("details")
        _time = self._make_datetime(self.get("details", "accessed"))
        return _time

    @property
    def modified(self):
        # type: () -> Optional[datetime]
        """`~datetime.datetime`: the resource last modification time, or `None`.

        Requires the ``"details"`` namespace.

        Raises:
            ~fs.errors.MissingInfoNamespace: if the ``"details"``
                namespace is not in the Info.
""" self._require_namespace("details") _time = self._make_datetime(self.get("details", "modified")) return _time @property def created(self): # type: () -> Optional[datetime] """`~datetime.datetime`: the resource creation time, or `None`. Requires the ``"details"`` namespace. Raises: ~fs.errors.MissingInfoNamespace: if the ``"details"`` namespace is not in the Info. """ self._require_namespace("details") _time = self._make_datetime(self.get("details", "created")) return _time @property def metadata_changed(self): # type: () -> Optional[datetime] """`~datetime.datetime`: the resource metadata change time, or `None`. Requires the ``"details"`` namespace. Raises: ~fs.errors.MissingInfoNamespace: if the ``"details"`` namespace is not in the Info. """ self._require_namespace("details") _time = self._make_datetime(self.get("details", "metadata_changed")) return _time @property def permissions(self): # type: () -> Optional[Permissions] """`Permissions`: the permissions of the resource, or `None`. Requires the ``"access"`` namespace. Raises: ~fs.errors.MissingInfoNamespace: if the ``"access"`` namespace is not in the Info. """ self._require_namespace("access") _perm_names = self.get("access", "permissions") if _perm_names is None: return None permissions = Permissions(_perm_names) return permissions @property def size(self): # type: () -> int """`int`: the size of the resource, in bytes. Requires the ``"details"`` namespace. Raises: ~fs.errors.MissingInfoNamespace: if the ``"details"`` namespace is not in the Info. """ self._require_namespace("details") return cast(int, self.get("details", "size")) @property def user(self): # type: () -> Optional[Text] """`str`: the owner of the resource, or `None`. Requires the ``"access"`` namespace. Raises: ~fs.errors.MissingInfoNamespace: if the ``"access"`` namespace is not in the Info. 
""" self._require_namespace("access") return self.get("access", "user") @property def uid(self): # type: () -> Optional[int] """`int`: the user id of the resource, or `None`. Requires the ``"access"`` namespace. Raises: ~fs.errors.MissingInfoNamespace: if the ``"access"`` namespace is not in the Info. """ self._require_namespace("access") return self.get("access", "uid") @property def group(self): # type: () -> Optional[Text] """`str`: the group of the resource owner, or `None`. Requires the ``"access"`` namespace. Raises: ~fs.errors.MissingInfoNamespace: if the ``"access"`` namespace is not in the Info. """ self._require_namespace("access") return self.get("access", "group") @property def gid(self): # type: () -> Optional[int] """`int`: the group id of the resource, or `None`. Requires the ``"access"`` namespace. Raises: ~fs.errors.MissingInfoNamespace: if the ``"access"`` namespace is not in the Info. """ self._require_namespace("access") return self.get("access", "gid") @property def target(self): # noqa: D402 # type: () -> Optional[Text] """`str`: the link target (if resource is a symlink), or `None`. Requires the ``"link"`` namespace. Raises: ~fs.errors.MissingInfoNamespace: if the ``"link"`` namespace is not in the Info. """ self._require_namespace("link") return self.get("link", "target") pyfilesystem2-2.4.12/fs/iotools.py000066400000000000000000000140701400005060600170620ustar00rootroot00000000000000"""Compatibility tools between Python 2 and Python 3 I/O interfaces. """ from __future__ import print_function from __future__ import unicode_literals import io import typing from io import SEEK_SET, SEEK_CUR from .mode import Mode if typing.TYPE_CHECKING: from io import RawIOBase from typing import ( Any, Iterable, Iterator, IO, List, Optional, Text, Union, ) class RawWrapper(io.RawIOBase): """Convert a Python 2 style file-like object in to a IO object. 
""" def __init__(self, f, mode=None, name=None): # type: (IO[bytes], Optional[Text], Optional[Text]) -> None self._f = f self.mode = mode or getattr(f, "mode", None) self.name = name super(RawWrapper, self).__init__() def close(self): # type: () -> None if not self.closed: # Close self first since it will # flush itself, so we can't close # self._f before that super(RawWrapper, self).close() self._f.close() def fileno(self): # type: () -> int return self._f.fileno() def flush(self): # type: () -> None return self._f.flush() def isatty(self): # type: () -> bool return self._f.isatty() def seek(self, offset, whence=SEEK_SET): # type: (int, int) -> int return self._f.seek(offset, whence) def readable(self): # type: () -> bool return getattr(self._f, "readable", lambda: Mode(self.mode).reading)() def writable(self): # type: () -> bool return getattr(self._f, "writable", lambda: Mode(self.mode).writing)() def seekable(self): # type: () -> bool try: return self._f.seekable() except AttributeError: try: self.seek(0, SEEK_CUR) except IOError: return False else: return True def tell(self): # type: () -> int return self._f.tell() def truncate(self, size=None): # type: (Optional[int]) -> int return self._f.truncate(size) def write(self, data): # type: (bytes) -> int count = self._f.write(data) return len(data) if count is None else count @typing.no_type_check def read(self, n=-1): # type: (int) -> bytes if n == -1: return self.readall() return self._f.read(n) def read1(self, n=-1): # type: (int) -> bytes return getattr(self._f, "read1", self.read)(n) @typing.no_type_check def readall(self): # type: () -> bytes return self._f.read() @typing.no_type_check def readinto(self, b): # type: (bytearray) -> int try: return self._f.readinto(b) except AttributeError: data = self._f.read(len(b)) bytes_read = len(data) b[:bytes_read] = data return bytes_read @typing.no_type_check def readinto1(self, b): # type: (bytearray) -> int try: return self._f.readinto1(b) except AttributeError: 
            data = self._f.read1(len(b))
            bytes_read = len(data)
            b[:bytes_read] = data
            return bytes_read

    def readline(self, limit=-1):
        # type: (int) -> bytes
        return self._f.readline(limit)

    def readlines(self, hint=-1):
        # type: (int) -> List[bytes]
        return self._f.readlines(hint)

    def writelines(self, sequence):
        # type: (Iterable[Union[bytes, bytearray]]) -> None
        return self._f.writelines(sequence)

    def __iter__(self):
        # type: () -> Iterator[bytes]
        return iter(self._f)


@typing.no_type_check
def make_stream(
    name,  # type: Text
    bin_file,  # type: RawIOBase
    mode="r",  # type: Text
    buffering=-1,  # type: int
    encoding=None,  # type: Optional[Text]
    errors=None,  # type: Optional[Text]
    newline="",  # type: Optional[Text]
    line_buffering=False,  # type: bool
    **kwargs  # type: Any
):
    # type: (...) -> IO
    """Take a Python 2.x binary file and return an IO Stream.
    """
    reading = "r" in mode
    writing = "w" in mode
    appending = "a" in mode
    binary = "b" in mode
    if "+" in mode:
        reading = True
        writing = True

    encoding = None if binary else (encoding or "utf-8")

    io_object = RawWrapper(bin_file, mode=mode, name=name)  # type: io.IOBase
    if buffering >= 0:
        if reading and writing:
            io_object = io.BufferedRandom(
                typing.cast(io.RawIOBase, io_object),
                buffering or io.DEFAULT_BUFFER_SIZE,
            )
        elif reading:
            io_object = io.BufferedReader(
                typing.cast(io.RawIOBase, io_object),
                buffering or io.DEFAULT_BUFFER_SIZE,
            )
        elif writing or appending:
            io_object = io.BufferedWriter(
                typing.cast(io.RawIOBase, io_object),
                buffering or io.DEFAULT_BUFFER_SIZE,
            )

    if not binary:
        io_object = io.TextIOWrapper(
            io_object,
            encoding=encoding,
            errors=errors,
            newline=newline,
            line_buffering=line_buffering,
        )

    return io_object


def line_iterator(readable_file, size=None):
    # type: (IO[bytes], Optional[int]) -> Iterator[bytes]
    """Iterate over the lines of a file.

    Implementation reads each char individually, which is not very
    efficient.

    Yields:
        str: a single line in the file.
""" read = readable_file.read line = [] byte = b"1" if size is None or size < 0: while byte: byte = read(1) line.append(byte) if byte in b"\n": yield b"".join(line) del line[:] else: while byte and size: byte = read(1) size -= len(byte) line.append(byte) if byte in b"\n" or not size: yield b"".join(line) del line[:] pyfilesystem2-2.4.12/fs/lrucache.py000066400000000000000000000024061400005060600171600ustar00rootroot00000000000000"""Least Recently Used cache mapping. """ from __future__ import absolute_import from __future__ import unicode_literals import typing from collections import OrderedDict _K = typing.TypeVar("_K") _V = typing.TypeVar("_V") class LRUCache(OrderedDict, typing.Generic[_K, _V]): """A dictionary-like container that stores a given maximum items. If an additional item is added when the LRUCache is full, the least recently used key is discarded to make room for the new item. """ def __init__(self, cache_size): # type: (int) -> None self.cache_size = cache_size super(LRUCache, self).__init__() def __setitem__(self, key, value): # type: (_K, _V) -> None """Store a new views, potentially discarding an old value. """ if key not in self: if len(self) >= self.cache_size: self.popitem(last=False) OrderedDict.__setitem__(self, key, value) def __getitem__(self, key): # type: (_K) -> _V """Get the item, but also makes it most recent. """ _super = typing.cast(OrderedDict, super(LRUCache, self)) value = _super.__getitem__(key) _super.__delitem__(key) _super.__setitem__(key, value) return value pyfilesystem2-2.4.12/fs/memoryfs.py000066400000000000000000000414111400005060600172320ustar00rootroot00000000000000"""Manage a volatile in-memory filesystem. """ from __future__ import absolute_import from __future__ import unicode_literals import contextlib import io import os import time import typing from collections import OrderedDict from threading import RLock import six from . 
import errors from .base import FS from .enums import ResourceType, Seek from .info import Info from .mode import Mode from .path import iteratepath from .path import normpath from .path import split from ._typing import overload if typing.TYPE_CHECKING: from typing import ( Any, BinaryIO, Collection, Dict, Iterator, List, Optional, SupportsInt, Union, Text, ) from .base import _OpendirFactory from .info import RawInfo from .permissions import Permissions from .subfs import SubFS _M = typing.TypeVar("_M", bound="MemoryFS") @six.python_2_unicode_compatible class _MemoryFile(io.RawIOBase): def __init__(self, path, memory_fs, mode, dir_entry): # type: (Text, MemoryFS, Text, _DirEntry) -> None super(_MemoryFile, self).__init__() self._path = path self._memory_fs = memory_fs self._mode = Mode(mode) self._dir_entry = dir_entry # We are opening a file - dir_entry.bytes_file is not None self._bytes_io = typing.cast(io.BytesIO, dir_entry.bytes_file) self.accessed_time = time.time() self.modified_time = time.time() self.pos = 0 if self._mode.truncate: with self._dir_entry.lock: self._bytes_io.seek(0) self._bytes_io.truncate() elif self._mode.appending: with self._dir_entry.lock: self._bytes_io.seek(0, os.SEEK_END) self.pos = self._bytes_io.tell() def __str__(self): # type: () -> str _template = "" return _template.format(path=self._path, mode=self._mode) @property def mode(self): # type: () -> Text return self._mode.to_platform_bin() @contextlib.contextmanager def _seek_lock(self): # type: () -> Iterator[None] with self._dir_entry.lock: self._bytes_io.seek(self.pos) yield self.pos = self._bytes_io.tell() def on_modify(self): # noqa: D401 # type: () -> None """Called when file data is modified. """ self._dir_entry.modified_time = self.modified_time = time.time() def on_access(self): # noqa: D401 # type: () -> None """Called when file is accessed. 
""" self._dir_entry.accessed_time = self.accessed_time = time.time() def flush(self): # type: () -> None pass def __iter__(self): # type: () -> typing.Iterator[bytes] self._bytes_io.seek(self.pos) for line in self._bytes_io: yield line def next(self): # type: () -> bytes with self._seek_lock(): self.on_access() return next(self._bytes_io) __next__ = next def readline(self, size=-1): # type: (int) -> bytes if not self._mode.reading: raise IOError("File not open for reading") with self._seek_lock(): self.on_access() return self._bytes_io.readline(size) def close(self): # type: () -> None if not self.closed: with self._dir_entry.lock: self._dir_entry.remove_open_file(self) super(_MemoryFile, self).close() def read(self, size=-1): # type: (Optional[int]) -> bytes if not self._mode.reading: raise IOError("File not open for reading") with self._seek_lock(): self.on_access() return self._bytes_io.read(size) def readable(self): # type: () -> bool return self._mode.reading def readinto(self, buffer): # type (bytearray) -> Optional[int] if not self._mode.reading: raise IOError("File not open for reading") with self._seek_lock(): self.on_access() return self._bytes_io.readinto(buffer) def readlines(self, hint=-1): # type: (int) -> List[bytes] if not self._mode.reading: raise IOError("File not open for reading") with self._seek_lock(): self.on_access() return self._bytes_io.readlines(hint) def seekable(self): # type: () -> bool return True def seek(self, pos, whence=Seek.set): # type: (int, SupportsInt) -> int # NOTE(@althonos): allows passing both Seek.set and os.SEEK_SET with self._seek_lock(): self.on_access() return self._bytes_io.seek(pos, int(whence)) def tell(self): # type: () -> int return self.pos def truncate(self, size=None): # type: (Optional[int]) -> int with self._seek_lock(): self.on_modify() new_size = self._bytes_io.truncate(size) if size is not None and self._bytes_io.tell() < size: file_size = self._bytes_io.seek(0, os.SEEK_END) self._bytes_io.write(b"\0" * 
(size - file_size)) self._bytes_io.seek(-size + file_size, os.SEEK_END) return size or new_size def writable(self): # type: () -> bool return self._mode.writing def write(self, data): # type: (bytes) -> int if not self._mode.writing: raise IOError("File not open for writing") with self._seek_lock(): self.on_modify() return self._bytes_io.write(data) def writelines(self, sequence): # type: ignore # type: (List[bytes]) -> None # FIXME(@althonos): For some reason the stub for IOBase.writelines # is List[Any] ?! It should probably be Iterable[ByteString] with self._seek_lock(): self.on_modify() self._bytes_io.writelines(sequence) class _DirEntry(object): def __init__(self, resource_type, name): # type: (ResourceType, Text) -> None self.resource_type = resource_type self.name = name self._dir = OrderedDict() # type: typing.MutableMapping[Text, _DirEntry] self._open_files = [] # type: typing.MutableSequence[_MemoryFile] self._bytes_file = None # type: Optional[io.BytesIO] self.lock = RLock() current_time = time.time() self.created_time = current_time self.accessed_time = current_time self.modified_time = current_time if not self.is_dir: self._bytes_file = io.BytesIO() @property def bytes_file(self): # type: () -> Optional[io.BytesIO] return self._bytes_file @property def is_dir(self): # type: () -> bool return self.resource_type == ResourceType.directory @property def size(self): # type: () -> int with self.lock: if self.is_dir: return 0 else: _bytes_file = typing.cast(io.BytesIO, self._bytes_file) _bytes_file.seek(0, os.SEEK_END) return _bytes_file.tell() @overload # noqa: F811 def get_entry(self, name, default): # type: (Text, _DirEntry) -> _DirEntry pass @overload # noqa: F811 def get_entry(self, name): # type: (Text) -> Optional[_DirEntry] pass @overload # noqa: F811 def get_entry(self, name, default): # type: (Text, None) -> Optional[_DirEntry] pass def get_entry(self, name, default=None): # noqa: F811 # type: (Text, Optional[_DirEntry]) -> Optional[_DirEntry] 
        assert self.is_dir, "must be a directory"
        return self._dir.get(name, default)

    def set_entry(self, name, dir_entry):
        # type: (Text, _DirEntry) -> None
        self._dir[name] = dir_entry

    def remove_entry(self, name):
        # type: (Text) -> None
        del self._dir[name]

    def __contains__(self, name):
        # type: (object) -> bool
        return name in self._dir

    def __len__(self):
        # type: () -> int
        return len(self._dir)

    def list(self):
        # type: () -> List[Text]
        return list(self._dir.keys())

    def add_open_file(self, memory_file):
        # type: (_MemoryFile) -> None
        self._open_files.append(memory_file)

    def remove_open_file(self, memory_file):
        # type: (_MemoryFile) -> None
        self._open_files.remove(memory_file)


@six.python_2_unicode_compatible
class MemoryFS(FS):
    """A filesystem that is stored in memory.

    Memory filesystems are useful for caches, temporary data stores,
    unit testing, etc. Since all the data is in memory, they are very
    fast, but non-permanent. The `MemoryFS` constructor takes no
    arguments.

    Example:
        >>> mem_fs = MemoryFS()

    Or via an FS URL:

        >>> import fs
        >>> mem_fs = fs.open_fs('mem://')

    """

    _meta = {
        "case_insensitive": False,
        "invalid_path_chars": "\0",
        "network": False,
        "read_only": False,
        "thread_safe": True,
        "unicode_paths": True,
        "virtual": False,
    }  # type: Dict[Text, Union[Text, int, bool, None]]

    def __init__(self):
        # type: () -> None
        """Create an in-memory filesystem.
        """
        self._meta = self._meta.copy()
        self.root = self._make_dir_entry(ResourceType.directory, "")
        super(MemoryFS, self).__init__()

    def __repr__(self):
        # type: () -> str
        return "MemoryFS()"

    def __str__(self):
        # type: () -> str
        return "<memfs>"

    def _make_dir_entry(self, resource_type, name):
        # type: (ResourceType, Text) -> _DirEntry
        return _DirEntry(resource_type, name)

    def _get_dir_entry(self, dir_path):
        # type: (Text) -> Optional[_DirEntry]
        """Get a directory entry, or `None` if one doesn't exist.
""" with self._lock: dir_path = normpath(dir_path) current_entry = self.root # type: Optional[_DirEntry] for path_component in iteratepath(dir_path): if current_entry is None: return None if not current_entry.is_dir: return None current_entry = current_entry.get_entry(path_component) return current_entry def close(self): # type: () -> None if not self._closed: del self.root return super(MemoryFS, self).close() def getinfo(self, path, namespaces=None): # type: (Text, Optional[Collection[Text]]) -> Info namespaces = namespaces or () _path = self.validatepath(path) dir_entry = self._get_dir_entry(_path) if dir_entry is None: raise errors.ResourceNotFound(path) info = {"basic": {"name": dir_entry.name, "is_dir": dir_entry.is_dir}} if "details" in namespaces: info["details"] = { "_write": ["accessed", "modified"], "type": int(dir_entry.resource_type), "size": dir_entry.size, "accessed": dir_entry.accessed_time, "modified": dir_entry.modified_time, "created": dir_entry.created_time, } return Info(info) def listdir(self, path): # type: (Text) -> List[Text] self.check() _path = self.validatepath(path) with self._lock: dir_entry = self._get_dir_entry(_path) if dir_entry is None: raise errors.ResourceNotFound(path) if not dir_entry.is_dir: raise errors.DirectoryExpected(path) return dir_entry.list() if typing.TYPE_CHECKING: def opendir(self, path, factory=None): # type: (_M, Text, Optional[_OpendirFactory]) -> SubFS[_M] pass def makedir( self, # type: _M path, # type: Text permissions=None, # type: Optional[Permissions] recreate=False, # type: bool ): # type: (...) 
-> SubFS[_M] _path = self.validatepath(path) with self._lock: if _path == "/": if recreate: return self.opendir(path) else: raise errors.DirectoryExists(path) dir_path, dir_name = split(_path) parent_dir = self._get_dir_entry(dir_path) if parent_dir is None: raise errors.ResourceNotFound(path) dir_entry = parent_dir.get_entry(dir_name) if dir_entry is not None and not recreate: raise errors.DirectoryExists(path) if dir_entry is None: new_dir = self._make_dir_entry(ResourceType.directory, dir_name) parent_dir.set_entry(dir_name, new_dir) return self.opendir(path) def openbin(self, path, mode="r", buffering=-1, **options): # type: (Text, Text, int, **Any) -> BinaryIO _mode = Mode(mode) _mode.validate_bin() _path = self.validatepath(path) dir_path, file_name = split(_path) if not file_name: raise errors.FileExpected(path) with self._lock: parent_dir_entry = self._get_dir_entry(dir_path) if parent_dir_entry is None or not parent_dir_entry.is_dir: raise errors.ResourceNotFound(path) if _mode.create: if file_name not in parent_dir_entry: file_dir_entry = self._make_dir_entry(ResourceType.file, file_name) parent_dir_entry.set_entry(file_name, file_dir_entry) else: file_dir_entry = self._get_dir_entry(_path) # type: ignore if _mode.exclusive: raise errors.FileExists(path) if file_dir_entry.is_dir: raise errors.FileExpected(path) mem_file = _MemoryFile( path=_path, memory_fs=self, mode=mode, dir_entry=file_dir_entry ) file_dir_entry.add_open_file(mem_file) return mem_file # type: ignore if file_name not in parent_dir_entry: raise errors.ResourceNotFound(path) file_dir_entry = parent_dir_entry.get_entry(file_name) # type: ignore if file_dir_entry.is_dir: raise errors.FileExpected(path) mem_file = _MemoryFile( path=_path, memory_fs=self, mode=mode, dir_entry=file_dir_entry ) file_dir_entry.add_open_file(mem_file) return mem_file # type: ignore def remove(self, path): # type: (Text) -> None _path = self.validatepath(path) with self._lock: dir_path, file_name = split(_path) 
            parent_dir_entry = self._get_dir_entry(dir_path)
            if parent_dir_entry is None or file_name not in parent_dir_entry:
                raise errors.ResourceNotFound(path)
            file_dir_entry = typing.cast(_DirEntry, self._get_dir_entry(_path))
            if file_dir_entry.is_dir:
                raise errors.FileExpected(path)
            parent_dir_entry.remove_entry(file_name)

    def removedir(self, path):
        # type: (Text) -> None
        _path = self.validatepath(path)
        if _path == "/":
            raise errors.RemoveRootError()
        with self._lock:
            dir_path, file_name = split(_path)
            parent_dir_entry = self._get_dir_entry(dir_path)
            if parent_dir_entry is None or file_name not in parent_dir_entry:
                raise errors.ResourceNotFound(path)
            dir_dir_entry = typing.cast(_DirEntry, self._get_dir_entry(_path))
            if not dir_dir_entry.is_dir:
                raise errors.DirectoryExpected(path)
            if len(dir_dir_entry):
                raise errors.DirectoryNotEmpty(path)
            parent_dir_entry.remove_entry(file_name)

    def setinfo(self, path, info):
        # type: (Text, RawInfo) -> None
        _path = self.validatepath(path)
        with self._lock:
            dir_path, file_name = split(_path)
            parent_dir_entry = self._get_dir_entry(dir_path)
            if parent_dir_entry is None or file_name not in parent_dir_entry:
                raise errors.ResourceNotFound(path)
            resource_entry = typing.cast(
                _DirEntry, parent_dir_entry.get_entry(file_name)
            )
            if "details" in info:
                details = info["details"]
                if "accessed" in details:
                    resource_entry.accessed_time = details["accessed"]  # type: ignore
                if "modified" in details:
                    resource_entry.modified_time = details["modified"]  # type: ignore


# ==== pyfilesystem2-2.4.12/fs/mirror.py ====

"""Function for *mirroring* a filesystem.

Mirroring will create a copy of a source filesystem on a destination
filesystem. If there are no files on the destination, then mirroring
is simply a straight copy. If there are any files or directories on the
destination they may be deleted or modified to match the source.
In order to avoid redundant copying of files, `mirror` can compare
timestamps, and only copy files with a newer modified date. This
timestamp comparison is only done if the file sizes are different.

This scheme will work if you have mirrored a directory previously, and
you would like to copy any changes. Otherwise you should set the
``copy_if_newer`` parameter to `False` to guarantee an exact copy, at
the expense of potentially copying extra files.

"""

from __future__ import print_function
from __future__ import unicode_literals

import typing

from ._bulk import Copier
from .copy import copy_file_internal
from .errors import ResourceNotFound
from .opener import manage_fs
from .tools import is_thread_safe
from .walk import Walker

if typing.TYPE_CHECKING:
    from typing import Callable, Optional, Text, Union
    from .base import FS
    from .info import Info


def _compare(info1, info2):
    # type: (Info, Info) -> bool
    """Compare two `Info` objects to see if they should be copied.

    Returns:
        bool: `True` if the `Info` are different in size or mtime.

    """
    # Check filesize has changed
    if info1.size != info2.size:
        return True
    # Check modified dates
    date1 = info1.modified
    date2 = info2.modified
    return date1 is None or date2 is None or date1 > date2


def mirror(
    src_fs,  # type: Union[FS, Text]
    dst_fs,  # type: Union[FS, Text]
    walker=None,  # type: Optional[Walker]
    copy_if_newer=True,  # type: bool
    workers=0,  # type: int
):
    # type: (...) -> None
    """Mirror files / directories from one filesystem to another.

    Mirroring a filesystem will create an exact copy of ``src_fs`` on
    ``dst_fs``, by removing any files / directories on the destination
    that aren't on the source, and copying files that aren't.

    Arguments:
        src_fs (FS or str): Source filesystem (URL or instance).
        dst_fs (FS or str): Destination filesystem (URL or instance).
        walker (~fs.walk.Walker, optional): An optional walker instance.
        copy_if_newer (bool): Only copy newer files (the default).
        workers (int): Number of worker threads used
            (0 for single threaded). Set to a relatively low number
            for network filesystems, 4 would be a good start.

    """

    def src():
        return manage_fs(src_fs, writeable=False)

    def dst():
        return manage_fs(dst_fs, create=True)

    with src() as _src_fs, dst() as _dst_fs:
        with _src_fs.lock(), _dst_fs.lock():
            _thread_safe = is_thread_safe(_src_fs, _dst_fs)
            with Copier(num_workers=workers if _thread_safe else 0) as copier:
                _mirror(
                    _src_fs,
                    _dst_fs,
                    walker=walker,
                    copy_if_newer=copy_if_newer,
                    copy_file=copier.copy,
                )


def _mirror(
    src_fs, dst_fs, walker=None, copy_if_newer=True, copy_file=copy_file_internal
):
    # type: (FS, FS, Optional[Walker], bool, Callable[[FS, str, FS, str], None]) -> None
    walker = walker or Walker()
    walk = walker.walk(src_fs, namespaces=["details"])
    for path, dirs, files in walk:
        try:
            dst = {
                info.name: info
                for info in dst_fs.scandir(path, namespaces=["details"])
            }
        except ResourceNotFound:
            dst_fs.makedir(path)
            dst = {}

        # Copy files
        for _file in files:
            _path = _file.make_path(path)
            dst_file = dst.pop(_file.name, None)
            if dst_file is not None:
                if dst_file.is_dir:
                    # Destination is a directory, remove it
                    dst_fs.removetree(_path)
                else:
                    # Compare file info
                    if copy_if_newer and not _compare(_file, dst_file):
                        continue
            copy_file(src_fs, _path, dst_fs, _path)

        # Make directories
        for _dir in dirs:
            _path = _dir.make_path(path)
            dst_dir = dst.pop(_dir.name, None)
            if dst_dir is not None:
                # Directory name exists on dst
                if not dst_dir.is_dir:
                    # Not a directory, so remove it
                    dst_fs.remove(_path)
            else:
                # Make the directory in dst
                dst_fs.makedir(_path, recreate=True)

        # Remove any remaining resources
        while dst:
            _, info = dst.popitem()
            _path = info.make_path(path)
            if info.is_dir:
                dst_fs.removetree(_path)
            else:
                dst_fs.remove(_path)


# ==== pyfilesystem2-2.4.12/fs/mode.py ====

"""Abstract I/O mode container.

Mode strings are used in `~fs.base.FS.open` and `~fs.base.FS.openbin`.
""" from __future__ import print_function from __future__ import unicode_literals import typing import six from ._typing import Text if typing.TYPE_CHECKING: from typing import FrozenSet, Set, Union __all__ = ["Mode", "check_readable", "check_writable", "validate_openbin_mode"] # https://docs.python.org/3/library/functions.html#open @six.python_2_unicode_compatible class Mode(typing.Container[Text]): """An abstraction for I/O modes. A mode object provides properties that can be used to interrogate the `mode strings `_ used when opening files. Arguments: mode (str): A *mode* string, as used by `io.open`. Raises: ValueError: If the mode string is invalid. Example: >>> mode = Mode('rb') >>> mode.reading True >>> mode.writing False >>> mode.binary True >>> mode.text False """ def __init__(self, mode): # type: (Text) -> None self._mode = mode self.validate() def __repr__(self): # type: () -> Text return "Mode({!r})".format(self._mode) def __str__(self): # type: () -> Text return self._mode def __contains__(self, character): # type: (object) -> bool """Check if a mode contains a given character. """ assert isinstance(character, Text) return character in self._mode def to_platform(self): # type: () -> Text """Get a mode string for the current platform. Currently, this just removes the 'x' on PY2 because PY2 doesn't support exclusive mode. """ return self._mode.replace("x", "w") if six.PY2 else self._mode def to_platform_bin(self): # type: () -> Text """Get a *binary* mode string for the current platform. This removes the 't' and adds a 'b' if needed. """ _mode = self.to_platform().replace("t", "") return _mode if "b" in _mode else _mode + "b" def validate(self, _valid_chars=frozenset("rwxtab+")): # type: (Union[Set[Text], FrozenSet[Text]]) -> None """Validate the mode string. Raises: ValueError: if the mode contains invalid chars. 
""" mode = self._mode if not mode: raise ValueError("mode must not be empty") if not _valid_chars.issuperset(mode): raise ValueError("mode '{}' contains invalid characters".format(mode)) if mode[0] not in "rwxa": raise ValueError("mode must start with 'r', 'w', 'x', or 'a'") if "t" in mode and "b" in mode: raise ValueError("mode can't be binary ('b') and text ('t')") def validate_bin(self): # type: () -> None """Validate a mode for opening a binary file. Raises: ValueError: if the mode contains invalid chars. """ self.validate() if "t" in self: raise ValueError("mode must be binary") @property def create(self): # type: () -> bool """`bool`: `True` if the mode would create a file. """ return "a" in self or "w" in self or "x" in self @property def reading(self): # type: () -> bool """`bool`: `True` if the mode permits reading. """ return "r" in self or "+" in self @property def writing(self): # type: () -> bool """`bool`: `True` if the mode permits writing. """ return "w" in self or "a" in self or "+" in self or "x" in self @property def appending(self): # type: () -> bool """`bool`: `True` if the mode permits appending. """ return "a" in self @property def updating(self): # type: () -> bool """`bool`: `True` if the mode permits both reading and writing. """ return "+" in self @property def truncate(self): # type: () -> bool """`bool`: `True` if the mode would truncate an existing file. """ return "w" in self or "x" in self @property def exclusive(self): # type: () -> bool """`bool`: `True` if the mode require exclusive creation. """ return "x" in self @property def binary(self): # type: () -> bool """`bool`: `True` if a mode specifies binary. """ return "b" in self @property def text(self): # type: () -> bool """`bool`: `True` if a mode specifies text. """ return "t" in self or "b" not in self def check_readable(mode): # type: (Text) -> bool """Check a mode string allows reading. Arguments: mode (str): A mode string, e.g. 
``"rt"`` Returns: bool: `True` if the mode allows reading. """ return Mode(mode).reading def check_writable(mode): # type: (Text) -> bool """Check a mode string allows writing. Arguments: mode (str): A mode string, e.g. ``"wt"`` Returns: bool: `True` if the mode allows writing. """ return Mode(mode).writing def validate_open_mode(mode): # type: (Text) -> None """Check ``mode`` parameter of `~fs.base.FS.open` is valid. Arguments: mode (str): Mode parameter. Raises: `ValueError` if mode is not valid. """ Mode(mode) def validate_openbin_mode(mode, _valid_chars=frozenset("rwxab+")): # type: (Text, Union[Set[Text], FrozenSet[Text]]) -> None """Check ``mode`` parameter of `~fs.base.FS.openbin` is valid. Arguments: mode (str): Mode parameter. Raises: `ValueError` if mode is not valid. """ if "t" in mode: raise ValueError("text mode not valid in openbin") if not mode: raise ValueError("mode must not be empty") if mode[0] not in "rwxa": raise ValueError("mode must start with 'r', 'w', 'a' or 'x'") if not _valid_chars.issuperset(mode): raise ValueError("mode '{}' contains invalid characters".format(mode)) pyfilesystem2-2.4.12/fs/mountfs.py000066400000000000000000000224701400005060600170700ustar00rootroot00000000000000"""Manage other filesystems as a folder hierarchy. """ from __future__ import absolute_import from __future__ import print_function from __future__ import unicode_literals import typing from six import text_type from . 
import errors from .base import FS from .memoryfs import MemoryFS from .path import abspath from .path import forcedir from .path import normpath from .mode import validate_open_mode from .mode import validate_openbin_mode if typing.TYPE_CHECKING: from typing import ( Any, BinaryIO, Collection, Iterator, IO, List, MutableSequence, Optional, Text, Tuple, Union, ) from .enums import ResourceType from .info import Info, RawInfo from .permissions import Permissions from .subfs import SubFS M = typing.TypeVar("M", bound="MountFS") class MountError(Exception): """Thrown when mounts conflict. """ class MountFS(FS): """A virtual filesystem that maps directories on to other file-systems. Arguments: auto_close (bool): If `True` (the default), the child filesystems will be closed when `MountFS` is closed. """ _meta = { "virtual": True, "read_only": False, "unicode_paths": True, "case_insensitive": False, "invalid_path_chars": "\0", } def __init__(self, auto_close=True): # type: (bool) -> None super(MountFS, self).__init__() self.auto_close = auto_close self.default_fs = MemoryFS() # type: FS self.mounts = [] # type: MutableSequence[Tuple[Text, FS]] def __repr__(self): # type: () -> str return "MountFS(auto_close={!r})".format(self.auto_close) def __str__(self): # type: () -> str return "" def _delegate(self, path): # type: (Text) -> Tuple[FS, Text] """Get the delegate FS for a given path. Arguments: path (str): A path. Returns: (FS, str): a tuple of ``(, )`` for a mounted filesystem, or ``(None, None)`` if no filesystem is mounted on the given ``path``. """ _path = forcedir(abspath(normpath(path))) is_mounted = _path.startswith for mount_path, fs in self.mounts: if is_mounted(mount_path): return fs, _path[len(mount_path) :].rstrip("/") return self.default_fs, path def mount(self, path, fs): # type: (Text, Union[FS, Text]) -> None """Mounts a host FS object on a given path. Arguments: path (str): A path within the MountFS. 
fs (FS or str): A filesystem (instance or URL) to mount. """ if isinstance(fs, text_type): from .opener import open_fs fs = open_fs(fs) if not isinstance(fs, FS): raise TypeError("fs argument must be an FS object or a FS URL") if fs is self: raise ValueError("Unable to mount self") _path = forcedir(abspath(normpath(path))) for mount_path, _ in self.mounts: if _path.startswith(mount_path): raise MountError("mount point overlaps existing mount") self.mounts.append((_path, fs)) self.default_fs.makedirs(_path, recreate=True) def close(self): # type: () -> None # Explicitly closes children if requested if self.auto_close: for _path, fs in self.mounts: fs.close() del self.mounts[:] self.default_fs.close() super(MountFS, self).close() def desc(self, path): # type: (Text) -> Text if not self.exists(path): raise errors.ResourceNotFound(path) fs, delegate_path = self._delegate(path) if fs is self.default_fs: fs = self return "{path} on {fs}".format(fs=fs, path=delegate_path) def getinfo(self, path, namespaces=None): # type: (Text, Optional[Collection[Text]]) -> Info self.check() fs, _path = self._delegate(path) return fs.getinfo(_path, namespaces=namespaces) def listdir(self, path): # type: (Text) -> List[Text] self.check() fs, _path = self._delegate(path) return fs.listdir(_path) def makedir(self, path, permissions=None, recreate=False): # type: (Text, Optional[Permissions], bool) -> SubFS[FS] self.check() fs, _path = self._delegate(path) return fs.makedir(_path, permissions=permissions, recreate=recreate) def openbin(self, path, mode="r", buffering=-1, **kwargs): # type: (Text, Text, int, **Any) -> BinaryIO validate_openbin_mode(mode) self.check() fs, _path = self._delegate(path) return fs.openbin(_path, mode=mode, buffering=-1, **kwargs) def remove(self, path): # type: (Text) -> None self.check() fs, _path = self._delegate(path) return fs.remove(_path) def removedir(self, path): # type: (Text) -> None self.check() path = normpath(path) if path in ("", "/"): raise 
errors.RemoveRootError(path) fs, _path = self._delegate(path) return fs.removedir(_path) def readbytes(self, path): # type: (Text) -> bytes self.check() fs, _path = self._delegate(path) return fs.readbytes(_path) def download(self, path, file, chunk_size=None, **options): # type: (Text, BinaryIO, Optional[int], **Any) -> None fs, _path = self._delegate(path) return fs.download(_path, file, chunk_size=chunk_size, **options) def readtext( self, path, # type: Text encoding=None, # type: Optional[Text] errors=None, # type: Optional[Text] newline="", # type: Text ): # type: (...) -> Text self.check() fs, _path = self._delegate(path) return fs.readtext(_path, encoding=encoding, errors=errors, newline=newline) def getsize(self, path): # type: (Text) -> int self.check() fs, _path = self._delegate(path) return fs.getsize(_path) def getsyspath(self, path): # type: (Text) -> Text self.check() fs, _path = self._delegate(path) return fs.getsyspath(_path) def gettype(self, path): # type: (Text) -> ResourceType self.check() fs, _path = self._delegate(path) return fs.gettype(_path) def geturl(self, path, purpose="download"): # type: (Text, Text) -> Text self.check() fs, _path = self._delegate(path) return fs.geturl(_path, purpose=purpose) def hasurl(self, path, purpose="download"): # type: (Text, Text) -> bool self.check() fs, _path = self._delegate(path) return fs.hasurl(_path, purpose=purpose) def isdir(self, path): # type: (Text) -> bool self.check() fs, _path = self._delegate(path) return fs.isdir(_path) def isfile(self, path): # type: (Text) -> bool self.check() fs, _path = self._delegate(path) return fs.isfile(_path) def scandir( self, path, # type: Text namespaces=None, # type: Optional[Collection[Text]] page=None, # type: Optional[Tuple[int, int]] ): # type: (...) 
-> Iterator[Info] self.check() fs, _path = self._delegate(path) return fs.scandir(_path, namespaces=namespaces, page=page) def setinfo(self, path, info): # type: (Text, RawInfo) -> None self.check() fs, _path = self._delegate(path) return fs.setinfo(_path, info) def validatepath(self, path): # type: (Text) -> Text self.check() fs, _path = self._delegate(path) fs.validatepath(_path) path = abspath(normpath(path)) return path def open( self, path, # type: Text mode="r", # type: Text buffering=-1, # type: int encoding=None, # type: Optional[Text] errors=None, # type: Optional[Text] newline="", # type: Text **options # type: Any ): # type: (...) -> IO validate_open_mode(mode) self.check() fs, _path = self._delegate(path) return fs.open( _path, mode=mode, buffering=buffering, encoding=encoding, errors=errors, newline=newline, **options ) def upload(self, path, file, chunk_size=None, **options): # type: (Text, BinaryIO, Optional[int], **Any) -> None self.check() fs, _path = self._delegate(path) return fs.upload(_path, file, chunk_size=chunk_size, **options) def writebytes(self, path, contents): # type: (Text, bytes) -> None self.check() fs, _path = self._delegate(path) return fs.writebytes(_path, contents) def writetext( self, path, # type: Text contents, # type: Text encoding="utf-8", # type: Text errors=None, # type: Optional[Text] newline="", # type: Text ): # type: (...) -> None fs, _path = self._delegate(path) return fs.writetext( _path, contents, encoding=encoding, errors=errors, newline=newline ) pyfilesystem2-2.4.12/fs/move.py000066400000000000000000000054151400005060600163430ustar00rootroot00000000000000"""Functions for moving files between filesystems. 
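When source and destination are different filesystems, a move reduces to copy-then-delete. A minimal standalone sketch of that fallback, using plain dicts in place of real filesystems (names here are hypothetical):

```python
src = {"report.txt": b"data"}
dst = {}

def move_between(src_files, dst_files, path):
    dst_files[path] = src_files[path]  # the copy_file step
    del src_files[path]                # the src_fs.remove step

move_between(src, dst, "report.txt")
print(dst)  # {'report.txt': b'data'}
print(src)  # {}
```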
""" from __future__ import print_function from __future__ import unicode_literals import typing from .copy import copy_dir from .copy import copy_file from .opener import manage_fs if typing.TYPE_CHECKING: from .base import FS from typing import Text, Union def move_fs(src_fs, dst_fs, workers=0): # type: (Union[Text, FS], Union[Text, FS], int) -> None """Move the contents of a filesystem to another filesystem. Arguments: src_fs (FS or str): Source filesystem (instance or URL). dst_fs (FS or str): Destination filesystem (instance or URL). workers (int): Use `worker` threads to copy data, or ``0`` (default) for a single-threaded copy. """ move_dir(src_fs, "/", dst_fs, "/", workers=workers) def move_file( src_fs, # type: Union[Text, FS] src_path, # type: Text dst_fs, # type: Union[Text, FS] dst_path, # type: Text ): # type: (...) -> None """Move a file from one filesystem to another. Arguments: src_fs (FS or str): Source filesystem (instance or URL). src_path (str): Path to a file on ``src_fs``. dst_fs (FS or str); Destination filesystem (instance or URL). dst_path (str): Path to a file on ``dst_fs``. """ with manage_fs(src_fs) as _src_fs: with manage_fs(dst_fs, create=True) as _dst_fs: if _src_fs is _dst_fs: # Same filesystem, may be optimized _src_fs.move(src_path, dst_path, overwrite=True) else: # Standard copy and delete with _src_fs.lock(), _dst_fs.lock(): copy_file(_src_fs, src_path, _dst_fs, dst_path) _src_fs.remove(src_path) def move_dir( src_fs, # type: Union[Text, FS] src_path, # type: Text dst_fs, # type: Union[Text, FS] dst_path, # type: Text workers=0, # type: int ): # type: (...) -> None """Move a directory from one filesystem to another. Arguments: src_fs (FS or str): Source filesystem (instance or URL). src_path (str): Path to a directory on ``src_fs`` dst_fs (FS or str): Destination filesystem (instance or URL). dst_path (str): Path to a directory on ``dst_fs``. 
workers (int): Use `worker` threads to copy data, or ``0`` (default) for a single-threaded copy. """ def src(): return manage_fs(src_fs, writeable=False) def dst(): return manage_fs(dst_fs, create=True) with src() as _src_fs, dst() as _dst_fs: with _src_fs.lock(), _dst_fs.lock(): _dst_fs.makedir(dst_path, recreate=True) copy_dir(src_fs, src_path, dst_fs, dst_path, workers=workers) _src_fs.removetree(src_path) pyfilesystem2-2.4.12/fs/multifs.py000066400000000000000000000314161400005060600170600ustar00rootroot00000000000000"""Manage several filesystems through a single view. """ from __future__ import absolute_import from __future__ import unicode_literals from __future__ import print_function import typing from collections import namedtuple, OrderedDict from operator import itemgetter from six import text_type from . import errors from .base import FS from .mode import check_writable from .opener import open_fs from .path import abspath, normpath if typing.TYPE_CHECKING: from typing import ( Any, BinaryIO, Collection, Iterator, IO, MutableMapping, List, MutableSet, Optional, Text, Tuple, ) from .enums import ResourceType from .info import Info, RawInfo from .permissions import Permissions from .subfs import SubFS M = typing.TypeVar("M", bound="MultiFS") _PrioritizedFS = namedtuple("_PrioritizedFS", ["priority", "fs"]) class MultiFS(FS): """A filesystem that delegates to a sequence of other filesystems. Operations on the MultiFS will try each 'child' filesystem in order, until it succeeds. In effect, creating a filesystem that combines the files and dirs of its children. 
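The search order can be sketched standalone. This illustrates the ``(priority, insertion index)`` sort key used below; the filesystem names are hypothetical:

```python
from itertools import count

_sort_index = count()
filesystems = {}

def add_fs(name, priority=0):
    # The sort key pairs priority with insertion order, so among equal
    # priorities the most recently added filesystem is searched first.
    filesystems[name] = (priority, next(_sort_index))

add_fs("defaults")
add_fs("user_overrides")
add_fs("system", priority=1)

search_order = [
    name
    for name, _key in sorted(
        filesystems.items(), key=lambda item: item[1], reverse=True
    )
]
print(search_order)  # ['system', 'user_overrides', 'defaults']
```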
""" _meta = {"virtual": True, "read_only": False, "case_insensitive": False} def __init__(self, auto_close=True): # type: (bool) -> None super(MultiFS, self).__init__() self._auto_close = auto_close self.write_fs = None # type: Optional[FS] self._write_fs_name = None # type: Optional[Text] self._sort_index = 0 self._filesystems = {} # type: MutableMapping[Text, _PrioritizedFS] self._fs_sequence = None # type: Optional[List[Tuple[Text, FS]]] self._closed = False def __repr__(self): # type: () -> Text if self._auto_close: return "MultiFS()" else: return "MultiFS(auto_close=False)" def __str__(self): # type: () -> Text return "" def add_fs(self, name, fs, write=False, priority=0): # type: (Text, FS, bool, int) -> None """Add a filesystem to the MultiFS. Arguments: name (str): A unique name to refer to the filesystem being added. fs (FS or str): The filesystem (instance or URL) to add. write (bool): If this value is True, then the ``fs`` will be used as the writeable FS (defaults to False). priority (int): An integer that denotes the priority of the filesystem being added. Filesystems will be searched in descending priority order and then by the reverse order they were added. So by default, the most recently added filesystem will be looked at first. """ if isinstance(fs, text_type): fs = open_fs(fs) if not isinstance(fs, FS): raise TypeError("fs argument should be an FS object or FS URL") self._filesystems[name] = _PrioritizedFS( priority=(priority, self._sort_index), fs=fs ) self._sort_index += 1 self._resort() if write: self.write_fs = fs self._write_fs_name = name def get_fs(self, name): # type: (Text) -> FS """Get a filesystem from its name. Arguments: name (str): The name of a filesystem previously added. Returns: FS: the filesystem added as ``name`` previously. Raises: KeyError: If no filesystem with given ``name`` could be found. """ return self._filesystems[name].fs def _resort(self): # type: () -> None """Force `iterate_fs` to re-sort on next reference. 
""" self._fs_sequence = None def iterate_fs(self): # type: () -> Iterator[Tuple[Text, FS]] """Get iterator that returns (name, fs) in priority order. """ if self._fs_sequence is None: self._fs_sequence = [ (name, fs) for name, (_order, fs) in sorted( self._filesystems.items(), key=itemgetter(1), reverse=True ) ] return iter(self._fs_sequence) def _delegate(self, path): # type: (Text) -> Optional[FS] """Get a filesystem which has a given path. """ for _name, fs in self.iterate_fs(): if fs.exists(path): return fs return None def _delegate_required(self, path): # type: (Text) -> FS """Check that there is a filesystem with the given ``path``. """ fs = self._delegate(path) if fs is None: raise errors.ResourceNotFound(path) return fs def _writable_required(self, path): # type: (Text) -> FS """Check that ``path`` is writeable. """ if self.write_fs is None: raise errors.ResourceReadOnly(path) return self.write_fs def which(self, path, mode="r"): # type: (Text, Text) -> Tuple[Optional[Text], Optional[FS]] """Get a tuple of (name, fs) that the given path would map to. Arguments: path (str): A path on the filesystem. mode (str): An `io.open` mode. 
""" if check_writable(mode): return self._write_fs_name, self.write_fs for name, fs in self.iterate_fs(): if fs.exists(path): return name, fs return None, None def close(self): # type: () -> None self._closed = True if self._auto_close: try: for _order, fs in self._filesystems.values(): fs.close() finally: self._filesystems.clear() self._resort() def getinfo(self, path, namespaces=None): # type: (Text, Optional[Collection[Text]]) -> Info self.check() namespaces = namespaces or () fs = self._delegate(path) if fs is None: raise errors.ResourceNotFound(path) _path = abspath(normpath(path)) info = fs.getinfo(_path, namespaces=namespaces) return info def listdir(self, path): # type: (Text) -> List[Text] self.check() directory = [] exists = False for _name, _fs in self.iterate_fs(): try: directory.extend(_fs.listdir(path)) except errors.ResourceNotFound: pass else: exists = True if not exists: raise errors.ResourceNotFound(path) directory = list(OrderedDict.fromkeys(directory)) return directory def makedir( self, # type: M path, # type: Text permissions=None, # type: Optional[Permissions] recreate=False, # type: bool ): # type: (...) -> SubFS[FS] self.check() write_fs = self._writable_required(path) return write_fs.makedir(path, permissions=permissions, recreate=recreate) def openbin(self, path, mode="r", buffering=-1, **options): # type: (Text, Text, int, **Any) -> BinaryIO self.check() if check_writable(mode): _fs = self._writable_required(path) else: _fs = self._delegate_required(path) return _fs.openbin(path, mode=mode, buffering=buffering, **options) def remove(self, path): # type: (Text) -> None self.check() fs = self._delegate_required(path) return fs.remove(path) def removedir(self, path): # type: (Text) -> None self.check() fs = self._delegate_required(path) return fs.removedir(path) def scandir( self, path, # type: Text namespaces=None, # type: Optional[Collection[Text]] page=None, # type: Optional[Tuple[int, int]] ): # type: (...) 
-> Iterator[Info] self.check() seen = set() # type: MutableSet[Text] exists = False for _name, fs in self.iterate_fs(): try: for info in fs.scandir(path, namespaces=namespaces, page=page): if info.name not in seen: yield info seen.add(info.name) exists = True except errors.ResourceNotFound: pass if not exists: raise errors.ResourceNotFound(path) def readbytes(self, path): # type: (Text) -> bytes self.check() fs = self._delegate(path) if fs is None: raise errors.ResourceNotFound(path) return fs.readbytes(path) def download(self, path, file, chunk_size=None, **options): # type: (Text, BinaryIO, Optional[int], **Any) -> None fs = self._delegate_required(path) return fs.download(path, file, chunk_size=chunk_size, **options) def readtext(self, path, encoding=None, errors=None, newline=""): # type: (Text, Optional[Text], Optional[Text], Text) -> Text self.check() fs = self._delegate_required(path) return fs.readtext(path, encoding=encoding, errors=errors, newline=newline) def getsize(self, path): # type: (Text) -> int self.check() fs = self._delegate_required(path) return fs.getsize(path) def getsyspath(self, path): # type: (Text) -> Text self.check() fs = self._delegate_required(path) return fs.getsyspath(path) def gettype(self, path): # type: (Text) -> ResourceType self.check() fs = self._delegate_required(path) return fs.gettype(path) def geturl(self, path, purpose="download"): # type: (Text, Text) -> Text self.check() fs = self._delegate_required(path) return fs.geturl(path, purpose=purpose) def hassyspath(self, path): # type: (Text) -> bool self.check() fs = self._delegate(path) return fs is not None and fs.hassyspath(path) def hasurl(self, path, purpose="download"): # type: (Text, Text) -> bool self.check() fs = self._delegate(path) return fs is not None and fs.hasurl(path, purpose=purpose) def isdir(self, path): # type: (Text) -> bool self.check() fs = self._delegate(path) return fs is not None and fs.isdir(path) def isfile(self, path): # type: (Text) -> bool 
self.check() fs = self._delegate(path) return fs is not None and fs.isfile(path) def setinfo(self, path, info): # type:(Text, RawInfo) -> None self.check() write_fs = self._writable_required(path) return write_fs.setinfo(path, info) def validatepath(self, path): # type: (Text) -> Text self.check() if self.write_fs is not None: self.write_fs.validatepath(path) else: super(MultiFS, self).validatepath(path) path = abspath(normpath(path)) return path def makedirs( self, path, # type: Text permissions=None, # type: Optional[Permissions] recreate=False, # type: bool ): # type: (...) -> SubFS[FS] self.check() write_fs = self._writable_required(path) return write_fs.makedirs(path, permissions=permissions, recreate=recreate) def open( self, path, # type: Text mode="r", # type: Text buffering=-1, # type: int encoding=None, # type: Optional[Text] errors=None, # type: Optional[Text] newline="", # type: Text **kwargs # type: Any ): # type: (...) -> IO self.check() if check_writable(mode): _fs = self._writable_required(path) else: _fs = self._delegate_required(path) return _fs.open( path, mode=mode, buffering=buffering, encoding=encoding, errors=errors, newline=newline, **kwargs ) def upload(self, path, file, chunk_size=None, **options): # type: (Text, BinaryIO, Optional[int], **Any) -> None self._writable_required(path).upload( path, file, chunk_size=chunk_size, **options ) def writebytes(self, path, contents): # type: (Text, bytes) -> None self._writable_required(path).writebytes(path, contents) def writetext( self, path, # type: Text contents, # type: Text encoding="utf-8", # type: Text errors=None, # type: Optional[Text] newline="", # type: Text ): # type: (...) 
-> None write_fs = self._writable_required(path) return write_fs.writetext( path, contents, encoding=encoding, errors=errors, newline=newline ) pyfilesystem2-2.4.12/fs/opener/000077500000000000000000000000001400005060600163065ustar00rootroot00000000000000pyfilesystem2-2.4.12/fs/opener/__init__.py000066400000000000000000000012561400005060600204230ustar00rootroot00000000000000# coding: utf-8 """Open filesystems from a URL. """ # Declare fs.opener as a namespace package __import__("pkg_resources").declare_namespace(__name__) # type: ignore # Import objects into fs.opener namespace from .base import Opener from .parse import parse_fs_url as parse from .registry import registry # Import opener modules so that `registry.install` if called on each opener from . import appfs, ftpfs, memoryfs, osfs, tarfs, tempfs, zipfs # Alias functions defined as Registry methods open_fs = registry.open_fs open = registry.open manage_fs = registry.manage_fs # __all__ with aliases and classes __all__ = ["registry", "Opener", "open_fs", "open", "manage_fs", "parse"] pyfilesystem2-2.4.12/fs/opener/appfs.py000066400000000000000000000040731400005060600177750ustar00rootroot00000000000000# coding: utf-8 """``AppFS`` opener definition. """ from __future__ import absolute_import from __future__ import print_function from __future__ import unicode_literals import typing from .base import Opener from .registry import registry from .errors import OpenerError if typing.TYPE_CHECKING: from typing import Text, Union from .parse import ParseResult from ..appfs import _AppFS from ..subfs import SubFS @registry.install class AppFSOpener(Opener): """``AppFS`` opener. """ protocols = ["userdata", "userconf", "sitedata", "siteconf", "usercache", "userlog"] _protocol_mapping = None def open_fs( self, fs_url, # type: Text parse_result, # type: ParseResult writeable, # type: bool create, # type: bool cwd, # type: Text ): # type: (...) -> Union[_AppFS, SubFS[_AppFS]] from ..subfs import ClosingSubFS from .. 
import appfs if self._protocol_mapping is None: self._protocol_mapping = { "userdata": appfs.UserDataFS, "userconf": appfs.UserConfigFS, "sitedata": appfs.SiteDataFS, "siteconf": appfs.SiteConfigFS, "usercache": appfs.UserCacheFS, "userlog": appfs.UserLogFS, } fs_class = self._protocol_mapping[parse_result.protocol] resource, delim, path = parse_result.resource.partition("/") tokens = resource.split(":", 3) if len(tokens) == 2: appname, author = tokens version = None elif len(tokens) == 3: appname, author, version = tokens else: raise OpenerError( "resource should be : " "or ::" ) app_fs = fs_class(appname, author=author, version=version, create=create) if delim: if create: app_fs.makedir(path, recreate=True) return app_fs.opendir(path, factory=ClosingSubFS) return app_fs pyfilesystem2-2.4.12/fs/opener/base.py000066400000000000000000000027611400005060600176000ustar00rootroot00000000000000# coding: utf-8 """`Opener` abstract base class. """ import abc import typing import six if typing.TYPE_CHECKING: from typing import List, Text from ..base import FS from .parse import ParseResult @six.add_metaclass(abc.ABCMeta) class Opener(object): """The base class for filesystem openers. An opener is responsible for opening a filesystem for a given protocol. """ protocols = [] # type: List[Text] def __repr__(self): # type: () -> Text return "".format(self.protocols) @abc.abstractmethod def open_fs( self, fs_url, # type: Text parse_result, # type: ParseResult writeable, # type: bool create, # type: bool cwd, # type: Text ): # type: (...) -> FS """Open a filesystem object from a FS URL. Arguments: fs_url (str): A filesystem URL. parse_result (~fs.opener.parse.ParseResult): A parsed filesystem URL. writeable (bool): `True` if the filesystem must be writable. create (bool): `True` if the filesystem should be created if it does not exist. cwd (str): The current working directory (generally only relevant for OS filesystems). 
Raises: fs.opener.errors.OpenerError: If a filesystem could not be opened for any reason. Returns: `~fs.base.FS`: A filesystem instance. """ pyfilesystem2-2.4.12/fs/opener/errors.py000066400000000000000000000010071400005060600201720ustar00rootroot00000000000000# coding: utf-8 """Errors raised when attempting to open a filesystem. """ class ParseError(ValueError): """Attempt to parse an invalid FS URL. """ class OpenerError(Exception): """Base exception for opener related errors. """ class UnsupportedProtocol(OpenerError): """No opener found for the given protocol. """ class EntryPointError(OpenerError): """An entry point could not be loaded. """ class NotWriteable(OpenerError): """A writable FS could not be created. """ pyfilesystem2-2.4.12/fs/opener/ftpfs.py000066400000000000000000000030561400005060600200060ustar00rootroot00000000000000# coding: utf-8 """`FTPFS` opener definition. """ from __future__ import absolute_import from __future__ import print_function from __future__ import unicode_literals import typing from .base import Opener from .registry import registry from ..errors import CreateFailed if typing.TYPE_CHECKING: from typing import Text, Union from ..ftpfs import FTPFS # noqa: F401 from ..subfs import SubFS from .parse import ParseResult @registry.install class FTPOpener(Opener): """`FTPFS` opener. """ protocols = ["ftp"] @CreateFailed.catch_all def open_fs( self, fs_url, # type: Text parse_result, # type: ParseResult writeable, # type: bool create, # type: bool cwd, # type: Text ): # type: (...) 
-> Union[FTPFS, SubFS[FTPFS]] from ..ftpfs import FTPFS from ..subfs import ClosingSubFS ftp_host, _, dir_path = parse_result.resource.partition("/") ftp_host, _, ftp_port = ftp_host.partition(":") ftp_port = int(ftp_port) if ftp_port.isdigit() else 21 ftp_fs = FTPFS( ftp_host, port=ftp_port, user=parse_result.username, passwd=parse_result.password, proxy=parse_result.params.get("proxy"), timeout=int(parse_result.params.get("timeout", "10")), ) if dir_path: if create: ftp_fs.makedirs(dir_path, recreate=True) return ftp_fs.opendir(dir_path, factory=ClosingSubFS) else: return ftp_fs pyfilesystem2-2.4.12/fs/opener/memoryfs.py000066400000000000000000000014551400005060600205260ustar00rootroot00000000000000# coding: utf-8 """`MemoryFS` opener definition. """ from __future__ import absolute_import from __future__ import print_function from __future__ import unicode_literals import typing from .base import Opener from .registry import registry if typing.TYPE_CHECKING: from typing import Text from .parse import ParseResult from ..memoryfs import MemoryFS # noqa: F401 @registry.install class MemOpener(Opener): """`MemoryFS` opener. """ protocols = ["mem"] def open_fs( self, fs_url, # type: Text parse_result, # type: ParseResult writeable, # type: bool create, # type: bool cwd, # type: Text ): # type: (...) -> MemoryFS from ..memoryfs import MemoryFS mem_fs = MemoryFS() return mem_fs pyfilesystem2-2.4.12/fs/opener/osfs.py000066400000000000000000000017131400005060600176340ustar00rootroot00000000000000# coding: utf-8 """`OSFS` opener definition. """ from __future__ import absolute_import from __future__ import print_function from __future__ import unicode_literals import typing from .base import Opener from .registry import registry if typing.TYPE_CHECKING: from typing import Text from .parse import ParseResult from ..osfs import OSFS # noqa: F401 @registry.install class OSFSOpener(Opener): """`OSFS` opener. 
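The path resolution performed below can be sketched with ``os.path`` alone. This is a standalone illustration of the same chain; the example paths are hypothetical:

```python
from os.path import abspath, expanduser, join, normpath

def resolve(resource, cwd):
    # Same chain as open_fs: expand '~', join against cwd, then normalise.
    return normpath(abspath(join(cwd, expanduser(resource))))

print(resolve("projects/data", "/home/user"))  # /home/user/projects/data on POSIX
```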
""" protocols = ["file", "osfs"] def open_fs( self, fs_url, # type: Text parse_result, # type: ParseResult writeable, # type: bool create, # type: bool cwd, # type: Text ): # type: (...) -> OSFS from ..osfs import OSFS from os.path import abspath, expanduser, normpath, join _path = abspath(join(cwd, expanduser(parse_result.resource))) path = normpath(_path) osfs = OSFS(path, create=create) return osfs pyfilesystem2-2.4.12/fs/opener/parse.py000066400000000000000000000045051400005060600177760ustar00rootroot00000000000000"""Function to parse FS URLs in to their constituent parts. """ from __future__ import absolute_import from __future__ import print_function from __future__ import unicode_literals import collections import re import typing import six from six.moves.urllib.parse import parse_qs, unquote from .errors import ParseError if typing.TYPE_CHECKING: from typing import Optional, Text _ParseResult = collections.namedtuple( "ParseResult", ["protocol", "username", "password", "resource", "params", "path"] ) class ParseResult(_ParseResult): """A named tuple containing fields of a parsed FS URL. Attributes: protocol (str): The protocol part of the url, e.g. ``osfs`` or ``ftp``. username (str, optional): A username, or `None`. password (str, optional): A password, or `None`. resource (str): A *resource*, typically a domain and path, e.g. ``ftp.example.org/dir``. params (dict): A dictionary of parameters extracted from the query string. path (str, optional): A path within the filesystem, or `None`. """ _RE_FS_URL = re.compile( r""" ^ (.*?) :\/\/ (?: (?:(.*?)@(.*?)) |(.*?) ) (?: !(.*?)$ )*$ """, re.VERBOSE, ) def parse_fs_url(fs_url): # type: (Text) -> ParseResult """Parse a Filesystem URL and return a `ParseResult`. Arguments: fs_url (str): A filesystem URL. Returns: ~fs.opener.parse.ParseResult: a parse result instance. Raises: ~fs.errors.ParseError: if the FS URL is not valid. 
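    A stdlib-only sketch of what this parser extracts, using a copy of the
    ``_RE_FS_URL`` pattern defined above (the URL below is illustrative):

    ```python
    import re

    # Copy of the _RE_FS_URL pattern from fs/opener/parse.py
    _RE_FS_URL = re.compile(
        r"""
    ^
    (.*?)
    :\/\/
    (?:
    (?:(.*?)@(.*?))
    |(.*?)
    )
    (?:
    !(.*?)$
    )*$
    """,
        re.VERBOSE,
    )

    match = _RE_FS_URL.match("ftp://user:secret@ftp.example.org/pub!projects/src")
    protocol, credentials, url1, url2, path = match.groups()
    assert protocol == "ftp"
    assert credentials == "user:secret"   # parse_fs_url splits this on ':'
    assert url1 == "ftp.example.org/pub"  # the resource part
    assert path == "projects/src"         # path component after '!'
    ```

    `parse_fs_url` then unquotes the credentials and resource, and splits any
    query string into the ``params`` dict.
    
    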
    """
    match = _RE_FS_URL.match(fs_url)
    if match is None:
        raise ParseError("{!r} is not a fs2 url".format(fs_url))

    fs_name, credentials, url1, url2, path = match.groups()
    if not credentials:
        username = None  # type: Optional[Text]
        password = None  # type: Optional[Text]
        url = url2
    else:
        username, _, password = credentials.partition(":")
        username = unquote(username)
        password = unquote(password)
        url = url1
    url, has_qs, qs = url.partition("?")
    resource = unquote(url)
    if has_qs:
        _params = parse_qs(qs, keep_blank_values=True)
        params = {k: unquote(v[0]) for k, v in six.iteritems(_params)}
    else:
        params = {}
    return ParseResult(fs_name, username, password, resource, params, path)


# --- File: pyfilesystem2-2.4.12/fs/opener/py.typed (empty marker file) ---


# --- File: pyfilesystem2-2.4.12/fs/opener/registry.py ---

# coding: utf-8
"""`Registry` class mapping protocols and FS URLs to their `Opener`.
"""

from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import collections
import contextlib
import typing

import pkg_resources

from .base import Opener
from .errors import UnsupportedProtocol, EntryPointError
from .parse import parse_fs_url

if typing.TYPE_CHECKING:
    from typing import (
        Callable,
        Dict,
        Iterator,
        List,
        Text,
        Type,
        Tuple,
        Union,
    )
    from ..base import FS


class Registry(object):
    """A registry for `Opener` instances.
    """

    def __init__(self, default_opener="osfs", load_extern=False):
        # type: (Text, bool) -> None
        """Create a registry object.

        Arguments:
            default_opener (str, optional): The protocol to use, if one
                is not supplied. The default is to use 'osfs', so that
                the FS URL is treated as a system path if no protocol
                is given.
            load_extern (bool, optional): Set to `True` to load openers
                from PyFilesystem2 extensions. Defaults to `False`.
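        As a rough, stdlib-only sketch of the install machinery described
        here (``MiniRegistry`` is illustrative, not part of the real API):

        ```python
        class MiniRegistry(object):
            """Toy stand-in for Registry: maps protocol names to openers."""

            def __init__(self, default_opener="osfs"):
                self.default_opener = default_opener
                self._protocols = {}

            def install(self, opener):
                # Accept an instance or a class, as Registry.install does;
                # may be used as a class decorator.
                _opener = opener() if isinstance(opener, type) else opener
                for protocol in _opener.protocols:
                    self._protocols[protocol] = _opener
                return _opener


        registry = MiniRegistry()


        @registry.install
        class MemOpener(object):
            protocols = ["mem"]


        assert "mem" in registry._protocols
        assert registry.default_opener == "osfs"
        ```

        The real `Registry` additionally validates that the installed object
        is an `Opener` and, when ``load_extern`` is true, consults
        ``fs.opener`` entry points.
        
        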
        """
        self.default_opener = default_opener
        self.load_extern = load_extern
        self._protocols = {}  # type: Dict[Text, Opener]

    def __repr__(self):
        # type: () -> Text
        return "<registry {!r}>".format(self.protocols)

    def install(self, opener):
        # type: (Union[Type[Opener], Opener, Callable[[], Opener]]) -> Opener
        """Install an opener.

        Arguments:
            opener (`Opener`): an `Opener` instance, or a callable that
                returns an opener instance.

        Note:
            May be used as a class decorator. For example::

                registry = Registry()

                @registry.install
                class ArchiveOpener(Opener):
                    protocols = ['zip', 'tar']

        """
        _opener = opener if isinstance(opener, Opener) else opener()
        assert isinstance(_opener, Opener), "Opener instance required"
        assert _opener.protocols, "must list one or more protocols"
        for protocol in _opener.protocols:
            self._protocols[protocol] = _opener
        return _opener

    @property
    def protocols(self):
        # type: () -> List[Text]
        """`list`: the list of supported protocols.
        """
        _protocols = list(self._protocols)
        if self.load_extern:
            _protocols.extend(
                entry_point.name
                for entry_point in pkg_resources.iter_entry_points("fs.opener")
            )
            _protocols = list(collections.OrderedDict.fromkeys(_protocols))
        return _protocols

    def get_opener(self, protocol):
        # type: (Text) -> Opener
        """Get the opener class associated to a given protocol.

        Arguments:
            protocol (str): A filesystem protocol.

        Returns:
            Opener: an opener instance.

        Raises:
            ~fs.opener.errors.UnsupportedProtocol: If no opener could
                be found for the given protocol.
            EntryPointLoadingError: If the returned entry point is not
                an `Opener` subclass or could not be loaded successfully.
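        Sketch of the lookup and fallback behaviour with toy data
        (``lookup`` is illustrative, and `ValueError` stands in for
        `~fs.opener.errors.UnsupportedProtocol`):

        ```python
        def lookup(protocols, protocol, default_opener="osfs"):
            """Toy lookup: an empty protocol falls back to the default."""
            protocol = protocol or default_opener
            if protocol not in protocols:
                raise ValueError("protocol '{}' is not supported".format(protocol))
            return protocols[protocol]


        openers = {"osfs": "os-opener", "mem": "mem-opener"}
        assert lookup(openers, "mem") == "mem-opener"
        assert lookup(openers, "") == "os-opener"  # falls back to the default

        try:
            lookup(openers, "ftp")
        except ValueError as error:
            assert "ftp" in str(error)
        ```
        
        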
        """
        protocol = protocol or self.default_opener

        if self.load_extern:
            entry_point = next(
                pkg_resources.iter_entry_points("fs.opener", protocol), None
            )
        else:
            entry_point = None

        # If no entry point was loaded from the extensions, try looking
        # into the registered protocols
        if entry_point is None:
            if protocol in self._protocols:
                opener_instance = self._protocols[protocol]
            else:
                raise UnsupportedProtocol(
                    "protocol '{}' is not supported".format(protocol)
                )

        # If an entry point was found in an extension, attempt to load it
        else:
            try:
                opener = entry_point.load()
            except Exception as exception:
                raise EntryPointError(
                    "could not load entry point; {}".format(exception)
                )

            if not issubclass(opener, Opener):
                raise EntryPointError("entry point did not return an opener")

            try:
                opener_instance = opener()
            except Exception as exception:
                raise EntryPointError(
                    "could not instantiate opener; {}".format(exception)
                )

        return opener_instance

    def open(
        self,
        fs_url,  # type: Text
        writeable=True,  # type: bool
        create=False,  # type: bool
        cwd=".",  # type: Text
        default_protocol="osfs",  # type: Text
    ):
        # type: (...) -> Tuple[FS, Text]
        """Open a filesystem from a FS URL.

        Returns a tuple of a filesystem object and a path. If there is
        no path in the FS URL, the path value will be `None`.

        Arguments:
            fs_url (str): A filesystem URL.
            writeable (bool, optional): `True` if the filesystem must
                be writeable.
            create (bool, optional): `True` if the filesystem should be
                created if it does not exist.
            cwd (str): The current working directory.

        Returns:
            (FS, str): a tuple of ``(<filesystem>, <path from url>)``

        """
        if "://" not in fs_url:
            # URL may just be a path
            fs_url = "{}://{}".format(default_protocol, fs_url)

        parse_result = parse_fs_url(fs_url)
        protocol = parse_result.protocol
        open_path = parse_result.path

        opener = self.get_opener(protocol)

        open_fs = opener.open_fs(fs_url, parse_result, writeable, create, cwd)
        return open_fs, open_path

    def open_fs(
        self,
        fs_url,  # type: Union[FS, Text]
        writeable=False,  # type: bool
        create=False,  # type: bool
        cwd=".",  # type: Text
        default_protocol="osfs",  # type: Text
    ):
        # type: (...) -> FS
        """Open a filesystem from a FS URL (ignoring the path component).

        Arguments:
            fs_url (str): A filesystem URL.
            writeable (bool, optional): `True` if the filesystem must
                be writeable.
            create (bool, optional): `True` if the filesystem should be
                created if it does not exist.
            cwd (str): The current working directory (generally only
                relevant for OS filesystems).
            default_protocol (str): The protocol to use if one is not
                supplied in the FS URL (defaults to ``"osfs"``).

        Returns:
            ~fs.base.FS: A filesystem instance.

        """
        from ..base import FS

        if isinstance(fs_url, FS):
            _fs = fs_url
        else:
            _fs, _path = self.open(
                fs_url,
                writeable=writeable,
                create=create,
                cwd=cwd,
                default_protocol=default_protocol,
            )
        return _fs

    @contextlib.contextmanager
    def manage_fs(
        self,
        fs_url,  # type: Union[FS, Text]
        create=False,  # type: bool
        writeable=False,  # type: bool
        cwd=".",  # type: Text
    ):
        # type: (...) -> Iterator[FS]
        """Get a context manager to open and close a filesystem.

        Arguments:
            fs_url (FS or str): A filesystem instance or a FS URL.
            create (bool, optional): If `True`, then create the filesystem
                if it doesn't already exist.
            writeable (bool, optional): If `True`, then the filesystem
                must be writeable.
            cwd (str): The current working directory, if opening a
                `~fs.osfs.OSFS`.

        Sometimes it is convenient to be able to pass either a FS object
        *or* an FS URL to a function. This context manager handles the
        required logic for that.

        Example:
            >>> def print_ls(list_fs):
            ...     '''List a directory.'''
            ...     with manage_fs(list_fs) as fs:
            ...         print(' '.join(fs.listdir()))

            This function may be used in two ways. You may either pass
            a ``str``, as follows::

                >>> print_ls('zip://projects.zip')

            Or, a filesystem instance::

                >>> from fs.osfs import OSFS
                >>> projects_fs = OSFS('~/')
                >>> print_ls(projects_fs)

        """
        from ..base import FS

        if isinstance(fs_url, FS):
            yield fs_url
        else:
            _fs = self.open_fs(fs_url, create=create, writeable=writeable, cwd=cwd)
            try:
                yield _fs
            finally:
                _fs.close()


registry = Registry(load_extern=True)


# --- File: pyfilesystem2-2.4.12/fs/opener/tarfs.py ---

# coding: utf-8
"""`TarFS` opener definition.
"""

from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import typing

from .base import Opener
from .registry import registry
from .errors import NotWriteable

if typing.TYPE_CHECKING:
    from typing import Text
    from .parse import ParseResult
    from ..tarfs import TarFS  # noqa: F401


@registry.install
class TarOpener(Opener):
    """`TarFS` opener.
    """

    protocols = ["tar"]

    def open_fs(
        self,
        fs_url,  # type: Text
        parse_result,  # type: ParseResult
        writeable,  # type: bool
        create,  # type: bool
        cwd,  # type: Text
    ):
        # type: (...) -> TarFS
        from ..tarfs import TarFS

        if not create and writeable:
            raise NotWriteable("Unable to open existing TAR file for writing")
        tar_fs = TarFS(parse_result.resource, write=create)
        return tar_fs


# --- File: pyfilesystem2-2.4.12/fs/opener/tempfs.py ---

# coding: utf-8
"""`TempFS` opener definition.
"""

from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import typing

from .base import Opener
from .registry import registry

if typing.TYPE_CHECKING:
    from typing import Text
    from .parse import ParseResult
    from ..tempfs import TempFS  # noqa: F401


@registry.install
class TempOpener(Opener):
    """`TempFS` opener.
    """

    protocols = ["temp"]

    def open_fs(
        self,
        fs_url,  # type: Text
        parse_result,  # type: ParseResult
        writeable,  # type: bool
        create,  # type: bool
        cwd,  # type: Text
    ):
        # type: (...) -> TempFS
        from ..tempfs import TempFS

        temp_fs = TempFS(identifier=parse_result.resource)
        return temp_fs


# --- File: pyfilesystem2-2.4.12/fs/opener/zipfs.py ---

# coding: utf-8
"""`ZipFS` opener definition.
"""

from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import typing

from .base import Opener
from .registry import registry
from .errors import NotWriteable

if typing.TYPE_CHECKING:
    from typing import Text
    from .parse import ParseResult
    from ..zipfs import ZipFS  # noqa: F401


@registry.install
class ZipOpener(Opener):
    """`ZipFS` opener.
    """

    protocols = ["zip"]

    def open_fs(
        self,
        fs_url,  # type: Text
        parse_result,  # type: ParseResult
        writeable,  # type: bool
        create,  # type: bool
        cwd,  # type: Text
    ):
        # type: (...) -> ZipFS
        from ..zipfs import ZipFS

        if not create and writeable:
            raise NotWriteable("Unable to open existing ZIP file for writing")
        zip_fs = ZipFS(parse_result.resource, write=create)
        return zip_fs


# --- File: pyfilesystem2-2.4.12/fs/osfs.py ---

"""Manage the filesystem provided by your OS.

In essence, an `OSFS` is a thin layer over the `io` and `os` modules
of the Python standard library.
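The "thin layer" idea can be sketched with nothing but the standard
library (``MiniOSFS`` is illustrative only; the real class is defined
below and does far more validation and error translation):

```python
import os
import tempfile


class MiniOSFS(object):
    """Toy illustration of the OSFS idea: FS paths mapped onto a root dir."""

    def __init__(self, root_path):
        self._root = os.path.abspath(root_path)

    def _to_sys_path(self, path):
        # FS paths use '/' with an optional leading slash; map them to
        # OS paths under the root directory.
        return os.path.join(self._root, path.lstrip("/").replace("/", os.sep))

    def writetext(self, path, text):
        with open(self._to_sys_path(path), "w") as f:
            f.write(text)

    def readtext(self, path):
        with open(self._to_sys_path(path)) as f:
            return f.read()


root = tempfile.mkdtemp()
home = MiniOSFS(root)
home.writetext("/hello.txt", "hi")
assert home.readtext("hello.txt") == "hi"  # leading slash is optional
```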
"""

from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import errno
import io
import itertools
import logging
import os
import platform
import shutil
import stat
import sys
import tempfile
import typing

import six

try:
    from os import scandir
except ImportError:
    try:
        from scandir import scandir  # type: ignore
    except ImportError:  # pragma: no cover
        scandir = None  # type: ignore  # pragma: no cover

try:
    from os import sendfile
except ImportError:
    try:
        from sendfile import sendfile  # type: ignore
    except ImportError:
        sendfile = None  # type: ignore  # pragma: no cover

from . import errors
from .base import FS
from .enums import ResourceType
from ._fscompat import fsencode, fsdecode, fspath
from .info import Info
from .path import basename, dirname
from .permissions import Permissions
from .error_tools import convert_os_errors
from .mode import Mode, validate_open_mode
from .errors import FileExpected, NoURL
from ._url_tools import url_quote

if typing.TYPE_CHECKING:
    from typing import (
        Any,
        BinaryIO,
        Collection,
        Dict,
        Iterator,
        IO,
        List,
        Optional,
        SupportsInt,
        Text,
        Tuple,
    )
    from .base import _OpendirFactory
    from .info import RawInfo
    from .subfs import SubFS

    _O = typing.TypeVar("_O", bound="OSFS")


log = logging.getLogger("fs.osfs")


_WINDOWS_PLATFORM = platform.system() == "Windows"


@six.python_2_unicode_compatible
class OSFS(FS):
    """Create an OSFS.

    Arguments:
        root_path (str or ~os.PathLike): An OS path or path-like object
            to the location on your HD you wish to manage.
        create (bool): Set to `True` to create the root directory if it
            does not already exist, otherwise the directory should exist
            prior to creating the ``OSFS`` instance (defaults to `False`).
        create_mode (int): The permissions that will be used to create
            the directory if ``create`` is `True` and the path doesn't
            exist, defaults to ``0o777``.
        expand_vars (bool): If `True` (the default) environment variables
            of the form $name or ${name} will be expanded.

    Raises:
        `fs.errors.CreateFailed`: If ``root_path`` does not exist, or
            could not be created.

    Examples:
        >>> current_directory_fs = OSFS('.')
        >>> home_fs = OSFS('~/')
        >>> windows_system32_fs = OSFS('c://system32')

    """

    def __init__(
        self,
        root_path,  # type: Text
        create=False,  # type: bool
        create_mode=0o777,  # type: SupportsInt
        expand_vars=True,  # type: bool
    ):
        # type: (...) -> None
        """Create an OSFS instance.
        """
        super(OSFS, self).__init__()
        if isinstance(root_path, bytes):
            root_path = fsdecode(root_path)
        self.root_path = root_path
        _drive, _root_path = os.path.splitdrive(fsdecode(fspath(root_path)))
        _root_path = _drive + (_root_path or "/") if _drive else _root_path
        _root_path = os.path.expanduser(
            os.path.expandvars(_root_path) if expand_vars else _root_path
        )
        _root_path = os.path.normpath(os.path.abspath(_root_path))
        self._root_path = _root_path

        if create:
            try:
                if not os.path.isdir(_root_path):
                    os.makedirs(_root_path, mode=int(create_mode))
            except OSError as error:
                raise errors.CreateFailed(
                    "unable to create {} ({})".format(root_path, error), error
                )
        else:
            if not os.path.isdir(_root_path):
                message = "root path '{}' does not exist".format(_root_path)
                raise errors.CreateFailed(message)

        _meta = self._meta = {
            "network": False,
            "read_only": False,
            "supports_rename": True,
            "thread_safe": True,
            "unicode_paths": os.path.supports_unicode_filenames,
            "virtual": False,
        }

        try:
            # https://stackoverflow.com/questions/7870041/check-if-file-system-is-case-insensitive-in-python
            # I don't know of a better way of detecting case insensitivity
            # of a filesystem
            with tempfile.NamedTemporaryFile(prefix="TmP") as _tmp_file:
                _meta["case_insensitive"] = os.path.exists(_tmp_file.name.lower())
        except Exception:
            if platform.system() != "Darwin":
                _meta["case_insensitive"] = os.path.normcase("Aa") == "aa"

        if _WINDOWS_PLATFORM:  # pragma: no cover
            _meta["invalid_path_chars"] = (
                "".join(six.unichr(n) for n in range(31)) + '\\:*?"<>|'
            )
        else:
            _meta["invalid_path_chars"] = "\0"

            if "PC_PATH_MAX" in os.pathconf_names:
                try:
                    _meta["max_sys_path_length"] = os.pathconf(
                        fsencode(_root_path), os.pathconf_names["PC_PATH_MAX"]
                    )
                except OSError:  # pragma: no cover
                    # The above fails with nfs mounts on OSX. Go figure.
                    pass

    def __repr__(self):
        # type: () -> str
        _fmt = "{}({!r})"
        _class_name = self.__class__.__name__
        return _fmt.format(_class_name, self.root_path)

    def __str__(self):
        # type: () -> str
        fmt = "<{} '{}'>"
        _class_name = self.__class__.__name__
        return fmt.format(_class_name.lower(), self.root_path)

    def _to_sys_path(self, path):
        # type: (Text) -> bytes
        """Convert a FS path to a path on the OS.
        """
        sys_path = fsencode(
            os.path.join(self._root_path, path.lstrip("/").replace("/", os.sep))
        )
        return sys_path

    @classmethod
    def _make_details_from_stat(cls, stat_result):
        # type: (os.stat_result) -> Dict[Text, object]
        """Make a *details* info dict from an `os.stat_result` object.
        """
        details = {
            "_write": ["accessed", "modified"],
            "accessed": stat_result.st_atime,
            "modified": stat_result.st_mtime,
            "size": stat_result.st_size,
            "type": int(cls._get_type_from_stat(stat_result)),
        }
        # On other Unix systems (such as FreeBSD), the following
        # attributes may be available (but may be only filled out if
        # root tries to use them):
        details["created"] = getattr(stat_result, "st_birthtime", None)
        ctime_key = "created" if _WINDOWS_PLATFORM else "metadata_changed"
        details[ctime_key] = stat_result.st_ctime
        return details

    @classmethod
    def _make_access_from_stat(cls, stat_result):
        # type: (os.stat_result) -> Dict[Text, object]
        """Make an *access* info dict from an `os.stat_result` object.
        """
        access = {}  # type: Dict[Text, object]
        access["permissions"] = Permissions(mode=stat_result.st_mode).dump()
        access["gid"] = gid = stat_result.st_gid
        access["uid"] = uid = stat_result.st_uid
        if not _WINDOWS_PLATFORM:
            import grp
            import pwd

            try:
                access["group"] = grp.getgrgid(gid).gr_name
            except KeyError:  # pragma: no cover
                pass
            try:
                access["user"] = pwd.getpwuid(uid).pw_name
            except KeyError:  # pragma: no cover
                pass
        return access

    STAT_TO_RESOURCE_TYPE = {
        stat.S_IFDIR: ResourceType.directory,
        stat.S_IFCHR: ResourceType.character,
        stat.S_IFBLK: ResourceType.block_special_file,
        stat.S_IFREG: ResourceType.file,
        stat.S_IFIFO: ResourceType.fifo,
        stat.S_IFLNK: ResourceType.symlink,
        stat.S_IFSOCK: ResourceType.socket,
    }

    @classmethod
    def _get_type_from_stat(cls, _stat):
        # type: (os.stat_result) -> ResourceType
        """Get the resource type from an `os.stat_result` object.
        """
        st_mode = _stat.st_mode
        st_type = stat.S_IFMT(st_mode)
        return cls.STAT_TO_RESOURCE_TYPE.get(st_type, ResourceType.unknown)

    # --------------------------------------------------------
    # Required Methods
    # --------------------------------------------------------

    def _gettarget(self, sys_path):
        # type: (Text) -> Optional[Text]
        if hasattr(os, "readlink"):
            try:
                if _WINDOWS_PLATFORM:  # pragma: no cover
                    return os.readlink(sys_path)
                else:
                    return fsdecode(os.readlink(fsencode(sys_path)))
            except OSError:
                pass
        return None

    def _make_link_info(self, sys_path):
        # type: (Text) -> Dict[Text, object]
        _target = self._gettarget(sys_path)
        return {"target": _target}

    def getinfo(self, path, namespaces=None):
        # type: (Text, Optional[Collection[Text]]) -> Info
        self.check()
        namespaces = namespaces or ()
        _path = self.validatepath(path)
        sys_path = self.getsyspath(_path)
        _lstat = None
        with convert_os_errors("getinfo", path):
            _stat = os.stat(fsencode(sys_path))
            if "lstat" in namespaces:
                _lstat = os.lstat(fsencode(sys_path))

        info = {
            "basic": {"name": basename(_path), "is_dir": stat.S_ISDIR(_stat.st_mode)}
        }
        if "details" in namespaces:
            info["details"] = self._make_details_from_stat(_stat)
        if "stat" in namespaces:
            info["stat"] = {
                k: getattr(_stat, k) for k in dir(_stat) if k.startswith("st_")
            }
        if "lstat" in namespaces:
            info["lstat"] = {
                k: getattr(_lstat, k) for k in dir(_lstat) if k.startswith("st_")
            }
        if "link" in namespaces:
            info["link"] = self._make_link_info(sys_path)
        if "access" in namespaces:
            info["access"] = self._make_access_from_stat(_stat)

        return Info(info)

    def listdir(self, path):
        # type: (Text) -> List[Text]
        self.check()
        _path = self.validatepath(path)
        sys_path = self._to_sys_path(_path)
        with convert_os_errors("listdir", path, directory=True):
            names = os.listdir(fsencode(sys_path))
        return [fsdecode(name) for name in names]

    def makedir(
        self,  # type: _O
        path,  # type: Text
        permissions=None,  # type: Optional[Permissions]
        recreate=False,  # type: bool
    ):
        # type: (...) -> SubFS[_O]
        self.check()
        mode = Permissions.get_mode(permissions)
        _path = self.validatepath(path)
        sys_path = self._to_sys_path(_path)
        with convert_os_errors("makedir", path, directory=True):
            try:
                os.mkdir(sys_path, mode)
            except OSError as error:
                if error.errno == errno.ENOENT:
                    raise errors.ResourceNotFound(path)
                elif error.errno == errno.EEXIST and recreate:
                    pass
                else:
                    raise
            return self.opendir(_path)

    def openbin(self, path, mode="r", buffering=-1, **options):
        # type: (Text, Text, int, **Any) -> BinaryIO
        _mode = Mode(mode)
        _mode.validate_bin()
        self.check()
        _path = self.validatepath(path)
        if _path == "/":
            raise errors.FileExpected(path)
        sys_path = self._to_sys_path(_path)
        with convert_os_errors("openbin", path):
            if six.PY2 and _mode.exclusive:
                sys_path = os.open(sys_path, os.O_RDWR | os.O_CREAT | os.O_EXCL)
            binary_file = io.open(
                sys_path, mode=_mode.to_platform_bin(), buffering=buffering, **options
            )
        return binary_file  # type: ignore

    def remove(self, path):
        # type: (Text) -> None
        self.check()
        _path = self.validatepath(path)
        sys_path = self._to_sys_path(_path)
        with convert_os_errors("remove", path):
            try:
                os.remove(sys_path)
            except OSError as error:
                if error.errno == errno.EACCES and sys.platform == "win32":
                    # sometimes windows says this for attempts to remove a dir
                    if os.path.isdir(sys_path):  # pragma: no cover
                        raise errors.FileExpected(path)
                if error.errno == errno.EPERM and sys.platform == "darwin":
                    # sometimes OSX says this for attempts to remove a dir
                    if os.path.isdir(sys_path):  # pragma: no cover
                        raise errors.FileExpected(path)
                raise

    def removedir(self, path):
        # type: (Text) -> None
        self.check()
        _path = self.validatepath(path)
        if _path == "/":
            raise errors.RemoveRootError()
        sys_path = self._to_sys_path(path)
        with convert_os_errors("removedir", path, directory=True):
            os.rmdir(sys_path)

    # --------------------------------------------------------
    # Optional Methods
    # --------------------------------------------------------

    # --- Type hint for opendir ------------------------------

    if typing.TYPE_CHECKING:

        def opendir(self, path, factory=None):
            # type: (_O, Text, Optional[_OpendirFactory]) -> SubFS[_O]
            pass

    # --- Backport of os.sendfile for Python < 3.8 -----------

    def _check_copy(self, src_path, dst_path, overwrite=False):
        # validate individual paths
        _src_path = self.validatepath(src_path)
        _dst_path = self.validatepath(dst_path)
        # check src_path exists and is a file
        if self.gettype(src_path) is not ResourceType.file:
            raise errors.FileExpected(src_path)
        # check dst_path does not exist if we are not overwriting
        if not overwrite and self.exists(_dst_path):
            raise errors.DestinationExists(dst_path)
        # check parent dir of _dst_path exists and is a directory
        if self.gettype(dirname(dst_path)) is not ResourceType.directory:
            raise errors.DirectoryExpected(dirname(dst_path))
        return _src_path, _dst_path

    if sys.version_info[:2] < (3, 8) and sendfile is not None:

        _sendfile_error_codes = {
            errno.EIO,
            errno.EINVAL,
            errno.ENOSYS,
            errno.EBADF,
            errno.ENOTSOCK,
            errno.EOPNOTSUPP,
        }

        # PyPy doesn't define ENOTSUP so we have to add it conditionally.
        if hasattr(errno, "ENOTSUP"):
            _sendfile_error_codes.add(errno.ENOTSUP)

        def copy(self, src_path, dst_path, overwrite=False):
            # type: (Text, Text, bool) -> None
            with self._lock:
                # validate and canonicalise paths
                _src_path, _dst_path = self._check_copy(src_path, dst_path, overwrite)
                _src_sys, _dst_sys = (
                    self.getsyspath(_src_path),
                    self.getsyspath(_dst_path),
                )
                # attempt using sendfile
                try:
                    # initialise variables to pass to sendfile
                    # open files to obtain a file descriptor
                    with io.open(_src_sys, "r") as src:
                        with io.open(_dst_sys, "w") as dst:
                            fd_src, fd_dst = src.fileno(), dst.fileno()
                            sent = maxsize = os.fstat(fd_src).st_size
                            offset = 0
                            while sent > 0:
                                sent = sendfile(fd_dst, fd_src, offset, maxsize)
                                offset += sent
                except OSError as e:
                    # the error is not a simple "sendfile not supported" error
                    if e.errno not in self._sendfile_error_codes:
                        raise
                    # fallback using the shutil implementation
                    shutil.copy2(_src_sys, _dst_sys)

    else:

        def copy(self, src_path, dst_path, overwrite=False):
            # type: (Text, Text, bool) -> None
            with self._lock:
                _src_path, _dst_path = self._check_copy(src_path, dst_path, overwrite)
                shutil.copy2(self.getsyspath(_src_path), self.getsyspath(_dst_path))

    # --- Backport of os.scandir for Python < 3.5 ------------

    if scandir:

        def _scandir(self, path, namespaces=None):
            # type: (Text, Optional[Collection[Text]]) -> Iterator[Info]
            self.check()
            namespaces = namespaces or ()
            _path = self.validatepath(path)
            if _WINDOWS_PLATFORM:
                sys_path = os.path.join(
                    self._root_path, path.lstrip("/").replace("/", os.sep)
                )
            else:
                sys_path = self._to_sys_path(_path)  # type: ignore
            with convert_os_errors("scandir", path, directory=True):
                for dir_entry in scandir(sys_path):
                    info = {
                        "basic": {
                            "name": fsdecode(dir_entry.name),
                            "is_dir": dir_entry.is_dir(),
                        }
                    }
                    if "details" in namespaces:
                        stat_result = dir_entry.stat()
                        info["details"] = self._make_details_from_stat(stat_result)
                    if "stat" in namespaces:
                        stat_result = dir_entry.stat()
                        info["stat"] = {
                            k: getattr(stat_result, k)
                            for k in dir(stat_result)
                            if k.startswith("st_")
                        }
                    if "lstat" in namespaces:
                        lstat_result = dir_entry.stat(follow_symlinks=False)
                        info["lstat"] = {
                            k: getattr(lstat_result, k)
                            for k in dir(lstat_result)
                            if k.startswith("st_")
                        }
                    if "link" in namespaces:
                        info["link"] = self._make_link_info(
                            os.path.join(sys_path, dir_entry.name)
                        )
                    if "access" in namespaces:
                        stat_result = dir_entry.stat()
                        info["access"] = self._make_access_from_stat(stat_result)

                    yield Info(info)

    else:

        def _scandir(self, path, namespaces=None):
            # type: (Text, Optional[Collection[Text]]) -> Iterator[Info]
            self.check()
            namespaces = namespaces or ()
            _path = self.validatepath(path)
            sys_path = self.getsyspath(_path)
            with convert_os_errors("scandir", path, directory=True):
                for entry_name in os.listdir(sys_path):
                    _entry_name = fsdecode(entry_name)
                    entry_path = os.path.join(sys_path, _entry_name)
                    stat_result = os.stat(fsencode(entry_path))
                    info = {
                        "basic": {
                            "name": _entry_name,
                            "is_dir": stat.S_ISDIR(stat_result.st_mode),
                        }
                    }  # type: Dict[Text, Dict[Text, Any]]
                    if "details" in namespaces:
                        info["details"] = self._make_details_from_stat(stat_result)
                    if "stat" in namespaces:
                        info["stat"] = {
                            k: getattr(stat_result, k)
                            for k in dir(stat_result)
                            if k.startswith("st_")
                        }
                    if "lstat" in namespaces:
                        lstat_result = os.lstat(entry_path)
                        info["lstat"] = {
                            k: getattr(lstat_result, k)
                            for k in dir(lstat_result)
                            if k.startswith("st_")
                        }
                    if "link" in namespaces:
                        info["link"] = self._make_link_info(
                            os.path.join(sys_path, entry_name)
                        )
                    if "access" in namespaces:
                        info["access"] = self._make_access_from_stat(stat_result)

                    yield Info(info)

    def scandir(
        self,
        path,  # type: Text
        namespaces=None,  # type: Optional[Collection[Text]]
        page=None,  # type: Optional[Tuple[int, int]]
    ):
        # type: (...) -> Iterator[Info]
        iter_info = self._scandir(path, namespaces=namespaces)
        if page is not None:
            start, end = page
            iter_info = itertools.islice(iter_info, start, end)
        return iter_info

    # --- Miscellaneous --------------------------------------

    def getsyspath(self, path):
        # type: (Text) -> Text
        sys_path = os.path.join(self._root_path, path.lstrip("/").replace("/", os.sep))
        return sys_path

    def geturl(self, path, purpose="download"):
        # type: (Text, Text) -> Text
        sys_path = self.getsyspath(path)
        if purpose == "download":
            return "file://" + sys_path
        elif purpose == "fs":
            url_path = url_quote(sys_path)
            return "osfs://" + url_path
        else:
            raise NoURL(path, purpose)

    def gettype(self, path):
        # type: (Text) -> ResourceType
        self.check()
        sys_path = self._to_sys_path(path)
        with convert_os_errors("gettype", path):
            stat = os.stat(sys_path)
            resource_type = self._get_type_from_stat(stat)
        return resource_type

    def islink(self, path):
        # type: (Text) -> bool
        self.check()
        _path = self.validatepath(path)
        sys_path = self._to_sys_path(_path)
        if not self.exists(path):
            raise errors.ResourceNotFound(path)
        with convert_os_errors("islink", path):
            return os.path.islink(sys_path)

    def open(
        self,
        path,  # type: Text
        mode="r",  # type: Text
        buffering=-1,  # type: int
        encoding=None,  # type: Optional[Text]
        errors=None,  # type: Optional[Text]
        newline="",  # type: Text
        line_buffering=False,  # type: bool
        **options  # type: Any
    ):
        # type: (...) -> IO
        _mode = Mode(mode)
        validate_open_mode(mode)
        self.check()
        _path = self.validatepath(path)
        if _path == "/":
            raise FileExpected(path)
        sys_path = self._to_sys_path(_path)
        with convert_os_errors("open", path):
            if six.PY2 and _mode.exclusive:
                sys_path = os.open(sys_path, os.O_RDWR | os.O_CREAT | os.O_EXCL)
            _encoding = encoding or "utf-8"
            return io.open(
                sys_path,
                mode=_mode.to_platform(),
                buffering=buffering,
                encoding=None if _mode.binary else _encoding,
                errors=errors,
                newline=None if _mode.binary else newline,
                **options
            )

    def setinfo(self, path, info):
        # type: (Text, RawInfo) -> None
        self.check()
        _path = self.validatepath(path)
        sys_path = self._to_sys_path(_path)
        if not os.path.exists(sys_path):
            raise errors.ResourceNotFound(path)
        if "details" in info:
            details = info["details"]
            if "accessed" in details or "modified" in details:
                _accessed = typing.cast(int, details.get("accessed"))
                _modified = typing.cast(int, details.get("modified", _accessed))
                accessed = int(_modified if _accessed is None else _accessed)
                modified = int(_modified)
                if accessed is not None or modified is not None:
                    with convert_os_errors("setinfo", path):
                        os.utime(sys_path, (accessed, modified))

    def validatepath(self, path):
        # type: (Text) -> Text
        """Check path may be encoded, in addition to usual checks."""
        try:
            fsencode(path)
        except UnicodeEncodeError as error:
            raise errors.InvalidCharsInPath(
                path,
                msg="path '{path}' could not be encoded for the filesystem "
                "(check LANG env var); {error}".format(path=path, error=error),
            )
        return super(OSFS, self).validatepath(path)


# --- File: pyfilesystem2-2.4.12/fs/path.py ---

"""Useful functions for working with PyFilesystem paths.

This is broadly similar to the standard `os.path` module but works
with paths in the canonical format expected by all FS objects (that
is, separated by forward slashes and with an optional leading slash).

See :ref:`paths` for an explanation of PyFilesystem paths.
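A condensed, stdlib-only sketch of the back-reference collapsing that the
``normpath`` function below performs (``mini_normpath`` is illustrative;
the real function also has early-outs and its own exception type):

```python
def mini_normpath(path):
    """Collapse '.', '..' and '//' in an FS path (simplified normpath)."""
    prefix = "/" if path.startswith("/") else ""
    components = []
    for component in path.split("/"):
        if component in ("..", ".", ""):
            if component == "..":
                if not components:
                    # back-reference escaping the filesystem root
                    raise ValueError("illegal back-reference in %r" % path)
                components.pop()
        else:
            components.append(component)
    return prefix + "/".join(components)


assert mini_normpath("/foo//bar/frob/../baz") == "/foo/bar/baz"
```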
""" from __future__ import print_function from __future__ import unicode_literals import re import typing from .errors import IllegalBackReference if typing.TYPE_CHECKING: from typing import List, Text, Tuple __all__ = [ "abspath", "basename", "combine", "dirname", "forcedir", "frombase", "isabs", "isbase", "isdotfile", "isparent", "issamedir", "iswildcard", "iteratepath", "join", "normpath", "parts", "recursepath", "relativefrom", "relpath", "split", "splitext", ] _requires_normalization = re.compile(r"(^|/)\.\.?($|/)|//", re.UNICODE).search def normpath(path): # type: (Text) -> Text """Normalize a path. This function simplifies a path by collapsing back-references and removing duplicated separators. Arguments: path (str): Path to normalize. Returns: str: A valid FS path. Example: >>> normpath("/foo//bar/frob/../baz") '/foo/bar/baz' >>> normpath("foo/../../bar") Traceback (most recent call last) ... IllegalBackReference: path 'foo/../../bar' contains back-references outside of filesystem" """ # noqa: E501 if path in "/": return path # An early out if there is no need to normalize this path if not _requires_normalization(path): return path.rstrip("/") prefix = "/" if path.startswith("/") else "" components = [] # type: List[Text] try: for component in path.split("/"): if component in "..": # True for '..', '.', and '' if component == "..": components.pop() else: components.append(component) except IndexError: raise IllegalBackReference(path) return prefix + "/".join(components) def iteratepath(path): # type: (Text) -> List[Text] """Iterate over the individual components of a path. Arguments: path (str): Path to iterate over. Returns: list: A list of path components. Example: >>> iteratepath('/foo/bar/baz') ['foo', 'bar', 'baz'] """ path = relpath(normpath(path)) if not path: return [] return path.split("/") def recursepath(path, reverse=False): # type: (Text, bool) -> List[Text] """Get intermediate paths from the root to the given path. 
Arguments: path (str): A PyFilesystem path reverse (bool): Reverses the order of the paths (default `False`). Returns: list: A list of paths. Example: >>> recursepath('a/b/c') ['/', '/a', '/a/b', '/a/b/c'] """ if path in "/": return ["/"] path = abspath(normpath(path)) + "/" paths = ["/"] find = path.find append = paths.append pos = 1 len_path = len(path) while pos < len_path: pos = find("/", pos) append(path[:pos]) pos += 1 if reverse: return paths[::-1] return paths def isabs(path): # type: (Text) -> bool """Check if a path is an absolute path. Arguments: path (str): A PyFilesytem path. Returns: bool: `True` if the path is absolute (starts with a ``'/'``). """ # Somewhat trivial, but helps to make code self-documenting return path.startswith("/") def abspath(path): # type: (Text) -> Text """Convert the given path to an absolute path. Since FS objects have no concept of a *current directory*, this simply adds a leading ``/`` character if the path doesn't already have one. Arguments: path (str): A PyFilesytem path. Returns: str: An absolute path. """ if not path.startswith("/"): return "/" + path return path def relpath(path): # type: (Text) -> Text """Convert the given path to a relative path. This is the inverse of `abspath`, stripping a leading ``'/'`` from the path if it is present. Arguments: path (str): A path to adjust. Returns: str: A relative path. Example: >>> relpath('/a/b') 'a/b' """ return path.lstrip("/") def join(*paths): # type: (*Text) -> Text """Join any number of paths together. Arguments: *paths (str): Paths to join, given as positional arguments. Returns: str: The joined path. 
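The separator-scanning loop in `recursepath` above can be sketched as follows (assuming an already-normalized input path; the real function normalizes first):

```python
# Sketch of recursepath: walk the separators left to right, emitting each
# intermediate prefix from the root down to the full path.
def recursepath(path, reverse=False):
    if path in "/":
        return ["/"]
    path = (path if path.startswith("/") else "/" + path) + "/"
    paths = ["/"]
    pos = 1
    while pos < len(path):
        pos = path.find("/", pos)
        paths.append(path[:pos])
        pos += 1
    return paths[::-1] if reverse else paths
```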
Example: >>> join('foo', 'bar', 'baz') 'foo/bar/baz' >>> join('foo/bar', '../baz') 'foo/baz' >>> join('foo/bar', '/baz') '/baz' """ absolute = False relpaths = [] # type: List[Text] for p in paths: if p: if p[0] == "/": del relpaths[:] absolute = True relpaths.append(p) path = normpath("/".join(relpaths)) if absolute: path = abspath(path) return path def combine(path1, path2): # type: (Text, Text) -> Text """Join two paths together. This is faster than :func:`~fs.path.join`, but only works when the second path is relative, and there are no back references in either path. Arguments: path1 (str): A PyFilesytem path. path2 (str): A PyFilesytem path. Returns: str: The joint path. Example: >>> combine("foo/bar", "baz") 'foo/bar/baz' """ if not path1: return path2.lstrip() return "{}/{}".format(path1.rstrip("/"), path2.lstrip("/")) def parts(path): # type: (Text) -> List[Text] """Split a path in to its component parts. Arguments: path (str): Path to split in to parts. Returns: list: List of components Example: >>> parts('/foo/bar/baz') ['/', 'foo', 'bar', 'baz'] """ _path = normpath(path) components = _path.strip("/") _parts = ["/" if _path.startswith("/") else "./"] if components: _parts += components.split("/") return _parts def split(path): # type: (Text) -> Tuple[Text, Text] """Split a path into (head, tail) pair. This function splits a path into a pair (head, tail) where 'tail' is the last pathname component and 'head' is all preceding components. Arguments: path (str): Path to split Returns: (str, str): a tuple containing the head and the tail of the path. Example: >>> split("foo/bar") ('foo', 'bar') >>> split("foo/bar/baz") ('foo/bar', 'baz') >>> split("/foo/bar/baz") ('/foo/bar', 'baz') """ if "/" not in path: return ("", path) split = path.rsplit("/", 1) return (split[0] or "/", split[1]) def splitext(path): # type: (Text) -> Tuple[Text, Text] """Split the extension from the path. Arguments: path (str): A path to split. 
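The head/tail behaviour of `split` above reduces to a single `rsplit`, with the head falling back to `'/'` for direct children of the root. A minimal sketch:

```python
# Sketch of split: one rsplit on the last separator; an empty head means
# the path was '/name', so the head becomes the root itself.
def split(path):
    if "/" not in path:
        return ("", path)
    head, tail = path.rsplit("/", 1)
    return (head or "/", tail)
```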
Returns: (str, str): A tuple containing the path and the extension. Example: >>> splitext('baz.txt') ('baz', '.txt') >>> splitext('foo/bar/baz.txt') ('foo/bar/baz', '.txt') >>> splitext('foo/bar/.foo') ('foo/bar/.foo', '') """ parent_path, pathname = split(path) if pathname.startswith(".") and pathname.count(".") == 1: return path, "" if "." not in pathname: return path, "" pathname, ext = pathname.rsplit(".", 1) path = join(parent_path, pathname) return path, "." + ext def isdotfile(path): # type: (Text) -> bool """Detect if a path references a dot file. Arguments: path (str): Path to check. Returns: bool: `True` if the resource name starts with a ``'.'``. Example: >>> isdotfile('.baz') True >>> isdotfile('foo/bar/.baz') True >>> isdotfile('foo/bar.baz') False """ return basename(path).startswith(".") def dirname(path): # type: (Text) -> Text """Return the parent directory of a path. This is always equivalent to the 'head' component of the value returned by ``split(path)``. Arguments: path (str): A PyFilesytem path. Returns: str: the parent directory of the given path. Example: >>> dirname('foo/bar/baz') 'foo/bar' >>> dirname('/foo/bar') '/foo' >>> dirname('/foo') '/' """ return split(path)[0] def basename(path): # type: (Text) -> Text """Return the basename of the resource referenced by a path. This is always equivalent to the 'tail' component of the value returned by split(path). Arguments: path (str): A PyFilesytem path. Returns: str: the name of the resource at the given path. Example: >>> basename('foo/bar/baz') 'baz' >>> basename('foo/bar') 'bar' >>> basename('foo/bar/') '' """ return split(path)[1] def issamedir(path1, path2): # type: (Text, Text) -> bool """Check if two paths reference a resource in the same directory. Arguments: path1 (str): A PyFilesytem path. path2 (str): A PyFilesytem path. Returns: bool: `True` if the two resources are in the same directory. 
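The dotfile special case in `splitext` above is easy to miss; this standalone sketch makes it explicit (illustrative reimplementation, not the library code):

```python
# Sketch of splitext: a leading-dot name with no other dots ('.foo') is a
# name, not an extension, so it splits off an empty extension.
def splitext(path):
    parent, name = path.rsplit("/", 1) if "/" in path else ("", path)
    if name.startswith(".") and name.count(".") == 1:
        return path, ""
    if "." not in name:
        return path, ""
    stem, ext = name.rsplit(".", 1)
    return (parent + "/" + stem if parent else stem), "." + ext
```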
Example: >>> issamedir("foo/bar/baz.txt", "foo/bar/spam.txt") True >>> issamedir("foo/bar/baz/txt", "spam/eggs/spam.txt") False """ return dirname(normpath(path1)) == dirname(normpath(path2)) def isbase(path1, path2): # type: (Text, Text) -> bool """Check if ``path1`` is a base of ``path2``. Arguments: path1 (str): A PyFilesytem path. path2 (str): A PyFilesytem path. Returns: bool: `True` if ``path2`` starts with ``path1`` Example: >>> isbase('foo/bar', 'foo/bar/baz/egg.txt') True """ _path1 = forcedir(abspath(path1)) _path2 = forcedir(abspath(path2)) return _path2.startswith(_path1) # longer one is child def isparent(path1, path2): # type: (Text, Text) -> bool """Check if ``path1`` is a parent directory of ``path2``. Arguments: path1 (str): A PyFilesytem path. path2 (str): A PyFilesytem path. Returns: bool: `True` if ``path1`` is a parent directory of ``path2`` Example: >>> isparent("foo/bar", "foo/bar/spam.txt") True >>> isparent("foo/bar/", "foo/bar") True >>> isparent("foo/barry", "foo/baz/bar") False >>> isparent("foo/bar/baz/", "foo/baz/bar") False """ bits1 = path1.split("/") bits2 = path2.split("/") while bits1 and bits1[-1] == "": bits1.pop() if len(bits1) > len(bits2): return False for (bit1, bit2) in zip(bits1, bits2): if bit1 != bit2: return False return True def forcedir(path): # type: (Text) -> Text """Ensure the path ends with a trailing forward slash. Arguments: path (str): A PyFilesytem path. Returns: str: The path, ending with a slash. Example: >>> forcedir("foo/bar") 'foo/bar/' >>> forcedir("foo/bar/") 'foo/bar/' >>> forcedir("foo/spam.txt") 'foo/spam.txt/' """ if not path.endswith("/"): return path + "/" return path def frombase(path1, path2): # type: (Text, Text) -> Text """Get the final path of ``path2`` that isn't in ``path1``. Arguments: path1 (str): A PyFilesytem path. path2 (str): A PyFilesytem path. Returns: str: the final part of ``path2``. 
Example: >>> frombase('foo/bar/', 'foo/bar/baz/egg') 'baz/egg' """ if not isparent(path1, path2): raise ValueError("path1 must be a prefix of path2") return path2[len(path1) :] def relativefrom(base, path): # type: (Text, Text) -> Text """Return a path relative from a given base path. Insert backrefs as appropriate to reach the path from the base. Arguments: base (str): Path to a directory. path (str): Path to make relative. Returns: str: the path to ``base`` from ``path``. >>> relativefrom("foo/bar", "baz/index.html") '../../baz/index.html' """ base_parts = list(iteratepath(base)) path_parts = list(iteratepath(path)) common = 0 for component_a, component_b in zip(base_parts, path_parts): if component_a != component_b: break common += 1 return "/".join([".."] * (len(base_parts) - common) + path_parts[common:]) _WILD_CHARS = frozenset("*?[]!{}") def iswildcard(path): # type: (Text) -> bool """Check if a path ends with a wildcard. Arguments: path (str): A PyFilesystem path. Returns: bool: `True` if path ends with a wildcard. Example: >>> iswildcard('foo/bar/baz.*') True >>> iswildcard('foo/bar') False """ assert path is not None return not _WILD_CHARS.isdisjoint(path) pyfilesystem2-2.4.12/fs/permissions.py000066400000000000000000000231131400005060600177430ustar00rootroot00000000000000"""Abstract permissions container. """ from __future__ import print_function from __future__ import unicode_literals import typing from typing import Iterable import six from ._typing import Text if typing.TYPE_CHECKING: from typing import Iterator, List, Optional, Tuple, Type, Union def make_mode(init): # type: (Union[int, Iterable[Text], None]) -> int """Make a mode integer from an initial value. """ return Permissions.get_mode(init) class _PermProperty(object): """Creates simple properties to get/set permissions. 
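The `relativefrom` computation above (count shared leading components, climb out of the rest of the base with `..`, then descend into the target) can be sketched with plain string handling:

```python
# Sketch of relativefrom: shared prefix components are dropped, the
# remainder of `base` becomes '..' hops, and `path`'s tail is appended.
def relativefrom(base, path):
    base_parts = [p for p in base.strip("/").split("/") if p]
    path_parts = [p for p in path.strip("/").split("/") if p]
    common = 0
    for a, b in zip(base_parts, path_parts):
        if a != b:
            break
        common += 1
    return "/".join([".."] * (len(base_parts) - common) + path_parts[common:])
```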
""" def __init__(self, name): # type: (Text) -> None self._name = name self.__doc__ = "Boolean for '{}' permission.".format(name) def __get__(self, obj, obj_type=None): # type: (Permissions, Optional[Type[Permissions]]) -> bool return self._name in obj def __set__(self, obj, value): # type: (Permissions, bool) -> None if value: obj.add(self._name) else: obj.remove(self._name) @six.python_2_unicode_compatible class Permissions(object): """An abstraction for file system permissions. Permissions objects store information regarding the permissions on a resource. It supports Linux permissions, but is generic enough to manage permission information from almost any filesystem. Arguments: names (list, optional): A list of permissions. mode (int, optional): A mode integer. user (str, optional): A triplet of *user* permissions, e.g. ``"rwx"`` or ``"r--"`` group (str, optional): A triplet of *group* permissions, e.g. ``"rwx"`` or ``"r--"`` other (str, optional): A triplet of *other* permissions, e.g. ``"rwx"`` or ``"r--"`` sticky (bool, optional): A boolean for the *sticky* bit. setuid (bool, optional): A boolean for the *setuid* bit. setguid (bool, optional): A boolean for the *setguid* bit. 
Example: >>> from fs.permissions import Permissions >>> p = Permissions(user='rwx', group='rw-', other='r--') >>> print(p) rwxrw-r-- >>> p.mode 500 >>> oct(p.mode) '0764' """ _LINUX_PERMS = [ ("setuid", 2048), ("setguid", 1024), ("sticky", 512), ("u_r", 256), ("u_w", 128), ("u_x", 64), ("g_r", 32), ("g_w", 16), ("g_x", 8), ("o_r", 4), ("o_w", 2), ("o_x", 1), ] # type: List[Tuple[Text, int]] _LINUX_PERMS_NAMES = [_name for _name, _mask in _LINUX_PERMS] # type: List[Text] def __init__( self, names=None, # type: Optional[Iterable[Text]] mode=None, # type: Optional[int] user=None, # type: Optional[Text] group=None, # type: Optional[Text] other=None, # type: Optional[Text] sticky=None, # type: Optional[bool] setuid=None, # type: Optional[bool] setguid=None, # type: Optional[bool] ): # type: (...) -> None if names is not None: self._perms = set(names) elif mode is not None: self._perms = {name for name, mask in self._LINUX_PERMS if mode & mask} else: perms = self._perms = set() perms.update("u_" + p for p in user or "" if p != "-") perms.update("g_" + p for p in group or "" if p != "-") perms.update("o_" + p for p in other or "" if p != "-") if sticky: self._perms.add("sticky") if setuid: self._perms.add("setuid") if setguid: self._perms.add("setguid") def __repr__(self): # type: () -> Text if not self._perms.issubset(self._LINUX_PERMS_NAMES): _perms_str = ", ".join("'{}'".format(p) for p in sorted(self._perms)) return "Permissions(names=[{}])".format(_perms_str) def _check(perm, name): # type: (Text, Text) -> Text return name if perm in self._perms else "" user = "".join((_check("u_r", "r"), _check("u_w", "w"), _check("u_x", "x"))) group = "".join((_check("g_r", "r"), _check("g_w", "w"), _check("g_x", "x"))) other = "".join((_check("o_r", "r"), _check("o_w", "w"), _check("o_x", "x"))) args = [] _fmt = "user='{}', group='{}', other='{}'" basic = _fmt.format(user, group, other) args.append(basic) if self.sticky: args.append("sticky=True") if self.setuid: 
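The `_LINUX_PERMS` table above pairs each permission name with a single mode bit, so the mode integer is just the OR of the masks whose names are set. A standalone sketch of that computation:

```python
# Each permission name maps to one bit of the classic Unix mode integer.
LINUX_PERMS = [
    ("setuid", 2048), ("setguid", 1024), ("sticky", 512),
    ("u_r", 256), ("u_w", 128), ("u_x", 64),
    ("g_r", 32), ("g_w", 16), ("g_x", 8),
    ("o_r", 4), ("o_w", 2), ("o_x", 1),
]

def mode_from_names(names):
    mode = 0
    for name, mask in LINUX_PERMS:
        if name in names:
            mode |= mask
    return mode
```

This reproduces the docstring example: `rwxrw-r--` is `0o764`, i.e. `500` in decimal.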
args.append("setuid=True") if self.setuid: args.append("setguid=True") return "Permissions({})".format(", ".join(args)) def __str__(self): # type: () -> Text return self.as_str() def __iter__(self): # type: () -> Iterator[Text] return iter(self._perms) def __contains__(self, permission): # type: (object) -> bool return permission in self._perms def __eq__(self, other): # type: (object) -> bool if isinstance(other, Permissions): names = other.dump() # type: object else: names = other return self.dump() == names def __ne__(self, other): # type: (object) -> bool return not self.__eq__(other) @classmethod def parse(cls, ls): # type: (Text) -> Permissions """Parse permissions in Linux notation. """ user = ls[:3] group = ls[3:6] other = ls[6:9] return cls(user=user, group=group, other=other) @classmethod def load(cls, permissions): # type: (List[Text]) -> Permissions """Load a serialized permissions object. """ return cls(names=permissions) @classmethod def create(cls, init=None): # type: (Union[int, Iterable[Text], None]) -> Permissions """Create a permissions object from an initial value. Arguments: init (int or list, optional): May be None to use `0o777` permissions, a mode integer, or a list of permission names. Returns: int: mode integer that may be used for instance by `os.makedir`. Example: >>> Permissions.create(None) Permissions(user='rwx', group='rwx', other='rwx') >>> Permissions.create(0o700) Permissions(user='rwx', group='', other='') >>> Permissions.create(['u_r', 'u_w', 'u_x']) Permissions(user='rwx', group='', other='') """ if init is None: return cls(mode=0o777) if isinstance(init, cls): return init if isinstance(init, int): return cls(mode=init) if isinstance(init, list): return cls(names=init) raise ValueError("permissions is invalid") @classmethod def get_mode(cls, init): # type: (Union[int, Iterable[Text], None]) -> int """Convert an initial value to a mode integer. 
""" return cls.create(init).mode def copy(self): # type: () -> Permissions """Make a copy of this permissions object. """ return Permissions(names=list(self._perms)) def dump(self): # type: () -> List[Text] """Get a list suitable for serialization. """ return sorted(self._perms) def as_str(self): # type: () -> Text """Get a Linux-style string representation of permissions. """ perms = [ c if name in self._perms else "-" for name, c in zip(self._LINUX_PERMS_NAMES[-9:], "rwxrwxrwx") ] if "setuid" in self._perms: perms[2] = "s" if "u_x" in self._perms else "S" if "setguid" in self._perms: perms[5] = "s" if "g_x" in self._perms else "S" if "sticky" in self._perms: perms[8] = "t" if "o_x" in self._perms else "T" perm_str = "".join(perms) return perm_str @property def mode(self): # type: () -> int """`int`: mode integer. """ mode = 0 for name, mask in self._LINUX_PERMS: if name in self._perms: mode |= mask return mode @mode.setter def mode(self, mode): # type: (int) -> None self._perms = {name for name, mask in self._LINUX_PERMS if mode & mask} u_r = _PermProperty("u_r") u_w = _PermProperty("u_w") u_x = _PermProperty("u_x") g_r = _PermProperty("g_r") g_w = _PermProperty("g_w") g_x = _PermProperty("g_x") o_r = _PermProperty("o_r") o_w = _PermProperty("o_w") o_x = _PermProperty("o_x") sticky = _PermProperty("sticky") setuid = _PermProperty("setuid") setguid = _PermProperty("setguid") def add(self, *permissions): # type: (*Text) -> None """Add permission(s). Arguments: *permissions (str): Permission name(s), such as ``'u_w'`` or ``'u_x'``. """ self._perms.update(permissions) def remove(self, *permissions): # type: (*Text) -> None """Remove permission(s). Arguments: *permissions (str): Permission name(s), such as ``'u_w'`` or ``'u_x'``.s """ self._perms.difference_update(permissions) def check(self, *permissions): # type: (*Text) -> bool """Check if one or more permissions are enabled. Arguments: *permissions (str): Permission name(s), such as ``'u_w'`` or ``'u_x'``. 
Returns: bool: `True` if all given permissions are set. """ return self._perms.issuperset(permissions) pyfilesystem2-2.4.12/fs/py.typed000066400000000000000000000000001400005060600165030ustar00rootroot00000000000000pyfilesystem2-2.4.12/fs/subfs.py000066400000000000000000000032011400005060600165060ustar00rootroot00000000000000"""Manage a directory in a *parent* filesystem. """ from __future__ import print_function from __future__ import unicode_literals import typing import six from .wrapfs import WrapFS from .path import abspath, join, normpath, relpath if typing.TYPE_CHECKING: from .base import FS # noqa: F401 from typing import Text, Tuple _F = typing.TypeVar("_F", bound="FS", covariant=True) @six.python_2_unicode_compatible class SubFS(WrapFS[_F], typing.Generic[_F]): """A sub-directory on another filesystem. A SubFS is a filesystem object that maps to a sub-directory of another filesystem. This is the object that is returned by `~fs.base.FS.opendir`. """ def __init__(self, parent_fs, path): # type: (_F, Text) -> None super(SubFS, self).__init__(parent_fs) self._sub_dir = abspath(normpath(path)) def __repr__(self): # type: () -> Text return "{}({!r}, {!r})".format( self.__class__.__name__, self._wrap_fs, self._sub_dir ) def __str__(self): # type: () -> Text return "{parent}{dir}".format(parent=self._wrap_fs, dir=self._sub_dir) def delegate_fs(self): # type: () -> _F return self._wrap_fs def delegate_path(self, path): # type: (Text) -> Tuple[_F, Text] _path = join(self._sub_dir, relpath(normpath(path))) return self._wrap_fs, _path class ClosingSubFS(SubFS[_F], typing.Generic[_F]): """A version of `SubFS` which closes its parent when closed. """ def close(self): # type: () -> None self.delegate_fs().close() super(ClosingSubFS, self).close() pyfilesystem2-2.4.12/fs/tarfs.py000066400000000000000000000360531400005060600165160ustar00rootroot00000000000000"""Manage the filesystem in a Tar archive. 
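`SubFS.delegate_path` above resolves every incoming path against the sub-directory before handing the call to the parent filesystem. A string-only sketch of that translation (the real implementation goes through `join`, `relpath`, and `normpath`):

```python
# Sketch of SubFS path delegation: anchor the relative form of `path`
# under `sub_dir` on the parent filesystem.
def delegate_path(sub_dir, path):
    rel = path.lstrip("/")
    combined = sub_dir.rstrip("/") + "/" + rel
    return combined.rstrip("/") or "/"
```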
""" from __future__ import print_function from __future__ import unicode_literals import os import tarfile import typing from collections import OrderedDict from typing import cast, IO import six from . import errors from .base import FS from .compress import write_tar from .enums import ResourceType from .errors import IllegalBackReference, NoURL from .info import Info from .iotools import RawWrapper from .opener import open_fs from .permissions import Permissions from ._url_tools import url_quote from .path import relpath, basename, isbase, normpath, parts, frombase from .wrapfs import WrapFS if typing.TYPE_CHECKING: from tarfile import TarInfo from typing import ( Any, BinaryIO, Collection, Dict, List, Optional, Text, Tuple, Union, ) from .info import RawInfo from .subfs import SubFS T = typing.TypeVar("T", bound="ReadTarFS") __all__ = ["TarFS", "WriteTarFS", "ReadTarFS"] if six.PY2: def _get_member_info(member, encoding): # type: (TarInfo, Text) -> Dict[Text, object] return member.get_info(encoding, None) else: def _get_member_info(member, encoding): # type: (TarInfo, Text) -> Dict[Text, object] # NOTE(@althonos): TarInfo.get_info is neither in the doc nor # in the `tarfile` stub, and yet it exists and is public ! return member.get_info() # type: ignore class TarFS(WrapFS): """Read and write tar files. There are two ways to open a TarFS for the use cases of reading a tar file, and creating a new one. If you open the TarFS with ``write`` set to `False` (the default), then the filesystem will be a read only filesystem which maps to the files and directories within the tar file. Files are decompressed on the fly when you open them. Here's how you might extract and print a readme from a tar file:: with TarFS('foo.tar.gz') as tar_fs: readme = tar_fs.readtext('readme.txt') If you open the TarFS with ``write`` set to `True`, then the TarFS will be a empty temporary filesystem. 
Any files / directories you create in the TarFS will be written in to a tar file when the TarFS is closed. The compression is set from the new file name but may be set manually with the ``compression`` argument. Here's how you might write a new tar file containing a readme.txt file:: with TarFS('foo.tar.xz', write=True) as new_tar: new_tar.writetext( 'readme.txt', 'This tar file was written by PyFilesystem' ) Arguments: file (str or io.IOBase): An OS filename, or an open file handle. write (bool): Set to `True` to write a new tar file, or use default (`False`) to read an existing tar file. compression (str, optional): Compression to use (one of the formats supported by `tarfile`: ``xz``, ``gz``, ``bz2``, or `None`). temp_fs (str): An FS URL for the temporary filesystem used to store data prior to tarring. """ _compression_formats = { # FMT #UNIX #MSDOS "xz": (".tar.xz", ".txz"), "bz2": (".tar.bz2", ".tbz"), "gz": (".tar.gz", ".tgz"), } def __new__( # type: ignore cls, file, # type: Union[Text, BinaryIO] write=False, # type: bool compression=None, # type: Optional[Text] encoding="utf-8", # type: Text temp_fs="temp://__tartemp__", # type: Text ): # type: (...) -> FS if isinstance(file, (six.text_type, six.binary_type)): file = os.path.expanduser(file) filename = file # type: Text else: filename = getattr(file, "name", "") if write and compression is None: compression = None for comp, extensions in six.iteritems(cls._compression_formats): if filename.endswith(extensions): compression = comp break if write: return WriteTarFS( file, compression=compression, encoding=encoding, temp_fs=temp_fs ) else: return ReadTarFS(file, encoding=encoding) if typing.TYPE_CHECKING: def __init__( self, file, # type: Union[Text, BinaryIO] write=False, # type: bool compression=None, # type: Optional[Text] encoding="utf-8", # type: Text temp_fs="temp://__tartemp__", # type: Text ): # type: (...) -> None pass @six.python_2_unicode_compatible class WriteTarFS(WrapFS): """A writable tar file. 
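The extension-to-compression selection performed in `TarFS.__new__` above relies on `str.endswith` accepting a tuple of suffixes. A sketch of that lookup using the same format table:

```python
# Sketch of TarFS's compression guess: the first format whose extensions
# match the filename wins; unknown extensions mean an uncompressed tar.
_compression_formats = {
    "xz": (".tar.xz", ".txz"),
    "bz2": (".tar.bz2", ".tbz"),
    "gz": (".tar.gz", ".tgz"),
}

def guess_compression(filename):
    for comp, extensions in _compression_formats.items():
        if filename.endswith(extensions):  # endswith accepts a tuple
            return comp
    return None
```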
""" def __init__( self, file, # type: Union[Text, BinaryIO] compression=None, # type: Optional[Text] encoding="utf-8", # type: Text temp_fs="temp://__tartemp__", # type: Text ): # type: (...) -> None self._file = file # type: Union[Text, BinaryIO] self.compression = compression self.encoding = encoding self._temp_fs_url = temp_fs self._temp_fs = open_fs(temp_fs) self._meta = dict(self._temp_fs.getmeta()) # type: ignore super(WriteTarFS, self).__init__(self._temp_fs) def __repr__(self): # type: () -> Text t = "WriteTarFS({!r}, compression={!r}, encoding={!r}, temp_fs={!r})" return t.format(self._file, self.compression, self.encoding, self._temp_fs_url) def __str__(self): # type: () -> Text return "".format(self._file) def delegate_path(self, path): # type: (Text) -> Tuple[FS, Text] return self._temp_fs, path def delegate_fs(self): # type: () -> FS return self._temp_fs def close(self): # type: () -> None if not self.isclosed(): try: self.write_tar() finally: self._temp_fs.close() super(WriteTarFS, self).close() def write_tar( self, file=None, # type: Union[Text, BinaryIO, None] compression=None, # type: Optional[Text] encoding=None, # type: Optional[Text] ): # type: (...) -> None """Write tar to a file. Arguments: file (str or io.IOBase, optional): Destination file, may be a file name or an open file object. compression (str, optional): Compression to use (one of the constants defined in `tarfile` in the stdlib). encoding (str, optional): The character encoding to use (default uses the encoding defined in `~WriteTarFS.__init__`). Note: This is called automatically when the TarFS is closed. """ if not self.isclosed(): write_tar( self._temp_fs, file or self._file, compression=compression or self.compression, encoding=encoding or self.encoding, ) @six.python_2_unicode_compatible class ReadTarFS(FS): """A readable tar file. 
""" _meta = { "case_insensitive": True, "network": False, "read_only": True, "supports_rename": False, "thread_safe": True, "unicode_paths": True, "virtual": False, } _typemap = type_map = { tarfile.BLKTYPE: ResourceType.block_special_file, tarfile.CHRTYPE: ResourceType.character, tarfile.DIRTYPE: ResourceType.directory, tarfile.FIFOTYPE: ResourceType.fifo, tarfile.REGTYPE: ResourceType.file, tarfile.AREGTYPE: ResourceType.file, tarfile.SYMTYPE: ResourceType.symlink, tarfile.CONTTYPE: ResourceType.file, tarfile.LNKTYPE: ResourceType.symlink, } @errors.CreateFailed.catch_all def __init__(self, file, encoding="utf-8"): # type: (Union[Text, BinaryIO], Text) -> None super(ReadTarFS, self).__init__() self._file = file self.encoding = encoding if isinstance(file, (six.text_type, six.binary_type)): self._tar = tarfile.open(file, mode="r") else: self._tar = tarfile.open(fileobj=file, mode="r") self._directory_cache = None @property def _directory_entries(self): """Lazy directory cache.""" if self._directory_cache is None: _decode = self._decode _directory_entries = ( (_decode(info.name).strip("/"), info) for info in self._tar ) def _list_tar(): for name, info in _directory_entries: try: _name = normpath(name) except IllegalBackReference: # Back references outside root, must be up to no good. 
pass else: if _name: yield _name, info self._directory_cache = OrderedDict(_list_tar()) return self._directory_cache def __repr__(self): # type: () -> Text return "ReadTarFS({!r})".format(self._file) def __str__(self): # type: () -> Text return "<TarFS {!r}>".format(self._file) if six.PY2: def _encode(self, s): # type: (Text) -> str return s.encode(self.encoding) def _decode(self, s): # type: (str) -> Text return s.decode(self.encoding) else: def _encode(self, s): # type: (Text) -> str return s def _decode(self, s): # type: (str) -> Text return s def getinfo(self, path, namespaces=None): # type: (Text, Optional[Collection[Text]]) -> Info _path = relpath(self.validatepath(path)) namespaces = namespaces or () raw_info = {} # type: Dict[Text, Dict[Text, object]] if not _path: raw_info["basic"] = {"name": "", "is_dir": True} if "details" in namespaces: raw_info["details"] = {"type": int(ResourceType.directory)} else: try: implicit = False member = self._directory_entries[_path] except KeyError: if not self.isdir(_path): raise errors.ResourceNotFound(path) implicit = True member = tarfile.TarInfo(_path) member.type = tarfile.DIRTYPE raw_info["basic"] = { "name": basename(self._decode(member.name)), "is_dir": member.isdir(), } if "details" in namespaces: raw_info["details"] = { "size": member.size, "type": int(self.type_map[member.type]), } if not implicit: raw_info["details"]["modified"] = member.mtime if "access" in namespaces and not implicit: raw_info["access"] = { "gid": member.gid, "group": member.gname, "permissions": Permissions(mode=member.mode).dump(), "uid": member.uid, "user": member.uname, } if "tar" in namespaces and not implicit: raw_info["tar"] = _get_member_info(member, self.encoding) raw_info["tar"].update( { k.replace("is", "is_"): getattr(member, k)() for k in dir(member) if k.startswith("is") } ) return Info(raw_info) def isdir(self, path): _path = relpath(self.validatepath(path)) try: return self._directory_entries[_path].isdir() except KeyError: return
any(isbase(_path, name) for name in self._directory_entries) def isfile(self, path): _path = relpath(self.validatepath(path)) try: return self._directory_entries[_path].isfile() except KeyError: return False def setinfo(self, path, info): # type: (Text, RawInfo) -> None self.check() raise errors.ResourceReadOnly(path) def listdir(self, path): # type: (Text) -> List[Text] _path = relpath(self.validatepath(path)) if not self.gettype(path) is ResourceType.directory: raise errors.DirectoryExpected(path) children = ( frombase(_path, n) for n in self._directory_entries if isbase(_path, n) ) content = (parts(child)[1] for child in children if relpath(child)) return list(OrderedDict.fromkeys(content)) def makedir( self, # type: T path, # type: Text permissions=None, # type: Optional[Permissions] recreate=False, # type: bool ): # type: (...) -> SubFS[T] self.check() raise errors.ResourceReadOnly(path) def openbin(self, path, mode="r", buffering=-1, **options): # type: (Text, Text, int, **Any) -> BinaryIO _path = relpath(self.validatepath(path)) if "w" in mode or "+" in mode or "a" in mode: raise errors.ResourceReadOnly(path) try: member = self._directory_entries[_path] except KeyError: six.raise_from(errors.ResourceNotFound(path), None) if not member.isfile(): raise errors.FileExpected(path) rw = RawWrapper(cast(IO, self._tar.extractfile(member))) if six.PY2: # Patch nonexistent file.flush in Python2 def _flush(): pass rw.flush = _flush return rw # type: ignore def remove(self, path): # type: (Text) -> None self.check() raise errors.ResourceReadOnly(path) def removedir(self, path): # type: (Text) -> None self.check() raise errors.ResourceReadOnly(path) def close(self): # type: () -> None super(ReadTarFS, self).close() if hasattr(self, "_tar"): self._tar.close() def isclosed(self): # type: () -> bool return self._tar.closed # type: ignore def geturl(self, path, purpose="download"): # type: (Text, Text) -> Text if purpose == "fs" and isinstance(self._file, six.string_types): 
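`ReadTarFS.listdir` above derives the direct children of a directory from the flat list of member names, keeping only the first path component under the requested prefix and de-duplicating while preserving order. A sketch over plain strings:

```python
# Sketch of listdir over a flat tar member index: everything under the
# directory prefix contributes its first remaining component, once.
def list_children(entries, dir_path):
    prefix = dir_path.rstrip("/") + "/" if dir_path else ""
    children = []
    for name in entries:
        if name.startswith(prefix) and name != dir_path:
            child = name[len(prefix):].split("/")[0]
            if child and child not in children:
                children.append(child)
    return children
```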
quoted_file = url_quote(self._file) quoted_path = url_quote(path) return "tar://{}!/{}".format(quoted_file, quoted_path) else: raise NoURL(path, purpose) if __name__ == "__main__": # pragma: no cover from fs.tree import render with TarFS("tests.tar") as tar_fs: print(tar_fs.listdir("/")) print(tar_fs.listdir("/tests/")) print(tar_fs.readtext("tests/ttt/settings.ini")) render(tar_fs) print(tar_fs) print(repr(tar_fs)) with TarFS("TarFS.tar", write=True) as tar_fs: tar_fs.makedirs("foo/bar") tar_fs.writetext("foo/bar/baz.txt", "Hello, World") print(tar_fs) print(repr(tar_fs)) pyfilesystem2-2.4.12/fs/tempfs.py000066400000000000000000000051751400005060600166760ustar00rootroot00000000000000"""Manage filesystems in temporary locations. A temporary filesytem is stored in a location defined by your OS (``/tmp`` on linux). The contents are deleted when the filesystem is closed. A `TempFS` is a good way of preparing a directory structure in advance, that you can later copy. It can also be used as a temporary data store. """ from __future__ import print_function from __future__ import unicode_literals import shutil import tempfile import typing import six from . import errors from .osfs import OSFS if typing.TYPE_CHECKING: from typing import Optional, Text @six.python_2_unicode_compatible class TempFS(OSFS): """A temporary filesystem on the OS. Arguments: identifier (str): A string to distinguish the directory within the OS temp location, used as part of the directory name. temp_dir (str, optional): An OS path to your temp directory (leave as `None` to auto-detect) auto_clean (bool): If `True` (the default), the directory contents will be wiped on close. ignore_clean_errors (bool): If `True` (the default), any errors in the clean process will be suppressed. If `False`, they will be raised. """ def __init__( self, identifier="__tempfs__", # type: Text temp_dir=None, # type: Optional[Text] auto_clean=True, # type: bool ignore_clean_errors=True, # type: bool ): # type: (...) 
-> None self.identifier = identifier self._auto_clean = auto_clean self._ignore_clean_errors = ignore_clean_errors self._cleaned = False self.identifier = identifier.replace("/", "-") self._temp_dir = tempfile.mkdtemp(identifier or "fsTempFS", dir=temp_dir) super(TempFS, self).__init__(self._temp_dir) def __repr__(self): # type: () -> Text return "TempFS()" def __str__(self): # type: () -> Text return "<tempfs '{}'>".format(self._temp_dir) def close(self): # type: () -> None if self._auto_clean: self.clean() super(TempFS, self).close() def clean(self): # type: () -> None """Clean (delete) temporary files created by this filesystem. """ if self._cleaned: return try: shutil.rmtree(self._temp_dir) except Exception as error: if not self._ignore_clean_errors: raise errors.OperationFailed( msg="failed to remove temporary directory; {}".format(error), exc=error, ) self._cleaned = True pyfilesystem2-2.4.12/fs/test.py000066400000000000000000002136601400005060600163570ustar00rootroot00000000000000# coding: utf-8 """Base class for tests. All Filesystems should be able to pass these. """ from __future__ import absolute_import from __future__ import unicode_literals from datetime import datetime import io import itertools import json import math import os import time import unittest import fs.copy import fs.move from fs import ResourceType, Seek from fs import errors from fs import walk from fs import glob from fs.opener import open_fs from fs.subfs import ClosingSubFS, SubFS import pytz import six from six import text_type if six.PY2: import collections as collections_abc else: import collections.abc as collections_abc UNICODE_TEXT = """ UTF-8 encoded sample plain-text file ‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾ Markus Kuhn [ˈmaʳkʊs kuːn] <mkuhn@acm.org> — 1999-08-20 The ASCII compatible UTF-8 encoding of ISO 10646 and Unicode plain-text files is defined in RFC 2279 and in ISO 10646-1 Annex R.
Using Unicode/UTF-8, you can write in emails and source code things such as Mathematics and Sciences: ∮ E⋅da = Q, n → ∞, ∑ f(i) = ∏ g(i), ∀x∈ℝ: ⌈x⌉ = −⌊−x⌋, α ∧ ¬β = ¬(¬α ∨ β), ℕ ⊆ ℕ₀ ⊂ ℤ ⊂ ℚ ⊂ ℝ ⊂ ℂ, ⊥ < a ≠ b ≡ c ≤ d ≪ ⊤ ⇒ (A ⇔ B), 2H₂ + O₂ ⇌ 2H₂O, R = 4.7 kΩ, ⌀ 200 mm Linguistics and dictionaries: ði ıntəˈnæʃənəl fəˈnɛtık əsoʊsiˈeıʃn Y [ˈʏpsilɔn], Yen [jɛn], Yoga [ˈjoːgɑ] APL: ((V⍳V)=⍳⍴V)/V←,V ⌷←⍳→⍴∆∇⊃‾⍎⍕⌈ Nicer typography in plain text files: ╔══════════════════════════════════════════╗ ║ ║ ║ • ‘single’ and “double” quotes ║ ║ ║ ║ • Curly apostrophes: “We’ve been here” ║ ║ ║ ║ • Latin-1 apostrophe and accents: '´` ║ ║ ║ ║ • ‚deutsche‘ „Anführungszeichen“ ║ ║ ║ ║ • †, ‡, ‰, •, 3–4, —, −5/+5, ™, … ║ ║ ║ ║ • ASCII safety test: 1lI|, 0OD, 8B ║ ║ ╭─────────╮ ║ ║ • the euro symbol: │ 14.95 € │ ║ ║ ╰─────────╯ ║ ╚══════════════════════════════════════════╝ Greek (in Polytonic): The Greek anthem: Σὲ γνωρίζω ἀπὸ τὴν κόψη τοῦ σπαθιοῦ τὴν τρομερή, σὲ γνωρίζω ἀπὸ τὴν ὄψη ποὺ μὲ βία μετράει τὴ γῆ. ᾿Απ᾿ τὰ κόκκαλα βγαλμένη τῶν ῾Ελλήνων τὰ ἱερά καὶ σὰν πρῶτα ἀνδρειωμένη χαῖρε, ὦ χαῖρε, ᾿Ελευθεριά! From a speech of Demosthenes in the 4th century BC: Οὐχὶ ταὐτὰ παρίσταταί μοι γιγνώσκειν, ὦ ἄνδρες ᾿Αθηναῖοι, ὅταν τ᾿ εἰς τὰ πράγματα ἀποβλέψω καὶ ὅταν πρὸς τοὺς λόγους οὓς ἀκούω· τοὺς μὲν γὰρ λόγους περὶ τοῦ τιμωρήσασθαι Φίλιππον ὁρῶ γιγνομένους, τὰ δὲ πράγματ᾿ εἰς τοῦτο προήκοντα, ὥσθ᾿ ὅπως μὴ πεισόμεθ᾿ αὐτοὶ πρότερον κακῶς σκέψασθαι δέον. οὐδέν οὖν ἄλλο μοι δοκοῦσιν οἱ τὰ τοιαῦτα λέγοντες ἢ τὴν ὑπόθεσιν, περὶ ἧς βουλεύεσθαι, οὐχὶ τὴν οὖσαν παριστάντες ὑμῖν ἁμαρτάνειν. ἐγὼ δέ, ὅτι μέν ποτ᾿ ἐξῆν τῇ πόλει καὶ τὰ αὑτῆς ἔχειν ἀσφαλῶς καὶ Φίλιππον τιμωρήσασθαι, καὶ μάλ᾿ ἀκριβῶς οἶδα· ἐπ᾿ ἐμοῦ γάρ, οὐ πάλαι γέγονεν ταῦτ᾿ ἀμφότερα· νῦν μέντοι πέπεισμαι τοῦθ᾿ ἱκανὸν προλαβεῖν ἡμῖν εἶναι τὴν πρώτην, ὅπως τοὺς συμμάχους σώσομεν. 
ἐὰν γὰρ τοῦτο βεβαίως ὑπάρξῃ, τότε καὶ περὶ τοῦ τίνα τιμωρήσεταί τις καὶ ὃν τρόπον ἐξέσται σκοπεῖν· πρὶν δὲ τὴν ἀρχὴν ὀρθῶς ὑποθέσθαι, μάταιον ἡγοῦμαι περὶ τῆς τελευτῆς ὁντινοῦν ποιεῖσθαι λόγον. Δημοσθένους, Γ´ ᾿Ολυνθιακὸς Georgian: From a Unicode conference invitation: გთხოვთ ახლავე გაიაროთ რეგისტრაცია Unicode-ის მეათე საერთაშორისო კონფერენციაზე დასასწრებად, რომელიც გაიმართება 10-12 მარტს, ქ. მაინცში, გერმანიაში. კონფერენცია შეჰკრებს ერთად მსოფლიოს ექსპერტებს ისეთ დარგებში როგორიცაა ინტერნეტი და Unicode-ი, ინტერნაციონალიზაცია და ლოკალიზაცია, Unicode-ის გამოყენება ოპერაციულ სისტემებსა, და გამოყენებით პროგრამებში, შრიფტებში, ტექსტების დამუშავებასა და მრავალენოვან კომპიუტერულ სისტემებში. Russian: From a Unicode conference invitation: Зарегистрируйтесь сейчас на Десятую Международную Конференцию по Unicode, которая состоится 10-12 марта 1997 года в Майнце в Германии. Конференция соберет широкий круг экспертов по вопросам глобального Интернета и Unicode, локализации и интернационализации, воплощению и применению Unicode в различных операционных системах и программных приложениях, шрифтах, верстке и многоязычных компьютерных системах. Thai (UCS Level 2): Excerpt from a poetry on The Romance of The Three Kingdoms (a Chinese classic 'San Gua'): [----------------------------|------------------------] ๏ แผ่นดินฮั่นเสื่อมโทรมแสนสังเวช พระปกเกศกองบู๊กู้ขึ้นใหม่ สิบสองกษัตริย์ก่อนหน้าแลถัดไป สององค์ไซร้โง่เขลาเบาปัญญา ทรงนับถือขันทีเป็นที่พึ่ง บ้านเมืองจึงวิปริตเป็นนักหนา โฮจิ๋นเรียกทัพทั่วหัวเมืองมา หมายจะฆ่ามดชั่วตัวสำคัญ เหมือนขับไสไล่เสือจากเคหา รับหมาป่าเข้ามาเลยอาสัญ ฝ่ายอ้องอุ้นยุแยกให้แตกกัน ใช้สาวนั้นเป็นชนวนชื่นชวนใจ พลันลิฉุยกุยกีกลับก่อเหตุ ช่างอาเพศจริงหนาฟ้าร้องไห้ ต้องรบราฆ่าฟันจนบรรลัย ฤๅหาใครค้ำชูกู้บรรลังก์ ฯ (The above is a two-column text. If combining characters are handled correctly, the lines of the second column should be aligned with the | character above.) 
Ethiopian: Proverbs in the Amharic language: ሰማይ አይታረስ ንጉሥ አይከሰስ። ብላ ካለኝ እንደአባቴ በቆመጠኝ። ጌጥ ያለቤቱ ቁምጥና ነው። ደሀ በሕልሙ ቅቤ ባይጠጣ ንጣት በገደለው። የአፍ ወለምታ በቅቤ አይታሽም። አይጥ በበላ ዳዋ ተመታ። ሲተረጉሙ ይደረግሙ። ቀስ በቀስ፥ ዕንቁላል በእግሩ ይሄዳል። ድር ቢያብር አንበሳ ያስር። ሰው እንደቤቱ እንጅ እንደ ጉረቤቱ አይተዳደርም። እግዜር የከፈተውን ጉሮሮ ሳይዘጋው አይድርም። የጎረቤት ሌባ፥ ቢያዩት ይስቅ ባያዩት ያጠልቅ። ሥራ ከመፍታት ልጄን ላፋታት። ዓባይ ማደሪያ የለው፥ ግንድ ይዞ ይዞራል። የእስላም አገሩ መካ የአሞራ አገሩ ዋርካ። ተንጋሎ ቢተፉ ተመልሶ ባፉ። ወዳጅህ ማር ቢሆን ጨርስህ አትላሰው። እግርህን በፍራሽህ ልክ ዘርጋ። Runes: ᚻᛖ ᚳᚹᚫᚦ ᚦᚫᛏ ᚻᛖ ᛒᚢᛞᛖ ᚩᚾ ᚦᚫᛗ ᛚᚪᚾᛞᛖ ᚾᚩᚱᚦᚹᛖᚪᚱᛞᚢᛗ ᚹᛁᚦ ᚦᚪ ᚹᛖᛥᚫ (Old English, which transcribed into Latin reads 'He cwaeth that he bude thaem lande northweardum with tha Westsae.' and means 'He said that he lived in the northern land near the Western Sea.') Braille: ⡌⠁⠧⠑ ⠼⠁⠒ ⡍⠜⠇⠑⠹⠰⠎ ⡣⠕⠌ ⡍⠜⠇⠑⠹ ⠺⠁⠎ ⠙⠑⠁⠙⠒ ⠞⠕ ⠃⠑⠛⠔ ⠺⠊⠹⠲ ⡹⠻⠑ ⠊⠎ ⠝⠕ ⠙⠳⠃⠞ ⠱⠁⠞⠑⠧⠻ ⠁⠃⠳⠞ ⠹⠁⠞⠲ ⡹⠑ ⠗⠑⠛⠊⠌⠻ ⠕⠋ ⠙⠊⠎ ⠃⠥⠗⠊⠁⠇ ⠺⠁⠎ ⠎⠊⠛⠝⠫ ⠃⠹ ⠹⠑ ⠊⠇⠻⠛⠹⠍⠁⠝⠂ ⠹⠑ ⠊⠇⠻⠅⠂ ⠹⠑ ⠥⠝⠙⠻⠞⠁⠅⠻⠂ ⠁⠝⠙ ⠹⠑ ⠡⠊⠑⠋ ⠍⠳⠗⠝⠻⠲ ⡎⠊⠗⠕⠕⠛⠑ ⠎⠊⠛⠝⠫ ⠊⠞⠲ ⡁⠝⠙ ⡎⠊⠗⠕⠕⠛⠑⠰⠎ ⠝⠁⠍⠑ ⠺⠁⠎ ⠛⠕⠕⠙ ⠥⠏⠕⠝ ⠰⡡⠁⠝⠛⠑⠂ ⠋⠕⠗ ⠁⠝⠹⠹⠔⠛ ⠙⠑ ⠡⠕⠎⠑ ⠞⠕ ⠏⠥⠞ ⠙⠊⠎ ⠙⠁⠝⠙ ⠞⠕⠲ ⡕⠇⠙ ⡍⠜⠇⠑⠹ ⠺⠁⠎ ⠁⠎ ⠙⠑⠁⠙ ⠁⠎ ⠁ ⠙⠕⠕⠗⠤⠝⠁⠊⠇⠲ ⡍⠔⠙⠖ ⡊ ⠙⠕⠝⠰⠞ ⠍⠑⠁⠝ ⠞⠕ ⠎⠁⠹ ⠹⠁⠞ ⡊ ⠅⠝⠪⠂ ⠕⠋ ⠍⠹ ⠪⠝ ⠅⠝⠪⠇⠫⠛⠑⠂ ⠱⠁⠞ ⠹⠻⠑ ⠊⠎ ⠏⠜⠞⠊⠊⠥⠇⠜⠇⠹ ⠙⠑⠁⠙ ⠁⠃⠳⠞ ⠁ ⠙⠕⠕⠗⠤⠝⠁⠊⠇⠲ ⡊ ⠍⠊⠣⠞ ⠙⠁⠧⠑ ⠃⠑⠲ ⠔⠊⠇⠔⠫⠂ ⠍⠹⠎⠑⠇⠋⠂ ⠞⠕ ⠗⠑⠛⠜⠙ ⠁ ⠊⠕⠋⠋⠔⠤⠝⠁⠊⠇ ⠁⠎ ⠹⠑ ⠙⠑⠁⠙⠑⠌ ⠏⠊⠑⠊⠑ ⠕⠋ ⠊⠗⠕⠝⠍⠕⠝⠛⠻⠹ ⠔ ⠹⠑ ⠞⠗⠁⠙⠑⠲ ⡃⠥⠞ ⠹⠑ ⠺⠊⠎⠙⠕⠍ ⠕⠋ ⠳⠗ ⠁⠝⠊⠑⠌⠕⠗⠎ ⠊⠎ ⠔ ⠹⠑ ⠎⠊⠍⠊⠇⠑⠆ ⠁⠝⠙ ⠍⠹ ⠥⠝⠙⠁⠇⠇⠪⠫ ⠙⠁⠝⠙⠎ ⠩⠁⠇⠇ ⠝⠕⠞ ⠙⠊⠌⠥⠗⠃ ⠊⠞⠂ ⠕⠗ ⠹⠑ ⡊⠳⠝⠞⠗⠹⠰⠎ ⠙⠕⠝⠑ ⠋⠕⠗⠲ ⡹⠳ ⠺⠊⠇⠇ ⠹⠻⠑⠋⠕⠗⠑ ⠏⠻⠍⠊⠞ ⠍⠑ ⠞⠕ ⠗⠑⠏⠑⠁⠞⠂ ⠑⠍⠏⠙⠁⠞⠊⠊⠁⠇⠇⠹⠂ ⠹⠁⠞ ⡍⠜⠇⠑⠹ ⠺⠁⠎ ⠁⠎ ⠙⠑⠁⠙ ⠁⠎ ⠁ ⠙⠕⠕⠗⠤⠝⠁⠊⠇⠲ (The first couple of paragraphs of "A Christmas Carol" by Dickens) Compact font selection example text: ABCDEFGHIJKLMNOPQRSTUVWXYZ /0123456789 abcdefghijklmnopqrstuvwxyz £©µÀÆÖÞßéöÿ –—‘“”„†•…‰™œŠŸž€ ΑΒΓΔΩαβγδω АБВГДабвгд ∀∂∈ℝ∧∪≡∞ ↑↗↨↻⇣ ┐┼╔╘░►☺♀ fi�⑀₂ἠḂӥẄɐː⍎אԱა Greetings in various languages: Hello world, Καλημέρα κόσμε, コンニチハ Box drawing alignment tests: █ ▉ ╔══╦══╗ ┌──┬──┐ ╭──┬──╮ ╭──┬──╮ ┏━━┳━━┓ ┎┒┏┑ ╷ ╻ ┏┯┓ ┌┰┐ ▊ ╱╲╱╲╳╳╳ ║┌─╨─┐║ │╔═╧═╗│ │╒═╪═╕│ │╓─╁─╖│ ┃┌─╂─┐┃ ┗╃╄┙ ╶┼╴╺╋╸┠┼┨ ┝╋┥ ▋ ╲╱╲╱╳╳╳ ║│╲ ╱│║ │║ ║│ ││ │ ││ │║ ┃ ║│ ┃│ ╿ │┃ ┍╅╆┓ ╵ ╹ ┗┷┛ └┸┘ ▌ ╱╲╱╲╳╳╳ ╠╡ ╳ ╞╣ ├╢ ╟┤ 
├┼─┼─┼┤ ├╫─╂─╫┤ ┣┿╾┼╼┿┫ ┕┛┖┚ ┌┄┄┐ ╎ ┏┅┅┓ ┋
▍ ╲╱╲╱╳╳╳ ║│╱ ╲│║ │║ ║│ ││ │ ││ │║ ┃ ║│ ┃│ ╽ │┃ ░░▒▒▓▓██ ┊ ┆ ╎ ╏ ┇ ┋
▎ ║└─╥─┘║ │╚═╤═╝│ │╘═╪═╛│ │╙─╀─╜│ ┃└─╂─┘┃ ░░▒▒▓▓██ ┊ ┆ ╎ ╏ ┇ ┋
▏ ╚══╩══╝ └──┴──┘ ╰──┴──╯ ╰──┴──╯ ┗━━┻━━┛ └╌╌┘ ╎ ┗╍╍┛ ┋

▁▂▃▄▅▆▇█
"""


class FSTestCases(object):
    """Basic FS tests.
    """

    def make_fs(self):
        """Return an FS instance.
        """
        raise NotImplementedError("implement me")

    def destroy_fs(self, fs):
        """Destroy a FS instance.

        Arguments:
            fs (FS): A filesystem instance previously opened
                by `~fs.test.FSTestCases.make_fs`.

        """
        fs.close()

    def setUp(self):
        self.fs = self.make_fs()

    def tearDown(self):
        self.destroy_fs(self.fs)
        del self.fs

    def assert_exists(self, path):
        """Assert a path exists.

        Arguments:
            path (str): A path on the filesystem.

        """
        self.assertTrue(self.fs.exists(path))

    def assert_not_exists(self, path):
        """Assert a path does not exist.

        Arguments:
            path (str): A path on the filesystem.

        """
        self.assertFalse(self.fs.exists(path))

    def assert_isfile(self, path):
        """Assert a path is a file.

        Arguments:
            path (str): A path on the filesystem.

        """
        self.assertTrue(self.fs.isfile(path))

    def assert_isdir(self, path):
        """Assert a path is a directory.

        Arguments:
            path (str): A path on the filesystem.

        """
        self.assertTrue(self.fs.isdir(path))

    def assert_bytes(self, path, contents):
        """Assert a file contains the given bytes.

        Arguments:
            path (str): A path on the filesystem.
            contents (bytes): Bytes to compare.

        """
        assert isinstance(contents, bytes)
        data = self.fs.readbytes(path)
        self.assertEqual(data, contents)
        self.assertIsInstance(data, bytes)

    def assert_text(self, path, contents):
        """Assert a file contains the given text.

        Arguments:
            path (str): A path on the filesystem.
            contents (str): Text to compare.

        """
        assert isinstance(contents, text_type)
        with self.fs.open(path, "rt") as f:
            data = f.read()
        self.assertEqual(data, contents)
        self.assertIsInstance(data, text_type)

    def test_root_dir(self):
        with self.assertRaises(errors.FileExpected):
            self.fs.open("/")
        with self.assertRaises(errors.FileExpected):
            self.fs.openbin("/")

    def test_appendbytes(self):
        with self.assertRaises(TypeError):
            self.fs.appendbytes("foo", "bar")
        self.fs.appendbytes("foo", b"bar")
        self.assert_bytes("foo", b"bar")
        self.fs.appendbytes("foo", b"baz")
        self.assert_bytes("foo", b"barbaz")

    def test_appendtext(self):
        with self.assertRaises(TypeError):
            self.fs.appendtext("foo", b"bar")
        self.fs.appendtext("foo", "bar")
        self.assert_text("foo", "bar")
        self.fs.appendtext("foo", "baz")
        self.assert_text("foo", "barbaz")

    def test_basic(self):
        # Check str and repr don't break
        repr(self.fs)
        self.assertIsInstance(six.text_type(self.fs), six.text_type)

    def test_getmeta(self):
        # Get the meta dict
        meta = self.fs.getmeta()

        # Check default namespace
        self.assertEqual(meta, self.fs.getmeta(namespace="standard"))

        # Must be a dict
        self.assertTrue(isinstance(meta, dict))

        no_meta = self.fs.getmeta("__nosuchnamespace__")
        self.assertIsInstance(no_meta, dict)
        self.assertFalse(no_meta)

    def test_isfile(self):
        self.assertFalse(self.fs.isfile("foo.txt"))
        self.fs.create("foo.txt")
        self.assertTrue(self.fs.isfile("foo.txt"))
        self.fs.makedir("bar")
        self.assertFalse(self.fs.isfile("bar"))

    def test_isdir(self):
        self.assertFalse(self.fs.isdir("foo"))
        self.fs.create("bar")
        self.fs.makedir("foo")
        self.assertTrue(self.fs.isdir("foo"))
        self.assertFalse(self.fs.isdir("bar"))

    def test_islink(self):
        self.fs.touch("foo")
        self.assertFalse(self.fs.islink("foo"))
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.islink("bar")

    def test_getsize(self):
        self.fs.writebytes("empty", b"")
        self.fs.writebytes("one", b"a")
        self.fs.writebytes("onethousand", ("b" * 1000).encode("ascii"))
        self.assertEqual(self.fs.getsize("empty"), 0)
        self.assertEqual(self.fs.getsize("one"), 1)
        self.assertEqual(self.fs.getsize("onethousand"), 1000)
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.getsize("doesnotexist")

    def test_getsyspath(self):
        self.fs.create("foo")
        try:
            syspath = self.fs.getsyspath("foo")
        except errors.NoSysPath:
            self.assertFalse(self.fs.hassyspath("foo"))
        else:
            self.assertIsInstance(syspath, text_type)
            self.assertIsInstance(self.fs.getospath("foo"), bytes)
            self.assertTrue(self.fs.hassyspath("foo"))
        # Should not throw an error
        self.fs.hassyspath("a/b/c/foo/bar")

    def test_geturl(self):
        self.fs.create("foo")
        try:
            self.fs.geturl("foo")
        except errors.NoURL:
            self.assertFalse(self.fs.hasurl("foo"))
        else:
            self.assertTrue(self.fs.hasurl("foo"))
        # Should not throw an error
        self.fs.hasurl("a/b/c/foo/bar")

    def test_geturl_purpose(self):
        """Check an unknown purpose raises a NoURL error.
        """
        self.fs.create("foo")
        with self.assertRaises(errors.NoURL):
            self.fs.geturl("foo", purpose="__nosuchpurpose__")

    def test_validatepath(self):
        """Check validatepath returns an absolute path.
        """
        path = self.fs.validatepath("foo")
        self.assertEqual(path, "/foo")

    def test_invalid_chars(self):
        # Test invalid path method.
        with self.assertRaises(errors.InvalidCharsInPath):
            self.fs.open("invalid\0file", "wb")

        with self.assertRaises(errors.InvalidCharsInPath):
            self.fs.validatepath("invalid\0file")

    def test_getinfo(self):
        # Test special case of root directory
        # Root directory has a name of ''
        root_info = self.fs.getinfo("/")
        self.assertEqual(root_info.name, "")
        self.assertTrue(root_info.is_dir)

        # Make a file of known size
        self.fs.writebytes("foo", b"bar")
        self.fs.makedir("dir")

        # Check basic namespace
        info = self.fs.getinfo("foo").raw
        self.assertIsInstance(info["basic"]["name"], text_type)
        self.assertEqual(info["basic"]["name"], "foo")
        self.assertFalse(info["basic"]["is_dir"])

        # Check basic namespace dir
        info = self.fs.getinfo("dir").raw
        self.assertEqual(info["basic"]["name"], "dir")
        self.assertTrue(info["basic"]["is_dir"])

        # Get the info
        info = self.fs.getinfo("foo", namespaces=["details"]).raw
        self.assertIsInstance(info, dict)
        self.assertEqual(info["details"]["size"], 3)
        self.assertEqual(info["details"]["type"], int(ResourceType.file))

        # Test getdetails
        self.assertEqual(info, self.fs.getdetails("foo").raw)

        # Raw info should be serializable
        try:
            json.dumps(info)
        except (TypeError, ValueError):
            raise AssertionError("info should be JSON serializable")

        # Non-existent namespace is not an error
        no_info = self.fs.getinfo("foo", "__nosuchnamespace__").raw
        self.assertIsInstance(no_info, dict)
        self.assertEqual(no_info["basic"], {"name": "foo", "is_dir": False})

        # Check a number of standard namespaces
        # FS objects may not support all these, but we can at least
        # invoke the code
        info = self.fs.getinfo("foo", namespaces=["access", "stat", "details"])

        # Check that if the details namespace is present, times are
        # of valid types.
        if "details" in info.namespaces:
            details = info.raw["details"]
            self.assertIsInstance(details.get("accessed"), (type(None), int, float))
            self.assertIsInstance(details.get("modified"), (type(None), int, float))
            self.assertIsInstance(details.get("created"), (type(None), int, float))
            self.assertIsInstance(
                details.get("metadata_changed"), (type(None), int, float)
            )

    def test_exists(self):
        # Test exists method.
        # Check root directory always exists
        self.assertTrue(self.fs.exists("/"))
        self.assertTrue(self.fs.exists(""))

        # Check files don't exist
        self.assertFalse(self.fs.exists("foo"))
        self.assertFalse(self.fs.exists("foo/bar"))
        self.assertFalse(self.fs.exists("foo/bar/baz"))
        self.assertFalse(self.fs.exists("egg"))

        # make some files and directories
        self.fs.makedirs("foo/bar")
        self.fs.writebytes("foo/bar/baz", b"test")

        # Check files exists
        self.assertTrue(self.fs.exists("foo"))
        self.assertTrue(self.fs.exists("foo/bar"))
        self.assertTrue(self.fs.exists("foo/bar/baz"))
        self.assertFalse(self.fs.exists("egg"))

        self.assert_exists("foo")
        self.assert_exists("foo/bar")
        self.assert_exists("foo/bar/baz")
        self.assert_not_exists("egg")

        # Delete a file
        self.fs.remove("foo/bar/baz")

        # Check it no longer exists
        self.assert_not_exists("foo/bar/baz")
        self.assertFalse(self.fs.exists("foo/bar/baz"))
        self.assert_not_exists("foo/bar/baz")

        # Check root directory always exists
        self.assertTrue(self.fs.exists("/"))
        self.assertTrue(self.fs.exists(""))

    def test_listdir(self):
        # Check listing directory that doesn't exist
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.listdir("foobar")

        # Check aliases for root
        self.assertEqual(self.fs.listdir("/"), [])
        self.assertEqual(self.fs.listdir("."), [])
        self.assertEqual(self.fs.listdir("./"), [])

        # Make a few objects
        self.fs.writebytes("foo", b"egg")
        self.fs.writebytes("bar", b"egg")
        self.fs.makedir("baz")

        # This should not be listed
        self.fs.writebytes("baz/egg", b"egg")

        # Check list works
        six.assertCountEqual(self, self.fs.listdir("/"), ["foo", "bar", "baz"])
        six.assertCountEqual(self, self.fs.listdir("."), ["foo", "bar", "baz"])
        six.assertCountEqual(self, self.fs.listdir("./"), ["foo", "bar", "baz"])

        # Check paths are unicode strings
        for name in self.fs.listdir("/"):
            self.assertIsInstance(name, text_type)

        # Create a subdirectory
        self.fs.makedir("dir")

        # Should start empty
        self.assertEqual(self.fs.listdir("/dir"), [])

        # Write some files
        self.fs.writebytes("dir/foofoo", b"egg")
        self.fs.writebytes("dir/barbar", b"egg")

        # Check listing subdirectory
        six.assertCountEqual(self, self.fs.listdir("dir"), ["foofoo", "barbar"])

        # Make sure they are unicode strings
        for name in self.fs.listdir("dir"):
            self.assertIsInstance(name, text_type)

        self.fs.create("notadir")
        with self.assertRaises(errors.DirectoryExpected):
            self.fs.listdir("notadir")

    def test_move(self):
        # Make a file
        self.fs.writebytes("foo", b"egg")
        self.assert_isfile("foo")

        # Move it
        self.fs.move("foo", "bar")

        # Check it has gone from original location
        self.assert_not_exists("foo")

        # Check it exists in the new location, and contents match
        self.assert_exists("bar")
        self.assert_bytes("bar", b"egg")

        # Check moving to existing file fails
        self.fs.writebytes("foo2", b"eggegg")
        with self.assertRaises(errors.DestinationExists):
            self.fs.move("foo2", "bar")

        # Check move with overwrite=True
        self.fs.move("foo2", "bar", overwrite=True)
        self.assert_not_exists("foo2")

        # Check moving to a non-existent directory
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.move("bar", "egg/bar")

        # Check moving a non-existent source
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.move("egg", "spam")

        # Check moving between different directories
        self.fs.makedir("baz")
        self.fs.writebytes("baz/bazbaz", b"bazbaz")
        self.fs.makedir("baz2")
        self.fs.move("baz/bazbaz", "baz2/bazbaz")
        self.assert_not_exists("baz/bazbaz")
        self.assert_bytes("baz2/bazbaz", b"bazbaz")

        # Check moving a directory raises an error
        self.assert_isdir("baz2")
        self.assert_not_exists("yolk")
        with self.assertRaises(errors.FileExpected):
            self.fs.move("baz2", "yolk")

    def test_makedir(self):
        # Check edge case of root
        with self.assertRaises(errors.DirectoryExists):
            self.fs.makedir("/")

        # Making root is a null op with recreate
        slash_fs = self.fs.makedir("/", recreate=True)
        self.assertIsInstance(slash_fs, SubFS)
        self.assertEqual(self.fs.listdir("/"), [])

        self.assert_not_exists("foo")
        self.fs.makedir("foo")
        self.assert_isdir("foo")
        self.assertEqual(self.fs.gettype("foo"), ResourceType.directory)
        self.fs.writebytes("foo/bar.txt", b"egg")
        self.assert_bytes("foo/bar.txt", b"egg")

        # Directory exists
        with self.assertRaises(errors.DirectoryExists):
            self.fs.makedir("foo")

        # Parent directory doesn't exist
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.makedir("/foo/bar/baz")

        self.fs.makedir("/foo/bar")
        self.fs.makedir("/foo/bar/baz")

        with self.assertRaises(errors.DirectoryExists):
            self.fs.makedir("foo/bar/baz")

        with self.assertRaises(errors.DirectoryExists):
            self.fs.makedir("foo/bar.txt")

    def test_makedirs(self):
        self.assertFalse(self.fs.exists("foo"))
        self.fs.makedirs("foo")
        self.assertEqual(self.fs.gettype("foo"), ResourceType.directory)

        self.fs.makedirs("foo/bar/baz")
        self.assertTrue(self.fs.isdir("foo/bar"))
        self.assertTrue(self.fs.isdir("foo/bar/baz"))

        with self.assertRaises(errors.DirectoryExists):
            self.fs.makedirs("foo/bar/baz")

        self.fs.makedirs("foo/bar/baz", recreate=True)

        self.fs.writebytes("foo.bin", b"test")
        with self.assertRaises(errors.DirectoryExpected):
            self.fs.makedirs("foo.bin/bar")

        with self.assertRaises(errors.DirectoryExpected):
            self.fs.makedirs("foo.bin/bar/baz/egg")

    def test_repeat_dir(self):
        # Catches bug with directories containing repeated names,
        # discovered in s3fs
        self.fs.makedirs("foo/foo/foo")
        self.assertEqual(self.fs.listdir(""), ["foo"])
        self.assertEqual(self.fs.listdir("foo"), ["foo"])
        self.assertEqual(self.fs.listdir("foo/foo"), ["foo"])
        self.assertEqual(self.fs.listdir("foo/foo/foo"), [])
        scan = list(self.fs.scandir("foo"))
        self.assertEqual(len(scan), 1)
        self.assertEqual(scan[0].name, "foo")

    def test_open(self):
        # Open a file that doesn't exist
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.open("doesnotexist", "r")

        self.fs.makedir("foo")

        # Create a new text file
        text = "Hello, World"

        with self.fs.open("foo/hello", "wt") as f:
            repr(f)
            self.assertIsInstance(f, io.IOBase)
            self.assertTrue(f.writable())
            self.assertFalse(f.readable())
            self.assertFalse(f.closed)
            f.write(text)
        self.assertTrue(f.closed)

        # Read it back
        with self.fs.open("foo/hello", "rt") as f:
            self.assertIsInstance(f, io.IOBase)
            self.assertTrue(f.readable())
            self.assertFalse(f.writable())
            self.assertFalse(f.closed)
            hello = f.read()
        self.assertTrue(f.closed)
        self.assertEqual(hello, text)
        self.assert_text("foo/hello", text)

        # Test overwrite
        text = "Goodbye, World"
        with self.fs.open("foo/hello", "wt") as f:
            f.write(text)
        self.assert_text("foo/hello", text)

        # Open from missing dir
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.open("/foo/bar/test.txt")

        # Test fileno returns a file number, if supported by the file.
        with self.fs.open("foo/hello") as f:
            try:
                fn = f.fileno()
            except io.UnsupportedOperation:
                pass
            else:
                self.assertEqual(os.read(fn, 7), b"Goodbye")

        # Test text files are proper iterators over themselves
        lines = os.linesep.join(["Line 1", "Line 2", "Line 3"])
        self.fs.writetext("iter.txt", lines)
        with self.fs.open("iter.txt") as f:
            for actual, expected in zip(f, lines.splitlines(1)):
                self.assertEqual(actual, expected)

    def test_openbin_rw(self):
        # Open a file that doesn't exist
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.openbin("doesnotexist", "r")

        self.fs.makedir("foo")

        # Create a new text file
        text = b"Hello, World\n"

        with self.fs.openbin("foo/hello", "w") as f:
            repr(f)
            self.assertIn("b", f.mode)
            self.assertIsInstance(f, io.IOBase)
            self.assertTrue(f.writable())
            self.assertFalse(f.readable())
            self.assertEqual(len(text), f.write(text))
            self.assertFalse(f.closed)
        self.assertTrue(f.closed)

        with self.assertRaises(errors.FileExists):
            with self.fs.openbin("foo/hello", "x") as f:
                pass

        # Read it back
        with self.fs.openbin("foo/hello", "r") as f:
            self.assertIn("b", f.mode)
            self.assertIsInstance(f, io.IOBase)
            self.assertTrue(f.readable())
            self.assertFalse(f.writable())
            hello = f.read()
            self.assertFalse(f.closed)
        self.assertTrue(f.closed)
        self.assertEqual(hello, text)
        self.assert_bytes("foo/hello", text)

        # Test overwrite
        text = b"Goodbye, World"
        with self.fs.openbin("foo/hello", "w") as f:
            self.assertEqual(len(text), f.write(text))
        self.assert_bytes("foo/hello", text)

        # Test FileExpected raised
        with self.assertRaises(errors.FileExpected):
            self.fs.openbin("foo")  # directory

        # Open from missing dir
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.openbin("/foo/bar/test.txt")

        # Test fileno returns a file number, if supported by the file.
        with self.fs.openbin("foo/hello") as f:
            try:
                fn = f.fileno()
            except io.UnsupportedOperation:
                pass
            else:
                self.assertEqual(os.read(fn, 7), b"Goodbye")

        # Test binary files are proper iterators over themselves
        lines = b"\n".join([b"Line 1", b"Line 2", b"Line 3"])
        self.fs.writebytes("iter.bin", lines)
        with self.fs.openbin("iter.bin") as f:
            for actual, expected in zip(f, lines.splitlines(1)):
                self.assertEqual(actual, expected)

    def test_open_files(self):
        # Test file-like objects work as expected.

        with self.fs.open("text", "w") as f:
            repr(f)
            text_type(f)
            self.assertIsInstance(f, io.IOBase)
            self.assertTrue(f.writable())
            self.assertFalse(f.readable())
            self.assertFalse(f.closed)
            self.assertEqual(f.tell(), 0)
            f.write("Hello\nWorld\n")
            self.assertEqual(f.tell(), 12)
            f.writelines(["foo\n", "bar\n", "baz\n"])
            with self.assertRaises(IOError):
                f.read(1)
        self.assertTrue(f.closed)

        with self.fs.open("bin", "wb") as f:
            with self.assertRaises(IOError):
                f.read(1)

        with self.fs.open("text", "r") as f:
            repr(f)
            text_type(f)
            self.assertIsInstance(f, io.IOBase)
            self.assertFalse(f.writable())
            self.assertTrue(f.readable())
            self.assertFalse(f.closed)
            self.assertEqual(
                f.readlines(), ["Hello\n", "World\n", "foo\n", "bar\n", "baz\n"]
            )
            with self.assertRaises(IOError):
                f.write("no")
        self.assertTrue(f.closed)

        with self.fs.open("text", "rb") as f:
            self.assertIsInstance(f, io.IOBase)
            self.assertFalse(f.writable())
            self.assertTrue(f.readable())
            self.assertFalse(f.closed)
            self.assertEqual(f.readlines(8), [b"Hello\n", b"World\n"])
            self.assertEqual(f.tell(), 12)
            buffer = bytearray(4)
            self.assertEqual(f.readinto(buffer), 4)
            self.assertEqual(f.tell(), 16)
            self.assertEqual(buffer, b"foo\n")
            with self.assertRaises(IOError):
                f.write(b"no")
        self.assertTrue(f.closed)

        with self.fs.open("text", "r") as f:
            self.assertEqual(list(f), ["Hello\n", "World\n", "foo\n", "bar\n", "baz\n"])
            self.assertFalse(f.closed)
        self.assertTrue(f.closed)

        iter_lines = iter(self.fs.open("text"))
        self.assertEqual(next(iter_lines), "Hello\n")

        with self.fs.open("unicode", "w") as f:
            self.assertEqual(12, f.write("Héllo\nWörld\n"))

        with self.fs.open("text", "rb") as f:
            self.assertIsInstance(f, io.IOBase)
            self.assertFalse(f.writable())
            self.assertTrue(f.readable())
            self.assertTrue(f.seekable())
            self.assertFalse(f.closed)
            self.assertEqual(f.read(1), b"H")
            self.assertEqual(3, f.seek(3, Seek.set))
            self.assertEqual(f.read(1), b"l")
            self.assertEqual(6, f.seek(2, Seek.current))
            self.assertEqual(f.read(1), b"W")
            self.assertEqual(22, f.seek(-2, Seek.end))
            self.assertEqual(f.read(1), b"z")
            with self.assertRaises(ValueError):
                f.seek(10, 77)
        self.assertTrue(f.closed)

        with self.fs.open("text", "r+b") as f:
            self.assertIsInstance(f, io.IOBase)
            self.assertTrue(f.readable())
            self.assertTrue(f.writable())
            self.assertTrue(f.seekable())
            self.assertFalse(f.closed)
            self.assertEqual(5, f.seek(5))
            self.assertEqual(5, f.truncate())
            self.assertEqual(0, f.seek(0))
            self.assertEqual(f.read(), b"Hello")
            self.assertEqual(10, f.truncate(10))
            self.assertEqual(5, f.tell())
            self.assertEqual(0, f.seek(0))
            print(repr(self.fs))
            print(repr(f))
            self.assertEqual(f.read(), b"Hello\0\0\0\0\0")
            self.assertEqual(4, f.seek(4))
            f.write(b"O")
            self.assertEqual(4, f.seek(4))
            self.assertEqual(f.read(1), b"O")
        self.assertTrue(f.closed)

    def test_openbin(self):
        # Write a binary file
        with self.fs.openbin("file.bin", "wb") as write_file:
            repr(write_file)
            text_type(write_file)
            self.assertIn("b", write_file.mode)
            self.assertIsInstance(write_file, io.IOBase)
            self.assertTrue(write_file.writable())
            self.assertFalse(write_file.readable())
            self.assertFalse(write_file.closed)
            self.assertEqual(3, write_file.write(b"\0\1\2"))
        self.assertTrue(write_file.closed)

        # Read a binary file
        with self.fs.openbin("file.bin", "rb") as read_file:
            repr(write_file)
            text_type(write_file)
            self.assertIn("b", read_file.mode)
            self.assertIsInstance(read_file, io.IOBase)
            self.assertTrue(read_file.readable())
            self.assertFalse(read_file.writable())
            self.assertFalse(read_file.closed)
            data = read_file.read()
            self.assertEqual(data, b"\0\1\2")
        self.assertTrue(read_file.closed)

        # Check disallow text mode
        with self.assertRaises(ValueError):
            with self.fs.openbin("file.bin", "rt") as read_file:
                pass

        # Check errors
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.openbin("foo.bin")

        # Open from missing dir
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.openbin("/foo/bar/test.txt")

        self.fs.makedir("foo")

        # Attempt to open a directory
        with self.assertRaises(errors.FileExpected):
            self.fs.openbin("/foo")

        # Attempt to write to a directory
        with self.assertRaises(errors.FileExpected):
            self.fs.openbin("/foo", "w")

        # Opening a file in a directory which doesn't exist
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.openbin("/egg/bar")

        # Opening a file in a directory which doesn't exist
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.openbin("/egg/bar", "w")

        # Opening with an invalid mode
        with self.assertRaises(ValueError):
            self.fs.openbin("foo.bin", "h")

    def test_open_exclusive(self):
        with self.fs.open("test_open_exclusive", "x") as f:
            f.write("bananas")

        with self.assertRaises(errors.FileExists):
            self.fs.open("test_open_exclusive", "x")

    def test_openbin_exclusive(self):
        with self.fs.openbin("test_openbin_exclusive", "x") as f:
            f.write(b"bananas")

        with self.assertRaises(errors.FileExists):
            self.fs.openbin("test_openbin_exclusive", "x")

    def test_opendir(self):
        # Make a simple directory structure
        self.fs.makedir("foo")
        self.fs.writebytes("foo/bar", b"barbar")
        self.fs.writebytes("foo/egg", b"eggegg")

        # Open a sub directory
        with self.fs.opendir("foo") as foo_fs:
            repr(foo_fs)
            text_type(foo_fs)
            six.assertCountEqual(self, foo_fs.listdir("/"), ["bar", "egg"])
            self.assertTrue(foo_fs.isfile("bar"))
            self.assertTrue(foo_fs.isfile("egg"))
            self.assertEqual(foo_fs.readbytes("bar"), b"barbar")
            self.assertEqual(foo_fs.readbytes("egg"), b"eggegg")

        self.assertFalse(self.fs.isclosed())

        # Attempt to open a non-existent directory
        with self.assertRaises(errors.ResourceNotFound):
            self.fs.opendir("egg")

        # Check error when doing opendir on a non dir
        with self.assertRaises(errors.DirectoryExpected):
            self.fs.opendir("foo/egg")

        # These should work, and will essentially return a 'clone' of sorts
        self.fs.opendir("")
        self.fs.opendir("/")

        # Check ClosingSubFS closes 'parent'
        with self.fs.opendir("foo", factory=ClosingSubFS) as foo_fs:
            six.assertCountEqual(self, foo_fs.listdir("/"), ["bar", "egg"])
            self.assertTrue(foo_fs.isfile("bar"))
            self.assertTrue(foo_fs.isfile("egg"))
            self.assertEqual(foo_fs.readbytes("bar"), b"barbar")
            self.assertEqual(foo_fs.readbytes("egg"), b"eggegg")

        self.assertTrue(self.fs.isclosed())

    def test_remove(self):
        self.fs.writebytes("foo1", b"test1")
        self.fs.writebytes("foo2", b"test2")
        self.fs.writebytes("foo3", b"test3")

        self.assert_isfile("foo1")
        self.assert_isfile("foo2")
        self.assert_isfile("foo3")

        self.fs.remove("foo2")

        self.assert_isfile("foo1")
        self.assert_not_exists("foo2")
        self.assert_isfile("foo3")

        with self.assertRaises(errors.ResourceNotFound):
            self.fs.remove("bar")

        self.fs.makedir("dir")
        with self.assertRaises(errors.FileExpected):
            self.fs.remove("dir")

        self.fs.makedirs("foo/bar/baz/")
        error_msg = "resource 'foo/bar/egg/test.txt' not found"
        assertRaisesRegex = getattr(self, "assertRaisesRegex", self.assertRaisesRegexp)
        with assertRaisesRegex(errors.ResourceNotFound, error_msg):
            self.fs.remove("foo/bar/egg/test.txt")

    def test_removedir(self):
        # Test removing root
        with self.assertRaises(errors.RemoveRootError):
            self.fs.removedir("/")

        self.fs.makedirs("foo/bar/baz")
        self.assertTrue(self.fs.exists("foo/bar/baz"))
        self.fs.removedir("foo/bar/baz")
        self.assertFalse(self.fs.exists("foo/bar/baz"))
        self.assertTrue(self.fs.isdir("foo/bar"))

        with self.assertRaises(errors.ResourceNotFound):
            self.fs.removedir("nodir")

        # Test force removal
        self.fs.makedirs("foo/bar/baz")
        self.fs.writebytes("foo/egg", b"test")

        with self.assertRaises(errors.DirectoryExpected):
            self.fs.removedir("foo/egg")
with self.assertRaises(errors.DirectoryNotEmpty): self.fs.removedir("foo/bar") def test_removetree(self): self.fs.makedirs("foo/bar/baz") self.fs.makedirs("foo/egg") self.fs.makedirs("foo/a/b/c/d/e") self.fs.create("foo/egg.txt") self.fs.create("foo/bar/egg.bin") self.fs.create("foo/bar/baz/egg.txt") self.fs.create("foo/a/b/c/1.txt") self.fs.create("foo/a/b/c/2.txt") self.fs.create("foo/a/b/c/3.txt") self.assert_exists("foo/egg.txt") self.assert_exists("foo/bar/egg.bin") self.fs.removetree("foo") self.assert_not_exists("foo") def test_setinfo(self): self.fs.create("birthday.txt") now = math.floor(time.time()) change_info = {"details": {"accessed": now + 60, "modified": now + 60 * 60}} self.fs.setinfo("birthday.txt", change_info) new_info = self.fs.getinfo("birthday.txt", namespaces=["details"]).raw if "accessed" in new_info.get("_write", []): self.assertEqual(new_info["details"]["accessed"], now + 60) if "modified" in new_info.get("_write", []): self.assertEqual(new_info["details"]["modified"], now + 60 * 60) with self.assertRaises(errors.ResourceNotFound): self.fs.setinfo("nothing", {}) def test_settimes(self): self.fs.create("birthday.txt") self.fs.settimes("birthday.txt", accessed=datetime(2016, 7, 5)) info = self.fs.getinfo("birthday.txt", namespaces=["details"]) writeable = info.get("details", "_write", []) if "accessed" in writeable: self.assertEqual(info.accessed, datetime(2016, 7, 5, tzinfo=pytz.UTC)) if "modified" in writeable: self.assertEqual(info.modified, datetime(2016, 7, 5, tzinfo=pytz.UTC)) def test_touch(self): self.fs.touch("new.txt") self.assert_isfile("new.txt") self.fs.settimes("new.txt", datetime(2016, 7, 5)) info = self.fs.getinfo("new.txt", namespaces=["details"]) if info.is_writeable("details", "accessed"): self.assertEqual(info.accessed, datetime(2016, 7, 5, tzinfo=pytz.UTC)) now = time.time() self.fs.touch("new.txt") accessed = self.fs.getinfo("new.txt", namespaces=["details"]).raw[ "details" ]["accessed"] self.assertTrue(accessed - now < 
5) def test_close(self): self.assertFalse(self.fs.isclosed()) self.fs.close() self.assertTrue(self.fs.isclosed()) # Check second close call is a no-op self.fs.close() self.assertTrue(self.fs.isclosed()) # Check further operations raise a FilesystemClosed exception with self.assertRaises(errors.FilesystemClosed): self.fs.openbin("test.bin") def test_copy(self): # Test copy to new path self.fs.writebytes("foo", b"test") self.fs.copy("foo", "bar") self.assert_bytes("bar", b"test") # Test copy over existing path self.fs.writebytes("baz", b"truncateme") self.fs.copy("foo", "baz", overwrite=True) self.assert_bytes("foo", b"test") # Test copying a file to a destination that exists with self.assertRaises(errors.DestinationExists): self.fs.copy("baz", "foo") # Test copying to a directory that doesn't exist with self.assertRaises(errors.ResourceNotFound): self.fs.copy("baz", "a/b/c/baz") # Test copying a source that doesn't exist with self.assertRaises(errors.ResourceNotFound): self.fs.copy("egg", "spam") # Test copying a directory self.fs.makedir("dir") with self.assertRaises(errors.FileExpected): self.fs.copy("dir", "folder") def _test_upload(self, workers): """Test fs.copy with varying number of worker threads.""" data1 = b"foo" * 256 * 1024 data2 = b"bar" * 2 * 256 * 1024 data3 = b"baz" * 3 * 256 * 1024 data4 = b"egg" * 7 * 256 * 1024 with open_fs("temp://") as src_fs: src_fs.writebytes("foo", data1) src_fs.writebytes("bar", data2) src_fs.makedir("dir1").writebytes("baz", data3) src_fs.makedirs("dir2/dir3").writebytes("egg", data4) dst_fs = self.fs fs.copy.copy_fs(src_fs, dst_fs, workers=workers) self.assertEqual(dst_fs.readbytes("foo"), data1) self.assertEqual(dst_fs.readbytes("bar"), data2) self.assertEqual(dst_fs.readbytes("dir1/baz"), data3) self.assertEqual(dst_fs.readbytes("dir2/dir3/egg"), data4) def test_upload_0(self): self._test_upload(0) def test_upload_1(self): self._test_upload(1) def test_upload_2(self): self._test_upload(2) def test_upload_4(self): 
self._test_upload(4) def _test_download(self, workers): """Test fs.copy with varying number of worker threads.""" data1 = b"foo" * 256 * 1024 data2 = b"bar" * 2 * 256 * 1024 data3 = b"baz" * 3 * 256 * 1024 data4 = b"egg" * 7 * 256 * 1024 src_fs = self.fs with open_fs("temp://") as dst_fs: src_fs.writebytes("foo", data1) src_fs.writebytes("bar", data2) src_fs.makedir("dir1").writebytes("baz", data3) src_fs.makedirs("dir2/dir3").writebytes("egg", data4) fs.copy.copy_fs(src_fs, dst_fs, workers=workers) self.assertEqual(dst_fs.readbytes("foo"), data1) self.assertEqual(dst_fs.readbytes("bar"), data2) self.assertEqual(dst_fs.readbytes("dir1/baz"), data3) self.assertEqual(dst_fs.readbytes("dir2/dir3/egg"), data4) def test_download_0(self): self._test_download(0) def test_download_1(self): self._test_download(1) def test_download_2(self): self._test_download(2) def test_download_4(self): self._test_download(4) def test_create(self): # Test create new file self.assertFalse(self.fs.exists("foo")) self.fs.create("foo") self.assertTrue(self.fs.exists("foo")) self.assertEqual(self.fs.gettype("foo"), ResourceType.file) self.assertEqual(self.fs.getsize("foo"), 0) # Test wipe existing file self.fs.writebytes("foo", b"bar") self.assertEqual(self.fs.getsize("foo"), 3) self.fs.create("foo", wipe=True) self.assertEqual(self.fs.getsize("foo"), 0) # Test create with existing file, and not wipe self.fs.writebytes("foo", b"bar") self.assertEqual(self.fs.getsize("foo"), 3) self.fs.create("foo", wipe=False) self.assertEqual(self.fs.getsize("foo"), 3) def test_desc(self): # Describe a file self.fs.create("foo") description = self.fs.desc("foo") self.assertIsInstance(description, text_type) # Describe a dir self.fs.makedir("dir") self.fs.desc("dir") # Special cases that may hide bugs self.fs.desc("/") self.fs.desc("") with self.assertRaises(errors.ResourceNotFound): self.fs.desc("bar") def test_scandir(self): # Check exception for scanning dir that doesn't exist with 
self.assertRaises(errors.ResourceNotFound): for _info in self.fs.scandir("/foobar"): pass # Check scandir returns an iterable iter_scandir = self.fs.scandir("/") self.assertTrue(isinstance(iter_scandir, collections_abc.Iterable)) self.assertEqual(list(iter_scandir), []) # Check scanning self.fs.create("foo") # Can't scandir on a file with self.assertRaises(errors.DirectoryExpected): list(self.fs.scandir("foo")) self.fs.create("bar") self.fs.makedir("dir") iter_scandir = self.fs.scandir("/") self.assertTrue(isinstance(iter_scandir, collections_abc.Iterable)) scandir = sorted( (r.raw for r in iter_scandir), key=lambda info: info["basic"]["name"] ) # Filesystems may send us more than we ask for # We just want to test the 'basic' namespace scandir = [{"basic": i["basic"]} for i in scandir] self.assertEqual( scandir, [ {"basic": {"name": "bar", "is_dir": False}}, {"basic": {"name": "dir", "is_dir": True}}, {"basic": {"name": "foo", "is_dir": False}}, ], ) # Hard to test optional namespaces, but at least run the code list( self.fs.scandir( "/", namespaces=["details", "link", "stat", "lstat", "access"] ) ) # Test paging page1 = list(self.fs.scandir("/", page=(None, 2))) self.assertEqual(len(page1), 2) page2 = list(self.fs.scandir("/", page=(2, 4))) self.assertEqual(len(page2), 1) page3 = list(self.fs.scandir("/", page=(4, 6))) self.assertEqual(len(page3), 0) paged = {r.name for r in itertools.chain(page1, page2)} self.assertEqual(paged, {"foo", "bar", "dir"}) def test_filterdir(self): self.assertEqual(list(self.fs.filterdir("/", files=["*.py"])), []) self.fs.makedir("bar") self.fs.create("foo.txt") self.fs.create("foo.py") self.fs.create("foo.pyc") page1 = list(self.fs.filterdir("/", page=(None, 2))) page2 = list(self.fs.filterdir("/", page=(2, 4))) page3 = list(self.fs.filterdir("/", page=(4, 6))) self.assertEqual(len(page1), 2) self.assertEqual(len(page2), 2) self.assertEqual(len(page3), 0) names = [info.name for info in itertools.chain(page1, page2, page3)] 
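The paging tests above treat `page=(start, end)` as a plain slice over the directory listing, with `None` meaning "from the beginning". A minimal, self-contained stdlib sketch of that slicing semantics (not part of pyfilesystem2; `paged` is a hypothetical helper named here for illustration):

```python
from itertools import islice


def paged(entries, page):
    """Return the slice of ``entries`` described by a (start, end) page.

    ``start`` may be ``None``, meaning "from the first entry" — the same
    convention the scandir/filterdir tests above exercise.
    """
    start, end = page
    return list(islice(entries, start, end))


names = ["bar", "foo.py", "foo.pyc", "foo.txt"]
page1 = paged(names, (None, 2))  # first two entries
page2 = paged(names, (2, 4))     # next two entries
page3 = paged(names, (4, 6))     # past the end -> empty list
```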
self.assertEqual(set(names), {"foo.txt", "foo.py", "foo.pyc", "bar"}) # Check filtering by wildcard dir_list = [info.name for info in self.fs.filterdir("/", files=["*.py"])] self.assertEqual(set(dir_list), {"bar", "foo.py"}) # Check filtering by miltiple wildcard dir_list = [ info.name for info in self.fs.filterdir("/", files=["*.py", "*.pyc"]) ] self.assertEqual(set(dir_list), {"bar", "foo.py", "foo.pyc"}) # Check excluding dirs dir_list = [ info.name for info in self.fs.filterdir( "/", exclude_dirs=["*"], files=["*.py", "*.pyc"] ) ] self.assertEqual(set(dir_list), {"foo.py", "foo.pyc"}) # Check excluding files dir_list = [info.name for info in self.fs.filterdir("/", exclude_files=["*"])] self.assertEqual(set(dir_list), {"bar"}) # Check wildcards must be a list with self.assertRaises(TypeError): dir_list = [info.name for info in self.fs.filterdir("/", files="*.py")] self.fs.makedir("baz") dir_list = [ info.name for info in self.fs.filterdir("/", exclude_files=["*"], dirs=["??z"]) ] self.assertEqual(set(dir_list), {"baz"}) with self.assertRaises(TypeError): dir_list = [ info.name for info in self.fs.filterdir("/", exclude_files=["*"], dirs="*.py") ] def test_readbytes(self): # Test readbytes method. 
all_bytes = b"".join(six.int2byte(n) for n in range(256)) with self.fs.open("foo", "wb") as f: f.write(all_bytes) self.assertEqual(self.fs.readbytes("foo"), all_bytes) _all_bytes = self.fs.readbytes("foo") self.assertIsInstance(_all_bytes, bytes) self.assertEqual(_all_bytes, all_bytes) with self.assertRaises(errors.ResourceNotFound): self.fs.readbytes("foo/bar") self.fs.makedir("baz") with self.assertRaises(errors.FileExpected): self.fs.readbytes("baz") def test_download(self): test_bytes = b"Hello, World" self.fs.writebytes("hello.bin", test_bytes) write_file = io.BytesIO() self.fs.download("hello.bin", write_file) self.assertEqual(write_file.getvalue(), test_bytes) with self.assertRaises(errors.ResourceNotFound): self.fs.download("foo.bin", write_file) def test_download_chunk_size(self): test_bytes = b"Hello, World" * 100 self.fs.writebytes("hello.bin", test_bytes) write_file = io.BytesIO() self.fs.download("hello.bin", write_file, chunk_size=8) self.assertEqual(write_file.getvalue(), test_bytes) def test_isempty(self): self.assertTrue(self.fs.isempty("/")) self.fs.makedir("foo") self.assertFalse(self.fs.isempty("/")) self.assertTrue(self.fs.isempty("/foo")) self.fs.create("foo/bar.txt") self.assertFalse(self.fs.isempty("/foo")) self.fs.remove("foo/bar.txt") self.assertTrue(self.fs.isempty("/foo")) def test_writebytes(self): all_bytes = b"".join(six.int2byte(n) for n in range(256)) self.fs.writebytes("foo", all_bytes) with self.fs.open("foo", "rb") as f: _bytes = f.read() self.assertIsInstance(_bytes, bytes) self.assertEqual(_bytes, all_bytes) self.assert_bytes("foo", all_bytes) with self.assertRaises(TypeError): self.fs.writebytes("notbytes", "unicode") def test_readtext(self): self.fs.makedir("foo") with self.fs.open("foo/unicode.txt", "wt") as f: f.write(UNICODE_TEXT) text = self.fs.readtext("foo/unicode.txt") self.assertIsInstance(text, text_type) self.assertEqual(text, UNICODE_TEXT) self.assert_text("foo/unicode.txt", UNICODE_TEXT) def test_writetext(self): 
# Test writetext method. self.fs.writetext("foo", "bar") with self.fs.open("foo", "rt") as f: foo = f.read() self.assertEqual(foo, "bar") self.assertIsInstance(foo, text_type) with self.assertRaises(TypeError): self.fs.writetext("nottext", b"bytes") def test_writefile(self): bytes_file = io.BytesIO(b"bar") self.fs.writefile("foo", bytes_file) with self.fs.open("foo", "rb") as f: data = f.read() self.assertEqual(data, b"bar") def test_upload(self): bytes_file = io.BytesIO(b"bar") self.fs.upload("foo", bytes_file) with self.fs.open("foo", "rb") as f: data = f.read() self.assertEqual(data, b"bar") def test_upload_chunk_size(self): test_data = b"bar" * 128 bytes_file = io.BytesIO(test_data) self.fs.upload("foo", bytes_file, chunk_size=8) with self.fs.open("foo", "rb") as f: data = f.read() self.assertEqual(data, test_data) def test_bin_files(self): # Check binary files. with self.fs.openbin("foo1", "wb") as f: text_type(f) repr(f) f.write(b"a") f.write(b"b") f.write(b"c") self.assert_bytes("foo1", b"abc") # Test writelines with self.fs.openbin("foo2", "wb") as f: f.writelines([b"hello\n", b"world"]) self.assert_bytes("foo2", b"hello\nworld") # Test readline with self.fs.openbin("foo2") as f: self.assertEqual(f.readline(), b"hello\n") self.assertEqual(f.readline(), b"world") # Test readlines with self.fs.openbin("foo2") as f: lines = f.readlines() self.assertEqual(lines, [b"hello\n", b"world"]) with self.fs.openbin("foo2") as f: lines = list(f) self.assertEqual(lines, [b"hello\n", b"world"]) with self.fs.openbin("foo2") as f: lines = [] for line in f: lines.append(line) self.assertEqual(lines, [b"hello\n", b"world"]) with self.fs.openbin("foo2") as f: print(repr(f)) self.assertEqual(next(f), b"hello\n") # Test truncate with self.fs.open("foo2", "r+b") as f: f.truncate(3) self.assertEqual(self.fs.getsize("foo2"), 3) self.assert_bytes("foo2", b"hel") def test_files(self): # Test multiple writes with self.fs.open("foo1", "wt") as f: text_type(f) repr(f) f.write("a") 
f.write("b") f.write("c") self.assert_text("foo1", "abc") # Test writelines with self.fs.open("foo2", "wt") as f: f.writelines(["hello\n", "world"]) self.assert_text("foo2", "hello\nworld") # Test readline with self.fs.open("foo2") as f: self.assertEqual(f.readline(), "hello\n") self.assertEqual(f.readline(), "world") # Test readlines with self.fs.open("foo2") as f: lines = f.readlines() self.assertEqual(lines, ["hello\n", "world"]) with self.fs.open("foo2") as f: lines = list(f) self.assertEqual(lines, ["hello\n", "world"]) with self.fs.open("foo2") as f: lines = [] for line in f: lines.append(line) self.assertEqual(lines, ["hello\n", "world"]) # Test truncate with self.fs.open("foo2", "r+") as f: f.truncate(3) self.assertEqual(self.fs.getsize("foo2"), 3) self.assert_text("foo2", "hel") with self.fs.open("foo2", "ab") as f: f.write(b"p") self.assert_bytes("foo2", b"help") # Test __del__ doesn't throw traceback f = self.fs.open("foo2", "r") del f with self.assertRaises(IOError): with self.fs.open("foo2", "r") as f: f.write("no!") with self.assertRaises(IOError): with self.fs.open("newfoo", "w") as f: f.read(2) def test_copy_file(self): # Test fs.copy.copy_file bytes_test = b"Hello, World" self.fs.writebytes("foo.txt", bytes_test) fs.copy.copy_file(self.fs, "foo.txt", self.fs, "bar.txt") self.assert_bytes("bar.txt", bytes_test) mem_fs = open_fs("mem://") fs.copy.copy_file(self.fs, "foo.txt", mem_fs, "bar.txt") self.assertEqual(mem_fs.readbytes("bar.txt"), bytes_test) def test_copy_structure(self): mem_fs = open_fs("mem://") self.fs.makedirs("foo/bar/baz") self.fs.makedir("egg") fs.copy.copy_structure(self.fs, mem_fs) expected = {"/egg", "/foo", "/foo/bar", "/foo/bar/baz"} self.assertEqual(set(walk.walk_dirs(mem_fs)), expected) def _test_copy_dir(self, protocol): # Test copy.copy_dir. 
# Test copying to a another fs other_fs = open_fs(protocol) self.fs.makedirs("foo/bar/baz") self.fs.makedir("egg") self.fs.writetext("top.txt", "Hello, World") self.fs.writetext("/foo/bar/baz/test.txt", "Goodbye, World") fs.copy.copy_dir(self.fs, "/", other_fs, "/") expected = {"/egg", "/foo", "/foo/bar", "/foo/bar/baz"} self.assertEqual(set(walk.walk_dirs(other_fs)), expected) self.assert_text("top.txt", "Hello, World") self.assert_text("/foo/bar/baz/test.txt", "Goodbye, World") # Test copying a sub dir other_fs = open_fs("mem://") fs.copy.copy_dir(self.fs, "/foo", other_fs, "/") self.assertEqual(list(walk.walk_files(other_fs)), ["/bar/baz/test.txt"]) print("BEFORE") self.fs.tree() other_fs.tree() fs.copy.copy_dir(self.fs, "/foo", other_fs, "/egg") print("FS") self.fs.tree() print("OTHER") other_fs.tree() self.assertEqual( list(walk.walk_files(other_fs)), ["/bar/baz/test.txt", "/egg/bar/baz/test.txt"], ) def _test_copy_dir_write(self, protocol): # Test copying to this filesystem from another. other_fs = open_fs(protocol) other_fs.makedirs("foo/bar/baz") other_fs.makedir("egg") other_fs.writetext("top.txt", "Hello, World") other_fs.writetext("/foo/bar/baz/test.txt", "Goodbye, World") fs.copy.copy_dir(other_fs, "/", self.fs, "/") expected = {"/egg", "/foo", "/foo/bar", "/foo/bar/baz"} self.assertEqual(set(walk.walk_dirs(self.fs)), expected) self.assert_text("top.txt", "Hello, World") self.assert_text("/foo/bar/baz/test.txt", "Goodbye, World") def test_copy_dir_mem(self): # Test copy_dir with a mem fs. self._test_copy_dir("mem://") self._test_copy_dir_write("mem://") def test_copy_dir_temp(self): # Test copy_dir with a temp fs. self._test_copy_dir("temp://") self._test_copy_dir_write("temp://") def _test_move_dir_write(self, protocol): # Test moving to this filesystem from another. 
other_fs = open_fs(protocol) other_fs.makedirs("foo/bar/baz") other_fs.makedir("egg") other_fs.writetext("top.txt", "Hello, World") other_fs.writetext("/foo/bar/baz/test.txt", "Goodbye, World") fs.move.move_dir(other_fs, "/", self.fs, "/") expected = {"/egg", "/foo", "/foo/bar", "/foo/bar/baz"} self.assertEqual(other_fs.listdir("/"), []) self.assertEqual(set(walk.walk_dirs(self.fs)), expected) self.assert_text("top.txt", "Hello, World") self.assert_text("/foo/bar/baz/test.txt", "Goodbye, World") def test_move_dir_mem(self): self._test_move_dir_write("mem://") def test_move_dir_temp(self): self._test_move_dir_write("temp://") def test_move_same_fs(self): self.fs.makedirs("foo/bar/baz") self.fs.makedir("egg") self.fs.writetext("top.txt", "Hello, World") self.fs.writetext("/foo/bar/baz/test.txt", "Goodbye, World") fs.move.move_dir(self.fs, "foo", self.fs, "foo2") expected = {"/egg", "/foo2", "/foo2/bar", "/foo2/bar/baz"} self.assertEqual(set(walk.walk_dirs(self.fs)), expected) self.assert_text("top.txt", "Hello, World") self.assert_text("/foo2/bar/baz/test.txt", "Goodbye, World") def test_move_file_same_fs(self): text = "Hello, World" self.fs.makedir("foo").writetext("test.txt", text) self.assert_text("foo/test.txt", text) fs.move.move_file(self.fs, "foo/test.txt", self.fs, "foo/test2.txt") self.assert_not_exists("foo/test.txt") self.assert_text("foo/test2.txt", text) def _test_move_file(self, protocol): other_fs = open_fs(protocol) text = "Hello, World" self.fs.makedir("foo").writetext("test.txt", text) self.assert_text("foo/test.txt", text) with self.assertRaises(errors.ResourceNotFound): fs.move.move_file(self.fs, "foo/test.txt", other_fs, "foo/test2.txt") other_fs.makedir("foo") fs.move.move_file(self.fs, "foo/test.txt", other_fs, "foo/test2.txt") self.assertEqual(other_fs.readtext("foo/test2.txt"), text) def test_move_file_mem(self): self._test_move_file("mem://") def test_move_file_temp(self): self._test_move_file("temp://") def test_copydir(self): 
self.fs.makedirs("foo/bar/baz/egg") self.fs.writetext("foo/bar/foofoo.txt", "Hello") self.fs.makedir("foo2") self.fs.copydir("foo/bar", "foo2") self.assert_text("foo2/foofoo.txt", "Hello") self.assert_isdir("foo2/baz/egg") self.assert_text("foo/bar/foofoo.txt", "Hello") self.assert_isdir("foo/bar/baz/egg") with self.assertRaises(errors.ResourceNotFound): self.fs.copydir("foo", "foofoo") with self.assertRaises(errors.ResourceNotFound): self.fs.copydir("spam", "egg", create=True) with self.assertRaises(errors.DirectoryExpected): self.fs.copydir("foo2/foofoo.txt", "foofoo.txt", create=True) def test_movedir(self): self.fs.makedirs("foo/bar/baz/egg") self.fs.writetext("foo/bar/foofoo.txt", "Hello") self.fs.makedir("foo2") self.fs.movedir("foo/bar", "foo2") self.assert_text("foo2/foofoo.txt", "Hello") self.assert_isdir("foo2/baz/egg") self.assert_not_exists("foo/bar") self.assert_not_exists("foo/bar/foofoo.txt") self.assert_not_exists("foo/bar/baz/egg") # Check moving to an unexisting directory with self.assertRaises(errors.ResourceNotFound): self.fs.movedir("foo", "foofoo") # Check moving an unexisting directory with self.assertRaises(errors.ResourceNotFound): self.fs.movedir("spam", "egg", create=True) # Check moving a file with self.assertRaises(errors.DirectoryExpected): self.fs.movedir("foo2/foofoo.txt", "foo2/baz/egg") def test_match(self): self.assertTrue(self.fs.match(["*.py"], "foo.py")) self.assertEqual( self.fs.match(["*.py"], "FOO.PY"), self.fs.getmeta().get("case_insensitive", False), ) def test_tree(self): self.fs.makedirs("foo/bar") self.fs.create("test.txt") write_tree = io.StringIO() self.fs.tree(file=write_tree) written = write_tree.getvalue() expected = "|-- foo\n| `-- bar\n`-- test.txt\n" self.assertEqual(expected, written) def test_unicode_path(self): if not self.fs.getmeta().get("unicode_paths", False): raise unittest.SkipTest("the filesystem does not support unicode paths.") self.fs.makedir("földér") self.fs.writetext("☭.txt", "Smells like 
communism.")
        self.fs.writebytes("földér/☣.txt", b"Smells like an old syringe.")
        self.assert_isdir("földér")
        self.assertEqual(["☣.txt"], self.fs.listdir("földér"))
        self.assertEqual("☣.txt", self.fs.getinfo("földér/☣.txt").name)
        self.assert_text("☭.txt", "Smells like communism.")
        self.assert_bytes("földér/☣.txt", b"Smells like an old syringe.")

        if self.fs.hassyspath("földér/☣.txt"):
            self.assertTrue(os.path.exists(self.fs.getsyspath("földér/☣.txt")))

        self.fs.remove("földér/☣.txt")
        self.assert_not_exists("földér/☣.txt")
        self.fs.removedir("földér")
        self.assert_not_exists("földér")

    def test_case_sensitive(self):
        meta = self.fs.getmeta()
        if "case_insensitive" not in meta:
            raise unittest.SkipTest("case sensitivity not known")
        if meta.get("case_insensitive", False):
            raise unittest.SkipTest("the filesystem is not case sensitive.")

        self.fs.makedir("foo")
        self.fs.makedir("Foo")
        self.fs.touch("fOO")

        self.assert_exists("foo")
        self.assert_exists("Foo")
        self.assert_exists("fOO")
        self.assert_not_exists("FoO")

        self.assert_isdir("foo")
        self.assert_isdir("Foo")
        self.assert_isfile("fOO")

    def test_glob(self):
        self.assertIsInstance(self.fs.glob, glob.BoundGlobber)

    def test_hash(self):
        self.fs.makedir("foo").writebytes("hashme.txt", b"foobar" * 1024)
        self.assertEqual(
            self.fs.hash("foo/hashme.txt", "md5"), "9fff4bb103ab8ce4619064109c54cb9c"
        )
        with self.assertRaises(errors.UnsupportedHash):
            self.fs.hash("foo/hashme.txt", "nohash")
        with self.fs.opendir("foo") as foo_fs:
            self.assertEqual(
                foo_fs.hash("hashme.txt", "md5"), "9fff4bb103ab8ce4619064109c54cb9c"
            )

pyfilesystem2-2.4.12/fs/time.py
"""Time related tools.
"""

from __future__ import print_function
from __future__ import unicode_literals

from calendar import timegm
from datetime import datetime

from pytz import UTC, timezone

utcfromtimestamp = datetime.utcfromtimestamp
utclocalize = UTC.localize
GMT = timezone("GMT")


def datetime_to_epoch(d):
    # type: (datetime) -> int
    """Convert datetime to epoch.
    """
    return timegm(d.utctimetuple())


def epoch_to_datetime(t):
    # type: (int) -> datetime
    """Convert epoch time to a UTC datetime.
    """
    return utclocalize(utcfromtimestamp(t)) if t is not None else None

pyfilesystem2-2.4.12/fs/tools.py
"""Miscellaneous tools for operating on filesystems.
"""

from __future__ import print_function
from __future__ import unicode_literals

import typing

from . import errors
from .errors import DirectoryNotEmpty
from .errors import ResourceNotFound
from .path import abspath
from .path import dirname
from .path import normpath
from .path import recursepath

if typing.TYPE_CHECKING:
    from typing import IO, List, Optional, Text, Union
    from .base import FS


def remove_empty(fs, path):
    # type: (FS, Text) -> None
    """Remove all empty parents.

    Arguments:
        fs (FS): A filesystem instance.
        path (str): Path to a directory on the filesystem.

    """
    path = abspath(normpath(path))
    try:
        while path not in ("", "/"):
            fs.removedir(path)
            path = dirname(path)
    except DirectoryNotEmpty:
        pass


def copy_file_data(src_file, dst_file, chunk_size=None):
    # type: (IO, IO, Optional[int]) -> None
    """Copy data from one file object to another.

    Arguments:
        src_file (io.IOBase): File open for reading.
        dst_file (io.IOBase): File open for writing.
        chunk_size (int): Number of bytes to copy at
            a time (or `None` to use sensible default).

    """
    _chunk_size = 1024 * 1024 if chunk_size is None else chunk_size
    read = src_file.read
    write = dst_file.write
    # The 'or None' is so that it works with binary and text files
    for chunk in iter(
        lambda: read(_chunk_size) or None, None
    ):  # type: Optional[Union[bytes, str]]
        write(chunk)


def get_intermediate_dirs(fs, dir_path):
    # type: (FS, Text) -> List[Text]
    """Get a list of non-existing intermediate directories.

    Arguments:
        fs (FS): A filesystem instance.
        dir_path (str): A path to a new directory on the filesystem.

    Returns:
        list: A list of non-existing paths.

    Raises:
        ~fs.errors.DirectoryExpected: If a path component
            references a file and not a directory.

    """
    intermediates = []
    with fs.lock():
        for path in recursepath(abspath(dir_path), reverse=True):
            try:
                resource = fs.getinfo(path)
            except ResourceNotFound:
                intermediates.append(abspath(path))
            else:
                if resource.is_dir:
                    break
                raise errors.DirectoryExpected(dir_path)
    return intermediates[::-1][:-1]


def is_thread_safe(*filesystems):
    # type: (FS) -> bool
    """Check if all filesystems are thread-safe.

    Arguments:
        filesystems (FS): Filesystems instances to check.

    Returns:
        bool: if all filesystems are thread safe.

    """
    return all(fs.getmeta().get("thread_safe", False) for fs in filesystems)

pyfilesystem2-2.4.12/fs/tree.py
# coding: utf-8
"""Render a FS object as text tree views.

Color is supported on UNIX terminals.
"""

from __future__ import print_function
from __future__ import unicode_literals

import sys
import typing

from fs.path import abspath, join, normpath

if typing.TYPE_CHECKING:
    from typing import List, Optional, Text, TextIO, Tuple
    from .base import FS
    from .info import Info


def render(
    fs,  # type: FS
    path="/",  # type: Text
    file=None,  # type: Optional[TextIO]
    encoding=None,  # type: Optional[Text]
    max_levels=5,  # type: int
    with_color=None,  # type: Optional[bool]
    dirs_first=True,  # type: bool
    exclude=None,  # type: Optional[List[Text]]
    filter=None,  # type: Optional[List[Text]]
):
    # type: (...) -> Tuple[int, int]
    """Render a directory structure into a pretty tree.

    Arguments:
        fs (~fs.base.FS): A filesystem instance.
        path (str): The path of the directory to start rendering
            from (defaults to root folder, i.e. ``'/'``).
        file (io.IOBase): An open file-like object to render the
            tree, or `None` for stdout.
        encoding (str, optional): Unicode encoding, or `None` to
            auto-detect.
        max_levels (int, optional): Maximum number of levels to
            display, or `None` for no maximum.
        with_color (bool, optional): Enable terminal color output,
            or `None` to auto-detect terminal.
        dirs_first (bool): Show directories first.
        exclude (list, optional): Optional list of directory patterns
            to exclude from the tree render.
        filter (list, optional): Optional list of file patterns to
            match in the tree render.

    Returns:
        (int, int): A tuple of ``(<directory count>, <file count>)``.
""" file = file or sys.stdout if encoding is None: encoding = getattr(file, "encoding", "utf-8") or "utf-8" is_tty = hasattr(file, "isatty") and file.isatty() if with_color is None: is_windows = sys.platform.startswith("win") with_color = False if is_windows else is_tty if encoding.lower() == "utf-8" and with_color: char_vertline = "│" char_newnode = "├" char_line = "──" char_corner = "└" else: char_vertline = "|" char_newnode = "|" char_line = "--" char_corner = "`" indent = " " * 4 line_indent = char_vertline + " " * 3 def write(line): # type: (Text) -> None """Write a line to the output. """ print(line, file=file) # FIXME(@althonos): define functions using `with_color` and # avoid checking `with_color` at every function call ! def format_prefix(prefix): # type: (Text) -> Text """Format the prefix lines. """ if not with_color: return prefix return "\x1b[32m%s\x1b[0m" % prefix def format_dirname(dirname): # type: (Text) -> Text """Format a directory name. """ if not with_color: return dirname return "\x1b[1;34m%s\x1b[0m" % dirname def format_error(msg): # type: (Text) -> Text """Format an error. """ if not with_color: return msg return "\x1b[31m%s\x1b[0m" % msg def format_filename(fname): # type: (Text) -> Text """Format a filename. """ if not with_color: return fname if fname.startswith("."): fname = "\x1b[33m%s\x1b[0m" % fname return fname def sort_key_dirs_first(info): # type: (Info) -> Tuple[bool, Text] """Get the info sort function with directories first. """ return (not info.is_dir, info.name.lower()) def sort_key(info): # type: (Info) -> Text """Get the default info sort function using resource name. """ return info.name.lower() counts = {"dirs": 0, "files": 0} def format_directory(path, levels): # type: (Text, List[bool]) -> None """Recursive directory function. 
""" try: directory = sorted( fs.filterdir(path, exclude_dirs=exclude, files=filter), key=sort_key_dirs_first if dirs_first else sort_key, ) except Exception as error: prefix = ( "".join(indent if last else line_indent for last in levels) + char_corner + char_line ) write( "{} {}".format( format_prefix(prefix), format_error("error ({})".format(error)) ) ) return _last = len(directory) - 1 for i, info in enumerate(directory): is_last_entry = i == _last counts["dirs" if info.is_dir else "files"] += 1 prefix = "".join(indent if last else line_indent for last in levels) prefix += char_corner if is_last_entry else char_newnode if info.is_dir: write( "{} {}".format( format_prefix(prefix + char_line), format_dirname(info.name) ) ) if max_levels is None or len(levels) < max_levels: format_directory(join(path, info.name), levels + [is_last_entry]) else: write( "{} {}".format( format_prefix(prefix + char_line), format_filename(info.name) ) ) format_directory(abspath(normpath(path)), []) return counts["dirs"], counts["files"] pyfilesystem2-2.4.12/fs/walk.py000066400000000000000000000653621400005060600163420ustar00rootroot00000000000000"""Machinery for walking a filesystem. *Walking* a filesystem means recursively visiting a directory and any sub-directories. It is a fairly common requirement for copying, searching etc. See :ref:`walking` for details. 
"""

from __future__ import unicode_literals

import typing
from collections import defaultdict
from collections import deque
from collections import namedtuple

from ._repr import make_repr
from .errors import FSError
from .path import abspath
from .path import combine
from .path import normpath

if typing.TYPE_CHECKING:
    from typing import (
        Any,
        Callable,
        Collection,
        Iterator,
        List,
        Optional,
        MutableMapping,
        Text,
        Tuple,
        Type,
    )
    from .base import FS
    from .info import Info

    OnError = Callable[[Text, Exception], bool]
    _F = typing.TypeVar("_F", bound="FS")


Step = namedtuple("Step", "path, dirs, files")
"""type: a *step* in a directory walk.
"""


# TODO(@althonos): It could be a good idea to create an Abstract Base Class
# BaseWalker (with methods walk, files, dirs and info) ?


class Walker(object):
    """A walker object recursively lists directories in a filesystem.

    Arguments:
        ignore_errors (bool): If `True`, any errors reading a directory
            will be ignored, otherwise exceptions will be raised.
        on_error (callable, optional): If ``ignore_errors`` is `False`,
            then this callable will be invoked for a path and the exception
            object. It should return `True` to ignore the error, or `False`
            to re-raise it.
        search (str): If ``'breadth'`` then the directory will be walked
            *top down*. Set to ``'depth'`` to walk *bottom up*.
        filter (list, optional): If supplied, this parameter should be a
            list of filename patterns, e.g. ``['*.py']``. Files will only
            be returned if the final component matches one of the patterns.
        exclude (list, optional): If supplied, this parameter should be a
            list of filename patterns, e.g. ``['~*']``. Files matching any
            of these patterns will be removed from the walk.
        filter_dirs (list, optional): A list of patterns that will be used
            to match directory paths. The walk will only open directories
            that match at least one of these patterns.
        exclude_dirs (list, optional): A list of patterns that will be used
            to filter out directories from the walk. e.g.
            ``['*.svn', '*.git']``.
        max_depth (int, optional): Maximum directory depth to walk.

    """

    def __init__(
        self,
        ignore_errors=False,  # type: bool
        on_error=None,  # type: Optional[OnError]
        search="breadth",  # type: Text
        filter=None,  # type: Optional[List[Text]]
        exclude=None,  # type: Optional[List[Text]]
        filter_dirs=None,  # type: Optional[List[Text]]
        exclude_dirs=None,  # type: Optional[List[Text]]
        max_depth=None,  # type: Optional[int]
    ):
        # type: (...) -> None
        if search not in ("breadth", "depth"):
            raise ValueError("search must be 'breadth' or 'depth'")
        self.ignore_errors = ignore_errors
        if on_error:
            if ignore_errors:
                raise ValueError("on_error is invalid when ignore_errors==True")
        else:
            on_error = self._ignore_errors if ignore_errors else self._raise_errors
        if not callable(on_error):
            raise TypeError("on_error must be callable")
        self.on_error = on_error
        self.search = search
        self.filter = filter
        self.exclude = exclude
        self.filter_dirs = filter_dirs
        self.exclude_dirs = exclude_dirs
        self.max_depth = max_depth
        super(Walker, self).__init__()

    @classmethod
    def _ignore_errors(cls, path, error):
        # type: (Text, Exception) -> bool
        """Default on_error callback."""
        return True

    @classmethod
    def _raise_errors(cls, path, error):
        # type: (Text, Exception) -> bool
        """Callback to re-raise dir scan errors."""
        return False

    @classmethod
    def _calculate_depth(cls, path):
        # type: (Text) -> int
        """Calculate the 'depth' of a directory path (number of components).
        """
        _path = path.strip("/")
        return _path.count("/") + 1 if _path else 0

    @classmethod
    def bind(cls, fs):
        # type: (_F) -> BoundWalker[_F]
        """Bind a `Walker` instance to a given filesystem.

        This *binds* an instance of the Walker to a given filesystem, so
        that you won't need to explicitly provide the filesystem as a
        parameter.

        Arguments:
            fs (FS): A filesystem object.

        Returns:
            ~fs.walk.BoundWalker: a bound walker.

        Example:
            >>> from fs import open_fs
            >>> from fs.walk import Walker
            >>> home_fs = open_fs('~/')
            >>> walker = Walker.bind(home_fs)
            >>> for path in walker.files(filter=['*.py']):
            ...     print(path)

        Unless you have written a customized walker class, you will be
        unlikely to need to call this explicitly, as filesystem objects
        already have a ``walk`` attribute which is a bound walker
        object.

        Example:
            >>> from fs import open_fs
            >>> home_fs = open_fs('~/')
            >>> for path in home_fs.walk.files(filter=['*.py']):
            ...     print(path)

        """
        return BoundWalker(fs)

    def __repr__(self):
        # type: () -> Text
        return make_repr(
            self.__class__.__name__,
            ignore_errors=(self.ignore_errors, False),
            on_error=(self.on_error, None),
            search=(self.search, "breadth"),
            filter=(self.filter, None),
            exclude=(self.exclude, None),
            filter_dirs=(self.filter_dirs, None),
            exclude_dirs=(self.exclude_dirs, None),
            max_depth=(self.max_depth, None),
        )

    def _iter_walk(
        self,
        fs,  # type: FS
        path,  # type: Text
        namespaces=None,  # type: Optional[Collection[Text]]
    ):
        # type: (...) -> Iterator[Tuple[Text, Optional[Info]]]
        """Get the walk generator."""
        if self.search == "breadth":
            return self._walk_breadth(fs, path, namespaces=namespaces)
        else:
            return self._walk_depth(fs, path, namespaces=namespaces)

    def _check_open_dir(self, fs, path, info):
        # type: (FS, Text, Info) -> bool
        """Check if a directory should be considered in the walk.
        """
        if self.exclude_dirs is not None and fs.match(self.exclude_dirs, info.name):
            return False
        if self.filter_dirs is not None and not fs.match(self.filter_dirs, info.name):
            return False
        return self.check_open_dir(fs, path, info)

    def check_open_dir(self, fs, path, info):
        # type: (FS, Text, Info) -> bool
        """Check if a directory should be opened.

        Override to exclude directories from the walk.

        Arguments:
            fs (FS): A filesystem instance.
            path (str): Path to directory.
            info (Info): A resource info object for the directory.

        Returns:
            bool: `True` if the directory should be opened.
""" return True def _check_scan_dir(self, fs, path, info, depth): # type: (FS, Text, Info, int) -> bool """Check if a directory contents should be scanned.""" if self.max_depth is not None and depth >= self.max_depth: return False return self.check_scan_dir(fs, path, info) def check_scan_dir(self, fs, path, info): # type: (FS, Text, Info) -> bool """Check if a directory should be scanned. Override to omit scanning of certain directories. If a directory is omitted, it will appear in the walk but its files and sub-directories will not. Arguments: fs (FS): A filesystem instance. path (str): Path to directory. info (Info): A resource info object for the directory. Returns: bool: `True` if the directory should be scanned. """ return True def check_file(self, fs, info): # type: (FS, Info) -> bool """Check if a filename should be included. Override to exclude files from the walk. Arguments: fs (FS): A filesystem instance. info (Info): A resource info object. Returns: bool: `True` if the file should be included. """ if self.exclude is not None and fs.match(self.exclude, info.name): return False return fs.match(self.filter, info.name) def _scan( self, fs, # type: FS dir_path, # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Info] """Get an iterator of `Info` objects for a directory path. Arguments: fs (FS): A filesystem instance. dir_path (str): A path to a directory on the filesystem. namespaces (list): A list of additional namespaces to include in the `Info` objects. Returns: ~collections.Iterator: iterator of `Info` objects for resources within the given path. """ try: for info in fs.scandir(dir_path, namespaces=namespaces): yield info except FSError as error: if not self.on_error(dir_path, error): raise def walk( self, fs, # type: FS path="/", # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Step] """Walk the directory structure of a filesystem. 
Arguments: fs (FS): A filesystem instance. path (str): A path to a directory on the filesystem. namespaces (list, optional): A list of additional namespaces to add to the `Info` objects. Returns: collections.Iterator: an iterator of `~fs.walk.Step` instances. The return value is an iterator of ``(<path>, <dirs>, <files>)`` named tuples, where ``<path>`` is an absolute path to a directory, and ``<dirs>`` and ``<files>`` are lists of `~fs.info.Info` objects for directories and files in ``<path>``. Example: >>> home_fs = open_fs('~/') >>> walker = Walker(filter=['*.py']) >>> namespaces = ['details'] >>> for path, dirs, files in walker.walk(home_fs, namespaces=namespaces): ... print("[{}]".format(path)) ... print("{} directories".format(len(dirs))) ... total = sum(info.size for info in files) ... print("{} bytes".format(total)) """ _path = abspath(normpath(path)) dir_info = defaultdict(list) # type: MutableMapping[Text, List[Info]] _walk = self._iter_walk(fs, _path, namespaces=namespaces) for dir_path, info in _walk: if info is None: dirs = [] # type: List[Info] files = [] # type: List[Info] for _info in dir_info[dir_path]: (dirs if _info.is_dir else files).append(_info) yield Step(dir_path, dirs, files) del dir_info[dir_path] else: dir_info[dir_path].append(info) def files(self, fs, path="/"): # type: (FS, Text) -> Iterator[Text] """Walk a filesystem, yielding absolute paths to files. Arguments: fs (FS): A filesystem instance. path (str): A path to a directory on the filesystem. Yields: str: absolute path to files on the filesystem found recursively within the given directory. """ _combine = combine for _path, info in self._iter_walk(fs, path=path): if info is not None and not info.is_dir: yield _combine(_path, info.name) def dirs(self, fs, path="/"): # type: (FS, Text) -> Iterator[Text] """Walk a filesystem, yielding absolute paths to directories. Arguments: fs (FS): A filesystem instance. path (str): A path to a directory on the filesystem.
Yields: str: absolute path to directories on the filesystem found recursively within the given directory. """ _combine = combine for _path, info in self._iter_walk(fs, path=path): if info is not None and info.is_dir: yield _combine(_path, info.name) def info( self, fs, # type: FS path="/", # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Tuple[Text, Info]] """Walk a filesystem, yielding tuples of ``(<absolute path>, <resource info>)``. Arguments: fs (FS): A filesystem instance. path (str): A path to a directory on the filesystem. namespaces (list, optional): A list of additional namespaces to add to the `Info` objects. Yields: (str, Info): a tuple of ``(<absolute path>, <resource info>)``. """ _combine = combine _walk = self._iter_walk(fs, path=path, namespaces=namespaces) for _path, info in _walk: if info is not None: yield _combine(_path, info.name), info def _walk_breadth( self, fs, # type: FS path, # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Tuple[Text, Optional[Info]]] """Walk files using a *breadth first* search. """ queue = deque([path]) push = queue.appendleft pop = queue.pop _combine = combine _scan = self._scan _calculate_depth = self._calculate_depth _check_open_dir = self._check_open_dir _check_scan_dir = self._check_scan_dir _check_file = self.check_file depth = _calculate_depth(path) while queue: dir_path = pop() for info in _scan(fs, dir_path, namespaces=namespaces): if info.is_dir: _depth = _calculate_depth(dir_path) - depth + 1 if _check_open_dir(fs, dir_path, info): yield dir_path, info # Opened a directory if _check_scan_dir(fs, dir_path, info, _depth): push(_combine(dir_path, info.name)) else: if _check_file(fs, info): yield dir_path, info # Found a file yield dir_path, None # End of directory def _walk_depth( self, fs, # type: FS path, # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Tuple[Text, Optional[Info]]] """Walk files using a *depth first* search.
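The stack-based approach used below can be sketched in isolation: keep a stack of ``(directory, entry iterator)`` pairs and report a directory only once its entries are exhausted. This toy model walks a nested dict (dirs map to dicts, files to `None`) rather than an `FS` object; names are illustrative:

```python
def depth_first(tree, path=""):
    # Each stack frame is (directory path, iterator over its entries).
    stack = [(path, iter(sorted(tree.items())))]
    while stack:
        dir_path, entries = stack[-1]
        entry = next(entries, None)
        if entry is None:
            stack.pop()
            yield dir_path + "/"  # directory reported after its contents
        elif entry[1] is None:
            yield dir_path + "/" + entry[0]  # a file
        else:
            child = dir_path + "/" + entry[0]
            stack.append((child, iter(sorted(entry[1].items()))))
```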
""" # No recursion! _combine = combine _scan = self._scan _calculate_depth = self._calculate_depth _check_open_dir = self._check_open_dir _check_scan_dir = self._check_scan_dir _check_file = self.check_file depth = _calculate_depth(path) stack = [ (path, _scan(fs, path, namespaces=namespaces), None) ] # type: List[Tuple[Text, Iterator[Info], Optional[Tuple[Text, Info]]]] push = stack.append while stack: dir_path, iter_files, parent = stack[-1] info = next(iter_files, None) if info is None: if parent is not None: yield parent yield dir_path, None del stack[-1] elif info.is_dir: _depth = _calculate_depth(dir_path) - depth + 1 if _check_open_dir(fs, dir_path, info): if _check_scan_dir(fs, dir_path, info, _depth): _path = _combine(dir_path, info.name) push( ( _path, _scan(fs, _path, namespaces=namespaces), (dir_path, info), ) ) else: yield dir_path, info else: if _check_file(fs, info): yield dir_path, info class BoundWalker(typing.Generic[_F]): """A class that binds a `Walker` instance to a `FS` instance. Arguments: fs (FS): A filesystem instance. walker_class (type): A `~fs.walk.WalkerBase` sub-class. The default uses `~fs.walk.Walker`. You will typically not need to create instances of this class explicitly. Filesystems have a `~FS.walk` property which returns a `BoundWalker` object. Example: >>> import fs >>> home_fs = fs.open_fs('~/') >>> home_fs.walk BoundWalker(OSFS('/Users/will', encoding='utf-8')) A `BoundWalker` is callable. Calling it is an alias for `~fs.walk.BoundWalker.walk`. """ def __init__(self, fs, walker_class=Walker): # type: (_F, Type[Walker]) -> None self.fs = fs self.walker_class = walker_class def __repr__(self): # type: () -> Text return "BoundWalker({!r})".format(self.fs) def _make_walker(self, *args, **kwargs): # type: (*Any, **Any) -> Walker """Create a walker instance. 
""" walker = self.walker_class(*args, **kwargs) return walker def walk( self, path="/", # type: Text namespaces=None, # type: Optional[Collection[Text]] **kwargs # type: Any ): # type: (...) -> Iterator[Step] """Walk the directory structure of a filesystem. Arguments: path (str): namespaces (list, optional): A list of namespaces to include in the resource information, e.g. ``['basic', 'access']`` (defaults to ``['basic']``). Keyword Arguments: ignore_errors (bool): If `True`, any errors reading a directory will be ignored, otherwise exceptions will be raised. on_error (callable): If ``ignore_errors`` is `False`, then this callable will be invoked with a path and the exception object. It should return `True` to ignore the error, or `False` to re-raise it. search (str): If ``'breadth'`` then the directory will be walked *top down*. Set to ``'depth'`` to walk *bottom up*. filter (list): If supplied, this parameter should be a list of file name patterns, e.g. ``['*.py']``. Files will only be returned if the final component matches one of the patterns. exclude (list, optional): If supplied, this parameter should be a list of filename patterns, e.g. ``['~*', '.*']``. Files matching any of these patterns will be removed from the walk. filter_dirs (list, optional): A list of patterns that will be used to match directories paths. The walk will only open directories that match at least one of these patterns. exclude_dirs (list): A list of patterns that will be used to filter out directories from the walk, e.g. ``['*.svn', '*.git']``. max_depth (int, optional): Maximum directory depth to walk. Returns: ~collections.Iterator: an iterator of ``(, , )`` named tuples, where ```` is an absolute path to a directory, and ```` and ```` are a list of `~fs.info.Info` objects for directories and files in ````. Example: >>> home_fs = open_fs('~/') >>> walker = Walker(filter=['*.py']) >>> for path, dirs, files in walker.walk(home_fs, namespaces=['details']): ... 
print("[{}]".format(path)) ... print("{} directories".format(len(dirs))) ... total = sum(info.size for info in files) ... print("{} bytes {}".format(total)) This method invokes `Walker.walk` with bound `FS` object. """ walker = self._make_walker(**kwargs) return walker.walk(self.fs, path=path, namespaces=namespaces) __call__ = walk def files(self, path="/", **kwargs): # type: (Text, **Any) -> Iterator[Text] """Walk a filesystem, yielding absolute paths to files. Arguments: path (str): A path to a directory. Keyword Arguments: ignore_errors (bool): If `True`, any errors reading a directory will be ignored, otherwise exceptions will be raised. on_error (callable): If ``ignore_errors`` is `False`, then this callable will be invoked with a path and the exception object. It should return `True` to ignore the error, or `False` to re-raise it. search (str): If ``'breadth'`` then the directory will be walked *top down*. Set to ``'depth'`` to walk *bottom up*. filter (list): If supplied, this parameter should be a list of file name patterns, e.g. ``['*.py']``. Files will only be returned if the final component matches one of the patterns. exclude (list, optional): If supplied, this parameter should be a list of filename patterns, e.g. ``['~*', '.*']``. Files matching any of these patterns will be removed from the walk. filter_dirs (list, optional): A list of patterns that will be used to match directories paths. The walk will only open directories that match at least one of these patterns. exclude_dirs (list): A list of patterns that will be used to filter out directories from the walk, e.g. ``['*.svn', '*.git']``. max_depth (int, optional): Maximum directory depth to walk. Returns: ~collections.Iterator: An iterator over file paths (absolute from the filesystem root). This method invokes `Walker.files` with the bound `FS` object. 
""" walker = self._make_walker(**kwargs) return walker.files(self.fs, path=path) def dirs(self, path="/", **kwargs): # type: (Text, **Any) -> Iterator[Text] """Walk a filesystem, yielding absolute paths to directories. Arguments: path (str): A path to a directory. Keyword Arguments: ignore_errors (bool): If `True`, any errors reading a directory will be ignored, otherwise exceptions will be raised. on_error (callable): If ``ignore_errors`` is `False`, then this callable will be invoked with a path and the exception object. It should return `True` to ignore the error, or `False` to re-raise it. search (str): If ``'breadth'`` then the directory will be walked *top down*. Set to ``'depth'`` to walk *bottom up*. filter_dirs (list, optional): A list of patterns that will be used to match directories paths. The walk will only open directories that match at least one of these patterns. exclude_dirs (list): A list of patterns that will be used to filter out directories from the walk, e.g. ``['*.svn', '*.git']``. max_depth (int, optional): Maximum directory depth to walk. Returns: ~collections.Iterator: an iterator over directory paths (absolute from the filesystem root). This method invokes `Walker.dirs` with the bound `FS` object. """ walker = self._make_walker(**kwargs) return walker.dirs(self.fs, path=path) def info( self, path="/", # type: Text namespaces=None, # type: Optional[Collection[Text]] **kwargs # type: Any ): # type: (...) -> Iterator[Tuple[Text, Info]] """Walk a filesystem, yielding path and `Info` of resources. Arguments: path (str): A path to a directory. namespaces (list, optional): A list of namespaces to include in the resource information, e.g. ``['basic', 'access']`` (defaults to ``['basic']``). Keyword Arguments: ignore_errors (bool): If `True`, any errors reading a directory will be ignored, otherwise exceptions will be raised. 
on_error (callable): If ``ignore_errors`` is `False`, then this callable will be invoked with a path and the exception object. It should return `True` to ignore the error, or `False` to re-raise it. search (str): If ``'breadth'`` then the directory will be walked *top down*. Set to ``'depth'`` to walk *bottom up*. filter (list): If supplied, this parameter should be a list of file name patterns, e.g. ``['*.py']``. Files will only be returned if the final component matches one of the patterns. exclude (list, optional): If supplied, this parameter should be a list of filename patterns, e.g. ``['~*', '.*']``. Files matching any of these patterns will be removed from the walk. filter_dirs (list, optional): A list of patterns that will be used to match directory paths. The walk will only open directories that match at least one of these patterns. exclude_dirs (list): A list of patterns that will be used to filter out directories from the walk, e.g. ``['*.svn', '*.git']``. max_depth (int, optional): Maximum directory depth to walk. Returns: ~collections.Iterable: an iterable yielding tuples of ``(<absolute path>, <resource info>)``. This method invokes `Walker.info` with the bound `FS` object. """ walker = self._make_walker(**kwargs) return walker.info(self.fs, path=path, namespaces=namespaces) # Allow access to default walker from the module # For example: # fs.walk.walk_files() default_walker = Walker() walk = default_walker.walk walk_files = default_walker.files walk_info = default_walker.info walk_dirs = default_walker.dirs pyfilesystem2-2.4.12/fs/wildcard.py """Match wildcard filenames.
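The translation strategy implemented in this module — wildcards compiled to regular expressions, with ``*`` confined to a single path component — can be sketched with the standard library alone (a simplified re-implementation for illustration; it omits the ``[...]`` character-class handling found below):

```python
import re

def translate(pattern):
    # "*" matches anything except "/"; "?" matches any single character;
    # everything else is escaped literally.
    parts = []
    for ch in pattern:
        if ch == "*":
            parts.append("[^/]*")
        elif ch == "?":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return "".join(parts) + r"\Z"

def wildcard_match(pattern, name):
    return re.match(translate(pattern), name) is not None
```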
""" # Adapted from https://hg.python.org/cpython/file/2.7/Lib/fnmatch.py from __future__ import unicode_literals, print_function import re import typing from functools import partial from .lrucache import LRUCache if typing.TYPE_CHECKING: from typing import Callable, Iterable, Text, Tuple, Pattern _PATTERN_CACHE = LRUCache(1000) # type: LRUCache[Tuple[Text, bool], Pattern] def match(pattern, name): # type: (Text, Text) -> bool """Test whether a name matches a wildcard pattern. Arguments: pattern (str): A wildcard pattern, e.g. ``"*.py"``. name (str): A filename. Returns: bool: `True` if the filename matches the pattern. """ try: re_pat = _PATTERN_CACHE[(pattern, True)] except KeyError: res = "(?ms)" + _translate(pattern) + r'\Z' _PATTERN_CACHE[(pattern, True)] = re_pat = re.compile(res) return re_pat.match(name) is not None def imatch(pattern, name): # type: (Text, Text) -> bool """Test whether a name matches a wildcard pattern (case insensitive). Arguments: pattern (str): A wildcard pattern, e.g. ``"*.py"``. name (bool): A filename. Returns: bool: `True` if the filename matches the pattern. """ try: re_pat = _PATTERN_CACHE[(pattern, False)] except KeyError: res = "(?ms)" + _translate(pattern, case_sensitive=False) + r'\Z' _PATTERN_CACHE[(pattern, False)] = re_pat = re.compile(res, re.IGNORECASE) return re_pat.match(name) is not None def match_any(patterns, name): # type: (Iterable[Text], Text) -> bool """Test if a name matches any of a list of patterns. Will return `True` if ``patterns`` is an empty list. Arguments: patterns (list): A list of wildcard pattern, e.g ``["*.py", "*.pyc"]`` name (str): A filename. Returns: bool: `True` if the name matches at least one of the patterns. """ if not patterns: return True return any(match(pattern, name) for pattern in patterns) def imatch_any(patterns, name): # type: (Iterable[Text], Text) -> bool """Test if a name matches any of a list of patterns (case insensitive). Will return `True` if ``patterns`` is an empty list. 
Arguments: patterns (list): A list of wildcard pattern, e.g ``["*.py", "*.pyc"]`` name (str): A filename. Returns: bool: `True` if the name matches at least one of the patterns. """ if not patterns: return True return any(imatch(pattern, name) for pattern in patterns) def get_matcher(patterns, case_sensitive): # type: (Iterable[Text], bool) -> Callable[[Text], bool] """Get a callable that matches names against the given patterns. Arguments: patterns (list): A list of wildcard pattern. e.g. ``["*.py", "*.pyc"]`` case_sensitive (bool): If ``True``, then the callable will be case sensitive, otherwise it will be case insensitive. Returns: callable: a matcher that will return `True` if the name given as an argument matches any of the given patterns. Example: >>> from fs import wildcard >>> is_python = wildcard.get_matcher(['*.py'], True) >>> is_python('__init__.py') True >>> is_python('foo.txt') False """ if not patterns: return lambda name: True if case_sensitive: return partial(match_any, patterns) else: return partial(imatch_any, patterns) def _translate(pattern, case_sensitive=True): # type: (Text, bool) -> Text """Translate a wildcard pattern to a regular expression. There is no way to quote meta-characters. Arguments: pattern (str): A wildcard pattern. case_sensitive (bool): Set to `False` to use a case insensitive regex (default `True`). Returns: str: A regex equivalent to the given pattern. """ if not case_sensitive: pattern = pattern.lower() i, n = 0, len(pattern) res = "" while i < n: c = pattern[i] i = i + 1 if c == "*": res = res + "[^/]*" elif c == "?": res = res + "." 
elif c == "[": j = i if j < n and pattern[j] == "!": j = j + 1 if j < n and pattern[j] == "]": j = j + 1 while j < n and pattern[j] != "]": j = j + 1 if j >= n: res = res + "\\[" else: stuff = pattern[i:j].replace("\\", "\\\\") i = j + 1 if stuff[0] == "!": stuff = "^" + stuff[1:] elif stuff[0] == "^": stuff = "\\" + stuff res = "%s[%s]" % (res, stuff) else: res = res + re.escape(c) return res pyfilesystem2-2.4.12/fs/wrap.py000066400000000000000000000206371400005060600163510ustar00rootroot00000000000000"""Collection of useful `~fs.wrapfs.WrapFS` subclasses. Here's an example that opens a filesystem then makes it *read only*:: >>> from fs import open_fs >>> from fs.wrap import read_only >>> projects_fs = open_fs('~/projects') >>> read_only_projects_fs = read_only(projects_fs) >>> read_only_projects_fs.remove('__init__.py') Traceback (most recent call last): ... fs.errors.ResourceReadOnly: resource '__init__.py' is read only """ from __future__ import print_function from __future__ import unicode_literals import typing from .wrapfs import WrapFS from .path import abspath, normpath, split from .errors import ResourceReadOnly, ResourceNotFound from .info import Info from .mode import check_writable if typing.TYPE_CHECKING: from datetime import datetime from typing import ( Any, BinaryIO, Collection, Dict, Iterator, IO, Optional, Text, Tuple, ) from .base import FS # noqa: F401 from .info import RawInfo from .subfs import SubFS from .permissions import Permissions _W = typing.TypeVar("_W", bound="WrapFS") _T = typing.TypeVar("_T", bound="FS") _F = typing.TypeVar("_F", bound="FS", covariant=True) def read_only(fs): # type: (_T) -> WrapReadOnly[_T] """Make a read-only filesystem. Arguments: fs (FS): A filesystem instance. Returns: FS: A read only version of ``fs`` """ return WrapReadOnly(fs) def cache_directory(fs): # type: (_T) -> WrapCachedDir[_T] """Make a filesystem that caches directory information. Arguments: fs (FS): A filesystem instance. 
Returns: FS: A filesystem that caches results of `~FS.scandir`, `~FS.isdir` and other methods which read directory information. """ return WrapCachedDir(fs) class WrapCachedDir(WrapFS[_F], typing.Generic[_F]): """Caches filesystem directory information. This filesystem caches directory information retrieved from a scandir call. This *may* speed up code that calls `~FS.isdir`, `~FS.isfile`, or `~FS.gettype` too frequently. Note: Using this wrap will prevent changes to directory information being visible to the filesystem object. Consequently, it is best used only in a fairly limited scope where you don't expect anything on the filesystem to change. """ wrap_name = "cached-dir" def __init__(self, wrap_fs): # type: (_F) -> None super(WrapCachedDir, self).__init__(wrap_fs) self._cache = {} # type: Dict[Tuple[Text, frozenset], Dict[Text, Info]] def scandir( self, path, # type: Text namespaces=None, # type: Optional[Collection[Text]] page=None, # type: Optional[Tuple[int, int]] ): # type: (...) -> Iterator[Info] _path = abspath(normpath(path)) cache_key = (_path, frozenset(namespaces or ())) if cache_key not in self._cache: _scan_result = self._wrap_fs.scandir(path, namespaces=namespaces, page=page) _dir = {info.name: info for info in _scan_result} self._cache[cache_key] = _dir gen_scandir = iter(self._cache[cache_key].values()) return gen_scandir def getinfo(self, path, namespaces=None): # type: (Text, Optional[Collection[Text]]) -> Info _path = abspath(normpath(path)) if _path == "/": return Info({"basic": {"name": "", "is_dir": True}}) dir_path, resource_name = split(_path) cache_key = (dir_path, frozenset(namespaces or ())) if cache_key not in self._cache: self.scandir(dir_path, namespaces=namespaces) _dir = self._cache[cache_key] try: info = _dir[resource_name] except KeyError: raise ResourceNotFound(path) return info def isdir(self, path): # type: (Text) -> bool # FIXME(@althonos): this raises an error on non-existing file !
return self.getinfo(path).is_dir def isfile(self, path): # type: (Text) -> bool # FIXME(@althonos): this raises an error on non-existing file ! return not self.getinfo(path).is_dir class WrapReadOnly(WrapFS[_F], typing.Generic[_F]): """Makes a Filesystem read-only. Any call that would write data or modify the filesystem in any way will raise a `~fs.errors.ResourceReadOnly` exception. """ wrap_name = "read-only" def appendbytes(self, path, data): # type: (Text, bytes) -> None self.check() raise ResourceReadOnly(path) def appendtext( self, path, # type: Text text, # type: Text encoding="utf-8", # type: Text errors=None, # type: Optional[Text] newline="", # type: Text ): # type: (...) -> None self.check() raise ResourceReadOnly(path) def makedir( self, # type: _W path, # type: Text permissions=None, # type: Optional[Permissions] recreate=False, # type: bool ): # type: (...) -> SubFS[_W] self.check() raise ResourceReadOnly(path) def move(self, src_path, dst_path, overwrite=False): # type: (Text, Text, bool) -> None self.check() raise ResourceReadOnly(dst_path) def openbin(self, path, mode="r", buffering=-1, **options): # type: (Text, Text, int, **Any) -> BinaryIO self.check() if check_writable(mode): raise ResourceReadOnly(path) return self._wrap_fs.openbin(path, mode=mode, buffering=-1, **options) def remove(self, path): # type: (Text) -> None self.check() raise ResourceReadOnly(path) def removedir(self, path): # type: (Text) -> None self.check() raise ResourceReadOnly(path) def setinfo(self, path, info): # type: (Text, RawInfo) -> None self.check() raise ResourceReadOnly(path) def writetext( self, path, # type: Text contents, # type: Text encoding="utf-8", # type: Text errors=None, # type: Optional[Text] newline="", # type: Text ): # type: (...)
-> None self.check() raise ResourceReadOnly(path) def settimes(self, path, accessed=None, modified=None): # type: (Text, Optional[datetime], Optional[datetime]) -> None self.check() raise ResourceReadOnly(path) def copy(self, src_path, dst_path, overwrite=False): # type: (Text, Text, bool) -> None self.check() raise ResourceReadOnly(dst_path) def create(self, path, wipe=False): # type: (Text, bool) -> bool self.check() raise ResourceReadOnly(path) def makedirs( self, # type: _W path, # type: Text permissions=None, # type: Optional[Permissions] recreate=False, # type: bool ): # type: (...) -> SubFS[_W] self.check() raise ResourceReadOnly(path) def open( self, path, # type: Text mode="r", # type: Text buffering=-1, # type: int encoding=None, # type: Optional[Text] errors=None, # type: Optional[Text] newline="", # type: Text line_buffering=False, # type: bool **options # type: Any ): # type: (...) -> IO self.check() if check_writable(mode): raise ResourceReadOnly(path) return self._wrap_fs.open( path, mode=mode, buffering=buffering, encoding=encoding, errors=errors, newline=newline, line_buffering=line_buffering, **options ) def writebytes(self, path, contents): # type: (Text, bytes) -> None self.check() raise ResourceReadOnly(path) def upload(self, path, file, chunk_size=None, **options): # type: (Text, BinaryIO, Optional[int], **Any) -> None self.check() raise ResourceReadOnly(path) def writefile( self, path, # type: Text file, # type: IO encoding=None, # type: Optional[Text] errors=None, # type: Optional[Text] newline="", # type: Text ): # type: (...) -> None self.check() raise ResourceReadOnly(path) def touch(self, path): # type: (Text) -> None self.check() raise ResourceReadOnly(path) pyfilesystem2-2.4.12/fs/wrapfs.py000066400000000000000000000413401400005060600166740ustar00rootroot00000000000000"""Base class for filesystem wrappers. """ from __future__ import unicode_literals import typing import six from . 
import errors from .base import FS from .copy import copy_file, copy_dir from .info import Info from .move import move_file, move_dir from .path import abspath, normpath from .error_tools import unwrap_errors if typing.TYPE_CHECKING: from datetime import datetime from threading import RLock from typing import ( Any, AnyStr, BinaryIO, Callable, Collection, Iterator, Iterable, IO, List, Mapping, Optional, Text, Tuple, Union, ) from .enums import ResourceType from .info import RawInfo from .permissions import Permissions from .subfs import SubFS from .walk import BoundWalker _T = typing.TypeVar("_T", bound="FS") _OpendirFactory = Callable[[_T, Text], SubFS[_T]] _F = typing.TypeVar("_F", bound="FS", covariant=True) _W = typing.TypeVar("_W", bound="WrapFS[FS]") @six.python_2_unicode_compatible class WrapFS(FS, typing.Generic[_F]): """A proxy for a filesystem object. This class exposes an filesystem interface, where the data is stored on another filesystem(s), and is the basis for `~fs.subfs.SubFS` and other *virtual* filesystems. """ wrap_name = None # type: Optional[Text] def __init__(self, wrap_fs): # type: (_F) -> None self._wrap_fs = wrap_fs super(WrapFS, self).__init__() def __repr__(self): # type: () -> Text return "{}({!r})".format(self.__class__.__name__, self._wrap_fs) def __str__(self): # type: () -> Text wraps = [] _fs = self # type: Union[FS, WrapFS[FS]] while hasattr(_fs, "_wrap_fs"): wrap_name = getattr(_fs, "wrap_name", None) if wrap_name is not None: wraps.append(wrap_name) _fs = _fs._wrap_fs # type: ignore if wraps: _str = "{}({})".format(_fs, ", ".join(wraps[::-1])) else: _str = "{}".format(_fs) return _str def delegate_path(self, path): # type: (Text) -> Tuple[_F, Text] """Encode a path for proxied filesystem. Arguments: path (str): A path on the filesystem. Returns: (FS, str): a tuple of ``(, )`` """ return self._wrap_fs, path def delegate_fs(self): # type: () -> _F """Get the proxied filesystem. 
This method should return a filesystem for methods not associated with a path, e.g. `~fs.base.FS.getmeta`. """ return self._wrap_fs def appendbytes(self, path, data): # type: (Text, bytes) -> None self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): return _fs.appendbytes(_path, data) def appendtext( self, path, # type: Text text, # type: Text encoding="utf-8", # type: Text errors=None, # type: Optional[Text] newline="", # type: Text ): # type: (...) -> None self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): return _fs.appendtext( _path, text, encoding=encoding, errors=errors, newline=newline ) def getinfo(self, path, namespaces=None): # type: (Text, Optional[Collection[Text]]) -> Info self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): raw_info = _fs.getinfo(_path, namespaces=namespaces).raw if abspath(normpath(path)) == "/": raw_info = dict(raw_info) raw_info["basic"]["name"] = "" # type: ignore return Info(raw_info) def listdir(self, path): # type: (Text) -> List[Text] self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): dir_list = _fs.listdir(_path) return dir_list def lock(self): # type: () -> RLock self.check() _fs = self.delegate_fs() return _fs.lock() def makedir( self, path, # type: Text permissions=None, # type: Optional[Permissions] recreate=False, # type: bool ): # type: (...) 
-> SubFS[FS] self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): return _fs.makedir(_path, permissions=permissions, recreate=recreate) def move(self, src_path, dst_path, overwrite=False): # type: (Text, Text, bool) -> None # A custom move permits a potentially optimized code path src_fs, _src_path = self.delegate_path(src_path) dst_fs, _dst_path = self.delegate_path(dst_path) with unwrap_errors({_src_path: src_path, _dst_path: dst_path}): if not overwrite and dst_fs.exists(_dst_path): raise errors.DestinationExists(_dst_path) move_file(src_fs, _src_path, dst_fs, _dst_path) def movedir(self, src_path, dst_path, create=False): # type: (Text, Text, bool) -> None src_fs, _src_path = self.delegate_path(src_path) dst_fs, _dst_path = self.delegate_path(dst_path) with unwrap_errors({_src_path: src_path, _dst_path: dst_path}): if not create and not dst_fs.exists(_dst_path): raise errors.ResourceNotFound(dst_path) if not src_fs.getinfo(_src_path).is_dir: raise errors.DirectoryExpected(src_path) move_dir(src_fs, _src_path, dst_fs, _dst_path) def openbin(self, path, mode="r", buffering=-1, **options): # type: (Text, Text, int, **Any) -> BinaryIO self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): bin_file = _fs.openbin(_path, mode=mode, buffering=-1, **options) return bin_file def remove(self, path): # type: (Text) -> None self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): _fs.remove(_path) def removedir(self, path): # type: (Text) -> None self.check() _path = abspath(normpath(path)) if _path == "/": raise errors.RemoveRootError() _fs, _path = self.delegate_path(path) with unwrap_errors(path): _fs.removedir(_path) def removetree(self, dir_path): # type: (Text) -> None self.check() _path = abspath(normpath(dir_path)) if _path == "/": raise errors.RemoveRootError() _fs, _path = self.delegate_path(dir_path) with unwrap_errors(dir_path): _fs.removetree(_path) def scandir( self, path, # type: Text 
namespaces=None, # type: Optional[Collection[Text]] page=None, # type: Optional[Tuple[int, int]] ): # type: (...) -> Iterator[Info] self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): for info in _fs.scandir(_path, namespaces=namespaces, page=page): yield info def setinfo(self, path, info): # type: (Text, RawInfo) -> None self.check() _fs, _path = self.delegate_path(path) return _fs.setinfo(_path, info) def settimes(self, path, accessed=None, modified=None): # type: (Text, Optional[datetime], Optional[datetime]) -> None self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): _fs.settimes(_path, accessed=accessed, modified=modified) def touch(self, path): # type: (Text) -> None self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): _fs.touch(_path) def copy(self, src_path, dst_path, overwrite=False): # type: (Text, Text, bool) -> None src_fs, _src_path = self.delegate_path(src_path) dst_fs, _dst_path = self.delegate_path(dst_path) with unwrap_errors({_src_path: src_path, _dst_path: dst_path}): if not overwrite and dst_fs.exists(_dst_path): raise errors.DestinationExists(_dst_path) copy_file(src_fs, _src_path, dst_fs, _dst_path) def copydir(self, src_path, dst_path, create=False): # type: (Text, Text, bool) -> None src_fs, _src_path = self.delegate_path(src_path) dst_fs, _dst_path = self.delegate_path(dst_path) with unwrap_errors({_src_path: src_path, _dst_path: dst_path}): if not create and not dst_fs.exists(_dst_path): raise errors.ResourceNotFound(dst_path) if not src_fs.getinfo(_src_path).is_dir: raise errors.DirectoryExpected(src_path) copy_dir(src_fs, _src_path, dst_fs, _dst_path) def create(self, path, wipe=False): # type: (Text, bool) -> bool self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): return _fs.create(_path, wipe=wipe) def desc(self, path): # type: (Text) -> Text self.check() _fs, _path = self.delegate_path(path) with unwrap_errors(path): desc = 
    def download(self, path, file, chunk_size=None, **options):
        # type: (Text, BinaryIO, Optional[int], **Any) -> None
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _fs.download(_path, file, chunk_size=chunk_size, **options)

    def exists(self, path):
        # type: (Text) -> bool
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            exists = _fs.exists(_path)
        return exists

    def filterdir(
        self,
        path,  # type: Text
        files=None,  # type: Optional[Iterable[Text]]
        dirs=None,  # type: Optional[Iterable[Text]]
        exclude_dirs=None,  # type: Optional[Iterable[Text]]
        exclude_files=None,  # type: Optional[Iterable[Text]]
        namespaces=None,  # type: Optional[Collection[Text]]
        page=None,  # type: Optional[Tuple[int, int]]
    ):
        # type: (...) -> Iterator[Info]
        self.check()
        _fs, _path = self.delegate_path(path)
        iter_files = iter(
            _fs.filterdir(
                _path,
                exclude_dirs=exclude_dirs,
                exclude_files=exclude_files,
                files=files,
                dirs=dirs,
                namespaces=namespaces,
                page=page,
            )
        )
        with unwrap_errors(path):
            for info in iter_files:
                yield info

    def readbytes(self, path):
        # type: (Text) -> bytes
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _bytes = _fs.readbytes(_path)
        return _bytes

    def readtext(
        self,
        path,  # type: Text
        encoding=None,  # type: Optional[Text]
        errors=None,  # type: Optional[Text]
        newline="",  # type: Text
    ):
        # type: (...) -> Text
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _text = _fs.readtext(
                _path, encoding=encoding, errors=errors, newline=newline
            )
        return _text

    def getmeta(self, namespace="standard"):
        # type: (Text) -> Mapping[Text, object]
        self.check()
        meta = self.delegate_fs().getmeta(namespace=namespace)
        return meta

    def getsize(self, path):
        # type: (Text) -> int
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            size = _fs.getsize(_path)
        return size

    def getsyspath(self, path):
        # type: (Text) -> Text
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            sys_path = _fs.getsyspath(_path)
        return sys_path

    def gettype(self, path):
        # type: (Text) -> ResourceType
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _type = _fs.gettype(_path)
        return _type

    def geturl(self, path, purpose="download"):
        # type: (Text, Text) -> Text
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            return _fs.geturl(_path, purpose=purpose)

    def hassyspath(self, path):
        # type: (Text) -> bool
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            has_sys_path = _fs.hassyspath(_path)
        return has_sys_path

    def hasurl(self, path, purpose="download"):
        # type: (Text, Text) -> bool
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            has_url = _fs.hasurl(_path, purpose=purpose)
        return has_url

    def isdir(self, path):
        # type: (Text) -> bool
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _isdir = _fs.isdir(_path)
        return _isdir

    def isfile(self, path):
        # type: (Text) -> bool
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _isfile = _fs.isfile(_path)
        return _isfile

    def islink(self, path):
        # type: (Text) -> bool
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _islink = _fs.islink(_path)
        return _islink

    def makedirs(
        self,
        path,  # type: Text
        permissions=None,  # type: Optional[Permissions]
        recreate=False,  # type: bool
    ):
        # type: (...) -> SubFS[FS]
        self.check()
        _fs, _path = self.delegate_path(path)
        return _fs.makedirs(_path, permissions=permissions, recreate=recreate)
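Several methods in this class normalize the caller's path with `abspath(normpath(path))` (see `removedir`, `removetree`, and `validatepath`): make the path absolute, then collapse `.`, `..`, and doubled slashes. PyFilesystem paths are always `/`-separated, so the stdlib `posixpath` module can sketch the same behaviour; the `normalize` helper below is a hypothetical stand-in, not the fs implementation:

```python
import posixpath


def normalize(path):
    # A rough stand-in for fs.path.abspath(fs.path.normpath(path)):
    # make the path absolute, then collapse '.', '..' and double slashes.
    if not path.startswith("/"):
        path = "/" + path
    return posixpath.normpath(path)


print(normalize("foo/./bar//baz"))  # /foo/bar/baz
print(normalize("/foo/bar/.."))     # /foo
```

This normalization is also what lets the `removedir`/`removetree` guards above compare the result against `"/"` to refuse removing the filesystem root.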
    # FIXME(@althonos): line_buffering is not a FS.open declared argument
    def open(
        self,
        path,  # type: Text
        mode="r",  # type: Text
        buffering=-1,  # type: int
        encoding=None,  # type: Optional[Text]
        errors=None,  # type: Optional[Text]
        newline="",  # type: Text
        line_buffering=False,  # type: bool
        **options  # type: Any
    ):
        # type: (...) -> IO[AnyStr]
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            open_file = _fs.open(
                _path,
                mode=mode,
                buffering=buffering,
                encoding=encoding,
                errors=errors,
                newline=newline,
                line_buffering=line_buffering,
                **options
            )
        return open_file

    def opendir(
        self,  # type: _W
        path,  # type: Text
        factory=None,  # type: Optional[_OpendirFactory]
    ):
        # type: (...) -> SubFS[_W]
        from .subfs import SubFS

        factory = factory or SubFS
        if not self.getinfo(path).is_dir:
            raise errors.DirectoryExpected(path=path)
        with unwrap_errors(path):
            return factory(self, path)

    def writebytes(self, path, contents):
        # type: (Text, bytes) -> None
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _fs.writebytes(_path, contents)

    def upload(self, path, file, chunk_size=None, **options):
        # type: (Text, BinaryIO, Optional[int], **Any) -> None
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _fs.upload(_path, file, chunk_size=chunk_size, **options)

    def writefile(
        self,
        path,  # type: Text
        file,  # type: IO[AnyStr]
        encoding=None,  # type: Optional[Text]
        errors=None,  # type: Optional[Text]
        newline="",  # type: Text
    ):
        # type: (...) -> None
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _fs.writefile(
                _path, file, encoding=encoding, errors=errors, newline=newline
            )

    def validatepath(self, path):
        # type: (Text) -> Text
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            _fs.validatepath(_path)
        path = abspath(normpath(path))
        return path

    def hash(self, path, name):
        # type: (Text, Text) -> Text
        self.check()
        _fs, _path = self.delegate_path(path)
        with unwrap_errors(path):
            return _fs.hash(_path, name)

    @property
    def walk(self):
        # type: () -> BoundWalker
        return self._wrap_fs.walker_class.bind(self)


pyfilesystem2-2.4.12/fs/zipfs.py

"""Manage the filesystem in a Zip archive.
"""

from __future__ import print_function
from __future__ import unicode_literals

import typing
import zipfile
from datetime import datetime

import six

from . import errors
from .base import FS
from .compress import write_zip
from .enums import ResourceType, Seek
from .info import Info
from .iotools import RawWrapper
from .permissions import Permissions
from .memoryfs import MemoryFS
from .opener import open_fs
from .path import dirname, forcedir, normpath, relpath
from .time import datetime_to_epoch
from .wrapfs import WrapFS
from ._url_tools import url_quote

if typing.TYPE_CHECKING:
    from typing import (
        Any,
        BinaryIO,
        Collection,
        Dict,
        List,
        Optional,
        SupportsInt,
        Text,
        Tuple,
        Union,
    )

    from .info import RawInfo
    from .subfs import SubFS


R = typing.TypeVar("R", bound="ReadZipFS")


class _ZipExtFile(RawWrapper):
    def __init__(self, fs, name):
        # type: (ReadZipFS, Text) -> None
        self._zip = _zip = fs._zip
        self._end = _zip.getinfo(name).file_size
        self._pos = 0
        super(_ZipExtFile, self).__init__(_zip.open(name), "r", name)

    def read(self, size=-1):
        # type: (int) -> bytes
        buf = self._f.read(-1 if size is None else size)
        self._pos += len(buf)
        return buf

    def read1(self, size=-1):
        # type: (int) -> bytes
        buf = self._f.read1(-1 if size is None else size)  # type: ignore
        self._pos += len(buf)
        return buf
    def seek(self, offset, whence=Seek.set):
        # type: (int, SupportsInt) -> int
        """Change stream position.

        Change the stream position to the given byte offset. The
        offset is interpreted relative to the position indicated by
        ``whence``.

        Arguments:
            offset (int): the offset to the new position, in bytes.
            whence (int): the position reference. Possible values are:
                * `Seek.set`: start of stream (the default).
                * `Seek.current`: current position; offset may be negative.
                * `Seek.end`: end of stream; offset must be negative.

        Returns:
            int: the new absolute position.

        Raises:
            ValueError: when ``whence`` is not known, or ``offset``
                is invalid.

        Note:
            Zip compression does not support seeking, so the seeking
            is emulated. Seeking somewhere else than the current position
            will need to either:
                * reopen the file and restart decompression
                * read and discard data to advance in the file

        """
        _whence = int(whence)
        if _whence == Seek.current:
            offset += self._pos
        if _whence == Seek.current or _whence == Seek.set:
            if offset < 0:
                raise ValueError("Negative seek position {}".format(offset))
        elif _whence == Seek.end:
            if offset > 0:
                raise ValueError("Positive seek position {}".format(offset))
            offset += self._end
        else:
            raise ValueError(
                "Invalid whence ({}, should be {}, {} or {})".format(
                    _whence, Seek.set, Seek.current, Seek.end
                )
            )

        if offset < self._pos:
            self._f = self._zip.open(self.name)  # type: ignore
            self._pos = 0
        self.read(offset - self._pos)
        return self._pos

    def tell(self):
        # type: () -> int
        return self._pos


class ZipFS(WrapFS):
    """Read and write zip files.

    There are two ways to open a ZipFS for the use cases of reading
    a zip file, and creating a new one.

    If you open the ZipFS with ``write`` set to `False` (the default)
    then the filesystem will be a read only filesystem which maps to
    the files and directories within the zip file. Files are
    decompressed on the fly when you open them.

    Here's how you might extract and print a readme from a zip file::

        with ZipFS('foo.zip') as zip_fs:
            readme = zip_fs.readtext('readme.txt')

    If you open the ZipFS with ``write`` set to `True`, then the ZipFS
    will be an empty temporary filesystem. Any files / directories you
    create in the ZipFS will be written in to a zip file when the ZipFS
    is closed.

    Here's how you might write a new zip file containing a readme.txt
    file::

        with ZipFS('foo.zip', write=True) as new_zip:
            new_zip.writetext(
                'readme.txt',
                'This zip file was written by PyFilesystem'
            )

    Arguments:
        file (str or io.IOBase): An OS filename, or an open file object.
        write (bool): Set to `True` to write a new zip file, or `False`
            (default) to read an existing zip file.
        compression (int): Compression to use (one of the constants
            defined in the `zipfile` module in the stdlib).
        temp_fs (str): An FS URL for the temporary filesystem used to
            store data prior to zipping.

    """

    # TODO: __new__ returning different types may be too 'magical'
    def __new__(  # type: ignore
        cls,
        file,  # type: Union[Text, BinaryIO]
        write=False,  # type: bool
        compression=zipfile.ZIP_DEFLATED,  # type: int
        encoding="utf-8",  # type: Text
        temp_fs="temp://__ziptemp__",  # type: Text
    ):
        # type: (...) -> FS
        # This magic returns a different class instance based on the
        # value of the ``write`` parameter.
        if write:
            return WriteZipFS(
                file, compression=compression, encoding=encoding, temp_fs=temp_fs
            )
        else:
            return ReadZipFS(file, encoding=encoding)

    if typing.TYPE_CHECKING:

        def __init__(
            self,
            file,  # type: Union[Text, BinaryIO]
            write=False,  # type: bool
            compression=zipfile.ZIP_DEFLATED,  # type: int
            encoding="utf-8",  # type: Text
            temp_fs="temp://__ziptemp__",  # type: Text
        ):
            # type: (...) -> None
            pass


@six.python_2_unicode_compatible
class WriteZipFS(WrapFS):
    """A writable zip file."""
""" def __init__( self, file, # type: Union[Text, BinaryIO] compression=zipfile.ZIP_DEFLATED, # type: int encoding="utf-8", # type: Text temp_fs="temp://__ziptemp__", # type: Text ): # type: (...) -> None self._file = file self.compression = compression self.encoding = encoding self._temp_fs_url = temp_fs self._temp_fs = open_fs(temp_fs) self._meta = dict(self._temp_fs.getmeta()) # type: ignore super(WriteZipFS, self).__init__(self._temp_fs) def __repr__(self): # type: () -> Text t = "WriteZipFS({!r}, compression={!r}, encoding={!r}, temp_fs={!r})" return t.format(self._file, self.compression, self.encoding, self._temp_fs_url) def __str__(self): # type: () -> Text return "".format(self._file) def delegate_path(self, path): # type: (Text) -> Tuple[FS, Text] return self._temp_fs, path def delegate_fs(self): # type: () -> FS return self._temp_fs def close(self): # type: () -> None if not self.isclosed(): try: self.write_zip() finally: self._temp_fs.close() super(WriteZipFS, self).close() def write_zip( self, file=None, # type: Union[Text, BinaryIO, None] compression=None, # type: Optional[int] encoding=None, # type: Optional[Text] ): # type: (...) -> None """Write zip to a file. Arguments: file (str or io.IOBase, optional): Destination file, may be a file name or an open file handle. compression (int, optional): Compression to use (one of the constants defined in the `zipfile` module in the stdlib). encoding (str, optional): The character encoding to use (default uses the encoding defined in `~WriteZipFS.__init__`). Note: This is called automatically when the ZipFS is closed. """ if not self.isclosed(): write_zip( self._temp_fs, file or self._file, compression=compression or self.compression, encoding=encoding or self.encoding, ) @six.python_2_unicode_compatible class ReadZipFS(FS): """A readable zip file. 
""" _meta = { "case_insensitive": True, "network": False, "read_only": True, "supports_rename": False, "thread_safe": True, "unicode_paths": True, "virtual": False, } @errors.CreateFailed.catch_all def __init__(self, file, encoding="utf-8"): # type: (Union[BinaryIO, Text], Text) -> None super(ReadZipFS, self).__init__() self._file = file self.encoding = encoding self._zip = zipfile.ZipFile(file, "r") self._directory_fs = None # type: Optional[MemoryFS] def __repr__(self): # type: () -> Text return "ReadZipFS({!r})".format(self._file) def __str__(self): # type: () -> Text return "".format(self._file) def _path_to_zip_name(self, path): # type: (Text) -> str """Convert a path to a zip file name. """ path = relpath(normpath(path)) if self._directory.isdir(path): path = forcedir(path) if six.PY2: return path.encode(self.encoding) return path @property def _directory(self): # type: () -> MemoryFS """`MemoryFS`: a filesystem with the same folder hierarchy as the zip. """ self.check() with self._lock: if self._directory_fs is None: self._directory_fs = _fs = MemoryFS() for zip_name in self._zip.namelist(): resource_name = zip_name if six.PY2: resource_name = resource_name.decode(self.encoding, "replace") if resource_name.endswith("/"): _fs.makedirs(resource_name, recreate=True) else: _fs.makedirs(dirname(resource_name), recreate=True) _fs.create(resource_name) return self._directory_fs def getinfo(self, path, namespaces=None): # type: (Text, Optional[Collection[Text]]) -> Info _path = self.validatepath(path) namespaces = namespaces or () raw_info = {} # type: Dict[Text, Dict[Text, object]] if _path == "/": raw_info["basic"] = {"name": "", "is_dir": True} if "details" in namespaces: raw_info["details"] = {"type": int(ResourceType.directory)} else: basic_info = self._directory.getinfo(_path) raw_info["basic"] = {"name": basic_info.name, "is_dir": basic_info.is_dir} if not {"details", "access", "zip"}.isdisjoint(namespaces): zip_name = self._path_to_zip_name(path) try: 
                    zip_info = self._zip.getinfo(zip_name)
                except KeyError:
                    # Can occur if there is an implied directory in the zip
                    pass
                else:
                    if "details" in namespaces:
                        raw_info["details"] = {
                            "size": zip_info.file_size,
                            "type": int(
                                ResourceType.directory
                                if basic_info.is_dir
                                else ResourceType.file
                            ),
                            "modified": datetime_to_epoch(
                                datetime(*zip_info.date_time)
                            ),
                        }
                    if "zip" in namespaces:
                        raw_info["zip"] = {
                            k: getattr(zip_info, k)
                            for k in zip_info.__slots__  # type: ignore
                            if not k.startswith("_")
                        }
                    if "access" in namespaces:
                        # check the zip was created on UNIX to get permissions
                        if zip_info.external_attr and zip_info.create_system == 3:
                            raw_info["access"] = {
                                "permissions": Permissions(
                                    mode=zip_info.external_attr >> 16 & 0xFFF
                                ).dump()
                            }

        return Info(raw_info)

    def setinfo(self, path, info):
        # type: (Text, RawInfo) -> None
        self.check()
        raise errors.ResourceReadOnly(path)

    def listdir(self, path):
        # type: (Text) -> List[Text]
        self.check()
        return self._directory.listdir(path)

    def makedir(
        self,  # type: R
        path,  # type: Text
        permissions=None,  # type: Optional[Permissions]
        recreate=False,  # type: bool
    ):
        # type: (...) -> SubFS[R]
        self.check()
        raise errors.ResourceReadOnly(path)

    def openbin(self, path, mode="r", buffering=-1, **kwargs):
        # type: (Text, Text, int, **Any) -> BinaryIO
        self.check()
        if "w" in mode or "+" in mode or "a" in mode:
            raise errors.ResourceReadOnly(path)

        if not self._directory.exists(path):
            raise errors.ResourceNotFound(path)
        elif self._directory.isdir(path):
            raise errors.FileExpected(path)

        zip_name = self._path_to_zip_name(path)
        return _ZipExtFile(self, zip_name)  # type: ignore

    def remove(self, path):
        # type: (Text) -> None
        self.check()
        raise errors.ResourceReadOnly(path)

    def removedir(self, path):
        # type: (Text) -> None
        self.check()
        raise errors.ResourceReadOnly(path)

    def close(self):
        # type: () -> None
        super(ReadZipFS, self).close()
        if hasattr(self, "_zip"):
            self._zip.close()

    def readbytes(self, path):
        # type: (Text) -> bytes
        self.check()
        if not self._directory.isfile(path):
            raise errors.ResourceNotFound(path)
        zip_name = self._path_to_zip_name(path)
        zip_bytes = self._zip.read(zip_name)
        return zip_bytes

    def geturl(self, path, purpose="download"):
        # type: (Text, Text) -> Text
        if purpose == "fs" and isinstance(self._file, six.string_types):
            quoted_file = url_quote(self._file)
            quoted_path = url_quote(path)
            return "zip://{}!/{}".format(quoted_file, quoted_path)
        else:
            raise errors.NoURL(path, purpose)


pyfilesystem2-2.4.12/requirements-readthedocs.txt

# requirements for readthedocs.io
-e .
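`ReadZipFS._directory` above lazily builds an in-memory directory tree from `ZipFile.namelist()`: explicit directory entries end with `/`, and for file entries every parent directory is created as well, which is how "implied" directories (ones with no entry of their own) become visible. A stdlib-only sketch of that indexing step, using plain sets instead of a `MemoryFS`:

```python
import io
import posixpath
import zipfile

# Build a small zip in memory; note 'foo/' and 'foo/bar/' are never added
# explicitly, so they exist only as *implied* directories.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("top.txt", "Hello, World")
    zf.writestr("foo/bar/egg", "foofoo")

with zipfile.ZipFile(buf) as zf:
    dirs = set()
    files = set()
    for name in zf.namelist():
        if name.endswith("/"):
            # explicit directory entry
            dirs.add(name.rstrip("/"))
        else:
            files.add(name)
            # register every implied parent directory
            parent = posixpath.dirname(name)
            while parent:
                dirs.add(parent)
                parent = posixpath.dirname(parent)

print(sorted(files))  # ['foo/bar/egg', 'top.txt']
print(sorted(dirs))   # ['foo', 'foo/bar']
```

This is why `test_implied_dir` in the archive test suite below can call `getinfo("foo/bar")` and `getinfo("foo")` even though the zip contains no entries for those directories.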
pyfilesystem2-2.4.12/setup.cfg

[bdist_wheel]
universal = 1

[metadata]
version = attr: fs._version.__version__
name = fs
author = Will McGugan
author_email = will@willmcgugan.com
maintainer = Martin Larralde
maintainer_email = martin.larralde@embl.de
url = https://github.com/PyFilesystem/pyfilesystem2
license = MIT
license_file = LICENSE
description = Python's filesystem abstraction layer
long_description = file: README.md
long_description_content_type = text/markdown
platform = any
classifiers =
    Development Status :: 5 - Production/Stable
    Intended Audience :: Developers
    License :: OSI Approved :: MIT License
    Operating System :: OS Independent
    Programming Language :: Python
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3.4
    Programming Language :: Python :: 3.5
    Programming Language :: Python :: 3.6
    Programming Language :: Python :: 3.7
    Programming Language :: Python :: 3.8
    Programming Language :: Python :: 3.9
    Programming Language :: Python :: Implementation :: CPython
    Programming Language :: Python :: Implementation :: PyPy
    Topic :: System :: Filesystems
project_urls =
    Bug Reports = https://github.com/PyFilesystem/pyfilesystem2/issues
    Documentation = https://pyfilesystem2.readthedocs.io/en/latest/
    Wiki = https://www.pyfilesystem.org/

[options]
zip_safe = false
packages = find:
setup_requires =
    setuptools >=38.3.0
install_requires =
    appdirs~=1.4.3
    pytz
    setuptools
    six ~=1.10
    enum34 ~=1.1.6 ; python_version < '3.4'
    typing ~=3.6 ; python_version < '3.6'
    backports.os ~=0.1 ; python_version < '3.0'

[options.extras_require]
scandir =
    scandir~=1.5 ; python_version < '3.5'

[options.packages.find]
exclude = tests

[options.package_data]
fs = py.typed

[pydocstyle]
inherit = false
ignore = D102,D105,D200,D203,D213,D406,D407
match-dir = (?!tests)(?!docs)[^\.].*
match = (?!test)(?!setup)[^\._].*\.py

[mypy]
ignore_missing_imports = true

[mypy-fs.*]
disallow_any_decorated = false
disallow_any_generics = false
disallow_any_unimported = true
disallow_subclassing_any = true
disallow_untyped_calls = false
disallow_untyped_defs = false
ignore_missing_imports = false
warn_unused_ignores = false
warn_return_any = false

[mypy-fs.test]
disallow_untyped_defs = false

[coverage:run]
branch = true
omit = fs/test.py
source = fs

[coverage:report]
show_missing = true
skip_covered = true
exclude_lines =
    pragma: no cover
    if False:
    @typing.overload
    @overload

[tool:pytest]
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')

[flake8]
extend-ignore = E203,E402,W503
max-line-length = 88
per-file-ignores =
    fs/__init__.py:F401
    fs/*/__init__.py:F401
    tests/*:E501
    fs/opener/*:F811
    fs/_fscompat.py:F401

[isort]
default_section = THIRD_PARTY
known_first_party = fs
known_standard_library = typing
line_length = 88


pyfilesystem2-2.4.12/setup.py

#!/usr/bin/env python

import os

with open(os.path.join("fs", "_version.py")) as f:
    exec(f.read())

from setuptools import setup

setup(version=__version__)


pyfilesystem2-2.4.12/testrequirements.txt

pytest==4.6.5
pytest-cov==2.7.1
pytest-randomly==1.2.3 ; python_version<"3.5"
pytest-randomly==3.0.0 ; python_version>="3.5"
mock==3.0.5 ; python_version<"3.3"
pyftpdlib==1.5.5
# Not directly required. `pyftpdlib` appears to need these but doesn't list them
# as requirements.
psutil
pysendfile


pyfilesystem2-2.4.12/tests/

pyfilesystem2-2.4.12/tests/__init__.py

pyfilesystem2-2.4.12/tests/conftest.py

import pytest

try:
    from unittest import mock
except ImportError:
    import mock


@pytest.fixture
@mock.patch("appdirs.user_data_dir", autospec=True, spec_set=True)
@mock.patch("appdirs.site_data_dir", autospec=True, spec_set=True)
@mock.patch("appdirs.user_config_dir", autospec=True, spec_set=True)
@mock.patch("appdirs.site_config_dir", autospec=True, spec_set=True)
@mock.patch("appdirs.user_cache_dir", autospec=True, spec_set=True)
@mock.patch("appdirs.user_state_dir", autospec=True, spec_set=True)
@mock.patch("appdirs.user_log_dir", autospec=True, spec_set=True)
def mock_appdir_directories(
    user_log_dir_mock,
    user_state_dir_mock,
    user_cache_dir_mock,
    site_config_dir_mock,
    user_config_dir_mock,
    site_data_dir_mock,
    user_data_dir_mock,
    tmpdir,
):
    """Mock out every single AppDir directory so tests can't access real ones."""
    user_log_dir_mock.return_value = str(tmpdir.join("user_log").mkdir())
    user_state_dir_mock.return_value = str(tmpdir.join("user_state").mkdir())
    user_cache_dir_mock.return_value = str(tmpdir.join("user_cache").mkdir())
    site_config_dir_mock.return_value = str(tmpdir.join("site_config").mkdir())
    user_config_dir_mock.return_value = str(tmpdir.join("user_config").mkdir())
    site_data_dir_mock.return_value = str(tmpdir.join("site_data").mkdir())
    user_data_dir_mock.return_value = str(tmpdir.join("user_data").mkdir())


pyfilesystem2-2.4.12/tests/test_appfs.py

from __future__ import unicode_literals

import pytest
import six

from fs import appfs


@pytest.fixture
def fs(mock_appdir_directories):
    """Create a UserDataFS but strictly using a temporary directory."""
    return appfs.UserDataFS("fstest", "willmcgugan", "1.0")


@pytest.mark.skipif(six.PY2, reason="Test requires Python 3 repr")
def test_user_data_repr_py3(fs):
    assert repr(fs) == "UserDataFS('fstest', author='willmcgugan', version='1.0')"
    assert str(fs) == "<userdatafs 'fstest'>"


@pytest.mark.skipif(not six.PY2, reason="Test requires Python 2 repr")
def test_user_data_repr_py2(fs):
    assert repr(fs) == "UserDataFS(u'fstest', author=u'willmcgugan', version=u'1.0')"
    assert str(fs) == "<userdatafs 'fstest'>"


pyfilesystem2-2.4.12/tests/test_archives.py

# -*- encoding: UTF-8
from __future__ import unicode_literals

import os
import stat

from six import text_type

from fs.opener import open_fs
from fs.enums import ResourceType
from fs import walk
from fs import errors
from fs.test import UNICODE_TEXT


class ArchiveTestCases(object):
    def make_source_fs(self):
        return open_fs("temp://")

    def build_source(self, fs):
        fs.makedirs("foo/bar/baz")
        fs.makedir("tmp")
        fs.writetext("Файл", "unicode filename")
        fs.writetext("top.txt", "Hello, World")
        fs.writetext("top2.txt", "Hello, World")
        fs.writetext("foo/bar/egg", "foofoo")
        fs.makedir("unicode")
        fs.writetext("unicode/text.txt", UNICODE_TEXT)

    def compress(self, fs):
        pass

    def load_archive(self):
        pass

    def remove_archive(self):
        pass

    def setUp(self):
        self.source_fs = source_fs = self.make_source_fs()
        self.build_source(source_fs)
        self.compress(source_fs)
        self.fs = self.load_archive()

    def tearDown(self):
        self.source_fs.close()
        self.fs.close()
        self.remove_archive()

    def test_repr(self):
        repr(self.fs)

    def test_str(self):
        self.assertIsInstance(text_type(self.fs), text_type)

    def test_readonly(self):
        with self.assertRaises(errors.ResourceReadOnly):
            self.fs.makedir("newdir")
        with self.assertRaises(errors.ResourceReadOnly):
            self.fs.remove("top.txt")
        with self.assertRaises(errors.ResourceReadOnly):
            self.fs.removedir("foo/bar/baz")
        with self.assertRaises(errors.ResourceReadOnly):
self.fs.create("foo.txt") with self.assertRaises(errors.ResourceReadOnly): self.fs.setinfo("foo.txt", {}) def test_getinfo(self): root = self.fs.getinfo("/", ["details"]) self.assertEqual(root.name, "") self.assertTrue(root.is_dir) self.assertEqual(root.get("details", "type"), ResourceType.directory) bar = self.fs.getinfo("foo/bar", ["details"]) self.assertEqual(bar.name, "bar") self.assertTrue(bar.is_dir) self.assertEqual(bar.get("details", "type"), ResourceType.directory) top = self.fs.getinfo("top.txt", ["details", "access"]) self.assertEqual(top.size, 12) self.assertFalse(top.is_dir) try: source_syspath = self.source_fs.getsyspath("/top.txt") except errors.NoSysPath: pass else: if top.has_namespace("access"): self.assertEqual( top.permissions.mode, stat.S_IMODE(os.stat(source_syspath).st_mode) ) self.assertEqual(top.get("details", "type"), ResourceType.file) def test_listdir(self): self.assertEqual( sorted(self.source_fs.listdir("/")), sorted(self.fs.listdir("/")) ) for name in self.fs.listdir("/"): self.assertIsInstance(name, text_type) with self.assertRaises(errors.DirectoryExpected): self.fs.listdir("top.txt") with self.assertRaises(errors.ResourceNotFound): self.fs.listdir("nothere") def test_open(self): with self.fs.open("top.txt") as f: chars = [] while True: c = f.read(2) if not c: break chars.append(c) self.assertEqual("".join(chars), "Hello, World") with self.assertRaises(errors.ResourceNotFound): with self.fs.open("nothere.txt") as f: pass with self.assertRaises(errors.FileExpected): with self.fs.open("foo") as f: pass def test_gets(self): self.assertEqual(self.fs.readtext("top.txt"), "Hello, World") self.assertEqual(self.fs.readtext("foo/bar/egg"), "foofoo") self.assertEqual(self.fs.readbytes("top.txt"), b"Hello, World") self.assertEqual(self.fs.readbytes("foo/bar/egg"), b"foofoo") with self.assertRaises(errors.ResourceNotFound): self.fs.readbytes("what.txt") def test_walk_files(self): source_files = sorted(walk.walk_files(self.source_fs)) 
archive_files = sorted(walk.walk_files(self.fs)) self.assertEqual(source_files, archive_files) def test_implied_dir(self): self.fs.getinfo("foo/bar") self.fs.getinfo("foo") pyfilesystem2-2.4.12/tests/test_base.py000066400000000000000000000027741400005060600201050ustar00rootroot00000000000000"""Test (abstract) base FS class.""" from __future__ import unicode_literals import unittest from fs.base import FS from fs import errors class DummyFS(FS): def getinfo(self, path, namespaces=None): pass def listdir(self, path): pass def makedir(self, path, permissions=None, recreate=False): pass def openbin(self, path, mode="r", buffering=-1, **options): pass def remove(self, path): pass def removedir(self, path): pass def setinfo(self, path, info): pass class TestBase(unittest.TestCase): def setUp(self): self.fs = DummyFS() def test_validatepath(self): """Test validatepath method.""" with self.assertRaises(TypeError): self.fs.validatepath(b"bytes") self.fs._meta["invalid_path_chars"] = "Z" with self.assertRaises(errors.InvalidCharsInPath): self.fs.validatepath("Time for some ZZZs") self.fs.validatepath("fine") self.fs.validatepath("good.fine") self.fs._meta["invalid_path_chars"] = "" self.fs.validatepath("Time for some ZZZs") def mock_getsyspath(path): return path self.fs.getsyspath = mock_getsyspath self.fs._meta["max_sys_path_length"] = 10 self.fs.validatepath("0123456789") self.fs.validatepath("012345678") self.fs.validatepath("01234567") with self.assertRaises(errors.InvalidPath): self.fs.validatepath("0123456789A") pyfilesystem2-2.4.12/tests/test_bulk.py000066400000000000000000000006321400005060600201170ustar00rootroot00000000000000from __future__ import unicode_literals import unittest from fs._bulk import Copier, _Task from fs.errors import BulkCopyFailed class BrokenTask(_Task): def __call__(self): 1 / 0 class TestBulk(unittest.TestCase): def test_worker_error(self): with self.assertRaises(BulkCopyFailed): with Copier(num_workers=2) as copier: 
copier.queue.put(BrokenTask()) pyfilesystem2-2.4.12/tests/test_copy.py000066400000000000000000000323531400005060600201410ustar00rootroot00000000000000from __future__ import unicode_literals import errno import datetime import os import unittest import tempfile import shutil import calendar import fs.copy from fs import open_fs class TestCopy(unittest.TestCase): def test_copy_fs(self): for workers in (0, 1, 2, 4): src_fs = open_fs("mem://") src_fs.makedirs("foo/bar") src_fs.makedirs("foo/empty") src_fs.touch("test.txt") src_fs.touch("foo/bar/baz.txt") dst_fs = open_fs("mem://") fs.copy.copy_fs(src_fs, dst_fs, workers=workers) self.assertTrue(dst_fs.isdir("foo/empty")) self.assertTrue(dst_fs.isdir("foo/bar")) self.assertTrue(dst_fs.isfile("test.txt")) def test_copy_value_error(self): src_fs = open_fs("mem://") dst_fs = open_fs("mem://") with self.assertRaises(ValueError): fs.copy.copy_fs(src_fs, dst_fs, workers=-1) def test_copy_dir(self): src_fs = open_fs("mem://") src_fs.makedirs("foo/bar") src_fs.makedirs("foo/empty") src_fs.touch("test.txt") src_fs.touch("foo/bar/baz.txt") for workers in (0, 1, 2, 4): with open_fs("mem://") as dst_fs: fs.copy.copy_dir(src_fs, "/foo", dst_fs, "/", workers=workers) self.assertTrue(dst_fs.isdir("bar")) self.assertTrue(dst_fs.isdir("empty")) self.assertTrue(dst_fs.isfile("bar/baz.txt")) def test_copy_large(self): data1 = b"foo" * 512 * 1024 data2 = b"bar" * 2 * 512 * 1024 data3 = b"baz" * 3 * 512 * 1024 data4 = b"egg" * 7 * 512 * 1024 with open_fs("temp://") as src_fs: src_fs.writebytes("foo", data1) src_fs.writebytes("bar", data2) src_fs.makedir("dir1").writebytes("baz", data3) src_fs.makedirs("dir2/dir3").writebytes("egg", data4) for workers in (0, 1, 2, 4): with open_fs("temp://") as dst_fs: fs.copy.copy_fs(src_fs, dst_fs, workers=workers) self.assertEqual(dst_fs.readbytes("foo"), data1) self.assertEqual(dst_fs.readbytes("bar"), data2) self.assertEqual(dst_fs.readbytes("dir1/baz"), data3) 
                    self.assertEqual(dst_fs.readbytes("dir2/dir3/egg"), data4)

    def test_copy_dir_on_copy(self):
        src_fs = open_fs("mem://")
        src_fs.touch("baz.txt")

        on_copy_calls = []

        def on_copy(*args):
            on_copy_calls.append(args)

        dst_fs = open_fs("mem://")
        fs.copy.copy_dir(src_fs, "/", dst_fs, "/", on_copy=on_copy)

        self.assertEqual(on_copy_calls, [(src_fs, "/baz.txt", dst_fs, "/baz.txt")])

    def mkdirp(self, path):
        # os.makedirs(path, exist_ok=True) only for python3.2+
        try:
            os.makedirs(path)
        except OSError as exc:
            if exc.errno == errno.EEXIST and os.path.isdir(path):
                pass
            else:
                raise

    def _create_sandbox_dir(self, prefix="pyfilesystem2_sandbox_", home=None):
        if home is None:
            return tempfile.mkdtemp(prefix=prefix)
        else:
            sandbox_path = os.path.join(home, prefix)
            self.mkdirp(sandbox_path)
            return sandbox_path

    def _touch(self, root, filepath):
        # create abs filename
        abs_filepath = os.path.join(root, filepath)
        # create dir
        dirname = os.path.dirname(abs_filepath)
        self.mkdirp(dirname)
        # touch file
        with open(abs_filepath, "a"):
            os.utime(
                abs_filepath, None
            )  # update the mtime in case the file exists, same as touch
        return abs_filepath

    def _write_file(self, filepath, write_chars=1024):
        with open(filepath, "w") as f:
            f.write("1" * write_chars)
        return filepath

    def _delay_file_utime(self, filepath, delta_sec):
        utcnow = datetime.datetime.utcnow()
        unix_timestamp = calendar.timegm(utcnow.timetuple())
        times = unix_timestamp + delta_sec, unix_timestamp + delta_sec
        os.utime(filepath, times)

    def test_copy_file_if_newer_same_fs(self):
        src_fs = open_fs("mem://")
        src_fs.makedir("foo2").touch("exists")
        src_fs.makedir("foo1").touch("test1.txt")
        src_fs.settimes(
            "foo2/exists", datetime.datetime.utcnow() + datetime.timedelta(hours=1)
        )
        self.assertTrue(
            fs.copy.copy_file_if_newer(
                src_fs, "foo1/test1.txt", src_fs, "foo2/test1.txt.copy"
            )
        )
        self.assertFalse(
            fs.copy.copy_file_if_newer(src_fs, "foo1/test1.txt", src_fs, "foo2/exists")
        )
        self.assertTrue(src_fs.exists("foo2/test1.txt.copy"))

    def test_copy_file_if_newer_dst_older(self):
        try:
            # create first dst ==> dst is older than the src ==> file should be copied
            dst_dir = self._create_sandbox_dir()
            dst_file1 = self._touch(dst_dir, "file1.txt")
            self._write_file(dst_file1)

            src_dir = self._create_sandbox_dir()
            src_file1 = self._touch(src_dir, "file1.txt")
            self._write_file(src_file1)
            # ensure src file is newer than dst, changing its modification time
            self._delay_file_utime(src_file1, delta_sec=60)

            src_fs = open_fs("osfs://" + src_dir)
            dst_fs = open_fs("osfs://" + dst_dir)

            self.assertTrue(dst_fs.exists("/file1.txt"))
            copied = fs.copy.copy_file_if_newer(
                src_fs, "/file1.txt", dst_fs, "/file1.txt"
            )
            self.assertTrue(copied)
            self.assertTrue(dst_fs.exists("/file1.txt"))
        finally:
            shutil.rmtree(src_dir)
            shutil.rmtree(dst_dir)

    def test_copy_file_if_newer_dst_doesnt_exists(self):
        try:
            src_dir = self._create_sandbox_dir()
            src_file1 = self._touch(src_dir, "file1.txt")
            self._write_file(src_file1)

            dst_dir = self._create_sandbox_dir()

            src_fs = open_fs("osfs://" + src_dir)
            dst_fs = open_fs("osfs://" + dst_dir)

            copied = fs.copy.copy_file_if_newer(
                src_fs, "/file1.txt", dst_fs, "/file1.txt"
            )
            self.assertTrue(copied)
            self.assertTrue(dst_fs.exists("/file1.txt"))
        finally:
            shutil.rmtree(src_dir)
            shutil.rmtree(dst_dir)

    def test_copy_file_if_newer_dst_is_newer(self):
        try:
            src_dir = self._create_sandbox_dir()
            src_file1 = self._touch(src_dir, "file1.txt")
            self._write_file(src_file1)

            dst_dir = self._create_sandbox_dir()
            dst_file1 = self._touch(dst_dir, "file1.txt")
            self._write_file(dst_file1)

            src_fs = open_fs("osfs://" + src_dir)
            dst_fs = open_fs("osfs://" + dst_dir)

            self.assertTrue(dst_fs.exists("/file1.txt"))
            copied = fs.copy.copy_file_if_newer(
                src_fs, "/file1.txt", dst_fs, "/file1.txt"
            )
            self.assertEqual(copied, False)
        finally:
            shutil.rmtree(src_dir)
            shutil.rmtree(dst_dir)

    def test_copy_fs_if_newer_dst_older(self):
        try:
            # create first dst ==> dst is older than the src ==> file should be copied
            dst_dir = self._create_sandbox_dir()
            dst_file1 = self._touch(dst_dir, "file1.txt")
            self._write_file(dst_file1)

            src_dir = self._create_sandbox_dir()
            src_file1 = self._touch(src_dir, "file1.txt")
            self._write_file(src_file1)
            # ensure src file is newer than dst, changing its modification time
            self._delay_file_utime(src_file1, delta_sec=60)

            src_fs = open_fs("osfs://" + src_dir)
            dst_fs = open_fs("osfs://" + dst_dir)

            self.assertTrue(dst_fs.exists("/file1.txt"))

            copied = []

            def on_copy(src_fs, src_path, dst_fs, dst_path):
                copied.append(dst_path)

            fs.copy.copy_fs_if_newer(src_fs, dst_fs, on_copy=on_copy)
            self.assertEqual(copied, ["/file1.txt"])
            self.assertTrue(dst_fs.exists("/file1.txt"))
            src_fs.close()
            dst_fs.close()
        finally:
            shutil.rmtree(src_dir)
            shutil.rmtree(dst_dir)

    def test_copy_fs_if_newer_when_dst_doesnt_exists(self):
        try:
            src_dir = self._create_sandbox_dir()
            src_file1 = self._touch(src_dir, "file1.txt")
            self._write_file(src_file1)
            src_file2 = self._touch(src_dir, "one_level_down" + os.sep + "file2.txt")
            self._write_file(src_file2)

            dst_dir = self._create_sandbox_dir()

            src_fs = open_fs("osfs://" + src_dir)
            dst_fs = open_fs("osfs://" + dst_dir)

            copied = []

            def on_copy(src_fs, src_path, dst_fs, dst_path):
                copied.append(dst_path)

            fs.copy.copy_fs_if_newer(src_fs, dst_fs, on_copy=on_copy)
            self.assertEqual(copied, ["/file1.txt", "/one_level_down/file2.txt"])
            self.assertTrue(dst_fs.exists("/file1.txt"))
            self.assertTrue(dst_fs.exists("/one_level_down/file2.txt"))
            src_fs.close()
            dst_fs.close()
        finally:
            shutil.rmtree(src_dir)
            shutil.rmtree(dst_dir)

    def test_copy_fs_if_newer_dont_copy_when_dst_exists(self):
        try:
            # src is older than dst => no copy should be necessary
            src_dir = self._create_sandbox_dir()
            src_file1 = self._touch(src_dir, "file1.txt")
            self._write_file(src_file1)

            dst_dir = self._create_sandbox_dir()
            dst_file1 = self._touch(dst_dir, "file1.txt")
            self._write_file(dst_file1)
            # ensure dst file is newer than src, changing its modification time
            self._delay_file_utime(dst_file1, delta_sec=60)

            src_fs = open_fs("osfs://" + src_dir)
            dst_fs = open_fs("osfs://" + dst_dir)

            self.assertTrue(dst_fs.exists("/file1.txt"))

            copied = []

            def on_copy(src_fs, src_path, dst_fs, dst_path):
                copied.append(dst_path)

            fs.copy.copy_fs_if_newer(src_fs, dst_fs, on_copy=on_copy)
            self.assertEqual(copied, [])
            self.assertTrue(dst_fs.exists("/file1.txt"))
            src_fs.close()
            dst_fs.close()
        finally:
            shutil.rmtree(src_dir)
            shutil.rmtree(dst_dir)

    def test_copy_dir_if_newer_one_dst_doesnt_exist(self):
        try:
            src_dir = self._create_sandbox_dir()
            src_file1 = self._touch(src_dir, "file1.txt")
            self._write_file(src_file1)
            src_file2 = self._touch(src_dir, "one_level_down" + os.sep + "file2.txt")
            self._write_file(src_file2)

            dst_dir = self._create_sandbox_dir()
            dst_file1 = self._touch(dst_dir, "file1.txt")
            self._write_file(dst_file1)
            # ensure dst file is newer than src, changing its modification time
            self._delay_file_utime(dst_file1, delta_sec=60)

            src_fs = open_fs("osfs://" + src_dir)
            dst_fs = open_fs("osfs://" + dst_dir)

            copied = []

            def on_copy(src_fs, src_path, dst_fs, dst_path):
                copied.append(dst_path)

            fs.copy.copy_dir_if_newer(src_fs, "/", dst_fs, "/", on_copy=on_copy)
            self.assertEqual(copied, ["/one_level_down/file2.txt"])
            self.assertTrue(dst_fs.exists("/one_level_down/file2.txt"))
            src_fs.close()
            dst_fs.close()
        finally:
            shutil.rmtree(src_dir)
            shutil.rmtree(dst_dir)

    def test_copy_dir_if_newer_same_fs(self):
        try:
            src_dir = self._create_sandbox_dir()
            src_file1 = self._touch(src_dir, "src" + os.sep + "file1.txt")
            self._write_file(src_file1)
            self._create_sandbox_dir(home=src_dir)

            src_fs = open_fs("osfs://" + src_dir)

            copied = []

            def on_copy(src_fs, src_path, dst_fs, dst_path):
                copied.append(dst_path)

            fs.copy.copy_dir_if_newer(src_fs, "/src", src_fs, "/dst", on_copy=on_copy)
            self.assertEqual(copied, ["/dst/file1.txt"])
            self.assertTrue(src_fs.exists("/dst/file1.txt"))
            src_fs.close()
        finally:
            shutil.rmtree(src_dir)

    def test_copy_dir_if_newer_multiple_files(self):
        try:
            src_dir = self._create_sandbox_dir()
            src_fs = open_fs("osfs://" + src_dir)
            src_fs.makedirs("foo/bar")
            src_fs.makedirs("foo/empty")
            src_fs.touch("test.txt")
            src_fs.touch("foo/bar/baz.txt")

            dst_dir = self._create_sandbox_dir()
            dst_fs = open_fs("osfs://" + dst_dir)

            fs.copy.copy_dir_if_newer(src_fs, "/foo", dst_fs, "/")
            self.assertTrue(dst_fs.isdir("bar"))
            self.assertTrue(dst_fs.isdir("empty"))
            self.assertTrue(dst_fs.isfile("bar/baz.txt"))
        finally:
            shutil.rmtree(src_dir)
            shutil.rmtree(dst_dir)


if __name__ == "__main__":
    unittest.main()

# ==== pyfilesystem2-2.4.12/tests/test_encoding.py ====

from __future__ import unicode_literals

import os
import platform
import shutil
import tempfile
import unittest

import pytest
import six

import fs
from fs.osfs import OSFS

if platform.system() != "Windows":

    @pytest.mark.skipif(
        platform.system() == "Darwin", reason="Bad unicode not possible on OSX"
    )
    class TestEncoding(unittest.TestCase):

        TEST_FILENAME = b"foo\xb1bar"  # fsdecode throws error on Windows
        TEST_FILENAME_UNICODE = fs.fsdecode(TEST_FILENAME)

        def setUp(self):
            dir_path = self.dir_path = tempfile.mkdtemp()
            if six.PY2:
                with open(os.path.join(dir_path, self.TEST_FILENAME), "wb") as f:
                    f.write(b"baz")
            else:
                with open(
                    os.path.join(dir_path, self.TEST_FILENAME_UNICODE), "wb"
                ) as f:
                    f.write(b"baz")

        def tearDown(self):
            shutil.rmtree(self.dir_path)

        def test_open(self):
            with OSFS(self.dir_path) as test_fs:
                self.assertTrue(test_fs.exists(self.TEST_FILENAME_UNICODE))
                self.assertTrue(test_fs.isfile(self.TEST_FILENAME_UNICODE))
                self.assertFalse(test_fs.isdir(self.TEST_FILENAME_UNICODE))
                with test_fs.open(self.TEST_FILENAME_UNICODE, "rb") as f:
                    self.assertEqual(f.read(), b"baz")
                self.assertEqual(test_fs.readtext(self.TEST_FILENAME_UNICODE), "baz")
                test_fs.remove(self.TEST_FILENAME_UNICODE)
                self.assertFalse(test_fs.exists(self.TEST_FILENAME_UNICODE))

        def test_listdir(self):
            with OSFS(self.dir_path) as test_fs:
                dirlist = test_fs.listdir("/")
                self.assertEqual(dirlist, [self.TEST_FILENAME_UNICODE])
                self.assertEqual(test_fs.readtext(dirlist[0]), "baz")

        def test_scandir(self):
            with OSFS(self.dir_path) as test_fs:
                for info in test_fs.scandir("/"):
                    self.assertIsInstance(info.name, six.text_type)
                    self.assertEqual(info.name, self.TEST_FILENAME_UNICODE)

# ==== pyfilesystem2-2.4.12/tests/test_enums.py ====

import os

from fs import enums

import unittest


class TestEnums(unittest.TestCase):
    def test_enums(self):
        self.assertEqual(enums.Seek.current, os.SEEK_CUR)
        self.assertEqual(enums.Seek.end, os.SEEK_END)
        self.assertEqual(enums.Seek.set, os.SEEK_SET)
        self.assertEqual(enums.ResourceType.unknown, 0)

# ==== pyfilesystem2-2.4.12/tests/test_error_tools.py ====

from __future__ import unicode_literals

import errno
import unittest

from fs.error_tools import convert_os_errors
from fs import errors as fserrors


class TestErrorTools(unittest.TestCase):
    def assert_convert_os_errors(self):
        with self.assertRaises(fserrors.ResourceNotFound):
            with convert_os_errors("foo", "test"):
                raise OSError(errno.ENOENT)

# ==== pyfilesystem2-2.4.12/tests/test_errors.py ====

from __future__ import unicode_literals

import multiprocessing
import unittest

from six import text_type

from fs import errors
from fs.errors import CreateFailed


class TestErrors(unittest.TestCase):
    def test_str(self):
        err = errors.FSError("oh dear")
        repr(err)
        self.assertEqual(text_type(err), "oh dear")

    def test_unsupported(self):
        err = errors.Unsupported("stuff")
        repr(err)
        self.assertEqual(text_type(err), "not supported")

    def test_raise_in_multiprocessing(self):
        # Without the __reduce__ methods in FSError subclasses, this test will
        # hang forever.
        tests = [
            [errors.ResourceNotFound, "some_path"],
            [errors.FilesystemClosed],
            [errors.CreateFailed],
            [errors.NoSysPath, "some_path"],
            [errors.NoURL, "some_path", "some_purpose"],
            [errors.Unsupported],
            [errors.IllegalBackReference, "path"],
            [errors.MissingInfoNamespace, "path"],
        ]
        try:
            pool = multiprocessing.Pool(1)
            for args in tests:
                exc = args[0](*args[1:])
                exc.__reduce__()
                with self.assertRaises(args[0]):
                    pool.apply(_multiprocessing_test_task, args)
        finally:
            pool.close()


def _multiprocessing_test_task(err, *args):
    raise err(*args)


class TestCreateFailed(unittest.TestCase):
    def test_catch_all(self):

        errors = (ZeroDivisionError, ValueError, CreateFailed)

        @CreateFailed.catch_all
        def test(x):
            raise errors[x]

        for index, _exc in enumerate(errors):
            try:
                test(index)
            except Exception as e:
                self.assertIsInstance(e, CreateFailed)
                if e.exc is not None:
                    self.assertNotIsInstance(e.exc, CreateFailed)

# ==== pyfilesystem2-2.4.12/tests/test_filesize.py ====

from __future__ import unicode_literals

from fs import filesize

import unittest


class TestFilesize(unittest.TestCase):
    def test_traditional(self):
        self.assertEqual(filesize.traditional(0), "0 bytes")
        self.assertEqual(filesize.traditional(1), "1 byte")
        self.assertEqual(filesize.traditional(2), "2 bytes")
        self.assertEqual(filesize.traditional(1024), "1.0 KB")
        self.assertEqual(filesize.traditional(1024 * 1024), "1.0 MB")
        self.assertEqual(filesize.traditional(1024 * 1024 + 1), "1.0 MB")
        self.assertEqual(filesize.traditional(1.5 * 1024 * 1024), "1.5 MB")

    def test_binary(self):
        self.assertEqual(filesize.binary(0), "0 bytes")
        self.assertEqual(filesize.binary(1), "1 byte")
        self.assertEqual(filesize.binary(2), "2 bytes")
        self.assertEqual(filesize.binary(1024), "1.0 KiB")
        self.assertEqual(filesize.binary(1024 * 1024), "1.0 MiB")
        self.assertEqual(filesize.binary(1024 * 1024 + 1), "1.0 MiB")
        self.assertEqual(filesize.binary(1.5 * 1024 * 1024), "1.5 MiB")

    def test_decimal(self):
        self.assertEqual(filesize.decimal(0), "0 bytes")
        self.assertEqual(filesize.decimal(1), "1 byte")
        self.assertEqual(filesize.decimal(2), "2 bytes")
        self.assertEqual(filesize.decimal(1000), "1.0 kB")
        self.assertEqual(filesize.decimal(1000 * 1000), "1.0 MB")
        self.assertEqual(filesize.decimal(1000 * 1000 + 1), "1.0 MB")
        self.assertEqual(filesize.decimal(1200 * 1000), "1.2 MB")

    def test_errors(self):
        with self.assertRaises(TypeError):
            filesize.traditional("foo")

# ==== pyfilesystem2-2.4.12/tests/test_fscompat.py ====

from __future__ import unicode_literals

import unittest

import six

from fs._fscompat import fsencode, fsdecode, fspath


class PathMock(object):
    def __init__(self, path):
        self._path = path

    def __fspath__(self):
        return self._path


class BrokenPathMock(object):
    def __init__(self, path):
        self._path = path

    def __fspath__(self):
        return self.broken


class TestFSCompact(unittest.TestCase):
    def test_fspath(self):
        path = PathMock("foo")
        self.assertEqual(fspath(path), "foo")
        path = PathMock(b"foo")
        self.assertEqual(fspath(path), b"foo")
        path = "foo"
        assert path is fspath(path)
        with self.assertRaises(TypeError):
            fspath(100)
        with self.assertRaises(TypeError):
            fspath(PathMock(5))
        with self.assertRaises(AttributeError):
            fspath(BrokenPathMock("foo"))

    def test_fsencode(self):
        encode_bytes = fsencode(b"foo")
        assert isinstance(encode_bytes, bytes)
        self.assertEqual(encode_bytes, b"foo")
        encode_bytes = fsencode("foo")
        assert isinstance(encode_bytes, bytes)
        self.assertEqual(encode_bytes, b"foo")
        with self.assertRaises(TypeError):
            fsencode(5)

    def test_fsdecode(self):
        decode_text = fsdecode(b"foo")
        assert isinstance(decode_text, six.text_type)
        decode_text = fsdecode("foo")
        assert isinstance(decode_text, six.text_type)
        with self.assertRaises(TypeError):
            fsdecode(5)

# ==== pyfilesystem2-2.4.12/tests/test_ftp_parse.py ====

from __future__ import unicode_literals

import time
import unittest

from fs import _ftp_parse as ftp_parse

try:
    from unittest import mock
except ImportError:
    import mock


time2017 = time.struct_time([2017, 11, 28, 1, 1, 1, 1, 332, 0])


class TestFTPParse(unittest.TestCase):
    @mock.patch("time.localtime")
    def test_parse_time(self, mock_localtime):
        self.assertEqual(
            ftp_parse._parse_time("JUL 05 1974", formats=["%b %d %Y"]), 142214400.0
        )
        mock_localtime.return_value = time2017
        self.assertEqual(
            ftp_parse._parse_time("JUL 05 02:00", formats=["%b %d %H:%M"]), 1499220000.0
        )
        self.assertEqual(
            ftp_parse._parse_time("05-07-17 02:00AM", formats=["%d-%m-%y %I:%M%p"]),
            1499220000.0,
        )
        self.assertEqual(ftp_parse._parse_time("notadate", formats=["%b %d %Y"]), None)

    def test_parse(self):
        self.assertEqual(ftp_parse.parse([""]), [])

    def test_parse_line(self):
        self.assertIs(ftp_parse.parse_line("not a dir"), None)

    @mock.patch("time.localtime")
    def test_decode_linux(self, mock_localtime):
        mock_localtime.return_value = time2017
        directory = """\
lrwxrwxrwx 1 0 0 19 Jan 18 2006 debian -> ./pub/mirror/debian
drwxr-xr-x 10 0 0 4096 Aug 03 09:21 debian-archive
lrwxrwxrwx 1 0 0 27 Nov 30 2015 debian-backports -> pub/mirror/debian-backports
drwxr-xr-x 12 0 0 4096 Sep 29 13:13 pub
-rw-r--r-- 1 0 0 26 Mar 04 2010 robots.txt
drwxr-xr-x 8 foo bar 4096 Oct 4 09:05 test
drwxr-xr-x 2 foo-user foo-group 0 Jan 5 11:59 240485
"""
        expected = [
            {
                "access": {
                    "group": "0",
                    "permissions": [
                        "g_r",
                        "g_w",
                        "g_x",
                        "o_r",
                        "o_w",
                        "o_x",
                        "u_r",
                        "u_w",
                        "u_x",
                    ],
                    "user": "0",
                },
                "basic": {"is_dir": True, "name": "debian"},
                "details": {"modified": 1137542400.0, "size": 19, "type": 1},
                "ftp": {
                    "ls": "lrwxrwxrwx 1 0 0 19 Jan 18 2006 debian -> ./pub/mirror/debian"
                },
            },
            {
                "access": {
                    "group": "0",
                    "permissions": ["g_r", "g_x", "o_r", "o_x", "u_r", "u_w", "u_x"],
                    "user": "0",
                },
                "basic": {"is_dir": True, "name": "debian-archive"},
                "details": {"modified": 1501752060.0, "size": 4096, "type": 1},
                "ftp": {
                    "ls": "drwxr-xr-x 10 0 0 4096 Aug 03 09:21 debian-archive"
                },
            },
            {
                "access": {
                    "group": "0",
                    "permissions": [
                        "g_r",
                        "g_w",
                        "g_x",
                        "o_r",
                        "o_w",
                        "o_x",
                        "u_r",
                        "u_w",
                        "u_x",
                    ],
                    "user": "0",
                },
                "basic": {"is_dir": True, "name": "debian-backports"},
                "details": {"modified": 1448841600.0, "size": 27, "type": 1},
                "ftp": {
                    "ls": "lrwxrwxrwx 1 0 0 27 Nov 30 2015 debian-backports -> pub/mirror/debian-backports"
                },
            },
            {
                "access": {
                    "group": "0",
                    "permissions": ["g_r", "g_x", "o_r", "o_x", "u_r", "u_w", "u_x"],
                    "user": "0",
                },
                "basic": {"is_dir": True, "name": "pub"},
                "details": {"modified": 1506690780.0, "size": 4096, "type": 1},
                "ftp": {
                    "ls": "drwxr-xr-x 12 0 0 4096 Sep 29 13:13 pub"
                },
            },
            {
                "access": {
                    "group": "0",
                    "permissions": ["g_r", "o_r", "u_r", "u_w"],
                    "user": "0",
                },
                "basic": {"is_dir": False, "name": "robots.txt"},
                "details": {"modified": 1267660800.0, "size": 26, "type": 2},
                "ftp": {
                    "ls": "-rw-r--r-- 1 0 0 26 Mar 04 2010 robots.txt"
                },
            },
            {
                "access": {
                    "group": "bar",
                    "permissions": ["g_r", "g_x", "o_r", "o_x", "u_r", "u_w", "u_x"],
                    "user": "foo",
                },
                "basic": {"is_dir": True, "name": "test"},
                "details": {"modified": 1507107900.0, "size": 4096, "type": 1},
                "ftp": {
                    "ls": "drwxr-xr-x 8 foo bar 4096 Oct 4 09:05 test"
                },
            },
            {
                "access": {
                    "group": "foo-group",
                    "permissions": ["g_r", "g_x", "o_r", "o_x", "u_r", "u_w", "u_x"],
                    "user": "foo-user",
                },
                "basic": {"is_dir": True, "name": "240485"},
                "details": {"modified": 1483617540.0, "size": 0, "type": 1},
                "ftp": {
                    "ls": "drwxr-xr-x 2 foo-user foo-group 0 Jan 5 11:59 240485"
                },
            },
        ]
        parsed = ftp_parse.parse(directory.splitlines())
        self.assertEqual(parsed, expected)

    @mock.patch("time.localtime")
    def test_decode_windowsnt(self, mock_localtime):
        mock_localtime.return_value = time2017
        # NOTE: the "<DIR>" markers below were stripped by an HTML cleanup pass
        # in the archived copy; they are restored here since the directory
        # entries with type 1 require them.
        directory = """\
unparsable line
11-02-17 02:00AM <DIR> docs
11-02-17 02:12PM <DIR> images
11-02-17 02:12PM <DIR> AM to PM
11-02-17 03:33PM 9276 logo.gif
05-11-20 22:11 <DIR> src
11-02-17 01:23 1 12
11-02-17 4:54 0 icon.bmp
11-02-17 4:54AM 0 icon.gif
11-02-17 4:54PM 0 icon.png
11-02-17 16:54 0 icon.jpg
"""
        expected = [
            {
                "basic": {"is_dir": True, "name": "docs"},
                "details": {"modified": 1486778400.0, "type": 1},
                "ftp": {"ls": "11-02-17 02:00AM <DIR> docs"},
            },
            {
                "basic": {"is_dir": True, "name": "images"},
                "details": {"modified": 1486822320.0, "type": 1},
                "ftp": {"ls": "11-02-17 02:12PM <DIR> images"},
            },
            {
                "basic": {"is_dir": True, "name": "AM to PM"},
                "details": {"modified": 1486822320.0, "type": 1},
                "ftp": {"ls": "11-02-17 02:12PM <DIR> AM to PM"},
            },
            {
                "basic": {"is_dir": False, "name": "logo.gif"},
                "details": {"modified": 1486827180.0, "size": 9276, "type": 2},
                "ftp": {"ls": "11-02-17 03:33PM 9276 logo.gif"},
            },
            {
                "basic": {"is_dir": True, "name": "src"},
                "details": {"modified": 1604614260.0, "type": 1},
                "ftp": {"ls": "05-11-20 22:11 <DIR> src"},
            },
            {
                "basic": {"is_dir": False, "name": "12"},
                "details": {"modified": 1486776180.0, "size": 1, "type": 2},
                "ftp": {"ls": "11-02-17 01:23 1 12"},
            },
            {
                "basic": {"is_dir": False, "name": "icon.bmp"},
                "details": {"modified": 1486788840.0, "size": 0, "type": 2},
                "ftp": {"ls": "11-02-17 4:54 0 icon.bmp"},
            },
            {
                "basic": {"is_dir": False, "name": "icon.gif"},
                "details": {"modified": 1486788840.0, "size": 0, "type": 2},
                "ftp": {"ls": "11-02-17 4:54AM 0 icon.gif"},
            },
            {
                "basic": {"is_dir": False, "name": "icon.png"},
                "details": {"modified": 1486832040.0, "size": 0, "type": 2},
                "ftp": {"ls": "11-02-17 4:54PM 0 icon.png"},
            },
            {
                "basic": {"is_dir": False, "name": "icon.jpg"},
                "details": {"modified": 1486832040.0, "size": 0, "type": 2},
                "ftp": {"ls": "11-02-17 16:54 0 icon.jpg"},
            },
        ]
        parsed = ftp_parse.parse(directory.splitlines())
        self.assertEqual(parsed, expected)

# ==== pyfilesystem2-2.4.12/tests/test_ftpfs.py ====

# coding: utf-8
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import socket
import os
import platform
import shutil
import tempfile
import time
import unittest
import uuid

import pytest

from six import text_type

from ftplib import error_perm
from ftplib import error_temp

from pyftpdlib.authorizers import DummyAuthorizer

from fs import errors
from fs.opener import open_fs
from fs.ftpfs import FTPFS, ftp_errors
from fs.path import join
from fs.subfs import SubFS
from fs.test import FSTestCases

# Prevent socket timeouts from slowing tests too much
socket.setdefaulttimeout(1)


class TestFTPFSClass(unittest.TestCase):
    def test_parse_ftp_time(self):
        self.assertIsNone(FTPFS._parse_ftp_time("notreallyatime"))
        t = FTPFS._parse_ftp_time("19740705000000")
        self.assertEqual(t, 142214400)

    def test_parse_mlsx(self):
        info = list(
            FTPFS._parse_mlsx(["create=19740705000000;modify=19740705000000; /foo"])
        )[0]
        self.assertEqual(info["details"]["modified"], 142214400)
        self.assertEqual(info["details"]["created"], 142214400)

        info = list(FTPFS._parse_mlsx(["foo=bar; .."]))
        self.assertEqual(info, [])

    def test_parse_mlsx_type(self):
        lines = [
            "Type=cdir;Modify=20180731114724;UNIX.mode=0755; /tmp",
            "Type=pdir;Modify=20180731112024;UNIX.mode=0775; /",
            "Type=file;Size=331523;Modify=20180731112041;UNIX.mode=0644; a.csv",
            "Type=file;Size=368340;Modify=20180731112041;UNIX.mode=0644; b.csv",
        ]
        expected = [
            {
                "basic": {"name": "a.csv", "is_dir": False},
                "ftp": {
                    "type": "file",
                    "size": "331523",
                    "modify": "20180731112041",
                    "unix.mode": "0644",
                },
                "details": {"type": 2, "size": 331523, "modified": 1533036041},
            },
            {
                "basic": {"name": "b.csv", "is_dir": False},
                "ftp": {
                    "type": "file",
                    "size": "368340",
                    "modify": "20180731112041",
                    "unix.mode": "0644",
                },
                "details": {"type": 2, "size": 368340, "modified": 1533036041},
            },
        ]
        info = list(FTPFS._parse_mlsx(lines))
        self.assertEqual(info, expected)

    def test_opener(self):
        ftp_fs = open_fs("ftp://will:wfc@ftp.example.org")
        self.assertIsInstance(ftp_fs, FTPFS)
        self.assertEqual(ftp_fs.host, "ftp.example.org")


class TestFTPErrors(unittest.TestCase):
    """Test the ftp_errors context manager."""

    def test_manager(self):
        mem_fs = open_fs("mem://")

        with self.assertRaises(errors.ResourceError):
            with ftp_errors(mem_fs, path="foo"):
                raise error_temp

        with self.assertRaises(errors.OperationFailed):
            with ftp_errors(mem_fs):
                raise error_temp

        with self.assertRaises(errors.InsufficientStorage):
            with ftp_errors(mem_fs):
                raise error_perm("552 foo")

        with self.assertRaises(errors.ResourceNotFound):
            with ftp_errors(mem_fs):
                raise error_perm("501 foo")

        with self.assertRaises(errors.PermissionDenied):
            with ftp_errors(mem_fs):
                raise error_perm("999 foo")

    def test_manager_with_host(self):
        mem_fs = open_fs("mem://")
        mem_fs.host = "ftp.example.com"

        with self.assertRaises(errors.RemoteConnectionError) as err_info:
            with ftp_errors(mem_fs):
                raise EOFError
        self.assertEqual(str(err_info.exception), "lost connection to ftp.example.com")

        with self.assertRaises(errors.RemoteConnectionError) as err_info:
            with ftp_errors(mem_fs):
                raise socket.error
        self.assertEqual(
            str(err_info.exception), "unable to connect to ftp.example.com"
        )


@pytest.mark.slow
class TestFTPFS(FSTestCases, unittest.TestCase):

    user = "user"
    pasw = "1234"

    @classmethod
    def setUpClass(cls):
        from pyftpdlib.test import ThreadedTestFTPd

        super(TestFTPFS, cls).setUpClass()

        cls._temp_dir = tempfile.mkdtemp("ftpfs2tests")
        cls._temp_path = os.path.join(cls._temp_dir, text_type(uuid.uuid4()))
        os.mkdir(cls._temp_path)

        cls.server = ThreadedTestFTPd()
        cls.server.shutdown_after = -1
        cls.server.handler.authorizer = DummyAuthorizer()
        cls.server.handler.authorizer.add_user(
            cls.user, cls.pasw, cls._temp_path, perm="elradfmw"
        )
        cls.server.handler.authorizer.add_anonymous(cls._temp_path)
        cls.server.start()

        # Don't know why this is necessary on Windows
        if platform.system() == "Windows":
            time.sleep(0.1)

        # Poll until a connection can be made
        if not cls.server.is_alive():
            raise RuntimeError("could not start FTP server.")

    @classmethod
    def tearDownClass(cls):
        cls.server.stop()
        shutil.rmtree(cls._temp_dir)
        super(TestFTPFS, cls).tearDownClass()

    def make_fs(self):
        return open_fs(
            "ftp://{}:{}@{}:{}".format(
                self.user, self.pasw, self.server.host, self.server.port
            )
        )

    def tearDown(self):
        shutil.rmtree(self._temp_path)
        os.mkdir(self._temp_path)
        super(TestFTPFS, self).tearDown()

    def test_ftp_url(self):
        self.assertEqual(
            self.fs.ftp_url,
            "ftp://{}:{}@{}:{}".format(
                self.user, self.pasw, self.server.host, self.server.port
            ),
        )

    def test_geturl(self):
        self.fs.makedir("foo")
        self.fs.create("bar")
        self.fs.create("foo/bar")
        self.assertEqual(
            self.fs.geturl("foo"),
            "ftp://{}:{}@{}:{}/foo".format(
                self.user, self.pasw, self.server.host, self.server.port
            ),
        )
        self.assertEqual(
            self.fs.geturl("bar"),
            "ftp://{}:{}@{}:{}/bar".format(
                self.user, self.pasw, self.server.host, self.server.port
            ),
        )
        self.assertEqual(
            self.fs.geturl("foo/bar"),
            "ftp://{}:{}@{}:{}/foo/bar".format(
                self.user, self.pasw, self.server.host, self.server.port
            ),
        )

    def test_host(self):
        self.assertEqual(self.fs.host, self.server.host)

    def test_connection_error(self):
        fs = FTPFS("ftp.not.a.chance", timeout=1)

        with self.assertRaises(errors.RemoteConnectionError):
            fs.listdir("/")

        with self.assertRaises(errors.RemoteConnectionError):
            fs.makedir("foo")

        with self.assertRaises(errors.RemoteConnectionError):
            fs.open("foo.txt")

    def test_getmeta_unicode_path(self):
        self.assertTrue(self.fs.getmeta().get("unicode_paths"))
        self.fs.features
        del self.fs.features["UTF8"]
        self.assertFalse(self.fs.getmeta().get("unicode_paths"))

    def test_opener_path(self):
        self.fs.makedir("foo")
        self.fs.writetext("foo/bar", "baz")
        ftp_fs = open_fs(
            "ftp://user:1234@{}:{}/foo".format(self.server.host, self.server.port)
        )
        self.assertIsInstance(ftp_fs, SubFS)
        self.assertEqual(ftp_fs.readtext("bar"), "baz")
        ftp_fs.close()

    def test_create(self):

        directory = join("home", self.user, "test", "directory")
        base = "ftp://user:1234@{}:{}/foo".format(self.server.host, self.server.port)
        url = "{}/{}".format(base, directory)

        # Make sure a non-existing directory raises `CreateFailed`
        with self.assertRaises(errors.CreateFailed):
            ftp_fs = open_fs(url)

        # Open with `create` and try touching a file
        with open_fs(url, create=True) as ftp_fs:
            ftp_fs.touch("foo")

        # Open the base filesystem and check the subdirectory exists
        with open_fs(base) as ftp_fs:
            self.assertTrue(ftp_fs.isdir(directory))
            self.assertTrue(ftp_fs.isfile(join(directory, "foo")))

        # Open without `create` and check the file exists
        with open_fs(url) as ftp_fs:
            self.assertTrue(ftp_fs.isfile("foo"))

        # Open with create and check this does not fail
        with open_fs(url, create=True) as ftp_fs:
            self.assertTrue(ftp_fs.isfile("foo"))


class TestFTPFSNoMLSD(TestFTPFS):
    def make_fs(self):
        ftp_fs = super(TestFTPFSNoMLSD, self).make_fs()
        ftp_fs.features
        del ftp_fs.features["MLST"]
        return ftp_fs

    def test_features(self):
        pass


@pytest.mark.slow
class TestAnonFTPFS(FSTestCases, unittest.TestCase):

    user = "anonymous"
    pasw = ""

    @classmethod
    def setUpClass(cls):
        from pyftpdlib.test import ThreadedTestFTPd

        super(TestAnonFTPFS, cls).setUpClass()

        cls._temp_dir = tempfile.mkdtemp("ftpfs2tests")
        cls._temp_path = os.path.join(cls._temp_dir, text_type(uuid.uuid4()))
        os.mkdir(cls._temp_path)

        cls.server = ThreadedTestFTPd()
        cls.server.shutdown_after = -1
        cls.server.handler.authorizer = DummyAuthorizer()
        cls.server.handler.authorizer.add_anonymous(cls._temp_path, perm="elradfmw")
        cls.server.start()

        # Don't know why this is necessary on Windows
        if platform.system() == "Windows":
            time.sleep(0.1)

        # Poll until a connection can be made
        if not cls.server.is_alive():
            raise RuntimeError("could not start FTP server.")

    @classmethod
    def tearDownClass(cls):
        cls.server.stop()
        shutil.rmtree(cls._temp_dir)
        super(TestAnonFTPFS, cls).tearDownClass()

    def make_fs(self):
        return open_fs("ftp://{}:{}".format(self.server.host, self.server.port))

    def tearDown(self):
        shutil.rmtree(self._temp_path)
        os.mkdir(self._temp_path)
        super(TestAnonFTPFS, self).tearDown()

    def test_ftp_url(self):
        self.assertEqual(
            self.fs.ftp_url, "ftp://{}:{}".format(self.server.host, self.server.port)
        )

    def test_geturl(self):
        self.fs.makedir("foo")
        self.fs.create("bar")
        self.fs.create("foo/bar")
        self.assertEqual(
            self.fs.geturl("foo"),
            "ftp://{}:{}/foo".format(self.server.host, self.server.port),
        )
        self.assertEqual(
            self.fs.geturl("bar"),
            "ftp://{}:{}/bar".format(self.server.host, self.server.port),
        )
        self.assertEqual(
            self.fs.geturl("foo/bar"),
            "ftp://{}:{}/foo/bar".format(self.server.host, self.server.port),
        )

# ==== pyfilesystem2-2.4.12/tests/test_glob.py ====

from __future__ import unicode_literals

import unittest

from fs import glob
from fs import open_fs


class TestGlob(unittest.TestCase):
    def setUp(self):
        fs = self.fs = open_fs("mem://")
        fs.writetext("foo.py", "Hello, World")
        fs.touch("bar.py")
        fs.touch("baz.py")
        fs.makedirs("egg")
        fs.writetext("egg/foo.py", "from fs import open_fs")
        fs.touch("egg/foo.pyc")
        fs.makedirs("a/b/c/").writetext("foo.py", "import fs")
        repr(fs.glob)

    def test_match(self):
        tests = [
            ("*.?y", "/test.py", True),
            ("*.py", "/test.py", True),
            ("*.py", "/test.pc", False),
            ("*.py", "/foo/test.py", False),
            ("foo/*.py", "/foo/test.py", True),
            ("foo/*.py", "/bar/foo/test.py", False),
            ("?oo/*.py", "/foo/test.py", True),
            ("*/*.py", "/foo/test.py", True),
            ("foo/*.py", "/bar/foo/test.py", False),
            ("**/foo/*.py", "/bar/foo/test.py", True),
            ("foo/**/bar/*.py", "/foo/bar/test.py", True),
            ("foo/**/bar/*.py", "/foo/baz/egg/bar/test.py", True),
            ("foo/**/bar/*.py", "/foo/baz/egg/bar/egg/test.py", False),
            ("**", "/test.py", True),
            ("**", "/test", True),
            ("**", "/test/", True),
            ("**/", "/test/", True),
            ("**/", "/test.py", False),
        ]
        for pattern, path, expected in tests:
            self.assertEqual(glob.match(pattern, path), expected)

        # Run a second time to test cache
        for pattern, path, expected in tests:
            self.assertEqual(glob.match(pattern, path), expected)

    def test_count_1dir(self):
        globber = glob.BoundGlobber(self.fs)
        counts = globber("*.py").count()
        self.assertEqual(counts, glob.Counts(files=3, directories=0, data=12))
        repr(globber("*.py"))

    def test_count_2dir(self):
        globber = glob.BoundGlobber(self.fs)
        counts = globber("*/*.py").count()
        self.assertEqual(counts, glob.Counts(files=1, directories=0, data=22))

    def test_count_recurse_dir(self):
        globber = glob.BoundGlobber(self.fs)
        counts = globber("**/*.py").count()
        self.assertEqual(counts, glob.Counts(files=5, directories=0, data=43))

    def test_count_lines(self):
        globber = glob.BoundGlobber(self.fs)
        line_counts = globber("**/*.py").count_lines()
        self.assertEqual(line_counts, glob.LineCounts(lines=3, non_blank=3))

    def test_count_dirs(self):
        globber = glob.BoundGlobber(self.fs)
        counts = globber("**/?/").count()
        self.assertEqual(counts, glob.Counts(files=0, directories=3, data=0))

    def test_count_all(self):
        globber = glob.BoundGlobber(self.fs)
        counts = globber("**").count()
        self.assertEqual(counts, glob.Counts(files=6, directories=4, data=43))

        counts = globber("**/").count()
        self.assertEqual(counts, glob.Counts(files=0, directories=4, data=0))

    def test_remove(self):
        globber = glob.BoundGlobber(self.fs)
        self.assertTrue(self.fs.exists("egg/foo.pyc"))
        removed_count = globber("**/*.pyc").remove()
        self.assertEqual(removed_count, 1)
        self.assertFalse(self.fs.exists("egg/foo.pyc"))

    def test_remove_dir(self):
        globber = glob.BoundGlobber(self.fs)
        self.assertTrue(self.fs.exists("egg/foo.pyc"))
        removed_count = globber("**/?/").remove()
        self.assertEqual(removed_count, 3)
        self.assertFalse(self.fs.exists("a"))
        self.assertTrue(self.fs.exists("egg"))

    def test_remove_all(self):
        globber = glob.BoundGlobber(self.fs)
        globber("**").remove()
        self.assertEqual(sorted(self.fs.listdir("/")), [])

# ==== pyfilesystem2-2.4.12/tests/test_imports.py ====

import sys
import unittest


class TestImports(unittest.TestCase):
    def test_import_path(self):
        """Test import fs also imports other symbols."""
        restore_fs = sys.modules.pop("fs")
        sys.modules.pop("fs.path")
        try:
            import fs

            fs.path
            fs.Seek
            fs.ResourceType
            fs.open_fs
        finally:
            sys.modules["fs"] = restore_fs

# ==== pyfilesystem2-2.4.12/tests/test_info.py ====

from __future__ import unicode_literals

import datetime
import unittest

import pytz

from fs.enums import ResourceType
from fs.info import Info
from fs.permissions import Permissions
from fs.time import datetime_to_epoch


class TestInfo(unittest.TestCase):
    def test_empty(self):
        """Test missing info."""
        info = Info({"basic": {}, "details": {}, "access": {}, "link": {}})
        self.assertIsNone(info.name)
        self.assertIsNone(info.is_dir)
        self.assertEqual(info.type, ResourceType.unknown)
        self.assertIsNone(info.accessed)
        self.assertIsNone(info.modified)
        self.assertIsNone(info.created)
        self.assertIsNone(info.metadata_changed)
        self.assertIsNone(info.accessed)
        self.assertIsNone(info.permissions)
        self.assertIsNone(info.user)
        self.assertIsNone(info.group)
        self.assertIsNone(info.target)
        self.assertFalse(info.is_link)

    def test_access(self):
        info = Info(
            {
                "access": {
                    "uid": 10,
                    "gid": 12,
                    "user": "will",
                    "group": "devs",
                    "permissions": ["u_r"],
                }
            }
        )
        self.assertIsInstance(info.permissions, Permissions)
        self.assertEqual(info.permissions, Permissions(user="r"))
        self.assertEqual(info.user, "will")
        self.assertEqual(info.group, "devs")
        self.assertEqual(info.uid, 10)
        self.assertEqual(info.gid, 12)

    def test_link(self):
        info = Info({"link": {"target": "foo"}})
        self.assertTrue(info.is_link)
        self.assertEqual(info.target, "foo")

    def test_basic(self):
        # Check simple file
        info = Info({"basic": {"name": "bar.py", "is_dir": False}})
        self.assertEqual(info.name, "bar.py")
        self.assertIsInstance(info.is_dir, bool)
        self.assertFalse(info.is_dir)
        # NOTE: the repr strings below were stripped by an HTML cleanup pass in
        # the archived copy; they are restored from the surrounding assertions.
        self.assertEqual(repr(info), "<file 'bar.py'>")
        self.assertEqual(info.suffix, ".py")

        # Check dir
        info = Info({"basic": {"name": "foo", "is_dir": True}})
        self.assertTrue(info.is_dir)
        self.assertEqual(repr(info), "<dir 'foo'>")
        self.assertEqual(info.suffix, "")

    def test_details(self):
        dates = [
            datetime.datetime(2016, 7, 5, tzinfo=pytz.UTC),
            datetime.datetime(2016, 7, 6, tzinfo=pytz.UTC),
            datetime.datetime(2016, 7, 7, tzinfo=pytz.UTC),
            datetime.datetime(2016, 7, 8, tzinfo=pytz.UTC),
        ]
        epochs = [datetime_to_epoch(d) for d in dates]

        info = Info(
            {
                "details": {
                    "accessed": epochs[0],
                    "modified": epochs[1],
                    "created": epochs[2],
                    "metadata_changed": epochs[3],
                    "type": int(ResourceType.file),
                }
            }
        )
        self.assertEqual(info.accessed, dates[0])
        self.assertEqual(info.modified, dates[1])
        self.assertEqual(info.created, dates[2])
        self.assertEqual(info.metadata_changed, dates[3])
        self.assertIsInstance(info.type, ResourceType)
        self.assertEqual(info.type, ResourceType.file)
        self.assertEqual(info.type, 2)

    def test_has_namespace(self):
        info = Info({"basic": {}, "details": {}})
        self.assertTrue(info.has_namespace("basic"))
        self.assertTrue(info.has_namespace("details"))
        self.assertFalse(info.has_namespace("access"))

    def test_copy(self):
        info = Info({"basic": {"name": "bar", "is_dir": False}})
        info_copy = info.copy()
        self.assertEqual(info.raw, info_copy.raw)

    def test_get(self):
        info = Info({"baz": {}})
        self.assertIsNone(info.get("foo", "bar"))
        self.assertIsNone(info.get("baz", "bar"))

    def test_suffix(self):
        info = Info({"basic": {"name": "foo.tar.gz"}})
        self.assertEqual(info.suffix, ".gz")
        self.assertEqual(info.suffixes, [".tar", ".gz"])
        self.assertEqual(info.stem, "foo")
        info = Info({"basic": {"name": "foo"}})
        self.assertEqual(info.suffix, "")
        self.assertEqual(info.suffixes, [])
        self.assertEqual(info.stem, "foo")
        info = Info({"basic": {"name": ".foo"}})
        self.assertEqual(info.suffix, "")
        self.assertEqual(info.suffixes, [])
        self.assertEqual(info.stem, ".foo")

# ==== pyfilesystem2-2.4.12/tests/test_iotools.py ====

from __future__ import unicode_literals

import io
import unittest

import six

from fs import iotools
from fs import tempfs
from fs.test import UNICODE_TEXT


class TestIOTools(unittest.TestCase):
    def setUp(self):
        self.fs = tempfs.TempFS("iotoolstest")

    def tearDown(self):
        self.fs.close()
        del self.fs

    def test_make_stream(self):
        """Test make_stream"""
        self.fs.writebytes("foo.bin", b"foofoo")

        with self.fs.openbin("foo.bin") as f:
            data = f.read()
            self.assertTrue(isinstance(data, bytes))

        with self.fs.openbin("text.txt", "wb") as f:
            f.write(UNICODE_TEXT.encode("utf-8"))

        with self.fs.openbin("text.txt") as f:
            with iotools.make_stream("text.txt", f, "rt") as f2:
                repr(f2)
                text = f2.read()
                self.assertIsInstance(text, six.text_type)

    def test_readinto(self):
        self.fs.writebytes("bytes.bin", b"foofoobarbar")

        with self.fs.openbin("bytes.bin") as bin_file:
            with iotools.make_stream("bytes.bin", bin_file, "rb") as f:
                data = bytearray(3)
                bytes_read = f.readinto(data)
                self.assertEqual(bytes_read, 3)
                self.assertEqual(bytes(data), b"foo")
                self.assertEqual(f.readline(1), b"f")

        def no_readinto(size):
            raise AttributeError

        with self.fs.openbin("bytes.bin") as bin_file:
            bin_file.readinto = no_readinto
            with iotools.make_stream("bytes.bin", bin_file, "rb") as f:
                data = bytearray(3)
                bytes_read = f.readinto(data)
                self.assertEqual(bytes_read, 3)
                self.assertEqual(bytes(data), b"foo")
                self.assertEqual(f.readline(1), b"f")

    def test_readinto1(self):
        self.fs.writebytes("bytes.bin", b"foofoobarbar")

        with self.fs.openbin("bytes.bin") as bin_file:
            with iotools.make_stream("bytes.bin", bin_file, "rb") as f:
                data = bytearray(3)
                bytes_read = f.readinto1(data)
                self.assertEqual(bytes_read, 3)
                self.assertEqual(bytes(data), b"foo")
                self.assertEqual(f.readline(1), b"f")

        def no_readinto(size):
            raise AttributeError

        with self.fs.openbin("bytes.bin") as bin_file:
            bin_file.readinto = no_readinto
            with iotools.make_stream("bytes.bin", bin_file, "rb") as f:
                data = bytearray(3)
                bytes_read = f.readinto1(data)
                self.assertEqual(bytes_read, 3)
                self.assertEqual(bytes(data), b"foo")
                self.assertEqual(f.readline(1), b"f")

    def test_isatty(self):
        with self.fs.openbin("text.txt", "wb") as f:
            with iotools.make_stream("text.txt", f, "wb") as f1:
self.assertFalse(f1.isatty()) def test_readlines(self): self.fs.writebytes("foo", b"barbar\nline1\nline2") with self.fs.open("foo", "rb") as f: f = iotools.make_stream("foo", f, "rb") self.assertEqual(list(f), [b"barbar\n", b"line1\n", b"line2"]) with self.fs.open("foo", "rt") as f: f = iotools.make_stream("foo", f, "rb") self.assertEqual(f.readlines(), ["barbar\n", "line1\n", "line2"]) def test_readall(self): self.fs.writebytes("foo", b"foobar") with self.fs.open("foo", "rt") as f: self.assertEqual(f.read(), "foobar") def test_writelines(self): with self.fs.open("foo", "wb") as f: f = iotools.make_stream("foo", f, "rb") f.writelines([b"foo", b"bar", b"baz"]) self.assertEqual(self.fs.readbytes("foo"), b"foobarbaz") def test_seekable(self): f = io.BytesIO(b"HelloWorld") raw_wrapper = iotools.RawWrapper(f) self.assertTrue(raw_wrapper.seekable()) def no_seekable(): raise AttributeError("seekable") f.seekable = no_seekable def seek(pos, whence): raise IOError("no seek") raw_wrapper.seek = seek self.assertFalse(raw_wrapper.seekable()) def test_line_iterator(self): f = io.BytesIO(b"Hello\nWorld\n\nfoo") self.assertEqual( list(iotools.line_iterator(f)), [b"Hello\n", b"World\n", b"\n", b"foo"] ) f = io.BytesIO(b"Hello\nWorld\n\nfoo") self.assertEqual(list(iotools.line_iterator(f, 10)), [b"Hello\n", b"Worl"]) def test_make_stream_writer(self): f = io.BytesIO() s = iotools.make_stream("foo", f, "wb", buffering=1) self.assertIsInstance(s, io.BufferedWriter) s.write(b"Hello") self.assertEqual(f.getvalue(), b"Hello") def test_make_stream_reader(self): f = io.BytesIO(b"Hello") s = iotools.make_stream("foo", f, "rb", buffering=1) self.assertIsInstance(s, io.BufferedReader) self.assertEqual(s.read(), b"Hello") def test_make_stream_reader_writer(self): f = io.BytesIO(b"Hello") s = iotools.make_stream("foo", f, "+b", buffering=1) self.assertIsInstance(s, io.BufferedRandom) self.assertEqual(s.read(), b"Hello") s.write(b" World") self.assertEqual(f.getvalue(), b"Hello World") 
# pyfilesystem2-2.4.12/tests/test_lrucache.py

from __future__ import unicode_literals

import unittest

from fs import lrucache


class TestLRUCache(unittest.TestCase):
    def setUp(self):
        self.lrucache = lrucache.LRUCache(3)

    def test_lrucache(self):
        # insert some values
        self.lrucache["foo"] = 1
        self.lrucache["bar"] = 2
        self.lrucache["baz"] = 3
        self.assertIn("foo", self.lrucache)
        # Cache size is 3, so the following should kick oldest one out
        self.lrucache["egg"] = 4
        self.assertNotIn("foo", self.lrucache)
        self.assertIn("egg", self.lrucache)
        # cache is now full
        # look up two keys
        self.lrucache["bar"]
        self.lrucache["baz"]
        # Insert a new value
        self.lrucache["eggegg"] = 5
        # Check it kicked out the 'oldest' key
        self.assertNotIn("egg", self.lrucache)

# pyfilesystem2-2.4.12/tests/test_memoryfs.py

from __future__ import unicode_literals

import posixpath
import unittest

import pytest

from fs import memoryfs
from fs.test import FSTestCases
from fs.test import UNICODE_TEXT

try:
    # Only supported on Python 3.4+
    import tracemalloc
except ImportError:
    tracemalloc = None


class TestMemoryFS(FSTestCases, unittest.TestCase):
    """Test OSFS implementation."""

    def make_fs(self):
        return memoryfs.MemoryFS()

    def _create_many_files(self):
        for parent_dir in {"/", "/one", "/one/two", "/one/other-two/three"}:
            self.fs.makedirs(parent_dir, recreate=True)
            for file_id in range(50):
                self.fs.writetext(
                    posixpath.join(parent_dir, str(file_id)), UNICODE_TEXT
                )

    @pytest.mark.skipif(
        not tracemalloc, reason="`tracemalloc` isn't supported on this Python version."
    )
    def test_close_mem_free(self):
        """Ensure all file memory is freed when calling close().

        Prevents regression against issue #308.
""" trace_filters = [tracemalloc.Filter(True, "*/memoryfs.py")] tracemalloc.start() before = tracemalloc.take_snapshot().filter_traces(trace_filters) self._create_many_files() after_create = tracemalloc.take_snapshot().filter_traces(trace_filters) self.fs.close() after_close = tracemalloc.take_snapshot().filter_traces(trace_filters) tracemalloc.stop() [diff_create] = after_create.compare_to( before, key_type="filename", cumulative=True ) self.assertGreater( diff_create.size_diff, 0, "Memory usage didn't increase after creating files; diff is %0.2f KiB." % (diff_create.size_diff / 1024.0), ) [diff_close] = after_close.compare_to( after_create, key_type="filename", cumulative=True ) self.assertLess( diff_close.size_diff, 0, "Memory usage increased after closing the file system; diff is %0.2f KiB." % (diff_close.size_diff / 1024.0), ) pyfilesystem2-2.4.12/tests/test_mirror.py000066400000000000000000000064241400005060600205010ustar00rootroot00000000000000from __future__ import unicode_literals import unittest from fs.mirror import mirror from fs import open_fs class TestMirror(unittest.TestCase): WORKERS = 0 # Single threaded def _contents(self, fs): """Extract an FS in to a simple data structure.""" contents = [] for path, dirs, files in fs.walk(): for info in dirs: _path = info.make_path(path) contents.append((_path, "dir", b"")) for info in files: _path = info.make_path(path) contents.append((_path, "file", fs.readbytes(_path))) return sorted(contents) def assert_compare_fs(self, fs1, fs2): """Assert filesystems and contents are the same.""" self.assertEqual(self._contents(fs1), self._contents(fs2)) def test_empty_mirror(self): m1 = open_fs("mem://") m2 = open_fs("mem://") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) def test_mirror_one_file(self): m1 = open_fs("mem://") m1.writetext("foo", "hello") m2 = open_fs("mem://") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) def test_mirror_one_file_one_dir(self): m1 = 
open_fs("mem://") m1.writetext("foo", "hello") m1.makedir("bar") m2 = open_fs("mem://") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) def test_mirror_delete_replace(self): m1 = open_fs("mem://") m1.writetext("foo", "hello") m1.makedir("bar") m2 = open_fs("mem://") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) m2.remove("foo") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) m2.removedir("bar") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) def test_mirror_extra_dir(self): m1 = open_fs("mem://") m1.writetext("foo", "hello") m1.makedir("bar") m2 = open_fs("mem://") m2.makedir("baz") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) def test_mirror_extra_file(self): m1 = open_fs("mem://") m1.writetext("foo", "hello") m1.makedir("bar") m2 = open_fs("mem://") m2.makedir("baz") m2.touch("egg") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) def test_mirror_wrong_type(self): m1 = open_fs("mem://") m1.writetext("foo", "hello") m1.makedir("bar") m2 = open_fs("mem://") m2.makedir("foo") m2.touch("bar") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) def test_mirror_update(self): m1 = open_fs("mem://") m1.writetext("foo", "hello") m1.makedir("bar") m2 = open_fs("mem://") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) m2.appendtext("foo", " world!") mirror(m1, m2, workers=self.WORKERS) self.assert_compare_fs(m1, m2) class TestMirrorWorkers1(TestMirror): WORKERS = 1 class TestMirrorWorkers2(TestMirror): WORKERS = 2 class TestMirrorWorkers4(TestMirror): WORKERS = 4 pyfilesystem2-2.4.12/tests/test_mode.py000066400000000000000000000030651400005060600201110ustar00rootroot00000000000000from __future__ import unicode_literals import unittest from six import text_type from fs.mode import check_readable, check_writable, Mode class TestMode(unittest.TestCase): def test_checks(self): self.assertTrue(check_readable("r")) 
        self.assertTrue(check_readable("r+"))
        self.assertTrue(check_readable("rt"))
        self.assertTrue(check_readable("rb"))
        self.assertFalse(check_readable("w"))
        self.assertTrue(check_readable("w+"))
        self.assertFalse(check_readable("wt"))
        self.assertFalse(check_readable("wb"))
        self.assertFalse(check_readable("a"))

        self.assertTrue(check_writable("w"))
        self.assertTrue(check_writable("w+"))
        self.assertTrue(check_writable("r+"))
        self.assertFalse(check_writable("r"))
        self.assertTrue(check_writable("a"))

    def test_mode_object(self):
        with self.assertRaises(ValueError):
            Mode("")
        with self.assertRaises(ValueError):
            Mode("J")
        with self.assertRaises(ValueError):
            Mode("b")
        with self.assertRaises(ValueError):
            Mode("rtb")

        mode = Mode("w")
        repr(mode)
        self.assertEqual(text_type(mode), "w")
        self.assertTrue(mode.create)
        self.assertFalse(mode.reading)
        self.assertTrue(mode.writing)
        self.assertFalse(mode.appending)
        self.assertFalse(mode.updating)
        self.assertTrue(mode.truncate)
        self.assertFalse(mode.exclusive)
        self.assertFalse(mode.binary)
        self.assertTrue(mode.text)

# pyfilesystem2-2.4.12/tests/test_mountfs.py

from __future__ import unicode_literals

import unittest

from fs.mountfs import MountError, MountFS
from fs.memoryfs import MemoryFS
from fs.tempfs import TempFS
from fs.test import FSTestCases


class TestMountFS(FSTestCases, unittest.TestCase):
    """Test OSFS implementation."""

    def make_fs(self):
        fs = MountFS()
        mem_fs = MemoryFS()
        fs.mount("/", mem_fs)
        return fs


class TestMountFS2(FSTestCases, unittest.TestCase):
    """Test OSFS implementation."""

    def make_fs(self):
        fs = MountFS()
        mem_fs = MemoryFS()
        fs.mount("/foo", mem_fs)
        return fs.opendir("foo")


class TestMountFSBehaviours(unittest.TestCase):
    def test_bad_mount(self):
        mount_fs = MountFS()
        with self.assertRaises(TypeError):
            mount_fs.mount("foo", 5)
        with self.assertRaises(TypeError):
            mount_fs.mount("foo", b"bar")

    def test_listdir(self):
        mount_fs = MountFS()
self.assertEqual(mount_fs.listdir("/"), []) m1 = MemoryFS() m3 = MemoryFS() m4 = TempFS() mount_fs.mount("/m1", m1) mount_fs.mount("/m2", "temp://") mount_fs.mount("/m3", m3) with self.assertRaises(MountError): mount_fs.mount("/m3/foo", m4) self.assertEqual(sorted(mount_fs.listdir("/")), ["m1", "m2", "m3"]) m3.makedir("foo") self.assertEqual(sorted(mount_fs.listdir("/m3")), ["foo"]) def test_auto_close(self): """Test MountFS auto close is working""" mount_fs = MountFS() m1 = MemoryFS() m2 = MemoryFS() mount_fs.mount("/m1", m1) mount_fs.mount("/m2", m2) self.assertFalse(m1.isclosed()) self.assertFalse(m2.isclosed()) mount_fs.close() self.assertTrue(m1.isclosed()) self.assertTrue(m2.isclosed()) def test_no_auto_close(self): """Test MountFS auto close can be disabled""" mount_fs = MountFS(auto_close=False) m1 = MemoryFS() m2 = MemoryFS() mount_fs.mount("/m1", m1) mount_fs.mount("/m2", m2) self.assertFalse(m1.isclosed()) self.assertFalse(m2.isclosed()) mount_fs.close() self.assertFalse(m1.isclosed()) self.assertFalse(m2.isclosed()) def test_empty(self): """Test MountFS with nothing mounted.""" mount_fs = MountFS() self.assertEqual(mount_fs.listdir("/"), []) def test_mount_self(self): mount_fs = MountFS() with self.assertRaises(ValueError): mount_fs.mount("/", mount_fs) def test_desc(self): mount_fs = MountFS() mount_fs.desc("/") pyfilesystem2-2.4.12/tests/test_move.py000066400000000000000000000016511400005060600201320ustar00rootroot00000000000000 from __future__ import unicode_literals import unittest import fs.move from fs import open_fs class TestMove(unittest.TestCase): def test_move_fs(self): src_fs = open_fs("mem://") src_fs.makedirs("foo/bar") src_fs.touch("test.txt") src_fs.touch("foo/bar/baz.txt") dst_fs = open_fs("mem://") fs.move.move_fs(src_fs, dst_fs) self.assertTrue(dst_fs.isdir("foo/bar")) self.assertTrue(dst_fs.isfile("test.txt")) self.assertTrue(src_fs.isempty("/")) def test_copy_dir(self): src_fs = open_fs("mem://") src_fs.makedirs("foo/bar") 
src_fs.touch("test.txt") src_fs.touch("foo/bar/baz.txt") dst_fs = open_fs("mem://") fs.move.move_dir(src_fs, "/foo", dst_fs, "/") self.assertTrue(dst_fs.isdir("bar")) self.assertTrue(dst_fs.isfile("bar/baz.txt")) self.assertFalse(src_fs.exists("foo")) pyfilesystem2-2.4.12/tests/test_multifs.py000066400000000000000000000105111400005060600206420ustar00rootroot00000000000000from __future__ import unicode_literals import unittest from fs.multifs import MultiFS from fs.memoryfs import MemoryFS from fs import errors from fs.test import FSTestCases class TestMultiFS(FSTestCases, unittest.TestCase): """Test OSFS implementation.""" def setUp(self): fs = MultiFS() mem_fs = MemoryFS() fs.add_fs("mem", mem_fs, write=True) self.fs = fs self.mem_fs = mem_fs def make_fs(self): fs = MultiFS() mem_fs = MemoryFS() fs.add_fs("mem", mem_fs, write=True) return fs def test_get_fs(self): self.assertIs(self.fs.get_fs("mem"), self.mem_fs) def test_which(self): self.fs.writebytes("foo", b"bar") self.assertEqual(self.fs.which("foo"), ("mem", self.mem_fs)) self.assertEqual(self.fs.which("bar", "w"), ("mem", self.mem_fs)) self.assertEqual(self.fs.which("baz"), (None, None)) def test_auto_close(self): """Test MultiFS auto close is working""" multi_fs = MultiFS() m1 = MemoryFS() m2 = MemoryFS() multi_fs.add_fs("m1", m1) multi_fs.add_fs("m2", m2) self.assertFalse(m1.isclosed()) self.assertFalse(m2.isclosed()) multi_fs.close() self.assertTrue(m1.isclosed()) self.assertTrue(m2.isclosed()) def test_no_auto_close(self): """Test MultiFS auto close can be disabled""" multi_fs = MultiFS(auto_close=False) self.assertEqual(repr(multi_fs), "MultiFS(auto_close=False)") m1 = MemoryFS() m2 = MemoryFS() multi_fs.add_fs("m1", m1) multi_fs.add_fs("m2", m2) self.assertFalse(m1.isclosed()) self.assertFalse(m2.isclosed()) multi_fs.close() self.assertFalse(m1.isclosed()) self.assertFalse(m2.isclosed()) def test_opener(self): """Test use of FS URLs.""" multi_fs = MultiFS() with self.assertRaises(TypeError): 
multi_fs.add_fs("foo", 5) multi_fs.add_fs("f1", "mem://") multi_fs.add_fs("f2", "temp://") self.assertIsInstance(multi_fs.get_fs("f1"), MemoryFS) def test_priority(self): """Test priority order is working""" m1 = MemoryFS() m2 = MemoryFS() m3 = MemoryFS() m1.writebytes("name", b"m1") m2.writebytes("name", b"m2") m3.writebytes("name", b"m3") multi_fs = MultiFS(auto_close=False) multi_fs.add_fs("m1", m1) multi_fs.add_fs("m2", m2) multi_fs.add_fs("m3", m3) self.assertEqual(multi_fs.readbytes("name"), b"m3") m1 = MemoryFS() m2 = MemoryFS() m3 = MemoryFS() m1.writebytes("name", b"m1") m2.writebytes("name", b"m2") m3.writebytes("name", b"m3") multi_fs = MultiFS(auto_close=False) multi_fs.add_fs("m1", m1) multi_fs.add_fs("m2", m2, priority=10) multi_fs.add_fs("m3", m3) self.assertEqual(multi_fs.readbytes("name"), b"m2") m1 = MemoryFS() m2 = MemoryFS() m3 = MemoryFS() m1.writebytes("name", b"m1") m2.writebytes("name", b"m2") m3.writebytes("name", b"m3") multi_fs = MultiFS(auto_close=False) multi_fs.add_fs("m1", m1) multi_fs.add_fs("m2", m2, priority=10) multi_fs.add_fs("m3", m3, priority=10) self.assertEqual(multi_fs.readbytes("name"), b"m3") m1 = MemoryFS() m2 = MemoryFS() m3 = MemoryFS() m1.writebytes("name", b"m1") m2.writebytes("name", b"m2") m3.writebytes("name", b"m3") multi_fs = MultiFS(auto_close=False) multi_fs.add_fs("m1", m1, priority=11) multi_fs.add_fs("m2", m2, priority=10) multi_fs.add_fs("m3", m3, priority=10) self.assertEqual(multi_fs.readbytes("name"), b"m1") def test_no_writable(self): fs = MultiFS() with self.assertRaises(errors.ResourceReadOnly): fs.writebytes("foo", b"bar") def test_validate_path(self): self.fs.write_fs = None self.fs.validatepath("foo") def test_listdir_duplicates(self): m1 = MemoryFS() m2 = MemoryFS() m1.touch("foo") m2.touch("foo") multi_fs = MultiFS() multi_fs.add_fs("m1", m1) multi_fs.add_fs("m2", m2) self.assertEqual(multi_fs.listdir("/"), ["foo"]) 
# pyfilesystem2-2.4.12/tests/test_new_name.py

from __future__ import unicode_literals

import unittest
import warnings

from fs.base import _new_name


class TestNewNameDecorator(unittest.TestCase):
    def double(self, n):
        "Double a number"
        return n * 2

    times_2 = _new_name(double, "times_2")

    def test_old_name(self):
        """Test _new_name method issues a warning"""
        with warnings.catch_warnings(record=True) as w:
            warnings.simplefilter("always")
            result = self.times_2(2)
            self.assertEqual(len(w), 1)
            self.assertEqual(w[0].category, DeprecationWarning)
            self.assertEqual(
                str(w[0].message),
                "method 'times_2' has been deprecated, please rename to 'double'",
            )
        self.assertEqual(result, 4)

# pyfilesystem2-2.4.12/tests/test_opener.py

from __future__ import unicode_literals

import os
import sys
import tempfile
import unittest

import pkg_resources
import pytest

from fs import open_fs, opener
from fs.osfs import OSFS
from fs.opener import registry, errors
from fs.memoryfs import MemoryFS
from fs.appfs import UserDataFS
from fs.opener.parse import ParseResult
from fs.opener.registry import Registry

try:
    from unittest import mock
except ImportError:
    import mock


class TestParse(unittest.TestCase):
    def test_registry_repr(self):
        str(registry)
        repr(registry)

    def test_parse_not_url(self):
        with self.assertRaises(errors.ParseError):
            opener.parse("foo/bar")

    def test_parse_simple(self):
        parsed = opener.parse("osfs://foo/bar")
        expected = ParseResult("osfs", None, None, "foo/bar", {}, None)
        self.assertEqual(expected, parsed)

    def test_parse_credentials(self):
        parsed = opener.parse("ftp://user:pass@ftp.example.org")
        expected = ParseResult("ftp", "user", "pass", "ftp.example.org", {}, None)
        self.assertEqual(expected, parsed)
        parsed = opener.parse("ftp://user@ftp.example.org")
        expected = ParseResult("ftp", "user", "", "ftp.example.org", {}, None)
self.assertEqual(expected, parsed) def test_parse_path(self): parsed = opener.parse("osfs://foo/bar!example.txt") expected = ParseResult("osfs", None, None, "foo/bar", {}, "example.txt") self.assertEqual(expected, parsed) def test_parse_params(self): parsed = opener.parse("ftp://ftp.example.org?proxy=ftp.proxy.org") expected = ParseResult( "ftp", None, None, "ftp.example.org", {"proxy": "ftp.proxy.org"}, None ) self.assertEqual(expected, parsed) def test_parse_params_multiple(self): parsed = opener.parse("ftp://ftp.example.org?foo&bar=1") expected = ParseResult( "ftp", None, None, "ftp.example.org", {"foo": "", "bar": "1"}, None ) self.assertEqual(expected, parsed) def test_parse_params_timeout(self): parsed = opener.parse("ftp://ftp.example.org?timeout=30") expected = ParseResult( "ftp", None, None, "ftp.example.org", {"timeout": "30"}, None ) self.assertEqual(expected, parsed) def test_parse_user_password_proxy(self): parsed = opener.parse("ftp://user:password@ftp.example.org?proxy=ftp.proxy.org") expected = ParseResult( "ftp", "user", "password", "ftp.example.org", {"proxy": "ftp.proxy.org"}, None, ) self.assertEqual(expected, parsed) def test_parse_user_password_decode(self): parsed = opener.parse("ftp://user%40large:password@ftp.example.org") expected = ParseResult( "ftp", "user@large", "password", "ftp.example.org", {}, None ) self.assertEqual(expected, parsed) def test_parse_resource_decode(self): parsed = opener.parse("ftp://user%40large:password@ftp.example.org/%7Econnolly") expected = ParseResult( "ftp", "user@large", "password", "ftp.example.org/~connolly", {}, None ) self.assertEqual(expected, parsed) def test_parse_params_decode(self): parsed = opener.parse("ftp://ftp.example.org?decode=is%20working") expected = ParseResult( "ftp", None, None, "ftp.example.org", {"decode": "is working"}, None ) self.assertEqual(expected, parsed) class TestRegistry(unittest.TestCase): def test_protocols(self): self.assertIsInstance(opener.registry.protocols, list) def 
test_registry_protocols(self): # Check registry.protocols list the names of all available extension extensions = [ pkg_resources.EntryPoint("proto1", "mod1"), pkg_resources.EntryPoint("proto2", "mod2"), ] m = mock.MagicMock(return_value=extensions) with mock.patch.object( sys.modules["pkg_resources"], "iter_entry_points", new=m ): self.assertIn("proto1", opener.registry.protocols) self.assertIn("proto2", opener.registry.protocols) def test_unknown_protocol(self): with self.assertRaises(errors.UnsupportedProtocol): opener.open_fs("unknown://") def test_entry_point_load_error(self): entry_point = mock.MagicMock() entry_point.load.side_effect = ValueError("some error") iter_entry_points = mock.MagicMock(return_value=iter([entry_point])) with mock.patch("pkg_resources.iter_entry_points", iter_entry_points): with self.assertRaises(errors.EntryPointError) as ctx: opener.open_fs("test://") self.assertEqual( "could not load entry point; some error", str(ctx.exception) ) def test_entry_point_type_error(self): class NotAnOpener(object): pass entry_point = mock.MagicMock() entry_point.load = mock.MagicMock(return_value=NotAnOpener) iter_entry_points = mock.MagicMock(return_value=iter([entry_point])) with mock.patch("pkg_resources.iter_entry_points", iter_entry_points): with self.assertRaises(errors.EntryPointError) as ctx: opener.open_fs("test://") self.assertEqual("entry point did not return an opener", str(ctx.exception)) def test_entry_point_create_error(self): class BadOpener(opener.Opener): def __init__(self, *args, **kwargs): raise ValueError("some creation error") def open_fs(self, *args, **kwargs): pass entry_point = mock.MagicMock() entry_point.load = mock.MagicMock(return_value=BadOpener) iter_entry_points = mock.MagicMock(return_value=iter([entry_point])) with mock.patch("pkg_resources.iter_entry_points", iter_entry_points): with self.assertRaises(errors.EntryPointError) as ctx: opener.open_fs("test://") self.assertEqual( "could not instantiate opener; some 
creation error", str(ctx.exception) ) def test_install(self): """Test Registry.install works as a decorator.""" registry = Registry() self.assertNotIn("foo", registry.protocols) @registry.install class FooOpener(opener.Opener): protocols = ["foo"] def open_fs(self, *args, **kwargs): pass self.assertIn("foo", registry.protocols) class TestManageFS(unittest.TestCase): def test_manage_fs_url(self): with opener.manage_fs("mem://") as mem_fs: self.assertIsInstance(mem_fs, MemoryFS) self.assertTrue(mem_fs.isclosed()) def test_manage_fs_obj(self): mem_fs = MemoryFS() with opener.manage_fs(mem_fs) as open_mem_fs: self.assertIs(mem_fs, open_mem_fs) self.assertFalse(mem_fs.isclosed()) def test_manage_fs_error(self): try: with opener.manage_fs("mem://") as mem_fs: 1 / 0 except ZeroDivisionError: pass self.assertTrue(mem_fs.isclosed()) @pytest.mark.usefixtures("mock_appdir_directories") class TestOpeners(unittest.TestCase): def test_repr(self): # Check __repr__ works for entry_point in pkg_resources.iter_entry_points("fs.opener"): _opener = entry_point.load() repr(_opener()) def test_open_osfs(self): fs = opener.open_fs("osfs://.") self.assertIsInstance(fs, OSFS) # test default protocol fs = opener.open_fs("./") self.assertIsInstance(fs, OSFS) def test_open_memfs(self): fs = opener.open_fs("mem://") self.assertIsInstance(fs, MemoryFS) def test_open_zipfs(self): fh, zip_name = tempfile.mkstemp() os.close(fh) try: # Test creating zip with opener.open_fs("zip://" + zip_name, create=True) as make_zip: make_zip.writetext("foo.txt", "foofoo") # Test opening zip with opener.open_fs("zip://" + zip_name, writeable=False) as zip_fs: self.assertEqual(zip_fs.readtext("foo.txt"), "foofoo") finally: os.remove(zip_name) def test_open_tarfs(self): fh, tar_name = tempfile.mkstemp(suffix=".tar.gz") os.close(fh) try: # Test creating tar with opener.open_fs("tar://" + tar_name, create=True) as make_tar: self.assertEqual(make_tar.compression, "gz") make_tar.writetext("foo.txt", "foofoo") # Test 
opening tar with opener.open_fs("tar://" + tar_name, writeable=False) as tar_fs: self.assertEqual(tar_fs.readtext("foo.txt"), "foofoo") finally: os.remove(tar_name) def test_open_fs(self): mem_fs = opener.open_fs("mem://") mem_fs_2 = opener.open_fs(mem_fs) self.assertEqual(mem_fs, mem_fs_2) def test_open_userdata(self): with self.assertRaises(errors.OpenerError): opener.open_fs("userdata://foo:bar:baz:egg") app_fs = opener.open_fs("userdata://fstest:willmcgugan:1.0", create=True) self.assertEqual(app_fs.app_dirs.appname, "fstest") self.assertEqual(app_fs.app_dirs.appauthor, "willmcgugan") self.assertEqual(app_fs.app_dirs.version, "1.0") def test_open_userdata_no_version(self): app_fs = opener.open_fs("userdata://fstest:willmcgugan", create=True) self.assertEqual(app_fs.app_dirs.appname, "fstest") self.assertEqual(app_fs.app_dirs.appauthor, "willmcgugan") self.assertEqual(app_fs.app_dirs.version, None) def test_user_data_opener(self): user_data_fs = open_fs("userdata://fstest:willmcgugan:1.0", create=True) self.assertIsInstance(user_data_fs, UserDataFS) user_data_fs.makedir("foo", recreate=True) user_data_fs.writetext("foo/bar.txt", "baz") user_data_fs_foo_dir = open_fs("userdata://fstest:willmcgugan:1.0/foo/") self.assertEqual(user_data_fs_foo_dir.readtext("bar.txt"), "baz") @mock.patch("fs.ftpfs.FTPFS") def test_open_ftp(self, mock_FTPFS): open_fs("ftp://foo:bar@ftp.example.org") mock_FTPFS.assert_called_once_with( "ftp.example.org", passwd="bar", port=21, user="foo", proxy=None, timeout=10 ) @mock.patch("fs.ftpfs.FTPFS") def test_open_ftp_proxy(self, mock_FTPFS): open_fs("ftp://foo:bar@ftp.example.org?proxy=ftp.proxy.org") mock_FTPFS.assert_called_once_with( "ftp.example.org", passwd="bar", port=21, user="foo", proxy="ftp.proxy.org", timeout=10, ) pyfilesystem2-2.4.12/tests/test_osfs.py000066400000000000000000000160701400005060600201370ustar00rootroot00000000000000# coding: utf-8 from __future__ import unicode_literals import errno import io import os import 
shutil import tempfile import sys import unittest import pytest from fs import osfs, open_fs from fs.path import relpath, dirname from fs import errors from fs.test import FSTestCases from six import text_type try: from unittest import mock except ImportError: import mock class TestOSFS(FSTestCases, unittest.TestCase): """Test OSFS implementation.""" def make_fs(self): temp_dir = tempfile.mkdtemp("fstestosfs") return osfs.OSFS(temp_dir) def destroy_fs(self, fs): self.fs.close() try: shutil.rmtree(fs.getsyspath("/")) except OSError: # Already deleted pass def _get_real_path(self, path): _path = os.path.join(self.fs.root_path, relpath(path)) return _path def assert_exists(self, path): _path = self._get_real_path(path) self.assertTrue(os.path.exists(_path)) def assert_not_exists(self, path): _path = self._get_real_path(path) self.assertFalse(os.path.exists(_path)) def assert_isfile(self, path): _path = self._get_real_path(path) self.assertTrue(os.path.isfile(_path)) def assert_isdir(self, path): _path = self._get_real_path(path) self.assertTrue(os.path.isdir(_path)) def assert_bytes(self, path, contents): assert isinstance(contents, bytes) _path = self._get_real_path(path) with io.open(_path, "rb") as f: data = f.read() self.assertEqual(data, contents) self.assertIsInstance(data, bytes) def assert_text(self, path, contents): assert isinstance(contents, text_type) _path = self._get_real_path(path) with io.open(_path, "rt", encoding="utf-8") as f: data = f.read() self.assertEqual(data, contents) self.assertIsInstance(data, text_type) def test_not_exists(self): with self.assertRaises(errors.CreateFailed): osfs.OSFS("/does/not/exists/") def test_expand_vars(self): self.fs.makedir("TYRIONLANISTER") self.fs.makedir("$FOO") path = self.fs.getsyspath("$FOO") os.environ["FOO"] = "TYRIONLANISTER" fs1 = osfs.OSFS(path) fs2 = osfs.OSFS(path, expand_vars=False) self.assertIn("TYRIONLANISTER", fs1.getsyspath("/")) self.assertNotIn("TYRIONLANISTER", fs2.getsyspath("/")) 
@pytest.mark.skipif(osfs.sendfile is None, reason="sendfile not supported") @pytest.mark.skipif( sys.version_info >= (3, 8), reason="the copy function uses sendfile in Python 3.8+, " "making the patched implementation irrelevant", ) def test_copy_sendfile(self): # try copying using sendfile with mock.patch.object(osfs, "sendfile") as sendfile: sendfile.side_effect = OSError(errno.ENOSYS, "sendfile not supported") self.test_copy() # check other errors are transmitted self.fs.touch("foo") with mock.patch.object(osfs, "sendfile") as sendfile: sendfile.side_effect = OSError(errno.EWOULDBLOCK) with self.assertRaises(OSError): self.fs.copy("foo", "foo_copy") # check parent exist and is dir with self.assertRaises(errors.ResourceNotFound): self.fs.copy("foo", "spam/eggs") with self.assertRaises(errors.DirectoryExpected): self.fs.copy("foo", "foo_copy/foo") def test_create(self): """Test create=True""" dir_path = tempfile.mkdtemp() try: create_dir = os.path.join(dir_path, "test_create") with osfs.OSFS(create_dir, create=True): self.assertTrue(os.path.isdir(create_dir)) self.assertTrue(os.path.isdir(create_dir)) finally: shutil.rmtree(dir_path) # Test exception when unable to create dir with tempfile.NamedTemporaryFile() as tmp_file: with self.assertRaises(errors.CreateFailed): # Trying to create a dir that exists as a file osfs.OSFS(tmp_file.name, create=True) def test_unicode_paths(self): dir_path = tempfile.mkdtemp() try: fs_dir = os.path.join(dir_path, "te\u0161t_\u00fanicod\u0113") os.mkdir(fs_dir) with osfs.OSFS(fs_dir): self.assertTrue(os.path.isdir(fs_dir)) finally: shutil.rmtree(dir_path) @pytest.mark.skipif(not hasattr(os, "symlink"), reason="No symlink support") def test_symlinks(self): with open(self._get_real_path("foo"), "wb") as f: f.write(b"foobar") os.symlink(self._get_real_path("foo"), self._get_real_path("bar")) self.assertFalse(self.fs.islink("foo")) self.assertFalse(self.fs.getinfo("foo", namespaces=["link"]).is_link) 
        self.assertTrue(self.fs.islink("bar"))
        self.assertTrue(self.fs.getinfo("bar", namespaces=["link"]).is_link)

        foo_info = self.fs.getinfo("foo", namespaces=["link", "lstat"])
        self.assertIn("link", foo_info.raw)
        self.assertIn("lstat", foo_info.raw)
        self.assertEqual(foo_info.get("link", "target"), None)
        self.assertEqual(foo_info.target, foo_info.raw["link"]["target"])
        bar_info = self.fs.getinfo("bar", namespaces=["link", "lstat"])
        self.assertIn("link", bar_info.raw)
        self.assertIn("lstat", bar_info.raw)

    def test_validatepath(self):
        """Check validatepath detects bad encodings."""
        with mock.patch("fs.osfs.fsencode") as fsencode:
            fsencode.side_effect = lambda error: "–".encode("ascii")
            with self.assertRaises(errors.InvalidCharsInPath):
                with self.fs.open("13 – Marked Register.pdf", "wb") as fh:
                    fh.write(b"foo")

    def test_consume_geturl(self):
        self.fs.create("foo")
        try:
            url = self.fs.geturl("foo", purpose="fs")
        except errors.NoURL:
            self.assertFalse(self.fs.hasurl("foo"))
        else:
            self.assertTrue(self.fs.hasurl("foo"))

        # Should not throw an error
        base_dir = dirname(url)
        open_fs(base_dir)

    def test_complex_geturl(self):
        self.fs.makedirs("foo/bar ha")
        test_fixtures = [
            # test file, expected url path
            ["foo", "foo"],
            ["foo-bar", "foo-bar"],
            ["foo_bar", "foo_bar"],
            ["foo/bar ha/barz", "foo/bar%20ha/barz"],
            ["example b.txt", "example%20b.txt"],
            ["exampleㄓ.txt", "example%E3%84%93.txt"],
        ]
        file_uri_prefix = "osfs://"
        for test_file, relative_url_path in test_fixtures:
            self.fs.create(test_file)
            expected = file_uri_prefix + self.fs.getsyspath(relative_url_path).replace(
                "\\", "/"
            )
            actual = self.fs.geturl(test_file, purpose="fs")
            self.assertEqual(actual, expected)

    def test_geturl_return_no_url(self):
        self.assertRaises(errors.NoURL, self.fs.geturl, "test/path", "upload")


pyfilesystem2-2.4.12/tests/test_path.py

from __future__ import absolute_import, unicode_literals, print_function

"""
fstests.test_path: testcases for the fs path functions
"""

import unittest

from fs.path import (
    abspath,
    basename,
    combine,
    dirname,
    forcedir,
    frombase,
    isabs,
    isbase,
    isdotfile,
    isparent,
    issamedir,
    iswildcard,
    iteratepath,
    join,
    normpath,
    parts,
    recursepath,
    relativefrom,
    relpath,
    split,
    splitext,
)


class TestPathFunctions(unittest.TestCase):
    """Testcases for FS path functions."""

    def test_normpath(self):
        tests = [
            ("\\a\\b\\c", "\\a\\b\\c"),
            (".", ""),
            ("./", ""),
            ("", ""),
            ("/.", "/"),
            ("/a/b/c", "/a/b/c"),
            ("a/b/c", "a/b/c"),
            ("a/b/../c/", "a/c"),
            ("/", "/"),
            ("a/\N{GREEK SMALL LETTER BETA}/c", "a/\N{GREEK SMALL LETTER BETA}/c"),
        ]
        for path, result in tests:
            self.assertEqual(normpath(path), result)

    def test_pathjoin(self):
        tests = [
            ("", "a", "a"),
            ("a", "a", "a/a"),
            ("a/b", "../c", "a/c"),
            ("a/b/../c", "d", "a/c/d"),
            ("/a/b/c", "d", "/a/b/c/d"),
            ("/a/b/c", "../../../d", "/d"),
            ("a", "b", "c", "a/b/c"),
            ("a/b/c", "../d", "c", "a/b/d/c"),
            ("a/b/c", "../d", "/a", "/a"),
            ("aaa", "bbb/ccc", "aaa/bbb/ccc"),
            ("aaa", "bbb\\ccc", "aaa/bbb\\ccc"),
            ("aaa", "bbb", "ccc", "/aaa", "eee", "/aaa/eee"),
            ("a/b", "./d", "e", "a/b/d/e"),
            ("/", "/", "/"),
            ("/", "", "/"),
            ("a/\N{GREEK SMALL LETTER BETA}", "c", "a/\N{GREEK SMALL LETTER BETA}/c"),
        ]
        for testpaths in tests:
            paths = testpaths[:-1]
            result = testpaths[-1]
            self.assertEqual(join(*paths), result)

        self.assertRaises(ValueError, join, "..")
        self.assertRaises(ValueError, join, "../")
        self.assertRaises(ValueError, join, "/..")
        self.assertRaises(ValueError, join, "./../")
        self.assertRaises(ValueError, join, "a/b", "../../..")
        self.assertRaises(ValueError, join, "a/b/../../../d")

    def test_relpath(self):
        tests = [("/a/b", "a/b"), ("a/b", "a/b"), ("/", "")]
        for path, result in tests:
            self.assertEqual(relpath(path), result)

    def test_abspath(self):
        tests = [("/a/b", "/a/b"), ("a/b", "/a/b"), ("/", "/")]
        for path, result in tests:
            self.assertEqual(abspath(path), result)

    def test_forcedir(self):
        self.assertEqual(forcedir("foo"), "foo/")
        self.assertEqual(forcedir("foo/"), "foo/")

    def test_frombase(self):
        with self.assertRaises(ValueError):
            frombase("foo", "bar/baz")
        self.assertEqual(frombase("foo", "foo/bar"), "/bar")

    def test_isabs(self):
        self.assertTrue(isabs("/"))
        self.assertTrue(isabs("/foo"))
        self.assertFalse(isabs("foo"))

    def test_iteratepath(self):
        tests = [
            ("a/b", ["a", "b"]),
            ("", []),
            ("aaa/bbb/ccc", ["aaa", "bbb", "ccc"]),
            ("a/b/c/../d", ["a", "b", "d"]),
        ]
        for path, results in tests:
            for path_component, expected in zip(iteratepath(path), results):
                self.assertEqual(path_component, expected)

    def test_combine(self):
        self.assertEqual(combine("", "bar"), "bar")
        self.assertEqual(combine("foo", "bar"), "foo/bar")

    def test_parts(self):
        self.assertEqual(parts("/"), ["/"])
        self.assertEqual(parts(""), ["./"])
        self.assertEqual(parts("/foo"), ["/", "foo"])
        self.assertEqual(parts("/foo/bar"), ["/", "foo", "bar"])
        self.assertEqual(parts("/foo/bar/"), ["/", "foo", "bar"])
        self.assertEqual(parts("./foo/bar/"), ["./", "foo", "bar"])

    def test_pathsplit(self):
        tests = [
            ("a/b", ("a", "b")),
            ("a/b/c", ("a/b", "c")),
            ("a", ("", "a")),
            ("", ("", "")),
            ("/", ("/", "")),
            ("/foo", ("/", "foo")),
            ("foo/bar", ("foo", "bar")),
            ("foo/bar/baz", ("foo/bar", "baz")),
        ]
        for path, result in tests:
            self.assertEqual(split(path), result)

    def test_splitext(self):
        self.assertEqual(splitext("foo.bar"), ("foo", ".bar"))
        self.assertEqual(splitext("foo.bar.baz"), ("foo.bar", ".baz"))
        self.assertEqual(splitext("foo"), ("foo", ""))
        self.assertEqual(splitext(".foo"), (".foo", ""))

    def test_recursepath(self):
        self.assertEqual(recursepath("/"), ["/"])
        self.assertEqual(recursepath("hello"), ["/", "/hello"])
        self.assertEqual(recursepath("/hello/world/"), ["/", "/hello", "/hello/world"])
        self.assertEqual(
            recursepath("/hello/world/", reverse=True), ["/hello/world", "/hello", "/"]
        )
        self.assertEqual(recursepath("hello", reverse=True), ["/hello", "/"])
        self.assertEqual(recursepath("", reverse=True), ["/"])

    def test_isbase(self):
        self.assertTrue(isbase("foo", "foo/bar"))
        self.assertFalse(isbase("baz", "foo/bar"))

    def test_isparent(self):
        self.assertTrue(isparent("foo/bar", "foo/bar/spam.txt"))
        self.assertTrue(isparent("foo/bar/", "foo/bar"))
        self.assertFalse(isparent("foo/barry", "foo/baz/bar"))
        self.assertFalse(isparent("foo/bar/baz/", "foo/baz/bar"))
        self.assertFalse(isparent("foo/var/baz/egg", "foo/baz/bar"))

    def test_issamedir(self):
        self.assertTrue(issamedir("foo/bar/baz.txt", "foo/bar/spam.txt"))
        self.assertFalse(issamedir("foo/bar/baz/txt", "spam/eggs/spam.txt"))

    def test_isdotfile(self):
        for path in [".foo", ".svn", "foo/.svn", "foo/bar/.svn", "/foo/.bar"]:
            self.assertTrue(isdotfile(path))
        for path in ["asfoo", "df.svn", "foo/er.svn", "foo/bar/test.txt", "/foo/bar"]:
            self.assertFalse(isdotfile(path))

    def test_dirname(self):
        tests = [
            ("foo", ""),
            ("foo/bar", "foo"),
            ("foo/bar/baz", "foo/bar"),
            ("/foo/bar", "/foo"),
            ("/foo", "/"),
            ("/", "/"),
        ]
        for path, test_dirname in tests:
            self.assertEqual(dirname(path), test_dirname)

    def test_basename(self):
        tests = [("foo", "foo"), ("foo/bar", "bar"), ("foo/bar/baz", "baz"), ("/", "")]
        for path, test_basename in tests:
            self.assertEqual(basename(path), test_basename)

    def test_iswildcard(self):
        self.assertTrue(iswildcard("*"))
        self.assertTrue(iswildcard("*.jpg"))
        self.assertTrue(iswildcard("foo/*"))
        self.assertTrue(iswildcard("foo/{}"))
        self.assertFalse(iswildcard("foo"))
        self.assertFalse(iswildcard("img.jpg"))
        self.assertFalse(iswildcard("foo/bar"))

    def test_relativefrom(self):
        tests = [
            ("/", "/foo.html", "foo.html"),
            ("/foo", "/foo/bar.html", "bar.html"),
            ("/foo/bar/", "/egg.html", "../../egg.html"),
            ("/a/b/c/d", "e", "../../../../e"),
            ("/a/b/c/d", "a/d", "../../../d"),
            ("/docs/", "tags/index.html", "../tags/index.html"),
            ("foo/bar", "baz/index.html", "../../baz/index.html"),
            ("", "a", "a"),
            ("a", "b/c", "../b/c"),
        ]
        for base, path, result in tests:
            self.assertEqual(relativefrom(base, path), result)
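The `relativefrom` cases above have a close standard-library analogue: `posixpath.relpath` computes the same kind of walk-up-then-down relative path for POSIX-style paths. A minimal stdlib-only sketch (not part of the `fs` package; note the argument order is reversed relative to `relativefrom(base, path)`):

```python
import posixpath

# posixpath.relpath(target, start) returns the path to `target` relative
# to the directory `start`, inserting ".." components to walk up past the
# common prefix -- the same shape of answer the fixtures above expect.
print(posixpath.relpath("/foo/bar.html", "/foo"))  # bar.html
print(posixpath.relpath("/egg.html", "/foo/bar"))  # ../../egg.html
print(posixpath.relpath("/a/d", "/a/b/c/d"))       # ../../../d
```

Unlike `fs.path.relativefrom`, `posixpath.relpath` also normalizes its inputs, so the two only agree for already-normalized absolute paths.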
pyfilesystem2-2.4.12/tests/test_permissions.py

from __future__ import unicode_literals
from __future__ import print_function

import unittest

from six import text_type

from fs.permissions import make_mode, Permissions


class TestPermissions(unittest.TestCase):
    def test_make_mode(self):
        self.assertEqual(make_mode(None), 0o777)
        self.assertEqual(make_mode(0o755), 0o755)
        self.assertEqual(make_mode(["u_r", "u_w", "u_x"]), 0o700)
        self.assertEqual(make_mode(Permissions(user="rwx")), 0o700)

    def test_parse(self):
        self.assertEqual(Permissions.parse("---------").mode, 0)
        self.assertEqual(Permissions.parse("rwxrw-r--").mode, 0o764)

    def test_create(self):
        self.assertEqual(Permissions.create(None).mode, 0o777)
        self.assertEqual(Permissions.create(0o755).mode, 0o755)
        self.assertEqual(Permissions.create(["u_r", "u_w", "u_x"]).mode, 0o700)
        self.assertEqual(Permissions.create(Permissions(user="rwx")).mode, 0o700)
        with self.assertRaises(ValueError):
            Permissions.create("foo")

    def test_constructor(self):
        p = Permissions(names=["foo", "bar"])
        self.assertIn("foo", p)
        self.assertIn("bar", p)
        self.assertNotIn("baz", p)

        p = Permissions(user="r", group="w", other="x")
        self.assertIn("u_r", p)
        self.assertIn("g_w", p)
        self.assertIn("o_x", p)
        self.assertNotIn("sticky", p)
        self.assertNotIn("setuid", p)
        self.assertNotIn("setguid", p)

        p = Permissions(
            user="rwx", group="rwx", other="rwx", sticky=True, setuid=True, setguid=True
        )
        self.assertIn("sticky", p)
        self.assertIn("setuid", p)
        self.assertIn("setguid", p)

        p = Permissions(mode=0o421)
        self.assertIn("u_r", p)
        self.assertIn("g_w", p)
        self.assertIn("o_x", p)
        self.assertNotIn("u_w", p)
        self.assertNotIn("g_x", p)
        self.assertNotIn("o_r", p)
        self.assertNotIn("sticky", p)
        self.assertNotIn("setuid", p)
        self.assertNotIn("setguid", p)

    def test_properties(self):
        p = Permissions()
        self.assertFalse(p.u_r)
        self.assertNotIn("u_r", p)
        p.u_r = True
        self.assertIn("u_r", p)
        self.assertTrue(p.u_r)
        p.u_r = False
        self.assertNotIn("u_r", p)
        self.assertFalse(p.u_r)

        self.assertFalse(p.u_w)
        p.add("u_w")
        self.assertTrue(p.u_w)
        p.remove("u_w")
        self.assertFalse(p.u_w)

    def test_repr(self):
        self.assertEqual(
            repr(Permissions()), "Permissions(user='', group='', other='')"
        )
        self.assertEqual(repr(Permissions(names=["foo"])), "Permissions(names=['foo'])")
        repr(Permissions(user="rwx", group="rw", other="r"))
        repr(Permissions(user="rwx", group="rw", other="r", sticky=True))
        repr(Permissions(user="rwx", group="rw", other="r", setuid=True))
        repr(Permissions(user="rwx", group="rw", other="r", setguid=True))

    def test_as_str(self):
        p = Permissions(user="rwx", group="rwx", other="rwx")
        self.assertEqual(p.as_str(), "rwxrwxrwx")
        self.assertEqual(str(p), "rwxrwxrwx")
        p = Permissions(mode=0o777, setuid=True, setguid=True, sticky=True)
        self.assertEqual(p.as_str(), "rwsrwsrwt")

    def test_mode(self):
        p = Permissions(user="rwx", group="rw", other="")
        self.assertEqual(p.mode, 0o760)

    def test_serialize(self):
        p = Permissions(names=["foo"])
        self.assertEqual(p.dump(), ["foo"])
        pp = Permissions.load(["foo"])
        self.assertIn("foo", pp)

    def test_iter(self):
        iter_p = iter(Permissions(names=["foo"]))
        self.assertEqual(list(iter_p), ["foo"])

    def test_equality(self):
        self.assertEqual(Permissions(mode=0o700), Permissions(user="rwx"))
        self.assertNotEqual(Permissions(mode=0o500), Permissions(user="rwx"))
        self.assertEqual(Permissions(mode=0o700), ["u_r", "u_w", "u_x"])

    def test_copy(self):
        p = Permissions(mode=0o700)
        p_copy = p.copy()
        self.assertIsNot(p, p_copy)
        self.assertEqual(p, p_copy)

    def test_check(self):
        p = Permissions(user="rwx")
        self.assertTrue(p.check("u_r"))
        self.assertTrue(p.check("u_r", "u_w"))
        self.assertTrue(p.check("u_r", "u_w", "u_x"))
        self.assertFalse(p.check("u_r", "g_w"))
        self.assertFalse(p.check("g_r", "g_w"))
        self.assertFalse(p.check("foo"))

    def test_mode_set(self):
        p = Permissions(user="r")
        self.assertEqual(text_type(p), "r--------")
        p.mode = 0o700
        self.assertEqual(text_type(p), "rwx------")


pyfilesystem2-2.4.12/tests/test_subfs.py

from __future__ import unicode_literals

import os
import shutil
import tempfile
import unittest

from fs import osfs
from fs.subfs import SubFS
from fs.memoryfs import MemoryFS
from fs.path import relpath

from .test_osfs import TestOSFS


class TestSubFS(TestOSFS):
    """Test OSFS implementation."""

    def setUp(self):
        self.temp_dir = tempfile.mkdtemp("fstest")
        self.parent_fs = osfs.OSFS(self.temp_dir)
        self.parent_fs.makedir("__subdir__")
        self.fs = self.parent_fs.opendir("__subdir__")

    def tearDown(self):
        shutil.rmtree(self.temp_dir)
        self.parent_fs.close()
        self.fs.close()

    def _get_real_path(self, path):
        _path = os.path.join(self.temp_dir, "__subdir__", relpath(path))
        return _path


class CustomSubFS(SubFS):
    """Just a custom class to change the type"""

    def custom_function(self, custom_path):
        fs, delegate_path = self.delegate_path(custom_path)
        fs.custom_function(delegate_path)


class CustomSubFS2(SubFS):
    """Just a custom class to change the type"""


class CustomFS(MemoryFS):
    subfs_class = CustomSubFS

    def __init__(self):
        super(CustomFS, self).__init__()
        self.custom_path = None

    def custom_function(self, custom_path):
        self.custom_path = custom_path


class TestCustomSubFS(unittest.TestCase):
    """Test customization of the SubFS returned from opendir etc"""

    def test_opendir(self):
        fs = CustomFS()
        fs.makedir("__subdir__")
        subfs = fs.opendir("__subdir__")
        # By default, you get the fs's defined custom SubFS
        assert isinstance(subfs, CustomSubFS)
        subfs.custom_function("filename")
        assert fs.custom_path == "/__subdir__/filename"

        # Providing the factory explicitly still works
        subfs = fs.opendir("__subdir__", factory=CustomSubFS2)
        assert isinstance(subfs, CustomSubFS2)


pyfilesystem2-2.4.12/tests/test_tarfs.py

# -*- encoding: UTF-8
from __future__ import unicode_literals

import io
import os
import six
import tarfile
import tempfile
import unittest

import pytest

from fs import tarfs
from fs.enums import ResourceType
from fs.compress import write_tar
from fs.opener import open_fs
from fs.opener.errors import NotWriteable
from fs.errors import NoURL
from fs.test import FSTestCases

from .test_archives import ArchiveTestCases


class TestWriteReadTarFS(unittest.TestCase):
    def setUp(self):
        fh, self._temp_path = tempfile.mkstemp()
        os.close(fh)

    def tearDown(self):
        os.remove(self._temp_path)

    def test_unicode_paths(self):
        # https://github.com/PyFilesystem/pyfilesystem2/issues/135
        with tarfs.TarFS(self._temp_path, write=True) as tar_fs:
            tar_fs.writetext("Файл", "some content")

        with tarfs.TarFS(self._temp_path) as tar_fs:
            paths = list(tar_fs.walk.files())
            for path in paths:
                self.assertIsInstance(path, six.text_type)
                with tar_fs.openbin(path) as f:
                    f.read()


class TestWriteTarFS(FSTestCases, unittest.TestCase):
    """
    Test TarFS implementation.

    When writing, a TarFS is essentially a TempFS.
    """

    def make_fs(self):
        fh, _tar_file = tempfile.mkstemp()
        os.close(fh)
        fs = tarfs.TarFS(_tar_file, write=True)
        fs._tar_file = _tar_file
        return fs

    def destroy_fs(self, fs):
        fs.close()
        os.remove(fs._tar_file)
        del fs._tar_file


class TestWriteTarFSToFileobj(FSTestCases, unittest.TestCase):
    """
    Test TarFS implementation.

    When writing, a TarFS is essentially a TempFS.
    """

    def make_fs(self):
        _tar_file = six.BytesIO()
        fs = tarfs.TarFS(_tar_file, write=True)
        fs._tar_file = _tar_file
        return fs

    def destroy_fs(self, fs):
        fs.close()
        del fs._tar_file


class TestWriteGZippedTarFS(FSTestCases, unittest.TestCase):
    def make_fs(self):
        fh, _tar_file = tempfile.mkstemp()
        os.close(fh)
        fs = tarfs.TarFS(_tar_file, write=True, compression="gz")
        fs._tar_file = _tar_file
        return fs

    def destroy_fs(self, fs):
        fs.close()
        os.remove(fs._tar_file)
        del fs._tar_file


@pytest.mark.skipif(six.PY2, reason="Python2 does not support LZMA")
class TestWriteXZippedTarFS(FSTestCases, unittest.TestCase):
    def make_fs(self):
        fh, _tar_file = tempfile.mkstemp()
        os.close(fh)
        fs = tarfs.TarFS(_tar_file, write=True, compression="xz")
        fs._tar_file = _tar_file
        return fs

    def destroy_fs(self, fs):
        fs.close()
        self.assert_is_xz(fs)
        os.remove(fs._tar_file)
        del fs._tar_file

    def assert_is_xz(self, fs):
        try:
            tarfile.open(fs._tar_file, "r:xz")
        except tarfile.ReadError:
            self.fail("{} is not a valid xz archive".format(fs._tar_file))
        for other_comps in ["bz2", "gz", ""]:
            with self.assertRaises(tarfile.ReadError):
                tarfile.open(fs._tar_file, "r:{}".format(other_comps))


class TestWriteBZippedTarFS(FSTestCases, unittest.TestCase):
    def make_fs(self):
        fh, _tar_file = tempfile.mkstemp()
        os.close(fh)
        fs = tarfs.TarFS(_tar_file, write=True, compression="bz2")
        fs._tar_file = _tar_file
        return fs

    def destroy_fs(self, fs):
        fs.close()
        self.assert_is_bzip(fs)
        os.remove(fs._tar_file)
        del fs._tar_file

    def assert_is_bzip(self, fs):
        try:
            tarfile.open(fs._tar_file, "r:bz2")
        except tarfile.ReadError:
            self.fail("{} is not a valid bz2 archive".format(fs._tar_file))
        for other_comps in ["gz", ""]:
            with self.assertRaises(tarfile.ReadError):
                tarfile.open(fs._tar_file, "r:{}".format(other_comps))


class TestReadTarFS(ArchiveTestCases, unittest.TestCase):
    """
    Test Reading tar files.
    """

    def compress(self, fs):
        fh, self._temp_path = tempfile.mkstemp()
        os.close(fh)
        write_tar(fs, self._temp_path)

    def load_archive(self):
        return tarfs.TarFS(self._temp_path)

    def remove_archive(self):
        os.remove(self._temp_path)

    def test_read_from_fileobject(self):
        try:
            tarfs.TarFS(open(self._temp_path, "rb"))
        except Exception:
            self.fail("Couldn't open tarfs from fileobject")

    def test_read_from_filename(self):
        try:
            tarfs.TarFS(self._temp_path)
        except Exception:
            self.fail("Couldn't open tarfs from filename")

    def test_read_non_existent_file(self):
        fs = tarfs.TarFS(open(self._temp_path, "rb"))
        # it has been very difficult to catch exception in __del__()
        del fs._tar
        try:
            fs.close()
        except AttributeError:
            self.fail("Could not close tar fs properly")
        except Exception:
            self.fail("Strange exception in closing fs")

    def test_getinfo(self):
        super(TestReadTarFS, self).test_getinfo()
        top = self.fs.getinfo("top.txt", ["tar"])
        self.assertTrue(top.get("tar", "is_file"))

    def test_geturl_for_fs(self):
        test_fixtures = [
            # test_file, expected
            ["foo/bar/egg/foofoo", "foo/bar/egg/foofoo"],
            ["foo/bar egg/foo foo", "foo/bar%20egg/foo%20foo"],
        ]
        tar_file_path = self._temp_path.replace("\\", "/")
        for test_file, expected_file in test_fixtures:
            expected = "tar://{tar_file_path}!/{file_inside_tar}".format(
                tar_file_path=tar_file_path, file_inside_tar=expected_file
            )
            self.assertEqual(self.fs.geturl(test_file, purpose="fs"), expected)

    def test_geturl_for_fs_but_file_is_binaryio(self):
        self.fs._file = six.BytesIO()
        self.assertRaises(NoURL, self.fs.geturl, "test", "fs")

    def test_geturl_for_download(self):
        test_file = "foo/bar/egg/foofoo"
        with self.assertRaises(NoURL):
            self.fs.geturl(test_file)


class TestBrokenPaths(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.tmpfs = open_fs("temp://tarfstest")

    @classmethod
    def tearDownClass(cls):
        cls.tmpfs.close()

    def setUp(self):
        self.tempfile = self.tmpfs.open("test.tar", "wb+")
        with tarfile.open(mode="w", fileobj=self.tempfile) as tf:
            tf.addfile(tarfile.TarInfo("."), io.StringIO())
            tf.addfile(tarfile.TarInfo("../foo.txt"), io.StringIO())
        self.tempfile.seek(0)
        self.fs = tarfs.TarFS(self.tempfile)

    def tearDown(self):
        self.fs.close()
        self.tempfile.close()

    def test_listdir(self):
        self.assertEqual(self.fs.listdir("/"), [])


class TestImplicitDirectories(unittest.TestCase):
    """Regression tests for #160."""

    @classmethod
    def setUpClass(cls):
        cls.tmpfs = open_fs("temp://")

    @classmethod
    def tearDownClass(cls):
        cls.tmpfs.close()

    def setUp(self):
        self.tempfile = self.tmpfs.open("test.tar", "wb+")
        with tarfile.open(mode="w", fileobj=self.tempfile) as tf:
            tf.addfile(tarfile.TarInfo("foo/bar/baz/spam.txt"), io.StringIO())
            tf.addfile(tarfile.TarInfo("./foo/eggs.bin"), io.StringIO())
            tf.addfile(tarfile.TarInfo("./foo/yolk/beans.txt"), io.StringIO())
            info = tarfile.TarInfo("foo/yolk")
            info.type = tarfile.DIRTYPE
            tf.addfile(info, io.BytesIO())
        self.tempfile.seek(0)
        self.fs = tarfs.TarFS(self.tempfile)

    def tearDown(self):
        self.fs.close()
        self.tempfile.close()

    def test_isfile(self):
        self.assertFalse(self.fs.isfile("foo"))
        self.assertFalse(self.fs.isfile("foo/bar"))
        self.assertFalse(self.fs.isfile("foo/bar/baz"))
        self.assertTrue(self.fs.isfile("foo/bar/baz/spam.txt"))
        self.assertTrue(self.fs.isfile("foo/yolk/beans.txt"))
        self.assertTrue(self.fs.isfile("foo/eggs.bin"))
        self.assertFalse(self.fs.isfile("foo/eggs.bin/baz"))

    def test_isdir(self):
        self.assertTrue(self.fs.isdir("foo"))
        self.assertTrue(self.fs.isdir("foo/yolk"))
        self.assertTrue(self.fs.isdir("foo/bar"))
        self.assertTrue(self.fs.isdir("foo/bar/baz"))
        self.assertFalse(self.fs.isdir("foo/bar/baz/spam.txt"))
        self.assertFalse(self.fs.isdir("foo/eggs.bin"))
        self.assertFalse(self.fs.isdir("foo/eggs.bin/baz"))
        self.assertFalse(self.fs.isdir("foo/yolk/beans.txt"))

    def test_listdir(self):
        self.assertEqual(sorted(self.fs.listdir("foo")), ["bar", "eggs.bin", "yolk"])
        self.assertEqual(self.fs.listdir("foo/bar"), ["baz"])
        self.assertEqual(self.fs.listdir("foo/bar/baz"), ["spam.txt"])
        self.assertEqual(self.fs.listdir("foo/yolk"), ["beans.txt"])

    def test_getinfo(self):
        info = self.fs.getdetails("foo/bar/baz")
        self.assertEqual(info.name, "baz")
        self.assertEqual(info.size, 0)
        self.assertIs(info.type, ResourceType.directory)

        info = self.fs.getdetails("foo")
        self.assertEqual(info.name, "foo")
        self.assertEqual(info.size, 0)
        self.assertIs(info.type, ResourceType.directory)


class TestReadTarFSMem(TestReadTarFS):
    def make_source_fs(self):
        return open_fs("mem://")


class TestOpener(unittest.TestCase):
    def test_not_writeable(self):
        with self.assertRaises(NotWriteable):
            open_fs("tar://foo.zip", writeable=True)


pyfilesystem2-2.4.12/tests/test_tempfs.py

from __future__ import unicode_literals

import os

from fs.tempfs import TempFS
from fs import errors

from .test_osfs import TestOSFS

try:
    from unittest import mock
except ImportError:
    import mock


class TestTempFS(TestOSFS):
    """Test OSFS implementation."""

    def make_fs(self):
        return TempFS()

    def test_clean(self):
        t = TempFS()
        _temp_dir = t.getsyspath("/")
        self.assertTrue(os.path.isdir(_temp_dir))
        t.close()
        self.assertFalse(os.path.isdir(_temp_dir))

    @mock.patch("shutil.rmtree", create=True)
    def test_clean_error(self, rmtree):
        rmtree.side_effect = Exception("boom")
        with self.assertRaises(errors.OperationFailed):
            t = TempFS(ignore_clean_errors=False)
            t.writebytes("foo", b"bar")
            t.close()


pyfilesystem2-2.4.12/tests/test_time.py

from __future__ import unicode_literals, print_function

from datetime import datetime
import unittest

import pytz

from fs.time import datetime_to_epoch, epoch_to_datetime


class TestEpoch(unittest.TestCase):
    def test_epoch_to_datetime(self):
        self.assertEqual(
            epoch_to_datetime(142214400), datetime(1974, 7, 5, tzinfo=pytz.UTC)
        )

    def test_datetime_to_epoch(self):
        self.assertEqual(
            datetime_to_epoch(datetime(1974, 7, 5, tzinfo=pytz.UTC)), 142214400
        )


pyfilesystem2-2.4.12/tests/test_tools.py

from __future__ import unicode_literals

import unittest

from fs.mode import validate_open_mode
from fs.mode import validate_openbin_mode
from fs import tools
from fs.opener import open_fs


class TestTools(unittest.TestCase):
    def test_remove_empty(self):
        fs = open_fs("temp://")
        fs.makedirs("foo/bar/baz/egg/")
        fs.create("foo/bar/test.txt")
        tools.remove_empty(fs, "foo/bar/baz/egg")
        self.assertFalse(fs.isdir("foo/bar/baz"))
        self.assertTrue(fs.isdir("foo/bar"))
        fs.remove("foo/bar/test.txt")
        tools.remove_empty(fs, "foo/bar")
        self.assertEqual(fs.listdir("/"), [])

    def test_validate_openbin_mode(self):
        with self.assertRaises(ValueError):
            validate_openbin_mode("X")
        with self.assertRaises(ValueError):
            validate_openbin_mode("")
        with self.assertRaises(ValueError):
            validate_openbin_mode("rX")
        with self.assertRaises(ValueError):
            validate_openbin_mode("rt")
        validate_openbin_mode("r")
        validate_openbin_mode("w")
        validate_openbin_mode("a")
        validate_openbin_mode("r+")
        validate_openbin_mode("w+")
        validate_openbin_mode("a+")

    def test_validate_open_mode(self):
        with self.assertRaises(ValueError):
            validate_open_mode("X")
        with self.assertRaises(ValueError):
            validate_open_mode("")
        with self.assertRaises(ValueError):
            validate_open_mode("rX")
        validate_open_mode("rt")
        validate_open_mode("r")
        validate_open_mode("rb")
        validate_open_mode("w")
        validate_open_mode("a")
        validate_open_mode("r+")
        validate_open_mode("w+")
        validate_open_mode("a+")


pyfilesystem2-2.4.12/tests/test_tree.py

from __future__ import print_function
from __future__ import unicode_literals

import io
import unittest

from fs import tree
from fs.memoryfs import MemoryFS


class TestInfo(unittest.TestCase):
    def setUp(self):
        self.fs = MemoryFS()

        self.fs.makedir("foo")
        self.fs.makedir("bar")
        self.fs.makedir("baz")
        self.fs.makedirs("foo/egg1")
        self.fs.makedirs("foo/egg2")
        self.fs.create("/root1")
        self.fs.create("/root2")
        self.fs.create("/foo/test.txt")
        self.fs.create("/foo/test2.txt")
        self.fs.create("/foo/.hidden")
        self.fs.makedirs("/deep/deep1/deep2/deep3/deep4/deep5/deep6")

    def test_tree(self):
        output_file = io.StringIO()
        tree.render(self.fs, file=output_file)
        expected = "|-- bar\n|-- baz\n|-- deep\n|   `-- deep1\n|       `-- deep2\n|           `-- deep3\n|               `-- deep4\n|                   `-- deep5\n|-- foo\n|   |-- egg1\n|   |-- egg2\n|   |-- .hidden\n|   |-- test.txt\n|   `-- test2.txt\n|-- root1\n`-- root2\n"
        self.assertEqual(output_file.getvalue(), expected)

    def test_tree_encoding(self):
        output_file = io.StringIO()
        tree.render(self.fs, file=output_file, with_color=True)
        print(repr(output_file.getvalue()))
        expected = "\x1b[32m\u251c\u2500\u2500\x1b[0m \x1b[1;34mbar\x1b[0m\n\x1b[32m\u251c\u2500\u2500\x1b[0m \x1b[1;34mbaz\x1b[0m\n\x1b[32m\u251c\u2500\u2500\x1b[0m \x1b[1;34mdeep\x1b[0m\n\x1b[32m\u2502   \u2514\u2500\u2500\x1b[0m \x1b[1;34mdeep1\x1b[0m\n\x1b[32m\u2502       \u2514\u2500\u2500\x1b[0m \x1b[1;34mdeep2\x1b[0m\n\x1b[32m\u2502           \u2514\u2500\u2500\x1b[0m \x1b[1;34mdeep3\x1b[0m\n\x1b[32m\u2502               \u2514\u2500\u2500\x1b[0m \x1b[1;34mdeep4\x1b[0m\n\x1b[32m\u2502                   \u2514\u2500\u2500\x1b[0m \x1b[1;34mdeep5\x1b[0m\n\x1b[32m\u251c\u2500\u2500\x1b[0m \x1b[1;34mfoo\x1b[0m\n\x1b[32m\u2502   \u251c\u2500\u2500\x1b[0m \x1b[1;34megg1\x1b[0m\n\x1b[32m\u2502   \u251c\u2500\u2500\x1b[0m \x1b[1;34megg2\x1b[0m\n\x1b[32m\u2502   \u251c\u2500\u2500\x1b[0m \x1b[33m.hidden\x1b[0m\n\x1b[32m\u2502   \u251c\u2500\u2500\x1b[0m test.txt\n\x1b[32m\u2502   \u2514\u2500\u2500\x1b[0m test2.txt\n\x1b[32m\u251c\u2500\u2500\x1b[0m root1\n\x1b[32m\u2514\u2500\u2500\x1b[0m root2\n"
        self.assertEqual(output_file.getvalue(), expected)

    def test_tree_bytes_no_dirs_first(self):
        output_file = io.StringIO()
        tree.render(self.fs, file=output_file, dirs_first=False)
        expected = "|-- bar\n|-- baz\n|-- deep\n|   `-- deep1\n|       `-- deep2\n|           `-- deep3\n|               `-- deep4\n|                   `-- deep5\n|-- foo\n|   |-- .hidden\n|   |-- egg1\n|   |-- egg2\n|   |-- test.txt\n|   `-- test2.txt\n|-- root1\n`-- root2\n"
        self.assertEqual(output_file.getvalue(), expected)

    def test_error(self):
        output_file = io.StringIO()
        filterdir = self.fs.filterdir

        def broken_filterdir(path, **kwargs):
            if path.startswith("/deep/deep1/"):
                # Because error messages differ across Python versions
                raise Exception("integer division or modulo by zero")
            return filterdir(path, **kwargs)

        self.fs.filterdir = broken_filterdir
        tree.render(self.fs, file=output_file, with_color=True)
        expected = "\x1b[32m\u251c\u2500\u2500\x1b[0m \x1b[1;34mbar\x1b[0m\n\x1b[32m\u251c\u2500\u2500\x1b[0m \x1b[1;34mbaz\x1b[0m\n\x1b[32m\u251c\u2500\u2500\x1b[0m \x1b[1;34mdeep\x1b[0m\n\x1b[32m\u2502   \u2514\u2500\u2500\x1b[0m \x1b[1;34mdeep1\x1b[0m\n\x1b[32m\u2502       \u2514\u2500\u2500\x1b[0m \x1b[1;34mdeep2\x1b[0m\n\x1b[32m\u2502           \u2514\u2500\u2500\x1b[0m \x1b[31merror (integer division or modulo by zero)\x1b[0m\n\x1b[32m\u251c\u2500\u2500\x1b[0m \x1b[1;34mfoo\x1b[0m\n\x1b[32m\u2502   \u251c\u2500\u2500\x1b[0m \x1b[1;34megg1\x1b[0m\n\x1b[32m\u2502   \u251c\u2500\u2500\x1b[0m \x1b[1;34megg2\x1b[0m\n\x1b[32m\u2502   \u251c\u2500\u2500\x1b[0m \x1b[33m.hidden\x1b[0m\n\x1b[32m\u2502   \u251c\u2500\u2500\x1b[0m test.txt\n\x1b[32m\u2502   \u2514\u2500\u2500\x1b[0m test2.txt\n\x1b[32m\u251c\u2500\u2500\x1b[0m root1\n\x1b[32m\u2514\u2500\u2500\x1b[0m root2\n"
        tree_output = output_file.getvalue()
        print(repr(tree_output))
        self.assertEqual(expected, tree_output)

        output_file = io.StringIO()
        tree.render(self.fs, file=output_file, with_color=False)
        expected = "|-- bar\n|-- baz\n|-- deep\n|   `-- deep1\n|       `-- deep2\n|           `-- error (integer division or modulo by zero)\n|-- foo\n|   |-- egg1\n|   |-- egg2\n|   |-- .hidden\n|   |-- test.txt\n|   `-- test2.txt\n|-- root1\n`-- root2\n"
        self.assertEqual(expected, output_file.getvalue())


pyfilesystem2-2.4.12/tests/test_url_tools.py

# coding: utf-8
"""Test url tools.
"""
from __future__ import unicode_literals

import platform
import unittest

from fs._url_tools import url_quote


class TestBase(unittest.TestCase):
    def test_quote(self):
        test_fixtures = [
            # test_snippet, expected
            ["foo/bar/egg/foofoo", "foo/bar/egg/foofoo"],
            ["foo/bar ha/barz", "foo/bar%20ha/barz"],
            ["example b.txt", "example%20b.txt"],
            ["exampleㄓ.txt", "example%E3%84%93.txt"],
        ]
        if platform.system() == "Windows":
            test_fixtures.extend(
                [
                    ["C:\\My Documents\\test.txt", "C:/My%20Documents/test.txt"],
                    ["C:/My Documents/test.txt", "C:/My%20Documents/test.txt"],
                    # on Windows '\' is regarded as path separator
                    ["test/forward\\slash", "test/forward/slash"],
                ]
            )
        else:
            test_fixtures.extend(
                [
                    # colon:tmp is bad path under Windows
                    ["test/colon:tmp", "test/colon%3Atmp"],
                    # Unix treat \ as %5C
                    ["test/forward\\slash", "test/forward%5Cslash"],
                ]
            )
        for test_snippet, expected in test_fixtures:
            self.assertEqual(url_quote(test_snippet), expected)


pyfilesystem2-2.4.12/tests/test_walk.py

from __future__ import unicode_literals

import unittest

from fs.errors import FSError
from fs.memoryfs import MemoryFS
from fs import walk
from fs.wrap import read_only

import six


class TestWalker(unittest.TestCase):
    def setUp(self):
        self.walker = walk.Walker()

    def test_repr(self):
        repr(self.walker)

    def test_create(self):
        with self.assertRaises(ValueError):
            walk.Walker(ignore_errors=True, on_error=lambda path, error: True)
        walk.Walker(ignore_errors=True)


class TestWalk(unittest.TestCase):
    def setUp(self):
        self.fs = MemoryFS()

        self.fs.makedir("foo1")
        self.fs.makedir("foo2")
        self.fs.makedir("foo3")
        self.fs.create("foo1/top1.txt")
        self.fs.create("foo1/top2.txt")
        self.fs.makedir("foo1/bar1")
        self.fs.makedir("foo2/bar2")
        self.fs.makedir("foo2/bar2/bar3")
        self.fs.create("foo2/bar2/bar3/test.txt")
        self.fs.create("foo2/top3.bin")

    def test_invalid(self):
        with self.assertRaises(ValueError):
            self.fs.walk(search="random")

    def test_repr(self):
        repr(self.fs.walk)

    def test_walk(self):
        _walk = []
        for step in self.fs.walk():
            self.assertIsInstance(step, walk.Step)
            path, dirs, files = step
            _walk.append(
                (path, [info.name for info in dirs], [info.name for info in files])
            )
        expected = [
            ("/", ["foo1", "foo2", "foo3"], []),
            ("/foo1", ["bar1"], ["top1.txt", "top2.txt"]),
            ("/foo2", ["bar2"], ["top3.bin"]),
            ("/foo3", [], []),
            ("/foo1/bar1", [], []),
            ("/foo2/bar2", ["bar3"], []),
            ("/foo2/bar2/bar3", [], ["test.txt"]),
        ]
        self.assertEqual(_walk, expected)

    def test_walk_filter_dirs(self):
        _walk = []
        for step in self.fs.walk(filter_dirs=["foo*"]):
            self.assertIsInstance(step, walk.Step)
            path, dirs, files = step
            _walk.append(
                (path, [info.name for info in dirs], [info.name for info in files])
            )
        expected = [
            ("/", ["foo1", "foo2", "foo3"], []),
            ("/foo1", [], ["top1.txt", "top2.txt"]),
            ("/foo2", [], ["top3.bin"]),
            ("/foo3", [], []),
        ]
        self.assertEqual(_walk, expected)

    def test_walk_depth(self):
        _walk = []
        for step in self.fs.walk(search="depth"):
            self.assertIsInstance(step, walk.Step)
            path, dirs, files = step
            _walk.append(
                (path, [info.name for info in dirs], [info.name for info in files])
            )
        expected = [
            ("/foo1/bar1", [], []),
            ("/foo1", ["bar1"], ["top1.txt", "top2.txt"]),
            ("/foo2/bar2/bar3", [], ["test.txt"]),
            ("/foo2/bar2", ["bar3"], []),
            ("/foo2", ["bar2"], ["top3.bin"]),
            ("/foo3", [], []),
            ("/", ["foo1", "foo2", "foo3"], []),
        ]
        self.assertEqual(_walk, expected)

    def test_walk_directory(self):
        _walk = []
        for step in self.fs.walk("foo2"):
            self.assertIsInstance(step, walk.Step)
            path, dirs, files = step
            _walk.append(
                (path, [info.name for info in dirs], [info.name for info in files])
            )
        expected = [
            ("/foo2", ["bar2"], ["top3.bin"]),
            ("/foo2/bar2", ["bar3"], []),
            ("/foo2/bar2/bar3", [], ["test.txt"]),
        ]
        self.assertEqual(_walk, expected)

    def test_walk_levels_1(self):
        results = list(self.fs.walk(max_depth=1))
        self.assertEqual(len(results), 1)
        dirs = sorted(info.name for info in results[0].dirs)
self.assertEqual(dirs, ["foo1", "foo2", "foo3"]) files = sorted(info.name for info in results[0].files) self.assertEqual(files, []) def test_walk_levels_1_depth(self): results = list(self.fs.walk(max_depth=1, search="depth")) self.assertEqual(len(results), 1) dirs = sorted(info.name for info in results[0].dirs) self.assertEqual(dirs, ["foo1", "foo2", "foo3"]) files = sorted(info.name for info in results[0].files) self.assertEqual(files, []) def test_walk_levels_2(self): _walk = [] for step in self.fs.walk(max_depth=2): self.assertIsInstance(step, walk.Step) path, dirs, files = step _walk.append( ( path, sorted(info.name for info in dirs), sorted(info.name for info in files), ) ) expected = [ ("/", ["foo1", "foo2", "foo3"], []), ("/foo1", ["bar1"], ["top1.txt", "top2.txt"]), ("/foo2", ["bar2"], ["top3.bin"]), ("/foo3", [], []), ] self.assertEqual(_walk, expected) def test_walk_files(self): files = list(self.fs.walk.files()) self.assertEqual( files, [ "/foo1/top1.txt", "/foo1/top2.txt", "/foo2/top3.bin", "/foo2/bar2/bar3/test.txt", ], ) files = list(self.fs.walk.files(search="depth")) self.assertEqual( files, [ "/foo1/top1.txt", "/foo1/top2.txt", "/foo2/bar2/bar3/test.txt", "/foo2/top3.bin", ], ) def test_walk_dirs(self): dirs = list(self.fs.walk.dirs()) self.assertEqual( dirs, ["/foo1", "/foo2", "/foo3", "/foo1/bar1", "/foo2/bar2", "/foo2/bar2/bar3"], ) dirs = list(self.fs.walk.dirs(search="depth")) self.assertEqual( dirs, ["/foo1/bar1", "/foo1", "/foo2/bar2/bar3", "/foo2/bar2", "/foo2", "/foo3"], ) dirs = list(self.fs.walk.dirs(search="depth", exclude_dirs=["foo2"])) self.assertEqual(dirs, ["/foo1/bar1", "/foo1", "/foo3"]) def test_walk_files_filter(self): files = list(self.fs.walk.files(filter=["*.txt"])) self.assertEqual( files, ["/foo1/top1.txt", "/foo1/top2.txt", "/foo2/bar2/bar3/test.txt"] ) files = list(self.fs.walk.files(search="depth", filter=["*.txt"])) self.assertEqual( files, ["/foo1/top1.txt", "/foo1/top2.txt", "/foo2/bar2/bar3/test.txt"] ) files = 
list(self.fs.walk.files(filter=["*.bin"])) self.assertEqual(files, ["/foo2/top3.bin"]) files = list(self.fs.walk.files(filter=["*.nope"])) self.assertEqual(files, []) def test_walk_files_exclude(self): # Test exclude argument works files = list(self.fs.walk.files(exclude=["*.txt"])) self.assertEqual(files, ["/foo2/top3.bin"]) # Test exclude doesn't break filter files = list(self.fs.walk.files(filter=["*.bin"], exclude=["*.txt"])) self.assertEqual(files, ["/foo2/top3.bin"]) # Test excluding everything files = list(self.fs.walk.files(exclude=["*"])) self.assertEqual(files, []) def test_walk_info(self): walk = [] for path, info in self.fs.walk.info(): walk.append((path, info.is_dir, info.name)) expected = [ ("/foo1", True, "foo1"), ("/foo2", True, "foo2"), ("/foo3", True, "foo3"), ("/foo1/top1.txt", False, "top1.txt"), ("/foo1/top2.txt", False, "top2.txt"), ("/foo1/bar1", True, "bar1"), ("/foo2/bar2", True, "bar2"), ("/foo2/top3.bin", False, "top3.bin"), ("/foo2/bar2/bar3", True, "bar3"), ("/foo2/bar2/bar3/test.txt", False, "test.txt"), ] self.assertEqual(walk, expected) def test_broken(self): original_scandir = self.fs.scandir def broken_scandir(path, namespaces=None): if path == "/foo2": raise FSError("can't read dir") return original_scandir(path, namespaces=namespaces) self.fs.scandir = broken_scandir files = list(self.fs.walk.files(search="depth", ignore_errors=True)) self.assertEqual(files, ["/foo1/top1.txt", "/foo1/top2.txt"]) with self.assertRaises(FSError): list(self.fs.walk.files(on_error=lambda path, error: False)) def test_on_error_invalid(self): with self.assertRaises(TypeError): walk.Walker(on_error="nope") def test_subdir_uses_same_walker(self): class CustomWalker(walk.Walker): @classmethod def bind(cls, fs): return walk.BoundWalker(fs, walker_class=CustomWalker) class CustomizedMemoryFS(MemoryFS): walker_class = CustomWalker base_fs = CustomizedMemoryFS() base_fs.writetext("a", "a") base_fs.makedirs("b") base_fs.writetext("b/c", "c") 
base_fs.writetext("b/d", "d") base_walker = base_fs.walk self.assertEqual(base_walker.walker_class, CustomWalker) six.assertCountEqual(self, ["/a", "/b/c", "/b/d"], base_walker.files()) sub_fs = base_fs.opendir("b") sub_walker = sub_fs.walk self.assertEqual(sub_walker.walker_class, CustomWalker) six.assertCountEqual(self, ["/c", "/d"], sub_walker.files()) def test_readonly_wrapper_uses_same_walker(self): class CustomWalker(walk.Walker): @classmethod def bind(cls, fs): return walk.BoundWalker(fs, walker_class=CustomWalker) class CustomizedMemoryFS(MemoryFS): walker_class = CustomWalker base_fs = CustomizedMemoryFS() base_walker = base_fs.walk self.assertEqual(base_walker.walker_class, CustomWalker) readonly_fs = read_only(CustomizedMemoryFS()) readonly_walker = readonly_fs.walk self.assertEqual(readonly_walker.walker_class, CustomWalker) pyfilesystem2-2.4.12/tests/test_wildcard.py000066400000000000000000000035211400005060600207530ustar00rootroot00000000000000"""Test f2s.fnmatch.""" from __future__ import unicode_literals import unittest from fs import wildcard class TestFNMatch(unittest.TestCase): """Test wildcard.""" def test_wildcard(self): self.assertTrue(wildcard.match("*.py", "file.py")) self.assertTrue(wildcard.match("????.py", "????.py")) self.assertTrue(wildcard.match("file.py", "file.py")) self.assertTrue(wildcard.match("file.py[co]", "file.pyc")) self.assertTrue(wildcard.match("file.py[co]", "file.pyo")) self.assertTrue(wildcard.match("file.py[!c]", "file.py0")) self.assertTrue(wildcard.match("file.py[^]", "file.py^")) self.assertFalse(wildcard.match("*.jpg", "file.py")) self.assertFalse(wildcard.match("toolong.py", "????.py")) self.assertFalse(wildcard.match("file.pyc", "file.py")) self.assertFalse(wildcard.match("file.py[co]", "file.pyz")) self.assertFalse(wildcard.match("file.py[!o]", "file.pyo")) self.assertFalse(wildcard.match("file.py[]", "file.py0")) self.assertTrue(wildcard.imatch("*.py", "FILE.py")) self.assertTrue(wildcard.imatch("*.py", 
"file.PY")) def test_match_any(self): self.assertTrue(wildcard.match_any([], "foo.py")) self.assertTrue(wildcard.imatch_any([], "foo.py")) self.assertTrue(wildcard.match_any(["*.py", "*.pyc"], "foo.pyc")) self.assertTrue(wildcard.imatch_any(["*.py", "*.pyc"], "FOO.pyc")) def test_get_matcher(self): matcher = wildcard.get_matcher([], True) self.assertTrue(matcher("foo.py")) matcher = wildcard.get_matcher(["*.py"], True) self.assertTrue(matcher("foo.py")) self.assertFalse(matcher("foo.PY")) matcher = wildcard.get_matcher(["*.py"], False) self.assertTrue(matcher("foo.py")) self.assertTrue(matcher("FOO.py")) pyfilesystem2-2.4.12/tests/test_wrap.py000066400000000000000000000057631400005060600201450ustar00rootroot00000000000000from __future__ import unicode_literals import unittest from fs import errors from fs import open_fs from fs import wrap class TestWrap(unittest.TestCase): def test_readonly(self): mem_fs = open_fs("mem://") fs = wrap.read_only(mem_fs) with self.assertRaises(errors.ResourceReadOnly): fs.open("foo", "w") with self.assertRaises(errors.ResourceReadOnly): fs.appendtext("foo", "bar") with self.assertRaises(errors.ResourceReadOnly): fs.appendbytes("foo", b"bar") with self.assertRaises(errors.ResourceReadOnly): fs.makedir("foo") with self.assertRaises(errors.ResourceReadOnly): fs.move("foo", "bar") with self.assertRaises(errors.ResourceReadOnly): fs.openbin("foo", "w") with self.assertRaises(errors.ResourceReadOnly): fs.remove("foo") with self.assertRaises(errors.ResourceReadOnly): fs.removedir("foo") with self.assertRaises(errors.ResourceReadOnly): fs.setinfo("foo", {}) with self.assertRaises(errors.ResourceReadOnly): fs.settimes("foo", {}) with self.assertRaises(errors.ResourceReadOnly): fs.copy("foo", "bar") with self.assertRaises(errors.ResourceReadOnly): fs.create("foo") with self.assertRaises(errors.ResourceReadOnly): fs.writetext("foo", "bar") with self.assertRaises(errors.ResourceReadOnly): fs.writebytes("foo", b"bar") with 
self.assertRaises(errors.ResourceReadOnly): fs.makedirs("foo/bar") with self.assertRaises(errors.ResourceReadOnly): fs.touch("foo") with self.assertRaises(errors.ResourceReadOnly): fs.upload("foo", None) with self.assertRaises(errors.ResourceReadOnly): fs.writefile("foo", None) self.assertTrue(mem_fs.isempty("/")) mem_fs.writebytes("file", b"read me") with fs.openbin("file") as read_file: self.assertEqual(read_file.read(), b"read me") with fs.open("file", "rb") as read_file: self.assertEqual(read_file.read(), b"read me") def test_cachedir(self): mem_fs = open_fs("mem://") mem_fs.makedirs("foo/bar/baz") mem_fs.touch("egg") fs = wrap.cache_directory(mem_fs) self.assertEqual(sorted(fs.listdir("/")), ["egg", "foo"]) self.assertEqual(sorted(fs.listdir("/")), ["egg", "foo"]) self.assertTrue(fs.isdir("foo")) self.assertTrue(fs.isdir("foo")) self.assertTrue(fs.isfile("egg")) self.assertTrue(fs.isfile("egg")) self.assertEqual(fs.getinfo("foo"), mem_fs.getinfo("foo")) self.assertEqual(fs.getinfo("foo"), mem_fs.getinfo("foo")) self.assertEqual(fs.getinfo("/"), mem_fs.getinfo("/")) self.assertEqual(fs.getinfo("/"), mem_fs.getinfo("/")) with self.assertRaises(errors.ResourceNotFound): fs.getinfo("/foofoo") pyfilesystem2-2.4.12/tests/test_wrapfs.py000066400000000000000000000015261400005060600204670ustar00rootroot00000000000000from __future__ import unicode_literals import unittest from six import text_type from fs import wrapfs from fs.opener import open_fs class WrappedFS(wrapfs.WrapFS): wrap_name = "test" class TestWrapFS(unittest.TestCase): def setUp(self): self.wrapped_fs = open_fs("mem://") self.fs = WrappedFS(self.wrapped_fs) def test_encode(self): self.assertEqual((self.wrapped_fs, "foo"), self.fs.delegate_path("foo")) self.assertEqual((self.wrapped_fs, "bar"), self.fs.delegate_path("bar")) self.assertIs(self.wrapped_fs, self.fs.delegate_fs()) def test_repr(self): self.assertEqual(repr(self.fs), "WrappedFS(MemoryFS())") def test_str(self): 
self.assertEqual(text_type(self.fs), "(test)") self.assertEqual(text_type(wrapfs.WrapFS(open_fs("mem://"))), "") pyfilesystem2-2.4.12/tests/test_zipfs.py000066400000000000000000000171601400005060600203210ustar00rootroot00000000000000# -*- encoding: UTF-8 from __future__ import unicode_literals import os import sys import tempfile import unittest import zipfile import six from fs import zipfs from fs.compress import write_zip from fs.opener import open_fs from fs.opener.errors import NotWriteable from fs.errors import NoURL from fs.test import FSTestCases from fs.enums import Seek from .test_archives import ArchiveTestCases class TestWriteReadZipFS(unittest.TestCase): def setUp(self): fh, self._temp_path = tempfile.mkstemp() os.close(fh) def tearDown(self): os.remove(self._temp_path) def test_unicode_paths(self): # https://github.com/PyFilesystem/pyfilesystem2/issues/135 with zipfs.ZipFS(self._temp_path, write=True) as zip_fs: zip_fs.writetext("Файл", "some content") with zipfs.ZipFS(self._temp_path) as zip_fs: paths = list(zip_fs.walk.files()) for path in paths: self.assertIsInstance(path, six.text_type) with zip_fs.openbin(path) as f: f.read() class TestWriteZipFS(FSTestCases, unittest.TestCase): """ Test ZIPFS implementation. When writing, a ZipFS is essentially a TempFS. """ def make_fs(self): _zip_file = tempfile.TemporaryFile() fs = zipfs.ZipFS(_zip_file, write=True) fs._zip_file = _zip_file return fs def destroy_fs(self, fs): fs.close() del fs._zip_file class TestReadZipFS(ArchiveTestCases, unittest.TestCase): """ Test Reading zip files. 
""" def compress(self, fs): fh, self._temp_path = tempfile.mkstemp() os.close(fh) write_zip(fs, self._temp_path) def load_archive(self): return zipfs.ZipFS(self._temp_path) def remove_archive(self): os.remove(self._temp_path) def test_large(self): test_fs = open_fs("mem://") test_fs.writebytes("test.bin", b"a" * 50000) write_zip(test_fs, self._temp_path) self.fs = self.load_archive() with self.fs.openbin("test.bin") as f: self.assertEqual(f.read(), b"a" * 50000) with self.fs.openbin("test.bin") as f: self.assertEqual(f.read(50000), b"a" * 50000) with self.fs.openbin("test.bin") as f: self.assertEqual(f.read1(), b"a" * 50000) with self.fs.openbin("test.bin") as f: self.assertEqual(f.read1(50000), b"a" * 50000) def test_getinfo(self): super(TestReadZipFS, self).test_getinfo() top = self.fs.getinfo("top.txt", ["zip"]) if sys.platform in ("linux", "darwin"): self.assertEqual(top.get("zip", "create_system"), 3) def test_openbin(self): with self.fs.openbin("top.txt") as f: self.assertEqual(f.name, "top.txt") with self.fs.openbin("top.txt") as f: self.assertRaises(ValueError, f.seek, -2, Seek.set) with self.fs.openbin("top.txt") as f: self.assertRaises(ValueError, f.seek, 2, Seek.end) with self.fs.openbin("top.txt") as f: self.assertRaises(ValueError, f.seek, 0, 5) def test_read(self): with self.fs.openbin("top.txt") as f: self.assertEqual(f.read(), b"Hello, World") with self.fs.openbin("top.txt") as f: self.assertEqual(f.read(5), b"Hello") self.assertEqual(f.read(7), b", World") with self.fs.openbin("top.txt") as f: self.assertEqual(f.read(12), b"Hello, World") def test_read1(self): with self.fs.openbin("top.txt") as f: self.assertEqual(f.read1(), b"Hello, World") with self.fs.openbin("top.txt") as f: self.assertEqual(f.read1(5), b"Hello") self.assertEqual(f.read1(7), b", World") with self.fs.openbin("top.txt") as f: self.assertEqual(f.read1(12), b"Hello, World") def test_seek_set(self): with self.fs.openbin("top.txt") as f: self.assertEqual(f.tell(), 0) 
self.assertEqual(f.read(), b"Hello, World") self.assertEqual(f.tell(), 12) self.assertEqual(f.read(), b"") self.assertEqual(f.tell(), 12) self.assertEqual(f.seek(0), 0) self.assertEqual(f.tell(), 0) self.assertEqual(f.read1(), b"Hello, World") self.assertEqual(f.tell(), 12) self.assertEqual(f.seek(1), 1) self.assertEqual(f.tell(), 1) self.assertEqual(f.read(), b"ello, World") self.assertEqual(f.tell(), 12) self.assertEqual(f.seek(7), 7) self.assertEqual(f.tell(), 7) self.assertEqual(f.read(), b"World") self.assertEqual(f.tell(), 12) def test_seek_current(self): with self.fs.openbin("top.txt") as f: self.assertEqual(f.tell(), 0) self.assertEqual(f.read(5), b"Hello") self.assertEqual(f.tell(), 5) self.assertEqual(f.seek(2, Seek.current), 7) self.assertEqual(f.read1(), b"World") self.assertEqual(f.tell(), 12) self.assertEqual(f.seek(-1, Seek.current), 11) self.assertEqual(f.read(), b"d") with self.fs.openbin("top.txt") as f: self.assertRaises(ValueError, f.seek, -1, Seek.current) def test_seek_end(self): with self.fs.openbin("top.txt") as f: self.assertEqual(f.tell(), 0) self.assertEqual(f.seek(-12, Seek.end), 0) self.assertEqual(f.read1(5), b"Hello") self.assertEqual(f.seek(-7, Seek.end), 5) self.assertEqual(f.seek(-5, Seek.end), 7) self.assertEqual(f.read(), b"World") def test_geturl_for_fs(self): test_file = "foo/bar/egg/foofoo" expected = "zip://{zip_file_path}!/{file_inside_zip}".format( zip_file_path=self._temp_path.replace("\\", "/"), file_inside_zip=test_file ) self.assertEqual(self.fs.geturl(test_file, purpose="fs"), expected) def test_geturl_for_fs_but_file_is_binaryio(self): self.fs._file = six.BytesIO() self.assertRaises(NoURL, self.fs.geturl, "test", "fs") def test_geturl_for_download(self): test_file = "foo/bar/egg/foofoo" with self.assertRaises(NoURL): self.fs.geturl(test_file) def test_read_non_existent_file(self): fs = zipfs.ZipFS(open(self._temp_path, "rb")) # it has been very difficult to catch exception in __del__() del fs._zip try: fs.close() 
except AttributeError: self.fail("Could not close tar fs properly") except Exception: self.fail("Strange exception in closing fs") class TestReadZipFSMem(TestReadZipFS): def make_source_fs(self): return open_fs("mem://") class TestDirsZipFS(unittest.TestCase): def test_implied(self): """Test zipfs creates intermediate directories.""" fh, path = tempfile.mkstemp("testzip.zip") try: os.close(fh) with zipfile.ZipFile(path, mode="w") as z: z.writestr("foo/bar/baz/egg", b"hello") with zipfs.ReadZipFS(path) as zip_fs: foo = zip_fs.getinfo("foo", ["details"]) self.assertEqual(zip_fs.getinfo("foo/bar").name, "bar") self.assertEqual(zip_fs.getinfo("foo/bar/baz").name, "baz") self.assertTrue(foo.is_dir) self.assertTrue(zip_fs.isfile("foo/bar/baz/egg")) finally: os.remove(path) class TestOpener(unittest.TestCase): def test_not_writeable(self): with self.assertRaises(NotWriteable): open_fs("zip://foo.zip", writeable=True) pyfilesystem2-2.4.12/tox.ini000066400000000000000000000011021400005060600157130ustar00rootroot00000000000000[tox] envlist = {py27,py34,py35,py36,py37}{,-scandir},pypy,typecheck,lint sitepackages = False skip_missing_interpreters=True [testenv] deps = -r {toxinidir}/testrequirements.txt commands = coverage run -m pytest --cov-append {posargs} {toxinidir}/tests [testenv:typecheck] python = python37 deps = mypy==0.740 -r {toxinidir}/testrequirements.txt commands = make typecheck whitelist_externals = make [testenv:lint] python = python37 deps = flake8 # flake8-builtins flake8-bugbear flake8-comprehensions # flake8-isort flake8-mutable commands = flake8 fs tests
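A note on `TestDirsZipFS.test_implied` above: it exists because `zipfile.ZipFile.writestr` records only the file member itself, with no explicit entries for the intermediate folders, which is why a reader such as `ReadZipFS` must infer `foo`, `foo/bar`, and `foo/bar/baz`. A minimal stdlib-only sketch of that underlying behaviour (the buffer name `buf` is just illustrative):

```python
import io
import zipfile

# Write a single deeply nested member to an in-memory archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, mode="w") as z:
    z.writestr("foo/bar/baz/egg", b"hello")

# Reopen and list the central directory: only the file entry is
# present -- no "foo/", "foo/bar/" or "foo/bar/baz/" records.
with zipfile.ZipFile(buf) as z:
    names = z.namelist()

print(names)  # ['foo/bar/baz/egg']
```

Because the directory entries never exist in the archive, any filesystem view over a ZIP has to synthesize them from the member paths it sees.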