file_read_backwards-3.1.0/AUTHORS.rst

=======
Credits
=======

* Robin Robin
* John Leslie
* Samuel Giffard

file_read_backwards-3.1.0/CONTRIBUTING.rst

.. highlight:: shell

============
Contributing
============

Contributions are welcome, and they are greatly appreciated! Every little bit
helps, and credit will always be given.

You can contribute in many ways:

Types of Contributions
----------------------

Report Bugs
~~~~~~~~~~~

Report bugs at https://github.com/RobinNil/file_read_backwards/issues.

If you are reporting a bug, please include:

* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.

Fix Bugs
~~~~~~~~

Look through the GitHub issues for bugs. Anything tagged with "bug"
and "help wanted" is open to whoever wants to implement it.

Implement Features
~~~~~~~~~~~~~~~~~~

Look through the GitHub issues for features. Anything tagged with "enhancement"
and "help wanted" is open to whoever wants to implement it.

Write Documentation
~~~~~~~~~~~~~~~~~~~

file_read_backwards could always use more documentation, whether as part of the
official file_read_backwards docs, in docstrings, or even on the web in blog
posts, articles, and such.

Submit Feedback
~~~~~~~~~~~~~~~

The best way to send feedback is to file an issue at
https://github.com/RobinNil/file_read_backwards/issues.

If you are proposing a feature:

* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions
  are welcome :)

Get Started!
------------

Ready to contribute? Here's how to set up `file_read_backwards` for local development.

1. Fork the `file_read_backwards` repo on GitHub.
2. Clone your fork locally::

    $ git clone git@github.com:your_name_here/file_read_backwards.git

3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper
   installed, this is how you set up your fork for local development::

    $ mkvirtualenv file_read_backwards
    $ cd file_read_backwards/
    $ python setup.py develop

4. Create a branch for local development::

    $ git checkout -b name-of-your-bugfix-or-feature

   Now you can make your changes locally.

5. When you're done making changes, check that your changes pass flake8 and the
   tests, including testing other Python versions with tox::

    $ flake8 file_read_backwards tests
    $ python setup.py test or py.test
    $ tox

   To get flake8 and tox, just pip install them into your virtualenv.

6. Commit your changes and push your branch to GitHub::

    $ git add .
    $ git commit -m "Your detailed description of your changes."
    $ git push origin name-of-your-bugfix-or-feature

7. Submit a pull request through the GitHub website.

Pull Request Guidelines
-----------------------

Before you submit a pull request, check that it meets these guidelines:

1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
   your new functionality into a function with a docstring, and add the
   feature to the list in README.rst.
Tips
----

To run a subset of tests::

    $ python -m unittest tests.test_file_read_backwards

file_read_backwards-3.1.0/HISTORY.rst

=======
History
=======

1.0.0 (2016-12-18)
------------------

* First release on PyPI.

1.1.0 (2016-12-31)
------------------

* Added support for "latin-1".
* Marked the package "Production/Stable".

1.1.1 (2017-01-09)
------------------

* Updated README.rst for more clarity around encoding support and Python 2.7 and 3 support.

1.1.2 (2017-01-11)
------------------

* Documentation re-arrangement. Usage examples are now in README.rst.
* Minor refactoring.

1.2.0 (2017-09-01)
------------------

* Included context manager style as it provides cleaner/automatic close functionality.

1.2.1 (2017-09-02)
------------------

* Made docstrings consistent with Google style and did some code linting.

1.2.2 (2017-11-19)
------------------

* Re-release of 1.2.1 for ease of updating the PyPI page for updated travis & pyup.

2.0.0 (2018-03-23)
------------------

Mimicking Python file object behavior.

* FileReadBackwards no longer creates multiple iterators (a change of behavior from the 1.x.y versions).
* Added a readline() function that returns one line at a time with a trailing new line, and an empty string when it reaches the end of file. The fine print: the trailing new line will be `os.linesep` (rather than whichever new line type is in the file).

3.0.0 (2023-03-29)
------------------

* Officially support Python 3.7 - 3.11.
3.1.0 (2024-05-02)
------------------

* Officially support Python 3.7 - 3.12.

file_read_backwards-3.1.0/LICENSE

MIT License

Copyright (c) 2016, Robin Robin

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
file_read_backwards-3.1.0/MANIFEST.in

include AUTHORS.rst
include CONTRIBUTING.rst
include HISTORY.rst
include LICENSE
include README.rst

recursive-include tests *
recursive-exclude * __pycache__
recursive-exclude * *.py[co]

recursive-include docs *.rst conf.py Makefile make.bat *.jpg *.png *.gif

file_read_backwards-3.1.0/PKG-INFO

Metadata-Version: 2.1
Name: file_read_backwards
Version: 3.1.0
Summary: Memory efficient way of reading files line-by-line from the end of file
Home-page: https://github.com/RobinNil/file_read_backwards
Author: Robin Robin
Author-email: robinsquare42@gmail.com
License: MIT license
Keywords: file_read_backwards
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
License-File: LICENSE
License-File: AUTHORS.rst

file_read_backwards-3.1.0/README.rst

===============================
file_read_backwards
===============================

.. image:: https://img.shields.io/pypi/v/file_read_backwards.svg
        :target: https://pypi.python.org/pypi/file_read_backwards

.. image:: https://readthedocs.org/projects/file-read-backwards/badge/?version=latest
        :target: https://file-read-backwards.readthedocs.io/en/latest/?badge=latest
        :alt: Documentation Status

.. image:: https://pyup.io/repos/github/RobinNil/file_read_backwards/shield.svg
        :target: https://pyup.io/repos/github/RobinNil/file_read_backwards/
        :alt: Updates

Memory efficient way of reading files line-by-line from the end of file

* Free software: MIT license
* Documentation: https://file-read-backwards.readthedocs.io.
Features
--------

This package reads a file backwards, line by line, as unicode, in a
memory-efficient manner. It supports Python 3 (versions 3.7 through 3.12 are
officially supported).

It currently supports ascii, latin-1, and utf-8 encodings.

It supports "\\r", "\\r\\n", and "\\n" as new lines.

Usage Examples
--------------

An example, using `python3.11`::

    from file_read_backwards import FileReadBackwards

    with FileReadBackwards("/tmp/file", encoding="utf-8") as frb:

        # getting lines by lines starting from the last line up
        for l in frb:
            print(l)

Another way to consume the file is via `readline()`, in `python3.11`::

    from file_read_backwards import FileReadBackwards

    with FileReadBackwards("/tmp/file", encoding="utf-8") as frb:

        while True:
            l = frb.readline()
            if not l:
                break
            print(l, end="")

Credits
-------

This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.

.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage

file_read_backwards-3.1.0/docs/Makefile

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH.
If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/file_read_backwards.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/file_read_backwards.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/file_read_backwards"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/file_read_backwards"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished.
The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

file_read_backwards-3.1.0/docs/authors.rst

.. include:: ../AUTHORS.rst

file_read_backwards-3.1.0/docs/conf.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# file_read_backwards documentation build configuration file, created by
# sphinx-quickstart on Tue Jul 9 22:26:36 2013.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys
import os

# If extensions (or modules to document with autodoc) are in another
# directory, add these directories to sys.path here. If the directory is
# relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))

# Get the project root dir, which is the parent dir of this
cwd = os.getcwd()
project_root = os.path.dirname(cwd)

# Insert the project root dir as the first element in the PYTHONPATH.
# This lets us ensure that the source package is imported, and that its
# version is used.
sys.path.insert(0, project_root)

import file_read_backwards

# -- General configuration ---------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'file_read_backwards'
copyright = u"2016, Robin Robin"

# The version info for the project you're documenting, acts as replacement
# for |version| and |release|, also used in various other places throughout
# the built documents.
#
# The short X.Y version.
version = file_read_backwards.__version__
# The full version, including alpha/beta/rc tags.
release = file_read_backwards.__version__

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to
# some non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built
# documents.
#keep_warnings = False

# -- Options for HTML output -------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'

# Theme options are theme-specific and customize the look and feel of a
# theme further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as
# html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the
# top of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon
# of the docs. This file should be a Windows icon file (.ico) being
# 16x16 or 32x32 pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets)
# here, relative to this directory. They are copied after the builtin
# static files, so a file named "default.css" will overwrite the builtin
# "default.css".
html_static_path = ['_static']

# If not '', a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names
# to template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer.
# Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer.
# Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages
# will contain a <link> tag referring to it. The value of this option
# must be the base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'file_read_backwardsdoc'

# -- Options for LaTeX output ------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index', 'file_read_backwards.tex',
     u'file_read_backwards Documentation',
     u'Robin Robin', 'manual'),
]

# The name of an image file (relative to this directory) to place at
# the top of the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings
# are parts, not chapters.
#latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True

# -- Options for manual page output ------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'file_read_backwards',
     u'file_read_backwards Documentation',
     [u'Robin Robin'], 1)
]

# If true, show URL addresses after external links.
#man_show_urls = False

# -- Options for Texinfo output ----------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    ('index', 'file_read_backwards',
     u'file_read_backwards Documentation',
     u'Robin Robin',
     'file_read_backwards',
     'One line description of project.',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
#texinfo_appendices = []

# If false, no module index is generated.
#texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False

file_read_backwards-3.1.0/docs/contributing.rst

.. include:: ../CONTRIBUTING.rst

file_read_backwards-3.1.0/docs/file_read_backwards.rst

file\_read\_backwards package
=============================

Submodules
----------

file\_read\_backwards.buffer\_work\_space module
------------------------------------------------

.. automodule:: file_read_backwards.buffer_work_space
    :members:
    :undoc-members:
    :show-inheritance:

file\_read\_backwards.file\_read\_backwards module
--------------------------------------------------

.. automodule:: file_read_backwards.file_read_backwards
    :members:
    :undoc-members:
    :show-inheritance:

Module contents
---------------

.. automodule:: file_read_backwards
    :members:
    :undoc-members:
    :show-inheritance:

file_read_backwards-3.1.0/docs/history.rst

.. include:: ../HISTORY.rst

file_read_backwards-3.1.0/docs/index.rst

Welcome to file_read_backwards's documentation!
===============================================

Contents:

.. toctree::
   :maxdepth: 2

   readme
   installation
   usage
   contributing
   authors
   history

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

file_read_backwards-3.1.0/docs/installation.rst

.. highlight:: shell

============
Installation
============

Stable release
--------------

To install file_read_backwards, run this command in your terminal:

.. code-block:: console

    $ pip install file_read_backwards

This is the preferred method to install file_read_backwards, as it will always install the most recent stable release.

If you don't have `pip`_ installed, this `Python installation guide`_ can guide you through the process.

.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/

From sources
------------

The sources for file_read_backwards can be downloaded from the `Github repo`_.

You can either clone the public repository:

.. code-block:: console

    $ git clone git://github.com/RobinNil/file_read_backwards

Or download the `tarball`_:

.. code-block:: console

    $ curl -OL https://github.com/RobinNil/file_read_backwards/tarball/master

Once you have a copy of the source, you can install it with:

.. code-block:: console

    $ python setup.py install

.. _Github repo: https://github.com/RobinNil/file_read_backwards
.. _tarball: https://github.com/RobinNil/file_read_backwards/tarball/master

file_read_backwards-3.1.0/docs/make.bat

@ECHO OFF

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
	set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)

if "%1" == "" goto help

if "%1" == "help" (
	:help
	echo.Please use `make ^<target^>` where ^<target^> is one of
	echo.  html       to make standalone HTML files
	echo.  dirhtml    to make HTML files named index.html in directories
	echo.  singlehtml to make a single large HTML file
	echo.  pickle     to make pickle files
	echo.  json       to make JSON files
	echo.  htmlhelp   to make HTML files and a HTML help project
	echo.  qthelp     to make HTML files and a qthelp project
	echo.  devhelp    to make HTML files and a Devhelp project
	echo.  epub       to make an epub
	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
	echo.  text       to make text files
	echo.  man        to make manual pages
	echo.  texinfo    to make Texinfo files
	echo.  gettext    to make PO message catalogs
	echo.  changes    to make an overview over all changed/added/deprecated items
	echo.  xml        to make Docutils-native XML files
	echo.  pseudoxml  to make pseudoxml-XML files for display purposes
	echo.  linkcheck  to check all external links for integrity
	echo.  doctest    to run all doctests embedded in the documentation if enabled
	goto end
)

if "%1" == "clean" (
	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
	del /q /s %BUILDDIR%\*
	goto end
)

%SPHINXBUILD% 2> nul
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)

if "%1" == "html" (
	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
	goto end
)

if "%1" == "dirhtml" (
	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
	goto end
)

if "%1" == "singlehtml" (
	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
	goto end
)

if "%1" == "pickle" (
	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can process the pickle files.
	goto end
)

if "%1" == "json" (
	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can process the JSON files.
	goto end
)

if "%1" == "htmlhelp" (
	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
	goto end
)

if "%1" == "qthelp" (
	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
	if errorlevel 1 exit /b 1
	echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\file_read_backwards.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\file_read_backwards.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "latexpdf" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf cd %BUILDDIR%/.. echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "latexpdfja" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf-ja cd %BUILDDIR%/.. echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. 
goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) if "%1" == "xml" ( %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml if errorlevel 1 exit /b 1 echo. echo.Build finished. The XML files are in %BUILDDIR%/xml. goto end ) if "%1" == "pseudoxml" ( %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml if errorlevel 1 exit /b 1 echo. echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. goto end ) :end ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1680112164.0 file_read_backwards-3.1.0/docs/modules.rst0000644000076500000240000000013614411075044020016 0ustar00rrobinstafffile_read_backwards =================== .. toctree:: :maxdepth: 4 file_read_backwards ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1482088124.0 file_read_backwards-3.1.0/docs/readme.rst0000644000076500000240000000003313025557274017611 0ustar00rrobinstaff.. include:: ../README.rst ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1484180533.0 file_read_backwards-3.1.0/docs/usage.rst0000644000076500000240000000005513035546065017461 0ustar00rrobinstaff===== Usage ===== Please see :doc:`readme`. 
file_read_backwards-3.1.0/file_read_backwards/__init__.py

# -*- coding: utf-8 -*-
from .file_read_backwards import FileReadBackwards  # noqa: F401

__author__ = """Robin Robin"""
__email__ = 'robinsquare42@gmail.com'
__version__ = '3.1.0'

file_read_backwards-3.1.0/file_read_backwards/buffer_work_space.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""BufferWorkSpace module."""

import os

new_lines = ["\r\n", "\n", "\r"]
# we only support encodings that are backward compatible with ascii
new_lines_bytes = [n.encode("ascii") for n in new_lines]


class BufferWorkSpace:

    """A helper class for FileReadBackwards."""

    def __init__(self, fp, chunk_size):
        """Convention for the data.

        When read_buffer is not None, it represents the contents of the file from `read_position`
        onwards that have not been processed/returned.
        read_position represents the file pointer position that has been read into read_buffer;
        it is initialized to be just past the end of the file.
        """
        self.fp = fp
        self.read_position = _get_file_size(self.fp)  # initialized to just past the end of the file
        self.read_buffer = None
        self.chunk_size = chunk_size

    def add_to_buffer(self, content, read_position):
        """Add additional bytes content as read from read_position.

        Args:
            content (bytes): data to be added to the BufferWorkSpace buffer.
            read_position (int): the file pointer position the data was read from.
""" self.read_position = read_position if self.read_buffer is None: self.read_buffer = content else: self.read_buffer = content + self.read_buffer def yieldable(self): """Return True if there is a line that the buffer can return, False otherwise.""" if self.read_buffer is None: return False t = _remove_trailing_new_line(self.read_buffer) n = _find_furthest_new_line(t) if n >= 0: return True # we have read in entire file and have some unprocessed lines if self.read_position == 0 and self.read_buffer is not None: return True return False def return_line(self): """Return a new line if it is available. Precondition: self.yieldable() must be True """ assert(self.yieldable()) # noqa: E275 t = _remove_trailing_new_line(self.read_buffer) i = _find_furthest_new_line(t) if i >= 0: delimiter = i + 1 after_new_line = slice(delimiter, None) up_to_include_new_line = slice(0, delimiter) r = t[after_new_line] self.read_buffer = t[up_to_include_new_line] else: # the case where we have read in entire file and at the "last" line r = t self.read_buffer = None return r def read_until_yieldable(self): """Read in additional chunks until it is yieldable.""" while not self.yieldable(): read_content, read_position = _get_next_chunk(self.fp, self.read_position, self.chunk_size) self.add_to_buffer(read_content, read_position) def has_returned_every_line(self): """Return True if every single line in the file has been returned, False otherwise.""" if self.read_position == 0 and self.read_buffer is None: return True return False def _get_file_size(fp): return os.fstat(fp.fileno()).st_size def _get_next_chunk(fp, previously_read_position, chunk_size): """Return next chunk of data that we would from the file pointer. 
    Args:
        fp: file-like object
        previously_read_position: file pointer position that we have read from
        chunk_size: desired read chunk_size

    Returns:
        (bytestring, int): data that has been read in, the file pointer position where the data has been read from
    """
    seek_position, read_size = _get_what_to_read_next(fp, previously_read_position, chunk_size)
    fp.seek(seek_position)
    read_content = fp.read(read_size)
    read_position = seek_position
    return read_content, read_position


def _get_what_to_read_next(fp, previously_read_position, chunk_size):
    """Return information on which file pointer position to read from and how many bytes.

    Args:
        fp: file-like object
        previously_read_position (int): the file pointer position that has been read previously
        chunk_size (int): ideal io chunk_size

    Returns:
        (int, int): the next seek position, how many bytes to read next
    """
    seek_position = max(previously_read_position - chunk_size, 0)
    read_size = chunk_size

    # examples: say, our new_lines are potentially "\r\n", "\n", "\r"
    # find a reading point where it is not "\n", rewind further if necessary
    # if we have "\r\n" and we read in "\n",
    # the next iteration would treat "\r" as a different new line.
    # Q: why don't I just check if it is b"\n", but use a function?
    # A: so that we can potentially expand this into generic sets of separators, later on.
    while seek_position > 0:
        fp.seek(seek_position)
        if _is_partially_read_new_line(fp.read(1)):
            seek_position -= 1
            read_size += 1  # as we rewind further, let's make sure we read more to compensate
        else:
            break

    # take care of the special case when we are back at the beginning of the file
    read_size = min(previously_read_position - seek_position, read_size)
    return seek_position, read_size


def _remove_trailing_new_line(line):
    """Remove a single instance of new line at the end of line, if it exists.
    Returns:
        bytestring
    """
    # replace only 1 instance of newline
    # match the longest newline first (hence the reverse=True); we want to match "\r\n" rather than "\n" if we can
    for n in sorted(new_lines_bytes, key=lambda x: len(x), reverse=True):
        if line.endswith(n):
            remove_new_line = slice(None, -len(n))
            return line[remove_new_line]
    return line


def _find_furthest_new_line(read_buffer):
    """Return -1 if read_buffer does not contain a new line, otherwise the position of the rightmost newline.

    Args:
        read_buffer (bytestring)

    Returns:
        int: the rightmost position of a new line character in read_buffer if found, else -1
    """
    new_line_positions = [read_buffer.rfind(n) for n in new_lines_bytes]
    return max(new_line_positions)


def _is_partially_read_new_line(b):
    """Return True when b is part of a new line separator found at index >= 1, False otherwise.

    Args:
        b (bytestring)

    Returns:
        bool
    """
    for n in new_lines_bytes:
        if n.find(b) >= 1:
            return True
    return False

file_read_backwards-3.1.0/file_read_backwards/file_read_backwards.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""FileReadBackwards module."""

import io
import os

from .buffer_work_space import BufferWorkSpace

supported_encodings = ["utf-8", "ascii", "latin-1"]  # any encodings that are backward compatible with ascii should work


class FileReadBackwards:

    """Class definition for `FileReadBackwards`.

    A `FileReadBackwards` will spawn a `FileReadBackwardsIterator` and keep an opened file handler.

    It can be used as a context manager; when exited, it will close its file handler.
    In any mode, `close()` can be called to close the file handler.
    """

    def __init__(self, path, encoding="utf-8", chunk_size=io.DEFAULT_BUFFER_SIZE):
        """Constructor for FileReadBackwards.
        Args:
            path: path to the file to be read
            encoding (str): encoding of the file
            chunk_size (int): how many bytes to read at a time
        """
        if encoding.lower() not in supported_encodings:
            error_message = "{0} encoding was not supported/tested. ".format(encoding)
            error_message += "Supported encodings are '{0}'".format(",".join(supported_encodings))
            raise NotImplementedError(error_message)
        self.path = path
        self.encoding = encoding.lower()
        self.chunk_size = chunk_size
        self.iterator = FileReadBackwardsIterator(io.open(self.path, mode="rb"), self.encoding, self.chunk_size)

    def __iter__(self):
        """Return its iterator."""
        return self.iterator

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        """Close the opened file handler and propagate all exceptions on exit."""
        self.close()
        return False

    def close(self):
        """Close the opened file handler."""
        self.iterator.close()

    def readline(self):
        """Return one line of content (with a trailing newline) if there is content; return '' otherwise."""
        try:
            r = next(self.iterator) + os.linesep
            return r
        except StopIteration:
            return ""


class FileReadBackwardsIterator:

    """Iterator for `FileReadBackwards`.

    This reads a file backwards, line by line. It holds an opened file handler.
    """

    def __init__(self, fp, encoding, chunk_size):
        """Constructor for FileReadBackwardsIterator.

        Args:
            fp (File): a file that we wish to start reading backwards from
            encoding (str): encoding of the file
            chunk_size (int): how many bytes to read at a time
        """
        self.path = fp.name
        self.encoding = encoding
        self.chunk_size = chunk_size
        self.__fp = fp
        self.__buf = BufferWorkSpace(self.__fp, self.chunk_size)

    def __iter__(self):
        return self

    def next(self):
        """Return lines as unicode strings, from the last line of the file back to the first.

        Gets exhausted if::

            * already reached the beginning of the file on a previous iteration
            * the file got closed

        When it gets exhausted, it closes the file handler.
""" # Using binary mode, because some encodings such as "utf-8" use variable number of # bytes to encode different Unicode points. # Without using binary mode, we would probably need to understand each encoding more # and do the seek operations to find the proper boundary before issuing read if self.closed: raise StopIteration if self.__buf.has_returned_every_line(): self.close() raise StopIteration self.__buf.read_until_yieldable() r = self.__buf.return_line() return r.decode(self.encoding) __next__ = next @property def closed(self): """The status of the file handler. :return: True if the file handler is still opened. False otherwise. """ return self.__fp.closed def close(self): """Closes the file handler.""" self.__fp.close() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1714844135.8717992 file_read_backwards-3.1.0/file_read_backwards.egg-info/0000755000076500000240000000000014615470750022322 5ustar00rrobinstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1714844135.0 file_read_backwards-3.1.0/file_read_backwards.egg-info/PKG-INFO0000644000076500000240000001016614615470747023431 0ustar00rrobinstaffMetadata-Version: 2.1 Name: file-read-backwards Version: 3.1.0 Summary: Memory efficient way of reading files line-by-line from the end of file Home-page: https://github.com/RobinNil/file_read_backwards Author: Robin Robin Author-email: robinsquare42@gmail.com License: MIT license Keywords: file_read_backwards Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Natural Language :: English Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 
Classifier: Programming Language :: Python :: 3.12
License-File: LICENSE
License-File: AUTHORS.rst

===============================
file_read_backwards
===============================

.. image:: https://img.shields.io/pypi/v/file_read_backwards.svg
        :target: https://pypi.python.org/pypi/file_read_backwards

.. image:: https://readthedocs.org/projects/file-read-backwards/badge/?version=latest
        :target: https://file-read-backwards.readthedocs.io/en/latest/?badge=latest
        :alt: Documentation Status

.. image:: https://pyup.io/repos/github/RobinNil/file_read_backwards/shield.svg
        :target: https://pyup.io/repos/github/RobinNil/file_read_backwards/
        :alt: Updates

Memory efficient way of reading files line-by-line from the end of file

* Free software: MIT license
* Documentation: https://file-read-backwards.readthedocs.io.

Features
--------

This package reads a file backwards, line by line, as unicode, in a memory-efficient manner on Python 3.

It currently supports ascii, latin-1, and utf-8 encodings. It supports "\\r", "\\r\\n", and "\\n" as new lines.

Usage Examples
--------------

An example, using `python3.11`::

    from file_read_backwards import FileReadBackwards

    with FileReadBackwards("/tmp/file", encoding="utf-8") as frb:

        # getting lines one by one, starting from the last line up
        for l in frb:
            print(l)

Another way to consume the file is via `readline()`, in `python3.11`::

    from file_read_backwards import FileReadBackwards

    with FileReadBackwards("/tmp/file", encoding="utf-8") as frb:

        while True:
            l = frb.readline()
            if not l:
                break
            print(l, end="")

Credits
---------

This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.

.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage

=======
History
=======

1.0.0 (2016-12-18)
------------------

* First release on PyPI.
1.1.0 (2016-12-31)
------------------

* Added support for "latin-1".
* Marked the package "Production/Stable".

1.1.1 (2017-01-09)
------------------

* Updated README.rst for more clarity around encoding support and Python 2.7 and 3 support.

1.1.2 (2017-01-11)
------------------

* Documentation re-arrangement. Usage examples are now in README.rst.
* Minor refactoring.

1.2.0 (2017-09-01)
------------------

* Include context manager style as it provides cleaner/automatic close functionality.

1.2.1 (2017-09-02)
------------------

* Made docstrings consistent with Google style and did some code linting.

1.2.2 (2017-11-19)
------------------

* Re-release of 1.2.1 for ease of updating the PyPI page for updated travis & pyup.

2.0.0 (2018-03-23)
------------------

Mimicking Python file object behavior.

* FileReadBackwards no longer creates multiple iterators (a change of behavior from the 1.x.y versions).
* Added a readline() function that returns one line at a time with a trailing new line, and an empty string when it reaches the end of file. The fine print: the trailing new line will be `os.linesep` (rather than whichever new line type is in the file).

3.0.0 (2023-03-29)
------------------

* Officially support Python 3.7 - 3.11.
3.1.0 (2024-05-02) ------------------ * Officially support Python 3.7 - 3.12 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1714844135.0 file_read_backwards-3.1.0/file_read_backwards.egg-info/SOURCES.txt0000644000076500000240000000146214615470747024217 0ustar00rrobinstaffAUTHORS.rst CONTRIBUTING.rst HISTORY.rst LICENSE MANIFEST.in README.rst setup.cfg setup.py docs/Makefile docs/authors.rst docs/conf.py docs/contributing.rst docs/file_read_backwards.rst docs/history.rst docs/index.rst docs/installation.rst docs/make.bat docs/modules.rst docs/readme.rst docs/usage.rst docs/_build/html/_static/file.png docs/_build/html/_static/minus.png docs/_build/html/_static/plus.png file_read_backwards/__init__.py file_read_backwards/buffer_work_space.py file_read_backwards/file_read_backwards.py file_read_backwards.egg-info/PKG-INFO file_read_backwards.egg-info/SOURCES.txt file_read_backwards.egg-info/dependency_links.txt file_read_backwards.egg-info/not-zip-safe file_read_backwards.egg-info/top_level.txt tests/__init__.py tests/test_buffer_work_space.py tests/test_file_read_backwards.py././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1714844135.0 file_read_backwards-3.1.0/file_read_backwards.egg-info/dependency_links.txt0000644000076500000240000000000114615470747026376 0ustar00rrobinstaff ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1714844135.0 file_read_backwards-3.1.0/file_read_backwards.egg-info/not-zip-safe0000644000076500000240000000000114615470747024556 0ustar00rrobinstaff ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1714844135.0 file_read_backwards-3.1.0/file_read_backwards.egg-info/top_level.txt0000644000076500000240000000002414615470747025056 0ustar00rrobinstafffile_read_backwards ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1714844135.8731403 
file_read_backwards-3.1.0/setup.cfg0000644000076500000240000000065314615470750016522 0ustar00rrobinstaff[bumpversion] current_version = 3.1.0 commit = True tag = True [bumpversion:file:setup.py] search = version='{current_version}' replace = version='{new_version}' [bumpversion:file:file_read_backwards/__init__.py] search = __version__ = '{current_version}' replace = __version__ = '{new_version}' [bdist_wheel] universal = 1 [flake8] exclude = docs max-line-length = 120 ignore = D100 [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1714843454.0 file_read_backwards-3.1.0/setup.py0000644000076500000240000000275114615467476016427 0ustar00rrobinstaff#!/usr/bin/env python # -*- coding: utf-8 -*- from setuptools import setup with open('README.rst') as readme_file: readme = readme_file.read() with open('HISTORY.rst') as history_file: history = history_file.read() requirements = [ ] test_requirements = [ "mock", ] setup( name='file_read_backwards', version='3.1.0', description="Memory efficient way of reading files line-by-line from the end of file", long_description=readme + '\n\n' + history, author="Robin Robin", author_email='robinsquare42@gmail.com', url='https://github.com/RobinNil/file_read_backwards', packages=[ 'file_read_backwards', ], package_dir={'file_read_backwards': 'file_read_backwards'}, include_package_data=True, install_requires=requirements, license="MIT license", zip_safe=False, keywords='file_read_backwards', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Natural Language :: English', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'Programming Language :: Python :: 3.10', 'Programming Language :: Python :: 3.11', 'Programming Language :: Python :: 3.12', ], test_suite='tests', 
tests_require=test_requirements ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1714844135.8724823 file_read_backwards-3.1.0/tests/0000755000076500000240000000000014615470750016037 5ustar00rrobinstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1482088124.0 file_read_backwards-3.1.0/tests/__init__.py0000644000076500000240000000003013025557274020142 0ustar00rrobinstaff# -*- coding: utf-8 -*- ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1686279842.0 file_read_backwards-3.1.0/tests/test_buffer_work_space.py0000644000076500000240000004645414440513242023141 0ustar00rrobinstaff#!/usr/bin/env python # -*- coding: utf-8 -*- """Tests for `buffer_work_space` module.""" import io import os import tempfile import unittest from mock import Mock, patch from file_read_backwards.buffer_work_space import BufferWorkSpace from file_read_backwards.buffer_work_space import new_lines_bytes from file_read_backwards.buffer_work_space import _find_furthest_new_line from file_read_backwards.buffer_work_space import _remove_trailing_new_line from file_read_backwards.buffer_work_space import _get_file_size from file_read_backwards.buffer_work_space import _is_partially_read_new_line from file_read_backwards.buffer_work_space import _get_what_to_read_next from file_read_backwards.buffer_work_space import _get_next_chunk class TestFindFurthestNewLine(unittest.TestCase): """Class that contains test cases for the _find_furthest_new_line module.""" def setUp(self): # noqa: N802 pass def tearDown(self): # noqa: N802 pass def test_find_furthest_new_line_with_no_new_line_in_empty_byte_string(self): """Expect return value of -1 when empty bytestring is passed in.""" test_string = b"" r = _find_furthest_new_line(test_string) self.assertEqual(r, -1) def test_find_furthest_new_line_with_no_new_line_in_non_empty_byte_string(self): """Expect return value of -1 when non-empty bytestrings that don't 
contain new line is passed in.""" test_string = b"SomeRandomCharacters" r = _find_furthest_new_line(test_string) self.assertEqual(r, -1) def test_find_furthest_new_line_with_bytestring_with_new_line_at_the_end(self): """Expect return value of the last index of the test_string because the new line is at the end.""" base_string = b"SomeRandomCharacters" for n in new_lines_bytes: test_string = base_string + n expected_value = len(test_string) - 1 r = _find_furthest_new_line(test_string) self.assertEqual(r, expected_value, msg="Test with {0} as new line".format(repr(n))) def test_find_furthest_new_line_with_bytestring_with_new_line_in_the_middle(self): """Expect return value pointing to the middle of the test string where the newline is at.""" base_string = b"SomeRandomCharacters" for n in new_lines_bytes: test_string = base_string + n + base_string expected_value = len(base_string) + len(n) - 1 r = _find_furthest_new_line(test_string) self.assertEqual(r, expected_value, msg="Test with {0} as new line".format(repr(n))) def test_find_furthest_new_line_with_bytestring_with_new_line_in_the_middle_and_end(self): """Expect return value of the last index of the test_string because the new line is at the end.""" base_string = b"SomeRandomCharacters" for n in new_lines_bytes: test_string = base_string + n + base_string + n expected_value = len(test_string) - 1 r = _find_furthest_new_line(test_string) self.assertEqual(r, expected_value, msg="Test with {0} as new line".format(repr(n))) class TestRemoveTrailingNewLine(unittest.TestCase): """Class that contains test cases for _remove_trailing_new_line.""" def test_remove_trailing_new_line_with_empty_byte_string(self): """Expect nothing to change, because empty byte string does not contain trailing new line.""" test_string = b"" expected_string = test_string r = _remove_trailing_new_line(test_string) self.assertEqual(r, expected_string) def test_remove_trailing_new_line_with_non_empty_byte_string_with_no_new_line(self): """Expect 
nothing to change.""" test_string = b"Something" expected_string = test_string r = _remove_trailing_new_line(test_string) self.assertEqual(r, expected_string) def test_remove_trailing_new_line_with_non_empty_byte_string_with_variety_of_new_lines(self): """Expect new lines to be removed at the end of the string.""" expected_str = b"Something" for n in new_lines_bytes: test_string = expected_str + n r = _remove_trailing_new_line(test_string) self.assertEqual( r, expected_str, msg="Test with {0} followed by {1} as new line at the end of str".format(repr(expected_str), repr(n))) def test_remove_trailing_new_line_with_non_empty_byte_string_with_variety_of_new_lines_in_the_middle(self): """Expect nothing to change because the new line is in the middle.""" base_string = b"Something" for n in new_lines_bytes: test_string = base_string + n + base_string expected_string = test_string r = _remove_trailing_new_line(test_string) self.assertEqual(r, expected_string, msg="Test with {0} as new line".format(repr(n))) class TestGetFileSize(unittest.TestCase): """Class that contains test cases for _get_file_size.""" def test_empty_file(self): """Expect value of 0 because if it empty file.""" with tempfile.NamedTemporaryFile(delete=False) as t: pass expected_value = 0 with io.open(t.name, mode="rb") as fp: r = _get_file_size(fp) self.assertEqual(r, expected_value) os.unlink(t.name) def test_file_with_eight_bytes(self): """Expect value of 8 because if it is an 8-bytes file.""" with tempfile.NamedTemporaryFile(delete=False) as t: t.write(b"a" * 8) expected_value = 8 with io.open(t.name, mode="rb") as fp: r = _get_file_size(fp) self.assertEqual(r, expected_value) os.unlink(t.name) class TestIsPartiallyReadNewLine(unittest.TestCase): """Class that contains test cases for _is_partially_read_new_line.""" def test_when_we_have_a_partially_read_new_line(self): """Insert bytestring that is the last byte of new line separator that is longer than 1 byte.""" for n in new_lines_bytes: if len(n) > 
1:
                b = n[-1]
                r = _is_partially_read_new_line(b)
                self.assertTrue(r)


class TestGetWhatToReadNext(unittest.TestCase):
    """Class that contains test cases for _get_what_to_read_next."""

    def test_with_empty_file(self):
        """Expect (0, 0) when we pass in an empty file."""
        with tempfile.NamedTemporaryFile(delete=False) as t:
            pass
        expected_result = (0, 0)
        with io.open(t.name, mode="rb") as fp:
            r = _get_what_to_read_next(fp, previously_read_position=0, chunk_size=3)
            self.assertEqual(expected_result, r)
        os.unlink(t.name)

    def test_with_file_with_seven_bytes_of_alphanumeric(self):
        """Test with alphanumeric contents of size 7 bytes with chunk_size of 3. Expect (4, 3)."""
        with tempfile.NamedTemporaryFile(delete=False) as t:
            t.write(b"abcdefg")
        expected_result = (4, 3)
        with io.open(t.name, mode="rb") as fp:
            r = _get_what_to_read_next(fp, previously_read_position=7, chunk_size=3)
            self.assertEqual(expected_result, r)
        os.unlink(t.name)

    def test_with_file_with_single_new_line(self):
        """Test a file with a single new line, for a variety of new lines."""
        for n in new_lines_bytes:
            with tempfile.NamedTemporaryFile(delete=False) as t:
                t.write(n)
            expected_result = (0, len(n))
            chunk_size = len(n) + 1  # chunk size must be greater than len(n)
            with io.open(t.name, mode="rb") as fp:
                r = _get_what_to_read_next(fp, previously_read_position=len(n), chunk_size=chunk_size)
                self.assertEqual(r, expected_result)
            os.unlink(t.name)

    def test_with_file_where_we_need_to_read_more_than_chunk_size(self):
        """When we encounter a character that may be part of a new line, we rewind further."""
        with tempfile.NamedTemporaryFile(delete=False) as t:
            t.write(b"abcd\nfg")
        expected_result = (3, 4)
        with io.open(t.name, mode="rb") as fp:
            r = _get_what_to_read_next(fp, previously_read_position=7, chunk_size=3)
            self.assertEqual(expected_result, r)
        os.unlink(t.name)


class TestGetNextChunk(unittest.TestCase):
    """Class that contains test cases for _get_next_chunk."""

    def test_with_empty_file(self):
        """Test with an empty file."""
        with tempfile.NamedTemporaryFile(delete=False) as t:
            pass
        expected_result = (b"", 0)
        with io.open(t.name, mode="rb") as fp:
            r = _get_next_chunk(fp, previously_read_position=0, chunk_size=3)
            self.assertEqual(r, expected_result)
        os.unlink(t.name)

    def test_with_non_empty_file(self):
        """Test with a non-empty file where we are expected to read the specified chunk size."""
        with tempfile.NamedTemporaryFile(delete=False) as t:
            t.write(b"abcdefg")
        expected_result = (b"efg", 4)
        with io.open(t.name, mode="rb") as fp:
            r = _get_next_chunk(fp, previously_read_position=7, chunk_size=3)
            self.assertEqual(expected_result, r)
        os.unlink(t.name)

    def test_with_non_empty_file_where_we_read_more_than_chunk_size(self):
        r"""Test with a non-empty file where we are expected to read more than the chunk size.

        Note: We read more than the specified chunk size because we go further if we hit "\n".
        """
        with tempfile.NamedTemporaryFile(delete=False) as t:
            t.write(b"abcd\nfg")
        expected_result = (b"d\nfg", 3)
        with io.open(t.name, mode="rb") as fp:
            r = _get_next_chunk(fp, previously_read_position=7, chunk_size=3)
            self.assertEqual(expected_result, r)
        os.unlink(t.name)


class TestBufferWorkSpace(unittest.TestCase):
    """Class that contains test cases for BufferWorkSpace."""

    def test_add_to_empty_buffer_work_space(self):
        """Test reading the last 3 bytes from a 1024 byte file."""
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
            b.add_to_buffer(content=b"aaa", read_position=1021)
            self.assertEqual(b.read_buffer, b"aaa")
            self.assertEqual(b.read_position, 1021)

    def test_add_to_non_empty_buffer_work_space(self):
        """Test adding to a non-empty buffer work space."""
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
            b.add_to_buffer(content=b"aaa", read_position=1021)
            b.add_to_buffer(content=b"bbb", read_position=1018)
            self.assertEqual(b.read_buffer, b"bbbaaa")
            self.assertEqual(b.read_position, 1018)

    def test_yieldable_for_new_initialized_buffer_work_space(self):
        """A newly initialized, empty buffer work space should not be yieldable."""
        with tempfile.NamedTemporaryFile(delete=False) as t:
            with io.open(t.name, mode="rb") as fp:
                b = BufferWorkSpace(fp, chunk_size=io.DEFAULT_BUFFER_SIZE)
                r = b.yieldable()
                self.assertFalse(r)
        os.unlink(t.name)

    def test_yieldable_for_unexhausted_buffer_space_with_single_new_line(self):
        """A buffer work space with a single new line (with read_position > 0) is not yieldable."""
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            for n in new_lines_bytes:
                b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
                b.read_position = 1024 - len(n)
                b.read_buffer = n
                expected_result = False
                r = b.yieldable()
                self.assertEqual(r, expected_result)

    def test_yieldable_for_buffer_space_with_two_new_lines(self):
        """A buffer work space with two new lines is yieldable."""
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            for n in new_lines_bytes:
                b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
                b.read_position = 1024 - (len(n) * 2)
                b.read_buffer = n * 2
                expected_result = True
                r = b.yieldable()
                self.assertEqual(r, expected_result)

    def test_yieldable_for_fully_read_with_unreturned_contents_in_buffer_space(self):
        """A buffer work space that has been fully read in with unreturned contents is yieldable.

        Note: fully read in with unreturned contents is represented by read_position = 0 and
        read_buffer is not None.
        """
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            for n in new_lines_bytes:
                b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
                b.read_position = 0
                b.read_buffer = b""
                expected_result = True
                r = b.yieldable()
                self.assertEqual(r, expected_result)

    def test_yieldable_for_fully_read_and_returned_contents_in_buffer_space(self):
        """A BufferWorkSpace that has been fully read in and returned its contents is not yieldable.

        Note: fully read in and returned is represented by read_position = 0 and read_buffer is None.
        """
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            for n in new_lines_bytes:
                b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
                b.read_position = 0
                b.read_buffer = None
                r = b.yieldable()
                self.assertFalse(r)

    def test_return_line_with_buffer_space_with_two_new_lines(self):
        """With two new lines as its sole contents, the buffer space is expected to return b''."""
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            for n in new_lines_bytes:
                b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
                b.read_position = 1024 - (len(n) * 2)
                b.read_buffer = n * 2
                expected_result = b""
                r = b.return_line()
                self.assertEqual(r, expected_result)

    def test_return_line_with_buffer_space_with_some_contents_between_two_new_lines(self):
        """With a bytestring in between two new lines, expect to get back the bytestring in the middle."""
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            for n in new_lines_bytes:
                b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
                b.read_position = 1024 - (len(n) * 2)
                b.read_buffer = n + b"Something" + n
                expected_result = b"Something"
                r = b.return_line()
                self.assertEqual(r, expected_result)

    def test_return_line_with_buffer_space_with_fully_read_in_contents_at_its_last_line(self):
        """With a fully read-in buffer holding the file's last remaining line, expect to get that line back."""
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            for n in new_lines_bytes:
                b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
                b.read_position = 0
                b.read_buffer = b"LastLineYay"
                expected_result = b"LastLineYay"
                r = b.return_line()
                self.assertEqual(r, expected_result)

    def test_return_line_contract_violation(self):
        """A BufferWorkSpace (of a completely empty file) results in a contract violation for return_line."""
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 0
            b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
            with self.assertRaises(AssertionError):
                b.return_line()

    def test_has_returned_every_line_empty_file(self):
        """With an empty file (a degenerate case), it is expected to have returned everything."""
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 0
            b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
            r = b.has_returned_every_line()
            self.assertTrue(r)

    def test_has_returned_every_line_with_not_fully_read_in_buffer_space(self):
        """A BufferWorkSpace that has not been fully read in has definitely not returned everything.

        Note: not fully read in is represented by read_position != 0.
        """
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
            b.read_position = 1
            r = b.has_returned_every_line()
            self.assertFalse(r)

    def test_has_returned_every_line_with_fully_read_in_and_unprocessed_buffer_space(self):
        """A BufferWorkSpace fully read in with some unprocessed buffer has not returned everything.

        Note: fully read in with some unprocessed buffer is represented by read_position = 0 and
        read_buffer is not None.
        """
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
            b.read_position = 0
            b.read_buffer = b"abc"
            r = b.has_returned_every_line()
            self.assertFalse(r)

    def test_has_returned_every_line_with_fully_read_in_and_processed_buffer_space(self):
        """A BufferWorkSpace that has been fully read in and fully processed has returned everything.

        Note: fully read in and fully processed is represented by read_position = 0 and
        read_buffer is None.
        """
        with patch("file_read_backwards.buffer_work_space._get_file_size") as _get_file_size_mock:
            fp_mock = Mock()
            _get_file_size_mock.return_value = 1024
            b = BufferWorkSpace(fp_mock, chunk_size=io.DEFAULT_BUFFER_SIZE)
            b.read_position = 0
            b.read_buffer = None
            r = b.has_returned_every_line()
            self.assertTrue(r)


file_read_backwards-3.1.0/tests/test_file_read_backwards.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""Tests for `file_read_backwards` module."""

import itertools
import os
import tempfile
import unittest

from collections import deque

from file_read_backwards.file_read_backwards import FileReadBackwards
from file_read_backwards.file_read_backwards import supported_encodings
from file_read_backwards.buffer_work_space import new_lines

# doing this xrange/range dance so that we don't need to add additional dependencies of future or six modules
try:
    xrange
except NameError:
    xrange = range

created_files = set()


def helper_write(t, s, encoding="utf-8"):
    """A helper method to write out string s in the specified encoding."""
    t.write(s.encode(encoding))


def helper_create_temp_file(generator=None, encoding='utf-8'):
    global created_files
    if generator is None:
        generator = ("line {}!\n".format(i) for i in xrange(42))
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    for line in generator:
        helper_write(temp_file, line, encoding)
    temp_file.close()
    print('Wrote file {}'.format(temp_file.name))
    created_files.add(temp_file)
    return temp_file


def helper_destroy_temp_file(temp_file):
    temp_file.close()
    os.unlink(temp_file.name)


def helper_destroy_temp_files():
    global created_files
    while created_files:
        helper_destroy_temp_file(created_files.pop())


def tearDownModule():
    helper_destroy_temp_files()


class TestFileReadBackwards(unittest.TestCase):
    """Class that contains various test cases for actual FileReadBackwards usage."""

    @classmethod
    def setUpClass(cls):
        cls.empty_file = helper_create_temp_file(generator=(_ for _ in []))
        cls.long_file = helper_create_temp_file()

    @classmethod
    def tearDownClass(cls):
        helper_destroy_temp_files()

    def test_with_completely_empty_file(self):
        """Test with a completely empty file."""
        f = FileReadBackwards(self.empty_file.name)
        expected_lines = deque()
        lines_read = deque()
        for line in f:
            lines_read.appendleft(line)
        self.assertEqual(expected_lines, lines_read)

    def test_file_with_a_single_new_line_char_with_different_encodings(self):
        """Test a file with a single new line character."""
        for encoding, new_line in itertools.product(supported_encodings, new_lines):
            temp_file = helper_create_temp_file((line for line in [new_line]), encoding=encoding)
            f = FileReadBackwards(temp_file.name)
            expected_lines = deque([""])
            lines_read = deque()
            for line in f:
                lines_read.appendleft(line)
            self.assertEqual(
                expected_lines,
                lines_read,
                msg="Test with {0} encoding with {1!r} as newline".format(encoding, new_line))

    def test_file_with_one_line_of_text_with_accented_char_followed_by_a_new_line(self):
        """Test a file with a single line of text with an accented char followed by a new line."""
        b = b'Caf\xc3\xa9'  # accented e in utf-8
        s = b.decode("utf-8")
        for new_line in new_lines:
            temp_file = helper_create_temp_file((line for line in [s, new_line]))
            f = FileReadBackwards(temp_file.name)
            expected_lines = deque([s])
            lines_read = deque()
            for line in f:
                lines_read.appendleft(line)
            self.assertEqual(expected_lines, lines_read, msg="Test with {0!r} as newline".format(new_line))

    def test_file_with_one_line_of_text_followed_by_a_new_line_with_different_encodings(self):
        """Test a file with just one line of text followed by a new line."""
        for encoding, new_line in itertools.product(supported_encodings, new_lines):
            temp_file = helper_create_temp_file(
                (line for line in ["something{0}".format(new_line)]), encoding=encoding)
            f = FileReadBackwards(temp_file.name)
            expected_lines = deque(["something"])
            lines_read = deque()
            for line in f:
                lines_read.appendleft(line)
            self.assertEqual(
                expected_lines,
                lines_read,
                msg="Test with {0} encoding with {1!r} as newline".format(encoding, new_line))

    def test_file_with_varying_number_of_new_lines_and_some_text_in_chunk_size(self):
        """Test a file with a varying number of new lines and text of size custom chunk_size."""
        chunk_size = 3
        s = "t"
        for number_of_new_lines in xrange(21):
            for new_line in new_lines:  # test with a variety of new lines
                temp_file = helper_create_temp_file(
                    (line for line in [new_line * number_of_new_lines, s * chunk_size]))
                f = FileReadBackwards(temp_file.name, chunk_size=chunk_size)
                expected_lines = deque()
                for _ in xrange(number_of_new_lines):
                    expected_lines.append("")
                expected_lines.append(s * chunk_size)
                lines_read = deque()
                for line in f:
                    lines_read.appendleft(line)
                self.assertEqual(
                    expected_lines,
                    lines_read,
                    msg="Test with {0} of new line {1!r} followed by {2} of {3!r}".format(
                        number_of_new_lines, new_line, chunk_size, s))

    def test_file_with_new_lines_and_some_accented_characters_in_chunk_size(self):
        """Test a file with many new lines and accented characters of size custom chunk_size."""
        chunk_size = 3
        b = b'\xc3\xa9'
        s = b.decode("utf-8")
        for number_of_new_lines in xrange(21):
            for new_line in new_lines:  # test with a variety of new lines
                temp_file = helper_create_temp_file(
                    (line for line in [new_line * number_of_new_lines, s * chunk_size]))
                f = FileReadBackwards(temp_file.name, chunk_size=chunk_size)
                expected_lines = deque()
                for _ in xrange(number_of_new_lines):
                    expected_lines.append("")
                expected_lines.append(s * chunk_size)
                lines_read = deque()
                for line in f:
                    lines_read.appendleft(line)
                self.assertEqual(
                    expected_lines,
                    lines_read,
                    msg="Test with {0} of new line {1!r} followed by {2} of \\xc3\\xa9".format(
                        number_of_new_lines, new_line, chunk_size))

    def test_unsupported_encoding(self):
        """When users pass in an unsupported encoding, NotImplementedError should be raised."""
        with self.assertRaises(NotImplementedError):
            _ = FileReadBackwards(self.empty_file.name, encoding="not-supported-encoding")  # noqa: F841

    def test_file_with_one_line_of_text_readline(self):
        """Test a file with a single line of text followed by a new line."""
        s = "Line0"
        for new_line in new_lines:
            temp_file = helper_create_temp_file((line for line in [s, new_line]))
            with FileReadBackwards(temp_file.name) as fp:
                line = fp.readline()
                expected_line = s + os.linesep
                self.assertEqual(line, expected_line)

                # the file contains only 1 line
                second_line = fp.readline()
                expected_second_line = ""
                self.assertEqual(second_line, expected_second_line)

    def test_file_with_two_lines_of_text_readline(self):
        """Test a file with two lines of text, each followed by a new line."""
        line0 = "Line0"
        line1 = "Line1"
        for new_line in new_lines:
            line0_with_n = "{}{}".format(line0, new_line)
            line1_with_n = "{}{}".format(line1, new_line)
            temp_file = helper_create_temp_file((line for line in [line0_with_n, line1_with_n]))
            with FileReadBackwards(temp_file.name) as fp:
                line = fp.readline()
                expected_line = line1 + os.linesep
                self.assertEqual(line, expected_line)

                second_line = fp.readline()
                expected_second_line = line0 + os.linesep
                self.assertEqual(second_line, expected_second_line)

                # EOF
                third_line = fp.readline()
                expected_third_line = ""
                self.assertEqual(third_line, expected_third_line)


class TestFileReadBackwardsAsContextManager(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.temp_file = helper_create_temp_file()

    @classmethod
    def tearDownClass(cls):
        helper_destroy_temp_files()

    def test_behaves_as_classic(self):
        with FileReadBackwards(self.temp_file.name) as f:
            lines_read = deque()
            for line in f:
                lines_read.appendleft(line)
        f2 = FileReadBackwards(self.temp_file.name)
        lines_read2 = deque()
        for l2 in f2:
            lines_read2.appendleft(l2)
        self.assertEqual(
            lines_read,
            lines_read2,
            msg="The context manager way should behave exactly the same as without using one.")


class TestFileReadBackwardsCloseFunctionality(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.temp_file = helper_create_temp_file()

    @classmethod
    def tearDownClass(cls):
        helper_destroy_temp_files()

    def test_close_on_iterator(self):
        with FileReadBackwards(self.temp_file.name) as f:
            it = iter(f)
            for count, i in enumerate(it):
                if count == 2:
                    break
            self.assertFalse(it.closed, msg="The fp should not be closed when not exhausted")
            it.close()
            self.assertTrue(it.closed, msg="Calling close() on the iterator should close it")

    def test_not_creating_new_iterator(self):
        with FileReadBackwards(self.temp_file.name) as f:
            it1 = iter(f)
            it2 = iter(f)
            self.assertTrue(it1 is it2, msg="FileReadBackwards will return the same iterator")

    def test_close_on_iterator_exhausted(self):
        with FileReadBackwards(self.temp_file.name) as f:
            it = iter(f)
            for _ in it:
                pass
            self.assertTrue(
                it.closed,
                msg="The fp should be closed automatically when the iterator is exhausted.")

    def test_close_on_reader_exit(self):
        with FileReadBackwards(self.temp_file.name) as f:
            it = iter(f)
        self.assertTrue(
            it.closed,
            msg="Iterator created by a reader should have its fp closed when the reader gets closed.")

    def test_close_on_reader_explicitly(self):
        f = FileReadBackwards(self.temp_file.name)
        it = iter(f)
        self.assertFalse(it.closed, msg="Iterator should not have its fp closed at this point.")
        f.close()
        self.assertTrue(
            it.closed,
            msg="Iterator created by a reader should have its fp closed when the reader closes it.")

    def test_close_on_reader_with_already_closed_iterator(self):
        with FileReadBackwards(self.temp_file.name) as f:
            it = iter(f)
            it.close()
        self.assertTrue(
            it.closed,
            msg="It should be okay to close (through the reader) an already closed iterator.")

    def test_cannot_iterate_when_closed(self):
        with FileReadBackwards(self.temp_file.name) as f:
            it = iter(f)
            it.close()
            for _ in it:
                self.fail(msg="An iterator should be exhausted when closed.")