././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1704559216.343883 ftputil-5.1.0/0000755000175000017500000000000014546301160012240 5ustar00schwaschwa././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1686925980.0 ftputil-5.1.0/LICENSE0000664000175000017500000002614314443071234013257 0ustar00schwaschwaLICENSE ======= All the software in the ftputil distribution is covered by the 3-clause BSD license. While ftputil has mainly been written by Stefan Schwarzer , others have also contributed suggestions and code. In particular, Evan Prodromou has contributed his lrucache module which is covered by the 3-clause BSD license and additionally the Academic Free License 2.1. This means you can use the module `lrucache.py` under either of the two licenses. 3-clause BSD license: ---------------------------------------------------------------------- Copyright (C) 2002-2022, Stefan Schwarzer and contributors (see `doc/contributors.txt`) All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. - Neither the name of the above author nor the names of the contributors to the software may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ---------------------------------------------------------------------- The Academic Free License: ---------------------------------------------------------------------- The Academic Free License v. 2.1 This Academic Free License (the "License") applies to any original work of authorship (the "Original Work") whose owner (the "Licensor") has placed the following notice immediately following the copyright notice for the Original Work: Licensed under the Academic Free License version 2.1 1) Grant of Copyright License. Licensor hereby grants You a world-wide, royalty-free, non-exclusive, perpetual, sublicenseable license to do the following: a) to reproduce the Original Work in copies; b) to prepare derivative works ("Derivative Works") based upon the Original Work; c) to distribute copies of the Original Work and Derivative Works to the public; d) to perform the Original Work publicly; and e) to display the Original Work publicly. 2) Grant of Patent License. 
Licensor hereby grants You a world-wide, royalty-free, non-exclusive, perpetual, sublicenseable license, under patent claims owned or controlled by the Licensor that are embodied in the Original Work as furnished by the Licensor, to make, use, sell and offer for sale the Original Work and Derivative Works. 3) Grant of Source Code License. The term "Source Code" means the preferred form of the Original Work for making modifications to it and all available documentation describing how to modify the Original Work. Licensor hereby agrees to provide a machine-readable copy of the Source Code of the Original Work along with each copy of the Original Work that Licensor distributes. Licensor reserves the right to satisfy this obligation by placing a machine-readable copy of the Source Code in an information repository reasonably calculated to permit inexpensive and convenient access by You for as long as Licensor continues to distribute the Original Work, and by publishing the address of that information repository in a notice immediately following the copyright notice that applies to the Original Work. 4) Exclusions From License Grant. Neither the names of Licensor, nor the names of any contributors to the Original Work, nor any of their trademarks or service marks, may be used to endorse or promote products derived from this Original Work without express prior written permission of the Licensor. Nothing in this License shall be deemed to grant any rights to trademarks, copyrights, patents, trade secrets or any other intellectual property of Licensor except as expressly stated herein. No patent license is granted to make, use, sell or offer to sell embodiments of any patent claims other than the licensed claims defined in Section 2. No right is granted to the trademarks of Licensor even if such marks are included in the Original Work. Nothing in this License shall be interpreted to prohibit Licensor from licensing under different terms from this License any Original Work that Licensor otherwise would have a right to license. 5) This section intentionally omitted. 6) Attribution Rights. You must retain, in the Source Code of any Derivative Works that You create, all copyright, patent or trademark notices from the Source Code of the Original Work, as well as any notices of licensing and any descriptive text identified therein as an "Attribution Notice." You must cause the Source Code for any Derivative Works that You create to carry a prominent Attribution Notice reasonably calculated to inform recipients that You have modified the Original Work. 7) Warranty of Provenance and Disclaimer of Warranty. Licensor warrants that the copyright in and to the Original Work and the patent rights granted herein by Licensor are owned by the Licensor or are sublicensed to You under the terms of this License with the permission of the contributor(s) of those copyrights and patent rights. Except as expressly stated in the immediately proceeding sentence, the Original Work is provided under this License on an "AS IS" BASIS and WITHOUT WARRANTY, either express or implied, including, without limitation, the warranties of NON-INFRINGEMENT, MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY OF THE ORIGINAL WORK IS WITH YOU. This DISCLAIMER OF WARRANTY constitutes an essential part of this License. No license to Original Work is granted hereunder except under this disclaimer. 8) Limitation of Liability. 
Under no circumstances and under no legal theory, whether in tort (including negligence), contract, or otherwise, shall the Licensor be liable to any person for any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or the use of the Original Work including, without limitation, damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses. This limitation of liability shall not apply to liability for death or personal injury resulting from Licensor's negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You. 9) Acceptance and Termination. If You distribute copies of the Original Work or a Derivative Work, You must make a reasonable effort under the circumstances to obtain the express assent of recipients to the terms of this License. Nothing else but this License (or another written agreement between Licensor and You) grants You permission to create Derivative Works based upon the Original Work or to exercise any of the rights granted in Section 1 herein, and any attempt to do so except under the terms of this License (or another written agreement between Licensor and You) is expressly prohibited by U.S. copyright law, the equivalent laws of other countries, and by international treaty. Therefore, by exercising any of the rights granted to You in Section 1 herein, You indicate Your acceptance of this License and all of its terms and conditions. 10) Termination for Patent Action. This License shall terminate automatically and You may no longer exercise any of the rights granted to You by this License as of the date You commence an action, including a cross-claim or counterclaim, against Licensor or any licensee alleging that the Original Work infringes a patent. This termination provision shall not apply for an action alleging patent infringement by combinations of the Original Work with other software or hardware. 11) Jurisdiction, Venue and Governing Law. Any action or suit relating to this License may be brought only in the courts of a jurisdiction wherein the Licensor resides or in which Licensor conducts its primary business, and under the laws of that jurisdiction excluding its conflict-of-law provisions. The application of the United Nations Convention on Contracts for the International Sale of Goods is expressly excluded. Any use of the Original Work outside the scope of this License or after its termination shall be subject to the requirements and penalties of the U.S. Copyright Act, 17 U.S.C. ยง 101 et seq., the equivalent laws of other countries, and international treaty. This section shall survive the termination of this License. 12) Attorneys Fees. In any action to enforce the terms of this License or seeking damages relating thereto, the prevailing party shall be entitled to recover its costs and expenses, including, without limitation, reasonable attorneys' fees and costs incurred in connection with such action, including any appeal of such action. This section shall survive the termination of this License. 13) Miscellaneous. This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. 
14) Definition of "You" in This License. "You" throughout this License, whether in upper or lower case, means an individual or a legal entity exercising rights under, and complying with all of the terms of, this License. For legal entities, "You" includes any entity that controls, is controlled by, or is under common control with you. For purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

15) Right to Use. You may use the Original Work in all ways not otherwise restricted or conditioned by this License or by law, and Licensor promises not to interfere with or be responsible for such uses by You.

This license is Copyright (C) 2003-2004 Lawrence E. Rosen. All rights reserved. Permission is hereby granted to copy and distribute this license without modification. This license may not be modified without the express written permission of its copyright owner.

----------------------------------------------------------------------

ftputil-5.1.0/MANIFEST.in

include LICENSE
include VERSION
include ftputil/*.py
include test/*.py

ftputil-5.1.0/PKG-INFO

Metadata-Version: 2.1
Name: ftputil
Version: 5.1.0
Summary: High-level FTP client library (virtual file system and more)
Home-page: https://ftputil.sschwarzer.net/
Download-URL: http://ftputil.sschwarzer.net/trac/attachment/wiki/Download/ftputil-5.1.0.tar.gz?format=raw
Author: Stefan Schwarzer
Author-email: sschwarzer@sschwarzer.net
License: Open source (revised BSD license)
Keywords: FTP,client,library,virtual file system
Platform: Pure Python
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Other Environment
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Topic :: Internet :: File Transfer Protocol (FTP)
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Filesystems
Requires-Python: >=3.6
License-File: LICENSE

ftputil is a high-level FTP client library for the Python programming language. ftputil implements a virtual file system for accessing FTP servers, that is, it can generate file-like objects for remote files. The library supports many functions similar to those in the os, os.path and shutil modules. ftputil has convenience functions for conditional uploads and downloads, and handles FTP clients and servers in different timezones.

ftputil-5.1.0/README.md

# ftputil

## Purpose

ftputil is a high-level FTP client library for the Python programming language. ftputil implements a virtual file system for accessing FTP servers, that is, it can generate file-like objects for remote files. The library supports many functions similar to those in the os, os.path and shutil modules. ftputil has convenience functions for conditional uploads and downloads, and handles FTP clients and servers in different timezones.

## Documentation

The ftputil documentation is [here](https://ftputil.sschwarzer.net/documentation).

## Prerequisites

To use ftputil, you need Python, at least version 3.6.

## Installation

You can install ftputil with pip:

    pip install ftputil

Unless you're installing ftputil in a virtual environment, you'll probably need root/administrator privileges.

To update the library, run

    pip install -U ftputil

That said, you can use another Python package manager if you like. Adapt the commands accordingly.

Note that ftputil versions with a different major version number won't be fully backward-compatible with the previous version. Examples are the changes from 2.8 to 3.0 and from 3.4 to 4.0.0.

## License

ftputil is open source software. It is distributed under the [new/modified/revised BSD license](https://opensource.org/licenses/BSD-3-Clause).

## Authors

Stefan Schwarzer

Evan Prodromou (lrucache module)

(See also the file `doc/contributors.txt`.)

Please provide feedback! It's certainly appreciated. :-)

ftputil-5.1.0/VERSION

5.1.0

ftputil-5.1.0/ftputil/__init__.py

# Copyright (C) 2002-2018, Stefan Schwarzer
# and ftputil contributors (see `doc/contributors.txt`)
# See the file LICENSE for licensing terms.

"""
ftputil - high-level FTP client library

FTPHost objects

This class resembles the `os` module's interface to ordinary file systems. In
addition, it provides a method `file` which will return file-objects
corresponding to remote files.

# Example session
with ftputil.FTPHost("ftp.domain.com", "me", "secret") as host:
    print(host.getcwd())  # e. g. "/home/me"
    host.mkdir("newdir")
    host.chdir("newdir")
    with host.open("sourcefile", "r") as source:
        with host.open("targetfile", "w") as target:
            host.copyfileobj(source, target)
    host.remove("targetfile")
    host.chdir(host.pardir)
    host.rmdir("newdir")

There are also shortcuts for uploads and downloads:

host.upload(local_file, remote_file)
host.download(remote_file, local_file)

Both accept an additional mode parameter. If it is "b", the transfer mode
will be for binary files.

For even more functionality refer to the documentation in `ftputil.txt` or
`ftputil.html`.

FTPFile objects

`FTPFile` objects are constructed via the `file` method (`open` is an alias)
of `FTPHost` objects. `FTPFile` objects support the usual file operations for
non-seekable files (`read`, `readline`, `readlines`, `write`, `writelines`,
`close`).

Note: ftputil currently is not threadsafe. More specifically, you can use
different `FTPHost` objects in different threads but not a single `FTPHost`
object in different threads.
"""

from ftputil.host import FTPHost
from ftputil.version import __version__

# Apart from `ftputil.error` and `ftputil.stat`, this is the whole
# public API of `ftputil`.
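# A minimal error-handling sketch using this public API (illustrative only;
# the host name and file paths below are assumptions, and `ftputil.error`
# has to be imported explicitly because it isn't re-exported here):
#
#     import ftputil
#     import ftputil.error
#
#     try:
#         with ftputil.FTPHost("ftp.example.com", "user", "password") as host:
#             host.download("remote.txt", "local.txt")
#     except ftputil.error.FTPOSError as exc:
#         print("FTP error:", exc)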
__all__ = ["FTPHost", "__version__"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/ftputil/error.py0000664000175000017500000001314314170605001015430 0ustar00schwaschwa# Copyright (C) 2003-2021, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ ftputil.error - exception classes and wrappers """ # pylint: disable=too-many-ancestors import ftplib import ftputil.path_encoding import ftputil.tool import ftputil.version # You _can_ import these with `from ftputil.error import *`, - but it's _not_ # recommended. __all__ = [ "CommandNotImplementedError", "FTPIOError", "FTPOSError", "InaccessibleLoginDirError", "InternalError", "KeepAliveError", "NoEncodingError", "ParserError", "PermanentError", "RootDirError", "SyncError", "TemporaryError", "TimeShiftError", ] class FTPError(Exception): """ General ftputil error class. """ def __init__(self, *args, original_error=None): super().__init__(*args) # `strerror` self.strerror = "" if original_error is not None: try: self.strerror = str(original_error) except Exception: # Consume all errors. If the `str` call fails, it's more # appropriate to ignore `original_error` than to raise an # exception while instantiating `FTPError`. pass elif args: # Assume the first argument is a string. It may be a byte string # though. try: self.strerror = ftputil.tool.as_str( args[0], ftputil.path_encoding.DEFAULT_ENCODING ) except TypeError: # `args[0]` isn't `str` or `bytes`. pass # `errno` self.errno = None try: self.errno = int(self.strerror[:3]) except ValueError: # `int()` argument couldn't be converted to an integer. pass # `file_name` self.file_name = None def __str__(self): return "{}\nDebugging info: {}".format( self.strerror, ftputil.version.version_info ) # Internal errors are those that have more to do with the inner workings of # ftputil than with errors on the server side. class InternalError(FTPError): """Internal error.""" pass class RootDirError(InternalError): """Raised for generic stat calls on the remote root directory.""" pass class InaccessibleLoginDirError(InternalError): """May be raised if the login directory isn't accessible.""" pass class TimeShiftError(InternalError): """Raised for invalid time shift values.""" pass class ParserError(InternalError): """Raised if a line of a remote directory can't be parsed.""" pass class CacheMissError(InternalError): """Raised if a path isn't found in the cache.""" pass class NoEncodingError(InternalError): """Raised if session instances don't specify an encoding.""" pass # Currently not used class KeepAliveError(InternalError): """Raised if the keep-alive feature failed.""" pass class FTPOSError(FTPError, OSError): """Generic FTP error related to `OSError`.""" pass class TemporaryError(FTPOSError): """Raised for temporary FTP errors (4xx).""" pass class PermanentError(FTPOSError): """Raised for permanent FTP errors (5xx).""" pass class CommandNotImplementedError(PermanentError): """Raised if the server doesn't implement a certain feature (502).""" pass class RecursiveLinksError(PermanentError): """Raised if an infinite link structure is detected.""" pass # Currently not used class SyncError(PermanentError): """Raised for problems specific to syncing directories.""" pass class FtplibErrorToFTPOSError: """ Context manager to convert `ftplib` exceptions to exceptions derived from `FTPOSError`. 
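    A typical use, sketched here for illustration (`session` stands for an
    `ftplib.FTP`-compatible object; the module-level instance
    `ftplib_error_to_ftp_os_error` defined below is what ftputil itself uses):

        with ftputil.error.ftplib_error_to_ftp_os_error:
            session.pwd()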
""" def __enter__(self): pass def __exit__(self, exc_type, exc_value, traceback): if exc_type is None: # No exception return if isinstance(exc_value, ftplib.error_temp): raise TemporaryError( *exc_value.args, original_error=exc_value ) from exc_value elif isinstance(exc_value, ftplib.error_perm): # If `exc_value.args[0]` is present, assume it's a byte or unicode # string. if exc_value.args and ftputil.tool.as_str( exc_value.args[0], ftputil.path_encoding.DEFAULT_ENCODING ).startswith("502"): raise CommandNotImplementedError( *exc_value.args, original_error=exc_value ) from exc_value else: raise PermanentError( *exc_value.args, original_error=exc_value ) from exc_value elif isinstance(exc_value, ftplib.all_errors): raise FTPOSError(*exc_value.args, original_error=exc_value) from exc_value else: raise ftplib_error_to_ftp_os_error = FtplibErrorToFTPOSError() class FTPIOError(FTPError, IOError): """Generic FTP error related to `IOError`.""" pass class FtplibErrorToFTPIOError: """ Context manager to convert `ftplib` exceptions to `FTPIOError` exceptions. """ def __enter__(self): pass def __exit__(self, exc_type, exc_value, traceback): if exc_type is None: # No exception return if isinstance(exc_value, ftplib.all_errors): raise FTPIOError(*exc_value.args, original_error=exc_value) from exc_value else: raise ftplib_error_to_ftp_io_error = FtplibErrorToFTPIOError() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642884893.0 ftputil-5.1.0/ftputil/file.py0000664000175000017500000001542614173067435015244 0ustar00schwaschwa# Copyright (C) 2003-2022, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ ftputil.file - support for file-like objects on FTP servers """ import ftputil.error # This module shouldn't be used by clients of the ftputil library. __all__ = [] try: import ssl except ImportError: SSLSocket = None else: SSLSocket = ssl.SSLSocket class FTPFile: """ Represents a file-like object associated with an FTP host. File and socket are closed appropriately if the `close` method is called. """ # Set timeout in seconds when closing file connections (see ticket #51). _close_timeout = 5 def __init__(self, host): """Construct the file(-like) object.""" self._host = host # pylint: disable=protected-access self._session = host._session # The file is still closed. self.closed = True self._conn = None self._fobj = None def _open( self, path, mode, buffering=None, encoding=None, errors=None, newline=None, *, rest=None, ): """ Open the remote file with given path name and mode. Contrary to the `open` builtin, this method returns `None`, instead this file object is modified in-place. """ # We use the same arguments as in `open`. # pylint: disable=unused-argument # pylint: disable=too-many-arguments # # Check mode. if mode is None: # This is Python's behavior for local files. raise TypeError("open() argument 2 must be str, not None") if "a" in mode: raise ftputil.error.FTPIOError("append mode not supported") if mode not in ("r", "rb", "rt", "w", "wb", "wt"): raise ftputil.error.FTPIOError("invalid mode '{}'".format(mode)) if "b" in mode and "t" in mode: # Raise a `ValueError` like Python would. raise ValueError("can't have text and binary mode at once") # Convenience variables is_binary_mode = "b" in mode is_read_mode = "r" in mode # `rest` is only allowed for binary mode. 
if (not is_binary_mode) and (rest is not None): raise ftputil.error.CommandNotImplementedError( "`rest` argument can't be used for text files" ) # Always use binary mode and leave any conversions to Python, # controlled by the arguments to `makefile` below. transfer_type = "I" command = "TYPE {}".format(transfer_type) with ftputil.error.ftplib_error_to_ftp_io_error: self._session.voidcmd(command) # Make transfer command. command_type = "RETR" if is_read_mode else "STOR" command = "{} {}".format(command_type, path) # Get connection and file object. with ftputil.error.ftplib_error_to_ftp_io_error: self._conn = self._session.transfercmd(command, rest) self._fobj = self._conn.makefile( mode, buffering=buffering, encoding=encoding, errors=errors, newline=newline ) # This comes last so that `close` won't try to close `FTPFile` objects # without `_conn` and `_fobj` attributes in case of an error. self.closed = False def __iter__(self): """ Return a file iterator. """ return self def __next__(self): """ Return the next line or raise `StopIteration`, if there are no more. """ # Apply implicit line ending conversion for text files. line = self.readline() if line: return line else: raise StopIteration # # Context manager methods # def __enter__(self): # Return `self`, so it can be accessed as the variable component of the # `with` statement. return self def __exit__(self, exc_type, exc_val, exc_tb): # We don't need the `exc_*` arguments here # pylint: disable=unused-argument self.close() # Be explicit return False # # Other attributes # def __getattr__(self, attr_name): """ Handle requests for attributes unknown to `FTPFile` objects: delegate the requests to the contained file object. """ if attr_name in ( "encoding flush isatty fileno read readline readlines seek tell " "truncate name softspace write writelines".split() ): return getattr(self._fobj, attr_name) raise AttributeError("'FTPFile' object has no attribute '{}'".format(attr_name)) # TODO: Implement `__dir__`? (See # http://docs.python.org/whatsnew/2.6.html#other-language-changes ) def close(self): """ Close the `FTPFile`. """ if self.closed: return # Timeout value to restore, see below. # Statement works only before the try/finally statement, otherwise # Python raises an `UnboundLocalError`. old_timeout = self._session.sock.gettimeout() try: self._fobj.close() self._fobj = None with ftputil.error.ftplib_error_to_ftp_io_error: if (SSLSocket is not None) and isinstance(self._conn, SSLSocket): self._conn.unwrap() self._conn.close() # Set a timeout to prevent waiting until server timeout if we have # a server blocking here like in ticket #51. self._session.sock.settimeout(self._close_timeout) try: with ftputil.error.ftplib_error_to_ftp_io_error: self._session.voidresp() except ftputil.error.FTPIOError as exc: # Ignore some errors, see tickets #51 and #17 at # http://ftputil.sschwarzer.net/trac/ticket/51 and # http://ftputil.sschwarzer.net/trac/ticket/17, respectively. exc = str(exc) error_code = exc[:3] if exc.splitlines()[0] != "timed out" and error_code not in ( "150", "426", "450", "451", ): raise finally: # Restore timeout for socket of `FTPFile`'s `ftplib.FTP` object in # case the connection is reused later. self._session.sock.settimeout(old_timeout) # If something went wrong before, the file is probably defunct and # subsequent calls to `close` won't help either, so we consider the # file closed for practical purposes. 
self.closed = True def __getstate__(self): raise TypeError("cannot serialize FTPFile object") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642880791.0 ftputil-5.1.0/ftputil/file_transfer.py0000664000175000017500000001450514173057427017145 0ustar00schwaschwa# Copyright (C) 2013-2020, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ file_transfer.py - upload, download and generic file copy """ import os import ftputil.stat # TODO: Think a bit more about the API before making it public. # Only `chunks` should be used by clients of the ftputil library. Any other # functionality is supposed to be used via `FTPHost` objects. # __all__ = ["chunks"] __all__ = [] # Maximum size of chunk in `FTPHost.copyfileobj` in bytes. MAX_COPY_CHUNK_SIZE = 64 * 1024 class LocalFile: """ Represent a file on the local side which is to be transferred or is already transferred. """ def __init__(self, name, mode): self.name = os.path.abspath(name) self.mode = mode def exists(self): """ Return `True` if the path representing this file exists. Otherwise return `False`. """ return os.path.exists(self.name) def mtime(self): """ Return the timestamp for the last modification in seconds. """ return os.path.getmtime(self.name) def mtime_precision(self): """ Return the precision of the last modification time in seconds. """ # Derived classes might want to use `self`. # pylint: disable=no-self-use # # Assume modification timestamps for local file systems are at least # precise up to a second. return 1.0 def fobj(self): """ Return a file object for the name/path in the constructor. """ return open(self.name, self.mode) class RemoteFile: """ Represent a file on the remote side which is to be transferred or is already transferred. """ def __init__(self, ftp_host, name, mode): self._host = ftp_host self._path = ftp_host.path self.name = self._path.abspath(name) self.mode = mode def exists(self): """ Return `True` if the path representing this file exists. Otherwise return `False`. """ return self._path.exists(self.name) def mtime(self): """ Return the timestamp for the last modification in seconds. """ # Convert to client time zone (see definition of time shift in # docstring of `FTPHost.set_time_shift`). return self._path.getmtime(self.name) def mtime_precision(self): """ Return the precision of the last modification time in seconds. """ # I think using `stat` instead of `lstat` makes more sense here. return self._host.stat(self.name)._st_mtime_precision def fobj(self): """ Return a file object for the name/path in the constructor. """ return self._host.open(self.name, self.mode) def source_is_newer_than_target(source_file, target_file): """ Return `True` if the source is newer than the target, else `False`. Both arguments are `LocalFile` or `RemoteFile` objects. It's assumed that the actual modification time is reported_mtime <= actual_mtime <= reported_mtime + mtime_precision i. e. that the reported mtime is the actual mtime or rounded down (truncated). For the purpose of this test the source is newer than the target if any of the possible actual source modification times is greater than the reported target modification time. In other words: If in doubt, the file should be transferred. 
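    A worked example with assumed numbers (not from the original docstring):
    a source with reported mtime 1000.0 s and precision 1.0 s counts as newer
    than a target with reported mtime 1000.5 s, because 1000.0 + 1.0 >=
    1000.5, so the file would be transferred.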
This is the only situation where the source is _not_ considered newer than the target: |/////////////////////| possible source mtime |////////| possible target mtime That is, the latest possible actual source modification time is before the first possible actual target modification time. """ if source_file.mtime_precision() is ftputil.stat.UNKNOWN_PRECISION: return True else: return ( source_file.mtime() + source_file.mtime_precision() >= target_file.mtime() ) def chunks(fobj, max_chunk_size=MAX_COPY_CHUNK_SIZE): """ Return an iterator which yields the contents of the file object. For each iteration, at most `max_chunk_size` bytes are read from `fobj` and yielded as a byte string. If the file object is exhausted, then don't yield any more data but stop the iteration, so the client does _not_ get an empty byte string. Any exceptions resulting from reading the file object are passed through to the client. """ while True: chunk = fobj.read(max_chunk_size) if not chunk: break yield chunk def copyfileobj( source_fobj, target_fobj, max_chunk_size=MAX_COPY_CHUNK_SIZE, callback=None ): """ Copy data from file-like object source to file-like object target. """ # Inspired by `shutil.copyfileobj` (I don't use the `shutil` code directly # because it might change) for chunk in chunks(source_fobj, max_chunk_size): target_fobj.write(chunk) if callback is not None: callback(chunk) def copy_file(source_file, target_file, conditional, callback): """ Copy a file from `source_file` to `target_file`. These are `LocalFile` or `RemoteFile` objects. Which of them is a local or a remote file, respectively, is determined by the arguments. If `conditional` is true, the file is only copied if the target doesn't exist or is older than the source. If `conditional` is false, the file is copied unconditionally. Return `True` if the file was copied, else `False`. """ if conditional: # Evaluate condition: The target file either doesn't exist or is older # than the source file. If in doubt (due to imprecise timestamps), # perform the transfer. transfer_condition = not target_file.exists() or source_is_newer_than_target( source_file, target_file ) if not transfer_condition: # We didn't transfer. return False source_fobj = source_file.fobj() try: target_fobj = target_file.fobj() try: copyfileobj(source_fobj, target_fobj, callback=callback) finally: target_fobj.close() finally: source_fobj.close() # Transfer accomplished return True ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704489962.0 ftputil-5.1.0/ftputil/host.py0000644000175000017500000013202214546071752015271 0ustar00schwaschwa# Copyright (C) 2002-2022, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ `FTPHost` is the central class of the `ftputil` library. See `__init__.py` for an example. """ import datetime import errno import ftplib import stat import sys import time import ftputil.error import ftputil.file import ftputil.file_transfer import ftputil.path import ftputil.path_encoding import ftputil.session import ftputil.stat import ftputil.tool __all__ = ["FTPHost"] # The "protected" attributes PyLint talks about aren't intended for clients of # the library. `FTPHost` objects need to use some of these library-internal # attributes though. # pylint: disable=protected-access # For Python versions 3.8 and below, ftputil has implicitly defaulted to # latin-1 encoding. 
Prefer that behavior for Python 3.9 and up as well instead # of using the encoding that is the default for `ftplib.FTP` in the Python # version. if ftputil.path_encoding.RUNNING_UNDER_PY39_AND_UP: class default_session_factory(ftplib.FTP): def __init__(self, *args, **kwargs): # Python 3.9 defines `encoding` as a keyword-only argument, so test # only for the `encoding` argument in `kwargs`. # # Only use the ftputil default encoding (latin-1) if the # caller didn't pass an encoding. if "encoding" not in kwargs: encoding = ftputil.path_encoding.DEFAULT_ENCODING kwargs["encoding"] = encoding super().__init__(*args, **kwargs) else: super().__init__(*args, **kwargs) # Handle UTF-8 encoding if it was specified. ftputil.session._maybe_send_opts_utf8_on(self, kwargs["encoding"]) else: # No need to handle UTF-8 encoding here since `ftplib.FTP` under # Python 3.8 and lower uses latin-1 encoding. default_session_factory = ftplib.FTP ##################################################################### # `FTPHost` class with several methods similar to those of `os` class FTPHost: """ FTP host class. """ # Implementation notes: # # Upon every request of a file (`FTPFile` object) a new FTP session is # created (or reused from a cache), leading to a child session of the # `FTPHost` object from which the file is requested. # # This is needed because opening an `FTPFile` will make the local session # object wait for the completion of the transfer. In fact, code like this # would block indefinitely, if the `RETR` request would be made on the # `_session` of the object host: # # host = FTPHost(ftp_server, user, password) # f = host.open("index.html") # host.getcwd() # would block! # # On the other hand, the initially constructed host object will store # references to already established `FTPFile` objects and reuse an # associated connection if its associated `FTPFile` has been closed. def __init__(self, *args, **kwargs): """ Abstract initialization of `FTPHost` object. """ # Store arguments for later operations. self._args = args self._kwargs = kwargs # XXX: Maybe put the following in a `reset` method. # The time shift setting shouldn't be reset though. Make a session # according to these arguments. self._session = self._make_session() # Simulate `os.path`. self.path = ftputil.path._Path(self) # lstat, stat, listdir services. self._stat = ftputil.stat._Stat(self) self.stat_cache = self._stat._lstat_cache self.stat_cache.enable() with ftputil.error.ftplib_error_to_ftp_os_error: current_dir = self._session.pwd() self._cached_current_dir = self.path.normpath( ftputil.tool.as_str_path(current_dir, encoding=self._encoding) ) # Associated `FTPHost` objects for data transfer. self._children = [] # This is only set to something else than `None` if this instance # represents an `FTPFile`. self._file = None # Now opened. self.closed = False # Set curdir, pardir etc. for the remote host. RFC 959 states that this # is, strictly speaking, dependent on the server OS but it seems to # work at least with Unix and Windows servers. self.curdir, self.pardir, self.sep = ".", "..", "/" # Set default time shift (used in `upload_if_newer` and # `download_if_newer`). self._time_shift = 0.0 # Don't use `LIST -a` option by default. If the server doesn't # understand the `-a` option and interprets it as a path, the results # can be surprising. See ticket #110. self.use_list_a_option = False def keep_alive(self): """ Try to keep the connection alive in order to avoid server timeouts. 
Note that this won't help if the connection has already timed out! In this case, `keep_alive` will raise an `TemporaryError`. (Actually, if you get a server timeout, the error - for a specific connection - will be permanent.) """ # Warning: Don't call this method on `FTPHost` instances which # represent file transfers. This may fail in confusing ways. with ftputil.error.ftplib_error_to_ftp_os_error: # Ignore return value. self._session.pwd() # # Dealing with child sessions and file-like objects (rather low-level) # def _make_session(self): """ Return a new session object according to the current state of this `FTPHost` instance. """ # Don't modify original attributes below. args = self._args[:] kwargs = self._kwargs.copy() # If a session factory has been given on the instantiation of this # `FTPHost` object, use the same factory for this `FTPHost` object's # child sessions. factory = kwargs.pop("session_factory", default_session_factory) with ftputil.error.ftplib_error_to_ftp_os_error: session = factory(*args, **kwargs) if not hasattr(session, "encoding"): raise ftputil.error.NoEncodingError( f"session instance {session!r} must have an `encoding` attribute" ) self._encoding = session.encoding return session def _copy(self): """ Return a copy of this `FTPHost` object. """ # The copy includes a new session factory return value (aka session) # but doesn't copy the state of `self.getcwd()`. return self.__class__(*self._args, **self._kwargs) def _available_child(self): """ Return an available (i. e. one whose `_file` object is closed and doesn't have a timed-out server connection) child (`FTPHost` object) from the pool of children or `None` if there aren't any. """ # TODO: Currently timed-out child sessions aren't removed and may # collect over time. In very busy or long running processes, this might # slow down an application because the same stale child sessions have # to be processed again and again. for host in self._children: # Test for timeouts only after testing for a closed file: # - If a file isn't closed, save time; don't bother to access the # remote server. # - If a file transfer on the child is in progress, requesting the # directory is an invalid operation because of the way the FTP # state machine works (see RFC 959). if host._file.closed: try: host._session.pwd() # Under high load, a 226 status response from a previous # download may arrive too late, so that it's "seen" in the # `pwd` call. For now, skip the potential child session; it # will be considered again when `_available_child` is called # the next time. except ftplib.error_reply: continue # Timed-out sessions raise `error_temp`. except ftplib.error_temp: continue # The server may have closed the connection which may cause # `host._session.getline` to raise an `EOFError` (see ticket # #114). except EOFError: continue # Under high load, there may be a socket read timeout during # the last FTP file `close` (see ticket #112). Note that a # socket timeout is quite different from an FTP session # timeout. except OSError: continue else: # Everything's ok; use this `FTPHost` instance. return host # Be explicit. return None def open( self, path, mode="r", buffering=None, encoding=None, errors=None, newline=None, *, rest=None, ): """ Return an open file(-like) object which is associated with this `FTPHost` object. The arguments `path`, `mode`, `buffering`, `encoding`, `errors` and `newlines` have the same meaning as for `open`. 
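        A small usage sketch (the file name is an assumption, not from the
        original docstring):

            with host.open("notes.txt", "r", encoding="latin-1") as fobj:
                content = fobj.read()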
If `rest` is given as an integer, - reading will start at the byte (zero-based) `rest` - writing will overwrite the remote file from byte `rest` This method tries to reuse a child but will generate a new one if none is available. """ # Support the same arguments as `open`. # pylint: disable=too-many-arguments path = ftputil.tool.as_str_path(path, encoding=self._encoding) host = self._available_child() if host is None: host = self._copy() self._children.append(host) host._file = ftputil.file.FTPFile(host) basedir = self.getcwd() # Prepare for changing the directory (see whitespace workaround in # method `_dir`). if host.path.isabs(path): effective_path = path else: effective_path = host.path.join(basedir, path) effective_dir, effective_file = host.path.split(effective_path) try: # This will fail if the directory isn't accessible at all. host.chdir(effective_dir) except ftputil.error.PermanentError: # Similarly to a failed `file` in a local file system, raise an # `IOError`, not an `OSError`. raise ftputil.error.FTPIOError( "remote directory '{}' doesn't " "exist or has insufficient access rights".format(effective_dir) ) host._file._open( effective_file, mode=mode, buffering=buffering, encoding=encoding, errors=errors, newline=newline, rest=rest, ) if "w" in mode: # Invalidate cache entry because size and timestamps will change. self.stat_cache.invalidate(effective_path) return host._file def close(self): """ Close host connection. """ if self.closed: return # Close associated children. for host in self._children: # Children have a `_file` attribute which is an `FTPFile` object. host._file.close() host.close() # Now deal with ourself. try: with ftputil.error.ftplib_error_to_ftp_os_error: self._session.close() finally: # If something went wrong before, the host/session is probably # defunct and subsequent calls to `close` won't help either, so # consider the host/session closed for practical purposes. self.stat_cache.clear() self._children = [] self.closed = True # # Setting a custom directory parser # def set_parser(self, parser): """ Set the parser for extracting stat results from directory listings. The parser interface is described in the documentation, but here are the most important things: - A parser should derive from `ftputil.stat.Parser`. - The parser has to implement two methods, `parse_line` and `ignores_line`. For the latter, there's a probably useful default in the class `ftputil.stat.Parser`. - `parse_line` should try to parse a line of a directory listing and return a `ftputil.stat.StatResult` instance. If parsing isn't possible, raise `ftputil.error.ParserError` with a useful error message. - `ignores_line` should return a true value if the line isn't assumed to contain stat information. """ # The cache contents, if any, probably aren't useful. self.stat_cache.clear() # Set the parser explicitly, don't allow "smart" switching anymore. self._stat._parser = parser self._stat._allow_parser_switching = False # # Time shift adjustment between client (i. e. us) and server # @staticmethod def __rounded_time_shift(time_shift): """ Return the given time shift in seconds, but rounded to 15-minute units. The argument is also assumed to be given in seconds. """ minute = 60.0 # Avoid division by zero below. if time_shift == 0: return 0.0 # Use a positive value for rounding. absolute_time_shift = abs(time_shift) signum = time_shift / absolute_time_shift # Round absolute time shift to 15-minute units. 
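        # Worked example with assumed numbers (not from the original source):
        # for a raw shift of 3605 s, (3605 + 450) / 900 truncates to 4,
        # giving 4 * 900 = 3600 s; for 7150 s, (7150 + 450) / 900 truncates
        # to 8, giving 7200 s.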
absolute_rounded_time_shift = int( (absolute_time_shift + (7.5 * minute)) / (15.0 * minute) ) * (15.0 * minute) # Return with correct sign. return signum * absolute_rounded_time_shift def __assert_valid_time_shift(self, time_shift): """ Perform sanity checks on the time shift value (given in seconds). If the value is invalid, raise a `TimeShiftError`, else simply return `None`. """ minute = 60.0 # seconds hour = 60.0 * minute absolute_rounded_time_shift = abs(self.__rounded_time_shift(time_shift)) # Test 1: Fail if the absolute time shift is greater than a full day # (24 hours). if absolute_rounded_time_shift > 24 * hour: raise ftputil.error.TimeShiftError( "time shift abs({0:.2f} s) > 1 day".format(time_shift) ) # Test 2: Fail if the deviation between given time shift and 15-minute # units is greater than a certain limit. maximum_deviation = 5 * minute if abs(time_shift - self.__rounded_time_shift(time_shift)) > maximum_deviation: raise ftputil.error.TimeShiftError( "time shift ({0:.2f} s) deviates more than {1:d} s " "from 15-minute units".format(time_shift, int(maximum_deviation)) ) def set_time_shift(self, time_shift): """ Set the time shift value. By (my) definition, the time shift value is the difference of the time zone used in server listings and UTC, i. e. time_shift =def= t_server - utc <=> t_server = utc + time_shift <=> utc = t_server - time_shift The time shift is measured in seconds. """ self.__assert_valid_time_shift(time_shift) old_time_shift = self.time_shift() if time_shift != old_time_shift: # If the time shift changed, all entries in the cache will have # wrong times with respect to the updated time shift, therefore # clear the cache. self.stat_cache.clear() self._time_shift = self.__rounded_time_shift(time_shift) def time_shift(self): """ Return the time shift between FTP server and client. See the docstring of `set_time_shift` for more on this value. """ return self._time_shift def synchronize_times(self): """ Synchronize the local times of FTP client and server. This is necessary to let `upload_if_newer` and `download_if_newer` work correctly. If `synchronize_times` isn't applicable (see below), the time shift can still be set explicitly with `set_time_shift`. This implementation of `synchronize_times` requires _all_ of the following: - The connection between server and client is established. - The client has write access to the directory that is current when `synchronize_times` is called. The common usage pattern of `synchronize_times` is to call it directly after the connection is established. (As can be concluded from the points above, this requires write access to the login directory.) If `synchronize_times` fails, it raises a `TimeShiftError`. """ helper_file_name = "_ftputil_sync_" # Open a dummy file for writing in the current directory on the FTP # host, then close it. try: # May raise `FTPIOError` if directory isn't writable. file_ = self.open(helper_file_name, "w") file_.close() except ftputil.error.FTPIOError: raise ftputil.error.TimeShiftError( """couldn't write helper file in directory '{}'""".format(self.getcwd()) ) # If everything worked up to here it should be possible to stat and # then remove the just-written file. try: server_time = self.path.getmtime(helper_file_name) self.unlink(helper_file_name) except ftputil.error.FTPOSError: # If we got a `TimeShiftError` exception above, we should't come # here: if we didn't get a `TimeShiftError` above, deletion should # be possible. 
The only reason for an exception I can think of here # is a race condition by removing the helper file or write # permission from the directory or helper file after it has been # written to. raise ftputil.error.TimeShiftError( "could write helper file but not unlink it" ) # Calculate the difference between server and client. now = time.time() time_shift = server_time - now # As the time shift for this host instance isn't set yet, the directory # parser will calculate times one year in the past if the time zone of # the server is east from ours. Thus the time shift will be off by a # year as well (see ticket #55). if time_shift < -360 * 24 * 60 * 60: # Read one year and recalculate the time shift. We don't know how # many days made up that year (it might have been a leap year), so # go the route via `datetime.replace`. server_datetime = datetime.datetime.fromtimestamp( server_time, tz=datetime.timezone.utc ) server_datetime = server_datetime.replace(year=server_datetime.year + 1) time_shift = server_datetime.timestamp() - now self.set_time_shift(time_shift) # # Operations based on file-like objects (rather high-level), like upload # and download # # XXX: This has a different API from `shutil.copyfileobj`, on which this # method is modeled. But I don't think it makes sense to change this method # here because the method is probably rarely used and a change would break # client code. @staticmethod def copyfileobj( source, target, max_chunk_size=ftputil.file_transfer.MAX_COPY_CHUNK_SIZE, callback=None, ): """ Copy data from file-like object `source` to file-like object `target`. """ ftputil.file_transfer.copyfileobj(source, target, max_chunk_size, callback) def _upload_files(self, source_path, target_path): """ Return a `LocalFile` and `RemoteFile` as source and target, respectively. The strings `source_path` and `target_path` are the (absolute or relative) paths of the local and the remote file, respectively. """ source_file = ftputil.file_transfer.LocalFile(source_path, "rb") # Passing `self` (the `FTPHost` instance) here is correct. target_file = ftputil.file_transfer.RemoteFile(self, target_path, "wb") return source_file, target_file def upload(self, source, target, callback=None): """ Upload a file from the local source (name) to the remote target (name). If a callable `callback` is given, it's called after every chunk of transferred data. The chunk size is a constant defined in `file_transfer`. The callback will be called with a single argument, the data chunk that was transferred before the callback was called. """ if source in ["", b""]: raise IOError("path argument `source` is empty") ftputil.tool.raise_for_empty_path(target, path_argument_name="target") target = ftputil.tool.as_str_path(target, encoding=self._encoding) source_file, target_file = self._upload_files(source, target) ftputil.file_transfer.copy_file( source_file, target_file, conditional=False, callback=callback ) def upload_if_newer(self, source, target, callback=None): """ Upload a file only if it's newer than the target on the remote host or if the target file does not exist. See the method `upload` for the meaning of the parameters. If an upload was necessary, return `True`, else return `False`. If a callable `callback` is given, it's called after every chunk of transferred data. The chunk size is a constant defined in `file_transfer`. The callback will be called with a single argument, the data chunk that was transferred before the callback was called. 
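        A minimal illustrative call (the paths are assumptions, not from the
        original docstring):

            uploaded = host.upload_if_newer("index.html", "/www/index.html")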
""" ftputil.tool.raise_for_empty_path(source, path_argument_name="source") if target in ["", b""]: raise IOError("path argument `target` is empty") target = ftputil.tool.as_str_path(target, encoding=self._encoding) source_file, target_file = self._upload_files(source, target) return ftputil.file_transfer.copy_file( source_file, target_file, conditional=True, callback=callback ) def _download_files(self, source_path, target_path): """ Return a `RemoteFile` and `LocalFile` as source and target, respectively. The strings `source_path` and `target_path` are the (absolute or relative) paths of the remote and the local file, respectively. """ source_file = ftputil.file_transfer.RemoteFile(self, source_path, "rb") target_file = ftputil.file_transfer.LocalFile(target_path, "wb") return source_file, target_file def download(self, source, target, callback=None): """ Download a file from the remote source (name) to the local target (name). If a callable `callback` is given, it's called after every chunk of transferred data. The chunk size is a constant defined in `file_transfer`. The callback will be called with a single argument, the data chunk that was transferred before the callback was called. """ ftputil.tool.raise_for_empty_path(source, path_argument_name="source") if target in ["", b""]: raise IOError("path argument `target` is empty") source = ftputil.tool.as_str_path(source, encoding=self._encoding) source_file, target_file = self._download_files(source, target) ftputil.file_transfer.copy_file( source_file, target_file, conditional=False, callback=callback ) def download_if_newer(self, source, target, callback=None): """ Download a file only if it's newer than the target on the local host or if the target file does not exist. See the method `download` for the meaning of the parameters. If a download was necessary, return `True`, else return `False`. If a callable `callback` is given, it's called after every chunk of transferred data. The chunk size is a constant defined in `file_transfer`. The callback will be called with a single argument, the data chunk that was transferred before the callback was called. """ if source in ["", b""]: raise IOError("path argument `source` is empty") ftputil.tool.raise_for_empty_path(target, path_argument_name="target") source = ftputil.tool.as_str_path(source, encoding=self._encoding) source_file, target_file = self._download_files(source, target) return ftputil.file_transfer.copy_file( source_file, target_file, conditional=True, callback=callback ) # # Helper methods to descend into a directory before executing a command # def _check_inaccessible_login_directory(self): """ Raise an `InaccessibleLoginDirError` exception if we can't change to the login directory. This test is only reliable if the current directory is the login directory. """ presumable_login_dir = self.getcwd() # Bail out with an internal error rather than modify the current # directory without hope of restoration. try: self.chdir(presumable_login_dir) except ftputil.error.PermanentError: raise ftputil.error.InaccessibleLoginDirError( "directory '{}' is not accessible".format(presumable_login_dir) ) def _robust_ftp_command(self, command, path, descend_deeply=False): """ Run an FTP command on a path. The return value of the method is the return value of the command. If `descend_deeply` is true (the default is false), descend deeply, i. e. change the directory to the end of the path. 
""" # If we can't change to the yet-current directory, the code below won't # work (see below), so in this case rather raise an exception than # giving wrong results. self._check_inaccessible_login_directory() # Some FTP servers don't behave as expected if the directory portion of # the path contains whitespace; some even yield strange results if the # command isn't executed in the current directory. Therefore, change to # the directory which contains the item to run the command on and # invoke the command just there. # # Remember old working directory. old_dir = self.getcwd() try: if descend_deeply: # Invoke the command in (not: on) the deepest directory. self.chdir(path) # Workaround for some servers that give recursive listings when # called with a dot as path; see issue #33, # http://ftputil.sschwarzer.net/trac/ticket/33 return command(self, "") else: # Invoke the command in the "next to last" directory. head, tail = self.path.split(path) self.chdir(head) return command(self, tail) finally: self.chdir(old_dir) # # Miscellaneous utility methods resembling functions in `os` # def getcwd(self): """ Return the current directory path. """ return self._cached_current_dir def chdir(self, path): """ Change the directory on the host to `path`. """ path = ftputil.tool.as_str_path(path, encoding=self._encoding) with ftputil.error.ftplib_error_to_ftp_os_error: self._session.cwd(path) # The path given as the argument is relative to the old current # directory, therefore join them. self._cached_current_dir = self.path.normpath( self.path.join(self._cached_current_dir, path) ) # Ignore unused argument `mode` # pylint: disable=unused-argument def mkdir(self, path, mode=None): """ Make the directory path on the remote host. The argument `mode` is ignored and only "supported" for similarity with `os.mkdir`. """ ftputil.tool.raise_for_empty_path(path) path = ftputil.tool.as_str_path(path, encoding=self._encoding) def command(self, path): """Callback function.""" with ftputil.error.ftplib_error_to_ftp_os_error: self._session.mkd(path) self._robust_ftp_command(command, path) # TODO: The virtual directory support doesn't have unit tests yet because # the mocking most likely would be quite complicated. The tests should be # added when mainly the `mock` library is used instead of the mock code in # `test.mock_ftplib`. # # Ignore unused argument `mode` # pylint: disable=unused-argument def makedirs(self, path, mode=None, exist_ok=False): """ Make the directory `path`, but also make not yet existing intermediate directories, like `os.makedirs`. The value of `mode` is only accepted for compatibility with `os.makedirs` but otherwise ignored. If `exist_ok` is `False` (the default) and the leaf directory exists, raise a `PermanentError` with `errno` 17. """ ftputil.tool.raise_for_empty_path(path) path = ftputil.tool.as_str_path(path, encoding=self._encoding) path = self.path.abspath(path) directories = path.split(self.sep) old_dir = self.getcwd() try: # Try to build the directory chain from the "uppermost" to the # "lowermost" directory. for index in range(1, len(directories)): # Re-insert the separator which got lost by using `path.split`. next_directory = self.sep + self.path.join(*directories[: index + 1]) # If we have "virtual directories" (see #86), just listing the # parent directory won't tell us if a directory actually # exists. So try to change into the directory. try: self.chdir(next_directory) except ftputil.error.PermanentError: # Directory presumably doesn't exist. 
try: self.mkdir(next_directory) except ftputil.error.PermanentError: # Find out the cause of the error. Re-raise the # exception only if the directory didn't exist already, # else something went _really_ wrong, e. g. there's a # regular file with the name of the directory. if not self.path.isdir(next_directory): raise else: # Directory exists. If we are at the last directory # component and `exist_ok` is `False`, this is an error. if (index == len(directories) - 1) and (not exist_ok): # Before PEP 3151, if `exist_ok` is `False`, trying to # create an existing directory in the local file system # results in an `OSError` with `errno.EEXIST, so # emulate this also for FTP. ftp_os_error = ftputil.error.PermanentError( "path {!r} exists".format(path) ) ftp_os_error.errno = errno.EEXIST raise ftp_os_error finally: self.chdir(old_dir) def rmdir(self, path): """ Remove the _empty_ directory `path` on the remote host. Compatibility note: Previous versions of ftputil could possibly delete non-empty directories as well, - if the server allowed it. This is no longer supported. """ ftputil.tool.raise_for_empty_path(path) path = ftputil.tool.as_str_path(path, encoding=self._encoding) path = self.path.abspath(path) if self.listdir(path): raise ftputil.error.PermanentError("directory '{}' not empty".format(path)) # XXX: How does `rmd` work with links? def command(self, path): """Callback function.""" with ftputil.error.ftplib_error_to_ftp_os_error: self._session.rmd(path) # Always invalidate the cache. If `_robust_ftp_command` raises an # exception, we can't tell for sure if the removal failed on the server # vs. it succeeded, but something went wrong after that. try: self._robust_ftp_command(command, path) finally: self.stat_cache.invalidate(path) def remove(self, path): """ Remove the file or link given by `path`. Raise a `PermanentError` if the path doesn't exist, but maybe raise other exceptions depending on the state of the server (e. g. timeout). """ ftputil.tool.raise_for_empty_path(path) path = ftputil.tool.as_str_path(path, encoding=self._encoding) path = self.path.abspath(path) # Though `isfile` includes also links to files, `islink` is needed to # include links to directories. if ( self.path.isfile(path) or self.path.islink(path) or not self.path.exists(path) ): # If the path doesn't exist, let the removal command trigger an # exception with a more appropriate error message. def command(self, path): """Callback function.""" with ftputil.error.ftplib_error_to_ftp_os_error: self._session.delete(path) # Always invalidate the cache. If `_robust_ftp_command` raises an # exception, we can't tell for sure if the removal failed on the # server vs. it succeeded, but something went wrong after that. try: self._robust_ftp_command(command, path) finally: self.stat_cache.invalidate(path) else: raise ftputil.error.PermanentError( "remove/unlink can only delete files and links, " "not directories" ) unlink = remove def rmtree(self, path, ignore_errors=False, onerror=None): """ Remove the given remote, possibly non-empty, directory tree. The interface of this method is rather complex, in favor of compatibility with `shutil.rmtree`. If `ignore_errors` is set to a true value, errors are ignored. If `ignore_errors` is a false value _and_ `onerror` isn't set, all exceptions occurring during the tree iteration and processing are raised. These exceptions are all of type `PermanentError`. To distinguish between error situations, pass in a callable for `onerror`. 
This callable must accept three arguments: `func`, `path` and `exc_info`. `func` is a bound method object, _for example_ `your_host_object.listdir`. `path` is the path that was the recent argument of the respective method (`listdir`, `remove`, `rmdir`). `exc_info` is the exception info as it's got from `sys.exc_info`. Implementation note: The code is copied from `shutil.rmtree` in Python 2.4 and adapted to ftputil. """ ftputil.tool.raise_for_empty_path(path) path = ftputil.tool.as_str_path(path, encoding=self._encoding) # The following code is an adapted version of Python 2.4's # `shutil.rmtree` function. if ignore_errors: def new_onerror(*args): """Do nothing.""" # pylint: disable=unused-argument pass elif onerror is None: def new_onerror(*args): """Re-raise exception.""" # pylint: disable=misplaced-bare-raise, unused-argument raise else: new_onerror = onerror names = [] try: names = self.listdir(path) except ftputil.error.PermanentError: new_onerror(self.listdir, path, sys.exc_info()) for name in names: full_name = self.path.join(path, name) try: mode = self.lstat(full_name).st_mode except ftputil.error.PermanentError: mode = 0 if stat.S_ISDIR(mode): self.rmtree(full_name, ignore_errors, new_onerror) else: try: self.remove(full_name) except ftputil.error.PermanentError: new_onerror(self.remove, full_name, sys.exc_info()) try: self.rmdir(path) except ftputil.error.FTPOSError: new_onerror(self.rmdir, path, sys.exc_info()) def rename(self, source, target): """ Rename the `source` on the FTP host to `target`. """ ftputil.tool.raise_for_empty_path(source, path_argument_name="source") ftputil.tool.raise_for_empty_path(target, path_argument_name="target") source = ftputil.tool.as_str_path(source, encoding=self._encoding) target = ftputil.tool.as_str_path(target, encoding=self._encoding) source_head, source_tail = self.path.split(source) target_head, target_tail = self.path.split(target) # Avoid code duplication below. # # Use `source_arg` and `target_arg` instead of `source` and `target` to # make it clearer that we use the arguments passed to # `rename_with_cleanup`, not any variables from the scope outside # `rename_with_cleanup`. def rename_with_cleanup(source_arg, target_arg): try: with ftputil.error.ftplib_error_to_ftp_os_error: self._session.rename(source_arg, target_arg) # Always invalidate the cache entries in case the rename succeeds # on the server, but the server doesn't manage to tell the client. finally: source_absolute_path = self.path.abspath(source_arg) target_absolute_path = self.path.abspath(target_arg) self.stat_cache.invalidate(source_absolute_path) self.stat_cache.invalidate(target_absolute_path) # The following code is in spirit similar to the code in the method # `_robust_ftp_command`, though we do _not_ do _everything_ imaginable. self._check_inaccessible_login_directory() paths_contain_whitespace = (" " in source_head) or (" " in target_head) if paths_contain_whitespace and source_head == target_head: # Both items are in the same directory. old_dir = self.getcwd() try: self.chdir(source_head) rename_with_cleanup(source_tail, target_tail) finally: self.chdir(old_dir) else: # Use straightforward command. rename_with_cleanup(source, target) # XXX: One could argue to put this method into the `_Stat` class, but I # refrained from that because then `_Stat` would have to know about # `FTPHost`'s `_session` attribute and in turn about `_session`'s `dir` # method. def _dir(self, path): """ Return a directory listing as made by FTP's `LIST` command as a list of strings. 
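A typical line from a Unix-style server might look like

    -rw-r--r--   1 user     group        4604 Nov 23 02:33 index.html

but the exact format depends on the server.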
""" # Don't use `self.path.isdir` in this method because that would cause a # call of `(l)stat` and thus a call to `_dir`, so we would end up with # an infinite recursion. def _FTPHost_dir_command(self, path): """Callback function.""" lines = [] def callback(line): """Callback function.""" lines.append(ftputil.tool.as_str(line, encoding=self._encoding)) with ftputil.error.ftplib_error_to_ftp_os_error: if self.use_list_a_option: self._session.dir("-a", path, callback) else: self._session.dir(path, callback) return lines lines = self._robust_ftp_command( _FTPHost_dir_command, path, descend_deeply=True ) return lines # The `listdir`, `lstat` and `stat` methods don't use `_robust_ftp_command` # because they implicitly already use `_dir` which actually uses # `_robust_ftp_command`. def listdir(self, path): """ Return a list of directories, files etc. in the directory named `path`. If the directory listing from the server can't be parsed with any of the available parsers raise a `ParserError`. """ ftputil.tool.raise_for_empty_path(path) original_path = path path = ftputil.tool.as_str_path(path, encoding=self._encoding) items = self._stat._listdir(path) return [ ftputil.tool.same_string_type_as(original_path, item, self._encoding) for item in items ] def lstat(self, path, _exception_for_missing_path=True): """ Return an object similar to that returned by `os.lstat`. If the directory listing from the server can't be parsed with any of the available parsers, raise a `ParserError`. If the directory _can_ be parsed and the `path` is _not_ found, raise a `PermanentError`. (`_exception_for_missing_path` is an implementation aid and _not_ intended for use by ftputil clients.) """ ftputil.tool.raise_for_empty_path(path) path = ftputil.tool.as_str_path(path, encoding=self._encoding) return self._stat._lstat(path, _exception_for_missing_path) def stat(self, path, _exception_for_missing_path=True): """ Return info from a "stat" call on `path`. If the directory containing `path` can't be parsed, raise a `ParserError`. If the directory containing `path` can be parsed but the `path` can't be found, raise a `PermanentError`. Also raise a `PermanentError` if there's an endless (cyclic) chain of symbolic links "behind" the `path`. (`_exception_for_missing_path` is an implementation aid and _not_ intended for use by ftputil clients.) """ ftputil.tool.raise_for_empty_path(path) path = ftputil.tool.as_str_path(path, encoding=self._encoding) return self._stat._stat(path, _exception_for_missing_path) def walk(self, top, topdown=True, onerror=None, followlinks=False): """ Iterate over directory tree and return a tuple (dirpath, dirnames, filenames) on each iteration, like the `os.walk` function (see https://docs.python.org/library/os.html#os.walk ). """ ftputil.tool.raise_for_empty_path(top, path_argument_name="top") top = ftputil.tool.as_str_path(top, encoding=self._encoding) # The following code is copied from `os.walk` in Python 2.4 and adapted # to ftputil. 
try: names = self.listdir(top) except ftputil.error.FTPOSError as err: if onerror is not None: onerror(err) return dirs, nondirs = [], [] for name in names: if self.path.isdir(self.path.join(top, name)): dirs.append(name) else: nondirs.append(name) if topdown: yield top, dirs, nondirs for name in dirs: path = self.path.join(top, name) if followlinks or not self.path.islink(path): yield from self.walk(path, topdown, onerror, followlinks) if not topdown: yield top, dirs, nondirs def chmod(self, path, mode): """ Change the mode of a remote `path` (a string) to the integer `mode`. This integer uses the same bits as the mode value returned by the `stat` and `lstat` commands. If something goes wrong, raise a `TemporaryError` or a `PermanentError`, according to the status code returned by the server. In particular, a non-existent path usually causes a `PermanentError`. """ ftputil.tool.raise_for_empty_path(path) path = ftputil.tool.as_str_path(path, encoding=self._encoding) path = self.path.abspath(path) def command(self, path): """Callback function.""" with ftputil.error.ftplib_error_to_ftp_os_error: self._session.voidcmd("SITE CHMOD 0{0:o} {1}".format(mode, path)) self._robust_ftp_command(command, path) self.stat_cache.invalidate(path) def __getstate__(self): raise TypeError("cannot serialize FTPHost object") # # Context manager methods # def __enter__(self): # Return `self`, so it can be accessed as the variable component of the # `with` statement. return self def __exit__(self, exc_type, exc_val, exc_tb): # We don't need the `exc_*` arguments here. # pylint: disable=unused-argument self.close() # Be explicit. return False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1686848543.0 ftputil-5.1.0/ftputil/lrucache.py0000664000175000017500000002355314442642037016107 0ustar00schwaschwa# lrucache.py -- a simple LRU (Least-Recently-Used) cache class # Copyright 2004 Evan Prodromou # # Copyright 2009-2018 Stefan Schwarzer # (some changes to the original version) # Licensed under the Academic Free License 2.1 # # Additionally licensed for ftputil under the 3-clause BSD license # with permission by the author, Evan Prodromou. This means you can # use this module under either of the two licenses. # # Many thanks for your contribution, Evan! :-) # # The original file is available at # http://pypi.python.org/pypi/lrucache/0.2 . # arch-tag: LRU cache main module """a simple LRU (Least-Recently-Used) cache module This module provides very simple LRU (Least-Recently-Used) cache functionality. An *in-memory cache* is useful for storing the results of an 'expensive' process (one that takes a lot of time or resources) for later re-use. Typical examples are accessing data from the filesystem, a database, or a network location. If you know you'll need to re-read the data again, it can help to keep it in a cache. You *can* use a Python dictionary as a cache for some purposes. However, if the results you're caching are large, or you have a lot of possible results, this can be impractical memory-wise. An *LRU cache*, on the other hand, only keeps _some_ of the results in memory, which keeps you from overusing resources. The cache is bounded by a maximum size; if you try to add more values to the cache, it will automatically discard the values that you haven't read or written to in the longest time. In other words, the least-recently-used items are discarded. [1]_ .. [1]: 'Discarded' here means 'removed from the cache'. 
""" import time # The suffix after the hyphen denotes modifications by the # ftputil project with respect to the original version. __version__ = "0.2-15" __all__ = ["CacheKeyError", "LRUCache", "DEFAULT_SIZE"] __docformat__ = "reStructuredText en" # Default size of a new LRUCache object, if no 'size' argument is given. DEFAULT_SIZE = 16 class CacheKeyError(KeyError): """Error raised when cache requests fail. When a cache record is accessed which no longer exists (or never did), this error is raised. To avoid it, you may want to check for the existence of a cache record before reading or deleting it. """ pass class LRUCache: """Least-Recently-Used (LRU) cache. Instances of this class provide a least-recently-used (LRU) cache. They emulate a Python mapping type. You can use an LRU cache more or less like a Python dictionary, with the exception that objects you put into the cache may be discarded before you take them out. Some example usage:: cache = LRUCache(32) # new cache cache['foo'] = get_file_contents('foo') # or whatever if 'foo' in cache: # if it's still in cache... # use cached version contents = cache['foo'] else: # recalculate contents = get_file_contents('foo') # store in cache for next time cache['foo'] = contents print(cache.size) # Maximum size print(len(cache)) # 0 <= len(cache) <= cache.size cache.size = 10 # Auto-shrink on size assignment for i in range(50): # note: larger than cache size cache[i] = i if 0 not in cache: print('Zero was discarded.') if 42 in cache: del cache[42] # Manual deletion for j in cache: # iterate (in LRU order) print(j, cache[j]) # iterator produces keys, not values """ class _Node: """Record of a cached value. Not for public consumption.""" def __init__(self, key, obj, timestamp, sort_key): object.__init__(self) self.key = key self.obj = obj self.atime = timestamp self.mtime = self.atime self._sort_key = sort_key def __lt__(self, other): # Seems to be preferred over `__cmp__`, at least in newer # Python versions. Uses only around 60 % of the time # with respect to `__cmp__`. # pylint: disable=protected-access return self._sort_key < other._sort_key def __repr__(self): return "<%s %s => %s (%s)>" % ( self.__class__, self.key, self.obj, time.asctime(time.localtime(self.atime)), ) def __init__(self, size=DEFAULT_SIZE): """Init the `LRUCache` object. `size` is the initial _maximum_ size of the cache. The size can be changed by setting the `size` attribute. """ self.clear() # Maximum size of the cache. If more than 'size' elements are # added to the cache, the least-recently-used ones will be # discarded. This assignment implicitly checks the size value. self.size = size def clear(self): """Clear the cache, removing all elements. The `size` attribute of the cache isn't modified. """ # pylint: disable=attribute-defined-outside-init self.__heap = [] self.__dict = {} self.__counter = 0 def _sort_key(self): """Return a new integer value upon every call. Cache nodes need a monotonically increasing time indicator. `time.time()` and `time.clock()` don't guarantee this in a platform-independent way. See http://ftputil.sschwarzer.net/trac/ticket/32 for details. """ self.__counter += 1 return self.__counter def __len__(self): """Return _current_ number of cache entries. This may be different from the value of the `size` attribute. """ return len(self.__heap) def __contains__(self, key): """Return `True` if the item denoted by `key` is in the cache.""" return key in self.__dict def __setitem__(self, key, obj): """Store item `obj` in the cache under the key `key`. 
If the number of elements after the addition of a new key would exceed the maximum cache size, the least recently used item in the cache is "forgotten". """ heap = self.__heap dict_ = self.__dict if key in dict_: node = dict_[key] # Update node object in-place. node.obj = obj node.atime = time.time() node.mtime = node.atime # pylint: disable=protected-access node._sort_key = self._sort_key() else: # The size of the heap can be at most the value of # `self.size` because `__setattr__` decreases the cache # size if the new size value is smaller; so we don't # need a loop _here_. if len(heap) == self.size: lru_node = min(heap) heap.remove(lru_node) del dict_[lru_node.key] node = self._Node(key, obj, time.time(), self._sort_key()) dict_[key] = node heap.append(node) def __getitem__(self, key): """Return the item stored under `key` key. If no such key is present in the cache, raise a `CacheKeyError`. """ if not key in self.__dict: raise CacheKeyError(key) else: node = self.__dict[key] # Update node object in-place. node.atime = time.time() # pylint: disable=protected-access node._sort_key = self._sort_key() return node.obj def __delitem__(self, key): """Delete the item stored under `key` key. If no such key is present in the cache, raise a `CacheKeyError`. """ if not key in self.__dict: raise CacheKeyError(key) else: node = self.__dict[key] self.__heap.remove(node) del self.__dict[key] return node.obj def __iter__(self): """Iterate over the cache, from the least to the most recently accessed item. """ self.__heap.sort() for node in self.__heap: yield node.key def __setattr__(self, name, value): """If the name of the attribute is "size", set the _maximum_ size of the cache to the supplied value. """ object.__setattr__(self, name, value) # Automagically shrink heap on resize. if name == "size": size = value if not isinstance(size, int): raise TypeError("cache size (%r) must be an integer" % size) if size <= 0: raise ValueError("cache size (%d) must be positive" % size) heap = self.__heap dict_ = self.__dict # Do we need to remove anything at all? if len(heap) <= self.size: return # Remove enough nodes to reach the new size. heap.sort() node_count_to_remove = len(heap) - self.size for node in heap[:node_count_to_remove]: del dict_[node.key] del heap[:node_count_to_remove] def __repr__(self): return "<%s (%d elements)>" % (str(self.__class__), len(self.__heap)) def mtime(self, key): """Return the last modification time for the cache record with key. May be useful for cache instances where the stored values can get "stale", such as caching file or network resource contents. """ if not key in self.__dict: raise CacheKeyError(key) else: node = self.__dict[key] return node.mtime if __name__ == "__main__": cache = LRUCache(25) print(cache) for i in range(50): cache[i] = str(i) print(cache) if 46 in cache: del cache[46] print(cache) cache.size = 10 print(cache) cache[46] = "46" print(cache) print(len(cache)) for c in cache: print(c) print(cache) print(cache.mtime(46)) for c in cache: print(c) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/ftputil/path.py0000664000175000017500000001762314170605001015242 0ustar00schwaschwa# Copyright (C) 2003-2021, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. 
""" ftputil.path - simulate `os.path` for FTP servers """ import os import posixpath import stat import ftputil.error import ftputil.tool # The `_Path` class shouldn't be used directly by clients of the ftputil # library. __all__ = [] class _Path: """ Support class resembling `os.path`, accessible from the `FTPHost` object, e. g. as `FTPHost().path.abspath(path)`. Hint: substitute `os` with the `FTPHost` object. """ # `_Path` needs to provide all methods of `os.path`. # pylint: disable=too-many-instance-attributes def __init__(self, host): self._host = host self._encoding = host._encoding # Delegate these methods to the `posixpath` module because they don't # need file system access but work on the path strings (possibly # extracted from `PathLike` objects). # pylint: disable=invalid-name pp = posixpath self.basename = pp.basename self.commonprefix = pp.commonprefix self.dirname = pp.dirname self.isabs = pp.isabs self.join = pp.join self.normcase = pp.normcase self.normpath = pp.normpath self.split = pp.split self.splitdrive = pp.splitdrive self.splitext = pp.splitext def abspath(self, path): """ Return an absolute path. """ # Don't use `raise_for_empty_path` here since Python itself doesn't # raise an exception and just returns the current directory. original_path = path path = ftputil.tool.as_str_path(path, encoding=self._encoding) if not self.isabs(path): path = self.join(self._host.getcwd(), path) return ftputil.tool.same_string_type_as( os.fspath(original_path), self.normpath(path), self._encoding ) def exists(self, path): """ Return true if the path exists. """ if path in ["", b""]: return False try: lstat_result = self._host.lstat(path, _exception_for_missing_path=False) return lstat_result is not None except ftputil.error.RootDirError: return True def getmtime(self, path): """ Return the timestamp for the last modification for `path` as a float. This will raise `PermanentError` if the path doesn't exist, but maybe other exceptions depending on the state of the server (e. g. timeout). """ ftputil.tool.raise_for_empty_path(path) return self._host.stat(path).st_mtime def getsize(self, path): """ Return the size of the `path` item as an integer. This will raise `PermanentError` if the path doesn't exist, but maybe raise other exceptions depending on the state of the server (e. g. timeout). """ ftputil.tool.raise_for_empty_path(path) return self._host.stat(path).st_size # Check whether a path is a regular file/dir/link. For the first two cases # follow links (like in `os.path`). # # Implementation note: The previous implementations simply called `stat` or # `lstat` and returned `False` if they ended with raising a # `PermanentError`. That exception usually used to signal a missing path. # This approach has the problem, however, that exceptions caused by code # earlier in `lstat` are obscured by the exception handling in `isfile`, # `isdir` and `islink`. def _is_file_system_entity(self, path, dir_or_file): """ Return `True` if `path` represents the file system entity described by `dir_or_file` ("dir" or "file"). Return `False` if `path` isn't a directory or file, respectively or if `path` leads to an infinite chain of links. """ assert dir_or_file in ["dir", "file"] # Consider differences between directories and files. if dir_or_file == "dir": should_look_for_dir = True stat_function = stat.S_ISDIR else: should_look_for_dir = False stat_function = stat.S_ISREG # path = ftputil.tool.as_str_path(path, encoding=self._encoding) # Workaround if we can't go up from the current directory. 
The result # from `getcwd` should already be normalized. if self.normpath(path) == self._host.getcwd(): return should_look_for_dir try: stat_result = self._host.stat(path, _exception_for_missing_path=False) except ftputil.error.RecursiveLinksError: return False except ftputil.error.RootDirError: return should_look_for_dir else: if stat_result is None: # Non-existent path return False else: return stat_function(stat_result.st_mode) def isdir(self, path): """ Return true if the `path` exists and corresponds to a directory (no link). A non-existing path does _not_ cause a `PermanentError`, instead return `False`. """ if path in ["", b""]: return False return self._is_file_system_entity(path, "dir") def isfile(self, path): """ Return true if the `path` exists and corresponds to a regular file (no link). A non-existing path does _not_ cause a `PermanentError`, instead return `False`. """ if path in ["", b""]: return False return self._is_file_system_entity(path, "file") def islink(self, path): """ Return true if the `path` exists and is a link. A non-existing path does _not_ cause a `PermanentError`, instead return `False`. """ path = ftputil.tool.as_str_path(path, encoding=self._encoding) if path == "": return False try: lstat_result = self._host.lstat(path, _exception_for_missing_path=False) except ftputil.error.RootDirError: return False else: if lstat_result is None: # Non-existent path return False else: return stat.S_ISLNK(lstat_result.st_mode) def walk(self, top, func, arg): """ Directory tree walk with callback function. For each directory in the directory tree rooted at top (including top itself, but excluding "." and ".."), call func(arg, dirname, fnames). dirname is the name of the directory, and fnames a list of the names of the files and subdirectories in dirname (excluding "." and ".."). func may modify the fnames list in-place (e.g. via del or slice assignment), and walk will only recurse into the subdirectories whose names remain in fnames; this can be used to implement a filter, or to impose a specific order of visiting. No semantics are defined for, or required of, arg, beyond that arg is always passed to func. It can be used, e.g., to pass a filename pattern, or a mutable object designed to accumulate statistics. Passing None for arg is common. """ ftputil.tool.raise_for_empty_path(top, path_argument_name="top") top = ftputil.tool.as_str_path(top, encoding=self._encoding) # This code (and the above documentation) is taken from `posixpath.py`, # with slight modifications. try: names = self._host.listdir(top) except OSError: return func(arg, top, names) for name in names: name = self.join(top, name) try: stat_result = self._host.lstat(name) except OSError: continue if stat.S_ISDIR(stat_result[stat.ST_MODE]): self.walk(name, func, arg) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1702140994.0 ftputil-5.1.0/ftputil/path_encoding.py0000664000175000017500000000176514535116102017114 0ustar00schwaschwa# Copyright (C) 2021, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ Compatibility code for Python versions <= 3.8 vs. >= 3.9 Python 3.9 changed the default encoding for FTP remote paths from latin-1 to UTF-8 which causes a few headaches for ftputil. 
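For illustration: a remote name sent as the bytes b"\xc3\xa4.txt" (the UTF-8 encoding of "ä.txt") decodes to "Ã¤.txt" under latin-1, so the same server listing can yield different `str` paths depending on the Python version.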
""" import sys __all__ = ["DEFAULT_ENCODING", "RUNNING_UNDER_PY39_AND_UP", "FTPLIB_DEFAULT_ENCODING"] RUNNING_UNDER_PY39_AND_UP = (sys.version_info.major, sys.version_info.minor) >= (3, 9) # FTP path default encoding for Python 3.8 and below FTPLIB_PY38_ENCODING = "latin-1" # FTP path default encoding for Python 3.9 and above FTPLIB_PY39_ENCODING = "utf-8" # Default encoding for ftputil. Stay compatible to the behavior of former # ftputil versions and Python <= 3.8. DEFAULT_ENCODING = FTPLIB_PY38_ENCODING if RUNNING_UNDER_PY39_AND_UP: FTPLIB_DEFAULT_ENCODING = FTPLIB_PY39_ENCODING else: FTPLIB_DEFAULT_ENCODING = FTPLIB_PY38_ENCODING ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704490001.0 ftputil-5.1.0/ftputil/session.py0000644000175000017500000001371014546072021015767 0ustar00schwaschwa# Copyright (C) 2014-2024, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ Session factory factory (the two "factory" are intentional :-) ) for ftputil. """ import ftplib import ftputil.error import ftputil.tool __all__ = ["session_factory"] def _maybe_send_opts_utf8_on(session, encoding): """ If the requested encoding is UTF-8 and the server supports the `UTF8` feature, send "OPTS UTF8 ON". See https://datatracker.ietf.org/doc/html/rfc2640.html . """ if ((encoding is None) and ftputil.path_encoding.RUNNING_UNDER_PY39_AND_UP) or ( encoding in ["UTF-8", "UTF8", "utf-8", "utf8"] ): feat_output = session.sendcmd("FEAT") server_supports_opts_utf8_on = False for line in feat_output.splitlines(): # The leading space is important. See RFC 2640. if line.upper().rstrip() == " UTF8": server_supports_opts_utf8_on = True if server_supports_opts_utf8_on: session.sendcmd("OPTS UTF8 ON") # In a way, it would be appropriate to call this function # `session_factory_factory`, but that's cumbersome to use. Think of the # function returning a session factory and the shorter name should be fine. def session_factory( base_class=ftplib.FTP, port=21, use_passive_mode=None, *, encrypt_data_channel=True, encoding=None, debug_level=None, ): """ Create and return a session factory according to the keyword arguments. base_class: Base class to use for the session class (e. g. `ftplib.FTP_TLS` or `M2Crypto.ftpslib.FTP_TLS`, the default is `ftplib.FTP`). port: Port number (integer) for the command channel (default 21). If you don't know what "command channel" means, use the default or use what the provider gave you as "the FTP port". use_passive_mode: If `True`, explicitly use passive mode. If `False`, explicitly don't use passive mode. If `None` (default), let the `base_class` decide whether it wants to use active or passive mode. encrypt_data_channel: If `True` (the default), call the `prot_p` method of the base class if it has the method. If `False` or `None` (`None` is the default), don't call the method. encoding: Encoding (str) to use for directory and file paths, or `None`. Unicode (`str`) paths will be encoded with this encoding. Bytes paths are assumed to be in this encoding. The default (equivalent to passing `None`) is to use the default encoding of the `base_class` argument. Note that this encoding has changed from Python 3.8 to 3.9. In Python 3.8 and lower, the default path encoding is "latin-1"; in Python 3.9, the default path encoding is "utf-8". Therefore, if you want an encoding that's independent of the Python version, pass an explicit `encoding`. 
Using a non-`None` `encoding` is only supported if `base_class` is `ftplib.FTP` or a subclass of it. debug_level: Debug level (integer) to be set on a session instance. The default is `None`, meaning no debugging output. This function should work for the base classes `ftplib.FTP`, `ftplib.FTP_TLS`. Other base classes should work if they use the same API as `ftplib.FTP`. Usage example: my_session_factory = session_factory( base_class=ftplib.FTP_TLS, use_passive_mode=True, encrypt_data_channel=True) with ftputil.FTPHost(host, user, password, session_factory=my_session_factory) as host: ... """ if not isinstance(base_class, type): raise ValueError(f"`base_class` must be a class, but is {base_class!r}") if (encoding is not None) and (not issubclass(base_class, ftplib.FTP)): raise ValueError( f"`encoding` is only supported for `ftplib.FTP` and subclasses, " f"but base class is {base_class!r}" ) class Session(base_class): """ Session factory class created by `session_factory`. """ # In Python 3.8 and below, the `encoding` class attribute was never # documented, but setting it is the only way to set a custom encoding # for remote file system paths. Since we set the encoding on the class # level, all instances created from this class will share this # encoding. That's ok because the user asked for a specific encoding of # the _factory_ when calling `session_factory`. # # Python 3.9 is the first Python version to have a documented way to # set a custom encoding (per instance). # # XXX: The following heuristic doesn't cover the case that we run under # Python 3.8 or earlier _and_ have a base class with an `encoding` # argument. Also, the heuristic will fail if we run under Python 3.9, # but have a base class that overrides the constructor so that it # doesn't support the `encoding` argument anymore. def __init__(self, host, user, password): if ( encoding is not None ) and ftputil.path_encoding.RUNNING_UNDER_PY39_AND_UP: super().__init__(encoding=encoding) else: super().__init__() self.connect(host, port) if debug_level is not None: self.set_debuglevel(debug_level) self.login(user, password) # `set_pasv` can be called with `True` (causing passive mode) or # `False` (causing active mode). if use_passive_mode is not None: self.set_pasv(use_passive_mode) if encrypt_data_channel and hasattr(base_class, "prot_p"): self.prot_p() _maybe_send_opts_utf8_on(self, encoding) if (encoding is not None) and not ftputil.path_encoding.RUNNING_UNDER_PY39_AND_UP: Session.encoding = encoding return Session ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704489664.0 ftputil-5.1.0/ftputil/stat.py0000644000175000017500000010152514546071300015260 0ustar00schwaschwa# Copyright (C) 2002-2020, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ ftputil.stat - stat result, parsers, and FTP stat'ing for `ftputil` """ import datetime import math import re import stat import ftputil.error import ftputil.stat_cache # These can be used to write custom parsers. __all__ = ["StatResult", "Parser", "UnixParser", "MSParser"] # Datetime precision values in seconds. MINUTE_PRECISION = 60 DAY_PRECISION = 24 * 60 * 60 UNKNOWN_PRECISION = None class StatResult(tuple): """ Support class resembling a tuple like that returned from `os.(l)stat`. 
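Example use (a sketch, assuming `host` is an `FTPHost` instance):

    stat_result = host.stat("some_file")
    print(stat_result.st_size, stat_result.st_mtime)

Fields that can't be determined from the directory listing (for example `st_ino`) are `None`.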
""" _index_mapping = { "st_mode": 0, "st_ino": 1, "st_dev": 2, "st_nlink": 3, "st_uid": 4, "st_gid": 5, "st_size": 6, "st_atime": 7, "st_mtime": 8, "st_ctime": 9, "_st_name": 10, "_st_target": 11, } def __init__(self, sequence): # Don't call `__init__` via `super`. Construction from a sequence is # implicitly handled by `tuple.__new__`, not `tuple.__init__`. # pylint: disable=super-init-not-called # # Use `sequence` parameter to remain compatible to `__new__` interface. # pylint: disable=unused-argument # # These may be overwritten in a `Parser.parse_line` method. self._st_name = "" self._st_target = None self._st_mtime_precision = UNKNOWN_PRECISION def __getattr__(self, attr_name): if attr_name in self._index_mapping: return self[self._index_mapping[attr_name]] else: raise AttributeError( "'StatResult' object has no attribute '{}'".format(attr_name) ) def __repr__(self): # "Invert" `_index_mapping` so that we can look up the names for the # tuple indices. index_to_name = dict((v, k) for k, v in self._index_mapping.items()) argument_strings = [] for index, item in enumerate(self): argument_strings.append("{}={!r}".format(index_to_name[index], item)) return "{}({})".format(type(self).__name__, ", ".join(argument_strings)) # # FTP directory parsers # class Parser: """ Represent a parser for directory lines. Parsers for specific directory formats inherit from this class. """ # Map month abbreviations to month numbers. _month_numbers = { "jan": 1, "feb": 2, "mar": 3, "apr": 4, "may": 5, "jun": 6, "jul": 7, "aug": 8, "sep": 9, "oct": 10, "nov": 11, "dec": 12, } _total_regex = re.compile(r"^total\s+\d+") def ignores_line(self, line): """ Return a true value if the line should be ignored, i. e. is assumed to _not_ contain actual directory/file/link data. A typical example are summary lines like "total 23" which are emitted by some FTP servers. If the line should be used to extract stat data from it, return a false value. """ # Ignore empty lines stemming from only a line break. if not line.strip(): # Yes, ignore the line if it's empty. return True # Either a `_SRE_Match` instance or `None` match = self._total_regex.search(line) return bool(match) def parse_line(self, line, time_shift=0.0): """ Return a `StatResult` object as derived from the string `line`. The parser code to use depends on the directory format the FTP server delivers (also see examples at end of file). If the given text line can't be parsed, raise a `ParserError`. For the definition of `time_shift` see the docstring of `FTPHost.set_time_shift` in `ftputil.py`. Not all parsers use the `time_shift` parameter. """ raise NotImplementedError("must be defined by subclass") # # Helper methods for parts of a directory listing line # def parse_unix_mode(self, mode_string): """ Return an integer from the `mode_string`, compatible with the `st_mode` value in stat results. Such a mode string may look like "drwxr-xr-x". If the mode string can't be parsed, raise an `ftputil.error.ParserError`. """ # Allow derived classes to make use of `self`. # pylint: disable=no-self-use if len(mode_string) != 10: raise ftputil.error.ParserError( "invalid mode string '{}'".format(mode_string) ) st_mode = 0 # TODO: Add support for "S" and sticky bit ("t", "T"). 
for bit in mode_string[1:10]: bit = bit != "-" st_mode = (st_mode << 1) + bit if mode_string[3] == "s": st_mode = st_mode | stat.S_ISUID if mode_string[6] == "s": st_mode = st_mode | stat.S_ISGID file_type_to_mode = { "b": stat.S_IFBLK, "c": stat.S_IFCHR, "d": stat.S_IFDIR, "l": stat.S_IFLNK, "p": stat.S_IFIFO, "s": stat.S_IFSOCK, "-": stat.S_IFREG, # Ignore types which `ls` can't make sense of (assuming the FTP # server returns listings like `ls` does). "?": 0, } file_type = mode_string[0] if file_type in file_type_to_mode: st_mode = st_mode | file_type_to_mode[file_type] else: raise ftputil.error.ParserError( "unknown file type character '{}'".format(file_type) ) return st_mode # pylint: disable=no-self-use def _as_int(self, int_string, int_description): """ Return `int_string` converted to an integer. If it can't be converted, raise a `ParserError`, using `int_description` in the error message. For example, if the integer value is a day, pass "day" for `int_description`. """ try: return int(int_string) except ValueError: raise ftputil.error.ParserError( "non-integer {} value {!r}".format(int_description, int_string) ) @staticmethod def _datetime(year, month, day, hour, minute, second): """ Return UTC `datetime.datetime` object for the given year, month, day, hour, minute and second. If there are invalid values, for example minute > 59, raise a `ParserError`. """ try: return datetime.datetime( year, month, day, hour, minute, second, tzinfo=datetime.timezone.utc ) except ValueError: invalid_datetime = ( f"{year:04d}-{month:02d}-{day:02d} " f"{hour:02d}:{minute:02d}:{second:02d}" ) raise ftputil.error.ParserError( "invalid datetime {0!r}".format(invalid_datetime) ) def parse_unix_time( self, month_abbreviation, day, year_or_time, time_shift, with_precision=False ): """ Return a floating point number, like from `time.mktime`, by parsing the string arguments `month_abbreviation`, `day` and `year_or_time`. The parameter `time_shift` is the difference "time on server" - "time on client" and is available as the `time_shift` parameter in the `parse_line` interface. If `with_precision` is true return a two-element tuple consisting of the floating point number as described in the previous paragraph and the precision of the time in seconds. The default is `False` for backward compatibility with custom parsers. The precision value takes into account that, for example, a time string like "May 26 2005" has only a precision of one day. This information is important for the `upload_if_newer` and `download_if_newer` methods in the `FTPHost` class. Times in Unix-style directory listings typically have one of these formats: - "Nov 23 02:33" (month name, day of month, time) - "May 26 2005" (month name, day of month, year) If this method can't make sense of the given arguments, it raises an `ftputil.error.ParserError`. """ try: month = self._month_numbers[month_abbreviation.lower()] except KeyError: raise ftputil.error.ParserError( "invalid month abbreviation {0!r}".format(month_abbreviation) ) day = self._as_int(day, "day") year_is_known = ":" not in year_or_time if year_is_known: # `year_or_time` is really a year. st_mtime_precision = DAY_PRECISION server_year, hour, minute = self._as_int(year_or_time, "year"), 0, 0 else: # `year_or_time` is a time hh:mm. st_mtime_precision = MINUTE_PRECISION hour, minute = year_or_time.split(":") _year, hour, minute = ( None, self._as_int(hour, "hour"), self._as_int(minute, "minute"), ) # First assume the year of the directory/file is the current year. 
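# For example, a listing time of "Dec 30 23:55" parsed in early January would lie in the future under that assumption, so the code below falls back to the previous year.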
server_now = datetime.datetime.now( datetime.timezone.utc ) + datetime.timedelta(seconds=time_shift) server_year = server_now.year # If the server datetime derived from this year seems to be in the # future, subtract one year. # # Things to consider: # # Since the server time will be rounded down to full minutes, apply # the same truncation to the presumed current server time. # # Due to possible small errors in the time setting of the server # (not only because of the parsing), there will always be a small # time window in which we can't be sure whether the parsed time is # in the future or not. Make a mistake of one second and the time # is off one year. # # To resolve this conflict, if in doubt assume that a directory or # file has just been created, not that it is by chance to the # minute one year old. # # Hence, add a small time difference that must be exceeded in order # to assume the time is in the future. This time difference is # arbitrary, but we have to assume _some_ value. if self._datetime( server_year, month, day, hour, minute, 0 ) > server_now.replace(second=0) + datetime.timedelta(seconds=120): server_year -= 1 # The time shift is the time difference the server is ahead. So to get # back to client time (UTC), subtract the time shift. The calculation # is supposed to be the same for negative time shifts; in this case we # subtract a negative time shift, i. e. add the absolute value of the # time shift to the server date time. server_utc_datetime = self._datetime( server_year, month, day, hour, minute, 0 ) - datetime.timedelta(seconds=time_shift) st_mtime = server_utc_datetime.timestamp() # If we had a datetime before the epoch, the resulting value 0.0 # doesn't tell us anything about the precision. if st_mtime < 0.0: st_mtime_precision = UNKNOWN_PRECISION st_mtime = 0.0 # if with_precision: return st_mtime, st_mtime_precision else: return st_mtime def parse_ms_time(self, date, time_, time_shift, with_precision=False): """ Return a floating point number, like from `time.mktime`, by parsing the string arguments `date` and `time_`. The parameter `time_shift` is the difference "time on server" - "time on client" and can be set as the `time_shift` parameter in the `parse_line` interface. If `with_precision` is true return a two-element tuple consisting of the floating point number as described in the previous paragraph and the precision of the time in seconds. The default is `False` for backward compatibility with custom parsers. The precision value takes into account that, for example, a time string like "10-23-2001 03:25PM" has only a precision of one minute. This information is important for the `upload_if_newer` and `download_if_newer` methods in the `FTPHost` class. Usually, the returned precision is `MINUTE_PRECISION`, except when the date is before the epoch, in which case the returned `st_mtime` value is set to 0.0 and the precision to `UNKNOWN_PRECISION`. Times in MS-style directory listings typically have the format "10-23-01 03:25PM" (month-day_of_month-two_digit_year, hour:minute, am/pm). If this method can't make sense of the given arguments, it raises an `ftputil.error.ParserError`. """ # Derived classes might want to use `self`. # pylint: disable=no-self-use # # Derived classes may need access to `time_shift`. # pylint: disable=unused-argument # # For the time being, I don't add a `with_precision` parameter as in # the MS parser because the precision for the DOS format is always a # minute and can be set in `MSParser.parse_line`. 
Should you find # yourself needing support for `with_precision` for a derived class, # please send a mail (see ftputil.txt/html). month, day, year = [ self._as_int(part, "year/month/day") for part in date.split("-") ] if year >= 1000: # We have a four-digit year, so no need for heuristics. pass elif year >= 70: year = 1900 + year else: year = 2000 + year try: hour, minute, am_pm = time_[0:2], time_[3:5], time_[5] except IndexError: raise ftputil.error.ParserError("invalid time string '{}'".format(time_)) hour, minute = (self._as_int(hour, "hour"), self._as_int(minute, "minute")) if hour == 12 and am_pm == "A": hour = 0 if hour != 12 and am_pm == "P": hour += 12 server_datetime = self._datetime(year, month, day, hour, minute, 0) client_datetime = server_datetime - datetime.timedelta(seconds=time_shift) st_mtime = client_datetime.timestamp() if st_mtime < 0.0: st_mtime_precision = UNKNOWN_PRECISION st_mtime = 0.0 else: st_mtime_precision = MINUTE_PRECISION if with_precision: return st_mtime, st_mtime_precision else: return st_mtime class UnixParser(Parser): """ `Parser` class for Unix-specific directory format. """ @staticmethod def _split_line(line): """ Split a line in metadata, nlink, user, group, size, month, day, year_or_time and name and return the result as an nine-element list of these values. If the name is a link, it will be encoded as a string "link_name -> link_target". """ # This method encapsulates the recognition of an unusual Unix format # variant (see ticket http://ftputil.sschwarzer.net/trac/ticket/12 ). line_parts = line.split() FIELD_COUNT_WITHOUT_USERID = 8 FIELD_COUNT_WITH_USERID = FIELD_COUNT_WITHOUT_USERID + 1 if len(line_parts) < FIELD_COUNT_WITHOUT_USERID: # No known Unix-style format raise ftputil.error.ParserError("line '{}' can't be parsed".format(line)) # If we have a valid format (either with or without user id field), the # field with index 5 is either the month abbreviation or a day. try: int(line_parts[5]) except ValueError: # Month abbreviation, "invalid literal for int" line_parts = line.split(None, FIELD_COUNT_WITH_USERID - 1) else: # Day line_parts = line.split(None, FIELD_COUNT_WITHOUT_USERID - 1) USER_FIELD_INDEX = 2 line_parts.insert(USER_FIELD_INDEX, None) return line_parts def parse_line(self, line, time_shift=0.0): """ Return a `StatResult` instance corresponding to the given text line. The `time_shift` value is needed to determine to which year a datetime without an explicit year belongs. If the line can't be parsed, raise a `ParserError`. """ # The local variables are rather simple. # pylint: disable=too-many-locals try: ( mode_string, nlink, user, group, size, month, day, year_or_time, name, ) = self._split_line(line) # We can get a `ValueError` here if the name is blank (see ticket #69). # This is a strange use case, but at least we should raise the # exception the docstring mentions. except ValueError as exc: raise ftputil.error.ParserError(str(exc)) # st_mode st_mode = self.parse_unix_mode(mode_string) # st_ino, st_dev, st_nlink, st_uid, st_gid, st_size, st_atime st_ino = None st_dev = None st_nlink = int(nlink) st_uid = user st_gid = group st_size = int(size) st_atime = None # st_mtime st_mtime, st_mtime_precision = self.parse_unix_time( month, day, year_or_time, time_shift, with_precision=True ) # st_ctime st_ctime = None # st_name if name.count(" -> ") > 1: # If we have more than one arrow we can't tell where the link name # ends and the target name starts. 
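# For example, a name field like "a -> b -> c" could mean a link "a" with target "b -> c" or a link "a -> b" with target "c".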
raise ftputil.error.ParserError( '''name '{}' contains more than one "->"'''.format(name) ) elif name.count(" -> ") == 1: st_name, st_target = name.split(" -> ") else: st_name, st_target = name, None stat_result = StatResult( ( st_mode, st_ino, st_dev, st_nlink, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime, ) ) # These attributes are kind of "half-official". I'm not sure whether # they should be used by ftputil client code. # pylint: disable=protected-access stat_result._st_mtime_precision = st_mtime_precision stat_result._st_name = st_name stat_result._st_target = st_target return stat_result class MSParser(Parser): """ `Parser` class for MS-specific directory format. """ def parse_line(self, line, time_shift=0.0): """ Return a `StatResult` instance corresponding to the given text line from a FTP server which emits "Microsoft format" (see end of file). If the line can't be parsed, raise a `ParserError`. The parameter `time_shift` isn't used in this method but is listed for compatibility with the base class. """ # The local variables are rather simple. # pylint: disable=too-many-locals try: date, time_, dir_or_size, name = line.split(None, 3) except ValueError: # "unpack list of wrong size" raise ftputil.error.ParserError("line '{}' can't be parsed".format(line)) # st_mode # Default to read access only; in fact, we can't tell. st_mode = 0o400 if dir_or_size == "": st_mode = st_mode | stat.S_IFDIR else: st_mode = st_mode | stat.S_IFREG # st_ino, st_dev, st_nlink, st_uid, st_gid st_ino = None st_dev = None st_nlink = None st_uid = None st_gid = None # st_size if dir_or_size != "": try: st_size = int(dir_or_size) except ValueError: raise ftputil.error.ParserError("invalid size {}".format(dir_or_size)) else: st_size = None # st_atime st_atime = None # st_mtime st_mtime, st_mtime_precision = self.parse_ms_time( date, time_, time_shift, with_precision=True ) # st_ctime st_ctime = None stat_result = StatResult( ( st_mode, st_ino, st_dev, st_nlink, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime, ) ) # These attributes are kind of "half-official". I'm not sure whether # they should be used by ftputil client code. # pylint: disable=protected-access # _st_name and _st_target stat_result._st_name = name stat_result._st_target = None # mtime precision in seconds stat_result._st_mtime_precision = st_mtime_precision return stat_result # # Stat'ing operations for files on an FTP server # class _Stat: """ Methods for stat'ing directories, links and regular files. """ # pylint: disable=protected-access def __init__(self, host): self._host = host self._path = host.path # Use the Unix directory parser by default. self._parser = UnixParser() # Allow one chance to switch to another parser if the default doesn't # work. self._allow_parser_switching = True # Cache only lstat results. `stat` works locally on `lstat` results. self._lstat_cache = ftputil.stat_cache.StatCache() def _host_dir(self, path): """ Return a list of lines, as fetched by FTP's `LIST` command, when applied to `path`. """ return self._host._dir(path) def _stat_results_from_dir(self, path): """ Yield stat results extracted from the directory listing `path`. Omit the special entries for the directory itself and its parent directory. """ lines = self._host_dir(path) # `cache` is the "high-level" `StatCache` object whereas `cache._cache` # is the "low-level" `LRUCache` object. cache = self._lstat_cache # Auto-grow cache if the cache up to now can't hold as many entries as # there are in the directory `path`. 
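# For example, if the cache currently holds at most 300 entries and the listing has 365 lines, the cache is resized to ceil(1.1 * 365) = 402 entries.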
if cache._enabled and len(lines) >= cache._cache.size: new_size = int(math.ceil(1.1 * len(lines))) cache.resize(new_size) # Yield stat results from lines. for line in lines: if self._parser.ignores_line(line): continue # Although for a `listdir` call we're only interested in the names, # use the `time_shift` parameter to store the correct timestamp # values in the cache. stat_result = self._parser.parse_line(line, self._host.time_shift()) # Skip entries "." and "..". if stat_result._st_name in [self._host.curdir, self._host.pardir]: continue loop_path = self._path.join(path, stat_result._st_name) # No-op if cache is disabled. cache[loop_path] = stat_result yield stat_result # The methods `listdir`, `lstat` and `stat` come in two variants. The # methods `_real_listdir`, `_real_lstat` and `_real_stat` use the currently # set parser to get the directories/files of the requested directory, the # lstat result or the stat result, respectively. # # Additionally, we have the methods `_listdir`, `_lstat` and `_stat`, which # wrap the above `_real_*` methods. _For example_, `_listdir` calls # `_real_listdir`. If `_real_listdir` can't parse the directory lines and a # parser hasn't been fixed yet, `_listdir` switches to the MS parser and # calls `_real_listdir` again. # # There are two important conditions to watch out for: # # - If the user explicitly set a different parser with # `FTPHost.set_parser`, parser switching is disabled after that and # `_listdir` etc. only call "their" method once with the fixed parser. # # - A `_real_*` call will fail if there's no directory line at all in the # given directory. In that case, we can't tell whether the default parser # was appropriate or not. Hence parser switching will still be allowed # until we encounter a directory that has directories/files/links in it. def _real_listdir(self, path): """ Return a list of directories, files etc. in the directory named `path`. Like `os.listdir` the returned list elements have the type of the path argument. If the directory listing from the server can't be parsed, raise a `ParserError`. """ # We _can't_ put this check into `FTPHost._dir`; see its docstring. path = self._path.abspath(path) # `listdir` should only be allowed for directories and links to them. if not self._path.isdir(path): raise ftputil.error.PermanentError( "550 {}: no such directory or wrong directory parser used".format(path) ) # Set up for `for` loop. names = [] for stat_result in self._stat_results_from_dir(path): st_name = stat_result._st_name names.append(st_name) return names def _real_lstat(self, path, _exception_for_missing_path=True): """ Return an object similar to that returned by `os.lstat`. If the directory listing from the server can't be parsed, raise a `ParserError`. If the directory can be parsed and the `path` is not found, raise a `PermanentError`. That means that if the directory containing `path` can't be parsed we get a `ParserError`, independent on the presence of `path` on the server. (`_exception_for_missing_path` is an implementation aid and _not_ intended for use by ftputil clients.) """ path = self._path.abspath(path) # If the path is in the cache, return the lstat result. if path in self._lstat_cache: return self._lstat_cache[path] # Note: (l)stat works by going one directory up and parsing the output # of an FTP `LIST` command. Unfortunately, it is not possible to do # this for the root directory `/`. 
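# For example, `_real_lstat("/some/dir/file.txt")` parses the listing of "/some/dir" and looks for an entry named "file.txt"; for "/" there is no parent directory to list, hence the special case below.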
if path == "/": raise ftputil.error.RootDirError("can't stat remote root directory") dirname, basename = self._path.split(path) # If even the directory doesn't exist and we don't want the exception, # treat it the same as if the path wasn't found in the directory's # contents (compare below). The use of `isdir` here causes a recursion # but that should be ok because that will at the latest stop when we've # gotten to the root directory. if not self._path.isdir(dirname) and not _exception_for_missing_path: return None # Loop through all lines of the directory listing. We probably won't # need all lines for the particular path but we want to collect as many # stat results in the cache as possible. lstat_result_for_path = None # FIXME: Here we try to list the contents of `dirname` even though the # above `isdir` call might/could have shown that the directory doesn't # exist. This may be related to ticket #108. That said, we may need to # consider virtual directories here (see tickets #86 / #87). for stat_result in self._stat_results_from_dir(dirname): # Needed to work without cache or with disabled cache. if stat_result._st_name == basename: lstat_result_for_path = stat_result if lstat_result_for_path is not None: return lstat_result_for_path # Path was not found during the loop. if _exception_for_missing_path: # TODO: Use FTP `LIST` command on the file to implicitly use the # usual status code of the server for missing files (450 vs. 550). raise ftputil.error.PermanentError( "550 {}: no such file or directory".format(path) ) else: # Be explicit. Returning `None` is a signal for # `_Path.exists/isfile/isdir/islink` that the path was not found. # If we would raise an exception, there would be no distinction # between a missing path or a more severe error in the code above. return None def _real_stat(self, path, _exception_for_missing_path=True): """ Return info from a "stat" call on `path`. If the directory containing `path` can't be parsed, raise a `ParserError`. If the listing can be parsed but the `path` can't be found, raise a `PermanentError`. Also raise a `PermanentError` if there's an endless (cyclic) chain of symbolic links "behind" the `path`. (`_exception_for_missing_path` is an implementation aid and _not_ intended for use by ftputil clients.) """ # Save for error message. original_path = path # Most code in this method is used to detect recursive link structures. visited_paths = set() while True: # Stat the link if it is one, else the file/directory. lstat_result = self._real_lstat(path, _exception_for_missing_path) if lstat_result is None: return None # If the file is not a link, the `stat` result is the same as the # `lstat` result. if not stat.S_ISLNK(lstat_result.st_mode): return lstat_result # If we stat'ed a link, calculate a normalized path for the file # the link points to. dirname, _ = self._path.split(path) path = self._path.join(dirname, lstat_result._st_target) path = self._path.abspath(self._path.normpath(path)) # Check for cyclic structure. if path in visited_paths: # We had seen this path already. raise ftputil.error.RecursiveLinksError( "recursive link structure detected for remote path '{}'".format( original_path ) ) # Remember the path we have encountered. visited_paths.add(path) def __call_with_parser_retry(self, method, *args, **kwargs): """ Call `method` with the `args` and `kwargs` once. If that results in a `ParserError` and only one parser has been used yet, try the other parser. If that still fails, propagate the `ParserError`. 
""" # Do _not_ set `_allow_parser_switching` in a `finally` clause! This # would cause a `PermanentError` due to a not-found file in an empty # directory to finally establish the parser - which is wrong. try: result = method(*args, **kwargs) # If a `listdir` call didn't find anything, we can't say anything # about the usefulness of the parser. if (method is not self._real_listdir) and result: self._allow_parser_switching = False return result except ftputil.error.ParserError: if self._allow_parser_switching: self._allow_parser_switching = False self._parser = MSParser() return method(*args, **kwargs) else: raise # Client code should never use these methods, but only the corresponding # methods without the leading underscore in the `FTPHost` class. def _listdir(self, path): """ Return a list of items in `path`. Raise a `PermanentError` if the path doesn't exist, but maybe raise other exceptions depending on the state of the server (e. g. timeout). """ return self.__call_with_parser_retry(self._real_listdir, path) def _lstat(self, path, _exception_for_missing_path=True): """ Return a `StatResult` without following links. Raise a `PermanentError` if the path doesn't exist, but maybe raise other exceptions depending on the state of the server (e. g. timeout). """ return self.__call_with_parser_retry( self._real_lstat, path, _exception_for_missing_path ) def _stat(self, path, _exception_for_missing_path=True): """ Return a `StatResult` with following links. Raise a `PermanentError` if the path doesn't exist, but maybe raise other exceptions depending on the state of the server (e. g. timeout). """ return self.__call_with_parser_retry( self._real_stat, path, _exception_for_missing_path ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/ftputil/stat_cache.py0000664000175000017500000001302714170605001016376 0ustar00schwaschwa# Copyright (C) 2006-2018, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ ftp_stat_cache.py - cache for (l)stat data """ import time import ftputil.error import ftputil.lrucache # This module shouldn't be used by clients of the ftputil library. __all__ = [] class StatCache: """ Implement an LRU (least-recently-used) cache. `StatCache` objects have an attribute `max_age`. After this duration after _setting_ it a cache entry will expire. For example, if you code my_cache = StatCache() my_cache.max_age = 10 my_cache["/home"] = ... the value my_cache["/home"] can be retrieved for 10 seconds. After that, the entry will be treated as if it had never been in the cache and should be fetched again from the remote host. Note that the `__len__` method does no age tests and thus may include some or many already expired entries. """ # Default number of cache entries _DEFAULT_CACHE_SIZE = 5000 def __init__(self): # Can be reset with method `resize` self._cache = ftputil.lrucache.LRUCache(self._DEFAULT_CACHE_SIZE) # Never expire self.max_age = None self.enable() def enable(self): """ Enable storage of stat results. """ self._enabled = True def disable(self): """ Disable the cache. Further storage attempts with `__setitem__` won't have any visible effect. Disabling the cache only effects new storage attempts. Values stored before calling `disable` can still be retrieved unless disturbed by a `resize` command or normal cache expiration. """ # `_enabled` is set via calling `enable` in the constructor. 
# pylint: disable=attribute-defined-outside-init self._enabled = False def resize(self, new_size): """ Set number of cache entries to the integer `new_size`. If the new size is smaller than the current cache size, relatively long-unused elements will be removed. """ self._cache.size = new_size def _age(self, path): """ Return the age of a cache entry for `path` in seconds. If the path isn't in the cache, raise a `CacheMissError`. """ try: return time.time() - self._cache.mtime(path) except ftputil.lrucache.CacheKeyError: raise ftputil.error.CacheMissError( "no entry for path {} in cache".format(path) ) def clear(self): """ Clear (invalidate) all cache entries. """ self._cache.clear() def invalidate(self, path): """ Invalidate the cache entry for the absolute `path` if present. After that, the stat result data for `path` can no longer be retrieved, as if it had never been stored. If no stat result for `path` is in the cache, do _not_ raise an exception. """ # XXX: To be 100 % sure, this should be `host.sep`, but I don't want to # introduce a reference to the `FTPHost` object for only that purpose. assert path.startswith("/"), "{} must be an absolute path".format(path) try: del self._cache[path] except ftputil.lrucache.CacheKeyError: # Ignore errors pass def __getitem__(self, path): """ Return the stat entry for the `path`. If there's no stored stat entry or the cache is disabled, raise `CacheMissError`. """ if not self._enabled: raise ftputil.error.CacheMissError("cache is disabled") # Possibly raise a `CacheMissError` in `_age` if (self.max_age is not None) and (self._age(path) > self.max_age): self.invalidate(path) raise ftputil.error.CacheMissError( "entry for path {} has expired".format(path) ) else: # XXX: I don't know if this may raise a `CacheMissError` in case of # race conditions. I prefer robust code. try: return self._cache[path] except ftputil.lrucache.CacheKeyError: raise ftputil.error.CacheMissError( "entry for path {} not found".format(path) ) def __setitem__(self, path, stat_result): """ Put the stat data for the absolute `path` into the cache, unless it's disabled. """ assert path.startswith("/") if not self._enabled: return self._cache[path] = stat_result def __contains__(self, path): """ Support for the `in` operator. Return a true value, if data for `path` is in the cache, else return a false value. """ try: # Implicitly do an age test which may raise `CacheMissError`. self[path] except ftputil.error.CacheMissError: return False else: return True # # The following methods are only intended for debugging! # def __len__(self): """ Return the number of entries in the cache. Note that this may include some (or many) expired entries. """ return len(self._cache) def __str__(self): """ Return a string representation of the cache contents. """ lines = [] for key in sorted(self._cache): lines.append("{}: {}".format(key, self[key])) return "\n".join(lines) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/ftputil/sync.py0000664000175000017500000001335414170605001015257 0ustar00schwaschwa# Copyright (C) 2007-2018, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ Tools for syncing combinations of local and remote directories. *** WARNING: This is an unfinished in-development version! 
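# Usage sketch for the `StatCache` defined above. The `stat_cache` module is
# not part of ftputil's public API (see the module comment), so this is for
# illustration only; paths, the stored value and the timings are made up.
import time
import ftputil.error
from ftputil.stat_cache import StatCache

cache = StatCache()
cache.max_age = 2                           # entries expire 2 s after being set
cache["/home/file"] = "dummy stat result"   # normally a `StatResult` object
assert "/home/file" in cache
time.sleep(3)
try:
    cache["/home/file"]
except ftputil.error.CacheMissError:
    print("entry expired")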
""" # Sync combinations: # - remote -> local (download) # - local -> remote (upload) # - remote -> remote # - local -> local (maybe implicitly possible due to design, but not targeted) import os import shutil from ftputil import FTPHost import ftputil.error __all__ = ["FTPHost", "LocalHost", "Syncer"] # Used for copying file objects; value is 64 KB. CHUNK_SIZE = 64 * 1024 class LocalHost: """ Provide an API for local directories and files so we can use the same code as for `FTPHost` instances. """ def open(self, path, mode): """ Return a Python file object for file name `path`, opened in mode `mode`. """ # This is the built-in `open` function, not `os.open`! return open(path, mode) def time_shift(self): """ Return the time shift value (see methods `set_time_shift` and `time_shift` in class `FTPHost` for a definition). By definition, the value is zero for local file systems. """ return 0.0 def __getattr__(self, attr): return getattr(os, attr) class Syncer: """ Control synchronization between combinations of local and remote directories and files. """ def __init__(self, source, target): """ Init the `FTPSyncer` instance. Each of `source` and `target` is either an `FTPHost` or a `LocalHost` object. The source and target directories, resp. have to be set with the `chdir` command before passing them in. The semantics is so that the items under the source directory will show up under the target directory after the synchronization (unless there's an error). """ self._source = source self._target = target def _mkdir(self, target_dir): """ Try to create the target directory `target_dir`. If it already exists, don't do anything. If the directory is present but it's actually a file, raise a `SyncError`. """ # TODO: Handle setting of target mtime according to source mtime # (beware of rootdir anomalies; try to handle them as well). if self._target.path.isfile(target_dir): raise ftputil.error.SyncError( "target dir '{}' is actually a file".format(target_dir) ) # Deliberately use an `isdir` test instead of `try/except`. The latter # approach might mask other errors we want to see, e. g. insufficient # permissions. if not self._target.path.isdir(target_dir): self._target.mkdir(target_dir) def _sync_file(self, source_file, target_file): # XXX: This duplicates code from `FTPHost._copyfileobj`. Maybe # implement the upload and download methods in terms of `_sync_file`, # or maybe not? # TODO: Handle `IOError`s # TODO: Handle conditional copy # TODO: Handle setting of target mtime according to source mtime # (beware of rootdir anomalies; try to handle them as well). source = self._source.open(source_file, "rb") try: target = self._target.open(target_file, "wb") try: shutil.copyfileobj(source, target, length=CHUNK_SIZE) finally: target.close() finally: source.close() def _fix_sep_for_target(self, path): """ Return the string `path` with appropriate path separators for the target file system. """ return path.replace(self._source.sep, self._target.sep) def _sync_tree(self, source_dir, target_dir): """ Synchronize the source and the target directory tree by updating the target to match the source as far as possible. 
Current limitations: - _don't_ delete items which are on the target path but not on the source path - files are always copied, the modification timestamps are not compared - all files are copied in binary mode, never in ASCII/text mode - incomplete error handling """ self._mkdir(target_dir) for dirpath, dir_names, file_names in self._source.walk(source_dir): for dir_name in dir_names: inner_source_dir = self._source.path.join(dirpath, dir_name) inner_target_dir = inner_source_dir.replace(source_dir, target_dir, 1) inner_target_dir = self._fix_sep_for_target(inner_target_dir) self._mkdir(inner_target_dir) for file_name in file_names: source_file = self._source.path.join(dirpath, file_name) target_file = source_file.replace(source_dir, target_dir, 1) target_file = self._fix_sep_for_target(target_file) self._sync_file(source_file, target_file) def sync(self, source_path, target_path): """ Synchronize `source_path` and `target_path` (both are strings, each denoting a directory or file path), i. e. update the target path so that it's a copy of the source path. This method handles both directory trees and single files. """ # TODO: Handle making of missing intermediate directories. source_path = self._source.path.abspath(source_path) target_path = self._target.path.abspath(target_path) if self._source.path.isfile(source_path): self._sync_file(source_path, target_path) else: self._sync_tree(source_path, target_path) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/ftputil/tool.py0000664000175000017500000000470414170605001015257 0ustar00schwaschwa# Copyright (C) 2013-2021, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ tool.py - helper code """ import os __all__ = ["same_string_type_as", "as_str", "as_str_path", "raise_for_empty_path"] def same_string_type_as(type_source, string, encoding): """ Return a string of the same type as `type_source` with the content from `string`. If the `type_source` and `string` don't have the same type, use `encoding` to encode or decode, whatever operation is needed. """ if isinstance(type_source, bytes) and isinstance(string, str): return string.encode(encoding) elif isinstance(type_source, str) and isinstance(string, bytes): return string.decode(encoding) else: return string def as_str(string, encoding): """ Return the argument `string` converted to a unicode string if it's a `bytes` object. Otherwise just return the string. If a conversion is necessary, use `encoding`. If `string` is neither `str` nor `bytes`, raise a `TypeError`. """ if isinstance(string, bytes): return string.decode(encoding) elif isinstance(string, str): return string else: raise TypeError("`as_str` argument must be `bytes` or `str`") def as_str_path(path, encoding): """ Return the argument `path` converted to a unicode string if it's a `bytes` object. Otherwise just return the string. If a conversion is necessary, use `encoding`. Instead of passing a `bytes` or `str` object for `path`, you can pass a `PathLike` object that can be converted to a `bytes` or `str` object. If the `path` can't be converted to a `bytes` or `str`, a `TypeError` is raised. """ path = os.fspath(path) return as_str(path, encoding) def raise_for_empty_path(path, path_argument_name="path"): """ Raise an exception of class `exception_class` if `path` is an empty string (text or bytes). """ # Avoid cyclic import. import ftputil.error # Don't handle `pathlib.Path("")`. 
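# A few concrete input/output pairs (values made up) for the string helpers
# `same_string_type_as` and `as_str` defined above:
from ftputil.tool import as_str, same_string_type_as

assert same_string_type_as(b"anything", "ä", "UTF-8") == b"\xc3\xa4"
assert same_string_type_as("anything", b"\xc3\xa4", "UTF-8") == "ä"
assert as_str(b"abc", "latin-1") == "abc"
assert as_str("abc", "latin-1") == "abc"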
This immediately results in `Path(".")`, # so we can't detect it anyway. Regarding bytes, `Path(b"")` results in a # `TypeError`. if path in ["", b""]: if path_argument_name is None: message = "path argument is empty" else: message = f"path argument `{path_argument_name}` is empty" raise ftputil.error.FTPIOError(message) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704559215.0 ftputil-5.1.0/ftputil/version.py0000664000175000017500000000105714546301157016001 0ustar00schwaschwa# Copyright (C) 2006-2018, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ Provide version information about ftputil and the runtime environment. """ import sys # ftputil version number; substituted by `make patch` __version__ = "5.1.0" _ftputil_version = __version__ _python_version = sys.version.split()[0] _python_platform = sys.platform version_info = "ftputil {}, Python {} ({})".format( _ftputil_version, _python_version, _python_platform ) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1704559216.340883 ftputil-5.1.0/ftputil.egg-info/0000755000175000017500000000000014546301160015421 5ustar00schwaschwa././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704559215.0 ftputil-5.1.0/ftputil.egg-info/PKG-INFO0000644000175000017500000000274014546301157016527 0ustar00schwaschwaMetadata-Version: 2.1 Name: ftputil Version: 5.1.0 Summary: High-level FTP client library (virtual file system and more) Home-page: https://ftputil.sschwarzer.net/ Download-URL: http://ftputil.sschwarzer.net/trac/attachment/wiki/Download/ftputil-5.1.0.tar.gz?format=raw Author: Stefan Schwarzer Author-email: sschwarzer@sschwarzer.net License: Open source (revised BSD license) Keywords: FTP,client,library,virtual file system Platform: Pure Python Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Other Environment Classifier: Intended Audience :: Developers Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 :: Only Classifier: Topic :: Internet :: File Transfer Protocol (FTP) Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: System :: Filesystems Requires-Python: >=3.6 License-File: LICENSE ftputil is a high-level FTP client library for the Python programming language. ftputil implements a virtual file system for accessing FTP servers, that is, it can generate file-like objects for remote files. The library supports many functions similar to those in the os, os.path and shutil modules. ftputil has convenience functions for conditional uploads and downloads, and handles FTP clients and servers in different timezones. 
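# Quick look at the version information module (`ftputil/version.py`) shown
# above; the printed values are whatever your installation reports, the ones
# in the comments are only examples.
import ftputil.version

print(ftputil.version.__version__)    # e.g. "5.1.0"
print(ftputil.version.version_info)   # e.g. "ftputil 5.1.0, Python 3.11.4 (linux)"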
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704559216.0 ftputil-5.1.0/ftputil.egg-info/SOURCES.txt0000644000175000017500000000140214546301160017302 0ustar00schwaschwaLICENSE MANIFEST.in README.md VERSION setup.py ftputil/__init__.py ftputil/error.py ftputil/file.py ftputil/file_transfer.py ftputil/host.py ftputil/lrucache.py ftputil/path.py ftputil/path_encoding.py ftputil/session.py ftputil/stat.py ftputil/stat_cache.py ftputil/sync.py ftputil/tool.py ftputil/version.py ftputil.egg-info/PKG-INFO ftputil.egg-info/SOURCES.txt ftputil.egg-info/dependency_links.txt ftputil.egg-info/top_level.txt test/__init__.py test/scripted_session.py test/test_base.py test/test_error.py test/test_file.py test/test_file_transfer.py test/test_host.py test/test_path.py test/test_public_servers.py test/test_real_ftp.py test/test_session.py test/test_stat.py test/test_stat_cache.py test/test_sync.py test/test_tool.py test/test_with_statement.py././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704559215.0 ftputil-5.1.0/ftputil.egg-info/dependency_links.txt0000644000175000017500000000000114546301157021475 0ustar00schwaschwa ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704559216.0 ftputil-5.1.0/ftputil.egg-info/top_level.txt0000644000175000017500000000001014546301160020142 0ustar00schwaschwaftputil ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1704559216.343883 ftputil-5.1.0/setup.cfg0000644000175000017500000000004614546301160014061 0ustar00schwaschwa[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1649533218.0 ftputil-5.1.0/setup.py0000775000175000017500000000426614224360442013770 0ustar00schwaschwa#! /usr/bin/env python # Copyright (C) 2003-2022, Stefan Schwarzer # See the file LICENSE for licensing terms. """ setup.py - installation script for Python distutils """ import os import sys from distutils import core _name = "ftputil" _package = "ftputil" _version = open("VERSION").read().strip() core.setup( # Installation data name=_name, version=_version, packages=[_package], package_dir={_package: _package}, # Metadata author="Stefan Schwarzer", author_email="sschwarzer@sschwarzer.net", url="https://ftputil.sschwarzer.net/", description="High-level FTP client library (virtual file system and more)", keywords="FTP, client, library, virtual file system", license="Open source (revised BSD license)", platforms=["Pure Python"], # See https://packaging.python.org/guides/distributing-packages-using-setuptools/#python-requires python_requires=">=3.6", long_description="""\ ftputil is a high-level FTP client library for the Python programming language. ftputil implements a virtual file system for accessing FTP servers, that is, it can generate file-like objects for remote files. The library supports many functions similar to those in the os, os.path and shutil modules. 
ftputil has convenience functions for conditional uploads and downloads, and handles FTP clients and servers in different timezones.""", download_url="http://ftputil.sschwarzer.net/trac/attachment/wiki/Download/" "{}-{}.tar.gz?format=raw".format(_name, _version), classifiers=[ # Commented-out for beta release "Development Status :: 5 - Production/Stable", #"Development Status :: 4 - Beta", "Environment :: Other Environment", "Intended Audience :: Developers", "Intended Audience :: System Administrators", "License :: OSI Approved :: BSD License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 3 :: Only", "Topic :: Internet :: File Transfer Protocol (FTP)", "Topic :: Software Development :: Libraries :: Python Modules", "Topic :: System :: Filesystems", ], ) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1704559216.343883 ftputil-5.1.0/test/0000755000175000017500000000000014546301160013217 5ustar00schwaschwa././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/test/__init__.py0000664000175000017500000000000014170605001015312 0ustar00schwaschwa././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/test/scripted_session.py0000664000175000017500000002255314170605001017154 0ustar00schwaschwa# Copyright (C) 2018-2021, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import sys import unittest.mock import ftputil.path_encoding __all__ = ["Call", "factory"] class Call: def __init__(self, method_name, *, args=None, kwargs=None, result=None): self.method_name = method_name self.result = result self.args = args self.kwargs = kwargs def __repr__(self): return ( "{0.__class__.__name__}(" "method_name={0.method_name!r}, " "result={0.result!r}, " "args={0.args!r}, " "kwargs={0.kwargs!r})".format(self) ) def check_call(self, method_name, args=None, kwargs=None): # TODO: Mention printing in the docstring. # TODO: Describe how the comparison is made. """ Check the method name, args and kwargs from this `Call` object against the method name, args and kwargs from the system under test. Raise an `AssertionError` if there's a mismatch. """ print( " Call from session script: {} | {!r} | {!r}".format( self.method_name, self.args, self.kwargs ) ) print( " Call from system under test: {} | {!r} | {!r}".format( method_name, args, kwargs ) ) def compare(value_name, script_value, sut_value): if script_value is not None: try: assert script_value == sut_value except AssertionError: print( " Mismatch for `{}`: {!r} != {!r}".format( value_name, script_value, sut_value ) ) raise compare("method_name", self.method_name, method_name) compare("args", self.args, args) compare("kwargs", self.kwargs, kwargs) @staticmethod def _is_exception_class(obj): """ Return `True` if `obj` is an exception class, else `False`. """ try: return issubclass(obj, Exception) except TypeError: # TypeError: issubclass() arg 1 must be a class return False def __call__(self): """ Simulate call, returning the result or raising the exception. """ if isinstance(self.result, Exception) or self._is_exception_class(self.result): raise self.result else: return self.result class ScriptedSession: """ "Scripted" `ftplib.FTP`-like class for testing. To avoid actual input/output over sockets or files, specify the values that should be returned by the class's methods. 
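# Tiny illustration of the `Call` objects defined above (values are made up).
# A `Call` both checks an incoming invocation against the scripted expectation
# and produces the scripted result, raising it instead if it's an exception or
# exception class.
import ftplib
from test.scripted_session import Call

pwd_call = Call("pwd", result="/home/test")
pwd_call.check_call("pwd", (), {})    # passes; a mismatch raises AssertionError
assert pwd_call() == "/home/test"

failing_call = Call("cwd", args=("/missing",), result=ftplib.error_perm)
# failing_call() would raise `ftplib.error_perm` here.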
The class is instantiated with a `script` argument. This is a list of `Call` objects where each object specifies the name of the `ftplib.FTP` method that is expected to be called and what the method should return. If the value is an exception, it will be raised, not returned. In case the method returns a socket (like `transfercmd`), the return value to be specified in the `Call` instance is the content of the underlying socket file. The advantage of the approach of this class over the use of `unittest.mock.Mock` objects is that the sequence of calls is clearly visible. With `Mock` objects, the developer must keep in mind all the calls when specifying return values or side effects for the mock methods. """ # Class-level counter to enumerate `ScriptedSession`s. This makes it # possible to make the output even more compact. Additionally, it's easier # to distinguish numbers like 1, 2, etc. than hexadecimal ids. _session_count = 0 encoding = ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING @classmethod def reset_session_count(cls): cls._session_count = 0 def __init__(self, script): self.script = script # `File.close` accesses the session `sock` object to set and reset the # timeout. `sock` itself is never _called_ though, so it doesn't make # sense to create a `sock` _call_. self.sock = unittest.mock.Mock(name="socket_attribute") # Index into `script`, the list of `Call` objects self._call_index = 0 self.__class__._session_count += 1 self._session_count = self.__class__._session_count # Always expect an entry for the constructor. init_call = self._next_script_call("__init__") # The constructor isn't supposed to return anything. The only reason to # call it here is to raise an exception if that was specified in the # `script`. init_call() def __str__(self): return "{} {}".format(self.__class__.__name__, self._session_count) def _next_script_call(self, requested_attribute): """ Return next `Call` object. """ print(self, "in `_next_script_call`") try: call = self.script[self._call_index] except IndexError: print(" *** Ran out of `Call` objects for this session {!r}".format(self)) print(" Requested attribute was {!r}".format(requested_attribute)) raise self._call_index += 1 print(self, f"next call: {call!r}") return call def __getattr__(self, attribute_name): script_call = self._next_script_call(attribute_name) def dummy_method(*args, **kwargs): print(self, "in `__getattr__`") script_call.check_call(attribute_name, args, kwargs) return script_call() return dummy_method # ---------------------------------------------------------------------- # `ftplib.FTP` methods that shouldn't be executed with the default # processing in `__getattr__` def dir(self, path, callback): """ Call the `callback` for each line in the multiline string `call.result`. """ script_call = self._next_script_call("dir") # Check only the path. This requires that the corresponding `Call` # object also solely specifies the path as `args`. script_call.check_call("dir", (path,), None) # Give `dir` the chance to raise an exception if one was specified in # the `Call`'s `result` argument. call_result = script_call() for line in call_result.splitlines(): callback(line) def ntransfercmd(self, cmd, rest=None): """ Simulate the `ftplib.FTP.ntransfercmd` call. `ntransfercmd` returns a tuple of a socket and a size argument. The `result` value given when constructing an `ntransfercmd` call specifies an `io.TextIO` or `io.BytesIO` value to be used as the `Socket.makefile` result. 
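# The minimal pattern used by most tests further below: describe the expected
# `ftplib.FTP` calls as a script and let `test_base.ftp_host_factory` build an
# `FTPHost` on top of a `ScriptedSession`. (The `ScriptedSession` methods that
# need special handling continue below.)
from test import scripted_session, test_base

Call = scripted_session.Call
script = [Call("__init__"), Call("pwd", result="/"), Call("close")]
with test_base.ftp_host_factory(scripted_session.factory(script)) as host:
    assert host.getcwd() == "/"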
""" script_call = self._next_script_call("ntransfercmd") script_call.check_call("ntransfercmd", (cmd, rest), None) # Give `ntransfercmd` the chance to raise an exception if one was # specified in the `Call`'s `result` argument. call_result = script_call() mock_socket = unittest.mock.Mock(name="socket") mock_socket.makefile.return_value = call_result # Return `None` for size. The docstring of `ftplib.FTP.ntransfercmd` # says that's a possibility. # TODO: Use a sensible `size` value later if it turns out we need it. return mock_socket, None def transfercmd(self, cmd, rest=None): """ Simulate the `ftplib.FTP.transfercmd` call. `transfercmd` returns a socket. The `result` value given when constructing an `transfercmd` call specifies an `io.TextIO` or `io.BytesIO` value to be used as the `Socket.makefile` result. """ script_call = self._next_script_call("transfercmd") script_call.check_call("transfercmd", (cmd, rest), None) # Give `transfercmd` the chance to raise an exception if one was # specified in the `Call`'s `result` argument. call_result = script_call() mock_socket = unittest.mock.Mock(name="socket") mock_socket.makefile.return_value = call_result return mock_socket class MultisessionFactory: """ Return a session factory using the scripted data from the given "scripts" for each consecutive call ("creation") of a factory. Example: host = ftputil.FTPHost(host, user, password, session_factory=scripted_session.factory(script1, script2)) When the `session_factory` is "instantiated" for the first time by `FTPHost._make_session`, the factory object will use the behavior described by the script `script1`. When the `session_factory` is "instantiated" a second time, the factory object will use the behavior described by the script `script2`. """ def __init__(self, *scripts): ScriptedSession.reset_session_count() self._scripts = iter(scripts) self.scripted_sessions = [] def __call__(self, host, user, password): """ Call the factory. This is equivalent to the constructor of the session (e. g. `ftplib.FTP` in a real application). """ script = next(self._scripts) scripted_session = ScriptedSession(script) self.scripted_sessions.append(scripted_session) return scripted_session factory = MultisessionFactory ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/test/test_base.py0000664000175000017500000000664314170605001015547 0ustar00schwaschwa# Copyright (C) 2003-2019, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import io import ftputil # Since `io.BytesIO` and `io.StringIO` are built-in, they can't be patched with # `unittest.mock.patch`. However, derived classes can be mocked. Mocking is # useful to test the arguments of `write` calls, i. e. whether the expected # data was written. class MockableBytesIO(io.BytesIO): pass class MockableStringIO(io.StringIO): pass # Factory to produce `FTPHost`-like classes from a given `FTPHost` class and # (usually) a given `MockSession` class. def ftp_host_factory(session_factory, ftp_host_class=ftputil.FTPHost): return ftp_host_class( "dummy_host", "dummy_user", "dummy_password", session_factory=session_factory ) def dir_line( mode_string="-r--r--r--", nlink=1, user="dummy_user", group="dummy_group", size=512, date_=None, datetime_=None, name="dummy_name", link_target=None, ): """ Return a line as it would be returned by an FTP `DIR` invocation. 
Some values are handled specially: - One of `date_` or `datetime_` must be given, the other must be `None`. If `date_` is given, it must be a `datetime.date` object. The timestamp in the `DIR` line is formatted like "Apr 22 2019", with the concrete value taken from the `date_` object. If `datetime_` is given, it must be a `datetime.datetime` object. The timestamp in the `DIR` line is formatted like "Apr 22 16:50", with the concrete value taken from the `datetime_` object. Timezone information in the `datetime_` object is ignored. - If `link_target` is left at the default `None`, the name part is the value from the `name` argument. If `link_target` isn't `None`, the name part of the `DIR` line is formatted as "name -> link_target". Note that the spacing between the parts of the line isn't necessarily exactly what you'd get from an FTP server because the parser in ftputil doesn't take the exact amount of spaces into account, so the `DIR` lines don't have to be that accurate. Examples: # Result: # "drwxr-xr-x 2 dummy_user dummy_group 182 Apr 22 16:50 file_name" line = dir_line(mode_string="-rw-rw-r--", nlink=2, size=182, datetime=datetime.datetime.now(), name="file_name") # Result: # "drwxr-xr-x 1 dummy_user dummy_group 512 Apr 22 2019 dir_name -> dir_target" line = dir_line(mode_string="drwxr-xr-x", date=datetime.date.today(), name="dir_name", link_target="dir_target") """ # Date or datetime. We must have exactly one of `date_` and `datetime_` # set. The other value must be `None`. assert [date_, datetime_].count( None ) == 1, "specify exactly one of `date_` and `datetime_`" if date_: datetime_string = date_.strftime("%b %d %Y") else: datetime_string = datetime_.strftime("%b %d %H:%M") # Name, possibly with link target if not link_target: name_string = name else: name_string = "{} -> {}".format(name, link_target) # return "{} {} {} {} {} {} {}".format( mode_string, nlink, user, group, size, datetime_string, name_string ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/test/test_error.py0000664000175000017500000000434314170605001015761 0ustar00schwaschwa# Copyright (C) 2002-2020, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import ftplib import socket import pytest import ftputil.error class TestFTPErrorArguments: """ The `*Error` constructors should accept either a byte string or a unicode string. """ def test_bytestring_argument(self): # An umlaut as latin-1 character io_error = ftputil.error.FTPIOError(b"\xe4") os_error = ftputil.error.FTPOSError(b"\xe4") def test_unicode_argument(self): # An umlaut as unicode character io_error = ftputil.error.FTPIOError("\xe4") os_error = ftputil.error.FTPOSError("\xe4") class TestErrorConversion: def callee(self): raise ftplib.error_perm() def test_ftplib_error_to_ftp_os_error(self): """ Ensure the `ftplib` exception isn't used as `FTPOSError` argument. """ with pytest.raises(ftputil.error.FTPOSError) as exc_info: with ftputil.error.ftplib_error_to_ftp_os_error: self.callee() exc = exc_info.value assert not (exc.args and isinstance(exc.args[0], ftplib.error_perm)) del exc_info def test_ftplib_error_to_ftp_io_error(self): """ Ensure the `ftplib` exception isn't used as `FTPIOError` argument. 
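# Plain-code version of what `TestErrorConversion` above exercises (the FTP
# reply string is made up): the context managers in `ftputil.error` translate
# low-level `ftplib` exceptions into ftputil's `FTPOSError`/`FTPIOError`
# hierarchy.
import ftplib
import ftputil.error

try:
    with ftputil.error.ftplib_error_to_ftp_os_error:
        raise ftplib.error_perm("550 permission denied")
except ftputil.error.FTPOSError as exc:
    print(type(exc).__name__)    # a subclass of `FTPOSError`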
""" with pytest.raises(ftputil.error.FTPIOError) as exc_info: with ftputil.error.ftplib_error_to_ftp_io_error: self.callee() exc = exc_info.value assert not (exc.args and isinstance(exc.args[0], ftplib.error_perm)) del exc_info def test_error_message_reuse(self): """ Test if the error message string is retained if the caught exception has more than one element in `args`. """ # See ticket #76. with pytest.raises(ftputil.error.FTPOSError) as exc_info: # Format "host:port" doesn't work. The use here is intentional. host = ftputil.FTPHost("localhost:21", "", "") exc = exc_info.value assert isinstance(exc.__cause__, socket.gaierror) assert exc.__cause__.errno == socket.EAI_NONAME del exc_info ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/test/test_file.py0000664000175000017500000005460514170605001015555 0ustar00schwaschwa# Copyright (C) 2002-2019, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import ftplib import io import unittest import pytest import ftputil.error from test import scripted_session from test import test_base Call = scripted_session.Call # Test data for use in text/binary read/write tests. TEXT_TEST_DATA = " sรถme line\r\nรคnother line\n almost done\n" BINARY_TEST_DATA = TEXT_TEST_DATA.encode("UTF-8") class TestFileOperations: """Test operations with file-like objects.""" def test_inaccessible_dir(self): """ Test whether opening a file at an invalid location fails. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("STOR inaccessible", None), result=ftplib.error_perm, ), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with pytest.raises(ftputil.error.FTPIOError): host.open("/inaccessible", "w") def test_caching_of_children(self): """ Test whether `FTPFile` cache of `FTPHost` object works. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file1_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("STOR path1", None), result=io.StringIO("")), Call("voidresp"), # Open a file again while reusing the child object and with it its # `_session` attribute (the `ftplib.FTP` object). Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("STOR path1", None), result=io.StringIO("")), Call("voidresp"), Call("close"), ] file2_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", result=io.StringIO(""), args=("STOR path2", None)), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory( host_script, file1_script, file2_script ) with test_base.ftp_host_factory(multisession_factory) as host: assert len(host._children) == 0 path1 = "path1" path2 = "path2" # Open one file and inspect cache of available children. file1 = host.open(path1, "w") child1 = host._children[0] assert len(host._children) == 1 assert not child1._file.closed # Open another file. file2 = host.open(path2, "w") child2 = host._children[1] assert len(host._children) == 2 assert not child2._file.closed # Close first file. 
file1.close() assert len(host._children) == 2 assert child1._file.closed assert not child2._file.closed # Re-open first child's file. file1 = host.open(path1, "w") child1_1 = file1._host # Check if it's reused. assert child1 is child1_1 assert not child1._file.closed assert not child2._file.closed # Close second file. file2.close() assert child2._file.closed def test_write_to_directory(self): """ Test whether attempting to write to a directory fails. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("STOR some_directory", None), result=ftplib.error_perm, ), # Because of the exception, `voidresp` isn't called. Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with pytest.raises(ftputil.error.FTPIOError): host.open("/some_directory", "w") def test_mode_not_given(self): """ If the mode isn't given, a file should be opened in text read mode. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("RETR some_file", None), result=io.StringIO(TEXT_TEST_DATA), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with host.open("some_file") as fobj: data = fobj.read() assert data == TEXT_TEST_DATA def test_mode_is_none(self): """ If the mode is passed as `None`, a `TypeError` should be raised. (This is Python's behavior for local files.) """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with pytest.raises(TypeError): with host.open("some_file", None) as fobj: pass def test_binary_read(self): """ Read data from a binary file. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("RETR some_file", None), result=io.BytesIO(BINARY_TEST_DATA), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with host.open("some_file", "rb") as fobj: data = fobj.read() assert data == BINARY_TEST_DATA def test_binary_write(self): """ Write binary data with `write`. 
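# The remote-file read/write tests above and below reduce to client code like
# this (host name, credentials and file names are made up):
import ftputil

with ftputil.FTPHost("ftp.example.com", "user", "password") as host:
    with host.open("remote.txt", "w", newline="\r\n") as fobj:
        fobj.write("first line\nsecond line\n")
    with host.open("remote.bin", "rb") as fobj:
        data = fobj.read()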
""" host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("STOR some_file", None), result=test_base.MockableBytesIO(b""), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with unittest.mock.patch("test.test_base.MockableBytesIO.write") as write_mock: with test_base.ftp_host_factory(multisession_factory) as host: with host.open("some_file", "wb") as output: output.write(BINARY_TEST_DATA) write_mock.assert_called_with(BINARY_TEST_DATA) def test_text_read(self): """ Read text with plain `read`. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("RETR dummy", None), # Use the same `newline` as `host.open` call. The default for # `StringIO`'s `newline` differs from the default for `open` # and `socket.makefile`. result=io.StringIO(TEXT_TEST_DATA, newline=None), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with host.open("dummy", "r") as input_: data = input_.read(0) assert data == "" data = input_.read(3) assert data == " sรถ" # Specifically check the behavior around the line ending # character. data = input_.read(7) assert data == "me line" data = input_.read(1) assert data == "\n" data = input_.read() assert data == "รคnother line\n almost done\n" data = input_.read() assert data == "" def test_text_write(self): """ Write text with `write`. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("STOR dummy", None), # Use the same `newline` as `host.open` call. The default for # `StringIO`'s `newline` differs from the default for `open` # and `socket.makefile`. result=test_base.MockableStringIO(newline="\r\n"), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with unittest.mock.patch("test.test_base.MockableStringIO.write") as write_mock: with test_base.ftp_host_factory(multisession_factory) as host: with host.open("dummy", "w", newline="\r\n") as output: output.write(TEXT_TEST_DATA) write_mock.assert_called_with(TEXT_TEST_DATA) def test_text_writelines(self): """ Write text with `writelines`. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("STOR dummy", None), # Use the same `newline` as `host.open` call. The default for # `StringIO`'s `newline` differs from the default for `open` # and `socket.makefile`. 
result=test_base.MockableStringIO(newline="\r\n"), ), Call("voidresp"), Call("close"), ] data = [" sรถme line\r\n", "รคnother line\n", " almost done\n"] print("=== data:", data) backup_data = data[:] multisession_factory = scripted_session.factory(host_script, file_script) with unittest.mock.patch( "test.test_base.MockableStringIO.writelines" ) as write_mock: with test_base.ftp_host_factory(multisession_factory) as host: with host.open("dummy", "w", newline="\r\n") as output: output.writelines(data) write_mock.assert_called_with(data) # Ensure that the original data was not modified. assert data == backup_data def test_binary_readline(self): """ Read binary data with `readline`. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("RETR dummy", None), result=io.BytesIO(BINARY_TEST_DATA), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with host.open("dummy", "rb") as input_: data = input_.readline(3) assert data == b" s\xc3" data = input_.readline(9) assert data == b"\xb6me line\r" # 30 = at most 30 bytes data = input_.readline(30) assert data == b"\n" data = input_.readline() assert data == b"\xc3\xa4nother line\n" data = input_.readline() assert data == b" almost done\n" data = input_.readline() assert data == b"" def test_text_readline(self): """ Read text with `readline`. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("RETR dummy", None), # Use the same `newline` as `host.open` call. The default for # `StringIO`'s `newline` differs from the default for `open` # and `socket.makefile`. result=io.StringIO(TEXT_TEST_DATA, newline=None), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with host.open("dummy", "r") as input_: data = input_.readline(3) assert data == " sรถ" data = input_.readline(7) assert data == "me line" data = input_.readline(10) # Does implicit newline conversion. assert data == "\n" # 30 = at most 30 bytes data = input_.readline(30) assert data == "รคnother line\n" data = input_.readline() assert data == " almost done\n" data = input_.readline() assert data == "" def test_text_readlines(self): """ Read text with `readlines`. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("RETR dummy", None), # Use the same `newline` as `host.open` call. The default for # `StringIO`'s `newline` differs from the default for `open` # and `socket.makefile`. 
result=io.StringIO(TEXT_TEST_DATA, newline=None), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with host.open("dummy", "r") as input_: data = input_.read(3) assert data == " sรถ" data = input_.readlines() assert data == ["me line\n", "รคnother line\n", " almost done\n"] input_.close() def test_binary_iterator(self): """ Test iterator interface of `FTPFile` objects for binary files. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("RETR dummy", None), result=io.BytesIO(BINARY_TEST_DATA), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with host.open("dummy", "rb") as input_: input_iterator = iter(input_) assert next(input_iterator) == b" s\xc3\xb6me line\r\n" # The last two lines don't have a `\r`. assert next(input_iterator) == b"\xc3\xa4nother line\n" assert next(input_iterator) == b" almost done\n" with pytest.raises(StopIteration): input_iterator.__next__() def test_text_iterator(self): """ Test iterator interface of `FTPFile` objects for text files. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("RETR dummy", None), # Use the same `newline` as `host.open` call. The default for # `StringIO`'s `newline` differs from the default for `open` # and `socket.makefile`. result=io.StringIO(TEXT_TEST_DATA, newline=None), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with host.open("dummy", "r") as input_: input_iterator = iter(input_) assert next(input_iterator) == " sรถme line\n" assert next(input_iterator) == "รคnother line\n" assert next(input_iterator) == " almost done\n" with pytest.raises(StopIteration): input_iterator.__next__() def test_read_unknown_file(self): """ Test whether reading a file which isn't there fails. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("RETR notthere", None), result=ftplib.error_perm), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with pytest.raises(ftputil.error.FTPIOError): with host.open("notthere", "r") as input_: pass class TestAvailableChild: def _failing_pwd(self, exception_class): """ Return a function that will be used instead of the `session.pwd` and will raise the exception `exception_to_raise`. """ def new_pwd(): raise exception_class("") return new_pwd def _test_with_pwd_error(self, exception_class): """ Test if reusing a child session fails because of `child_host._session.pwd` raising an exception of type `exception_class`. 
""" host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] first_file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("RETR dummy1", None), result=io.StringIO("")), Call("voidresp"), # This `pwd` is executed from `FTPHost._available_child`. Call("pwd", result=exception_class), Call("close"), ] second_file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("RETR dummy2", None), result=io.StringIO("")), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory( host_script, first_file_script, second_file_script ) with test_base.ftp_host_factory(multisession_factory) as host: # Implicitly create a child session. with host.open("/dummy1") as _: pass assert len(host._children) == 1 # Try to create a new file. Since `pwd` in # `FTPHost._available_child` raises an exception, a new child # session should be created. with host.open("/dummy2") as _: pass assert len(host._children) == 2 def test_pwd_with_error_temp(self): """ Test if an `error_temp` in `_session.pwd` skips the child session. """ self._test_with_pwd_error(ftplib.error_temp) def test_pwd_with_error_reply(self): """ Test if an `error_reply` in `_session.pwd` skips the child session. """ self._test_with_pwd_error(ftplib.error_reply) def test_pwd_with_OSError(self): """ Test if an `OSError` in `_session.pwd` skips the child session. """ self._test_with_pwd_error(OSError) def test_pwd_with_EOFError(self): """ Test if an `EOFError` in `_session.pwd` skips the child session. """ self._test_with_pwd_error(EOFError) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704487104.0 ftputil-5.1.0/test/test_file_transfer.py0000664000175000017500000001366014546064300017465 0ustar00schwaschwa# Copyright (C) 2010-2020, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import datetime import io import random import pytest import ftputil.file_transfer import ftputil.stat from test import test_base from test import scripted_session Call = scripted_session.Call class MockFile: """ Class compatible with `LocalFile` and `RemoteFile`. """ def __init__(self, mtime, mtime_precision): self._mtime = mtime self._mtime_precision = mtime_precision def mtime(self): return self._mtime def mtime_precision(self): return self._mtime_precision class TestRemoteFile: def test_time_shift_subtracted_only_once(self): """ Test whether the time shift value is subtracted from the initial server timestamp only once. This subtraction happens in `stat._Stat.parse_unix_time`, so it must _not_ be done a second time in `file_transfer.RemoteFile`. """ utcnow = datetime.datetime.now(datetime.timezone.utc) # 3 hours time_shift = 3 * 60 * 60 dir_line = test_base.dir_line( datetime_=utcnow + datetime.timedelta(seconds=time_shift), name="dummy_name" ) script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.set_time_shift(3 * 60 * 60) remote_file = ftputil.file_transfer.RemoteFile(host, "dummy_name", 0o644) remote_mtime = remote_file.mtime() # The remote mtime should be corrected by the time shift, so the # calculated UTC time is the same as for the client. 
The 60.0 (seconds) # is the timestamp precision. assert remote_mtime <= utcnow.timestamp() <= remote_mtime + 60.0 class TestTimestampComparison: def test_source_is_newer_than_target(self): """ Test whether the source is newer than the target, i. e. if the file should be transferred. """ # Define some time units/precisions. second = 1.0 minute = 60 * second hour = 60 * minute day = 24 * hour unknown = ftputil.stat.UNKNOWN_PRECISION # Define input arguments; modification datetimes are in seconds. Fields # are source datetime/precision, target datetime/precision, expected # comparison result. file_data = [ # Non-overlapping modification datetimes/precisions (1000.0, second, 900.0, second, True), (900.0, second, 1000.0, second, False), # Equal modification datetimes/precisions (if in doubt, transfer) (1000.0, second, 1000.0, second, True), # Just touching intervals (1000.0, second, 1000.0 + second, minute, True), (1000.0 + second, minute, 1000.0, second, True), # Other overlapping intervals (10000.0 - 0.5 * hour, hour, 10000.0, day, True), (10000.0 + 0.5 * hour, hour, 10000.0, day, True), (10000.0 + 0.2 * hour, 0.2 * hour, 10000.0, hour, True), (10000.0 - 0.2 * hour, 2 * hour, 10000.0, hour, True), # Unknown precision (1000.0, unknown, 1000.0, second, True), (1000.0, second, 1000.0, unknown, True), (1000.0, unknown, 1000.0, unknown, True), ] for ( source_mtime, source_mtime_precision, target_mtime, target_mtime_precision, expected_result, ) in file_data: source_file = MockFile(source_mtime, source_mtime_precision) target_file = MockFile(target_mtime, target_mtime_precision) result = ftputil.file_transfer.source_is_newer_than_target( source_file, target_file ) assert result == expected_result class FailingStringIO(io.BytesIO): """ Mock class to test whether exceptions are passed on. """ # Kind of nonsense; we just want to see this exception raised. expected_exception = IndexError def read(self, count): raise self.expected_exception class TestChunkwiseTransfer: def _random_string(self, count): """ Return a `BytesIO` object containing `count` "random" bytes. """ ints = (random.randint(0, 255) for i in range(count)) return bytes(ints) def test_chunkwise_transfer_without_remainder(self): """ Check if we get four chunks with 256 Bytes each. """ data = self._random_string(1024) fobj = io.BytesIO(data) chunks = list(ftputil.file_transfer.chunks(fobj, 256)) assert len(chunks) == 4 assert chunks[0] == data[:256] assert chunks[1] == data[256:512] assert chunks[2] == data[512:768] assert chunks[3] == data[768:1024] def test_chunkwise_transfer_with_remainder(self): """ Check if we get three chunks with 256 Bytes and one with 253. """ data = self._random_string(1021) fobj = io.BytesIO(data) chunks = list(ftputil.file_transfer.chunks(fobj, 256)) assert len(chunks) == 4 assert chunks[0] == data[:256] assert chunks[1] == data[256:512] assert chunks[2] == data[512:768] assert chunks[3] == data[768:1021] def test_chunkwise_transfer_with_exception(self): """ Check if we see the exception raised during reading. 
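# The chunking helper exercised above, shown in isolation (buffer content and
# chunk size are arbitrary): read a file-like object in fixed-size pieces.
import io
import ftputil.file_transfer

fobj = io.BytesIO(b"x" * 1000)
chunk_sizes = [len(chunk) for chunk in ftputil.file_transfer.chunks(fobj, 256)]
assert chunk_sizes == [256, 256, 256, 232]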
""" data = self._random_string(1024) fobj = FailingStringIO(data) iterator = ftputil.file_transfer.chunks(fobj, 256) with pytest.raises(FailingStringIO.expected_exception): next(iterator) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704487207.0 ftputil-5.1.0/test/test_host.py0000644000175000017500000013625114546064447015633 0ustar00schwaschwa# Copyright (C) 2002-2021, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import datetime import errno import ftplib import io import itertools import os import pickle import posixpath import random import time import unittest import unittest.mock import warnings import pytest import ftputil import ftputil.error import ftputil.file import ftputil.path_encoding import ftputil.tool import ftputil.stat from test import test_base import test.scripted_session as scripted_session Call = scripted_session.Call # # Helper function to generate random data # def binary_data(): """ Return a bytes object of length 10000, consisting of bytes from a pool of integer numbers in the range 0..255. """ pool = list(range(0, 256)) size = 10000 integer_list = [random.choice(pool) for i in range(size)] return bytes(integer_list) def as_bytes(string, encoding=ftputil.path_encoding.DEFAULT_ENCODING): return string.encode(encoding) # # Test cases # # For Python < 3.9, the `default_session_factory` is just `ftplib.FTP`. It's # not worth it to test the factory then. @pytest.mark.skipif( not ftputil.path_encoding.RUNNING_UNDER_PY39_AND_UP, reason="tests apply only to Python 3.9 and up", ) class TestDefaultSessionFactory: def test_ftplib_FTP_subclass(self): """ Test if the default factory is a subclass of `ftplib.FTP`. """ assert issubclass(ftputil.host.default_session_factory, ftplib.FTP) def _test_extra_arguments(self, args=None, kwargs=None, expected_kwargs=None): """ Test if `ftputil.FTPHost` accepts additional positional and keyword arguments, which are then passed to the session factory. """ if args is None: args = () if kwargs is None: kwargs = {} if expected_kwargs is None: expected_kwargs = kwargs # Since our test server listens on a non-default port, we can't use the # session factory directly. We have to mock `ftplib.FTP` which is used # by ftputil's `default_session_factory`. with unittest.mock.patch("ftplib.FTP.__init__") as ftp_mock: # Prevent `TypeError` when Python checks the `__init__` result. ftp_mock.return_value = None session = ftputil.host.default_session_factory( "localhost", "ftptest", "dummy", *args, **kwargs ) assert len(ftp_mock.call_args_list) == 1 assert ( # Don't compare the `self` argument. It changes for every test # run. ftp_mock.call_args.args == ("localhost", "ftptest", "dummy") + args ) assert ftp_mock.call_args.kwargs == expected_kwargs def test_extra_positional_arguments(self): """ Test if extra positional arguments are passed to the `ftplib.FTP` constructor. """ expected_kwargs = {"encoding": ftputil.path_encoding.DEFAULT_ENCODING} # `acct`, `timeout` self._test_extra_arguments( args=("", 1.0), kwargs={}, expected_kwargs=expected_kwargs ) def test_extra_keyword_arguments(self): """ Test if extra keyword arguments are passed to the `ftplib.FTP` constructor. 
""" kwargs = {"timeout": 1.0, "source_address": None} expected_kwargs = kwargs.copy() expected_kwargs["encoding"] = ftputil.path_encoding.DEFAULT_ENCODING self._test_extra_arguments(kwargs=kwargs, expected_kwargs=expected_kwargs) def test_custom_encoding(self): """ Test if a custom encoding is passed to the base class constructor when running under Python 3.9 and up. """ kwargs = {"timeout": 1.0, "source_address": None, "encoding": "latin-2"} self._test_extra_arguments(kwargs=kwargs) class TestConstructor: """ Test initialization of `FTPHost` objects. """ def test_open_and_close(self): """ Test if opening and closing an `FTPHost` object works as expected. """ script = [Call("__init__"), Call("pwd", result="/"), Call("close")] host = test_base.ftp_host_factory(scripted_session.factory(script)) host.close() assert host.closed is True assert host._children == [] def test_invalid_login(self): """ Login to invalid host must fail. """ script = [Call("__init__", result=ftplib.error_perm), Call("pwd", result="/")] with pytest.raises(ftputil.error.FTPOSError): test_base.ftp_host_factory(scripted_session.factory(script)) def test_pwd_normalization(self): """ Test if the stored current directory is normalized. """ script = [ Call("__init__"), # Deliberately return the current working directory with a trailing # slash to test if it's removed when stored in the `FTPHost` # instance. Call("pwd", result="/home/"), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: assert host.getcwd() == "/home" def test_missing_encoding_attribute(self): """ Test if a missing `encoding` attribute on the session instance leads to a `NoEncodingError`. """ class InvalidSessionError: pass with pytest.raises(ftputil.error.NoEncodingError): _ = ftputil.host.FTPHost(session_factory=InvalidSessionError) class TestKeepAlive: def test_succeeding_keep_alive(self): """ Assume the connection is still alive. """ script = [ Call("__init__"), Call("pwd", result="/"), # `pwd` due to `keep_alive` call. Call("pwd", result="/"), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.keep_alive() def test_failing_keep_alive(self): """ Assume the connection has timed out, so `keep_alive` fails. """ script = [ Call("__init__"), Call("pwd", result="/home"), # Simulate failing `pwd` call after the server closed the # connection due to a session timeout. Call("pwd", result=ftplib.error_temp), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: with pytest.raises(ftputil.error.TemporaryError): host.keep_alive() class TestSetParser: class TrivialParser(ftputil.stat.Parser): """ An instance of this parser always returns the same result from its `parse_line` method. This is all we need to check if ftputil uses the set parser. No actual parsing code is required here. """ def __init__(self): # We can't use `os.stat("/home")` directly because we later need # the object's `_st_name` attribute, which we can't set on a # `os.stat` stat value. default_stat_result = ftputil.stat.StatResult(os.stat("/home")) default_stat_result._st_name = "home" self.default_stat_result = default_stat_result def parse_line(self, line, time_shift=0.0): return self.default_stat_result def test_set_parser(self): """ Test if the selected parser is used. 
""" script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", result="drwxr-xr-x 2 45854 200 512 May 4 2000 home", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: assert host._stat._allow_parser_switching is True trivial_parser = TestSetParser.TrivialParser() host.set_parser(trivial_parser) stat_result = host.stat("/home") assert stat_result == trivial_parser.default_stat_result assert host._stat._allow_parser_switching is False class TestCommandNotImplementedError: def test_command_not_implemented_error(self): """ Test if we get the anticipated exception if a command isn't implemented by the server. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # `FTPHost.chmod` only raises a `CommandNotImplementedError` when # the exception text of the `ftplib.error_perm` starts with "502". Call( "voidcmd", result=ftplib.error_perm("502 command not implemented"), args=("SITE CHMOD 0644 nonexistent",), ), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # `FTPHost.chmod` only raises a `CommandNotImplementedError` when # the exception text of the `ftplib.error_perm` starts with "502". Call( "voidcmd", result=ftplib.error_perm("502 command not implemented"), args=("SITE CHMOD 0644 nonexistent",), ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: with pytest.raises(ftputil.error.CommandNotImplementedError): host.chmod("nonexistent", 0o644) # `CommandNotImplementedError` is a subclass of `PermanentError`. with pytest.raises(ftputil.error.PermanentError): host.chmod("nonexistent", 0o644) class TestRecursiveListingForDotAsPath: """ These tests are for issue #33, see http://ftputil.sschwarzer.net/trac/ticket/33 . """ def test_plain_listing(self): """ If an empty string is passed to `FTPHost._dir` it should be passed to `session.dir` unmodified. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=(".",)), # Check that the empty string is passed on to `session.dir`. Call("dir", args=("",), result="non-recursive listing"), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: lines = host._dir(host.curdir) assert lines[0] == "non-recursive listing" def test_empty_string_instead_of_dot_workaround(self): """ If `FTPHost.listdir` is called with a dot as argument, the underlying `session.dir` should _not_ be called with the dot as argument, but with an empty string. """ dir_result = ( "total 10\n" "lrwxrwxrwx 1 staff 7 Aug 13 2003 bin -> usr/bin\n" "d--x--x--x 2 staff 512 Sep 24 2000 dev\n" "d--x--x--x 3 staff 512 Sep 25 2000 etc\n" "dr-xr-xr-x 3 staff 512 Oct 3 2000 pub\n" "d--x--x--x 5 staff 512 Oct 3 2000 usr\n" ) script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_result), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: files = host.listdir(host.curdir) assert files == ["bin", "dev", "etc", "pub", "usr"] class TestTimeShift: # Helper mock class that frees us from setting up complicated session # scripts for the remote calls. 
class _Path: def split(self, path): return posixpath.split(path) def set_mtime(self, mtime): self._mtime = mtime def getmtime(self, file_name): return self._mtime def join(self, *args): return posixpath.join(*args) def normpath(self, path): return posixpath.normpath(path) def isabs(self, path): return posixpath.isabs(path) def abspath(self, path): return "/_ftputil_sync_" # Needed for `isdir` in `FTPHost.remove` def isfile(self, path): return True def test_rounded_time_shift(self): """ Test if time shift is rounded correctly. """ script = [Call("__init__"), Call("pwd", result="/"), Call("close")] multisession_factory = scripted_session.factory(script) with test_base.ftp_host_factory(multisession_factory) as host: # Use private bound method. rounded_time_shift = host._FTPHost__rounded_time_shift # Pairs consisting of original value and expected result test_data = [ (0, 0), (0.1, 0), (-0.1, 0), (1500, 1800), (-1500, -1800), (1800, 1800), (-1800, -1800), (2000, 1800), (-2000, -1800), (5 * 3600 - 100, 5 * 3600), (-5 * 3600 + 100, -5 * 3600), ] for time_shift, expected_time_shift in test_data: calculated_time_shift = rounded_time_shift(time_shift) assert calculated_time_shift == expected_time_shift def test_assert_valid_time_shift(self): """ Test time shift sanity checks. """ script = [Call("__init__"), Call("pwd", result="/"), Call("close")] multisession_factory = scripted_session.factory(script) with test_base.ftp_host_factory(multisession_factory) as host: # Use private bound method. assert_time_shift = host._FTPHost__assert_valid_time_shift # Valid time shifts test_data = [23 * 3600, -23 * 3600, 3600 + 30, -3600 + 30] for time_shift in test_data: assert assert_time_shift(time_shift) is None # Invalid time shift (exceeds one day) with pytest.raises(ftputil.error.TimeShiftError): assert_time_shift(25 * 3600) with pytest.raises(ftputil.error.TimeShiftError): assert_time_shift(-25 * 3600) # Invalid time shift (too large deviation from 15-minute units is # unacceptable) with pytest.raises(ftputil.error.TimeShiftError): assert_time_shift(8 * 60) with pytest.raises(ftputil.error.TimeShiftError): assert_time_shift(-3600 - 8 * 60) def test_synchronize_times(self): """ Test time synchronization with server. """ host_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("delete", args=("_ftputil_sync_",)), Call("cwd", args=("/",)), Call("close"), ] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("STOR _ftputil_sync_", None), result=io.BytesIO() ), Call("voidresp"), Call("close"), ] # Valid time shifts test_data = [ (60 * 60 + 30, 60 * 60), (60 * 60 - 100, 60 * 60), (30 * 60 + 100, 30 * 60), (45 * 60 - 100, 45 * 60), ] for measured_time_shift, expected_time_shift in test_data: # Use a new `BytesIO` object to avoid exception # `ValueError: I/O operation on closed file`. 
file_script[4] = Call( "transfercmd", result=io.BytesIO(), args=("STOR _ftputil_sync_", None) ) multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: host.path = self._Path() host.path.set_mtime(time.time() + measured_time_shift) host.synchronize_times() assert host.time_shift() == expected_time_shift # Invalid time shifts measured_time_shifts = [60 * 60 + 8 * 60, 45 * 60 - 6 * 60] for measured_time_shift in measured_time_shifts: # Use a new `BytesIO` object to avoid exception # `ValueError: I/O operation on closed file`. file_script[4] = Call( "transfercmd", result=io.BytesIO(), args=("STOR _ftputil_sync_", None) ) multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: host.path = self._Path() host.path.set_mtime(time.time() + measured_time_shift) with pytest.raises(ftputil.error.TimeShiftError): host.synchronize_times() def test_synchronize_times_for_server_in_east(self): """ Test for timestamp correction (see ticket #55). """ host_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("delete", args=("_ftputil_sync_",)), Call("cwd", args=("/",)), Call("close"), ] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("STOR _ftputil_sync_", None), result=io.BytesIO() ), Call("voidresp", args=()), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(session_factory=multisession_factory) as host: host.path = self._Path() # Set this explicitly to emphasize the problem. host.set_time_shift(0.0) hour = 60 * 60 # This could be any negative time shift. presumed_time_shift = -6 * hour # Set `mtime` to simulate a server east of us. # In case the `time_shift` value for this host instance is 0.0 # (as is to be expected before the time shift is determined), the # directory parser (more specifically # `ftputil.stat.Parser.parse_unix_time`) will return a time which # is a year too far in the past. The `synchronize_times` method # needs to deal with this and add the year "back". I don't think # this is a bug in `parse_unix_time` because the method should work # once the time shift is set correctly. client_time = datetime.datetime.now(datetime.timezone.utc) presumed_server_time = client_time.replace( year=client_time.year - 1 ) + datetime.timedelta(seconds=presumed_time_shift) host.path.set_mtime(presumed_server_time.timestamp()) host.synchronize_times() assert host.time_shift() == presumed_time_shift class TestUploadAndDownload: """ Test upload and download. """ def test_download(self, tmp_path): """ Test mode download. 
""" remote_file_name = "dummy_name" remote_file_content = b"dummy_content" local_target = tmp_path / "test_target" host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("RETR {}".format(remote_file_name), None), result=io.BytesIO(remote_file_content), ), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) # Download with test_base.ftp_host_factory(multisession_factory) as host: host.download(remote_file_name, str(local_target)) # Verify expected operations on mock socket as done in `FTPFile.close`. # We expect one `gettimeout` and two `settimeout` calls. file_session = multisession_factory.scripted_sessions[1] file_session.sock.gettimeout.assert_called_once_with() assert len(file_session.sock.settimeout.call_args_list) == 2 assert file_session.sock.settimeout.call_args_list[0] == ( (ftputil.file.FTPFile._close_timeout,), {}, ) assert file_session.sock.settimeout.call_args_list[1] == ( (file_session.sock.gettimeout(),), {}, ) assert local_target.read_bytes() == remote_file_content def test_conditional_upload_without_upload(self, tmp_path): """ If the target file is newer, no upload should happen. """ local_source = tmp_path / "test_source" data = binary_data() local_source.write_bytes(data) dir_result = test_base.dir_line( mode_string="-rw-r--r--", date_=datetime.date.today() + datetime.timedelta(days=1), name="newer", ) script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_result), Call("cwd", args=("/",)), Call("close"), ] # Target is newer, so don't upload. # # This not only tests the return value, but also if a transfer # happened. If an upload was tried, our test framework would complain # about a missing scripted session for the `FTPFile` host. multisession_factory = scripted_session.factory(script) with test_base.ftp_host_factory(multisession_factory) as host: flag = host.upload_if_newer(str(local_source), "/newer") assert flag is False def test_conditional_upload_with_upload(self, tmp_path): """ If the target file is older or doesn't exist, the source file should be uploaded. """ local_source = tmp_path / "test_source" file_content = b"dummy_content" local_source.write_bytes(file_content) remote_file_name = "dummy_name" dir_result = test_base.dir_line( mode_string="-rw-r--r--", date_=datetime.date.today() - datetime.timedelta(days=1), name="older", ) host_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_result), Call("cwd", args=("/",)), Call("close"), ] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call( "transfercmd", args=("STOR older", None), result=test_base.MockableBytesIO(), ), Call("voidresp"), Call("close"), ] # Target is older, so upload. multisession_factory = scripted_session.factory(host_script, file_script) with unittest.mock.patch("test.test_base.MockableBytesIO.write") as write_mock: with test_base.ftp_host_factory(multisession_factory) as host: flag = host.upload_if_newer(str(local_source), "/older") write_mock.assert_called_with(file_content) assert flag is True # Target doesn't exist, so upload. # Use correct file name for this test. 
file_script[4] = Call( "transfercmd", args=("STOR notthere", None), result=test_base.MockableBytesIO(), ) multisession_factory = scripted_session.factory(host_script, file_script) with unittest.mock.patch("test.test_base.MockableBytesIO.write") as write_mock: with test_base.ftp_host_factory(multisession_factory) as host: flag = host.upload_if_newer(str(local_source), "/notthere") write_mock.assert_called_with(file_content) assert flag is True def test_conditional_download_without_target(self, tmp_path): """ Test conditional binary mode download when no target file exists. """ local_target = tmp_path / "test_target" data = binary_data() # Target does not exist, so download. # There isn't a `dir` call to compare the datetimes of the remote and # the target file because the local `exists` call for the local target # returns `False` and the datetime comparison therefore isn't done. host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("RETR newer", None), result=io.BytesIO(data)), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: flag = host.download_if_newer("/newer", str(local_target)) assert flag is True assert local_target.read_bytes() == data def test_conditional_download_with_older_target(self, tmp_path): """ Test conditional binary mode download with newer source file. """ local_target = tmp_path / "test_target" # Make sure file exists for the timestamp comparison. local_target.touch() data = binary_data() # Target is older, so download. # Use a date in the future. That isn't realistic, but for the purpose # of the test it's an easy way to make sure the source file is newer # than the target file. dir_result = test_base.dir_line( mode_string="-rw-r--r--", date_=datetime.date.today() + datetime.timedelta(days=1), name="newer", ) host_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_result), Call("cwd", args=("/",)), Call("close"), ] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("RETR newer", None), result=io.BytesIO(data)), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: flag = host.download_if_newer("/newer", str(local_target)) assert flag is True assert local_target.read_bytes() == data def test_conditional_download_with_newer_target(self, tmp_path): """ Test conditional binary mode download with older source file. """ local_target = tmp_path / "test_target" # Make sure file exists for timestamp comparison. local_target.touch() data = binary_data() # Use date in the past, so the target file is newer and no download # happens. 
dir_result = test_base.dir_line( mode_string="-rw-r--r--", date_=datetime.date.today() - datetime.timedelta(days=1), name="newer", ) host_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_result), Call("cwd", args=("/",)), Call("close"), ] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("RETR newer", None), result=io.BytesIO(data)), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: flag = host.download_if_newer("/newer", str(local_target)) assert flag is False class TestMakedirs: def test_exist_ok_false(self): """ If `exist_ok` is `False` or not specified, an existing leaf directory should lead to a `PermanentError` with `errno` set to 17. """ # No `exist_ok` specified script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/part1",)), Call("cwd", args=("/part1/part2",)), Call("cwd", args=("/",)), Call("close"), ] multisession_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory=multisession_factory) as host: with pytest.raises(ftputil.error.PermanentError) as exc_info: host.makedirs("/part1/part2") assert isinstance(exc_info.value, ftputil.error.PermanentError) assert exc_info.value.errno == errno.EEXIST # `exist_ok` explicitly set to `False` script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/part1",)), Call("cwd", args=("/part1/part2",)), Call("cwd", args=("/",)), Call("close"), ] multisession_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory=multisession_factory) as host: with pytest.raises(ftputil.error.PermanentError) as exc_info: host.makedirs("/part1/part2", exist_ok=False) assert isinstance(exc_info.value, ftputil.error.PermanentError) assert exc_info.value.errno == errno.EEXIST def test_exist_ok_true(self): """ If `exist_ok` is `True`, an existing leaf directory should _not_ lead to an exception. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/part1",)), Call("cwd", args=("/part1/part2",)), Call("cwd", args=("/",)), Call("close"), ] multisession_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory=multisession_factory) as host: host.makedirs("/part1/part2", exist_ok=True) class TestAcceptEitherUnicodeOrBytes: """ Test whether certain `FTPHost` methods accept either unicode or byte strings for the path(s). """ def test_upload(self): """ Test whether `upload` accepts either unicode or bytes. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("STOR target", None), result=io.BytesIO()), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) # The source file needs to be present in the current directory. with test_base.ftp_host_factory(multisession_factory) as host: host.upload("Makefile", "target") # Create new `BytesIO` object. 
file_script[4] = Call( "transfercmd", args=("STOR target", None), result=io.BytesIO() ) multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: host.upload("Makefile", as_bytes("target")) def test_download(self, tmp_path): """ Test whether `download` accepts either unicode or bytes. """ local_target = tmp_path / "local_target" host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("RETR source", None), result=io.BytesIO()), Call("voidresp"), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) # The source file needs to be present in the current directory. with test_base.ftp_host_factory(multisession_factory) as host: host.download("source", str(local_target)) # Create new `BytesIO` object. file_script[4] = Call( "transfercmd", args=("RETR source", None), result=io.BytesIO() ) multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: host.download(as_bytes("source"), str(local_target)) def test_rename(self): """ Test whether `rename` accepts either unicode or bytes. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("rename", args=("/รค", "/รค")), Call("close"), ] # It's possible to mix argument types, as for `os.rename`. path_as_str = "/รค" path_as_bytes = as_bytes( path_as_str, ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING ) paths = [path_as_str, path_as_bytes] for source_path, target_path in itertools.product(paths, paths): # Uses ftplib default encoding session_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory) as host: host.rename(source_path, target_path) def test_listdir(self): """ Test whether `listdir` accepts either unicode or bytes. """ top_level_dir_line = test_base.dir_line( mode_string="drwxr-xr-x", date_=datetime.date.today(), name="รค" ) dir_line1 = test_base.dir_line( mode_string="-rw-r--r--", date_=datetime.date.today(), name="รถ" ) dir_line2 = test_base.dir_line( mode_string="-rw-r--r--", date_=datetime.date.today(), name="o" ) dir_result = dir_line1 + "\n" + dir_line2 script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=top_level_dir_line), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/รค",)), Call("dir", args=("",), result=dir_result), Call("cwd", args=("/",)), Call("close"), ] # Unicode session_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory) as host: items = host.listdir("รค") assert items == ["รถ", "o"] # Bytes session_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory) as host: items = host.listdir( as_bytes("รค", ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING) ) assert items == [ as_bytes("รถ", ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING), as_bytes("o"), ] def test_chmod(self): """ Test whether `chmod` accepts either unicode or bytes. 
""" script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("voidcmd", args=("SITE CHMOD 0755 รค",)), Call("cwd", args=("/",)), Call("close"), ] path = "/รค" # Unicode session_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory) as host: host.chmod(path, 0o755) # Bytes session_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory) as host: host.chmod( as_bytes(path, ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING), 0o755 ) def _test_method_with_single_path_argument(self, method_name, path, script): # Unicode session_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory) as host: method = getattr(host, method_name) method(path) # Bytes session_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory) as host: method = getattr(host, method_name) method(as_bytes(path, ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING)) def test_chdir(self): """ Test whether `chdir` accepts either unicode or bytes. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/รถ",)), Call("close"), ] self._test_method_with_single_path_argument("chdir", "/รถ", script) def test_mkdir(self): """ Test whether `mkdir` accepts either unicode or bytes. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("mkd", args=("รค",)), Call("cwd", args=("/",)), Call("close"), ] self._test_method_with_single_path_argument("mkdir", "/รค", script) def test_makedirs(self): """ Test whether `makedirs` accepts either unicode or bytes. """ script = [ Call("__init__"), Call("pwd", result="/"), # To deal with ticket #86 (virtual directories), `makedirs` tries # to change into each directory and if it exists (changing doesn't # raise an exception), doesn't try to create it. That's why you # don't see an `mkd` calls here despite originally having a # `makedirs` call. Call("cwd", args=("/รค",)), # If `exist_ok` is `False` (which is the default), the leaf # directory to make must not exist. In other words, the `chdir` # call is `makedirs` must fail with a permanent error. Call("cwd", args=("/รค/รถ",), result=ftplib.error_perm), Call("cwd", args=("/รค",)), Call("cwd", args=("/รค",)), Call("mkd", args=("รถ",)), # From `isdir` call Call("cwd", args=("/รค",)), Call("cwd", args=("/",)), Call("close"), ] self._test_method_with_single_path_argument("makedirs", "/รค/รถ", script) def test_rmdir(self): """ Test whether `rmdir` accepts either unicode or bytes. """ dir_line = test_base.dir_line( mode_string="drwxr-xr-x", date_=datetime.date.today(), name="empty_รค" ) # Since the session script isn't at all obvious, I checked it with a # debugger and added comments on some of the calls that happen during # the `rmdir` call. # # `_robust_ftp_command` descends one directory at a time (see ticket # #11) and restores the original directory in the end, which results in # at least four calls on the FTP session object (`cwd`, `cwd`, actual # method, `cwd`). It would be great if all the roundtrips to the server # could be reduced. 
script = [ # `FTPHost` initialization Call("__init__"), Call("pwd", result="/"), # `host.rmdir("/empty_รค")` # `host.listdir("/empty_รค")` # `host._stat._listdir("/empty_รค")` # `host._stat.__call_with_parser_retry("/empty_รค")` # `host._stat._real_listdir("/empty_รค")` # `host.path.isdir("/empty_รค")` Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), # `host.path.isdir` end # `host._stat._stat_results_from_dir("/empty_รค")` Call("cwd", args=("/",)), Call("cwd", args=("/empty_รค",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), # `host._stat._stat_results_from_dir("/empty_รค")` end # `host._session.rmd` in `host._robust_ftp_command` # `host._check_inaccessible_login_directory()` Call("cwd", args=("/",)), # `host.chdir(head)` ("/") Call("cwd", args=("/",)), # `host.rmd(tail)` ("empty_รค") Call("rmd", args=("empty_รค",)), # `host.chdir(old_dir)` ("/") Call("cwd", args=("/",)), # Call("close"), ] empty_directory_as_required_by_rmdir = "/empty_รค" self._test_method_with_single_path_argument( "rmdir", empty_directory_as_required_by_rmdir, script ) def test_remove(self): """ Test whether `remove` accepts either unicode or bytes. """ dir_line = test_base.dir_line( mode_string="-rw-r--r--", date_=datetime.date.today(), name="รถ" ) script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("delete", args=("รถ",)), Call("cwd", args=("/",)), Call("close"), ] self._test_method_with_single_path_argument("remove", "/รถ", script) def test_rmtree(self): """ Test whether `rmtree` accepts either unicode or bytes. """ dir_line = test_base.dir_line( mode_string="drwxr-xr-x", date_=datetime.date.today(), name="empty_รค" ) script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Recursive `listdir` # Check parent (root) directory. Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/empty_รค",)), # Child directory (inside `empty_รค`) Call("dir", args=("",), result=""), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/empty_รค",)), # Recursive `rmdir` (repeated `cwd` calls because of # `_robust_ftp_command`) Call("dir", result="", args=("",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("rmd", args=("empty_รค",)), Call("cwd", args=("/",)), Call("close"), ] empty_directory_as_required_by_rmtree = "/empty_รค" self._test_method_with_single_path_argument( "rmtree", empty_directory_as_required_by_rmtree, script ) def test_lstat(self): """ Test whether `lstat` accepts either unicode or bytes. """ dir_line = test_base.dir_line( mode_string="-rw-r--r--", date_=datetime.date.today(), name="รค" ) script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("close"), ] self._test_method_with_single_path_argument("lstat", "/รค", script) def test_stat(self): """ Test whether `stat` accepts either unicode or bytes. 
""" dir_line = test_base.dir_line( mode_string="-rw-r--r--", date_=datetime.date.today(), name="รค" ) script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("close"), ] self._test_method_with_single_path_argument("stat", "/รค", script) def test_walk(self): """ Test whether `walk` accepts either unicode or bytes. """ dir_line = test_base.dir_line( mode_string="-rw-r--r--", date_=datetime.date.today(), name="รค" ) script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("close"), ] # We're not interested in the return value of `walk`. # Unicode session_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory) as host: result = list(host.walk("/รค")) # Bytes session_factory = scripted_session.factory(script) with test_base.ftp_host_factory(session_factory) as host: result = list( host.walk(as_bytes("/รค", ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING)) ) class TestFailingPickling: def test_failing_pickling(self): """ Test if pickling (intentionally) isn't supported. """ host_script = [Call("__init__"), Call("pwd", result="/"), Call("close")] file_script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("voidcmd", args=("TYPE I",)), Call("transfercmd", args=("RETR test", None), result=io.BytesIO()), Call("voidresp", args=()), Call("close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with pytest.raises(TypeError): pickle.dumps(host) with host.open("/test") as file_obj: with pytest.raises(TypeError): pickle.dumps(file_obj) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704487259.0 ftputil-5.1.0/test/test_path.py0000644000175000017500000006215314546064533015605 0ustar00schwaschwa# Copyright (C) 2003-2021, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import datetime import ftplib import functools import time import pytest import ftputil import ftputil.error import ftputil.path_encoding import ftputil.tool from test import test_base from test import scripted_session Call = scripted_session.Call def as_bytes(string, encoding=ftputil.path_encoding.DEFAULT_ENCODING): return string.encode(encoding) class TestPath: """Test operations in `FTPHost.path`.""" # TODO: Add unit tests for changes for ticket #113 (commits [b4c9b089b6b8] # and [4027740cdd2d]). def test_regular_isdir_isfile_islink(self): """ Test regular `FTPHost._Path.isdir/isfile/islink`. """ # Test a path which isn't there. script = [ Call("__init__"), Call("pwd", result="/"), # `isdir` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), # `isfile` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), # `islink` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert not host.path.isdir("notthere") assert not host.path.isfile("notthere") assert not host.path.islink("notthere") # This checks additional code (see ticket #66). 
script = [ Call("__init__"), Call("pwd", result="/"), # `isdir` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), # `isfile` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), # `islink` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert not host.path.isdir("/notthere/notthere") assert not host.path.isfile("/notthere/notthere") assert not host.path.islink("/notthere/notthere") # Test a directory. test_dir = "/some_dir" dir_line = test_base.dir_line( mode_string="dr-xr-xr-x", datetime_=datetime.datetime.now(), name=test_dir.lstrip("/"), ) script = [ Call("__init__"), Call("pwd", result="/"), # `isdir` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), # `isfile` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), # `islink` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert host.path.isdir(test_dir) assert not host.path.isfile(test_dir) assert not host.path.islink(test_dir) # Test a file. test_file = "/some_file" dir_line = test_base.dir_line( datetime_=datetime.datetime.now(), name=test_file.lstrip("/") ) script = [ Call("__init__"), Call("pwd", result="/"), # `isdir` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), # `isfile` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), # `islink` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert not host.path.isdir(test_file) assert host.path.isfile(test_file) assert not host.path.islink(test_file) # Test a link. Since the link target doesn't exist, neither # `isdir` nor `isfile` return `True`. test_link = "/some_link" dir_line = test_base.dir_line( mode_string="lrwxrwxrwx", datetime_=datetime.datetime.now(), name=test_link.lstrip("/"), link_target="nonexistent", ) script = [ Call("__init__"), Call("pwd", result="/"), # `isdir` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Look for `/some_link` Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Look for `/nonexistent` Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), # `isfile` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Look for `/some_link` Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Look for `/nonexistent` Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), # `islink` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Look for `/some_link`. `islink` doesn't try to dereference # the link. 
Call("dir", args=("",), result=dir_line), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert not host.path.isdir(test_link) assert not host.path.isfile(test_link) assert host.path.islink(test_link) def test_workaround_for_spaces(self): """ Test whether the workaround for space-containing paths is used. """ # Test a file name containing spaces. test_file = "/home/dir with spaces/file with spaces" dir_line1 = test_base.dir_line( mode_string="dr-xr-xr-x", datetime_=datetime.datetime.now(), name="home" ) dir_line2 = test_base.dir_line( mode_string="dr-xr-xr-x", datetime_=datetime.datetime.now(), name="dir with spaces", ) dir_line3 = test_base.dir_line( mode_string="-r--r--r--", datetime_=datetime.datetime.now(), name="file with spaces", ) script = [ Call("__init__"), Call("pwd", result="/"), # `isdir` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line1), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/home",)), Call("dir", args=("",), result=dir_line2), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/home/dir with spaces",)), Call("dir", args=("",), result=dir_line3), Call("cwd", args=("/",)), # `isfile` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line1), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/home",)), Call("dir", args=("",), result=dir_line2), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/home/dir with spaces",)), Call("dir", args=("",), result=dir_line3), Call("cwd", args=("/",)), # `islink` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line1), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/home",)), Call("dir", args=("",), result=dir_line2), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/home/dir with spaces",)), Call("dir", args=("",), result=dir_line3), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert not host.path.isdir(test_file) assert host.path.isfile(test_file) assert not host.path.islink(test_file) def test_inaccessible_home_directory_and_whitespace_workaround(self): """ Test combination of inaccessible home directory + whitespace in path. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", result=ftplib.error_perm), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: with pytest.raises(ftputil.error.InaccessibleLoginDirError): host._dir("/home dir") def test_isdir_isfile_islink_with_dir_failure(self): """ Test failing `FTPHost._Path.isdir/isfile/islink` because of failing `_dir` call. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=ftplib.error_perm), Call("cwd", args=("/",)), Call("close"), ] FTPOSError = ftputil.error.FTPOSError # Test if exceptions are propagated. 
with test_base.ftp_host_factory(scripted_session.factory(script)) as host: with pytest.raises(FTPOSError): host.path.isdir("index.html") with test_base.ftp_host_factory(scripted_session.factory(script)) as host: with pytest.raises(FTPOSError): host.path.isfile("index.html") with test_base.ftp_host_factory(scripted_session.factory(script)) as host: with pytest.raises(FTPOSError): host.path.islink("index.html") def test_isdir_isfile_with_infinite_link_chain(self): """ Test if `isdir` and `isfile` return `False` if they encounter an infinite link chain. """ # `/home/bad_link` links to `/home/subdir/bad_link`, which links back # to `/home/bad_link` etc. dir_line1 = test_base.dir_line( mode_string="dr-xr-xr-x", datetime_=datetime.datetime.now(), name="home" ) dir_line2 = test_base.dir_line( mode_string="lrwxrwxrwx", datetime_=datetime.datetime.now(), name="bad_link", link_target="subdir/bad_link", ) dir_line3 = test_base.dir_line( mode_string="dr-xr-xr-x", datetime_=datetime.datetime.now(), name="subdir" ) dir_line4 = test_base.dir_line( mode_string="lrwxrwxrwx", datetime_=datetime.datetime.now(), name="bad_link", link_target="/home/bad_link", ) script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line1), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/home",)), Call("dir", args=("",), result=dir_line2 + "\n" + dir_line3), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/home/subdir",)), Call("dir", args=("",), result=dir_line4), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: assert host.path.isdir("/home/bad_link") is False with test_base.ftp_host_factory(scripted_session.factory(script)) as host: assert host.path.isfile("/home/bad_link") is False def test_exists(self): """ Test `FTPHost.path.exists`. """ # Regular use of `exists` dir_line1 = test_base.dir_line( datetime_=datetime.datetime.now(), name="some_file" ) script = [ Call("__init__"), Call("pwd", result="/"), # `exists("some_file")` Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line1), Call("cwd", args=("/",)), # `exists("notthere")` Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=dir_line1), Call("cwd", args=("/",)), # `exists` with failing `dir` call Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=ftplib.error_perm), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert host.path.exists("some_file") assert not host.path.exists("notthere") # Test if exceptions are propagated. with pytest.raises(ftputil.error.FTPOSError): host.path.exists("some_file") class TestAcceptEitherBytesOrStr: # Use path arguments directly path_converter = staticmethod(lambda path: path) def _test_method_string_types(self, method, path): expected_type = type(path) path_converter = self.path_converter assert isinstance(method(path_converter(path)), expected_type) def test_methods_that_take_and_return_one_string(self): """ Test whether the same string type as for the argument is returned. 
""" method_names = [ "abspath", "basename", "dirname", "join", "normcase", "normpath", ] script = [Call("__init__"), Call("pwd", result="/"), Call("close")] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: for method_name in method_names: method = getattr(host.path, method_name) self._test_method_string_types(method, "/") self._test_method_string_types(method, ".") self._test_method_string_types(method, b"/") self._test_method_string_types(method, b".") def test_methods_that_take_a_string_and_return_a_bool(self): """ Test whether the methods accept byte and unicode strings. """ path_converter = self.path_converter script = [ Call("__init__"), Call("pwd", result="/"), # `exists` test 1 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line(name="รค", datetime_=datetime.datetime.now()), ), Call("cwd", args=("/",)), # `exists` test 2 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line(name="รค", datetime_=datetime.datetime.now()), ), Call("cwd", args=("/",)), # `isdir` test 1 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line( mode_string="dr-xr-xr-x", name="รค", datetime_=datetime.datetime.now(), ), ), Call("cwd", args=("/",)), # `isdir` test 2 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line( mode_string="dr-xr-xr-x", name="รค", datetime_=datetime.datetime.now(), ), ), Call("cwd", args=("/",)), # `isfile` test 1 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line(name="รถ", datetime_=datetime.datetime.now()), ), Call("cwd", args=("/",)), # `isfile` test 2 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line(name="รถ", datetime_=datetime.datetime.now()), ), Call("cwd", args=("/",)), # `islink` test 1 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line( mode_string="lrwxrwxrwx", name="รผ", datetime_=datetime.datetime.now(), link_target="unimportant", ), ), Call("cwd", args=("/",)), # `islink` test 2 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line( mode_string="lrwxrwxrwx", name="รผ", datetime_=datetime.datetime.now(), link_target="unimportant", ), ), Call("cwd", args=("/",)), Call("close"), ] custom_as_bytes = functools.partial( as_bytes, encoding=ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING ) with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() # `isabs` assert not host.path.isabs("รค") assert not host.path.isabs(path_converter(custom_as_bytes("รค"))) # `exists` assert host.path.exists(path_converter("รค")) assert host.path.exists(path_converter(custom_as_bytes("รค"))) # `isdir`, `isfile`, `islink` assert host.path.isdir(path_converter("รค")) assert host.path.isdir(path_converter(custom_as_bytes("รค"))) assert host.path.isfile(path_converter("รถ")) assert host.path.isfile(path_converter(custom_as_bytes("รถ"))) assert host.path.islink(path_converter("รผ")) assert host.path.islink(path_converter(custom_as_bytes("รผ"))) def test_getmtime(self): """ Test whether `FTPHost.path.getmtime` accepts byte and unicode paths. 
""" path_converter = self.path_converter now = datetime.datetime.now(datetime.timezone.utc) script = [ Call("__init__"), Call("pwd", result="/"), # `getmtime` call 1 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=test_base.dir_line(name="รค", datetime_=now)), Call("cwd", args=("/",)), # `getmtime` call 2 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=test_base.dir_line(name="รค", datetime_=now)), Call("cwd", args=("/",)), Call("close"), ] expected_mtime = now.timestamp() # We don't care about the _exact_ time, so don't bother with timezone # differences. Instead, do a simple sanity check. day = 24 * 60 * 60 # seconds mtime_makes_sense = ( lambda mtime: expected_mtime - day <= mtime <= expected_mtime + day ) with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert mtime_makes_sense(host.path.getmtime(path_converter(("รค")))) assert mtime_makes_sense( host.path.getmtime( path_converter( as_bytes("รค", ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING) ) ) ) def test_getsize(self): """ Test whether `FTPHost.path.getsize` accepts byte and unicode paths. """ path_converter = self.path_converter now = datetime.datetime.now() script = [ Call("__init__"), Call("pwd", result="/"), # `getsize` call 1 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line(name="รค", size=512, datetime_=now), ), Call("cwd", args=("/",)), # `getsize` call 2 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line(name="รค", size=512, datetime_=now), ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert host.path.getsize(path_converter("รค")) == 512 assert ( host.path.getsize( path_converter( as_bytes("รค", ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING) ) ) == 512 ) def test_walk(self): """ Test whether `FTPHost.path.walk` accepts bytes and unicode paths. """ path_converter = self.path_converter now = datetime.datetime.now() script = [ Call("__init__"), Call("pwd", result="/"), # `walk` call 1 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line( mode_string="dr-xr-xr-x", name="รค", size=512, datetime_=now ), ), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/รค",)), # Assume directory `รค` is empty. Call("dir", args=("",), result=""), Call("cwd", args=("/",)), # `walk` call 2 Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line( mode_string="dr-xr-xr-x", name="รค", size=512, datetime_=now ), ), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/รค",)), # Assume directory `รค` is empty. 
Call("dir", args=("",), result=""), Call("cwd", args=("/",)), Call("close"), ] def noop(arg, top, names): del names[:] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() host.path.walk(path_converter("รค"), func=noop, arg=None) host.path.walk( path_converter( as_bytes("รค", ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING) ), func=noop, arg=None, ) class Path: def __init__(self, path): self.path = path def __fspath__(self): return self.path class TestAcceptEitherBytesOrStrFromPath(TestAcceptEitherBytesOrStr): # Take path arguments from `Path(...)` objects path_converter = Path ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1686998681.0 ftputil-5.1.0/test/test_public_servers.py0000644000175000017500000001575714443307231017677 0ustar00schwaschwa# Copyright (C) 2009-2023, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import os import socket import subprocess import pytest import ftputil import test def email_address(): """ Return the email address used to identify the client to an FTP server. If the hostname is "warpy", use my (Stefan's) email address, else try to use the content of the `$EMAIL` environment variable. If that doesn't exist, use a dummy address. """ hostname = socket.gethostname() if hostname == "warpy": email = "sschwarzer@sschwarzer.net" else: dummy_address = "anonymous@example.com" email = os.environ.get("EMAIL", dummy_address) if not email: # Environment variable exists but content is an empty string email = dummy_address return email EMAIL = email_address() def ftp_client_listing(server, directory): """ Log into the FTP server `server` using the command line client, then change to the `directory` and retrieve a listing with "dir". Return the list of items found as an `os.listdir` would return it. """ # The `-n` option prevents an auto-login. ftp_popen = subprocess.Popen( ["ftp", "-n", server], stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True, ) commands = ["user anonymous {}".format(EMAIL), "dir", "bye"] if directory: # Change to this directory before calling "dir". commands.insert(1, "cd {}".format(directory)) input_ = "\n".join(commands) stdout, unused_stderr = ftp_popen.communicate(input_) # Collect the directory/file names from the listing's text names = [] for line in stdout.strip().split("\n"): if line.startswith("total ") or line.startswith("Trying "): continue parts = line.split() if parts[-2] == "->": # Most likely a link name = parts[-3] else: name = parts[-1] names.append(name) # Remove entries for current and parent directory since they aren't # included in the result of `FTPHost.listdir` either. names = [name for name in names if name not in (".", "..")] return names class TestPublicServers: """ Get directory listings from various public FTP servers with a command line client and ftputil and compare both. An important aspect is to test different "spellings" of the same directory. For example, to list the root directory which is usually set after login, use "" (nothing), ".", "/", "/.", "./.", "././", "..", "../.", "../.." etc. The command line client `ftp` has to be in the path. """ # Implementation note: # # I (Stefan) implement the code so it works with Ubuntu's client. Other # clients may work or not. If you have problems testing some other client, # please send me a (small) patch. Keep in mind that I don't plan supporting # as many FTP obscure commandline clients as servers. 
;-) # List of pairs with server name and a directory "guaranteed to exist" # under the login directory which is assumed to be the root directory. servers = [ # Posix format ("ftp.de.debian.org", "debian"), ("ftp.heise.de", "pub"), ("ftp.tu-chemnitz.de", "pub"), ("ftp.uni-erlangen.de", "pub"), # DOS/Microsoft format # Do you know any FTP servers that use Microsoft format? # `ftp.microsoft.com` doesn't seem to be reachable anymore. ] # This data structure contains the initial directories "." and "DIR" (which # will be replaced by a valid directory name for each server). The list # after the initial directory contains paths that will be queried after # changing into the initial directory. All items in these lists are # actually supposed to yield the same directory contents. paths_table = [ ( ".", [ ".", "/", "/.", "./.", "././", "..", "../.", "../..", "DIR/..", "/DIR/../.", "/DIR/../..", ], ), ("DIR", [".", "/DIR", "/DIR/", "../DIR", "../../DIR"]), ] def inner_test_server(self, server, initial_directory, paths): """ Test one server for one initial directory. Connect to the server `server`; if the string argument `initial_directory` has a true value, change to this directory. Then iterate over all strings in the sequence `paths`, comparing the results of a listdir call with the listing from the command line client. """ canonical_names = ftp_client_listing(server, initial_directory) host = ftputil.FTPHost(server, "anonymous", EMAIL) try: host.chdir(initial_directory) for path in paths: path = path.replace("DIR", initial_directory) # Make sure that we don't recycle directory entries, i. e. # really repeatedly retrieve the directory contents (shouldn't # happen anyway with the current implementation). host.stat_cache.clear() names = host.listdir(path) # Filter out "hidden" names since the FTP command line client # won't include them in its listing either. names = [ name for name in names if not ( name.startswith(".") or # The login directory of `ftp.microsoft.com` contains # this "hidden" entry that ftputil finds but not the # FTP command line client. name == "mscomtest" ) ] failure_message = "For server {}, directory {}: {} != {}".format( server, initial_directory, names, canonical_names ) assert names == canonical_names, failure_message finally: host.close() @pytest.mark.slow_test def test_servers(self): """ Test all servers in `self.servers`. For each server, get the listings for the login directory and one other directory which is known to exist. Use different "spellings" to retrieve each list via ftputil and compare with the results gotten with the command line client. """ for server, actual_initial_directory in self.servers: print("=== server:", server) for initial_directory, paths in self.paths_table: initial_directory = initial_directory.replace( "DIR", actual_initial_directory ) print(server, initial_directory) self.inner_test_server(server, initial_directory, paths) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704487629.0 ftputil-5.1.0/test/test_real_ftp.py0000644000175000017500000013665614546065315016456 0ustar00schwaschwa# Copyright (C) 2003-2022, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. # Execute tests on a real FTP server (other tests use mock code). # # These tests write some files and directories on the local client and the # remote server. You'll need write access in the login directory. 
These tests # can take a few minutes because they have to wait to test the timezone # calculation. import datetime import ftplib import functools import gc import operator import os import pathlib import time import stat import pytest import ftputil.error import ftputil.file_transfer import ftputil.path_encoding import ftputil.session import ftputil.stat_cache import test def expected_time_shift(): """ Return the expected time shift in seconds. Unfortunately, the calculation may depend on the timezone of the server, i. e. the timezone used in directory listings coming from the server. So, depending on your test environment, you may need to change this function for your environment. _If_ you need an adapted `expected_time_shift`, please contact me (my e-mail address is in the ftputil documentation). In my particular case, I use Pure-FTPd as FTP server for the integration tests. At some point, it returned listings in the local timezone of the server, later it used UTC time, and now it uses the local timezone again. I wasn't able to find out why or how I can control this. """ raw_time_shift = ( datetime.datetime.now() - datetime.datetime.now(datetime.timezone.utc) ).seconds # To be safe, round the above value to units of 900 s (1/4 hours). return round(raw_time_shift / 900.0) * 900 # The containerized PureFTPd seems to use UTC. EXPECTED_TIME_SHIFT = 0.0 # Set when starting container PORT = 2121 DEFAULT_SESSION_FACTORY = ftputil.session.session_factory( port=PORT, encrypt_data_channel=False ) class Cleaner: """ This class helps remove directories and files which might otherwise be left behind if a test fails in unexpected ways. """ def __init__(self, host): # The test class (probably `RealFTPTest`) and the helper # class share the same `FTPHost` object. self._host = host self._ftp_items = [] def add_dir(self, path): """ Schedule a directory with path `path` for removal. """ self._ftp_items.append(("d", self._host.path.abspath(path))) def add_file(self, path): """ Schedule a file with path `path` for removal. """ self._ftp_items.append(("f", self._host.path.abspath(path))) def clean(self): """ Remove the directories and files previously remembered. The removal works in reverse order of the scheduling with `add_dir` and `add_file`. Errors due to a removal are ignored. """ self._host.chdir("/") for type_, path in reversed(self._ftp_items): try: if type_ == "d": # If something goes wrong in `rmtree` we might leave a mess # behind. self._host.rmtree(path) elif type_ == "f": # Minor mess if `remove` fails self._host.remove(path) except ftputil.error.FTPError: pass class RealFTPTest: def setup_method(self, method): # Server, username, password. self.login_data = ("localhost", "ftptest", "dummy") self.host = ftputil.FTPHost( *self.login_data, session_factory=DEFAULT_SESSION_FACTORY ) self.cleaner = Cleaner(self.host) def teardown_method(self, method): self.cleaner.clean() self.host.close() # # Helper methods # def make_remote_file(self, path): """Create a file on the FTP host.""" self.cleaner.add_file(path) with self.host.open(path, "wb") as file_: # Write something. Otherwise the FTP server might not update the # time of last modification if the file existed before. file_.write(b"\n") def make_local_file(self): """ Create a file on the local host (= on the client side). """ with open("_local_file_", "wb") as fobj: fobj.write(b"abc\x12\x34def\t") class TestMkdir(RealFTPTest): def test_directory_exists_after_mkdir(self): """ After a `mkdir` call, the directory should exist. 
""" host = self.host dir_name = "_testdir_" file_name = host.path.join(dir_name, "_nonempty_") self.cleaner.add_dir(dir_name) # Make dir and check if the directory is there. host.mkdir(dir_name) files = host.listdir(host.curdir) assert dir_name in files def test_makedirs_without_existing_dirs(self): """ If some directories in a `makedirs` call don't exist yet, they should be implcitly created. """ host = self.host # No `_dir1_` yet assert "_dir1_" not in host.listdir(host.curdir) # Vanilla case, all should go well. host.makedirs("_dir1_/dir2/dir3/dir4") self.cleaner.add_dir("_dir1_") # Check host. assert host.path.isdir("_dir1_") assert host.path.isdir("_dir1_/dir2") assert host.path.isdir("_dir1_/dir2/dir3") assert host.path.isdir("_dir1_/dir2/dir3/dir4") def test_makedirs_from_non_root_directory(self): # This is a testcase for issue #22, see # http://ftputil.sschwarzer.net/trac/ticket/22 . host = self.host # No `_dir1_` and `_dir2_` yet assert "_dir1_" not in host.listdir(host.curdir) assert "_dir2_" not in host.listdir(host.curdir) # Part 1: Try to make directories starting from `_dir1_` and # change to non-root directory. self.cleaner.add_dir("_dir1_") host.mkdir("_dir1_") host.chdir("_dir1_") host.makedirs("_dir2_/_dir3_") # Test for expected directory hierarchy. assert host.path.isdir("/_dir1_") assert host.path.isdir("/_dir1_/_dir2_") assert host.path.isdir("/_dir1_/_dir2_/_dir3_") assert not host.path.isdir("/_dir1_/_dir1_") # Remove all but the directory we're in. host.rmdir("/_dir1_/_dir2_/_dir3_") host.rmdir("/_dir1_/_dir2_") # Part 2: Try to make directories starting from root. self.cleaner.add_dir("/_dir2_") host.makedirs("/_dir2_/_dir3_") # Test for expected directory hierarchy assert host.path.isdir("/_dir2_") assert host.path.isdir("/_dir2_/_dir3_") assert not host.path.isdir("/_dir1_/_dir2_") def test_makedirs_of_existing_directory(self): """ Calling `makedirs` on an existing directory with `exist_ok` should do nothing. """ host = self.host # The (chrooted) login directory host.makedirs("/", exist_ok=True) def test_makedirs_with_file_in_the_way(self): """ If a part of `makedirs`'s path is a file, a `PermanentError` should be raised. """ host = self.host self.cleaner.add_dir("_dir1_") host.mkdir("_dir1_") self.make_remote_file("_dir1_/file1") # Try it. with pytest.raises(ftputil.error.PermanentError): host.makedirs("_dir1_/file1") with pytest.raises(ftputil.error.PermanentError): host.makedirs("_dir1_/file1/dir2") def test_makedirs_with_existing_directory(self): """ An already-existent subdirectory in the path of `makedirs` should be ignored. """ host = self.host self.cleaner.add_dir("_dir1_") host.mkdir("_dir1_") host.makedirs("_dir1_/dir2") # Check assert host.path.isdir("_dir1_") assert host.path.isdir("_dir1_/dir2") def test_makedirs_in_non_writable_directory(self): """ If `makedirs` is asked to create a directory in a non-writable directory, a `PermanentError` should be raised. """ host = self.host # Preparation: `rootdir1` exists but is only writable by root. with pytest.raises(ftputil.error.PermanentError): host.makedirs("rootdir1/dir2") def test_makedirs_with_writable_directory_at_end(self): """ If `makedirs` is asked to create a directory in a writable directory under a non-writable directory, the `makedirs` call should succeed. """ host = self.host self.cleaner.add_dir("rootdir2/dir2") # Preparation: `rootdir2` exists but is only writable by root. `dir2` # is writable by regular ftp users. Both directories below should work. 
host.makedirs("rootdir2/dir2", exist_ok=True) host.makedirs("rootdir2/dir2/dir3") assert host.path.isdir("rootdir2/dir2") assert host.path.isdir("rootdir2/dir2/dir3") class TestRemoval(RealFTPTest): # Tests for `remove` def test_remove_existing_file(self): """ Removal of an existing file should succeed. """ self.cleaner.add_file("_testfile_") self.make_remote_file("_testfile_") host = self.host assert host.path.isfile("_testfile_") host.remove("_testfile_") assert not host.path.exists("_testfile_") def test_cache_invalidation_for_remove_exception(self): """ If a file is removed, its stats information should be removed from the cache. This should also work if the removal raises an exception. """ # Test for ticket #150 self.cleaner.add_file("_testfile_") self.make_remote_file("_testfile_") host = self.host assert host.path.isfile("_testfile_") # Monkey-patch session `delete` call. old_delete = host._session.delete def failing_delete(path): # Simulate the case where the file is removed on the server, but a # proper reply doesn't get through to the client. It doesn't matter # whether the exception is `error_temp` or `error_perm`. old_delete(path) raise ftplib.error_perm("simulated error") host._session.delete = failing_delete with pytest.raises(ftputil.error.PermanentError): host.remove("_testfile_") assert not host.path.exists("_testfile_") def test_remove_non_existent_item(self): """ If trying to remove a non-existent file system item, a `PermanentError` should be raised. """ host = self.host with pytest.raises(ftputil.error.PermanentError): host.remove("nonexistent") def test_remove_on_directory(self): """ Calling `remove` on a directory should raise a `PermanentError`. (Directories must be removed with `rmdir`.) """ host = self.host dir_name = "_testdir_" self.cleaner.add_dir(dir_name) host.mkdir(dir_name) try: try: host.remove(dir_name) except ftputil.error.PermanentError as exc: assert str(exc).startswith("remove/unlink can only delete files") else: pytest.fail("we shouldn't have come here") finally: # Delete empty directory. host.rmdir(dir_name) files = host.listdir(host.curdir) assert dir_name not in files # Test for `rmdir` def test_remove_existing_directory(self): """ If `rmdir` is called on an empty directory, the call should succeed. """ self.cleaner.add_dir("_testdir_") host = self.host host.mkdir("_testdir_") assert host.path.isdir("_testdir_") host.rmdir("_testdir_") assert not host.path.exists("_testdir_") def test_rmdir_on_nonempty_directory(self): """ If a directory exists, but isn't empty, `rmdir` should raise a `PermanentError`. """ host = self.host dir_name = "_testdir_" self.cleaner.add_dir(dir_name) host.mkdir(dir_name) # Try to remove a non-empty directory. file_name = host.path.join(dir_name, "_nonempty_") self.cleaner.add_file(file_name) non_empty = host.open(file_name, "w") non_empty.close() with pytest.raises(ftputil.error.PermanentError): host.rmdir(dir_name) def test_cache_invalidation_for_rmdir_exception(self): """ If a directory is removed, its stats information should be removed from the cache. This should also work if the removal raises an exception. """ # Test for ticket #150 self.cleaner.add_dir("_testdir_") host = self.host host.mkdir("_testdir_") assert host.path.isdir("_testdir_") # Monkey-patch session `rmd` call. old_rmd = host._session.rmd def failing_rmd(path): # Simulate the case where the directory is removed on the server, # but a proper reply doesn't get through to the client. 
It doesn't # matter whether the exception is `error_temp` or `error_perm`. old_rmd(path) raise ftplib.error_perm("simulated error") host._session.rmd = failing_rmd with pytest.raises(ftputil.error.PermanentError): host.rmdir("_testdir_") assert not host.path.exists("_testdir_") # Tests for `rmtree` def build_tree(self, host): """ Build a directory tree for tests. """ self.cleaner.add_dir("_dir1_") host.makedirs("_dir1_/dir2") self.make_remote_file("_dir1_/file1") self.make_remote_file("_dir1_/file2") self.make_remote_file("_dir1_/dir2/file3") self.make_remote_file("_dir1_/dir2/file4") def test_rmtree_without_error_handler(self): """ Calling `rmtree` on a directory tree should remove the directory and everything under it. """ host = self.host self.build_tree(host) # Remove `dir2`. host.rmtree("_dir1_/dir2") assert not host.path.exists("_dir1_/dir2") assert host.path.exists("_dir1_/file2") # Re-create `dir2` and remove `_dir1_`. host.mkdir("_dir1_/dir2") self.make_remote_file("_dir1_/dir2/file3") self.make_remote_file("_dir1_/dir2/file4") host.rmtree("_dir1_") assert not host.path.exists("_dir1_") def test_rmtree_with_error_handler(self): """ If an `rmtree` call specifies an error handler, it should be used. """ host = self.host self.cleaner.add_dir("_dir1_") host.mkdir("_dir1_") self.make_remote_file("_dir1_/file1") # Prepare error "handler" log = [] def error_handler(*args): log.append(args) # Try to remove a file as root "directory". host.rmtree("_dir1_/file1", ignore_errors=True, onerror=error_handler) assert log == [] host.rmtree("_dir1_/file1", ignore_errors=False, onerror=error_handler) assert log[0][0] == host.listdir assert log[0][1] == "_dir1_/file1" assert log[1][0] == host.rmdir assert log[1][1] == "_dir1_/file1" host.rmtree("_dir1_") # Try to remove a non-existent directory. del log[:] host.rmtree("_dir1_", ignore_errors=False, onerror=error_handler) assert log[0][0] == host.listdir assert log[0][1] == "_dir1_" assert log[1][0] == host.rmdir assert log[1][1] == "_dir1_" def test_remove_file_with_rmtree(self): """ Calling `rmtree` on a file should raise a `PermanentError`. """ host = self.host self.build_tree(host) with pytest.raises(ftputil.error.PermanentError): host.rmtree("_dir1_/file2") class TestWalk(RealFTPTest): """ Walk the directory tree walk_test โ”œโ”€โ”€ dir1 โ”‚ย ย  โ”œโ”€โ”€ dir11 โ”‚ย ย  โ””โ”€โ”€ dir12 โ”‚ย ย  โ”œโ”€โ”€ dir123 โ”‚ย ย  โ”‚ย ย  โ””โ”€โ”€ file1234 โ”‚ย ย  โ”œโ”€โ”€ file121 โ”‚ย ย  โ””โ”€โ”€ file122 โ”œโ”€โ”€ dir2 โ”œโ”€โ”€ dir3 โ”‚ย ย  โ”œโ”€โ”€ dir31 โ”‚ย ย  โ”œโ”€โ”€ dir32 -> ../dir1/dir12/dir123 โ”‚ย ย  โ”œโ”€โ”€ file31 โ”‚ย ย  โ””โ”€โ”€ file32 โ””โ”€โ”€ file4 and check if the results are the expected ones. """ def _walk_test(self, expected_result, **walk_kwargs): """ Walk the directory and test results. """ # Collect data using `walk`. actual_result = [] for items in self.host.walk(**walk_kwargs): actual_result.append(items) # Compare with expected results. assert len(actual_result) == len(expected_result) for index, _ in enumerate(actual_result): assert actual_result[index] == expected_result[index] def test_walk_topdown(self): # Preparation: build tree in directory `walk_test`. 
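        # `FTPHost.walk` mirrors `os.walk`: it yields one
        # `(dirpath, dirnames, filenames)` tuple per visited directory, which
        # is exactly the shape of the entries in `expected_result` below.
        # Illustrative sketch only (not executed as part of this test):
        #
        #     for dirpath, dirnames, filenames in self.host.walk("walk_test"):
        #         print(dirpath, dirnames, filenames)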
expected_result = [ ("walk_test", ["dir1", "dir2", "dir3"], ["file4"]), # ("walk_test/dir1", ["dir11", "dir12"], []), # ("walk_test/dir1/dir11", [], []), # ("walk_test/dir1/dir12", ["dir123"], ["file121", "file122"]), # ("walk_test/dir1/dir12/dir123", [], ["file1234"]), # ("walk_test/dir2", [], []), # ("walk_test/dir3", ["dir31", "dir32"], ["file31", "file32"]), # ("walk_test/dir3/dir31", [], []), ] self._walk_test(expected_result, top="walk_test") def test_walk_depth_first(self): # Preparation: build tree in directory `walk_test` expected_result = [ ("walk_test/dir1/dir11", [], []), # ("walk_test/dir1/dir12/dir123", [], ["file1234"]), # ("walk_test/dir1/dir12", ["dir123"], ["file121", "file122"]), # ("walk_test/dir1", ["dir11", "dir12"], []), # ("walk_test/dir2", [], []), # ("walk_test/dir3/dir31", [], []), # ("walk_test/dir3", ["dir31", "dir32"], ["file31", "file32"]), # ("walk_test", ["dir1", "dir2", "dir3"], ["file4"]), ] self._walk_test(expected_result, top="walk_test", topdown=False) def test_walk_following_links(self): # Preparation: build tree in directory `walk_test`. expected_result = [ ("walk_test", ["dir1", "dir2", "dir3"], ["file4"]), # ("walk_test/dir1", ["dir11", "dir12"], []), # ("walk_test/dir1/dir11", [], []), # ("walk_test/dir1/dir12", ["dir123"], ["file121", "file122"]), # ("walk_test/dir1/dir12/dir123", [], ["file1234"]), # ("walk_test/dir2", [], []), # ("walk_test/dir3", ["dir31", "dir32"], ["file31", "file32"]), # ("walk_test/dir3/dir31", [], []), # ("walk_test/dir3/dir32", [], ["file1234"]), ] self._walk_test(expected_result, top="walk_test", followlinks=True) class TestRename(RealFTPTest): def test_rename(self): """ If `rename` is called on two paths, the first should disappear and the second should exist afterward. This test also tests `Path` support. """ host = self.host # Make sure both files are gone after the test. self.cleaner.add_file("_testfile1_") self.cleaner.add_file("_testfile2_") self.make_remote_file("_testfile1_") host.rename(pathlib.Path("_testfile1_"), "_testfile2_") assert not host.path.exists("_testfile1_") assert host.path.exists(pathlib.Path("_testfile2_")) def test_cache_invalidation_for_rename(self): """ A `rename` call should invalidate the cache entries for both path arguments. The purpose of the first invalidation is obvious, but the second invalidation is also important in case the path of the target existed before the rename and changes according to the new file stat information. This test also tests `Path` support. """ # Test for ticket #150 host = self.host # Make sure both files are gone after the test. self.cleaner.add_file("_testfile1_") self.cleaner.add_file("_testfile2_") # Case 1: Target file doesn't exist yet. self.make_remote_file("_testfile1_") file1_stat = host.stat("_testfile1_") host.rename(pathlib.Path("_testfile1_"), "_testfile2_") assert not host.path.exists("_testfile1_") assert host.path.exists(pathlib.Path("_testfile2_")) # Case 2: Target file already exists. # Write the source file with a size different from the target file, so # we can check whether we find the old or the new stat information # when stat'ing the target file after the rename. 
with host.open("_testfile1_", "w") as fobj: fobj.write("abcdef\n") self.make_remote_file("_testfile2_") file1_stat = host.stat("_testfile1_") file2_stat = host.stat("_testfile2_") host.rename(pathlib.Path("_testfile1_"), "_testfile2_") assert not host.path.exists("_testfile1_") assert host.path.exists(pathlib.Path("_testfile2_")) new_file2_stat = host.stat("_testfile2_") assert new_file2_stat.st_size > file2_stat.st_size def test_cache_invalidation_for_rename_exception(self): """ If a file system item is renamed/moved, its stats information should be removed from the cache. This should also work if the rename operation raises an exception. """ # Test for ticket #150 host = self.host # Make sure the target of the renaming operation is removed later. # Make sure both files are gone after the test. self.cleaner.add_file("_testfile1_") self.cleaner.add_file("_testfile2_") # Write the source file with a size different from the target file, so # we can check whether we find the old or the new stat information # when stat'ing the target file after the rename. with host.open("_testfile1_", "w") as fobj: fobj.write("abcdef\n") self.make_remote_file("_testfile2_") file1_stat = host.stat("_testfile1_") file2_stat = host.stat("_testfile2_") # Monkey-patch session `rename` call. old_rename = host._session.rename def failing_rename(source, target): # Simulate the case where the rename completely or partially # succeeds on the server, but a proper reply doesn't get through to # the client. It doesn't matter whether the exception is # `error_temp` or `error_perm`. old_rename(source, target) raise ftplib.error_perm("simulated error") host._session.rename = failing_rename # with pytest.raises(ftputil.error.PermanentError): host.rename(pathlib.Path("_testfile1_"), "_testfile2_") assert not host.path.exists("_testfile1_") assert host.path.exists(pathlib.Path("_testfile2_")) new_file2_stat = host.stat("_testfile2_") assert new_file2_stat.st_size > file2_stat.st_size def test_rename_with_spaces_in_directory(self): """ `rename` should work if source and target contain a directory with spaces in its name. This test also tests `Path` support. """ host = self.host dir_name = "_dir with spaces_" self.cleaner.add_dir(dir_name) host.mkdir(pathlib.Path(dir_name)) self.make_remote_file(dir_name + "/testfile1") host.rename(dir_name + "/testfile1", pathlib.Path(dir_name + "/testfile2")) assert not host.path.exists(dir_name + "/testfile1") assert host.path.exists(dir_name + "/testfile2") class TestStat(RealFTPTest): def test_stat(self): """ Test some stat-related calls. """ host = self.host dir_name = "_testdir_" file_name = host.path.join(dir_name, "_nonempty_") # Make a directory and a file in it. self.cleaner.add_dir(dir_name) host.mkdir(dir_name) with host.open(file_name, "wb") as fobj: fobj.write(b"abc\x12\x34def\t") # Do some stats # - dir dir_stat = host.stat(dir_name) assert isinstance(dir_stat._st_name, str) assert host.listdir(dir_name) == ["_nonempty_"] assert host.path.isdir(dir_name) assert not host.path.isfile(dir_name) assert not host.path.islink(dir_name) # - file file_stat = host.stat(file_name) assert isinstance(file_stat._st_name, str) assert not host.path.isdir(file_name) assert host.path.isfile(file_name) assert not host.path.islink(file_name) assert host.path.getsize(file_name) == 9 # - file's modification time host.synchronize_times() # The returned server mtime is supposed to be converted to UTC, so # there should be only a small difference between server and client # time. 
Arbitrarily allow two minutes here to account for limited time # precision from parsing the directory. server_mtime = host.path.getmtime(file_name) client_mtime = datetime.datetime.now(datetime.timezone.utc).timestamp() assert not (client_mtime - server_mtime > 120) def test_issomething_for_nonexistent_directory(self): """ If the `is*` methods are called on a path where the base directory doesn't exist, all of the methods should return `False`. """ host = self.host # Check if we get the right results if even the containing directory # doesn't exist (see ticket #66). nonexistent_path = "/nonexistent/nonexistent" assert not host.path.isdir(nonexistent_path) assert not host.path.isfile(nonexistent_path) assert not host.path.islink(nonexistent_path) def test_special_broken_link(self): """ If `is*` methods are called on a broken link (i.e. the link target doesn't exist), `islink` should return `True` and `isdir` and `isfile` should return `False`. """ # Test for ticket #39. host = self.host broken_link_name = os.path.join("dir_with_broken_link", "nonexistent") assert host.lstat(broken_link_name)._st_target == "../nonexistent/nonexistent" assert not host.path.isdir(broken_link_name) assert not host.path.isfile(broken_link_name) assert host.path.islink(broken_link_name) def test_concurrent_access(self): """ If we have two `FTPHost` instances for the same server directory, operations on one `FTPHost` instance shouldn't influence the cache entries of the other `FTPHost` instance. """ self.make_remote_file("_testfile_") with ftputil.FTPHost( *self.login_data, session_factory=DEFAULT_SESSION_FACTORY ) as host1: with ftputil.FTPHost( *self.login_data, session_factory=DEFAULT_SESSION_FACTORY ) as host2: stat_result1 = host1.stat("_testfile_") stat_result2 = host2.stat("_testfile_") assert stat_result1 == stat_result2 host2.remove("_testfile_") # Can still get the result via `host1` stat_result1 = host1.stat("_testfile_") assert stat_result1 == stat_result2 # Stat'ing on `host2` gives an exception. with pytest.raises(ftputil.error.PermanentError): host2.stat("_testfile_") # Stat'ing on `host1` after invalidation absolute_path = host1.path.join(host1.getcwd(), "_testfile_") host1.stat_cache.invalidate(absolute_path) with pytest.raises(ftputil.error.PermanentError): host1.stat("_testfile_") def test_cache_auto_resizing(self): """ Test if the cache is resized appropriately. """ host = self.host cache = host.stat_cache._cache # Make sure the cache size isn't adjusted towards smaller values. unused_entries = host.listdir("walk_test") assert cache.size == ftputil.stat_cache.StatCache._DEFAULT_CACHE_SIZE # Make the cache very small initially and see if it gets resized. cache.size = 2 entries = host.listdir("walk_test") # The adjusted cache size should be larger or equal to the number of # items in `walk_test` and its parent directory. The latter is read # implicitly upon `listdir`'s `isdir` call. expected_min_cache_size = max(len(host.listdir(host.curdir)), len(entries)) assert cache.size >= expected_min_cache_size class TestUploadAndDownload(RealFTPTest): """ Test upload and download (including time shift test). """ def test_time_shift(self): self.host.synchronize_times() assert self.host.time_shift() == EXPECTED_TIME_SHIFT @pytest.mark.slow_test def test_upload(self): """ `upload_if_newer` should respect the time difference between client and server, including consideration of the time shift. 
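
        Illustrative call pattern (a minimal sketch, not executed by this
        test; the file names are placeholders):

            host.synchronize_times()
            uploaded = host.upload_if_newer("local_name", "remote_name")
            # `uploaded` is `True` only if the file was actually transferred.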
""" host = self.host host.synchronize_times() local_file = "_local_file_" remote_file = "_remote_file_" # Make local file to upload. self.make_local_file() # Wait, else small time differences between client and server actually # could trigger the update. time.sleep(65) try: self.cleaner.add_file(remote_file) host.upload(local_file, remote_file) # Retry; shouldn't be uploaded uploaded = host.upload_if_newer(local_file, remote_file) assert uploaded is False # Rewrite the local file. self.make_local_file() # Retry; should be uploaded now uploaded = host.upload_if_newer(local_file, remote_file) assert uploaded is True finally: # Clean up os.unlink(local_file) @pytest.mark.slow_test def test_download(self): """ `download_if_newer` should respect the time difference between client and server, including consideration of the time shift. """ host = self.host host.synchronize_times() local_file = "_local_file_" remote_file = "_remote_file_" # Make a remote file. self.make_remote_file(remote_file) # File should be downloaded as it's not present yet. downloaded = host.download_if_newer(remote_file, local_file) assert downloaded is True try: # If the remote file, taking the datetime precision into account, # _might_ be newer, the file will be downloaded again. To prevent # this, wait a bit over a minute (the remote precision), then # "touch" the local file. time.sleep(65) # Create empty file. with open(local_file, "w") as fobj: pass # Local file is present and newer, so shouldn't download. downloaded = host.download_if_newer(remote_file, local_file) assert downloaded is False # Re-make the remote file. self.make_remote_file(remote_file) # Local file is present but possibly older (taking the possible # deviation because of the precision into account), so should # download. downloaded = host.download_if_newer(remote_file, local_file) assert downloaded is True finally: # Clean up. os.unlink(local_file) def test_callback_with_transfer(self): """ A `callback` argument should be called during a file transfer. """ host = self.host FILE_NAME = "large_file" # Default chunk size as in `FTPHost.copyfileobj` MAX_COPY_CHUNK_SIZE = ftputil.file_transfer.MAX_COPY_CHUNK_SIZE file_size = host.path.getsize(FILE_NAME) chunk_count, _ = divmod(file_size, MAX_COPY_CHUNK_SIZE) # Add one chunk for remainder. chunk_count += 1 # Define a callback that just collects all data passed to it. transferred_chunks_list = [] def test_callback(chunk): transferred_chunks_list.append(chunk) try: host.download(FILE_NAME, FILE_NAME, callback=test_callback) # Construct a list of data chunks we expect. expected_chunks_list = [] with open(FILE_NAME, "rb") as downloaded_fobj: while True: chunk = downloaded_fobj.read(MAX_COPY_CHUNK_SIZE) if not chunk: break expected_chunks_list.append(chunk) # Examine data collected by callback function. assert len(transferred_chunks_list) == chunk_count assert transferred_chunks_list == expected_chunks_list finally: os.unlink(FILE_NAME) class TestFTPFiles(RealFTPTest): def test_only_closed_children(self): """ If `FTPHost.open` needs an `FTPFile` object, an existing closed `FTPFile` should be reused. """ REMOTE_FILE_NAME = "CONTENTS" host = self.host with host.open(REMOTE_FILE_NAME, "rb") as file_obj1: # Create empty file and close it. with host.open(REMOTE_FILE_NAME, "rb") as file_obj2: pass # This should re-use the second child because the first isn't # closed but the second is. 
with host.open(REMOTE_FILE_NAME, "rb") as file_obj: assert len(host._children) == 2 assert file_obj._host is host._children[1] def test_no_timed_out_children(self): """ `FTPHost.open` shouldn't use an `FTPFile` object that has timed out. """ REMOTE_FILE_NAME = "CONTENTS" host = self.host # Implicitly create child host object. with host.open(REMOTE_FILE_NAME, "rb") as file_obj1: pass # Monkey-patch file to simulate an FTP server timeout below. def timed_out_pwd(): raise ftplib.error_temp("simulated timeout") file_obj1._host._session.pwd = timed_out_pwd # Try to get a file - which shouldn't be the timed-out file. with host.open(REMOTE_FILE_NAME, "rb") as file_obj2: assert file_obj1 is not file_obj2 # Re-use closed and not timed-out child session. with host.open(REMOTE_FILE_NAME, "rb") as file_obj3: pass assert file_obj2 is file_obj3 def test_no_delayed_226_children(self): """ `FTPHost.open` shouldn't use an `FTPFile` object that gave an 226 error reply. """ REMOTE_FILE_NAME = "CONTENTS" host = self.host # Implicitly create child host object. with host.open(REMOTE_FILE_NAME, "rb") as file_obj1: pass # Monkey-patch file to simulate an FTP server timeout below. def timed_out_pwd(): raise ftplib.error_reply("delayed 226 reply") file_obj1._host._session.pwd = timed_out_pwd # Try to get a file - which shouldn't be the timed-out file. with host.open(REMOTE_FILE_NAME, "rb") as file_obj2: assert file_obj1 is not file_obj2 # Re-use closed and not timed-out child session. with host.open(REMOTE_FILE_NAME, "rb") as file_obj3: pass assert file_obj2 is file_obj3 class TestChmod(RealFTPTest): def assert_mode(self, path, expected_mode): """ Return an integer containing the allowed bits in the mode change command. The `FTPHost` object to test against is `self.host`. """ full_mode = self.host.stat(path).st_mode # Remove flags we can't set via `chmod`. Allowed flags according to # Python documentation: https://docs.python.org/library/stat.html allowed_flags = [ stat.S_ISUID, stat.S_ISGID, stat.S_ENFMT, stat.S_ISVTX, stat.S_IREAD, stat.S_IWRITE, stat.S_IEXEC, stat.S_IRWXU, stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR, stat.S_IRWXG, stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP, stat.S_IRWXO, stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH, ] allowed_mask = functools.reduce(operator.or_, allowed_flags) mode = full_mode & allowed_mask assert mode == expected_mode, "mode {0:o} != {1:o}".format(mode, expected_mode) def test_chmod_existing_directory(self): """ A `chmod` operation on a writable existing directory should succeed. """ host = self.host host.mkdir("_test dir_") self.cleaner.add_dir("_test dir_") # Set/get mode of a directory. host.chmod("_test dir_", 0o757) self.assert_mode("_test dir_", 0o757) # Set/get mode in nested directory. host.mkdir("_test dir_/nested_dir") self.cleaner.add_dir("_test dir_/nested_dir") host.chmod("_test dir_/nested_dir", 0o757) self.assert_mode("_test dir_/nested_dir", 0o757) def test_chmod_existing_file(self): """ A `chmod` operation on a writable existing file should succeed. """ host = self.host host.mkdir("_test dir_") self.cleaner.add_dir("_test dir_") # Set/get mode on a file. file_name = host.path.join("_test dir_", "_testfile_") self.make_remote_file(file_name) host.chmod(file_name, 0o646) self.assert_mode(file_name, 0o646) def test_chmod_nonexistent_path(self): """ If `chmod` is called on a non-existent path, a `PermanentError` should be raised. """ # Set/get mode of a non-existing item. 
with pytest.raises(ftputil.error.PermanentError): self.host.chmod("nonexistent", 0o757) def test_cache_invalidation(self): """ If the mode of a directory or file is changed, make sure that the cache doesn't contain stat entries before the `chmod` call. """ host = self.host host.mkdir("_test dir_") self.cleaner.add_dir("_test dir_") # Make sure the mode is in the cache. unused_stat_result = host.stat("_test dir_") # Set/get mode of the directory. host.chmod("_test dir_", 0o757) self.assert_mode("_test dir_", 0o757) # Set/get mode on a file. file_name = host.path.join("_test dir_", "_testfile_") self.make_remote_file(file_name) # Make sure the mode is in the cache. unused_stat_result = host.stat(file_name) host.chmod(file_name, 0o646) self.assert_mode(file_name, 0o646) class TestRestArgument(RealFTPTest): TEST_FILE_NAME = "rest_test" def setup_method(self, method): super().setup_method(method) # Write test file. with self.host.open(self.TEST_FILE_NAME, "wb") as fobj: fobj.write(b"abcdefghijkl") self.cleaner.add_file(self.TEST_FILE_NAME) def test_for_reading(self): """ If a `rest` argument is passed to `open`, the following read operation should start at the byte given by `rest`. """ with self.host.open(self.TEST_FILE_NAME, "rb", rest=3) as fobj: data = fobj.read() assert data == b"defghijkl" def test_for_writing(self): """ If a `rest` argument is passed to `open`, the following write operation should start writing at the byte given by `rest`. """ with self.host.open(self.TEST_FILE_NAME, "wb", rest=3) as fobj: fobj.write(b"123") with self.host.open(self.TEST_FILE_NAME, "rb") as fobj: data = fobj.read() assert data == b"abc123" def test_invalid_read_from_text_file(self): """ If the `rest` argument is used for reading from a text file, a `CommandNotImplementedError` should be raised. """ with pytest.raises(ftputil.error.CommandNotImplementedError): self.host.open(self.TEST_FILE_NAME, "r", rest=3) def test_invalid_write_to_text_file(self): """ If the `rest` argument is used for reading from a text file, a `CommandNotImplementedError` should be raised. """ with pytest.raises(ftputil.error.CommandNotImplementedError): self.host.open(self.TEST_FILE_NAME, "w", rest=3) # There are no tests for reading and writing beyond the end of a file. For # example, if the remote file is 10 bytes long and # `open(remote_file, "rb", rest=100)` is used, the server may return an # error status code or not. # # The server I use for testing returns a 554 status when attempting to # _read_ beyond the end of the file. On the other hand, if attempting to # _write_ beyond the end of the file, the server accepts the request, but # starts writing after the end of the file, i. e. appends to the file. # # Instead of expecting certain responses that may differ between server # implementations, I leave the bahavior for too large `rest` arguments # undefined. In practice, this shouldn't be a problem because the `rest` # argument should only be used for error recovery, and in this case a valid # byte count for the `rest` argument should be known. class TestOther(RealFTPTest): def test_open_for_reading(self): # Test for issues #17 and #51, # http://ftputil.sschwarzer.net/trac/ticket/17 and # http://ftputil.sschwarzer.net/trac/ticket/51 . file1 = self.host.open("large_file", "rb") time.sleep(1) # Depending on the FTP server, this might return a status code # unexpected by `ftplib` or block the socket connection until a # server-side timeout. 
file1.close() def test_subsequent_reading(self): """ An `FTPHost.open` call should reuse a closed `FTPFile`. """ # Open a file for reading. with self.host.open("CONTENTS", "rb") as file1: pass # Make sure that there are no problems if the connection is reused. with self.host.open("CONTENTS", "rb") as file2: pass assert file1._session is file2._session def test_names_with_spaces(self): """ Directory and file names with spaces in them shouldn't cause problems. (We've seen problems here with some servers before `_robust_ftp_command` was introduced.) """ host = self.host assert host.path.isdir("dir with spaces") assert host.listdir("dir with spaces") == [ "second dir", "some file", "some_file", ] assert host.path.isdir("dir with spaces/second dir") assert host.path.isfile("dir with spaces/some_file") assert host.path.isfile("dir with spaces/some file") def test_synchronize_times_without_write_access(self): """ If the current directory isn't writable, `synchronize_times` should raise a `TimeShiftError`. """ host = self.host # This isn't writable by the ftp account the tests are run under. host.chdir("rootdir1") with pytest.raises(ftputil.error.TimeShiftError): host.synchronize_times() def test_encoding(self): """ If the session factory is requested with a specific encoding, the session from the factory should use this requested encoding. """ for encoding in ["latin-1", "UTF-8"]: factory = ftputil.session.session_factory(port=PORT, encoding=encoding) session = factory(*self.login_data) assert session.encoding == encoding def test_listdir_with_non_ascii_byte_string(self): """ `listdir` should accept byte strings with non-ASCII characters and return non-ASCII characters in directory or file names. """ host = self.host path = "รคbc".encode("UTF-8") names = host.listdir(path) assert names[0] == b"file1" assert names[1] == "file1_รถ".encode("UTF-8") def test_listdir_with_non_ascii_unicode_string(self): """ `listdir` should accept unicode strings with non-ASCII characters and return non-ASCII characters in directory or file names. """ host = self.host # `ftplib` under Python 3 only works correctly if the unicode strings # are decoded from the `ftplib` default encoding. For Python 3.9 and up # the `encode`/`decode` combination is a no-op. path = "รคbc".encode("UTF-8").decode( ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING ) names = host.listdir(path) assert names[0] == "file1" assert names[1] == "file1_รถ".encode("UTF-8").decode( ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING ) @pytest.mark.skipif( ftputil.path_encoding.RUNNING_UNDER_PY39_AND_UP, reason="test applies only to `FTPHost` objects using 'latin-1' path encoding", ) def test_path_with_non_latin1_unicode_string_below_python_3_9(self): """ ftputil operations shouldn't accept file paths with non-latin1 characters. """ # Use some musical symbols. These are certainly not latin1. ;-) path = "๐„ž๐„ข" # `UnicodeEncodeError` is also the exception that `ftplib` raises if it # gets a non-latin1 path. with pytest.raises(UnicodeEncodeError): self.host.mkdir(path) def test_list_a_option(self): """ If `use_list_a_option` is set to `False`, "hidden" files shouldn't be included in the `listdir` result. If `use_list_a_option` is set to `True`, "hidden" files should be included in the `listdir` result. Whether this works as described, depends on the FTP server and its configuration. """ # For this test to pass, the server must _not_ list "hidden" files by # default but instead only when the `LIST` `-a` option is used. 
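        # `use_list_a_option` controls whether ftputil requests directory
        # listings with "LIST -a" (which usually includes "hidden" entries)
        # instead of a plain "LIST".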
host = self.host assert not host.use_list_a_option directory_entries = host.listdir(host.curdir) assert ".hidden" not in directory_entries # Switch on showing of hidden paths. host.use_list_a_option = True directory_entries = host.listdir(host.curdir) assert ".hidden" in directory_entries def _make_objects_to_be_garbage_collected(self): for _ in range(10): with ftputil.FTPHost( *self.login_data, session_factory=DEFAULT_SESSION_FACTORY ) as host: for _ in range(10): unused_stat_result = host.stat("CONTENTS") with host.open("CONTENTS") as fobj: unused_data = fobj.read() def test_garbage_collection(self): """ Test whether there are cycles which prevent garbage collection. """ gc.collect() objects_before_test = len(gc.garbage) self._make_objects_to_be_garbage_collected() gc.collect() objects_after_test = len(gc.garbage) assert not objects_after_test - objects_before_test ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704483088.0 ftputil-5.1.0/test/test_session.py0000644000175000017500000001760114546054420016324 0ustar00schwaschwa# Copyright (C) 2014-2024, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. """ Unit tests for session factory helpers. """ import ftplib import functools import sys import pytest import ftputil.path_encoding import ftputil.session import ftputil.tool UTF8_FEAT_STRING = " UTF8\r\n" # Inherit from `ftplib.FTP` to get past the subclass check in # `ftputil.session.session_factory`. class MockSession(ftplib.FTP): """ Mock session base class to determine if all expected calls have happened. """ encoding = ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING def __init__(self, encoding=None, feat_command_output=""): self.calls = [] if encoding is not None: self.encoding = encoding self._feat_command_output = feat_command_output def add_call(self, *args): self.calls.append(args) def connect(self, host, port): self.add_call("connect", host, port) def login(self, user, password): self.add_call("login", user, password) def sendcmd(self, command): self.add_call("sendcmd", command) if command == "FEAT": return self._feat_command_output else: # Dummy return "" def set_debuglevel(self, value): self.add_call("set_debuglevel", value) def set_pasv(self, flag): self.add_call("set_pasv", flag) class EncryptedMockSession(MockSession): def auth_tls(self): self.add_call("auth_tls") def prot_p(self): self.add_call("prot_p") class TestSessionFactory: """ Test if session factories created by `ftputil.session.session_factory` trigger the expected calls. """ @staticmethod def _expected_session_calls_for_encoding_handling(encoding, feat_command_output): """ Return the FTP session calls that the session factory should perform. """ if encoding is None: encoding = ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING if encoding.upper() == "UTF-8": if feat_command_output == UTF8_FEAT_STRING: return [("sendcmd", "FEAT"), ("sendcmd", "OPTS UTF8 ON")] else: return [("sendcmd", "FEAT")] else: return [] def test_defaults(self): """ Test defaults (apart from base class). """ factory = ftputil.session.session_factory(base_class=MockSession) session = factory("host", "user", "password") assert session.calls == [ ("connect", "host", 21), ("login", "user", "password"), ] + self._expected_session_calls_for_encoding_handling(None, "") def test_different_port(self): """ Test setting the command channel port with `port`. 
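
        Illustrative use of such a factory with `ftputil.FTPHost` (a sketch
        with placeholder connection data, not executed here):

            factory = ftputil.session.session_factory(port=2121)
            host = ftputil.FTPHost("server", "user", "password",
                                   session_factory=factory)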
""" factory = ftputil.session.session_factory(base_class=MockSession, port=2121) session = factory("host", "user", "password") assert session.calls == [ ("connect", "host", 2121), ("login", "user", "password"), ] + self._expected_session_calls_for_encoding_handling(None, "") def test_use_passive_mode(self): """ Test explicitly setting passive/active mode with `use_passive_mode`. """ # Passive mode factory = ftputil.session.session_factory( base_class=MockSession, use_passive_mode=True ) session = factory("host", "user", "password") assert session.calls == [ ("connect", "host", 21), ("login", "user", "password"), ("set_pasv", True), ] + self._expected_session_calls_for_encoding_handling(None, "") # Active mode factory = ftputil.session.session_factory( base_class=MockSession, use_passive_mode=False ) session = factory("host", "user", "password") assert session.calls == [ ("connect", "host", 21), ("login", "user", "password"), ("set_pasv", False), ] + self._expected_session_calls_for_encoding_handling(None, "") def test_encrypt_data_channel(self): """ Test request to call `prot_p` with `encrypt_data_channel`. """ # With encrypted data channel (default for encrypted session). factory = ftputil.session.session_factory(base_class=EncryptedMockSession) session = factory("host", "user", "password") assert session.calls == [ ("connect", "host", 21), ("login", "user", "password"), ("prot_p",), ] + self._expected_session_calls_for_encoding_handling(None, "") # factory = ftputil.session.session_factory( base_class=EncryptedMockSession, encrypt_data_channel=True ) session = factory("host", "user", "password") assert session.calls == [ ("connect", "host", 21), ("login", "user", "password"), ("prot_p",), ] + self._expected_session_calls_for_encoding_handling(None, "") # Without encrypted data channel. factory = ftputil.session.session_factory( base_class=EncryptedMockSession, encrypt_data_channel=False ) session = factory("host", "user", "password") assert session.calls == [ ("connect", "host", 21), ("login", "user", "password"), ] + self._expected_session_calls_for_encoding_handling(None, "") @pytest.mark.parametrize( "encoding, feat_command_output, expected_encoding, expected_session_calls_for_encoding_handling", [ # Expected session calls for the first two tuples depend on the # Python version and are determined in the code below. # # For the `FEAT` command output we consider only wether the " UTF8" # string is present. A real `FEAT` response from a server would be # more complicated. (None, "", ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING, None), ( None, UTF8_FEAT_STRING, ftputil.path_encoding.FTPLIB_DEFAULT_ENCODING, None, ), ("UTF-8", "", "UTF-8", [("sendcmd", "FEAT")]), ( "UTF-8", UTF8_FEAT_STRING, "UTF-8", [("sendcmd", "FEAT"), ("sendcmd", "OPTS UTF8 ON")], ), ("latin1", "", "latin1", []), ("latin1", UTF8_FEAT_STRING, "latin1", []), ], ) def test_encoding( self, encoding, feat_command_output, expected_encoding, expected_session_calls_for_encoding_handling, ): """ If `encoding` has the given values, the result should be the same as documented. """ # Special handling for default encoding. 
base_class = functools.partial( MockSession, feat_command_output=feat_command_output ) factory = ftputil.session.session_factory( base_class=MockSession, encoding=encoding, ) session = factory("host", "user", "password") expected_session_calls = [ ("connect", "host", 21), ("login", "user", "password"), ] + self._expected_session_calls_for_encoding_handling( encoding, feat_command_output ) assert session.encoding == expected_encoding def test_debug_level(self): """ Test setting the debug level on the session. """ factory = ftputil.session.session_factory(base_class=MockSession, debug_level=1) session = factory("host", "user", "password") assert session.calls == [ ("connect", "host", 21), ("set_debuglevel", 1), ("login", "user", "password"), ] + self._expected_session_calls_for_encoding_handling(None, "") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704487629.0 ftputil-5.1.0/test/test_stat.py0000644000175000017500000011403014546065315015613 0ustar00schwaschwa# Copyright (C) 2003-2019, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import datetime import ftplib import stat import time import freezegun import pytest import ftputil import ftputil.error import ftputil.stat from ftputil.stat import MINUTE_PRECISION, DAY_PRECISION, UNKNOWN_PRECISION from test import test_base from test import scripted_session Call = scripted_session.Call # Special value to handle special case of datetimes before the epoch. EPOCH = time.gmtime(0)[:6] def stat_tuple_to_seconds(t): """ Return a float number representing the UTC timestamp from the six-element tuple `t`. """ assert len(t) == 6, "need a six-element tuple (year, month, day, hour, min, sec)" # Do _not_ apply `time.mktime` to the `EPOCH` value below. On some # platforms (e. g. Windows) this might cause an `OverflowError`. if t == EPOCH: return 0.0 else: return datetime.datetime(*t, tzinfo=datetime.timezone.utc).timestamp() class TestParsers: # # Helper methods # def _test_valid_lines(self, parser_class, lines, expected_stat_results): parser = parser_class() for line, expected_stat_result in zip(lines, expected_stat_results): # Convert to list to compare with the list `expected_stat_results`. parse_result = parser.parse_line(line, time_shift=5 * 60 * 60) stat_result = list(parse_result) + [ parse_result._st_mtime_precision, parse_result._st_name, parse_result._st_target, ] # Convert time tuple to seconds. expected_stat_result[8] = stat_tuple_to_seconds(expected_stat_result[8]) # Compare lists. assert stat_result == expected_stat_result def _test_invalid_lines(self, parser_class, lines): parser = parser_class() for line in lines: with pytest.raises(ftputil.error.ParserError): parser.parse_line(line) def _expected_year(self): """ Return the expected year for the second line in the listing in `test_valid_unix_lines`. """ # If in this year it's after Dec 19, 23:11, use the current year, else # use the previous year. This datetime value corresponds to the # hard-coded value in the string lists below. 
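        # Worked example: if the tests run on 2024-03-01, a listing timestamp
        # of "Dec 19 23:11" (without a year) can't lie in the future, so it
        # must refer to December 2023, i. e. the _previous_ year.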
client_datetime = datetime.datetime.now(datetime.timezone.utc) server_datetime_candidate = client_datetime.replace( month=12, day=19, hour=23, minute=11, second=0 ) if server_datetime_candidate > client_datetime: return server_datetime_candidate.year - 1 else: return server_datetime_candidate.year # # Unix parser # def test_valid_unix_lines(self): lines = [ "drwxr-sr-x 2 45854 200 512 May 4 2000 " "chemeng link -> chemeng target", # The year value for this line will change with the actual time. "-rw-r--r-- 1 45854 200 4604 Dec 19 23:11 index.html", "drwxr-sr-x 2 45854 200 512 Jan 01 2000 os2", "---------- 2 45854 200 512 May 29 2000 some_file", "lrwxrwxrwx 2 45854 200 512 May 29 2000 osup -> " "../os2", ] # Note that the time shift is also subtracted from the datetimes that # have only day precision, i. e. a year but no time. expected_stat_results = [ [ 17901, None, None, 2, "45854", "200", 512, None, (2000, 5, 3, 19, 0, 0), None, DAY_PRECISION, "chemeng link", "chemeng target", ], [ 33188, None, None, 1, "45854", "200", 4604, None, (self._expected_year(), 12, 19, 18, 11, 0), None, MINUTE_PRECISION, "index.html", None, ], [ 17901, None, None, 2, "45854", "200", 512, None, (1999, 12, 31, 19, 0, 0), None, DAY_PRECISION, "os2", None, ], [ 32768, None, None, 2, "45854", "200", 512, None, (2000, 5, 28, 19, 0, 0), None, DAY_PRECISION, "some_file", None, ], [ 41471, None, None, 2, "45854", "200", 512, None, (2000, 5, 28, 19, 0, 0), None, DAY_PRECISION, "osup", "../os2", ], ] self._test_valid_lines(ftputil.stat.UnixParser, lines, expected_stat_results) def test_alternative_unix_format(self): # See http://ftputil.sschwarzer.net/trac/ticket/12 for a description # for the need for an alternative format. lines = [ "drwxr-sr-x 2 200 512 May 4 2000 " "chemeng link -> chemeng target", # The year value for this line will change with the actual time. "-rw-r--r-- 1 200 4604 Dec 19 23:11 index.html", "drwxr-sr-x 2 200 512 May 29 2000 os2", "lrwxrwxrwx 2 200 512 May 29 2000 osup -> ../os2", ] expected_stat_results = [ [ 17901, None, None, 2, None, "200", 512, None, (2000, 5, 3, 19, 0, 0), None, DAY_PRECISION, "chemeng link", "chemeng target", ], [ 33188, None, None, 1, None, "200", 4604, None, (self._expected_year(), 12, 19, 18, 11, 0), None, MINUTE_PRECISION, "index.html", None, ], [ 17901, None, None, 2, None, "200", 512, None, (2000, 5, 28, 19, 0, 0), None, DAY_PRECISION, "os2", None, ], [ 41471, None, None, 2, None, "200", 512, None, (2000, 5, 28, 19, 0, 0), None, DAY_PRECISION, "osup", "../os2", ], ] self._test_valid_lines(ftputil.stat.UnixParser, lines, expected_stat_results) def test_pre_epoch_times_for_unix(self): # See http://ftputil.sschwarzer.net/trac/ticket/83 . # `mirrors.ibiblio.org` returns dates before the "epoch" that cause an # `OverflowError` in `mktime` on some platforms, e. g. Windows. lines = [ "-rw-r--r-- 1 45854 200 4604 May 4 1968 index.html", "-rw-r--r-- 1 45854 200 4604 Dec 31 1969 index.html", "-rw-r--r-- 1 45854 200 4604 May 4 1800 index.html", ] expected_stat_result = [ 33188, None, None, 1, "45854", "200", 4604, None, EPOCH, None, UNKNOWN_PRECISION, "index.html", None, ] # Make shallow copies to avoid converting the time tuple more than once # in _test_valid_lines`. expected_stat_results = [ expected_stat_result[:], expected_stat_result[:], expected_stat_result[:], ] self._test_valid_lines(ftputil.stat.UnixParser, lines, expected_stat_results) def test_invalid_unix_lines(self): lines = [ # Not intended to be parsed. Should have been filtered out by # `ignores_line`. 
"total 14", # Invalid month abbreviation "drwxr-sr-x 2 45854 200 512 Max 4 2000 chemeng", # Year value isn't an integer "drwxr-sr-x 2 45854 200 512 May 4 abcd chemeng", # Day value isn't an integer "drwxr-sr-x 2 45854 200 512 May ab 2000 chemeng", # Hour value isn't an integer "-rw-r--r-- 1 45854 200 4604 Dec 19 ab:11 index.html", # Minute value isn't an integer "-rw-r--r-- 1 45854 200 4604 Dec 19 23:ab index.html", # Day value too large "drwxr-sr-x 2 45854 200 512 May 32 2000 chemeng", # Ditto, for time instead of year "drwxr-sr-x 2 45854 200 512 May 32 11:22 chemeng", # Incomplete mode "drwxr-sr- 2 45854 200 512 May 4 2000 chemeng", # Invalid first letter in mode "xrwxr-sr-x 2 45854 200 512 May 4 2000 chemeng", # Ditto, plus invalid size value "xrwxr-sr-x 2 45854 200 51x May 4 2000 chemeng", # Is this `os1 -> os2` pointing to `os3`, or `os1` pointing to # `os2 -> os3` or the plain name `os1 -> os2 -> os3`? We don't # know, so we consider the line invalid. "drwxr-sr-x 2 45854 200 512 May 29 2000 " "os1 -> os2 -> os3", # Missing name "-rwxr-sr-x 2 45854 200 51x May 4 2000 ", ] self._test_invalid_lines(ftputil.stat.UnixParser, lines) # # Microsoft parser # def test_valid_ms_lines_two_digit_year(self): lines = [ "07-27-01 11:16AM Test", "10-23-95 03:25PM WindowsXP", "07-17-00 02:08PM 12266720 test.exe", "07-17-09 12:08AM 12266720 test.exe", "07-17-09 12:08PM 12266720 test.exe", ] expected_stat_results = [ [ 16640, None, None, None, None, None, None, None, (2001, 7, 27, 6, 16, 0), None, MINUTE_PRECISION, "Test", None, ], [ 16640, None, None, None, None, None, None, None, (1995, 10, 23, 10, 25, 0), None, MINUTE_PRECISION, "WindowsXP", None, ], [ 33024, None, None, None, None, None, 12266720, None, (2000, 7, 17, 9, 8, 0), None, MINUTE_PRECISION, "test.exe", None, ], [ 33024, None, None, None, None, None, 12266720, None, (2009, 7, 16, 19, 8, 0), None, MINUTE_PRECISION, "test.exe", None, ], [ 33024, None, None, None, None, None, 12266720, None, (2009, 7, 17, 7, 8, 0), None, MINUTE_PRECISION, "test.exe", None, ], ] self._test_valid_lines(ftputil.stat.MSParser, lines, expected_stat_results) def test_valid_ms_lines_four_digit_year(self): # See http://ftputil.sschwarzer.net/trac/ticket/67 lines = [ "10-19-2012 03:13PM SYNCDEST", "10-19-2012 03:13PM SYNCSOURCE", "10-19-1968 03:13PM SYNC", ] expected_stat_results = [ [ 16640, None, None, None, None, None, None, None, (2012, 10, 19, 10, 13, 0), None, MINUTE_PRECISION, "SYNCDEST", None, ], [ 16640, None, None, None, None, None, None, None, (2012, 10, 19, 10, 13, 0), None, MINUTE_PRECISION, "SYNCSOURCE", None, ], [ 16640, None, None, None, None, None, None, None, EPOCH, None, UNKNOWN_PRECISION, "SYNC", None, ], ] self._test_valid_lines(ftputil.stat.MSParser, lines, expected_stat_results) def test_invalid_ms_lines(self): lines = [ # Neither "" nor a size present "07-27-01 11:16AM Test", # "AM"/"PM" missing "07-17-00 02:08 12266720 test.exe", # Year not an int "07-17-ab 02:08AM 12266720 test.exe", # Month not an int "ab-17-00 02:08AM 12266720 test.exe", # Day not an int "07-ab-00 02:08AM 12266720 test.exe", # Hour not an int "07-17-00 ab:08AM 12266720 test.exe", # Invalid size value "07-17-00 02:08AM 1226672x test.exe", ] self._test_invalid_lines(ftputil.stat.MSParser, lines) # # The following code checks if the decision logic in the Unix line parser # for determining the year works. # def dir_line(self, datetime_): """ Return a directory line as from a Unix FTP server. 
Most of the contents are fixed, but the timestamp is made from `time_float` (seconds since the epoch, as from `time.time()`). """ line_template = "-rw-r--r-- 1 45854 200 4604 {} index.html" datetime_string = datetime_.strftime("%b %d %H:%M") return line_template.format(datetime_string) def assert_equal_times(self, time1, time2): """ Check if both times (seconds since the epoch) are equal. For the purpose of this test, two times are "equal" if they differ no more than one minute from each other. """ abs_difference = abs(time1 - time2) assert abs_difference <= 60.0, "Difference is %s seconds" % abs_difference def _test_time_shift(self, supposed_time_shift, deviation=0.0): """ Check if the stat parser considers the time shift value correctly. `deviation` is the difference between the actual time shift and the supposed time shift, which is rounded to full hours. """ script = [Call("__init__"), Call("pwd", result="/"), Call("close")] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() # Explicitly use Unix format parser here. host._stat._parser = ftputil.stat.UnixParser() host.set_time_shift(supposed_time_shift) server_time = datetime.datetime.now( datetime.timezone.utc ) + datetime.timedelta(seconds=supposed_time_shift + deviation) stat_result = host._stat._parser.parse_line( self.dir_line(server_time), host.time_shift() ) # We expect `st_mtime` in UTC. self.assert_equal_times( stat_result.st_mtime, ( server_time # Convert back to client time. - datetime.timedelta(seconds=supposed_time_shift) ).timestamp(), ) def test_time_shifts(self): """ Test correct year depending on time shift value. """ # 1. test: Client and server share the same time (UTC). This is true if # the directory listing from the server is in UTC. self._test_time_shift(0.0) # 2. test: Server is three hours ahead of client self._test_time_shift(3 * 60 * 60) # Ditto, but with client and server in different years. See ticket #131. with freezegun.freeze_time("2019-12-31 22:37"): self._test_time_shift(3 * 60 * 60) # 3. test: Client is three hours ahead of server self._test_time_shift(-3 * 60 * 60) # 4. test: Server is supposed to be three hours ahead, but is ahead # three hours and one minute self._test_time_shift(3 * 60 * 60, 60) # 5. test: Server is supposed to be three hours ahead, but is ahead # three hours minus one minute self._test_time_shift(3 * 60 * 60, -60) # 6. test: Client is supposed to be three hours ahead, but is ahead # three hours and one minute self._test_time_shift(-3 * 60 * 60, -60) # 7. test: Client is supposed to be three hours ahead, but is ahead # three hours minus one minute self._test_time_shift(-3 * 60 * 60, 60) class TestLstatAndStat: """ Test `FTPHost.lstat` and `FTPHost.stat` (test currently only implemented for Unix server format). """ def test_repr(self): """ Test if the `repr` result looks like a named tuple. 
""" script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="drwxr-sr-x 2 45854 200 512 May 4 2000 foo", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() stat_result = host.stat("/foo") expected_result = ( "StatResult(st_mode=17901, st_ino=None, st_dev=None, " "st_nlink=2, st_uid='45854', st_gid='200', st_size=512, " "st_atime=None, st_mtime=957398400.0, st_ctime=None)" ) assert repr(stat_result) == expected_result def test_failing_lstat(self): """ Test whether `lstat` fails for a nonexistent path. """ # Directory with presumed file item doesn't exist. script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), # See FIXME comment in `ftputil.stat._Stat._real_lstat` Call("cwd", args=("/",)), Call("cwd", args=("/notthere",), result=ftplib.error_perm), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() with pytest.raises(ftputil.error.PermanentError): host.lstat("/notthere/irrelevant") # Directory exists, but not the file system item in the directory. script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result=test_base.dir_line( mode_string="dr-xr-xr-x", datetime_=datetime.datetime.now(), name="some_dir", ), ), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/some_dir",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() with pytest.raises(ftputil.error.PermanentError): host.lstat("/some_dir/notthere") def test_lstat_for_root(self): """ Test `lstat` for `/` . Note: `(l)stat` works by going one directory up and parsing the output of an FTP `LIST` command. Unfortunately, it's not possible to do this for the root directory `/`. """ script = [Call("__init__"), Call("pwd", result="/"), Call("close")] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() with pytest.raises(ftputil.error.RootDirError) as exc_info: host.lstat("/") # `RootDirError` is "outside" the `FTPOSError` hierarchy. assert not isinstance(exc_info.value, ftputil.error.FTPOSError) del exc_info def test_lstat_one_unix_file(self): """ Test `lstat` for a file described in Unix-style format. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="-rw-r--r-- 1 45854 200 4604 Jan 19 23:11 some_file", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() stat_result = host.lstat("/some_file") assert oct(stat_result.st_mode) == "0o100644" assert stat_result.st_size == 4604 assert stat_result._st_mtime_precision == 60 def test_lstat_one_ms_file(self): """ Test `lstat` for a file described in DOS-style format. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # First try with unix parser, but this parser can't parse this # line. 
Call( "dir", args=("",), result="07-17-00 02:08PM 12266720 some_file", ), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Now try with MS parser. Call( "dir", args=("",), result="07-17-00 02:08PM 12266720 some_file", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() stat_result = host.lstat("/some_file") assert stat_result._st_name == "some_file" assert stat_result._st_mtime_precision == 60 def test_lstat_one_unix_dir(self): """ Test `lstat` for a directory described in Unix-style format. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="drwxr-sr-x 6 45854 200 512 Sep 20 1999 some_dir", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() stat_result = host.lstat("/some_dir") assert oct(stat_result.st_mode) == "0o42755" assert stat_result.st_ino is None assert stat_result.st_dev is None assert stat_result.st_nlink == 6 assert stat_result.st_uid == "45854" assert stat_result.st_gid == "200" assert stat_result.st_size == 512 assert stat_result.st_atime is None assert stat_result.st_mtime == stat_tuple_to_seconds((1999, 9, 20, 0, 0, 0)) assert stat_result.st_ctime is None assert stat_result._st_mtime_precision == 24 * 60 * 60 assert stat_result == ( 17901, None, None, 6, "45854", "200", 512, None, stat_tuple_to_seconds((1999, 9, 20, 0, 0, 0)), None, ) def test_lstat_one_ms_dir(self): """ Test `lstat` for a directory described in DOS-style format. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # First try with unix parser, but this parser can't parse this # line. Call( "dir", args=("",), result="10-23-01 03:25PM some_dir", ), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Now try with MS parser. Call( "dir", args=("",), result="10-23-01 03:25PM some_dir", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() stat_result = host.lstat("/some_dir") assert stat_result._st_mtime_precision == 60 def test_lstat_via_stat_module(self): """ Test `lstat` indirectly via `stat` module. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="drwxr-sr-x 2 45854 200 512 May 4 2000 some_dir", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() stat_result = host.lstat("/some_dir") assert stat.S_ISDIR(stat_result.st_mode) def test_stat_following_link(self): """ Test `stat` when invoked on a link. 
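        `stat` should resolve the link (or a chain of links) and return the
        stat result of the final target, here `link_target` with a size of
        4604 bytes.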
""" # Simple link script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="lrwxrwxrwx 1 45854 200 21 Jan 19 2002 link -> link_target", ), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="-rw-r--r-- 1 45854 200 4604 Jan 19 23:11 link_target", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() stat_result = host.stat("/link") assert stat_result.st_size == 4604 # Link pointing to a link dir_lines = ( "lrwxrwxrwx 1 45854 200 7 Jan 19 2002 link_link -> link\n" "lrwxrwxrwx 1 45854 200 14 Jan 19 2002 link -> link_target\n" "-rw-r--r-- 1 45854 200 4604 Jan 19 23:11 link_target" ) # Note that only one `dir` call would be needed in case of an enabled # cache. script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Look up `/link_link`. Call("dir", args=("",), result=dir_lines), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Look up `/link`. Call("dir", args=("",), result=dir_lines), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Look up `/link_target`. Call("dir", args=("",), result=dir_lines), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() stat_result = host.stat("/link_link") assert stat_result.st_size == 4604 # Recursive link structure dir_lines = ( "lrwxrwxrwx 1 45854 200 7 Jan 19 2002 bad_link1 -> bad_link2\n" "lrwxrwxrwx 1 45854 200 14 Jan 19 2002 bad_link2 -> bad_link1" ) script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # This dir finds the `bad_link1` name requested in the `stat` call. Call("dir", args=("",), result=dir_lines), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # Look up link target `bad_link2`. Call("dir", args=("",), result=dir_lines), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), # FIXME: `stat` looks up the link target pointed to by `bad_link2`, # which is `bad_link1`. Only here ftputil notices the recursive # link chain. Obviously the start of the link chain hadn't been # stored in `visited_paths` (see also ticket #108). Call("dir", args=("",), result=dir_lines), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() with pytest.raises(ftputil.error.PermanentError): host.stat("bad_link1") # # Test automatic switching of Unix/MS parsers # def test_parser_switching_with_permanent_error(self): """ Test non-switching of parser format with `PermanentError`. 
""" script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="10-23-01 03:25PM home" ), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="10-23-01 03:25PM home" ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert host._stat._allow_parser_switching is True # With these directory contents, we get a `ParserError` for the # Unix parser first, so `_allow_parser_switching` can be switched # off no matter whether we got a `PermanentError` afterward or not. with pytest.raises(ftputil.error.PermanentError): host.lstat("/nonexistent") assert host._stat._allow_parser_switching is False def test_parser_switching_default_to_unix(self): """ Test non-switching of parser format; stay with Unix. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="-rw-r--r-- 1 45854 200 4604 Jan 19 23:11 some_file", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert host._stat._allow_parser_switching is True assert host._stat._allow_parser_switching is True assert isinstance(host._stat._parser, ftputil.stat.UnixParser) stat_result = host.lstat("some_file") # The Unix parser worked, so keep it. assert isinstance(host._stat._parser, ftputil.stat.UnixParser) assert host._stat._allow_parser_switching is False def test_parser_switching_to_ms(self): """ Test switching of parser from Unix to MS format. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="07-17-00 02:08PM 12266720 some_file", ), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="07-17-00 02:08PM 12266720 some_file", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert host._stat._allow_parser_switching is True assert isinstance(host._stat._parser, ftputil.stat.UnixParser) # Parsing the directory `/` with the Unix parser fails, so switch # to the MS parser. stat_result = host.lstat("/some_file") assert isinstance(host._stat._parser, ftputil.stat.MSParser) assert host._stat._allow_parser_switching is False assert stat_result._st_name == "some_file" assert stat_result.st_size == 12266720 def test_parser_switching_regarding_empty_dir(self): """Test switching of parser if a directory is empty.""" script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() assert host._stat._allow_parser_switching is True # When the directory we're looking into doesn't give us any lines # we can't decide whether the first parser worked, because it # wasn't applied. So keep the parser for now. result = host.listdir("/") assert result == [] assert host._stat._allow_parser_switching is True assert isinstance(host._stat._parser, ftputil.stat.UnixParser) class TestListdir: """ Test `FTPHost.listdir`. """ def test_failing_listdir(self): """ Test failing `FTPHost.listdir`. 
""" script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call("dir", args=("",), result=""), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() with pytest.raises(ftputil.error.PermanentError): host.listdir("notthere") def test_succeeding_listdir(self): """ Test succeeding `FTPHost.listdir`. """ script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="drwxr-sr-x 2 45854 200 512 Jan 3 17:17 download\n" "drwxr-sr-x 2 45854 200 512 Jul 30 17:14 dir with spaces\n" "lrwxrwxrwx 2 45854 200 6 May 29 2000 link -> ../link_target\n" "-rw-r--r-- 1 45854 200 4604 Jan 19 23:11 index.html", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: host.stat_cache.disable() remote_file_list = host.listdir(".") # Do we have all expected "files"? assert len(remote_file_list) == 4 expected_names = ["download", "dir with spaces", "link", "index.html"] for name in expected_names: assert name in remote_file_list assert len(host.stat_cache) == 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/test/test_stat_cache.py0000664000175000017500000001010514170605001016717 0ustar00schwaschwa# Copyright (C) 2006-2018, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import time import pytest import ftputil.error import ftputil.stat_cache from test import scripted_session from test import test_base Call = scripted_session.Call class TestStatCache: def setup_method(self, method): self.cache = ftputil.stat_cache.StatCache() def test_get_set(self): with pytest.raises(ftputil.error.CacheMissError): self.cache.__getitem__("/path") self.cache["/path"] = "test" assert self.cache["/path"] == "test" def test_invalidate(self): # Don't raise a `CacheMissError` for missing paths self.cache.invalidate("/path") self.cache["/path"] = "test" self.cache.invalidate("/path") assert len(self.cache) == 0 def test_clear(self): self.cache["/path1"] = "test1" self.cache["/path2"] = "test2" self.cache.clear() assert len(self.cache) == 0 def test_contains(self): self.cache["/path1"] = "test1" assert "/path1" in self.cache assert "/path2" not in self.cache def test_len(self): assert len(self.cache) == 0 self.cache["/path1"] = "test1" self.cache["/path2"] = "test2" assert len(self.cache) == 2 def test_resize(self): self.cache.resize(100) # Don't grow the cache beyond it's set size. for i in range(150): self.cache["/{0:d}".format(i)] = i assert len(self.cache) == 100 def test_max_age1(self): """ Set expiration after setting a cache item. """ self.cache["/path1"] = "test1" # Expire after one second self.cache.max_age = 1 time.sleep(0.5) # Should still be present assert self.cache["/path1"] == "test1" time.sleep(0.6) # Should have expired (_setting_ the cache counts) with pytest.raises(ftputil.error.CacheMissError): self.cache.__getitem__("/path1") def test_max_age2(self): """ Set expiration before setting a cache item. 
""" # Expire after one second self.cache.max_age = 1 self.cache["/path1"] = "test1" time.sleep(0.5) # Should still be present assert self.cache["/path1"] == "test1" time.sleep(0.6) # Should have expired (_setting_ the cache counts) with pytest.raises(ftputil.error.CacheMissError): self.cache.__getitem__("/path1") def test_disabled(self): self.cache["/path1"] = "test1" self.cache.disable() self.cache["/path2"] = "test2" with pytest.raises(ftputil.error.CacheMissError): self.cache.__getitem__("/path1") with pytest.raises(ftputil.error.CacheMissError): self.cache.__getitem__("/path2") assert len(self.cache) == 1 # Don't raise a `CacheMissError` for missing paths. self.cache.invalidate("/path2") def test_cache_size_zero(self): script = [ Call("__init__"), Call("pwd", result="/"), Call("cwd", args=("/",)), Call("cwd", args=("/",)), Call( "dir", args=("",), result="drwxr-sr-x 2 45854 200 512 Jan 3 17:17 download\n" "drwxr-sr-x 2 45854 200 512 Jul 30 17:14 dir with spaces\n" "lrwxrwxrwx 2 45854 200 6 May 29 2000 link -> ../link_target\n" "-rw-r--r-- 1 45854 200 4604 Jan 19 23:11 index.html", ), Call("cwd", args=("/",)), Call("close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: with pytest.raises(ValueError): host.stat_cache.resize(0) # If bug #38 was present, this would raise an `IndexError`. items = host.listdir(host.curdir) assert items == ["download", "dir with spaces", "link", "index.html"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/test/test_sync.py0000664000175000017500000000636714170605001015614 0ustar00schwaschwa# Copyright (C) 2007-2021, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import io import ntpath import os import shutil import ftputil import ftputil.path_encoding import ftputil.sync # Assume the test subdirectories are or will be in the current directory. TEST_ROOT = os.getcwd() class TestLocalToLocal: def setup_method(self, method): if not os.path.exists("test_empty"): os.mkdir("test_empty") if os.path.exists("test_target"): shutil.rmtree("test_target") os.mkdir("test_target") def test_sync_empty_dir(self): source = ftputil.sync.LocalHost() target = ftputil.sync.LocalHost() syncer = ftputil.sync.Syncer(source, target) source_dir = os.path.join(TEST_ROOT, "test_empty") target_dir = os.path.join(TEST_ROOT, "test_target") syncer.sync(source_dir, target_dir) def test_source_with_and_target_without_slash(self): source = ftputil.sync.LocalHost() target = ftputil.sync.LocalHost() syncer = ftputil.sync.Syncer(source, target) source_dir = os.path.join(TEST_ROOT, "test_source/") target_dir = os.path.join(TEST_ROOT, "test_target") syncer.sync(source_dir, target_dir) # Helper classes for `TestUploadFromWindows` class LocalWindowsHost(ftputil.sync.LocalHost): def __init__(self): self.path = ntpath self.sep = "\\" def open(self, path, mode): # Just return a dummy file object. return io.StringIO("") def walk(self, root): """ Return a list of tuples as `os.walk`, but use tuples as if the directory structure was dir1 dir11 file1 file2 where is the string passed in as `root`. """ join = ntpath.join return [ (root, [join(root, "dir1")], []), (join(root, "dir1"), ["dir11"], ["file1", "file2"]), ] class DummyFTPSession: def pwd(self): return "/" class DummyFTPPath: def abspath(self, path): # Don't care here if the path is absolute or not. 
return path def isdir(self, path): return "dir" in path def isfile(self, path): return "file" in path class ArgumentCheckingFTPHost(ftputil.FTPHost): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.path = DummyFTPPath() def _make_session(self, *args, **kwargs): self._encoding = ftputil.path_encoding.DEFAULT_ENCODING return DummyFTPSession() def mkdir(self, path): assert "\\" not in path def open(self, path, mode): assert "\\" not in path return io.StringIO("") class TestUploadFromWindows: def test_no_mixed_separators(self): source = LocalWindowsHost() target = ArgumentCheckingFTPHost() local_root = ntpath.join("some", "directory") syncer = ftputil.sync.Syncer(source, target) # If the following call raises any `AssertionError`s, the test # framework will catch them and show them. syncer.sync(local_root, "not_used_by_ArgumentCheckingFTPHost") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/test/test_tool.py0000664000175000017500000000426414170605001015607 0ustar00schwaschwa# Copyright (C) 2013-2021, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. import os import pytest import ftputil.path_encoding import ftputil.tool DEFAULT_ENCODING = ftputil.path_encoding.DEFAULT_ENCODING class Path(os.PathLike): """ Helper class for `TestSameStringTypeAs` """ def __init__(self, type_source): self._type_source = type_source def __fspath__(self): return self._type_source same_string_type_as = ftputil.tool.same_string_type_as class TestSameStringTypeAs: @staticmethod def _test_string(type_source, string, expected_result): """ Check if the result from `tool.same_string_type_as(type_source, string)` is the same as `expected_result`. `type_source` must be a `bytes` or `str` object. """ result = ftputil.tool.same_string_type_as(type_source, path) assert result == expected_result def test_to_bytes(self): assert same_string_type_as(b"abc", "def", encoding=DEFAULT_ENCODING) == b"def" def test_to_str(self): assert same_string_type_as("abc", b"def", encoding=DEFAULT_ENCODING) == "def" def test_both_bytes_type(self): assert same_string_type_as(b"abc", b"def", encoding=DEFAULT_ENCODING) == b"def" def test_both_str_type(self): assert same_string_type_as("abc", "def", encoding=DEFAULT_ENCODING) == "def" as_str = ftputil.tool.as_str as_str_path = ftputil.tool.as_str_path class TestAsStr: def test_from_bytes(self): assert as_str(b"abc", encoding=DEFAULT_ENCODING) == "abc" assert as_str_path(b"abc", encoding=DEFAULT_ENCODING) == "abc" def test_from_str(self): assert as_str("abc", encoding=DEFAULT_ENCODING) == "abc" assert as_str_path("abc", encoding=DEFAULT_ENCODING) == "abc" def test_from_bytes_path(self): assert as_str_path(Path(b"abc"), encoding=DEFAULT_ENCODING) == "abc" def test_from_str_path(self): assert as_str_path(Path("abc"), encoding=DEFAULT_ENCODING) == "abc" def test_type_error(self): with pytest.raises(TypeError): as_str(1, encoding=DEFAULT_ENCODING) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1642269185.0 ftputil-5.1.0/test/test_with_statement.py0000664000175000017500000001527014170605001017670 0ustar00schwaschwa# Copyright (C) 2008-2019, Stefan Schwarzer # and ftputil contributors (see `doc/contributors.txt`) # See the file LICENSE for licensing terms. 
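# These tests exercise the context manager behavior of `FTPHost` and
# `FTPFile` objects.  In client code the pattern under test looks roughly
# like this (a sketch only; server name and credentials are hypothetical):
#
#     with ftputil.FTPHost("server", "user", "password") as host:
#         with host.open("remote_file", "r") as fobj:
#             data = fobj.read()
#
# Both objects should be closed when their `with` block is left, whether it
# is left normally or because of an exception.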
import ftplib import io import pytest import ftputil.error import test.scripted_session as scripted_session from test import test_base Call = scripted_session.Call # Exception raised by client code, i. e. code using ftputil. Used to test the # behavior in case of client exceptions. class ClientCodeException(Exception): pass # # Test cases # class TestHostContextManager: def test_normal_operation(self): """ If an `FTPHost` instance is created, it should be closed by the host context manager. """ script = [ Call(method_name="__init__"), Call(method_name="pwd", result="/"), Call(method_name="close"), ] with test_base.ftp_host_factory(scripted_session.factory(script)) as host: assert host.closed is False assert host.closed is True def test_ftputil_exception(self): """ If an `ftplib.FTP` method raises an exception, it should be caught by the host context manager and the host object should be closed. """ script = [ # Since `__init__` raises an exception, `pwd` isn't called. # However, `close` is called via the context manager. Call(method_name="__init__", result=ftplib.error_perm), Call(method_name="close"), ] with pytest.raises(ftputil.error.FTPOSError): with test_base.ftp_host_factory(scripted_session.factory(script)) as host: pass # We arrived here, that's fine. Because the `FTPHost` object wasn't # successfully constructed, the assignment to `host` shouldn't have # happened. assert "host" not in locals() def test_client_code_exception(self): """ If client code raises an exception in the context manager block, the host object should be closed by the context manager. """ script = [ Call(method_name="__init__"), Call(method_name="pwd", result="/"), Call(method_name="close"), ] try: with test_base.ftp_host_factory(scripted_session.factory(script)) as host: assert host.closed is False raise ClientCodeException() except ClientCodeException: assert host.closed is True else: pytest.fail("`ClientCodeException` not raised") class TestFileContextManager: def test_normal_operation(self): """ If an `FTPFile` object is created in the `FTPFile` context manager, the context manager should close the file at the end of the `with` block. """ host_script = [ Call(method_name="__init__"), Call(method_name="pwd", result="/"), Call(method_name="close"), ] file_script = [ Call(method_name="__init__"), Call(method_name="pwd", result="/"), Call(method_name="cwd", result=None, args=("/",)), Call(method_name="voidcmd", result=None, args=("TYPE I",)), Call( method_name="transfercmd", result=io.StringIO("line 1\nline 2"), args=("RETR dummy", None), ), Call(method_name="voidresp", result=None), Call(method_name="close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with host.open("dummy", "r") as fobj: assert fobj.closed is False data = fobj.readline() assert data == "line 1\n" assert fobj.closed is False assert fobj.closed is True def test_ftputil_exception(self): """ If an `ftplib.FTP` method raises an exception, the `FTPFile` context manager should try to close the file. """ host_script = [ Call(method_name="__init__"), Call(method_name="pwd", result="/"), Call(method_name="close"), ] file_script = [ Call(method_name="__init__"), Call(method_name="pwd", result="/"), Call(method_name="cwd", result=None, args=("/",)), Call(method_name="voidcmd", result=None, args=("TYPE I",)), # Raise exception. `voidresp` therefore won't be called, but # `close` will be called by the context manager. 
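            # (In a scripted session, passing an exception class as `result`
            # makes the scripted method raise that exception instead of
            # returning a value, as in `test_ftputil_exception` of
            # `TestHostContextManager` above.)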
Call( method_name="transfercmd", result=ftplib.error_perm, args=("STOR inaccessible", None), ), # Call(method_name="voidresp", result=None), Call(method_name="close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: with pytest.raises(ftputil.error.FTPIOError): # This should fail. with host.open("/inaccessible", "w") as fobj: pass # The file construction shouldn't have succeeded, so `fobj` should # be absent from the local namespace. assert "fobj" not in locals() def test_client_code_exception(self): """ If client code raises an exception in the `FTPFile` context manager block, the file object should be closed by the context manager. """ host_script = [ Call(method_name="__init__"), Call(method_name="pwd", result="/"), Call(method_name="close"), ] file_script = [ Call(method_name="__init__"), Call(method_name="pwd", result="/"), Call(method_name="cwd", result=None, args=("/",)), Call(method_name="voidcmd", result=None, args=("TYPE I",)), Call( method_name="transfercmd", result=io.BytesIO(b""), args=("RETR dummy", None), ), Call(method_name="voidresp", result=None), Call(method_name="close"), ] multisession_factory = scripted_session.factory(host_script, file_script) with test_base.ftp_host_factory(multisession_factory) as host: try: with host.open("dummy", "r") as fobj: assert fobj.closed is False raise ClientCodeException() except ClientCodeException: assert fobj.closed is True else: pytest.fail("`ClientCodeException` not raised")