command_runner-1.6.0/LICENSE

BSD 3-Clause License

Copyright (c) 2015-2023, netinvent, Orsiris de Jong, contact@netinvent.fr
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
command_runner-1.6.0/PKG-INFO

Metadata-Version: 2.1
Name: command_runner
Version: 1.6.0
Summary: Platform agnostic command and shell execution tool, also allows UAC/sudo privilege elevation
Home-page: https://github.com/netinvent/command_runner
Author: NetInvent - Orsiris de Jong
Author-email: contact@netinvent.fr
License: BSD
Keywords: shell,execution,subprocess,check_output,wrapper,uac,sudo,elevate,privilege
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development
Classifier: Topic :: System
Classifier: Topic :: System :: Operating System
Classifier: Topic :: System :: Shells
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: POSIX :: BSD :: FreeBSD
Classifier: Operating System :: POSIX :: BSD :: NetBSD
Classifier: Operating System :: POSIX :: BSD :: OpenBSD
Classifier: Operating System :: Microsoft
Classifier: Operating System :: Microsoft :: Windows
Classifier: License :: OSI Approved :: BSD License
Requires-Python: >=2.7
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: psutil>=5.6.0

# command_runner
# Platform agnostic command execution, timed background jobs with live stdout/stderr output capture, and UAC/sudo elevation

[![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
[![Percentage of issues still open](http://isitmaintained.com/badge/open/netinvent/command_runner.svg)](http://isitmaintained.com/project/netinvent/command_runner "Percentage of issues still open")
[![Maintainability](https://api.codeclimate.com/v1/badges/defbe10a354d3705f287/maintainability)](https://codeclimate.com/github/netinvent/command_runner/maintainability)
[![codecov](https://codecov.io/gh/netinvent/command_runner/branch/master/graph/badge.svg?token=rXqlphOzMh)](https://codecov.io/gh/netinvent/command_runner)
[![linux-tests](https://github.com/netinvent/command_runner/actions/workflows/linux.yaml/badge.svg)](https://github.com/netinvent/command_runner/actions/workflows/linux.yaml)
[![windows-tests](https://github.com/netinvent/command_runner/actions/workflows/windows.yaml/badge.svg)](https://github.com/netinvent/command_runner/actions/workflows/windows.yaml)
[![GitHub Release](https://img.shields.io/github/release/netinvent/command_runner.svg?label=Latest)](https://github.com/netinvent/command_runner/releases/latest)

command_runner's purpose is to run external commands from Python, just like subprocess, on which it relies, while solving various problems a developer may face, among which:

- Handling of all possible subprocess.popen / subprocess.check_output scenarios and Python versions in one handy function, without encoding / timeout hassle
- Allowing stdout/stderr stream output to be redirected to callback functions / output queues / files, so you get to handle output in your application while commands are running
- Callback to an optional stop check, so execution can be stopped from outside command_runner
- Callback with optional process information, so the process can be controlled from outside command_runner
- Callback once execution is finished, to ease thread usage
- Optional process priority and io_priority settings
- System agnostic functionality, so the developer doesn't have to carry the burden of Windows & Linux differences
- Optional Windows UAC elevation module compatible with CPython, PyInstaller & Nuitka
- Optional Linux sudo elevation compatible with CPython, PyInstaller & Nuitka

It is compatible with Python 2.7+, tested up to Python 3.11 (backports some newer Python 3.5 functionality), and is tested on both Linux and Windows. It is also compatible with the PyPy Python implementation.

...and yes, keeping Python 2.7 compatibility has proven to be quite challenging.

## command_runner

command_runner is a replacement package for subprocess.popen and subprocess.check_output.
Its main promise is that you never end up with a blocking command, and always get results.

It works as a wrapper for subprocess.popen and subprocess.communicate that solves:
- Platform differences
- Timeouts, even for Windows GUI applications that don't return anything to stdout
- Python language version differences
- Timeouts even on earlier Python implementations
- Encoding even on earlier Python implementations
- Keeping the promise to always return an exit code (so we don't have to deal with exit codes and exception logic at the same time)
- Keeping the promise to always return the command output regardless of the execution state (even with timeouts, callback interrupts and keyboard interrupts)
- Showing command output on the fly without waiting for the end of execution (with the `live_output=True` argument)
- Giving command output on the fly to the application by using queues or callback functions
- Catching all possible exceptions and logging them properly with encoding fixes
- Being compatible, and always returning the same result regardless of platform

command_runner also promises to properly kill commands when timeouts are reached, including spawned subprocesses of such commands. This specific behavior is achieved via the psutil module, which is an optional dependency.

### command_runner in a nutshell

Install with `pip install command_runner`

The following example will work regardless of the host OS and the Python version.

```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', timeout=10)
```

## Guide to command_runner

### Setup

`pip install command_runner` or download the latest git release

### Advanced command_runner usage

#### Special exit codes

In order to keep the promise to always provide an exit_code, special exit codes have been added for the cases where none is given. Those exit codes are:

- -250 : command_runner called with incompatible arguments
- -251 : stop_on function returned True
- -252 : KeyboardInterrupt
- -253 : FileNotFoundError, OSError, IOError
- -254 : Timeout
- -255 : Any other uncaught exception

This allows you to use standard exit code logic, without having to deal with various exceptions.
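As an illustration of that exit code logic, here is a minimal, hypothetical sketch (the command and the handling branches are only examples) showing how the special negative codes can be told apart from the command's own exit codes:

```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', timeout=30)

if exit_code == 0:
    print('Command succeeded:')
    print(output)
elif exit_code == -254:
    print('Command timed out, partial output is still available:')
    print(output)
elif exit_code == -252:
    print('Command was interrupted by CTRL+C')
else:
    # Either another special negative code or the command's own non-zero exit code
    print('Command failed with exit code {}'.format(exit_code))
```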
#### Default encoding

command_runner has an `encoding` argument which defaults to `utf-8` for Unixes and `cp437` for Windows platforms. Using `cp437` ensures that most `cmd.exe` output is encoded properly, including accents and special characters, on most locale systems. Still, you can specify your own encoding for other usages, like Powershell where `unicode_escape` is preferred.

```python
from command_runner import command_runner

command = r'C:\Windows\sysnative\WindowsPowerShell\v1.0\powershell.exe --help'
exit_code, output = command_runner(command, encoding='unicode_escape')
```

Earlier subprocess.popen implementations didn't have an encoding setting, so command_runner deals with encoding for those. You can also disable command_runner's internal encoding in order to get raw process output (bytes) by passing the boolean False.

Example:
```python
from command_runner import command_runner

exit_code, raw_output = command_runner('ping 127.0.0.1', encoding=False)
```

#### On the fly (interactive screen) output

**Note: for live output capture and threading, see stream redirection. If you want to run your application while command_runner gives back command output, the best way to go is queues / callbacks.**

command_runner can output a command's output on the fly to stdout, i.e. show output on screen during execution. This is helpful when the command is long and we need to know the output while execution is ongoing. It is also helpful in order to catch partial command output when a timeout is reached or a CTRL+C signal is received.

Example:
```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', shell=True, live_output=True)
```

Note: using live output relies on stdout pipe polling, which has slightly higher CPU usage.

#### Timeouts

**command_runner has a `timeout` argument which defaults to 3600 seconds.**

This default setting ensures commands will not block the main script execution. Feel free to lower or raise that setting with the `timeout` argument. Note that on timeout, command_runner kills the whole process tree that the command may have generated, even under Windows.

```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', timeout=30)
```

#### Remarks on processes

Using `shell=True` will spawn a shell which will spawn the desired child process. Be aware that under MS Windows, no direct process tree is available. We fixed this by walking processes during runtime. The drawback is that orphaned processes cannot be identified this way.
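To make the difference concrete, here is a small hedged illustration (the commands are placeholders) of running a plain executable directly versus going through a shell:

```python
from command_runner import command_runner

# A plain executable can be run directly, given as a string or a list of arguments
# (Linux-style ping shown here)
exit_code, output = command_runner(['ping', '-c', '2', '127.0.0.1'])

# Shell builtins (such as dir on Windows) or pipelines need shell=True,
# which spawns an intermediate cmd.exe / sh process
exit_code, output = command_runner('dir', shell=True)
```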
#### Disabling logs / silencing

`command_runner` has its own logging system, which will log all sorts of error logs. If you need to disable its logging, just run with the argument `silent=True`. Be aware that logging.DEBUG log levels won't be silenced, by design.

Example:
```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', silent=True)
```

If you also need to disable the logging.DEBUG level, you can run the following code, which will require logging.CRITICAL messages only, which `command_runner` never emits:

```python
import logging
import command_runner

logging.getLogger('command_runner').setLevel(logging.CRITICAL)
```

#### Capture method

`command_runner` allows two different process output capture methods:

`method='monitor'`, which is the default:
- A thread is spawned in order to check stop conditions and kill the process if needed
- A main loop waits for the process to finish, then uses proc.communicate() to get its output
- Pros:
  - less CPU usage
  - fewer threads
- Cons:
  - cannot read partial output on KeyboardInterrupt or stop_on (still works for partial timeout output)
  - cannot use queue or callback function redirectors
  - is 0.1 seconds slower than the poller method

`method='poller'`:
- A thread is spawned and reads stdout/stderr pipes into output queues
- A poller loop reads from the output queues, checks stop conditions and kills the process if needed
- Pros:
  - reads on the fly, allowing interactive commands (is also used with `live_output=True`)
  - allows stdout/stderr output to be written live to callback functions, queues or files (useful when threaded)
  - is 0.1 seconds faster than the monitor method, and is the preferred method for fast batch runs
- Cons:
  - slightly higher CPU usage

Example:
```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', method='poller')
exit_code, output = command_runner('ping 127.0.0.1', method='monitor')
```

#### stdin stream redirection

`command_runner` allows redirecting a stream directly into the subprocess it spawns.

Example code:
```python
import sys
from command_runner import command_runner

exit_code, output = command_runner("gzip -d", stdin=sys.stdin.buffer)
print("Uncompressed data", output)
```

The above program, when run with `echo "Hello, World!" | gzip | python myscript.py`, will show the uncompressed string `Hello, World!`

You can use whatever file descriptor you want, the basic ones being sys.stdin for text input and sys.stdin.buffer for binary input.

#### stdout / stderr stream redirection

command_runner can redirect stdout and/or stderr streams to different outputs:
- subprocess pipes
- /dev/null or NUL
- files
- queues
- callback functions

Unless an output redirector is given for the `stderr` argument, stderr will be redirected to the `stdout` stream.
Note that both queue and callback function redirectors require the `poller` method and will fail if that method is not set.

Possible output redirection options are:

- subprocess pipes

By default, stdout writes into a subprocess.PIPE which is read by command_runner and returned as the `output` variable. You may also pass any other subprocess.PIPE int values to the `stdout` or `stderr` arguments.

- /dev/null or NUL

If `stdout=False` and/or `stderr=False` argument(s) are given, command output will not be saved. stdout/stderr streams will be redirected to `/dev/null` or `NUL` depending on the platform. Output will always be `None`. See `split_streams` for more details on using multiple outputs. A short example follows.
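For instance, the following hedged sketch discards all output of a command while still keeping its exit code (the command itself is just a placeholder):

```python
from command_runner import command_runner

# stdout=False and stderr=False send both streams to /dev/null or NUL,
# so output will be None but the exit code is still usable
exit_code, output = command_runner('ping 127.0.0.1', stdout=False, stderr=False)
assert output is None
```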
- files

Giving the `stdout` and/or `stderr` arguments a string, `command_runner` will consider the string to be a file path where stream output will be written live.

Examples:
```python
from command_runner import command_runner

exit_code, output = command_runner('dir', stdout=r"C:/tmp/command_result", stderr=r"C:/tmp/command_error", shell=True)
```

```python
from command_runner import command_runner

exit_code, output = command_runner('dir', stdout='/tmp/stdout.log', stderr='/tmp/stderr.log', shell=True)
```

Opening a file with the wrong encoding (especially opening a CP437-encoded file on Windows with the UTF-8 codec) might end up with a UnicodeDecodeError.

- queues

Queue(s) will be filled up by command_runner. In order to keep your program "live", we'll use the threaded version of command_runner, which is basically the same except that it returns a future result instead of a tuple.

Note: with all the best will, there's no good way to achieve this under Python 2.7 without using more queues, so the threaded version is only compatible with Python 3.3+. For Python 2.7, you must create your thread and queue reader yourself (see the footnote for a Python 2.7 compatible example).

Threaded command_runner plus queue example:

```python
import queue
from command_runner import command_runner_threaded

output_queue = queue.Queue()
stream_output = ""
thread_result = command_runner_threaded('ping 127.0.0.1', shell=True, method='poller', stdout=output_queue)

read_queue = True
while read_queue:
    try:
        line = output_queue.get(timeout=0.1)
    except queue.Empty:
        pass
    else:
        if line is None:
            read_queue = False
        else:
            stream_output += line
            # ADD YOUR LIVE CODE HERE

# Now we may get exit_code and output since result has become available at this point
exit_code, output = thread_result.result()
```

You might also want to read both stdout and stderr queues. In that case, you can create a read loop just like in the following example. Here we're reading both queues in one loop, so we need to observe a couple of conditions before stopping the loop, in order to catch all queue output:

```python
import queue
from time import sleep
from command_runner import command_runner_threaded

stdout_queue = queue.Queue()
stderr_queue = queue.Queue()
thread_result = command_runner_threaded('ping 127.0.0.1', method='poller', shell=True, stdout=stdout_queue, stderr=stderr_queue)

read_stdout = read_stderr = True
while read_stdout or read_stderr:

    try:
        stdout_line = stdout_queue.get(timeout=0.1)
    except queue.Empty:
        pass
    else:
        if stdout_line is None:
            read_stdout = False
        else:
            print('STDOUT:', stdout_line)

    try:
        stderr_line = stderr_queue.get(timeout=0.1)
    except queue.Empty:
        pass
    else:
        if stderr_line is None:
            read_stderr = False
        else:
            print('STDERR:', stderr_line)

    # ADD YOUR LIVE CODE HERE

exit_code, output = thread_result.result()
assert exit_code == 0, 'We did not succeed in running the thread'
```

- callback functions

The callback function will get one argument, being a str of the current stream readings. It will be executed on every line that comes from the streams.

Example:
```python
from command_runner import command_runner

def callback_function(string):
    # ADD YOUR CODE HERE
    print('CALLBACK GOT:', string)

# Launch command_runner
exit_code, output = command_runner('ping 127.0.0.1', stdout=callback_function, method='poller')
```
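Whichever redirector you choose, keep in mind that `command_runner_threaded` returns a `concurrent.futures.Future`. If you don't need line-by-line output, a minimal sketch (command and sleep interval are placeholders) is to simply poll the future while doing other work:

```python
from time import sleep
from command_runner import command_runner_threaded

thread_result = command_runner_threaded('ping 127.0.0.1', shell=True, method='poller')

while not thread_result.done():
    # ADD YOUR LIVE CODE HERE
    sleep(0.1)

# result() returns the usual (exit_code, output) tuple once the command is done
exit_code, output = thread_result.result()
```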
#### stop_on

In some situations, you want a command to be aborted on some external trigger. That's where the `stop_on` argument comes in handy. Just pass a function to `stop_on`; as soon as the function returns True, execution will halt with exit code -251.

Example:
```python
from command_runner import command_runner

def some_function():
    return True if we_must_stop_execution else False

exit_code, output = command_runner('ping 127.0.0.1', stop_on=some_function)
```

#### Checking intervals

By default, command_runner checks timeouts and outputs every 0.05 seconds. You can increase/decrease this via the `check_interval` setting, which accepts floats.

Example: `command_runner(cmd, check_interval=0.2)`

Note that lowering `check_interval` will increase CPU usage.

#### Getting current process information

`command_runner` can provide the subprocess.Popen instance of the currently run process as external data. In order to do so, just declare a function and give it as the `process_callback` argument.

Example:
```python
from command_runner import command_runner

def show_process_info(process):
    print('My process has pid: {}'.format(process.pid))

exit_code, output = command_runner('ping 127.0.0.1', process_callback=show_process_info)
```

#### Split stdout and stderr

By default, `command_runner` returns a tuple like `(exit_code, output)` in which output contains both stdout and stderr stream outputs. You can alter that behavior by using the argument `split_streams=True`. In that case, `command_runner` will return a tuple like `(exit_code, stdout, stderr)`.

Example:
```python
from command_runner import command_runner

exit_code, stdout, stderr = command_runner('ping 127.0.0.1', split_streams=True)
print('exit code:', exit_code)
print('stdout', stdout)
print('stderr', stderr)
```

#### On-exit Callback

`command_runner` allows executing a callback function once it has finished its execution. This might help building threaded programs where a callback is needed, for example to disable GUI elements.

Example:
```python
from command_runner import command_runner

def do_something():
    print("We're done running")

exit_code, output = command_runner('ping 127.0.0.1', on_exit=do_something)
```

### Process and IO priority

`command_runner` can set its subprocess priority to 'low', 'normal' or 'high', which translate to 15, 0 and -15 niceness on Linux, and to BELOW_NORMAL_PRIORITY_CLASS and HIGH_PRIORITY_CLASS on Windows. On Linux, you may also give `priority` a niceness int value directly.

You may also set the subprocess io priority to 'low', 'normal' or 'high'.

Example:
```python
from command_runner import command_runner

exit_code, output = command_runner('some_intensive_process', priority='low', io_priority='high')
```

#### Other arguments

`command_runner` takes **any** argument that `subprocess.Popen()` would take.
It also uses the following standard arguments:

- command (str/list): The command; doesn't need to be a list, a simple string works
- valid_exit_codes (list): List of exit codes which won't trigger error logs
- timeout (int): Seconds before a process tree is killed forcefully, defaults to 3600
- shell (bool): Shall we use the cmd.exe or /usr/bin/env shell for command execution, defaults to False
- encoding (str/bool): Which text encoding the command produces, defaults to cp437 under Windows and utf-8 under Linux
- stdin (sys.stdin/int): Optional stdin file descriptor, sent to the process command_runner spawns
- stdout (str/queue.Queue/function/False/None): Optional path to a filename where to dump stdout, or queue where to write stdout, or callback function which is called when stdout has output
- stderr (str/queue.Queue/function/False/None): Optional path to a filename where to dump stderr, or queue where to write stderr, or callback function which is called when stderr has output
- no_close_queues (bool): Normally, command_runner sends None to the stdout / stderr queues when the process is finished. This behavior can be disabled, allowing those queues to be reused by other functions wrapping command_runner
- windows_no_window (bool): Shall a command create a console window (MS Windows only), defaults to False
- live_output (bool): Print output to stdout while executing the command, defaults to False
- method (str): Accepts 'poller' or 'monitor' stdout capture and timeout monitoring methods
- check_interval (float): Defaults to 0.05 seconds, which is the time between stream readings and timeout checks
- stop_on (function): Optional function that stops command_runner execution when it returns True
- on_exit (function): Optional function that gets executed when command_runner has finished (callback function)
- process_callback (function): Optional function that will take the command_runner spawned process as argument, in order to deal with process info outside of command_runner
- split_streams (bool): Split stdout and stderr into two separate results
- silent (bool): Allows disabling command_runner's internal logs, except for logging.DEBUG levels which for obvious reasons should never be silenced
- priority (str): Allows setting CPU bound process priority (takes 'low', 'normal' or 'high' parameter)
- io_priority (str): Allows setting IO priority for the process (takes 'low', 'normal' or 'high' parameter)
- close_fds (bool): Like Popen, defaults to True on Linux and False on Windows
- universal_newlines (bool): Like Popen, defaults to False
- creationflags (int): Like Popen, defaults to 0
- bufsize (int): Like Popen, defaults to 16384. Line buffering (bufsize=1) is deprecated since Python 3.7

**Note that ALL other subprocess.Popen arguments are supported, since they are directly passed to subprocess.**

## UAC Elevation / sudo elevation

The command_runner package also allows privilege elevation. Becoming an admin is fairly easy with command_runner.elevate: you only have to import the elevate module, and then launch your main function through the elevate function.

### elevation

In a nutshell:

```python
from command_runner.elevate import elevate

def main():
    """My main function that should be elevated"""
    print("Who's the administrator, now ?")

if __name__ == '__main__':
    elevate(main)
```

The elevate function handles arguments (positional and keyword arguments).
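For instance, a hypothetical `main()` taking arguments could be elevated like this (function name, arguments and prints are only illustrative):

```python
from command_runner.elevate import elevate

def main(path, verbose=False):
    """Do something that requires admin / root rights"""
    if verbose:
        print("Working on %s with elevated privileges" % path)

if __name__ == '__main__':
    # Calls main('/some/path', verbose=True), elevating privileges if needed
    elevate(main, '/some/path', verbose=True)
```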
`elevate(main, arg, arg2, kw=somearg)` will call `main(arg, arg2, kw=somearg)`

### Advanced elevate usage

#### is_admin() function

The elevate module has a nifty is_admin() function that returns a boolean according to your current root/administrator privileges.

Usage:
```python
from command_runner.elevate import is_admin

print('Am I an admin ? %s' % is_admin())
```

#### sudo elevation

Initially designed for Windows UAC, command_runner.elevate can also elevate privileges on Linux, using the sudo command. This is mainly designed for PyInstaller / Nuitka executables, as it's really not safe to allow automatic privilege elevation of a Python interpreter.

Example for a binary in `/usr/local/bin/my_compiled_python_binary`:

You'll have to allow this file to be run with sudo without a password prompt. This can be achieved in the `/etc/sudoers` file.

Example for Redhat / Rocky Linux, where adding the following line will allow the elevation process to succeed without a password:
```
someuser ALL= NOPASSWD:/usr/local/bin/my_compiled_python_binary
```

## Footnotes

#### command_runner Python 2.7 compatible queue reader

The following example is a Python 2.7 compatible threaded implementation that reads the stdout / stderr queue in a thread. This only exists for compatibility reasons.

```python
import queue
import threading
from command_runner import command_runner

def read_queue(output_queue):
    """
    Read the queue as a thread
    Our problem here is that the thread can live forever if we don't check a global value, which is...well, ugly
    """
    stream_output = ""
    read_queue = True
    while read_queue:
        try:
            line = output_queue.get(timeout=1)
        except queue.Empty:
            pass
        else:
            # The queue reading can be stopped once 'None' is received.
            if line is None:
                read_queue = False
            else:
                stream_output += line
                # ADD YOUR LIVE CODE HERE

# Create a new queue that command_runner will fill up
output_queue = queue.Queue()

# Create a thread of read_queue() in order to read the queue while command_runner executes the command
read_thread = threading.Thread(
    target=read_queue, args=(output_queue,)
)
read_thread.daemon = True  # thread dies with the program
read_thread.start()

# Launch command_runner, which will be blocking.
# Your live code goes directly into the threaded function
exit_code, output = command_runner('ping 127.0.0.1', stdout=output_queue, method='poller')
```
command_runner-1.6.0/command_runner/__init__.py

#!
/usr/bin/env python # -*- coding: utf-8 -*- # # This file is part of command_runner module """ command_runner is a quick tool to launch commands from Python, get exit code and output, and handle most errors that may happen Versioning semantics: Major version: backward compatibility breaking changes Minor version: New functionality Patch version: Backwards compatible bug fixes """ # python 2.7 compat fixes so all strings are considered unicode from __future__ import unicode_literals __intname__ = "command_runner" __author__ = "Orsiris de Jong" __copyright__ = "Copyright (C) 2015-2024 Orsiris de Jong for NetInvent SASU" __licence__ = "BSD 3 Clause" __version__ = "1.6.0" __build__ = "2024010401" __compat__ = "python2.7+" import io import os import shlex import subprocess import sys from datetime import datetime from logging import getLogger from time import sleep try: import psutil except ImportError: # Don't bother with an error since we need command_runner to work without dependencies pass try: # Also make sure we directly import priority classes so we can reuse them if os.name == "nt": from psutil import ( ABOVE_NORMAL_PRIORITY_CLASS, BELOW_NORMAL_PRIORITY_CLASS, HIGH_PRIORITY_CLASS, IDLE_PRIORITY_CLASS, NORMAL_PRIORITY_CLASS, REALTIME_PRIORITY_CLASS, ) from psutil import IOPRIO_HIGH, IOPRIO_NORMAL, IOPRIO_LOW, IOPRIO_VERYLOW else: from psutil import ( IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE, IOPRIO_CLASS_NONE, IOPRIO_CLASS_RT, ) except (ImportError, AttributeError): pass try: import signal except ImportError: pass # Python 2.7 compat fixes (queue was Queue) try: import queue except ImportError: import Queue as queue import threading # Python 2.7 compat fixes (missing typing) try: from typing import Union, Optional, List, Tuple, NoReturn, Any, Callable except ImportError: pass # Python 2.7 compat fixes (no concurrent futures) try: from concurrent.futures import Future from functools import wraps except ImportError: # Python 2.7 just won't have concurrent.futures, so we just declare threaded and wraps in order to # avoid NameError def threaded(fn): """ Simple placeholder for python 2.7 """ return fn def wraps(fn): """ Simple placeholder for python 2.7 """ return fn # Python 2.7 compat fixes (no FileNotFoundError class) try: # pylint: disable=E0601 (used-before-assignment) FileNotFoundError except NameError: # pylint: disable=W0622 (redefined-builtin) FileNotFoundError = IOError # python <= 3.3 compat fixes (missing TimeoutExpired class) try: TimeoutExpired = subprocess.TimeoutExpired except AttributeError: class TimeoutExpired(BaseException): """ Basic redeclaration when subprocess.TimeoutExpired does not exist, python <= 3.3 """ def __init__(self, cmd, timeout, output=None, stderr=None, *args, **kwargs): self.cmd = cmd self.timeout = timeout self.output = output self.stderr = stderr super().__init__(*args, **kwargs) def __str__(self): return "Command '%s' timed out after %s seconds" % (self.cmd, self.timeout) @property def stdout(self): return self.output @stdout.setter def stdout(self, value): # There's no obvious reason to set this, but allow it anyway so # .stdout is a transparent alias for .output self.output = value class InterruptGetOutput(BaseException): """ Make sure we get the current output when process is stopped mid-execution """ def __init__(self, output, *args, **kwargs): self._output = output super().__init__(*args, **kwargs) @property def output(self): return self._output class KbdInterruptGetOutput(InterruptGetOutput): """ Make sure we get the current output when 
KeyboardInterrupt is made """ def __init__(self, output, *args, **kwargs): self._output = output super().__init__(output, *args, **kwargs) @property def output(self): return self._output class StopOnInterrupt(InterruptGetOutput): """ Make sure we get the current output when optional stop_on function execution returns True """ def __init__(self, output, *args, **kwargs): self._output = output super().__init__(output, *args, **kwargs) @property def output(self): return self._output ### BEGIN DIRECT IMPORT FROM ofunctions.threading def call_with_future(fn, future, args, kwargs): """ Threading a function with return info using Future from https://stackoverflow.com/a/19846691/2635443 Example: @threaded def somefunc(arg): return 'arg was %s' % arg thread = somefunc('foo') while thread.done() is False: time.sleep(1) print(thread.result()) """ try: result = fn(*args, **kwargs) future.set_result(result) except Exception as exc: future.set_exception(exc) # pylint: disable=E0102 (function-redefined) def threaded(fn): """ @threaded wrapper in order to thread any function @wraps decorator sole purpose is for function.__name__ to be the real function instead of 'wrapper' """ @wraps(fn) def wrapper(*args, **kwargs): if kwargs.pop("__no_threads", False): return fn(*args, **kwargs) future = Future() thread = threading.Thread( target=call_with_future, args=(fn, future, args, kwargs) ) thread.daemon = True thread.start() return future return wrapper ### END DIRECT IMPORT FROM ofunctions.threading logger = getLogger(__intname__) PIPE = subprocess.PIPE def _set_priority( pid, # type: int priority, # type: Union[int, str] priority_type, # type: str ): """ Set process and / or io priorities Since Windows and Linux use different possible values, let's simplify things by allowing 3 prioriy types """ priority = priority.lower() if priority_type == "process": if isinstance(priority, int) and os.name != "nt" and -20 <= priority <= 20: raise ValueError("Bogus process priority int given: {}".format(priority)) if priority not in ["low", "normal", "high"]: raise ValueError( "Bogus {} priority given: {}".format(priority_type, priority) ) if priority_type == "io" and priority not in ["low", "normal", "high"]: raise ValueError("Bogus {} priority given: {}".format(priority_type, priority)) if os.name == "nt": priorities = { "process": { "low": BELOW_NORMAL_PRIORITY_CLASS, "normal": NORMAL_PRIORITY_CLASS, "high": HIGH_PRIORITY_CLASS, }, "io": {"low": IOPRIO_LOW, "normal": IOPRIO_NORMAL, "high": IOPRIO_HIGH}, } else: priorities = { "process": {"low": 15, "normal": 0, "high": -15}, "io": { "low": IOPRIO_CLASS_IDLE, "normal": IOPRIO_CLASS_BE, "high": IOPRIO_CLASS_RT, }, } if priority_type == "process": # Allow direct priority nice settings under linux if isinstance(priority, int): _priority = priority else: _priority = priorities[priority_type][priority] psutil.Process(pid).nice(_priority) elif priority_type == "io": psutil.Process(pid).ionice(priorities[priority_type][priority]) else: raise ValueError("Bogus priority type given.") def set_priority( pid, # type: int priority, # type: Union[int, str] ): """ Shorthand for _set_priority """ _set_priority(pid, priority, "process") def set_io_priority( pid, # type: int priority, # type: str ): """ Shorthand for _set_priority """ _set_priority(pid, priority, "io") def to_encoding( process_output, # type: Union[str, bytes] encoding, # type: Optional[str] errors, # type: str ): # type: (...) 
-> str """ Convert bytes output to string and handles conversion errors Varation of ofunctions.string_handling.safe_string_convert """ if not encoding: return process_output # Compatibility for earlier Python versions where Popen has no 'encoding' nor 'errors' arguments if isinstance(process_output, bytes): try: process_output = process_output.decode(encoding, errors=errors) except TypeError: try: # handle TypeError: don't know how to handle UnicodeDecodeError in error callback process_output = process_output.decode(encoding, errors="ignore") except (ValueError, TypeError): # What happens when str cannot be concatenated logger.error("Output cannot be captured {}".format(process_output)) return process_output def kill_childs_mod( pid=None, # type: int itself=False, # type: bool soft_kill=False, # type: bool ): # type: (...) -> bool """ Inline version of ofunctions.kill_childs that has no hard dependency on psutil Kills all childs of pid (current pid can be obtained with os.getpid()) If no pid given current pid is taken Good idea when using multiprocessing, is to call with atexit.register(ofunctions.kill_childs, os.getpid(),) Beware: MS Windows does not maintain a process tree, so child dependencies are computed on the fly Knowing this, orphaned processes (where parent process died) cannot be found and killed this way Prefer using process.send_signal() in favor of process.kill() to avoid race conditions when PID was reused too fast :param pid: Which pid tree we'll kill :param itself: Should parent be killed too ? """ sig = None ### BEGIN COMMAND_RUNNER MOD if "psutil" not in sys.modules: logger.error( "No psutil module present. Can only kill direct pids, not child subtree." ) if "signal" not in sys.modules: logger.error( "No signal module present. Using direct psutil kill API which might have race conditions when PID is reused too fast." ) else: """ Warning: There are only a couple of signals supported on Windows platform Extract from signal.valid_signals(): Windows / Python 3.9-64 {, , , , , , } Linux / Python 3.8-64 {, , , , , , , , , , , , , , , 16, , , , , , , , , , , , , , , , , 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, } A ValueError will be raised in any other case. Note that not all systems define the same set of signal names; an AttributeError will be raised if a signal name is not defined as SIG* module level constant. """ try: if not soft_kill and hasattr(signal, "SIGKILL"): # Don't bother to make pylint go crazy on Windows # pylint: disable=E1101 sig = signal.SIGKILL else: sig = signal.SIGTERM except NameError: sig = None ### END COMMAND_RUNNER MOD def _process_killer( process, # type: Union[subprocess.Popen, psutil.Process] sig, # type: signal.valid_signals soft_kill, # type: bool ): # (...) 
-> None """ Simple abstract process killer that works with signals in order to avoid reused PID race conditions and can prefers using terminate than kill """ if sig: try: process.send_signal(sig) # psutil.NoSuchProcess might not be available, let's be broad # pylint: disable=W0703 except Exception: pass else: if soft_kill: process.terminate() else: process.kill() try: current_process = psutil.Process(pid) # psutil.NoSuchProcess might not be available, let's be broad # pylint: disable=W0703 except Exception: if itself: ### BEGIN COMMAND_RUNNER MOD try: os.kill( pid, 15 ) # 15 being signal.SIGTERM or SIGKILL depending on the platform except OSError as exc: if os.name == "nt": # We'll do an ugly hack since os.kill() has some pretty big caveats on Windows # especially for Python 2.7 where we can get Access Denied os.system("taskkill /F /pid {}".format(pid)) else: logger.error( "Could not properly kill process with pid {}: {}".format( pid, to_encoding(exc.__str__(), "utf-8", "backslashreplace"), ) ) raise ### END COMMAND_RUNNER MOD return False else: for child in current_process.children(recursive=True): _process_killer(child, sig, soft_kill) if itself: _process_killer(current_process, sig, soft_kill) return True def command_runner( command, # type: Union[str, List[str]] valid_exit_codes=False, # type: Union[List[int], bool] timeout=3600, # type: Optional[int] shell=False, # type: bool encoding=None, # type: Optional[Union[str, bool]] stdin=None, # type: Optional[Union[int, str, Callable, queue.Queue]] stdout=None, # type: Optional[Union[int, str, Callable, queue.Queue]] stderr=None, # type: Optional[Union[int, str, Callable, queue.Queue]] no_close_queues=False, # type: Optional[bool] windows_no_window=False, # type: bool live_output=False, # type: bool method="monitor", # type: str check_interval=0.05, # type: float stop_on=None, # type: Callable on_exit=None, # type: Callable process_callback=None, # type: Callable split_streams=False, # type: bool silent=False, # type: bool priority=None, # type: Union[int, str] io_priority=None, # type: str **kwargs # type: Any ): # type: (...) 
-> Union[Tuple[int, Optional[Union[bytes, str]]], Tuple[int, Optional[Union[bytes, str]], Optional[Union[bytes, str]]]] """ Unix & Windows compatible subprocess wrapper that handles output encoding and timeouts Newer Python check_output already handles encoding and timeouts, but this one is retro-compatible It is still recommended to set cp437 for windows and utf-8 for unix Also allows a list of various valid exit codes (ie no error when exit code = arbitrary int) command should be a list of strings, eg ['ping', '-c 2', '127.0.0.1'] command can also be a single string, ex 'ping -c 2 127.0.0.1' if shell=True or if os is Windows Accepts all of subprocess.popen arguments Whenever we can, we need to avoid shell=True in order to preserve better security Avoiding shell=True involves passing absolute paths to executables since we don't have shell PATH environment When no stdout option is given, we'll get output into the returned (exit_code, output) tuple When stdout = filename or stderr = filename, we'll write output to the given file live_output will poll the process for output and show it on screen (output may be non reliable, don't use it if your program depends on the commands' stdout output) windows_no_window will disable visible window (MS Windows platform only) stop_on is an optional function that will stop execution if function returns True priority and io_priority can be set to 'low', 'normal' or 'high' priority may also be an int from -20 to 20 on Unix Returns a tuple (exit_code, output) """ # Choose default encoding when none set # cp437 encoding assures we catch most special characters from cmd.exe # Unless encoding=False in which case nothing gets encoded except Exceptions and logger strings for Python 2 error_encoding = "cp437" if os.name == "nt" else "utf-8" if encoding is None: encoding = error_encoding # Fix when unix command was given as single string # This is more secure than setting shell=True if os.name == "posix": if not shell and isinstance(command, str): command = shlex.split(command) elif shell and isinstance(command, list): command = " ".join(command) # Set default values for kwargs errors = kwargs.pop( "errors", "backslashreplace" ) # Don't let encoding issues make you mad universal_newlines = kwargs.pop("universal_newlines", False) creationflags = kwargs.pop("creationflags", 0) # subprocess.CREATE_NO_WINDOW was added in Python 3.7 for Windows OS only if ( windows_no_window and sys.version_info[0] >= 3 and sys.version_info[1] >= 7 and os.name == "nt" ): # Disable the following pylint error since the code also runs on nt platform, but # triggers an error on Unix # pylint: disable=E1101 creationflags = creationflags | subprocess.CREATE_NO_WINDOW close_fds = kwargs.pop("close_fds", "posix" in sys.builtin_module_names) # Default buffer size. 
line buffer (1) is deprecated in Python 3.7+ bufsize = kwargs.pop("bufsize", 16384) # Decide whether we write to output variable only (stdout=None), to output variable and stdout (stdout=PIPE) # or to output variable and to file (stdout='path/to/file') if stdout is None: _stdout = PIPE stdout_destination = "pipe" elif callable(stdout): _stdout = PIPE stdout_destination = "callback" elif isinstance(stdout, queue.Queue): _stdout = PIPE stdout_destination = "queue" elif isinstance(stdout, str): # We will send anything to file _stdout = open(stdout, "wb") stdout_destination = "file" elif stdout is False: # Python 2.7 does not have subprocess.DEVNULL, hence we need to use a file descriptor try: _stdout = subprocess.DEVNULL except AttributeError: _stdout = PIPE stdout_destination = None else: # We will send anything to given stdout pipe _stdout = stdout stdout_destination = "pipe" # The only situation where we don't add stderr to stdout is if a specific target file was given if callable(stderr): _stderr = PIPE stderr_destination = "callback" elif isinstance(stderr, queue.Queue): _stderr = PIPE stderr_destination = "queue" elif isinstance(stderr, str): _stderr = open(stderr, "wb") stderr_destination = "file" elif stderr is False: try: _stderr = subprocess.DEVNULL except AttributeError: _stderr = PIPE stderr_destination = None elif stderr is not None: _stderr = stderr stderr_destination = "pipe" # Automagically add a pipe so we are sure not to redirect to stdout elif split_streams: _stderr = PIPE stderr_destination = "pipe" else: _stderr = subprocess.STDOUT stderr_destination = "stdout" def _read_pipe( stream, # type: io.StringIO output_queue, # type: queue.Queue ): # type: (...) -> None """ will read from subprocess.PIPE Must be threaded since readline() might be blocking on Windows GUI apps Partly based on https://stackoverflow.com/a/4896288/2635443 """ # WARNING: Depending on the stream type (binary or text), the sentinel character # needs to be of the same type, or the iterator won't have an end # We also need to check that stream has readline, in case we're writing to files instead of PIPE # Another magnificient python 2.7 fix # So we need to convert sentinel_char which would be unicode because of unicode_litterals # to str which is the output format from stream.readline() if hasattr(stream, "readline"): sentinel_char = str("") if hasattr(stream, "encoding") else b"" for line in iter(stream.readline, sentinel_char): output_queue.put(line) output_queue.put(None) stream.close() def _get_error_output(output_stdout, output_stderr): """ Try to concatenate output for exceptions if possible """ try: return output_stdout + output_stderr except TypeError: if output_stdout: return output_stdout if output_stderr: return output_stderr return None def _poll_process( process, # type: Union[subprocess.Popen[str], subprocess.Popen] timeout, # type: int encoding, # type: str errors, # type: str ): # type: (...) -> Union[Tuple[int, Optional[str]], Tuple[int, Optional[str], Optional[str]]] """ Process stdout/stderr output polling is only used in live output mode since it takes more resources than using communicate() Reads from process output pipe until: - Timeout is reached, in which case we'll terminate the process - Process ends by itself Returns an encoded string of the pipe output """ def __check_timeout( begin_time, # type: datetime.timestamp timeout, # type: int ): # type: (...) 
-> None """ Simple subfunction to check whether timeout is reached Since we check this alot, we put it into a function """ if timeout and (datetime.now() - begin_time).total_seconds() > timeout: kill_childs_mod(process.pid, itself=True, soft_kill=False) raise TimeoutExpired( process, timeout, _get_error_output(output_stdout, output_stderr) ) if stop_on and stop_on(): kill_childs_mod(process.pid, itself=True, soft_kill=False) raise StopOnInterrupt(_get_error_output(output_stdout, output_stderr)) begin_time = datetime.now() if encoding is False: output_stdout = output_stderr = b"" else: output_stdout = output_stderr = "" try: if stdout_destination is not None: stdout_read_queue = True stdout_queue = queue.Queue() stdout_read_thread = threading.Thread( target=_read_pipe, args=(process.stdout, stdout_queue) ) stdout_read_thread.daemon = True # thread dies with the program stdout_read_thread.start() else: stdout_read_queue = False # Don't bother to read stderr if we redirect to stdout if stderr_destination not in ["stdout", None]: stderr_read_queue = True stderr_queue = queue.Queue() stderr_read_thread = threading.Thread( target=_read_pipe, args=(process.stderr, stderr_queue) ) stderr_read_thread.daemon = True # thread dies with the program stderr_read_thread.start() else: stderr_read_queue = False while stdout_read_queue or stderr_read_queue: if stdout_read_queue: try: line = stdout_queue.get(timeout=check_interval) except queue.Empty: pass else: if line is None: stdout_read_queue = False else: line = to_encoding(line, encoding, errors) if stdout_destination == "callback": stdout(line) if stdout_destination == "queue": stdout.put(line) if live_output: sys.stdout.write(line) output_stdout += line if stderr_read_queue: try: line = stderr_queue.get(timeout=check_interval) except queue.Empty: pass else: if line is None: stderr_read_queue = False else: line = to_encoding(line, encoding, errors) if stderr_destination == "callback": stderr(line) if stderr_destination == "queue": stderr.put(line) if live_output: sys.stderr.write(line) if split_streams: output_stderr += line else: output_stdout += line __check_timeout(begin_time, timeout) # Make sure we wait for the process to terminate, even after # output_queue has finished sending data, so we catch the exit code while process.poll() is None: __check_timeout(begin_time, timeout) # Additional timeout check to make sure we don't return an exit code from processes # that were killed because of timeout __check_timeout(begin_time, timeout) exit_code = process.poll() if split_streams: return exit_code, output_stdout, output_stderr return exit_code, output_stdout except KeyboardInterrupt: raise KbdInterruptGetOutput(_get_error_output(output_stdout, output_stderr)) def _timeout_check_thread( process, # type: Union[subprocess.Popen[str], subprocess.Popen] timeout, # type: int must_stop, # type dict ): # type: (...) 
-> None """ Since elder python versions don't have timeout, we need to manually check the timeout for a process when working in process monitor mode """ begin_time = datetime.now() while True: if timeout and (datetime.now() - begin_time).total_seconds() > timeout: kill_childs_mod(process.pid, itself=True, soft_kill=False) must_stop["value"] = "T" # T stands for TIMEOUT REACHED break if stop_on and stop_on(): kill_childs_mod(process.pid, itself=True, soft_kill=False) must_stop["value"] = "S" # S stands for STOP_ON RETURNED TRUE break if process.poll() is not None: break # We definitly need some sleep time here or else we will overload CPU sleep(check_interval) def _monitor_process( process, # type: Union[subprocess.Popen[str], subprocess.Popen] timeout, # type: int encoding, # type: str errors, # type: str ): # type: (...) -> Union[Tuple[int, Optional[str]], Tuple[int, Optional[str], Optional[str]]] """ Create a thread in order to enforce timeout or a stop_on condition Get stdout output and return it """ # Shared mutable objects have proven to have race conditions with PyPy 3.7 (mutable object # is changed in thread, but outer monitor function has still old mutable object state) # Strangely, this happened only sometimes on github actions/ubuntu 20.04.3 & pypy 3.7 # Just make sure the thread is done before using mutable object must_stop = {"value": False} thread = threading.Thread( target=_timeout_check_thread, args=(process, timeout, must_stop), ) thread.daemon = True # was setDaemon(True) which has been deprecated thread.start() if encoding is False: output_stdout = output_stderr = b"" output_stdout_end = output_stderr_end = b"" else: output_stdout = output_stderr = "" output_stdout_end = output_stderr_end = "" try: # Don't use process.wait() since it may deadlock on old Python versions # Also it won't allow communicate() to get incomplete output on timeouts while process.poll() is None: if must_stop["value"]: break # We still need to use process.communicate() in this loop so we don't get stuck # with poll() is not None even after process is finished, when using shell=True # Behavior validated on python 3.7 try: output_stdout, output_stderr = process.communicate() # ValueError is raised on closed IO file except (TimeoutExpired, ValueError): pass exit_code = process.poll() try: output_stdout_end, output_stderr_end = process.communicate() except (TimeoutExpired, ValueError): pass # Fix python 2.7 first process.communicate() call will have output whereas other python versions # will give output in second process.communicate() call if output_stdout_end and len(output_stdout_end) > 0: output_stdout = output_stdout_end if output_stderr_end and len(output_stderr_end) > 0: output_stderr = output_stderr_end if split_streams: if stdout_destination is not None: output_stdout = to_encoding(output_stdout, encoding, errors) if stderr_destination is not None: output_stderr = to_encoding(output_stderr, encoding, errors) else: if stdout_destination is not None: output_stdout = to_encoding(output_stdout, encoding, errors) # On PyPy 3.7 only, we can have a race condition where we try to read the queue before # the thread could write to it, failing to register a timeout. 
# This workaround prevents reading the mutable object while the thread is still alive while thread.is_alive(): sleep(check_interval) if must_stop["value"] == "T": raise TimeoutExpired( process, timeout, _get_error_output(output_stdout, output_stderr) ) if must_stop["value"] == "S": raise StopOnInterrupt(_get_error_output(output_stdout, output_stderr)) if split_streams: return exit_code, output_stdout, output_stderr return exit_code, output_stdout except KeyboardInterrupt: raise KbdInterruptGetOutput(_get_error_output(output_stdout, output_stderr)) # After all the stuff above, here's finally the function main entry point output_stdout = output_stderr = None try: # Don't allow monitor method when stdout or stderr is callback/queue redirection (makes no sense) if method == "monitor" and ( stdout_destination in [ "callback", "queue", ] or stderr_destination in ["callback", "queue"] ): raise ValueError( 'Cannot use callback or queue destination in monitor mode. Please use method="poller" argument.' ) # Finally, we won't use encoding & errors arguments for Popen # since it would defeat the idea of binary pipe reading in live mode # Python >= 3.3 has SubProcessError(TimeoutExpired) class # Python >= 3.6 has encoding & error arguments # universal_newlines=True makes netstat command fail under windows # timeout does not work under Python 2.7 with subprocess32 < 3.5 # decoder may be cp437 or unicode_escape for dos commands or utf-8 for powershell # Disabling pylint error for the same reason as above # pylint: disable=E1123 if sys.version_info >= (3, 6): process = subprocess.Popen( command, stdin=stdin, stdout=_stdout, stderr=_stderr, shell=shell, universal_newlines=universal_newlines, encoding=encoding if encoding is not False else None, errors=errors if encoding is not False else None, creationflags=creationflags, bufsize=bufsize, # 1 = line buffered close_fds=close_fds, **kwargs ) else: process = subprocess.Popen( command, stdin=stdin, stdout=_stdout, stderr=_stderr, shell=shell, universal_newlines=universal_newlines, creationflags=creationflags, bufsize=bufsize, close_fds=close_fds, **kwargs ) # Set process priority if given if priority: try: try: set_priority(process.pid, priority) except psutil.AccessDenied as exc: logger.warning( "Cannot set process priority {}. Access denied.".format(exc) ) except Exception as exc: logger.warning("Cannot set process priority: {}".format(exc)) logger.debug("Trace:", exc_info=True) except NameError: logger.warning( "Cannot set process priority. No psutil module installed." ) logger.debug("Trace:", exc_info=True) # Set io priority if given if io_priority: try: try: set_io_priority(process.pid, io_priority) except psutil.AccessDenied as exc: logger.warning( "Cannot set io priority for process {}: access denied.".format( exc ) ) except Exception as exc: logger.warning("Cannot set io priority: {}".format(exc)) logger.debug("Trace:", exc_info=True) raise except NameError: logger.warning("Cannot set io priority. 
No psutil module installed.") try: # let's return process information if callback was given if callable(process_callback): process_callback(process) if method == "poller" or live_output and _stdout is not False: if split_streams: exit_code, output_stdout, output_stderr = _poll_process( process, timeout, encoding, errors ) else: exit_code, output_stdout = _poll_process( process, timeout, encoding, errors ) elif method == "monitor": if split_streams: exit_code, output_stdout, output_stderr = _monitor_process( process, timeout, encoding, errors ) else: exit_code, output_stdout = _monitor_process( process, timeout, encoding, errors ) else: raise ValueError("Unknown method {} provided.".format(method)) except KbdInterruptGetOutput as exc: exit_code = -252 output_stdout = "KeyboardInterrupted. Partial output\n{}".format(exc.output) try: kill_childs_mod(process.pid, itself=True, soft_kill=False) except AttributeError: pass if stdout_destination == "file" and output_stdout: _stdout.write(output_stdout.encode(encoding, errors=errors)) if stderr_destination == "file" and output_stderr: _stderr.write(output_stderr.encode(encoding, errors=errors)) elif stdout_destination == "file" and output_stderr: _stdout.write(output_stderr.encode(encoding, errors=errors)) logger.debug( 'Command "{}" returned with exit code "{}". Command output was:'.format( command, exit_code ) ) except subprocess.CalledProcessError as exc: exit_code = exc.returncode try: output_stdout = exc.output except AttributeError: output_stdout = "command_runner: Could not obtain output from command." logger_fn = logger.error valid_exit_codes_msg = "" if valid_exit_codes: if valid_exit_codes is True or exit_code in valid_exit_codes: logger_fn = logger.info valid_exit_codes_msg = " allowed" if not silent: logger_fn( 'Command "{}" failed with{} exit code "{}". Command output was:'.format( command, valid_exit_codes_msg, exc.returncode, ) ) logger_fn(output_stdout) except FileNotFoundError as exc: message = 'Command "{}" failed, file not found: {}'.format( command, to_encoding(exc.__str__(), error_encoding, errors) ) if not silent: logger.error(message) if stdout_destination == "file": _stdout.write(message.encode(error_encoding, errors=errors)) exit_code, output_stdout = (-253, message) # On python 2.7, OSError is also raised when file is not found (no FileNotFoundError) # pylint: disable=W0705 (duplicate-except) except (OSError, IOError) as exc: message = 'Command "{}" failed because of OS: {}'.format( command, to_encoding(exc.__str__(), error_encoding, errors) ) if not silent: logger.error(message) if stdout_destination == "file": _stdout.write(message.encode(error_encoding, errors=errors)) exit_code, output_stdout = (-253, message) except TimeoutExpired as exc: message = 'Timeout {} seconds expired for command "{}" execution. Original output was: {}'.format( timeout, command, exc.output ) if not silent: logger.error(message) if stdout_destination == "file": _stdout.write(message.encode(error_encoding, errors=errors)) exit_code, output_stdout = (-254, message) except StopOnInterrupt as exc: message = "Command {} was stopped because stop_on function returned True. 
Original output was: {}".format( command, to_encoding(exc.output, error_encoding, errors) ) if not silent: logger.info(message) if stdout_destination == "file": _stdout.write(message.encode(error_encoding, errors=errors)) exit_code, output_stdout = (-251, message) except ValueError as exc: message = to_encoding(exc.__str__(), error_encoding, errors) if not silent: logger.error(message, exc_info=True) if stdout_destination == "file": _stdout.write(message) exit_code, output_stdout = (-250, message) # We need to be able to catch a broad exception # pylint: disable=W0703 except Exception as exc: if not silent: logger.error( 'Command "{}" failed for unknown reasons: {}'.format( command, to_encoding(exc.__str__(), error_encoding, errors) ), exc_info=True, ) exit_code, output_stdout = ( -255, to_encoding(exc.__str__(), error_encoding, errors), ) finally: if stdout_destination == "file": _stdout.close() if stderr_destination == "file": _stderr.close() stdout_output = to_encoding(output_stdout, error_encoding, errors) if stdout_output: logger.debug("STDOUT: " + stdout_output) if stderr_destination not in ["stdout", None]: stderr_output = to_encoding(output_stderr, error_encoding, errors) if stderr_output: logger.debug("STDERR: " + stderr_output) # Make sure we send a simple queue end before leaving to make sure any queue read process will stop regardless # of command_runner state (useful when launching with queue and method poller which isn't supposed to write queues) if not no_close_queues: if stdout_destination == "queue": stdout.put(None) if stderr_destination == "queue": stderr.put(None) # With polling, we return None if nothing has been send to the queues # With monitor, process.communicate() will result in '' even if nothing has been sent # Let's fix this here # Python 2.7 will return False to u'' == '' (UnicodeWarning: Unicode equal comparison failed) # so we have to make the following statement if stdout_destination is None or ( output_stdout is not None and len(output_stdout) == 0 ): output_stdout = None if stderr_destination is None or ( output_stderr is not None and len(output_stderr) == 0 ): output_stderr = None if on_exit: logger.debug("Running on_exit callable.") on_exit() if split_streams: return exit_code, output_stdout, output_stderr return exit_code, _get_error_output(output_stdout, output_stderr) if sys.version_info[0] >= 3: @threaded def command_runner_threaded(*args, **kwargs): """ Threaded version of command_runner_threaded which returns concurrent.Future result Not available for Python 2.7 """ return command_runner(*args, **kwargs) def deferred_command(command, defer_time=300): # type: (str, int) -> None """ This is basically an ugly hack to launch commands which are detached from parent process Especially useful to launch an auto update/deletion of a running executable after a given amount of seconds after it finished """ # Use ping as a standard timer in shell since it's present on virtually *any* system if os.name == "nt": deferrer = "ping 127.0.0.1 -n {} > NUL & ".format(defer_time) else: deferrer = "sleep {} && ".format(defer_time) # We'll create a independent shell process that will not be attached to any stdio interface # Our command shall be a single string since shell=True subprocess.Popen( deferrer + command, shell=True, stdin=None, stdout=None, stderr=None, close_fds=True, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1703768654.0 command_runner-1.6.0/command_runner/elevate.py0000666000000000000000000002241614543271116016476 
0ustar00#! /usr/bin/env python # -*- coding: utf-8 -*- # # This file is part of command_runner module """ elevate is a Windows/ unix compatible function elevator for Python 3+ usage: import sys from elevate import elevate def main(argv): print('Hello world, with arguments %s' % argv) # Hey, check my exit code ;) sys.exit(123) if __name__ == '__main__': elevate(main, sys.argv) Versioning semantics: Major version: backward compatibility breaking changes Minor version: New functionality Patch version: Backwards compatible bug fixes """ __intname__ = "command_runner.elevate" __author__ = "Orsiris de Jong" __copyright__ = "Copyright (C) 2017-2023 Orsiris de Jong" __licence__ = "BSD 3 Clause" __version__ = "0.3.2" __build__ = "2023122801" from logging import getLogger import os import sys from command_runner import command_runner if os.name == "nt": try: import win32event # monitor process import win32process # monitor process from win32com.shell.shell import ShellExecuteEx from win32com.shell.shell import IsUserAnAdmin from win32com.shell import shellcon except ImportError: raise ImportError( "Cannot import ctypes for checking admin privileges on Windows platform." ) logger = getLogger(__name__) def is_admin(): # type: () -> bool """ Checks whether current program has administrative privileges in OS Works with Windows XP SP2+ and most Unixes :return: Boolean, True if admin privileges present """ current_os_name = os.name # Works with XP SP2 + if current_os_name == "nt": try: return IsUserAnAdmin() except Exception: raise EnvironmentError("Cannot check admin privileges") elif current_os_name == "posix": # Check for root on Posix # os.getuid only exists on postix OSes # pylint: disable=E1101 (no-member) return os.getuid() == 0 else: raise EnvironmentError( "OS does not seem to be supported for admin check. 
OS: {}".format( current_os_name ) ) def get_absolute_path(executable): # type: (str) -> str """ Search for full executable path in preferred shell paths This allows avoiding usage of shell=True with subprocess """ executable_path = None exit_code, output = command_runner(["type", "-p", "sudo"]) if exit_code == 0: # Remove ending '\n'' character output = output.strip() if os.path.isfile(output): return output if os.name == "nt": split_char = ";" else: split_char = ":" for path in os.environ.get("PATH", "").split(split_char): if os.path.isfile(os.path.join(path, executable)): executable_path = os.path.join(path, executable) return executable_path def _windows_runner(runner, arguments): # type: (str, str) -> int # Old method using ctypes which does not wait for executable to exit nor does get exit code # See https://docs.microsoft.com/en-us/windows/desktop/api/shellapi/nf-shellapi-shellexecutew # int 0 means SH_HIDE window, 1 is SW_SHOWNORMAL # needs the following imports # import ctypes # ctypes.windll.shell32.ShellExecuteW(None, 'runas', runner, arguments, None, 0) # Method with exit code that waits for executable to exit, needs the following imports # import win32event # monitor process # import win32process # monitor process # from win32com.shell.shell import ShellExecuteEx # from win32com.shell import shellcon # pylint: disable=C0103 (invalid-name) childProcess = ShellExecuteEx( nShow=0, fMask=shellcon.SEE_MASK_NOCLOSEPROCESS, lpVerb="runas", lpFile=runner, lpParameters=arguments, ) # pylint: disable=C0103 (invalid-name) procHandle = childProcess["hProcess"] # pylint: disable=I1101 (c-extension-no-member) win32event.WaitForSingleObject(procHandle, win32event.INFINITE) # pylint: disable=I1101 (c-extension-no-member) exit_code = win32process.GetExitCodeProcess(procHandle) return exit_code def _check_environment(): # type: () -> (str, str) # Regardless of the runner (CPython, Nuitka or frozen CPython), sys.argv[0] is the relative path to script, # sys.argv[1] are the arguments # The only exception being CPython on Windows where sys.argv[0] contains absolute path to script # Regarless of OS, sys.executable will contain full path to python binary for CPython and Nuitka, # and full path to frozen executable on frozen CPython # Recapitulative table create with # (CentOS 7x64 / Python 3.4 / Nuitka 0.6.1 / PyInstaller 3.4) and # (Windows 10 x64 / Python 3.7x32 / Nuitka 0.6.2.10 / PyInstaller 3.4) # -------------------------------------------------------------------------------------------------------------- # | OS | Variable | CPython | Nuitka | PyInstaller | # |------------------------------------------------------------------------------------------------------------| # | Lin | argv | ['./script.py', '-h'] | ['./test', '-h'] | ['./test.py', -h'] | # | Lin | sys.executable | /usr/bin/python3.4 | /usr/bin/python3.4 | /absolute/path/to/test | # | Win | argv | ['C:\\Python\\test.py', '-h'] | ['test', '-h'] | ['test', '-h'] | # | Win | sys.executable | C:\Python\python.exe | C:\Python\Python.exe | C:\absolute\path\to\test.exe | # -------------------------------------------------------------------------------------------------------------- # Nuitka > 0.8 just declares __compiled__ variables # Nuitka 0.6.2 and newer define builtin __nuitka_binary_dir # Nuitka does not set the frozen attribute on sys # Nuitka < 0.6.2 can be detected in sloppy ways, ie if not sys.argv[0].endswith('.py') or len(sys.path) < 3 # Let's assume this will only be compiled with newer nuitka, and remove sloppy detections 
is_nuitka_compiled = False try: # Actual if statement not needed, but keeps code inspectors more happy if "__compiled__" in globals(): is_nuitka_compiled = True except NameError: pass if is_nuitka_compiled: # On nuitka, sys.executable is the python binary, even if it does not exist in standalone, # so we need to fill runner with sys.argv[0] absolute path runner = os.path.abspath(sys.argv[0]) arguments = sys.argv[1:] # current_dir = os.path.dirname(runner) logger.debug('Running elevator as Nuitka with runner "{}"'.format(runner)) # If a freezer is used (PyInstaller, cx_freeze, py2exe) elif getattr(sys, "frozen", False): runner = os.path.abspath(sys.executable) arguments = sys.argv[1:] # current_dir = os.path.dirname(runner) logger.debug('Running elevator as Frozen with runner "{}"'.format(runner)) # If standard interpreter CPython is used else: runner = os.path.abspath(sys.executable) arguments = [os.path.abspath(sys.argv[0])] + sys.argv[1:] # current_dir = os.path.abspath(sys.argv[0]) logger.debug('Running elevator as CPython with runner "{}"'.format(runner)) logger.debug('Arguments are "{}"'.format(arguments)) return runner, arguments def elevate(callable_function, *args, **kwargs): """ UAC elevation / sudo code working for CPython, Nuitka >= 0.6.2, PyInstaller, PyExe, CxFreeze """ if is_admin(): # Don't bother if we already got mighty admin privileges callable_function(*args, **kwargs) else: runner, arguments = _check_environment() # Windows runner if os.name == "nt": # Re-run the script with admin rights # Join arguments and double quote each argument in order to prevent space separation arguments = " ".join('"' + arg + '"' for arg in arguments) try: exit_code = _windows_runner(runner, arguments) logger.debug('Child exited with code "{}"'.format(exit_code)) sys.exit(exit_code) except Exception as exc: logger.info(exc) logger.debug("Trace:", exc_info=True) sys.exit(255) # Linux runner and hopefully Unixes else: # Re-run the script but with sudo sudo_path = get_absolute_path("sudo") if sudo_path is None: logger.error( "Cannot find sudo executable. Trying to run without privileges elevation." 
) callable_function(*args, **kwargs) else: command = ["sudo", runner] + arguments # Optionnaly might also pass a stdout PIPE to command_runner so we get live output exit_code, output = command_runner(command, shell=False, timeout=None) logger.info("Child output: {}".format(output)) sys.exit(exit_code) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1704829528.3004427 command_runner-1.6.0/command_runner.egg-info/0000777000000000000000000000000014547321130016160 5ustar00././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704829527.0 command_runner-1.6.0/command_runner.egg-info/PKG-INFO0000666000000000000000000006170614547321127017275 0ustar00Metadata-Version: 2.1 Name: command-runner Version: 1.6.0 Summary: Platform agnostic command and shell execution tool, also allows UAC/sudo privilege elevation Home-page: https://github.com/netinvent/command_runner Author: NetInvent - Orsiris de Jong Author-email: contact@netinvent.fr License: BSD Keywords: shell,execution,subprocess,check_output,wrapper,uac,sudo,elevate,privilege Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Topic :: Software Development Classifier: Topic :: System Classifier: Topic :: System :: Operating System Classifier: Topic :: System :: Shells Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Classifier: Operating System :: POSIX :: Linux Classifier: Operating System :: POSIX :: BSD :: FreeBSD Classifier: Operating System :: POSIX :: BSD :: NetBSD Classifier: Operating System :: POSIX :: BSD :: OpenBSD Classifier: Operating System :: Microsoft Classifier: Operating System :: Microsoft :: Windows Classifier: License :: OSI Approved :: BSD License Requires-Python: >=2.7 Description-Content-Type: text/markdown License-File: LICENSE Requires-Dist: psutil>=5.6.0 # command_runner # Platform agnostic command execution, timed background jobs with live stdout/stderr output capture, and UAC/sudo elevation [![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause) [![Percentage of issues still open](http://isitmaintained.com/badge/open/netinvent/command_runner.svg)](http://isitmaintained.com/project/netinvent/command_runner "Percentage of issues still open") [![Maintainability](https://api.codeclimate.com/v1/badges/defbe10a354d3705f287/maintainability)](https://codeclimate.com/github/netinvent/command_runner/maintainability) [![codecov](https://codecov.io/gh/netinvent/command_runner/branch/master/graph/badge.svg?token=rXqlphOzMh)](https://codecov.io/gh/netinvent/command_runner) [![linux-tests](https://github.com/netinvent/command_runner/actions/workflows/linux.yaml/badge.svg)](https://github.com/netinvent/command_runner/actions/workflows/linux.yaml) [![windows-tests](https://github.com/netinvent/command_runner/actions/workflows/windows.yaml/badge.svg)](https://github.com/netinvent/command_runner/actions/workflows/windows.yaml) [![GitHub Release](https://img.shields.io/github/release/netinvent/command_runner.svg?label=Latest)](https://github.com/netinvent/command_runner/releases/latest) command_runner's purpose is to run external commands from python, just like subprocess on which it relies, while solving various problems a developer may face among: - Handling of all possible 
subprocess.popen / subprocess.check_output scenarios / python versions in one handy function without encoding / timeout hassle
- Allow stdout/stderr stream output to be redirected to callback functions / output queues / files so you get to handle output in your application while commands are running
- Callback to optional stop check so we can stop execution from outside command_runner
- Callback with optional process information so we get to control the process from outside command_runner
- Callback once we're finished to ease thread usage
- Optional process priority and io_priority settings
- System agnostic functionality, the developer shouldn't carry the burden of Windows & Linux differences
- Optional Windows UAC elevation module compatible with CPython, PyInstaller & Nuitka
- Optional Linux sudo elevation compatible with CPython, PyInstaller & Nuitka

It is compatible with Python 2.7+, tested up to Python 3.11 (backports some newer Python 3.5 functionality) and is tested on both Linux and Windows.
It is also compatible with the PyPy Python implementation.
...and yes, keeping Python 2.7 compatibility has proven to be quite challenging.

## command_runner

command_runner is a replacement package for subprocess.popen and subprocess.check_output.
Its main promise is that you will never have a blocking command and will always get results.

It works as a wrapper for subprocess.popen and subprocess.communicate that solves:
- Platform differences
- Handle timeouts even for Windows GUI applications that don't return anything to stdout
- Python language version differences
- Handle timeouts even on earlier Python implementations
- Handle encoding even on earlier Python implementations
- Keep the promise to always return an exit code (so we don't have to deal with exit codes and exception logic at the same time)
- Keep the promise to always return the command output regardless of the execution state (even with timeouts, callback interrupts and keyboard interrupts)
- Can show command output on the fly without waiting for the end of execution (with the `live_output=True` argument)
- Can give command output on the fly to the application by using queues or callback functions
- Catch all possible exceptions and log them properly with encoding fixes
- Be compatible, and always return the same result regardless of platform

command_runner also promises to properly kill commands when timeouts are reached, including spawned subprocesses of such commands.
This specific behavior is achieved via the psutil module, which is an optional dependency.

### command_runner in a nutshell

Install with `pip install command_runner`

The following example will work regardless of the host OS and the Python version.

```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', timeout=10)
```

## Guide to command_runner

### Setup

`pip install command_runner` or download the latest git release

### Advanced command_runner usage

#### Special exit codes

In order to keep the promise to always provide an exit_code, special exit codes have been added for the case where none is given.
Those exit codes are:

- -250 : command_runner called with incompatible arguments
- -251 : stop_on function returned True
- -252 : KeyboardInterrupt
- -253 : FileNotFoundError, OSError, IOError
- -254 : Timeout
- -255 : Any other uncaught exceptions

This allows you to use the standard exit code logic, without having to deal with various exceptions.
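For instance, here is a minimal sketch of acting on these special exit codes (the command, the timeout value and the way each code is handled are arbitrary examples):

```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', timeout=5)

if exit_code == 0:
    print('Command succeeded:', output)
elif exit_code == -254:
    print('Command timed out, partial output was:', output)
elif exit_code == -253:
    print('Command not found or OS error:', output)
else:
    print('Command failed with exit code', exit_code)
```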
#### Default encoding

command_runner has an `encoding` argument which defaults to `utf-8` for Unixes and `cp437` for Windows platforms.
Using `cp437` ensures that most `cmd.exe` output is encoded properly, including accents and special characters, on most locale systems.
Still, you can specify your own encoding for other usages, like PowerShell where `unicode_escape` is preferred.

```python
from command_runner import command_runner

command = r'C:\Windows\sysnative\WindowsPowerShell\v1.0\powershell.exe --help'
exit_code, output = command_runner(command, encoding='unicode_escape')
```

Earlier subprocess.popen implementations didn't have an encoding setting, so command_runner will deal with encoding for those.

You can also disable command_runner's internal encoding in order to get raw process output (bytes) by passing the boolean `False`.

Example:
```python
from command_runner import command_runner

exit_code, raw_output = command_runner('ping 127.0.0.1', encoding=False)
```

#### On the fly (interactive screen) output

**Note: for live output capture and threading, see stream redirection. If you want to run your application while command_runner gives back command output, the best way to go is queues / callbacks.**

command_runner can print command output to stdout on the fly, i.e. show output on screen during execution.
This is helpful when the command is long and we need to know the output while execution is ongoing.
It is also helpful in order to catch partial command output when a timeout is reached or a CTRL+C signal is received.

Example:
```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', shell=True, live_output=True)
```

Note: using live output relies on stdout pipe polling, which has slightly higher CPU usage.

#### Timeouts

**command_runner has a `timeout` argument which defaults to 3600 seconds.**
This default setting ensures commands will not block the main script execution.
Feel free to lower / raise that setting with the `timeout` argument.
Note that on timeout, command_runner kills the whole process tree that the command may have generated, even under Windows.

```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', timeout=30)
```

#### Remarks on processes

Using `shell=True` will spawn a shell which will spawn the desired child process.
Be aware that under MS Windows, no direct process tree is available.
We fixed this by walking processes during runtime.
The drawback is that orphaned processes cannot be identified this way.

#### Disabling logs / silencing

`command_runner` has its own logging system, which will log all sorts of errors.
If you need to disable its logging, just run with the `silent=True` argument.
Be aware that logging.DEBUG log levels won't be silenced, by design.
Example:
```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', silent=True)
```

If you also need to disable the logging.DEBUG level, you can run the following code, which will only let through logging.CRITICAL messages, which `command_runner` never emits:

```python
import logging
import command_runner

logging.getLogger('command_runner').setLevel(logging.CRITICAL)
```

#### Capture method

`command_runner` allows two different process output capture methods:

`method='monitor'`, which is the default:
- A thread is spawned in order to check stop conditions and kill the process if needed
- A main loop waits for the process to finish, then uses proc.communicate() to get its output
- Pros:
  - less CPU usage
  - fewer threads
- Cons:
  - cannot read partial output on KeyboardInterrupt or stop_on (still works for partial timeout output)
  - cannot use queue or callback function redirectors
  - is 0.1 seconds slower than the poller method

`method='poller'`:
- A thread is spawned and reads stdout/stderr pipes into output queues
- A poller loop reads from the output queues, checks stop conditions and kills the process if needed
- Pros:
  - Reads on the fly, allowing interactive commands (is also used with `live_output=True`)
  - Allows stdout/stderr output to be written live to callback functions, queues or files (useful when threaded)
  - is 0.1 seconds faster than the monitor method, and is the preferred method for fast batch runs
- Cons:
  - slightly higher CPU usage

Example:
```python
from command_runner import command_runner

exit_code, output = command_runner('ping 127.0.0.1', method='poller')
exit_code, output = command_runner('ping 127.0.0.1', method='monitor')
```

#### stdin stream redirection

`command_runner` allows redirecting a stream directly into the subprocess it spawns.

Example code:
```python
import sys
from command_runner import command_runner

exit_code, output = command_runner("gzip -d", stdin=sys.stdin.buffer)
print("Uncompressed data", output)
```

The above program, when run with `echo "Hello, World!" | gzip | python myscript.py`, will show the uncompressed string `Hello, World!`
You can use whatever file descriptor you want, basic ones being sys.stdin for text input and sys.stdin.buffer for binary input.

#### stdout / stderr stream redirection

command_runner can redirect stdout and/or stderr streams to different outputs:
- subprocess pipes
- /dev/null or NUL
- files
- queues
- callback functions

Unless an output redirector is given for the `stderr` argument, stderr will be redirected to the `stdout` stream.
Note that both queue and callback function redirectors require the `poller` method, and will fail if that method is not set.

Possible output redirection options are:

- subprocess pipes

By default, stdout writes into a subprocess.PIPE which is read by command_runner and returned as the `output` variable.
You may also pass any other subprocess.PIPE int values to the `stdout` or `stderr` arguments.

- /dev/null or NUL

If `stdout=False` and/or `stderr=False` argument(s) are given, command output will not be saved.
stdout/stderr streams will be redirected to `/dev/null` or `NUL` depending on the platform.
Output will always be `None`. See `split_streams` for more details on using multiple outputs.

- files

Giving the `stdout` and/or `stderr` arguments a string, `command_runner` will consider the string to be a file path where stream output will be written live.
Examples:
```python
from command_runner import command_runner

exit_code, output = command_runner('dir', stdout=r"C:/tmp/command_result", stderr=r"C:/tmp/command_error", shell=True)
```

```python
from command_runner import command_runner

exit_code, output = command_runner('dir', stdout='/tmp/stdout.log', stderr='/tmp/stderr.log', shell=True)
```

Opening a file with the wrong encoding (especially opening a CP437-encoded file on Windows as UTF-8) might end up with a UnicodeDecodeError.

- queues

Queue(s) will be filled up by command_runner.
In order to keep your program "live", we'll use the threaded version of command_runner, which is basically the same except it returns a future result instead of a tuple.

Note: try as we might, there's no good way to achieve this under Python 2.7 without using more queues, so the threaded version is only compatible with Python 3.3+.
For Python 2.7, you must create your thread and queue reader yourself (see the footnote for a Python 2.7 compatible example).

Threaded command_runner plus queue example:

```python
import queue
from command_runner import command_runner_threaded

output_queue = queue.Queue()
stream_output = ""
thread_result = command_runner_threaded('ping 127.0.0.1', shell=True, method='poller', stdout=output_queue)

read_queue = True
while read_queue:
    try:
        line = output_queue.get(timeout=0.1)
    except queue.Empty:
        pass
    else:
        if line is None:
            read_queue = False
        else:
            stream_output += line
            # ADD YOUR LIVE CODE HERE

# Now we may get exit_code and output since result has become available at this point
exit_code, output = thread_result.result()
```

You might also want to read both stdout and stderr queues. In that case, you can create a read loop just like in the following example.
Here we're reading both queues in one loop, so we need to observe a couple of conditions before stopping the loop, in order to catch all queue output:

```python
import queue
from time import sleep
from command_runner import command_runner_threaded

stdout_queue = queue.Queue()
stderr_queue = queue.Queue()
thread_result = command_runner_threaded('ping 127.0.0.1', method='poller', shell=True, stdout=stdout_queue, stderr=stderr_queue)

read_stdout = read_stderr = True
while read_stdout or read_stderr:
    try:
        stdout_line = stdout_queue.get(timeout=0.1)
    except queue.Empty:
        pass
    else:
        if stdout_line is None:
            read_stdout = False
        else:
            print('STDOUT:', stdout_line)
    try:
        stderr_line = stderr_queue.get(timeout=0.1)
    except queue.Empty:
        pass
    else:
        if stderr_line is None:
            read_stderr = False
        else:
            print('STDERR:', stderr_line)

    # ADD YOUR LIVE CODE HERE

exit_code, output = thread_result.result()
assert exit_code == 0, 'We did not succeed in running the thread'
```

- callback functions

The callback function will get one argument: a str containing the current stream reading.
It will be executed for every line that comes from the streams.

Example:
```python
from command_runner import command_runner

def callback_function(string):
    # ADD YOUR CODE HERE
    print('CALLBACK GOT:', string)

# Launch command_runner
exit_code, output = command_runner('ping 127.0.0.1', stdout=callback_function, method='poller')
```

#### stop_on

In some situations, you want a command to be aborted on some external trigger.
That's where the `stop_on` argument comes in handy.
Just pass a function to `stop_on`; as soon as the function returns True, execution will halt with exit code -251.
Example:
```python
from command_runner import command_runner

def some_function():
    # Replace this condition with your own stop logic
    return True if we_must_stop_execution else False

exit_code, output = command_runner('ping 127.0.0.1', stop_on=some_function)
```

#### Checking intervals

By default, command_runner checks timeouts and outputs every 0.05 seconds.
You can increase/decrease this interval via the `check_interval` setting, which accepts floats.

Example: `command_runner(cmd, check_interval=0.2)`

Note that lowering `check_interval` will increase CPU usage.

#### Getting current process information

`command_runner` can provide a subprocess.Popen instance of the currently running process as external data.
In order to do so, just declare a function and give it as the `process_callback` argument.

Example:
```python
from command_runner import command_runner

def show_process_info(process):
    print('My process has pid: {}'.format(process.pid))

exit_code, output = command_runner('ping 127.0.0.1', process_callback=show_process_info)
```

#### Split stdout and stderr

By default, `command_runner` returns a tuple like `(exit_code, output)` in which output contains both stdout and stderr stream outputs.
You can alter that behavior by using the argument `split_streams=True`.
In that case, `command_runner` will return a tuple like `(exit_code, stdout, stderr)`.

Example:
```python
from command_runner import command_runner

exit_code, stdout, stderr = command_runner('ping 127.0.0.1', split_streams=True)
print('exit code:', exit_code)
print('stdout', stdout)
print('stderr', stderr)
```

#### On-exit Callback

`command_runner` allows executing a callback function once it has finished its execution.
This might help when building threaded programs where a callback is needed to disable GUI elements, for example.

Example:
```python
from command_runner import command_runner

def do_something():
    print("We're done running")

exit_code, output = command_runner('ping 127.0.0.1', on_exit=do_something)
```

### Process and IO priority

`command_runner` can set its subprocess priority to 'low', 'normal' or 'high', which translate to 15, 0 and -15 niceness on Linux, and to BELOW_NORMAL_PRIORITY_CLASS, NORMAL_PRIORITY_CLASS and HIGH_PRIORITY_CLASS on Windows.
On Linux, you may also directly use priority with niceness int values.

You may also set the subprocess io priority to 'low', 'normal' or 'high'.

Example:
```python
from command_runner import command_runner

exit_code, output = command_runner('some_intensive_process', priority='low', io_priority='high')
```

#### Other arguments

`command_runner` takes **any** argument that `subprocess.Popen()` would take, as shown in the sketch below.
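For instance, here is a minimal sketch that passes the regular `subprocess.Popen()` arguments `cwd` and `env` straight through `command_runner` (the environment variable name and the paths are arbitrary examples):

```python
import os
from command_runner import command_runner

# cwd and env are plain subprocess.Popen arguments, they are handed over untouched
custom_env = os.environ.copy()
custom_env['MY_VARIABLE'] = 'some_value'  # arbitrary example variable

exit_code, output = command_runner(
    'echo %MY_VARIABLE%' if os.name == 'nt' else 'echo $MY_VARIABLE',
    shell=True,
    cwd=os.path.expanduser('~'),
    env=custom_env,
)
print(output)
```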
It also uses the following standard arguments: - command (str/list): The command, doesn't need to be a list, a simple string works - valid_exit_codes (list): List of exit codes which won't trigger error logs - timeout (int): seconds before a process tree is killed forcefully, defaults to 3600 - shell (bool): Shall we use the cmd.exe or /usr/bin/env shell for command execution, defaults to False - encoding (str/bool): Which text encoding the command produces, defaults to cp437 under Windows and utf-8 under Linux - stdin (sys.stdin/int): Optional stdin file descriptor, sent to the process command_runner spawns - stdout (str/queue.Queue/function/False/None): Optional path to filename where to dump stdout, or queue where to write stdout, or callback function which is called when stdout has output - stderr (str/queue.Queue/function/False/None): Optional path to filename where to dump stderr, or queue where to write stderr, or callback function which is called when stderr has output - no_close_queues (bool): Normally, command_runner sends None to stdout / stderr queues when process is finished. This behavior can be disabled allowing to reuse those queues for other functions wrapping command_runner - windows_no_window (bool): Shall a command create a console window (MS Windows only), defaults to False - live_output (bool): Print output to stdout while executing command, defaults to False - method (str): Accepts 'poller' or 'monitor' stdout capture and timeout monitoring methods - check interval (float): Defaults to 0.05 seconds, which is the time between stream readings and timeout checks - stop_on (function): Optional function that when returns True stops command_runner execution - on_exit (function): Optional function that gets executed when command_runner has finished (callback function) - process_callback (function): Optional function that will take command_runner spawned process as argument, in order to deal with process info outside of command_runner - split_streams (bool): Split stdout and stderr into two separate results - silent (bool): Allows to disable command_runner's internal logs, except for logging.DEBUG levels which for obvious reasons should never be silenced - priority (str): Allows to set CPU bound process priority (takes 'low', 'normal' or 'high' parameter) - io_priority (str): Allows to set IO priority for process (takes 'low', 'normal' or 'high' parameter) - close_fds (bool): Like Popen, defaults to True on Linux and False on Windows - universal_newlines (bool): Like Popen, defaults to False - creation_flags (int): Like Popen, defaults to 0 - bufsize (int): Like Popen, defaults to 16384. Line buffering (bufsize=1) is deprecated since Python 3.7 **Note that ALL other subprocess.Popen arguments are supported, since they are directly passed to subprocess.** ## UAC Elevation / sudo elevation command_runner package allowing privilege elevation. Becoming an admin is fairly easy with command_runner.elevate You only have to import the elevate module, and then launch your main function with the elevate function. ### elevation In a nutshell ```python from command_runner.elevate import elevate def main(): """My main function that should be elevated""" print("Who's the administrator, now ?") if __name__ == '__main__': elevate(main) ``` elevate function handles arguments (positional and keyword arguments). 
`elevate(main, arg, arg2, kw=somearg)` will call `main(arg, arg2, kw=somearg)` ### Advanced elevate usage #### is_admin() function The elevate module has a nifty is_admin() function that returns a boolean according to your current root/administrator privileges. Usage: ```python from command_runner.elevate import is_admin print('Am I an admin ? %s' % is_admin()) ``` #### sudo elevation Initially designed for Windows UAC, command_runner.elevate can also elevate privileges on Linux, using the sudo command. This is mainly designed for PyInstaller / Nuitka executables, as it's really not safe to allow automatic privilege elevation of a Python interpreter. Example for a binary in `/usr/local/bin/my_compiled_python_binary`: You'll have to allow this file to be run with sudo without a password prompt. This can be achieved in the `/etc/sudoers` file. Example for Redhat / Rocky Linux, where adding the following line will allow the elevation process to succeed without a password: ``` someuser ALL= NOPASSWD:/usr/local/bin/my_compiled_python_binary ``` ## Footnotes #### command_runner Python 2.7 compatible queue reader The following example is a Python 2.7 compatible threaded implementation that reads the stdout / stderr queue in a thread. This only exists for compatibility reasons. ```python import queue import threading from command_runner import command_runner def read_queue(output_queue): """ Read the queue in a thread Our problem here is that the thread can live forever if we don't check a global value, which is...well ugly """ stream_output = "" read_queue = True while read_queue: try: line = output_queue.get(timeout=1) except queue.Empty: pass else: # The queue reading can be stopped once 'None' is received. if line is None: read_queue = False else: stream_output += line # ADD YOUR LIVE CODE HERE # Create a new queue that command_runner will fill up output_queue = queue.Queue() # Create a thread of read_queue() in order to read the queue while command_runner executes the command; note that args must be a tuple, hence the trailing comma read_thread = threading.Thread( target=read_queue, args=(output_queue,) ) read_thread.daemon = True # thread dies with the program read_thread.start() # Launch command_runner, which will be blocking.
Your live code goes directly into the threaded function exit_code, output = command_runner('ping 127.0.0.1', stdout=output_queue, method='poller') ``` ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704829527.0 command_runner-1.6.0/command_runner.egg-info/SOURCES.txt0000666000000000000000000000045114547321127020052 0ustar00LICENSE README.md setup.py command_runner/__init__.py command_runner/elevate.py command_runner.egg-info/PKG-INFO command_runner.egg-info/SOURCES.txt command_runner.egg-info/dependency_links.txt command_runner.egg-info/requires.txt command_runner.egg-info/top_level.txt tests/test_command_runner.py././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704829527.0 command_runner-1.6.0/command_runner.egg-info/dependency_links.txt0000666000000000000000000000000114547321127022234 0ustar00 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704829527.0 command_runner-1.6.0/command_runner.egg-info/requires.txt0000666000000000000000000000001614547321127020563 0ustar00psutil>=5.6.0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704829527.0 command_runner-1.6.0/command_runner.egg-info/top_level.txt0000666000000000000000000000001714547321127020716 0ustar00command_runner ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1704829528.347189 command_runner-1.6.0/setup.cfg0000666000000000000000000000005214547321130013275 0ustar00[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1702302043.0 command_runner-1.6.0/setup.py0000666000000000000000000001033114535610533013174 0ustar00#! /usr/bin/env python # -*- coding: utf-8 -*- # # This file is part of command_runner package __intname__ = "command_runner.setup" __author__ = "Orsiris de Jong" __copyright__ = "Copyright (C) 2021-2022 Orsiris de Jong" __licence__ = "BSD 3 Clause" __build__ = "2022092801" PACKAGE_NAME = "command_runner" DESCRIPTION = "Platform agnostic command and shell execution tool, also allows UAC/sudo privilege elevation" import sys import os import pkg_resources import setuptools def _read_file(filename): here = os.path.abspath(os.path.dirname(__file__)) if sys.version_info[0] < 3: # With python 2.7, open has no encoding parameter, resulting in TypeError # Fix with io.open (slow but works) from io import open as io_open try: with io_open( os.path.join(here, filename), "r", encoding="utf-8" ) as file_handle: return file_handle.read() except IOError: # Ugly fix for missing requirements.txt file when installing via pip under Python 2 return "psutil\n" else: with open(os.path.join(here, filename), "r", encoding="utf-8") as file_handle: return file_handle.read() def get_metadata(package_file): """ Read metadata from package file """ _metadata = {} for line in _read_file(package_file).splitlines(): if line.startswith("__version__") or line.startswith("__description__"): delim = "=" _metadata[line.split(delim)[0].strip().strip("__")] = ( line.split(delim)[1].strip().strip("'\"") ) return _metadata def parse_requirements(filename): """ There is a parse_requirements function in pip but it keeps changing import path Let's build a simple one """ try: requirements_txt = _read_file(filename) install_requires = [ str(requirement) for requirement in pkg_resources.parse_requirements(requirements_txt) ] return install_requires except OSError: print( 'WARNING: No requirements.txt file found as "{}". 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704829527.0 command_runner-1.6.0/command_runner.egg-info/SOURCES.txt0000666000000000000000000000045114547321127020052 0ustar00LICENSE
README.md
setup.py
command_runner/__init__.py
command_runner/elevate.py
command_runner.egg-info/PKG-INFO
command_runner.egg-info/SOURCES.txt
command_runner.egg-info/dependency_links.txt
command_runner.egg-info/requires.txt
command_runner.egg-info/top_level.txt
tests/test_command_runner.py
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704829527.0 command_runner-1.6.0/command_runner.egg-info/dependency_links.txt0000666000000000000000000000000114547321127022234 0ustar00
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704829527.0 command_runner-1.6.0/command_runner.egg-info/requires.txt0000666000000000000000000000001614547321127020563 0ustar00psutil>=5.6.0
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1704829527.0 command_runner-1.6.0/command_runner.egg-info/top_level.txt0000666000000000000000000000001714547321127020716 0ustar00command_runner
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1704829528.347189 command_runner-1.6.0/setup.cfg0000666000000000000000000000005214547321130013275 0ustar00[egg_info]
tag_build = 
tag_date = 0
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1702302043.0 command_runner-1.6.0/setup.py0000666000000000000000000001033114535610533013174 0ustar00#! /usr/bin/env python
# -*- coding: utf-8 -*-
#
# This file is part of command_runner package


__intname__ = "command_runner.setup"
__author__ = "Orsiris de Jong"
__copyright__ = "Copyright (C) 2021-2022 Orsiris de Jong"
__licence__ = "BSD 3 Clause"
__build__ = "2022092801"

PACKAGE_NAME = "command_runner"
DESCRIPTION = "Platform agnostic command and shell execution tool, also allows UAC/sudo privilege elevation"

import sys
import os
import pkg_resources
import setuptools


def _read_file(filename):
    here = os.path.abspath(os.path.dirname(__file__))
    if sys.version_info[0] < 3:
        # With python 2.7, open has no encoding parameter, resulting in TypeError
        # Fix with io.open (slow but works)
        from io import open as io_open

        try:
            with io_open(
                os.path.join(here, filename), "r", encoding="utf-8"
            ) as file_handle:
                return file_handle.read()
        except IOError:
            # Ugly fix for missing requirements.txt file when installing via pip under Python 2
            return "psutil\n"
    else:
        with open(os.path.join(here, filename), "r", encoding="utf-8") as file_handle:
            return file_handle.read()


def get_metadata(package_file):
    """
    Read metadata from package file
    """
    _metadata = {}

    for line in _read_file(package_file).splitlines():
        if line.startswith("__version__") or line.startswith("__description__"):
            delim = "="
            _metadata[line.split(delim)[0].strip().strip("__")] = (
                line.split(delim)[1].strip().strip("'\"")
            )
    return _metadata


def parse_requirements(filename):
    """
    There is a parse_requirements function in pip but it keeps changing import path
    Let's build a simple one
    """
    try:
        requirements_txt = _read_file(filename)
        install_requires = [
            str(requirement)
            for requirement in pkg_resources.parse_requirements(requirements_txt)
        ]
        return install_requires
    except OSError:
        print(
            'WARNING: No requirements.txt file found as "{}". Please check path or create an empty one'.format(
                filename
            )
        )


package_path = os.path.abspath(PACKAGE_NAME)
package_file = os.path.join(package_path, "__init__.py")
metadata = get_metadata(package_file)
requirements = parse_requirements(os.path.join(package_path, "requirements.txt"))
long_description = _read_file("README.md")

setuptools.setup(
    name=PACKAGE_NAME,
    # We may use find_packages in order to not specify each package manually
    # packages = ['command_runner'],
    packages=setuptools.find_packages(),
    version=metadata["version"],
    install_requires=requirements,
    classifiers=[
        # command_runner is mature
        "Development Status :: 5 - Production/Stable",
        "Intended Audience :: Developers",
        "Topic :: Software Development",
        "Topic :: System",
        "Topic :: System :: Operating System",
        "Topic :: System :: Shells",
        "Programming Language :: Python",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: Implementation :: CPython",
        "Programming Language :: Python :: Implementation :: PyPy",
        "Operating System :: POSIX :: Linux",
        "Operating System :: POSIX :: BSD :: FreeBSD",
        "Operating System :: POSIX :: BSD :: NetBSD",
        "Operating System :: POSIX :: BSD :: OpenBSD",
        "Operating System :: Microsoft",
        "Operating System :: Microsoft :: Windows",
        "License :: OSI Approved :: BSD License",
    ],
    description=DESCRIPTION,
    license="BSD",
    author="NetInvent - Orsiris de Jong",
    author_email="contact@netinvent.fr",
    url="https://github.com/netinvent/command_runner",
    keywords=[
        "shell",
        "execution",
        "subprocess",
        "check_output",
        "wrapper",
        "uac",
        "sudo",
        "elevate",
        "privilege",
    ],
    long_description=long_description,
    long_description_content_type="text/markdown",
    python_requires=">=2.7",
)
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1704829528.3418603 command_runner-1.6.0/tests/0000777000000000000000000000000014547321130012621 5ustar00././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1703781944.0 command_runner-1.6.0/tests/test_command_runner.py0000666000000000000000000007265114543323070017255 0ustar00#!
/usr/bin/env python # -*- coding: utf-8 -*- # # This file is part of command_runner module """ command_runner is a quick tool to launch commands from Python, get exit code and output, and handle most errors that may happen Versioning semantics: Major version: backward compatibility breaking changes Minor version: New functionality Patch version: Backwards compatible bug fixes """ __intname__ = 'command_runner_tests' __author__ = 'Orsiris de Jong' __copyright__ = 'Copyright (C) 2015-2023 Orsiris de Jong' __licence__ = 'BSD 3 Clause' __build__ = '2023122801' import sys import os import platform import re import threading import logging try: from command_runner import * except ImportError: # would be ModuleNotFoundError in Python 3+ # In case we run tests without actually having installed command_runner sys.path.insert(0, os.path.abspath(os.path.join(__file__, os.pardir, os.pardir))) from command_runner import * # Python 2.7 compat where datetime.now() does not have .timestamp() method if sys.version_info[0] < 3 or sys.version_info[1] < 4: # python version < 3.3 import time def timestamp(date): return time.mktime(date.timetuple()) else: def timestamp(date): return date.timestamp() # We need a logging unit here logger = logging.getLogger() logger.setLevel(logging.ERROR) handler = logging.StreamHandler(sys.stdout) handler.setLevel(logging.ERROR) formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') handler.setFormatter(formatter) logger.addHandler(handler) streams = ['stdout', 'stderr'] methods = ['monitor', 'poller'] TEST_FILENAME = 'README.md' if os.name == 'nt': ENCODING = 'cp437' PING_CMD = 'ping 127.0.0.1 -n 4' PING_CMD_REDIR = PING_CMD + ' 1>&2' # Make sure we run the failure command first so end result is okay PING_CMD_AND_FAILURE = 'ping 0.0.0.0 -n 2 1>&2 & ping 127.0.0.1 -n 2' PING_FAILURE = 'ping 0.0.0.0 -n 2 1>&2' PRINT_FILE_CMD = 'type {}'.format(TEST_FILENAME) else: ENCODING = 'utf-8' PING_CMD = ['ping', '-c', '4', '127.0.0.1'] PING_CMD_REDIR = 'ping -c 4 127.0.0.1 1>&2' PING_CMD_AND_FAILURE = 'ping -c 2 0.0.0.0 1>&2; ping -c 2 127.0.0.1' PRINT_FILE_CMD = 'cat {}'.format(TEST_FILENAME) PING_FAILURE = 'ping -c 2 0.0.0.0 1>&2' ELAPSED_TIME = timestamp(datetime.now()) PROCESS_ID = None STREAM_OUTPUT = "" PROC = None ON_EXIT_CALLED = False def reset_elapsed_time(): global ELAPSED_TIME ELAPSED_TIME = timestamp(datetime.now()) def get_elapsed_time(): return timestamp(datetime.now()) - ELAPSED_TIME def running_on_github_actions(): """ This is set in github actions workflow with env: RUNNING_ON_GITHUB_ACTIONS: true """ return os.environ.get("RUNNING_ON_GITHUB_ACTIONS") == "true" # bash 'true' def is_pypy(): """ Checks interpreter """ return True if platform.python_implementation().lower() == "pypy" else False def test_standard_ping_with_encoding(): """ Test command_runner with a standard ping and encoding parameter """ for method in methods: print('method={}'.format(method)) exit_code, output = command_runner(PING_CMD, encoding=ENCODING, method=method) print(output) assert exit_code == 0, 'Exit code should be 0 for ping command with method {}'.format(method) def test_standard_ping_with_default_encoding(): """ Without encoding, iter(stream.readline, '') will hang since the expected sentinel char would be b'': This could only happen on python <3.6 since command_runner decides to use an encoding anyway """ for method in methods: exit_code, output = command_runner(PING_CMD, encoding=None, method=method) print(output) assert exit_code == 0, 'Exit code should be 0 
for ping command with method {}'.format(method) def test_standard_ping_with_encoding_disabled(): """ Without encoding disabled, we should have binary output """ for method in methods: exit_code, output = command_runner(PING_CMD, encoding=False, method=method) print(output) assert exit_code == 0, 'Exit code should be 0 for ping command with method {}'.format(method) assert isinstance(output, bytes), 'Output should be binary.' def test_timeout(): """ Test command_runner with a timeout """ for method in methods: begin_time = datetime.now() exit_code, output = command_runner(PING_CMD, timeout=1, method=method) print(output) end_time = datetime.now() assert (end_time - begin_time).total_seconds() < 2, 'It took more than 2 seconds for a timeout=1 command to finish with method {}'.format(method) assert exit_code == -254, 'Exit code should be -254 on timeout with method {}'.format(method) assert 'Timeout' in output, 'Output should have timeout with method {}'.format(method) def test_timeout_with_subtree_killing(): """ Launch a subtree of long commands and see if timeout actually kills them in time """ if os.name != 'nt': cmd = 'echo "test" && sleep 5 && echo "done"' else: cmd = 'echo test && {} && echo done'.format(PING_CMD) for method in methods: begin_time = datetime.now() exit_code, output = command_runner(cmd, shell=True, timeout=1, method=method) print(output) end_time = datetime.now() elapsed_time = (end_time - begin_time).total_seconds() assert elapsed_time < 4, 'It took more than 2 seconds for a timeout=1 command to finish with method {}'.format(method) assert exit_code == -254, 'Exit code should be -254 on timeout with method {}'.format(method) assert 'Timeout' in output, 'Output should have timeout with method {}'.format(method) def test_no_timeout(): """ Test with setting timeout=None """ for method in methods: exit_code, output = command_runner(PING_CMD, timeout=None, method=method) print(output) assert exit_code == 0, 'Without timeout, command should have run with method {}'.format(method) def test_live_output(): """ Test command_runner with live output to stdout """ for method in methods: exit_code, _ = command_runner(PING_CMD, stdout=PIPE, encoding=ENCODING, method=method) assert exit_code == 0, 'Exit code should be 0 for ping command with method {}'.format(method) def test_not_found(): """ Test command_runner with an unexisting command """ for method in methods: print('The following command should fail with method {}'.format(method)) exit_code, output = command_runner('unknown_command_nowhere_to_be_found_1234') assert exit_code == -253, 'Unknown command should trigger a -253 exit code with method {}'.format(method) assert "failed" in output, 'Error code -253 should be Command x failed, reason' def test_file_output(): """ Test command_runner with file output instead of stdout """ for method in methods: stdout_filename = 'temp.test' stderr_filename = 'temp.test.err' print('The following command should timeout') exit_code, output = command_runner(PING_CMD, timeout=1, stdout=stdout_filename, stderr=stderr_filename, method=method) assert os.path.isfile(stdout_filename), 'Log file does not exist with method {}'.format(method) # We don't have encoding argument in Python 2, yet we need it for PyPy if sys.version_info[0] < 3: with open(stdout_filename, 'r') as file_handle: output = file_handle.read() else: with open(stdout_filename, 'r', encoding=ENCODING) as file_handle: output = file_handle.read() assert os.path.isfile(stderr_filename), 'stderr log file does not exist with method 
{}'.format(method) assert exit_code == -254, 'Exit code should be -254 for timeouts with method {}'.format(method) assert 'Timeout' in output, 'Output should have timeout with method {}'.format(method) # arbitrary time to make sure file handle was closed sleep(3) os.remove(stdout_filename) os.remove(stderr_filename) def test_valid_exit_codes(): """ Test command_runner with a failed ping but that should not trigger an error # WIP We could improve tests here by capturing logs """ for method in methods: exit_code, _ = command_runner('ping nonexistent_host', shell=True, valid_exit_codes=[0, 1, 2], method=method) assert exit_code in [0, 1, 2], 'Exit code not in valid list with method {}'.format(method) exit_code, _ = command_runner('ping nonexistent_host', shell=True, valid_exit_codes=True, method=method) assert exit_code != 0, 'Exit code should not be equal to 0' exit_code, _ = command_runner('ping nonexistent_host', shell=True, valid_exit_codes=False, method=method) assert exit_code != 0, 'Exit code should not be equal to 0' exit_code, _ = command_runner('ping nonexistent_host', shell=True, valid_exit_codes=None, method=method) assert exit_code != 0, 'Exit code should not be equal to 0' def test_unix_only_split_command(): """ This test is specifically written when command_runner receives a str command instead of a list on unix """ if os.name == 'posix': for method in methods: exit_code, _ = command_runner(' '.join(PING_CMD), method=method) assert exit_code == 0, 'Non splitted command should not trigger an error with method {}'.format(method) def test_create_no_window(): """ Only used on windows, when we don't want to create a cmd visible windows """ for method in methods: exit_code, _ = command_runner(PING_CMD, windows_no_window=True, method=method) assert exit_code == 0, 'Should have worked too with method {}'.format(method) def test_read_file(): """ Read a couple of times the same file to be sure we don't get garbage from _read_pipe() This is a random failure detection test """ # We don't have encoding argument in Python 2, yet we need it for PyPy if sys.version_info[0] < 3: with open(TEST_FILENAME, 'r') as file: file_content = file.read() else: with open(TEST_FILENAME, 'r', encoding=ENCODING) as file: file_content = file.read() for method in methods: # pypy is quite slow with poller method on github actions. # Lets lower rounds max_rounds = 100 if is_pypy() else 1000 print("\nSetting up test_read_file for {} rounds".format(max_rounds)) for round in range(0, max_rounds): print('Comparaison round {} with method {}'.format(round, method)) exit_code, output = command_runner(PRINT_FILE_CMD, shell=True, method=method) if os.name == 'nt': output = output.replace('\r\n', '\n') assert exit_code == 0, 'Did not succeed to read {}, method={}, exit_code: {}, output: {}'.format(TEST_FILENAME, method, exit_code, output) assert file_content == output, 'Round {} File content and output are not identical, method={}'.format(round, method) def test_stop_on_argument(): expected_output_regex = "Command .* was stopped because stop_on function returned True. 
Original output was:" def stop_on(): """ Simple function that returns True two seconds after reset_elapsed_time() has been called """ if get_elapsed_time() > 2: return True for method in methods: reset_elapsed_time() print('method={}'.format(method)) exit_code, output = command_runner(PING_CMD, stop_on=stop_on, method=method) # On github actions only with Python 2.7.18, we sometimes get -251 failed because of OS: [Error 5] Access is denied # when os.kill(pid) is called in kill_childs_mod # On my windows platform using the same Python version, it works... # well nothing I can debug on github actions if running_on_github_actions() and os.name == 'nt' and sys.version_info[0] < 3: assert exit_code in [-253, -251], 'Not as expected, we should get a permission error on github actions windows platform' else: assert exit_code == -251, 'Monitor mode should have been stopped by stop_on with exit_code -251. method={}, exit_code: {}, output: {}'.format(method, exit_code, output) assert re.match(expected_output_regex, output, re.MULTILINE) is not None, 'stop_on output is bogus. method={}, exit_code: {}, output: {}'.format(method, exit_code, output) def test_process_callback(): def callback(process_id): global PROCESS_ID PROCESS_ID = process_id for method in methods: exit_code, output = command_runner(PING_CMD, method=method, process_callback=callback) assert exit_code == 0, 'Wrong exit code. method={}, exit_code: {}, output: {}'.format(method, exit_code, output) assert isinstance(PROCESS_ID, subprocess.Popen), 'callback did not work properly. PROCESS_ID="{}"'.format(PROCESS_ID) def test_stream_callback(): global STREAM_OUTPUT def stream_callback(string): global STREAM_OUTPUT STREAM_OUTPUT += string print("CALLBACK: ", string) for stream in streams: stream_args = {stream: stream_callback} for method in methods: STREAM_OUTPUT = "" try: print('Method={}, stream={}, output=callback'.format(method, stream)) exit_code, output = command_runner(PING_CMD_REDIR, shell=True, method=method, **stream_args) except ValueError: if method == 'poller': assert False, 'ValueError should not be produced in poller mode.' if method == 'poller': assert exit_code == 0, 'Wrong exit code. method={}, exit_code: {}, output: {}'.format(method, exit_code, output) # Since we redirect STDOUT to STDERR assert STREAM_OUTPUT == output, 'Callback stream should contain same result as output' else: assert exit_code == -250, 'stream_callback exit_code is bogus. method={}, exit_code: {}, output: {}'.format(method, exit_code, output) def test_queue_output(): """ Thread command runner and get it's output queue """ if sys.version_info[0] < 3: print("Queue test uses concurrent futures. Won't run on python 2.7, sorry.") return # pypy is quite slow with poller method on github actions. 
# Lets lower rounds max_rounds = 100 if is_pypy() else 1000 print("\nSetting up test_read_file for {} rounds".format(max_rounds)) for i in range(0, max_rounds): for stream in streams: for method in methods: if method == 'monitor' and i > 1: # Dont bother to repeat the test for monitor mode more than once continue output_queue = queue.Queue() stream_output = "" stream_args = {stream: output_queue} print('Round={}, Method={}, stream={}, output=queue'.format(i, method, stream)) thread_result = command_runner_threaded(PRINT_FILE_CMD, shell=True, method=method, **stream_args) read_queue = True while read_queue: try: line = output_queue.get(timeout=0.1) except queue.Empty: pass else: if line is None: break else: stream_output += line exit_code, output = thread_result.result() if method == 'poller': assert exit_code == 0, 'Wrong exit code. method={}, exit_code: {}, output: {}'.format(method, exit_code, output) # Since we redirect STDOUT to STDERR if stream == 'stdout': assert stream_output == output, 'stdout queue output should contain same result as output' if stream == 'stderr': assert len(stream_output) == 0, 'stderr queue output should be empty' else: assert exit_code == -250, 'stream_queue exit_code is bogus. method={}, exit_code: {}, output: {}'.format( method, exit_code, output) def test_queue_non_threaded_command_runner(): """ Test case for Python 2.7 without proper threading return values """ def read_queue(output_queue, stream_output): """ Read the queue as thread Our problem here is that the thread can live forever if we don't check a global value, which is...well ugly """ read_queue = True while read_queue: try: line = output_queue.get(timeout=1) except queue.Empty: pass else: # The queue reading can be stopped once 'None' is received. if line is None: read_queue = False else: stream_output['value'] += line # ADD YOUR LIVE CODE HERE return stream_output for i in range(0, 20): for cmd in [PING_CMD, PRINT_FILE_CMD]: if cmd == PRINT_FILE_CMD: shell_args = {'shell': True} else: shell_args = {'shell': False} # Create a new queue that command_runner will fill up output_queue = queue.Queue() stream_output = {'value': ''} # Create a thread of read_queue() in order to read the queue while command_runner executes the command read_thread = threading.Thread( target=read_queue, args=(output_queue, stream_output) ) read_thread.daemon = True # thread dies with the program read_thread.start() # Launch command_runner print('Round={}, cmd={}'.format(i, cmd)) exit_code, output = command_runner(cmd, stdout=output_queue, method='poller', **shell_args) assert exit_code == 0, 'PING_CMD Exit code is not okay. exit_code={}, output={}'.format(exit_code, output) # Wait until we are sure that we emptied the queue while not output_queue.empty(): sleep(.1) assert stream_output['value'] == output, 'Output should be identical' def test_double_queue_threaded_stop(): """ Use both stdout and stderr queues and make them stop """ if sys.version_info[0] < 3: print("Queue test uses concurrent futures. 
Won't run on python 2.7, sorry.") return stdout_queue = queue.Queue() stderr_queue = queue.Queue() thread_result = command_runner_threaded( PING_CMD_AND_FAILURE, method='poller', shell=True, stdout=stdout_queue, stderr=stderr_queue) print('Begin to read queues') read_stdout = read_stderr = True while read_stdout or read_stderr: try: stdout_line = stdout_queue.get(timeout=0.1) except queue.Empty: pass else: if stdout_line is None: read_stdout = False print('stdout is finished') else: print('STDOUT:', stdout_line) try: stderr_line = stderr_queue.get(timeout=0.1) except queue.Empty: pass else: if stderr_line is None: read_stderr = False print('stderr is finished') else: print('STDERR:', stderr_line) while True: done = thread_result.done() print('Thread is done:', done) if done: break sleep(1) exit_code, _ = thread_result.result() assert exit_code == 0, 'We did not succeed in running the thread' def test_deferred_command(): """ Using deferred_command in order to run a command after a given timespan """ test_filename = 'deferred_test_file' if os.path.isfile(test_filename): os.remove(test_filename) deferred_command('echo test > {}'.format(test_filename), defer_time=5) assert os.path.isfile(test_filename) is False, 'File should not exist yet' sleep(6) assert os.path.isfile(test_filename) is True, 'File should exist now' os.remove(test_filename) def test_powershell_output(): # Don't bother to test powershell on other platforms than windows if os.name != 'nt': return """ Parts from windows_tools.powershell are used here """ powershell_interpreter = None # Try to guess powershell path if no valid path given interpreter_executable = "powershell.exe" for syspath in ["sysnative", "system32"]: try: # Let's try native powershell (64 bit) first or else # Import-Module may fail when running 32 bit powershell on 64 bit arch best_guess = os.path.join( os.environ.get("SYSTEMROOT", "C:"), syspath, "WindowsPowerShell", "v1.0", interpreter_executable, ) if os.path.isfile(best_guess): powershell_interpreter = best_guess break except KeyError: pass if powershell_interpreter is None: try: ps_paths = os.path.dirname(os.environ["PSModulePath"]).split(";") for ps_path in ps_paths: if ps_path.endswith("Modules"): ps_path = ps_path.strip("Modules") possible_ps_path = os.path.join(ps_path, interpreter_executable) if os.path.isfile(possible_ps_path): powershell_interpreter = possible_ps_path break except KeyError: pass if powershell_interpreter is None: raise OSError("Could not find any valid powershell interpreter") # Do not add -NoProfile so we don't end up in a path we're not supposed to command = powershell_interpreter + " -NonInteractive -NoLogo %s" % PING_CMD exit_code, output = command_runner(command, encoding="unicode_escape") print('powershell: ', exit_code, output) assert exit_code == 0, 'Powershell execution failed.' 
def test_null_redir(): for method in methods: print('method={}'.format(method)) exit_code, output = command_runner(PING_CMD, stdout=False) print(exit_code) print('OUTPUT:', output) assert output is None, 'We should not have any output here' exit_code, output = command_runner(PING_CMD_AND_FAILURE, shell=True, stderr=False) print(exit_code) print('OUTPUT:', output) assert '0.0.0.0' not in output, 'We should not get error output from here' for method in methods: print('method={}'.format(method)) exit_code, stdout, stderr = command_runner(PING_CMD, split_streams=True, stdout=False, stderr=False) print(exit_code) print('STDOUT:', stdout) print('STDERR:', stderr) assert stdout is None, 'We should not have any output from stdout' assert stderr is None, 'We should not have any output from stderr' exit_code, stdout, stderr = command_runner(PING_CMD_AND_FAILURE, shell=True, split_streams=True, stdout=False, stderr=False) print(exit_code) print('STDOUT:', stdout) print('STDERR:', stderr) assert stdout is None, 'We should not have any output from stdout' assert stderr is None, 'We should not have any output from stderr' def test_split_streams(): """ Test replacing output with stdout and stderr output """ for cmd in [PING_CMD, PING_CMD_AND_FAILURE]: for method in methods: print('cmd={}, method={}'.format(cmd, method)) try: exit_code, _ = command_runner(cmd, method=method, shell=True, split_streams=True) except ValueError: # Should generate a valueError pass except Exception as exc: assert False, 'We should have too many values to unpack here: {}'.format(exc) exit_code, stdout, stderr = command_runner(cmd, method=method, shell=True, split_streams=True) print('exit_code:', exit_code) print('STDOUT:', stdout) print('STDERR:', stderr) if cmd == PING_CMD: assert exit_code == 0, 'Exit code should be 0 for ping command with method {}'.format(method) assert '127.0.0.1' in stdout assert stderr is None if cmd == PING_CMD_AND_FAILURE: assert exit_code == 0, 'Exit code should be 0 for ping command with method {}'.format(method) assert '127.0.0.1' in stdout assert '0.0.0.0' in stderr def test_on_exit(): def on_exit(): global ON_EXIT_CALLED ON_EXIT_CALLED = True exit_code, _ = command_runner(PING_CMD, on_exit=on_exit) assert exit_code == 0, 'Exit code is not null' assert ON_EXIT_CALLED is True, 'On exit was never called' def test_priority(): def check_nice(process): niceness = os.nice(process.pid) if os.name == 'nt': assert niceness == 16384, 'Process niceness not properly set: {}'.format(niceness) else: assert niceness == 15, 'Process niceness not properly set: {}'.format(niceness) print('Nice !') def command_runner_thread(): return command_runner_threaded(PING_CMD, priority='low', io_priority='low', process_callback=check_nice) thread = threading.Thread( target=command_runner_thread, args=() ) thread.daemon = True # thread dies with the program thread.start() def test_no_close_queues(): """ Test no_close_queues """ if sys.version_info[0] < 3: print("Queue test uses concurrent futures. 
Won't run on python 2.7, sorry.") return stdout_queue = queue.Queue() stderr_queue = queue.Queue() thread_result = command_runner_threaded( PING_CMD_AND_FAILURE, method='poller', shell=True, stdout=stdout_queue, stderr=stderr_queue, no_close_queues=True) print('Begin to read queues') read_stdout = read_stderr = True wait_period = 50 # let's have 100 rounds of 2x timeout 0.1s = 10 seconds, which should be enough for exec to terminate while read_stdout or read_stderr: try: stdout_line = stdout_queue.get(timeout=0.1) except queue.Empty: pass else: if stdout_line is None: assert False, "STDOUT queue has been closed with no_close_queues" else: print('STDOUT:', stdout_line) try: stderr_line = stderr_queue.get(timeout=0.1) except queue.Empty: pass else: if stderr_line is None: assert False, "STDOUT queue has been closed with no_close_queues" else: print('STDERR:', stderr_line) wait_period -= 1 if wait_period < 1: break while True: done = thread_result.done() print('Thread is done:', done) if done: break sleep(1) exit_code, _ = thread_result.result() assert exit_code == 0, 'We did not succeed in running the thread' if __name__ == "__main__": print("Example code for %s, %s" % (__intname__, __build__)) test_standard_ping_with_encoding() test_standard_ping_with_default_encoding() test_standard_ping_with_encoding_disabled() test_timeout() test_timeout_with_subtree_killing() test_no_timeout() test_live_output() test_not_found() test_file_output() test_valid_exit_codes() test_unix_only_split_command() test_create_no_window() test_read_file() test_stop_on_argument() test_process_callback() test_stream_callback() test_queue_output() test_queue_non_threaded_command_runner() test_double_queue_threaded_stop() test_deferred_command() test_powershell_output() test_null_redir() test_split_streams() test_on_exit() test_priority() test_no_close_queues()