4.2.5 (2022-12-23)
------------------

- Fixed a bug where the XML-RPC method ``supervisor.startProcess()`` would
  return 500 Internal Server Error instead of an XML-RPC fault response if
  the command could not be parsed.  Patch by Julien Le Cléach.

- Fixed a bug on Python 2.7 where a ``UnicodeDecodeError`` may have occurred
  when using the web interface.  Patch by Vinay Sajip.

- Removed use of ``urllib.parse`` functions ``splithost``, ``splitport``, and
  ``splittype`` deprecated in Python 3.8.

- Removed use of ``asynchat`` and ``asyncore`` deprecated in Python 3.10.

- The return value of the XML-RPC method ``supervisor.getAllConfigInfo()``
  now includes the ``directory``, ``uid``, and ``serverurl`` of the program.
  Patch by Yellmean.

- If a subprocess exits with an unexpected exit code (one not listed in
  ``exitcodes=`` in a ``[program:x]`` section) then the exit will now be
  logged at the ``WARN`` level instead of ``INFO``.  Patch by Precy Lee.

- ``supervisorctl shutdown`` now shows an error message if an argument is
  given.

- File descriptors are now closed using the faster ``os.closerange()``
  instead of calling ``os.close()`` in a loop.  Patch by tyong920.

4.2.4 (2021-12-30)
------------------

- Fixed a bug where the ``--identifier`` command line argument was ignored.
  It was broken since at least 3.0a7 (released in 2009) and probably
  earlier.  Patch by Julien Le Cléach.
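The 4.2.5 file-descriptor change above can be illustrated with a small
standalone sketch (not Supervisor's actual code) contrasting the old per-fd
loop with the single ``os.closerange()`` call:

```python
import os

def close_fds_loop(start, limit):
    # Old approach: close each descriptor individually, one call per fd.
    for fd in range(start, limit):
        try:
            os.close(fd)
        except OSError:
            pass  # fd was not open; ignore

def close_fds_range(start, limit):
    # New approach: one call closes the whole half-open range [start, limit).
    os.closerange(start, limit)

if __name__ == "__main__":
    r, w = os.pipe()
    close_fds_range(min(r, w), max(r, w) + 1)
```

``os.closerange()`` ignores errors on descriptors that are not open, which
is why the loop version needs the ``try``/``except`` that the range version
does not.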
4.2.3 (2021-12-27)
------------------

- Fixed a race condition where an ``rpcinterface`` extension that subscribed
  to events would not see the correct process state if it accessed the
  ``state`` attribute on a ``Subprocess`` instance immediately in the event
  callback.  Patch by Chao Wang.

- Added the ``setuptools`` package to the list of dependencies in
  ``setup.py`` because it is a runtime dependency.  Patch by Louis Sautier.

- The web interface will now return a 404 Not Found response if a log file
  is missing.  Previously, it would return 410 Gone.  It was changed because
  410 is intended to mean that the condition is likely to be permanent.  A
  missing log file is usually temporary, e.g. a process that was never
  started will not have a log file but will have one as soon as it is
  started.

4.2.2 (2021-02-26)
------------------

- Fixed a bug where ``supervisord`` could crash if a subprocess exited
  immediately before trying to kill it.

- Fixed a bug where the ``stdout_syslog`` and ``stderr_syslog`` options of a
  ``[program:x]`` section could not be used unless file logging for the same
  program had also been configured.  The file and syslog options can now be
  used independently.  Patch by Scott Stroupe.

- Fixed a bug where the ``logfile`` option in the ``[supervisord]`` section
  would not log to syslog when the special filename of ``syslog`` was
  supplied, as is supported by all other log filename options.
  Patch by Franck Cuny.

- Fixed a bug where environment variables defined in ``environment=`` in the
  ``[supervisord]`` section or a ``[program:x]`` section could not be used
  in ``%(ENV_x)s`` expansions.  Patch by MythRen.

- The ``supervisorctl signal`` command now allows a signal to be sent when a
  process is in the ``STOPPING`` state.  Patch by Mike Gould.

- ``supervisorctl`` and ``supervisord`` now print help when given ``-?`` in
  addition to the existing ``-h``/``--help``.
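The 4.2.2 fixes for independent syslog/file logging and ``%(ENV_x)s``
expansion can be combined in a single ``[program:x]`` section.  The program
name and paths below are invented for illustration:

```ini
[program:myapp]                  ; "myapp" is a hypothetical program name
command=/usr/local/bin/myapp
environment=APP_MODE="production"

; syslog and file logging may now be enabled independently:
stdout_syslog=true
stderr_syslog=true
stdout_logfile=/var/log/myapp.out.log

; values set in environment= above are available as %(ENV_x)s expansions:
stderr_logfile=/var/log/myapp.%(ENV_APP_MODE)s.err.log
```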
4.2.1 (2020-08-20)
------------------

- Fixed a bug on Python 3 where a network error could cause ``supervisord``
  to crash with the error ``:can't concat str to bytes``.
  Patch by Vinay Sajip.

- Fixed a bug where a test would fail on systems with glibc 2.31 because
  the default value of SOMAXCONN changed.

4.2.0 (2020-04-30)
------------------

- When ``supervisord`` is run in the foreground, a new ``--silent`` option
  suppresses the main log from being echoed to ``stdout`` as it normally
  would.  Patch by Trevor Foster.

- Parsing ``command=`` now supports a new expansion, ``%(numprocs)d``, that
  expands to the value of ``numprocs=`` in the same section.
  Patch by Santjago Corkez.

- Web UI buttons no longer use background images.  Patch by Dmytro
  Karpovych.

- The Web UI now has a link to view ``tail -f stderr`` for a process in
  addition to the existing ``tail -f stdout`` link.  Based on a patch by
  OuroborosCoding.

- The HTTP server will now send an ``X-Accel-Buffering: no`` header in
  logtail responses to fix Nginx proxy buffering.  Patch by Weizhao Li.

- When ``supervisord`` reaps an unknown PID, it will now log a description
  of the ``waitpid`` status.  Patch by Andrey Zelenchuk.

- Fixed a bug introduced in 4.0.3 where ``supervisorctl tail -f foo | grep
  bar`` would fail with the error ``NoneType object has no attribute
  'lower'``.  This only occurred on Python 2.7 and only when piped.
  Patch by Slawa Pidgorny.

4.1.0 (2019-10-19)
------------------

- Fixed a bug on Python 3 only where logging to syslog did not work and
  would log the exception ``TypeError: a bytes-like object is required, not
  'str'`` to the main ``supervisord`` log file.  Patch by Vinay Sajip and
  Josh Staley.

- Fixed a Python 3.8 compatibility issue caused by the removal of
  ``cgi.escape()``.  Patch by Mattia Procopio.

- The ``meld3`` package is no longer a dependency.  A version of ``meld3``
  is now included within the ``supervisor`` package itself.
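On the 4.1.0 ``cgi.escape()`` entry: Python 3.8 removed that function, and
``html.escape()`` is the standard-library replacement.  The sketch below is
an illustration of the compatibility shim, not Supervisor's actual patch;
note the two functions differ in their default quote handling:

```python
import html

def escape(text):
    # cgi.escape(text) escaped only &, < and > by default; html.escape
    # additionally escapes quotes unless quote=False is passed, so pass
    # quote=False to preserve the old default behavior.
    return html.escape(text, quote=False)

print(escape('<b>"tail" & more</b>'))  # &lt;b&gt;"tail" &amp; more&lt;/b&gt;
```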
4.0.4 (2019-07-15)
------------------

- Fixed a bug where ``supervisorctl tail stdout`` would actually tail
  ``stderr``.  Note that ``tail `` without the explicit ``stdout``
  correctly tailed ``stdout``.  The bug existed since 3.0a3 (released in
  2007).  Patch by Arseny Hofman.

- Improved the warning message added in 4.0.3 so it is now emitted for both
  ``tail`` and ``tail -f``.  Patch by Vinay Sajip.

- CVE-2019-12105.  Documentation addition only, no code changes.  This CVE
  states that ``inet_http_server`` does not use authentication by default
  (`details `_).  Note that ``inet_http_server`` is not enabled by default,
  and is also not enabled in the example configuration output by
  ``echo_supervisord_conf``.  The behavior of the ``inet_http_server``
  options has been correctly documented, and has not changed, since the
  feature was introduced in 2006.  A new `warning message `_ was added to
  the documentation.

4.0.3 (2019-05-22)
------------------

- Fixed an issue on Python 2 where running ``supervisorctl tail -f `` would
  fail with the message ``Cannot connect, error: `` where it may have
  worked on Supervisor 3.x.  The issue was introduced in Supervisor 4.0.0
  due to new bytes/strings conversions necessary to add Python 3 support.
  For ``supervisorctl`` to correctly display logs with Unicode characters,
  the terminal encoding specified by the environment must support it.  If
  not, the ``UnicodeEncodeError`` may still occur on either Python 2 or 3.
  A new warning message is now printed if a problematic terminal encoding
  is detected.  Patch by Vinay Sajip.

4.0.2 (2019-04-17)
------------------

- Fixed a bug where inline comments in the config file were not parsed
  correctly such that the comments were included as part of the values.
  This only occurred on Python 2, and only where the environment had an
  extra ``configparser`` module installed.
  The bug was introduced in Supervisor 4.0.0 because of Python 2/3
  compatibility code that expected a Python 2 environment to only have a
  ``ConfigParser`` module.

4.0.1 (2019-04-10)
------------------

- Fixed an issue on Python 3 where an ``OSError: [Errno 29] Illegal seek``
  would occur if ``logfile`` in the ``[supervisord]`` section was set to a
  special file like ``/dev/stdout`` that was not seekable, even if
  ``logfile_maxbytes = 0`` was set to disable rotation.  The issue only
  affected the main log and not child logs.  Patch by Martin Falatic.

4.0.0 (2019-04-05)
------------------

- Support for Python 3 has been added.  On Python 3, Supervisor requires
  Python 3.4 or later.  Many thanks to Vinay Sajip, Scott Maxwell, Palm
  Kevin, Tres Seaver, Marc Abramowitz, Son Nguyen, Shane Hathaway, Evan
  Andrews, and Ethan Hann who all made major contributions to the Python 3
  porting effort.  Thanks also to all contributors who submitted issue
  reports and patches towards this effort.

- Support for Python 2.4, 2.5, and 2.6 has been dropped.  On Python 2,
  Supervisor now requires Python 2.7.

- The ``supervisor`` package is no longer a namespace package.

- The behavior of the config file expansion ``%(here)s`` has changed.  In
  previous versions, a bug caused ``%(here)s`` to always expand to the
  directory of the root config file.  Now, when ``%(here)s`` is used inside
  a file included via ``[include]``, it will expand to the directory of
  that file.  Thanks to Alex Eftimie and Zoltan Toth-Czifra for the
  patches.

- The default value for the config file setting ``exitcodes=``, the
  expected exit codes of a program, has changed.  In previous versions, it
  was ``0,2``.  This caused issues with Golang programs where ``panic()``
  causes the exit code to be ``2``.  The default value for ``exitcodes``
  is now ``0``.

- An undocumented feature where multiple ``supervisorctl`` commands could
  be combined on a single line separated by semicolons has been removed.
- ``supervisorctl`` will now set its exit code to a non-zero value when an
  error condition occurs.  Previous versions did not set the exit code for
  most error conditions so it was almost always 0.  Patch by Luke Weber.

- Added new ``stdout_syslog`` and ``stderr_syslog`` options to the config
  file.  These are boolean options that indicate whether process output
  will be sent to syslog.  Supervisor can now log to both files and syslog
  at the same time.  Specifying a log filename of ``syslog`` is still
  supported but deprecated.  Patch by Jason R. Coombs.

3.4.0 (2019-04-05)
------------------

- FastCGI programs (``[fcgi-program:x]`` sections) can now be used in
  groups (``[group:x]``).  Patch by Florian Apolloner.

- Added a new ``socket_backlog`` option to the ``[fcgi-program:x]`` section
  to set the listen(2) socket backlog.  Patch by Nenad Merdanovic.

- Fixed a bug where ``SupervisorTransport`` (the XML-RPC transport used
  with Unix domain sockets) did not close the connection when ``close()``
  was called on it.  Patch by Jérome Perrin.

- Fixed a bug where ``supervisorctl start `` could hang for a long time if
  the system clock rolled back.  Patch by Joe LeVeque.

3.3.5 (2018-12-22)
------------------

- Fixed a race condition where ``supervisord`` would cancel a shutdown
  already in progress if it received ``SIGHUP``.  Now, ``supervisord`` will
  ignore ``SIGHUP`` if shutdown is already in progress.  Patch by Livanh.

- Fixed a bug where searching for a relative command ignored changes to
  ``PATH`` made in ``environment=``.  Based on a patch by dongweiming.

- ``childutils.ProcessCommunicationsProtocol`` now does an explicit
  ``flush()`` after writing to ``stdout``.

- A more descriptive error message is now emitted if a name in the config
  file contains a disallowed character.  Patch by Rick van Hattem.

3.3.4 (2018-02-15)
------------------

- Fixed a bug where rereading the configuration would not detect changes
  to eventlisteners.  Patch by Michael Ihde.
- Fixed a bug where the warning ``Supervisord is running as root and it is
  searching for its config file`` may have been incorrectly shown by
  ``supervisorctl`` if its executable name was changed.

- Fixed a bug where ``supervisord`` would continue starting up if the
  ``[supervisord]`` section of the config file specified ``user=`` but
  ``setuid()`` to that user failed.  It will now exit immediately if it
  cannot drop privileges.

- Fixed a bug in the web interface where redirect URLs did not have a
  slash between the host and query string, which caused issues when
  proxying with Nginx.  Patch by Luke Weber.

- When ``supervisord`` successfully drops privileges during startup, it is
  now logged at the ``INFO`` level instead of ``CRIT``.

- The HTTP server now returns a Content-Type header specifying UTF-8
  encoding.  This may fix display issues in some browsers.
  Patch by Katenkka.

3.3.3 (2017-07-24)
------------------

- Fixed CVE-2017-11610.  A vulnerability was found where an authenticated
  client can send a malicious XML-RPC request to ``supervisord`` that will
  run arbitrary shell commands on the server.  The commands will be run as
  the same user as ``supervisord``.  Depending on how ``supervisord`` has
  been configured, this may be root.  See
  https://github.com/Supervisor/supervisor/issues/964 for details.

3.3.2 (2017-06-03)
------------------

- Fixed a bug introduced in 3.3.0 where the ``supervisorctl reload``
  command would crash ``supervisord`` with the error ``OSError: [Errno 9]
  Bad file descriptor`` if the ``kqueue`` poller was used.  Patch by Jared
  Suttles.

- Fixed a bug introduced in 3.3.0 where ``supervisord`` could get stuck in
  a polling loop after the web interface was used, causing high CPU usage.
  Patch by Jared Suttles.

- Fixed a bug where if ``supervisord`` attempted to start but aborted due
  to another running instance of ``supervisord`` with the same config, the
  pidfile of the running instance would be deleted.  Patch by coldnight.
- Fixed a bug where ``supervisorctl fg`` would swallow most XML-RPC
  faults.  ``fg`` now prints the fault and exits.

- Parsing the config file will now fail with an error message if a process
  or group name contains a forward slash character (``/``) since it would
  break the URLs used by the web interface.

- ``supervisorctl reload`` now shows an error message if an argument is
  given.  Patch by Joel Krauska.

- ``supervisorctl`` commands ``avail``, ``reread``, and ``version`` now
  show an error message if an argument is given.

3.3.1 (2016-08-02)
------------------

- Fixed an issue where ``supervisord`` could hang when responding to HTTP
  requests (including ``supervisorctl`` commands) if the system time was
  set back after ``supervisord`` was started.

- Zope ``trackrefs``, a debugging tool that was included in the ``tests``
  directory but hadn't been used for years, has been removed.

3.3.0 (2016-05-14)
------------------

- ``supervisord`` will now use ``kqueue``, ``poll``, or ``select`` to
  monitor its file descriptors, in that order, depending on what is
  available on the system.  Previous versions used ``select`` only and
  would crash with the error ``ValueError: filedescriptor out of range in
  select()`` when running a large number of subprocesses (whatever number
  resulted in enough file descriptors to exceed the fixed-size file
  descriptor table used by ``select``, which is typically 1024).
  Patch by Igor Sobreira.

- ``/etc/supervisor/supervisord.conf`` has been added to the config file
  search paths.  Many versions of Supervisor packaged for Debian and
  Ubuntu have included a patch that added this path.  This difference was
  reported in a number of tickets as a source of confusion and upgrade
  difficulties, so the path has been added.  Patch by Kelvin Wong.

- Glob patterns in the ``[include]`` section now support the
  ``host_node_name`` expansion.  Patch by Paul Lockaby.

- Files included via the ``[include]`` section are now logged at the
  ``INFO`` level instead of ``WARN``.
  Patch by Daniel Hahler.

3.2.4 (2017-07-24)
------------------

- Backported from Supervisor 3.3.3: Fixed CVE-2017-11610.  A vulnerability
  was found where an authenticated client can send a malicious XML-RPC
  request to ``supervisord`` that will run arbitrary shell commands on the
  server.  The commands will be run as the same user as ``supervisord``.
  Depending on how ``supervisord`` has been configured, this may be root.
  See https://github.com/Supervisor/supervisor/issues/964 for details.

3.2.3 (2016-03-19)
------------------

- 400 Bad Request is now returned if an XML-RPC request is received with
  invalid body data.  In previous versions, 500 Internal Server Error was
  returned.

3.2.2 (2016-03-04)
------------------

- Parsing the config file will now fail with an error message if an
  ``inet_http_server`` or ``unix_http_server`` section contains a
  ``username=`` but no ``password=``.  In previous versions,
  ``supervisord`` would start with this invalid configuration but the HTTP
  server would always return a 500 Internal Server Error.  Thanks to Chris
  Ergatides for reporting this issue.

3.2.1 (2016-02-06)
------------------

- Fixed a server exception ``OverflowError: int exceeds XML-RPC limits``
  that made ``supervisorctl status`` unusable if the system time was far
  into the future.  The XML-RPC API returns timestamps as XML-RPC
  integers, but timestamps will exceed the maximum value of an XML-RPC
  integer in January 2038 ("Year 2038 Problem").  For now, timestamps
  exceeding the maximum integer will be capped at the maximum to avoid the
  exception and retain compatibility with existing API clients.  In a
  future version of the API, the return type for timestamps will be
  changed.

3.2.0 (2015-11-30)
------------------

- Files included via the ``[include]`` section are read in sorted order.
  In past versions, the order was undefined.  Patch by Ionel Cristian
  Mărieș.

- ``supervisorctl start`` and ``supervisorctl stop`` now complete more
  quickly when handling many processes.
  Thanks to Chris McDonough for this patch.  See:
  https://github.com/Supervisor/supervisor/issues/131

- Environment variables are now expanded for all config file options.
  Patch by Dexter Tad-y.

- Added ``signalProcess``, ``signalProcessGroup``, and
  ``signalAllProcesses`` XML-RPC methods to supervisor RPC interface.
  Thanks to Casey Callendrello, Marc Abramowitz, and Moriyoshi Koizumi
  for the patches.

- Added ``signal`` command to supervisorctl.  Thanks to Moriyoshi Koizumi
  and Marc Abramowitz for the patches.

- Errors caused by bad values in a config file now show the config section
  to make debugging easier.  Patch by Marc Abramowitz.

- Setting ``redirect_stderr=true`` in an ``[eventlistener:x]`` section is
  now disallowed because any messages written to ``stderr`` would
  interfere with the eventlistener protocol on ``stdout``.

- Fixed a bug where spawning a process could cause ``supervisord`` to
  crash if an ``IOError`` occurred while setting up logging.  One way this
  could happen is if a log filename was accidentally set to a directory
  instead of a file.  Thanks to Grzegorz Nosek for reporting this issue.

- Fixed a bug introduced in 3.1.0 where ``supervisord`` could crash when
  attempting to display a resource limit error.

- Fixed a bug where ``supervisord`` could crash with the message
  ``Assertion failed for processname: RUNNING not in STARTING`` if a time
  change caused the last start time of the process to be in the future.
  Thanks to Róbert Nagy, Sergey Leschenko, and samhair for the patches.

- A warning is now logged if an eventlistener enters the UNKNOWN state,
  which usually indicates a bug in the eventlistener.  Thanks to Steve
  Winton and detailyang for reporting issues that led to this change.

- Errors from the web interface are now logged at the ``ERROR`` level.
  Previously, they were logged at the ``TRACE`` level and easily missed.
  Thanks to Thomas Güttler for reporting this issue.

- Fixed ``DeprecationWarning: Parameters to load are deprecated.
  Call .resolve and .require separately.`` on setuptools >= 11.3.

- If ``redirect_stderr=true`` and ``stderr_logfile=auto``, no stderr log
  file will be created.  In previous versions, an empty stderr log file
  would be created.  Thanks to Łukasz Kożuchowski for the initial patch.

- Fixed an issue in Medusa that would cause ``supervisorctl tail -f`` to
  disconnect if many other ``supervisorctl`` commands were run in
  parallel.  Patch by Stefan Friesel.

3.1.4 (2017-07-24)
------------------

- Backported from Supervisor 3.3.3: Fixed CVE-2017-11610.  A vulnerability
  was found where an authenticated client can send a malicious XML-RPC
  request to ``supervisord`` that will run arbitrary shell commands on the
  server.  The commands will be run as the same user as ``supervisord``.
  Depending on how ``supervisord`` has been configured, this may be root.
  See https://github.com/Supervisor/supervisor/issues/964 for details.

3.1.3 (2014-10-28)
------------------

- Fixed an XML-RPC bug where the ElementTree-based parser handled strings
  like ``<string>hello</string>`` but not strings like
  ``<value>hello</value>``, which are valid in the XML-RPC spec.  This
  fixes compatibility with the Apache XML-RPC client for Java and possibly
  other clients.

3.1.2 (2014-09-07)
------------------

- Fixed a bug where ``tail group:*`` in ``supervisorctl`` would show a 500
  Internal Server Error rather than a BAD_NAME fault.

- Fixed a bug where the web interface would show a 500 Internal Server
  Error instead of an error message for some process start faults.

- Removed medusa files not used by Supervisor.

3.1.1 (2014-08-11)
------------------

- Fixed a bug where ``supervisorctl tail -f name`` output would stop if
  log rotation occurred while tailing.

- Prevent a crash when more file descriptors were attempted to be opened
  than permitted by the environment when starting many programs.  A spawn
  error is now logged instead.
- Compute "channel delay" properly, fixing symptoms where a supervisorctl
  ``start`` command would hang for a very long time when a process (or
  many processes) are spewing to their stdout or stderr.  See comments
  attached to https://github.com/Supervisor/supervisor/pull/263 .

- Added ``docs/conf.py``, ``docs/Makefile``, and
  ``supervisor/scripts/*.py`` to the release package.

3.1.0 (2014-07-29)
------------------

- The output of the ``start``, ``stop``, ``restart``, and ``clear``
  commands in ``supervisorctl`` has been changed to be consistent with the
  ``status`` command.  Previously, the ``status`` command would show a
  process like ``foo:foo_01`` but starting that process would show
  ``foo_01: started`` (note the group prefix ``foo:`` was missing).  Now,
  starting the process will show ``foo:foo_01: started``.  Suggested by
  Chris Wood.

- The ``status`` command in ``supervisorctl`` now supports group name
  syntax: ``status group:*``.

- The process column in the table output by the ``status`` command in
  ``supervisorctl`` now expands to fit the widest name.

- The ``update`` command in ``supervisorctl`` now accepts optional group
  names.  When group names are specified, only those groups will be
  updated.  Patch by Gary M. Josack.

- Tab completion in ``supervisorctl`` has been improved and now works for
  more cases.  Thanks to Mathieu Longtin and Marc Abramowitz for the
  patches.

- Attempting to start or stop a process group in ``supervisorctl`` with
  the ``group:*`` syntax will now show the same error message as the
  ``process`` syntax if the name does not exist.  Previously, it would
  show a Python exception.  Patch by George Ang.

- Added new ``PROCESS_GROUP_ADDED`` and ``PROCESS_GROUP_REMOVED`` events.
  These events are fired when process groups are added or removed from
  Supervisor's runtime configuration when using the ``add`` and ``remove``
  commands in ``supervisorctl``.  Patch by Brent Tubbs.

- Stopping a process in the backoff state now changes it to the stopped
  state.
  Previously, an attempt to stop a process in backoff would be ignored.
  Patch by Pascal Varet.

- The ``directory`` option is now expanded separately for each process in
  a homogeneous process group.  This allows each process to have its own
  working directory.  Patch by Perttu Ranta-aho.

- Removed ``setuptools`` from the ``requires`` list in ``setup.py``
  because it caused installation issues on some systems.

- Fixed a bug in Medusa where the HTTP Basic authorizer would cause an
  exception if the password contained a colon.  Thanks to Thomas Güttler
  for reporting this issue.

- Fixed an XML-RPC bug where calling ``supervisor.clearProcessLogs()``
  with a name like ``group:*`` would cause a 500 Internal Server Error
  rather than returning a BAD_NAME fault.

- Fixed a hang that could occur in ``supervisord`` if log rotation is used
  and an outside program deletes an active log file.  Patch by Magnus
  Lycka.

- A warning is now logged if a glob pattern in an ``[include]`` section
  does not match any files.  Patch by Daniel Hahler.

3.0.1 (2017-07-24)
------------------

- Backported from Supervisor 3.3.3: Fixed CVE-2017-11610.  A vulnerability
  was found where an authenticated client can send a malicious XML-RPC
  request to ``supervisord`` that will run arbitrary shell commands on the
  server.  The commands will be run as the same user as ``supervisord``.
  Depending on how ``supervisord`` has been configured, this may be root.
  See https://github.com/Supervisor/supervisor/issues/964 for details.

3.0 (2013-07-30)
----------------

- Parsing the config file will now fail with an error message if a process
  or group name contains characters that are not compatible with the
  eventlistener protocol.

- Fixed a bug where the ``tail -f`` command in ``supervisorctl`` would
  fail if the combined length of the username and password was over 56
  characters.

- Reading the config file now gives a separate error message when the
  config file exists but can't be read.
  Previously, any error reading the file would be reported as "could not
  find config file".  Patch by Jens Rantil.

- Fixed an XML-RPC bug where array elements after the first would be
  ignored when using the ElementTree-based XML parser.  Patch by Zev
  Benjamin.

- Fixed the usage message output by ``supervisorctl`` to show the correct
  default config file path.  Patch by Alek Storm.

3.0b2 (2013-05-28)
------------------

- The behavior of the program option ``user`` has changed.  In all
  previous versions, if ``supervisord`` failed to switch to the user, a
  warning would be sent to the stderr log but the child process would
  still be spawned.  This means that a mistake in the config file could
  result in a child process being unintentionally spawned as root.  Now,
  ``supervisord`` will not spawn the child unless it was able to
  successfully switch to the user.  Thanks to Igor Partola for reporting
  this issue.

- If a user specified in the config file does not exist on the system,
  ``supervisord`` will now print an error and refuse to start.

- Reverted a change to logging introduced in 3.0b1 that was intended to
  allow multiple processes to log to the same file with the rotating log
  handler.  The implementation caused supervisord to crash during reload
  and to leak file handles.  Also, since log rotation options are given on
  a per-program basis, impossible configurations could be created
  (conflicting rotation options for the same file).  Given this and that
  supervisord now has syslog support, it was decided to remove this
  feature.  A warning was added to the documentation that two processes
  may not log to the same file.

- Fixed a bug where parsing ``command=`` could cause supervisord to crash
  if ``shlex.split()`` fails, such as with bad quoting.  Patch by Scott
  Wilson.

- It is now possible to use ``supervisorctl`` on a machine with no
  ``supervisord.conf`` file by supplying the connection information in
  command line options.  Patch by Jens Rantil.
- Fixed a bug where supervisord would crash if the syslog handler was used
  and supervisord received SIGUSR2 (log reopen request).

- Fixed an XML-RPC bug where calling ``supervisor.getProcessInfo()`` with
  a bad name would cause a 500 Internal Server Error rather than returning
  a BAD_NAME fault.

- Added a favicon to the web interface.  Patch by Caio Ariede.

- Fixed a test failure due to incorrect handling of daylight savings time
  in the childutils tests.  Patch by Ildar Hizbulin.

- Fixed a number of pyflakes warnings for unused variables, imports, and
  dead code.  Patch by Philippe Ombredanne.

3.0b1 (2012-09-10)
------------------

- Fixed a bug where parsing ``environment=`` did not verify that key/value
  pairs were correctly separated.  Patch by Martijn Pieters.

- Fixed a bug in the HTTP server code that could cause unnecessary delays
  when sending large responses.  Patch by Philip Zeyliger.

- When supervisord starts up as root, if the ``-c`` flag was not provided,
  a warning is now emitted to the console.  Rationale: supervisord looks
  in the current working directory for a ``supervisord.conf`` file;
  someone might trick the root user into starting supervisord while cd'ed
  into a directory that has a rogue ``supervisord.conf``.

- A warning was added to the documentation about the security implications
  of starting supervisord without the ``-c`` flag.

- Added a boolean program option ``stopasgroup``, defaulting to false.
  When true, the flag causes supervisor to send the stop signal to the
  whole process group.  This is useful for programs, such as Flask in
  debug mode, that do not propagate stop signals to their children,
  leaving them orphaned.

- Python 2.3 is no longer supported.  The last version that supported
  Python 2.3 is Supervisor 3.0a12.

- Removed the unused "supervisor_rpc" entry point from ``setup.py``.

- Fixed a bug in the rotating log handler that would cause unexpected
  results when two processes were set to log to the same file.  Patch by
  Whit Morriss.
- Fixed a bug in config file reloading where each reload could leak memory
  because a list of warning messages would be appended to but never
  cleared.  Patch by Philip Zeyliger.

- Added a new Syslog log handler.  Thanks to Denis Bilenko, Nathan L.
  Smith, and Jason R. Coombs, who each contributed to the patch.

- Put all change history into a single file (CHANGES.txt).

3.0a12 (2011-12-06)
-------------------

- Released to replace a broken 3.0a11 package where non-Python files were
  not included in the package.

3.0a11 (2011-12-06)
-------------------

- Added a new file, ``PLUGINS.rst``, with a listing of third-party plugins
  for Supervisor.  Contributed by Jens Rantil.

- The ``pid`` command in supervisorctl can now be used to retrieve the
  PIDs of child processes.  See ``help pid``.  Patch by Gregory
  Wisniewski.

- Added a new ``host_node_name`` expansion that will be expanded to the
  value returned by Python's ``platform.node`` (see
  http://docs.python.org/library/platform.html#platform.node).
  Patch by Joseph Kondel.

- Fixed a bug in the web interface where pages over 64K would be
  truncated.  Thanks to Drew Perttula and Timothy Jones for reporting
  this.

- Renamed ``README.txt`` to ``README.rst`` so GitHub renders the file as
  ReStructuredText.

- The XML-RPC server is now compatible with clients that do not send an
  empty ``<params>`` element when there are no parameters for the method
  call.  Thanks to Johannes Becker for reporting this.

- Fixed ``supervisorctl --help`` output to show the correct program name.

- The behavior of the configuration options ``minfds`` and ``minprocs``
  has changed.  Previously, if a hard limit was less than ``minfds`` or
  ``minprocs``, supervisord would unconditionally abort with an error.
  Now, supervisord will attempt to raise the hard limit.  This may succeed
  if supervisord is run as root, otherwise the error is printed as before.
  Patch by Benoit Sigoure.
- Added a boolean program option ``killasgroup``, defaulting to false.  If
  true, when resorting to sending SIGKILL to stop or terminate the
  process, the signal is sent to its whole process group instead, to take
  care of possible children as well and not leave them behind.  Patch by
  Samuele Pedroni.

- Environment variables may now be used in the configuration file for
  options that support string expansion.  Patch by Aleksey Sivokon.

- Fixed a race condition where supervisord might not act on a signal sent
  to it.  Thanks to Adar Dembo for reporting the issue and supplying the
  initial patch.

- Updated the output of ``echo_supervisord_conf`` to fix typos and improve
  comments.  Thanks to Jens Rantil for noticing these.

- Fixed a possible 500 Server Error from the web interface.  This was
  observed when using Supervisor on a domain socket behind Nginx, where
  Supervisor would raise an exception because REMOTE_ADDR was not set.
  Patch by David Bennett.

3.0a10 (2011-03-30)
-------------------

- Fixed the stylesheet of the web interface so the footer line won't
  overlap a long process list.  Thanks to Derek DeVries for the patch.

- Allow rpc interface plugins to register new event types.

- Bug fix for FCGI sockets not getting cleaned up when the ``reload``
  command is issued from supervisorctl.  Also, the default behavior has
  changed for FCGI sockets.  They are now closed whenever the number of
  running processes in a group hits zero.  Previously, the sockets were
  kept open unless a group-level stop command was issued.

- Better error message when the HTTP server cannot reverse-resolve a
  hostname to an IP address.  Previous behavior: show a socket error.
  Current behavior: spit out a suggestion to stdout.

- Environment variables set via the ``environment=`` value within the
  ``[supervisord]`` section had no effect.  Thanks to Wyatt Baldwin for a
  patch.

- Fixed a bug where stopping a process would cause process output that
  happened after the stop request was issued to be lost.  See
  https://github.com/Supervisor/supervisor/issues/11.
- Moved 2.X change log entries into ``HISTORY.txt``.

- Converted ``CHANGES.txt`` and ``README.txt`` into proper ReStructuredText
  and included them in the ``long_description`` in ``setup.py``.

- Added a tox.ini to the package (run via ``tox`` in the package dir).
  Tests supervisor on multiple Python versions.

3.0a9 (2010-08-13)
------------------

- Use rich comparison methods rather than __cmp__ to sort process configs
  and process group configs to better straddle Python versions.  (Thanks to
  Jonathan Riboux for identifying the problem and supplying an initial
  patch.)

- Fixed the test_supervisorctl.test_maintail_dashf test for Python 2.7.
  (Thanks to Jonathan Riboux for identifying the problem and supplying an
  initial patch.)

- Fixed the way that supervisor.datatypes.url computes a "good" URL for
  compatibility with Python 2.7 and Python >= 2.6.5.  URLs with bogus
  "schemes://" will now be accepted as a version-straddling compromise
  (previously they were rejected and supervisor would refuse to start).
  (Thanks to Jonathan Riboux for identifying the problem and supplying an
  initial patch.)

- Add a ``-v`` / ``--version`` option to supervisord: print the supervisord
  version number to stdout and exit.  (Roger Hoover)

- Import iterparse from xml.etree when available (e.g. Python 2.6).
  Patch by Sidnei da Silva.

- Fixed the url to the supervisor-users mailing list.  Patch by Sidnei
  da Silva.

- When parsing "environment=" in the config file, changes introduced in
  3.0a8 prevented Supervisor from parsing some characters commonly found
  in paths unless quoting was used as in this example::

    environment=HOME='/home/auser'

  Supervisor once again allows the above line to be written as::

    environment=HOME=/home/auser

  Alphanumeric characters, "_", "/", ".", "+", "-", "(", ")", and ":" can
  all be used as a value without quoting.  If any other characters are
  needed in the value, please quote it as in the first example above.
  Thanks to Paul Heideman for reporting this issue.
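The unquoted-value rule above (alphanumerics plus a small set of punctuation) can be expressed as a simple check; the helper name and regular expression below are ours, for illustration only, not Supervisor's parser:

```python
import re

# Characters allowed in an unquoted environment= value, per the rule
# above: alphanumerics plus _ / . + - ( ) and :
_UNQUOTED = re.compile(r'^[A-Za-z0-9_/.+\-():]+$')

def needs_quoting(value):
    """Return True if an environment= value must be quoted."""
    return not _UNQUOTED.match(value)
```

So ``HOME=/home/auser`` needs no quotes, while a value containing a comma or a space does.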
- Supervisor will now look for its config file in locations relative to the
  executable path, allowing it to be used more easily in virtual
  environments.  If sys.argv[0] is ``/path/to/venv/bin/supervisorctl``,
  supervisor will now look for its config file in
  ``/path/to/venv/etc/supervisord.conf`` and
  ``/path/to/venv/supervisord.conf`` in addition to the other standard
  locations.  Patch by Chris Rossi.

3.0a8 (2010-01-20)
------------------

- Don't clean up file descriptors on first supervisord invocation: this is
  a lame workaround for Snow Leopard systems that use libdispatch and are
  receiving "Illegal instruction" messages at supervisord startup time.
  Restarting supervisord via "supervisorctl restart" may still cause a
  crash on these systems.

- Got rid of Medusa hashbang headers in various files to ease RPM
  packaging.

- Allow umask to be 000 (patch contributed by Rowan Nairn).

- Fixed a bug introduced in 3.0a7 where supervisorctl wouldn't properly ask
  for a username/password combination from a password-protected supervisord
  if the username/password values weren't filled in within the
  "[supervisorctl]" section.  It now properly asks for a username and
  password.

- Fixed a bug introduced in 3.0a7 where setup.py would not detect the
  Python version correctly.  Patch by Daniele Paolella.

- Fixed a bug introduced in 3.0a7 where parsing a string of key/value pairs
  failed on Python 2.3 due to use of regular expression syntax introduced
  in Python 2.4.

- Removed the test suite for the ``memmon`` console script, which was moved
  to the Superlance package in 3.0a7.

- Added release dates to CHANGES.txt.

- Reloading the config for an fcgi process group did not close the fcgi
  socket - now, the socket is closed whenever the group is stopped as a
  unit (including during config update).  However, if you stop all the
  processes in a group individually, the socket will remain open to allow
  for graceful restarts of FCGI daemons.
  (Roger Hoover)

- Rereading the config did not pick up changes to the socket parameter in a
  fcgi-program section.  (Roger Hoover)

- Made a more friendly exception message when a FCGI socket cannot be
  created.  (Roger Hoover)

- Fixed a bug where the --serverurl option of supervisorctl would not
  accept a URL with a "unix" scheme.  (Jason Kirtland)

- Running the tests now requires the "mock" package.  This dependency has
  been added to "tests_require" in setup.py.  (Roger Hoover)

- Added support for setting the ownership and permissions for an FCGI
  socket.  This is done using new "socket_owner" and "socket_mode" options
  in an [fcgi-program:x] section.  See the manual for details.
  (Roger Hoover)

- Fixed a bug where the FCGI socket reference count was not getting
  decremented on spawn error.  (Roger Hoover)

- Fixed a Python 2.6 deprecation warning on use of the "sha" module.

- Updated ez_setup.py to one that knows about setuptools 0.6c11.

- Running "supervisorctl shutdown" no longer dumps a Python backtrace when
  it can't connect to supervisord on the expected socket.  Thanks to
  Benjamin Smith for reporting this.

- Removed use of collections.deque in our bundled version of asynchat
  because it broke compatibility with Python 2.3.

- The sample configuration output by "echo_supervisord_conf" now correctly
  shows the default for "autorestart" as "unexpected".  Thanks to William
  Dode for noticing it showed the wrong value.

3.0a7 (2009-05-24)
------------------

- We now bundle our own patched version of Medusa contributed by Jason
  Kirtland to allow Supervisor to run on Python 2.6.  This was done
  because Python 2.6 introduced backwards incompatible changes to asyncore
  and asynchat in the stdlib.

- The console script ``memmon``, introduced in Supervisor 3.0a4, has been
  moved to Superlance (http://pypi.python.org/pypi/superlance).  The
  Superlance package contains other useful monitoring tools designed to
  run under Supervisor.
- Supervisorctl now correctly interprets all of the error codes that can be
  returned when starting a process.  Patch by Francesc Alted.

- New ``stdout_events_enabled`` and ``stderr_events_enabled`` config
  options have been added to the ``[program:x]``, ``[fcgi-program:x]``, and
  ``[eventlistener:x]`` sections.  These enable the emitting of new
  PROCESS_LOG events for a program.  If unspecified, the default is False.

  If enabled for a subprocess, and data is received from the stdout or
  stderr of the subprocess while not in the special capture mode used by
  PROCESS_COMMUNICATION, an event will be emitted.

  Event listeners can subscribe to either PROCESS_LOG_STDOUT or
  PROCESS_LOG_STDERR individually, or PROCESS_LOG for both.

- Values for subprocess environment variables specified with environment=
  in supervisord.conf can now be optionally quoted, allowing them to
  contain commas.  Patch by Tim Godfrey.

- Added a new event type, REMOTE_COMMUNICATION, that is emitted by a new
  RPC method, supervisor.sendRemoteCommEvent().

- Patch for bug #268 (KeyError on ``here`` expansion for
  stdout/stderr_logfile) from David E. Kindred.

- Add ``reread``, ``update``, and ``avail`` commands based on Anders
  Quist's ``online_config_reload.diff`` patch.
  This patch extends the "add" and "drop" commands with automagical
  behavior::

    In supervisorctl:

    supervisor> status
    bar                              RUNNING    pid 14864, uptime 18:03:42
    baz                              RUNNING    pid 23260, uptime 0:10:16
    foo                              RUNNING    pid 14866, uptime 18:03:42
    gazonk                           RUNNING    pid 23261, uptime 0:10:16
    supervisor> avail
    bar                              in use     auto      999:999
    baz                              in use     auto      999:999
    foo                              in use     auto      999:999
    gazonk                           in use     auto      999:999
    quux                             avail      auto      999:999

    Now we add this to our conf:

    [group:zegroup]
    programs=baz,gazonk

    Then we reread conf:

    supervisor> reread
    baz: disappeared
    gazonk: disappeared
    quux: available
    zegroup: available
    supervisor> avail
    bar                              in use     auto      999:999
    foo                              in use     auto      999:999
    quux                             avail      auto      999:999
    zegroup:baz                      avail      auto      999:999
    zegroup:gazonk                   avail      auto      999:999
    supervisor> status
    bar                              RUNNING    pid 14864, uptime 18:04:18
    baz                              RUNNING    pid 23260, uptime 0:10:52
    foo                              RUNNING    pid 14866, uptime 18:04:18
    gazonk                           RUNNING    pid 23261, uptime 0:10:52

    The magic make-it-so command:

    supervisor> update
    baz: stopped
    baz: removed process group
    gazonk: stopped
    gazonk: removed process group
    zegroup: added process group
    quux: added process group
    supervisor> status
    bar                              RUNNING    pid 14864, uptime 18:04:43
    foo                              RUNNING    pid 14866, uptime 18:04:43
    quux                             RUNNING    pid 23561, uptime 0:00:02
    zegroup:baz                      RUNNING    pid 23559, uptime 0:00:02
    zegroup:gazonk                   RUNNING    pid 23560, uptime 0:00:02
    supervisor> avail
    bar                              in use     auto      999:999
    foo                              in use     auto      999:999
    quux                             in use     auto      999:999
    zegroup:baz                      in use     auto      999:999
    zegroup:gazonk                   in use     auto      999:999

- Fix bug with symptom "KeyError: 'process_name'" when using a logfile name
  including the documented ``process_name`` Python string expansions.

- Tab completions in the supervisorctl shell, and a foreground mode for
  Supervisor, implemented as a part of GSoC.  The supervisorctl program now
  has a ``fg`` command, which makes it possible to supply inputs to a
  process, and see its output/error stream in real time.

- Process config reloading implemented by Anders Quist.
  The supervisorctl program now has the commands "add" and "drop".
  "add <name>" adds the process group implied by <name> in the config file.
  "drop <name>" removes the process group from the running configuration
  (it must already be stopped).  This makes it possible to add processes to
  and remove processes from a running supervisord without restarting the
  supervisord process.

- Fixed a bug where opening the HTTP servers would fail silently for socket
  errors other than errno.EADDRINUSE.

- Thanks to Dave Peticolas, using "reload" against a supervisord that is
  running in the background no longer causes supervisord to crash.

- Configuration options for logfiles now accept mixed case reserved words
  (e.g. "AUTO" or "auto") for consistency with other options.

- childutils.eventdata was buggy; it could not deal with carriage returns
  in data.  See http://www.plope.com/software/collector/257.  Thanks to Ian
  Bicking.

- Per-process exitcodes= configuration now will not accept exit codes that
  are not 8-bit unsigned integers (supervisord will not start when one of
  the exit codes is outside the range of 0 - 255).

- Per-process ``directory`` value can now contain expandable values like
  ``%(here)s``.  (See http://www.plope.com/software/collector/262.)

- Accepted patch from Roger Hoover to allow for a new sort of process
  group: "fcgi-program".  Adding one of these to your supervisord.conf
  allows you to control fastcgi programs.  FastCGI programs cannot belong
  to heterogeneous groups.

  The configuration for FastCGI programs is the same as regular programs
  except for an additional "socket" parameter.  Substitution happens on the
  socket parameter with the ``here`` and ``program_name`` variables::

    [fcgi-program:fcgi_test]
    ;socket=tcp://localhost:8002
    socket=unix:///path/to/fcgi/socket

- Supervisorctl now supports a plugin model for supervisorctl commands.

- Added the ability to retrieve supervisord's own pid through
  supervisor.getPID() on the XML-RPC interface or a new "pid" command on
  supervisorctl.
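Substitution on the ``socket`` parameter described above uses Python %-style expansion with the ``here`` and ``program_name`` variables; a minimal sketch (the helper name and sample values are ours, for illustration):

```python
def expand_socket(value, here, program_name):
    """Expand %(here)s and %(program_name)s in a socket= value."""
    return value % {'here': here, 'program_name': program_name}

expand_socket('unix:///%(here)s/%(program_name)s.sock',
              'tmp', 'fcgi_test')
# 'unix:///tmp/fcgi_test.sock'
```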
3.0a6 (2008-04-07)
------------------

- The RotatingFileLogger had a race condition in its doRollover method
  whereby a file might not actually exist despite a call to os.path.exists
  on the line above a place where we try to remove it.  We catch the
  exception now and ignore the missing file.

3.0a5 (2008-03-13)
------------------

- Supervisorctl now supports persistent readline history.  To enable, add
  a ``history_file`` path to the ``[supervisorctl]`` section in your
  supervisord.conf file.

- Multiple commands may now be issued on one supervisorctl command line,
  e.g. "restart prog; tail -f prog".  Separate commands with a single
  semicolon; they will be executed in order as you would expect.

3.0a4 (2008-01-30)
------------------

- 3.0a3 broke Python 2.3 backwards compatibility.

- On Debian Sarge, one user reported that a call to options.mktempfile
  would fail with an "[Errno 9] Bad file descriptor" at supervisord startup
  time.  I was unable to reproduce this, but we found a workaround that
  seemed to work for him and it's included in this release.  See
  http://www.plope.com/software/collector/252 for more information.
  Thanks to William Dode.

- The fault ``ALREADY_TERMINATED`` has been removed.  It was only raised by
  supervisor.sendProcessStdin().  That method now returns ``NOT_RUNNING``
  for parity with the other methods.  (Mike Naberezny)

- The fault TIMED_OUT has been removed.  It was not used.

- Supervisor now depends on meld3 0.6.4, which does not compile its C
  extensions by default, so there is no more need to faff around with
  NO_MELD3_EXTENSION_MODULES during installation if you don't have a C
  compiler or the Python development libraries on your system.

- Instead of making users root around for the sample.conf file, provide a
  convenience command "echo_supervisord_conf", which they can use to echo
  the sample.conf to their terminal (and redirect to a file appropriately).
  This is a convenience for new users (especially those with no Python
  experience).
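Splitting a multi-command supervisorctl line as described above is plain string processing; a sketch of the idea (not supervisorctl's actual parser):

```python
def split_commands(line):
    """Split 'restart prog; tail -f prog' into individual commands,
    dropping empty fragments and surrounding whitespace."""
    return [cmd.strip() for cmd in line.split(';') if cmd.strip()]

split_commands('restart prog; tail -f prog')
# ['restart prog', 'tail -f prog']
```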
- Added ``numprocs_start`` config option to ``[program:x]`` and
  ``[eventlistener:x]`` sections.  This is an offset used to compute the
  first integer that ``numprocs`` will begin to start from.  Contributed by
  Antonio Beamud Montero.

- Added capability for ``[include]`` config section to config format.  This
  section must contain a single key "files", which must name a
  space-separated list of file globs that will be included in supervisor's
  configuration.  Contributed by Ian Bicking.

- Invoking the ``reload`` supervisorctl command could trigger a bug in
  supervisord which caused it to crash.  See
  http://www.plope.com/software/collector/253 .  Thanks to William Dode for
  a bug report.

- The ``pidproxy`` script was made into a console script.

- The ``password`` value in both the ``[inet_http_server]`` and
  ``[unix_http_server]`` sections can now optionally be specified as a SHA
  hexdigest instead of as cleartext.  Values prefixed with ``{SHA}`` will
  be considered SHA hex digests.  To encrypt a password to a form suitable
  for pasting into the configuration file using Python, do, e.g.::

    >>> import sha
    >>> '{SHA}' + sha.new('thepassword').hexdigest()
    '{SHA}82ab876d1387bfafe46cc1c8a2ef074eae50cb1d'

- The subtypes of the events PROCESS_STATE_CHANGE (and
  PROCESS_STATE_CHANGE itself) have been removed, replaced with a simpler
  set of PROCESS_STATE subscribable event types.
  The new event types are::

    PROCESS_STATE_STOPPED
    PROCESS_STATE_EXITED
    PROCESS_STATE_STARTING
    PROCESS_STATE_STOPPING
    PROCESS_STATE_BACKOFF
    PROCESS_STATE_FATAL
    PROCESS_STATE_RUNNING
    PROCESS_STATE_UNKNOWN
    PROCESS_STATE # abstract

  PROCESS_STATE_STARTING replaces::

    PROCESS_STATE_CHANGE_STARTING_FROM_STOPPED
    PROCESS_STATE_CHANGE_STARTING_FROM_BACKOFF
    PROCESS_STATE_CHANGE_STARTING_FROM_EXITED
    PROCESS_STATE_CHANGE_STARTING_FROM_FATAL

  PROCESS_STATE_RUNNING replaces
  PROCESS_STATE_CHANGE_RUNNING_FROM_STARTED.

  PROCESS_STATE_BACKOFF replaces
  PROCESS_STATE_CHANGE_BACKOFF_FROM_STARTING.

  PROCESS_STATE_STOPPING replaces::

    PROCESS_STATE_CHANGE_STOPPING_FROM_RUNNING
    PROCESS_STATE_CHANGE_STOPPING_FROM_STARTING

  PROCESS_STATE_EXITED replaces
  PROCESS_STATE_CHANGE_EXITED_FROM_RUNNING.

  PROCESS_STATE_STOPPED replaces
  PROCESS_STATE_CHANGE_STOPPED_FROM_STOPPING.

  PROCESS_STATE_FATAL replaces
  PROCESS_STATE_CHANGE_FATAL_FROM_BACKOFF.

  PROCESS_STATE_UNKNOWN replaces PROCESS_STATE_CHANGE_TO_UNKNOWN.

  PROCESS_STATE replaces PROCESS_STATE_CHANGE.

  The PROCESS_STATE_CHANGE_EXITED_OR_STOPPED abstract event is gone.

  All process state changes have at least "processname", "groupname", and
  "from_state" (the name of the previous state) in their serializations.

  PROCESS_STATE_EXITED additionally has "expected" (1 or 0) and "pid" (the
  process id) in its serialization.

  PROCESS_STATE_RUNNING, PROCESS_STATE_STOPPING, and PROCESS_STATE_STOPPED
  additionally have "pid" in their serializations.

  PROCESS_STATE_STARTING and PROCESS_STATE_BACKOFF have "tries" in their
  serializations (initially "0", bumped +1 each time a start retry
  happens).

- Remove documentation from README.txt, point people to
  http://supervisord.org/manual/ .

- The eventlistener request/response protocol has changed.  OK/FAIL must
  now be wrapped in a RESULT envelope so we can use it for more specialized
  communications.

  Previously, to signify success, an event listener would write the string
  ``OK\n`` to its stdout.
  To signify that the event was seen but couldn't be handled by the
  listener and should be rebuffered, an event listener would write the
  string ``FAIL\n`` to its stdout.

  In the new protocol, the listener must write the string::

    RESULT {resultlen}\n{result}

  For example, to signify OK::

    RESULT 2\nOK

  To signify FAIL::

    RESULT 4\nFAIL

  See the scripts/sample_eventlistener.py script for an example.

- To provide a hook point for custom results returned from event handlers
  (see above), the [eventlistener:x] configuration sections now accept a
  "result_handler=" parameter, e.g.
  "result_handler=supervisor.dispatchers:default_handler" (the default) or
  "result_handler=mypackage:myhandler".  The keys are pkgutil "entry point"
  specifications (importable Python function names).

  Result handlers must be callables which accept two arguments: one named
  "event" which represents the event, and the other named "result", which
  represents the listener's result.  A result handler either executes
  successfully or raises an exception.  If it raises a
  supervisor.dispatchers.RejectEvent exception, the event will be
  rebuffered, and the eventhandler will be placed back into the
  ACKNOWLEDGED state.  If it raises any other exception, the event handler
  will be placed in the UNKNOWN state.  If it does not raise any exception,
  the event is considered successfully processed.  A result handler's
  return value is ignored.

  Writing a result handler is an "in case of emergency, break glass" sort
  of thing; it is not something to be used for arbitrary business code.  In
  particular, handlers *must not block* for any appreciable amount of time.

  The standard eventlistener result handler
  (supervisor.dispatchers:default_handler) does nothing if it receives an
  "OK" and will raise a supervisor.dispatchers.RejectEvent exception if it
  receives any other value.

- Supervisord now emits TICK events, which happen every N seconds.
  Three types of TICK events are available: TICK_5 (every five seconds),
  TICK_60 (every minute), TICK_3600 (every hour).  Event listeners may
  subscribe to one of these types of events to perform every-so-often
  processing.  TICK events are subtypes of the EVENT type.

- Get rid of the OSX platform-specific memory monitor and replace it with
  memmon.py, which works on both Linux and Mac OS.  This script is now a
  console script named "memmon".

- Allow the "web handler" (the handler which receives http requests from
  browsers visiting the web UI of supervisor) to deal with POST requests.

- RPC interface methods stopProcess(), stopProcessGroup(), and
  stopAllProcesses() now take an optional "wait" argument that defaults to
  True for parity with the start methods.

3.0a3 (2007-10-02)
------------------

- Supervisorctl now reports a better error message when the main supervisor
  XML-RPC namespace is not registered.  Thanks to Mike Orr for reporting
  this.  (Mike Naberezny)

- Create a ``scripts`` directory within the supervisor package, move
  ``pidproxy.py`` there, and place sample event listener and comm event
  programs within the directory.

- When an event notification is buffered (either because a listener
  rejected it or because all listeners were busy when we attempted to send
  it originally), we now rebuffer it in a way that will result in it being
  retried earlier than it used to be.

- When a listener process exits (unexpectedly) before transitioning from
  the BUSY state, rebuffer the event that was being processed.

- The supervisorctl ``tail`` command now accepts a trailing specifier:
  ``stderr`` or ``stdout``, which, respectively, allow a user to tail the
  stderr or stdout of the named process.  When this specifier is not
  provided, tail defaults to stdout.

- The supervisorctl ``clear`` command now clears both stderr and stdout
  logs for the given process.
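The RESULT envelope described in the eventlistener protocol notes above can be produced with a couple of lines of Python; this helper is illustrative, not part of Supervisor's API:

```python
def result_envelope(result):
    """Wrap a listener result string in the RESULT envelope documented
    for the eventlistener protocol: 'RESULT {resultlen}\\n{result}'."""
    return 'RESULT %d\n%s' % (len(result), result)

result_envelope('OK')    # 'RESULT 2\nOK'
result_envelope('FAIL')  # 'RESULT 4\nFAIL'
```

A listener would write this string to its stdout after processing each event.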
- When a process encounters a spawn error as a result of a failed execve or
  when it cannot setuid to a given uid, it now puts this info into the
  process' stderr log rather than its stdout log.

- The event listener protocol header now contains the ``server``
  identifier, the ``pool`` that the event emanated from, and the
  ``poolserial`` as well as the values it previously contained (version,
  event name, serial, and length).  The server identifier is taken from the
  config file options value ``identifier``, the ``pool`` value is the name
  of the listener pool that this event emanates from, and the
  ``poolserial`` is a serial number assigned to the event local to the pool
  that is processing it.

- The event listener protocol header is now a sequence of key-value pairs
  rather than a list of positional values.  Previously, a representative
  header looked like::

    SUPERVISOR3.0 PROCESS_COMMUNICATION_STDOUT 30 22\n

  Now it looks like::

    ver:3.0 server:supervisor serial:21 ...

- Specific event payload serializations have changed.  All event types that
  deal with processes now include the pid of the process that the event is
  describing.  In event serialization "header" values, we've removed the
  space between the header name and the value, and headers are now
  separated by a space instead of a line feed.  The names of keys in all
  event types have had underscores removed.

- Abandon the use of the Python stdlib ``logging`` module for speed and
  cleanliness purposes.  We've rolled our own.

- Fix crash on start if AUTO logging is used with a max_bytes of zero for a
  process.

- Improve process communication event performance.

- The process config parameters ``stdout_capturefile`` and
  ``stderr_capturefile`` are no longer valid.  They have been replaced with
  the ``stdout_capture_maxbytes`` and ``stderr_capture_maxbytes``
  parameters, which are meant to be suffix-multiplied integers.  They both
  default to zero.  When they are zero, process communication event
  capturing is not performed.
  When either is nonzero, the value represents the maximum number of bytes
  that will be captured between process event start and end tags.  This
  change was made to support the fact that we no longer keep capture data
  in a separate file; we just use a FIFO in RAM to maintain capture info.
  Users who don't care about process communication events, or who haven't
  changed the defaults for ``stdout_capturefile`` or
  ``stderr_capturefile``, needn't do anything to their configurations to
  deal with this change.

- Log message levels have been normalized.  In particular, process
  stdin/stdout is now logged at ``debug`` level rather than at ``trace``
  level (``trace`` level is now reserved for output useful typically for
  debugging supervisor itself).  See "Supervisor Log Levels" in the
  documentation for more info.

- When an event is rebuffered (because all listeners are busy or a listener
  rejected the event), the rebuffered event is now inserted at the head of
  the listener event queue.  This doesn't guarantee event emission in
  natural ordering, because if a listener rejects an event or dies while
  it's processing an event, it can take an arbitrary amount of time for the
  event to be rebuffered, and other events may be processed in the
  meantime.  But if pool listeners never reject an event and don't die
  while processing an event, this guarantees that events will be emitted in
  the order that they were received, because if all listeners are busy, the
  rebuffered event will be tried again "first" on the next go-around.

- Removed the EVENT_BUFFER_OVERFLOW event type.

- The supervisorctl xmlrpc proxy can now communicate with supervisord using
  a persistent HTTP connection.

- A new module "supervisor.childutils" was added.  This module provides
  utilities for Python scripts which act as children of supervisord.  Most
  notably, it contains an API method "getRPCInterface" that allows you to
  obtain an xmlrpclib ServerProxy that is willing to communicate with the
  parent supervisor.
  It also contains utility functions that allow for parsing of supervisor
  event listener protocol headers.  A pair of scripts (loop_eventgen.py and
  loop_listener.py) were added to the script directory that serve as
  examples of how to use the childutils module.

- A new envvar is added to child process environments:
  SUPERVISOR_SERVER_URL.  This contains the server URL for the supervisord
  running the child.

- An ``OK`` URL was added at ``/ok.html`` which just returns the string
  ``OK`` (can be used for up checks or speed checks via plain-old-HTTP).

- An additional command-line option ``--profile_options`` is accepted by
  the supervisord script for developer use::

    supervisord -n -c sample.conf --profile_options=cumulative,calls

  The values are sort_stats options that can be passed to the standard
  Python profiler's PStats sort_stats method.  When you exit supervisor, it
  will print Python profiling output to stdout.

- If cElementTree is installed in the Python used to invoke supervisor, an
  alternate (faster, by about 2X) XML parser will be used to parse XML-RPC
  request bodies.  cElementTree was added as an "extras_require" option in
  setup.py.

- Added the ability to start, stop, and restart process groups to
  supervisorctl.  To start a group, use ``start groupname:*``.  To start
  multiple groups, use ``start groupname1:* groupname2:*``.  Equivalent
  commands work for "stop" and "restart".  You can mix and match short
  processnames, fully-specified group:process names, and groupsplats on the
  same line for any of these commands.

- Added a ``directory`` option to the process config.  If you set this
  option, supervisor will chdir to this directory before executing the
  child program (and thus it will be the child's cwd).

- Added a ``umask`` option to the process config.  If you set this option,
  supervisor will set the umask of the child program.  (Thanks to Ian
  Bicking for the suggestion.)
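The childutils header parsing mentioned above operates on key-value headers of the form shown earlier (``ver:3.0 server:supervisor serial:21``); a minimal sketch of such parsing, not the childutils implementation itself:

```python
def parse_header(line):
    """Split 'key1:val1 key2:val2 ...' into a dict.  Values contain no
    spaces in this protocol, so str.split() is sufficient."""
    return dict(token.split(':', 1) for token in line.split())

parse_header('ver:3.0 server:supervisor serial:21')
# {'ver': '3.0', 'server': 'supervisor', 'serial': '21'}
```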
- A pair of scripts ``osx_memmon_eventgen.py`` and
  ``osx_memmon_listener.py`` have been added to the scripts directory.  If
  they are used together as described in their comments, processes which
  are consuming "too much" memory will be restarted.  The ``eventgen``
  script only works on OSX (my main development platform) but it should be
  trivially generalizable to other operating systems.

- The long form ``--configuration`` (``-c``) command line option for
  supervisord was broken.  Reported by Mike Orr.  (Mike Naberezny)

- New log level: BLAT (blather).  We log all supervisor-internal-related
  debugging info here.  Thanks to Mike Orr for the suggestion.

- We now allow supervisor to listen on both a UNIX domain socket and an
  inet socket instead of making them mutually exclusive.  As a result, the
  options "http_port", "http_username", "http_password", "sockchmod" and
  "sockchown" are no longer part of the ``[supervisord]`` section
  configuration.  These have been supplanted by two other sections:
  ``[unix_http_server]`` and ``[inet_http_server]``.  You'll need to insert
  one or the other (depending on whether you want to listen on a UNIX
  domain socket or a TCP socket, respectively) or both into your
  supervisord.conf file.  These sections have their own options (where
  applicable) for port, username, password, chmod, and chown.  See
  README.txt for more information about these sections.

- All supervisord command-line options related to "http_port",
  "http_username", "http_password", "sockchmod" and "sockchown" have been
  removed (see above point for rationale).

- The option that *used* to be ``sockchown`` within the ``[supervisord]``
  section (and is now named ``chown`` within the ``[unix_http_server]``
  section) used to accept a dot-separated user.group value.  The separator
  now must be a colon ":", e.g. "user:group".  Unices allow for dots in
  usernames, so this change is a bugfix.  Thanks to Ian Bicking for the bug
  report.
- If a '-c' option is not specified on the command line, both supervisord
  and supervisorctl will search for one in the paths
  ``./supervisord.conf``, ``./etc/supervisord.conf`` (relative to the
  current working dir when supervisord or supervisorctl is invoked) or in
  ``/etc/supervisord.conf`` (the old default path).  These paths are
  searched in order, and supervisord and supervisorctl will use the first
  one found.  If none are found, supervisor will fail to start.

- The Python string expression ``%(here)s`` (referring to the directory in
  which the configuration file was found) can be used within the following
  sections/options within the config file::

    unix_http_server:file
    supervisor:directory
    supervisor:logfile
    supervisor:pidfile
    supervisor:childlogdir
    supervisor:environment
    program:environment
    program:stdout_logfile
    program:stderr_logfile
    program:process_name
    program:command

- The ``--environment`` aka ``-b`` option was removed from the list of
  available command-line switches to supervisord (use
  "A=1 B=2 bin/supervisord" instead).

- If the socket filename (the tail-end of the unix:// URL) was longer than
  64 characters, supervisorctl would fail with an encoding error at
  startup.

- The ``identifier`` command-line argument was not functional.

- Fixed http://www.plope.com/software/collector/215 (bad error message in
  supervisorctl when program command not found on PATH).

- Some child processes may not have been shut down properly at supervisor
  shutdown time.

- Move to a ZPL-derived (but not ZPL) license available from
  http://www.repoze.org/LICENSE.txt; it's slightly less restrictive than
  the ZPL (no servicemark clause).

- Spurious errors related to unclosed files ("bad file descriptor",
  typically) were evident at supervisord "reload" time (when using the
  "reload" command from supervisorctl).

- We no longer bundle ez_setup to bootstrap setuptools installation.
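The ``%(here)s`` expansion above uses ordinary Python %-style string interpolation; a minimal sketch of how such a value expands (the helper name and config value are hypothetical):

```python
def expand(value, here):
    """Expand %(here)s in a config value to the config file's directory."""
    return value % {'here': here}

expand('%(here)s/supervisord.log', '/path/to/etc')
# '/path/to/etc/supervisord.log'
```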
3.0a2 (2007-08-24)
------------------

- Fixed the README.txt example for defining the supervisor RPC interface in
  the configuration file.  Thanks to Drew Perttula.

- Fixed a bug where process communication events would not have the proper
  payload if the payload data was very short.

- When supervisord attempted to kill a process with SIGKILL after the
  process was not killed within "stopwaitsecs" using a "normal" kill
  signal, supervisord would crash with an improper AssertionError.  Thanks
  to Calvin Hendryx-Parker.

- On Linux, Supervisor would consume too much CPU in an effective
  "busywait" between the time a subprocess exited and the time at which
  supervisor was notified of its exit status.  Thanks to Drew Perttula.

- RPC interface behavior change: if the RPC method "sendProcessStdin" is
  called against a process that has closed its stdin file descriptor (e.g.
  it has done the equivalent of "sys.stdin.close(); os.close(0)"), we
  return a NO_FILE fault instead of accepting the data.

- Changed the semantics of the process configuration ``autorestart``
  parameter with respect to processes which move between the RUNNING and
  EXITED states.  ``autorestart`` was previously a boolean.  Now it's a
  trinary, accepting one of ``false``, ``unexpected``, or ``true``.  If
  it's ``false``, a process will never be automatically restarted from the
  EXITED state.  If it's ``unexpected``, a process that enters the EXITED
  state will be automatically restarted if it exited with an exit code that
  was not named in the process config's ``exitcodes`` list.  If it's
  ``true``, a process that enters the EXITED state will be automatically
  restarted unconditionally.  The default is now ``unexpected`` (it was
  previously ``true``).  The readdition of this feature is a reversion of
  the behavior change noted in the changelog for 3.0a1, which asserted that
  we never cared about the process' exit status when determining whether to
  restart it or not.
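The trinary ``autorestart`` semantics described above can be sketched as a small decision function; this illustrates the documented behavior, and is not Supervisor's code:

```python
def should_autorestart(autorestart, exitcode, exitcodes):
    """Decide whether a process entering EXITED should be restarted.

    autorestart: 'false', 'unexpected', or 'true'
    exitcodes:   the process config's expected exit code list
    """
    if autorestart == 'true':
        return True
    if autorestart == 'unexpected':
        # Restart only when the exit code was not an expected one.
        return exitcode not in exitcodes
    return False

should_autorestart('unexpected', 1, [0, 2])  # True (1 is unexpected)
should_autorestart('unexpected', 2, [0, 2])  # False (2 is expected)
```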
- setup.py develop (and presumably setup.py install) would fail under
  Python 2.3.3, because setuptools attempted to import ``splituser`` from
  urllib2, and it didn't exist.

- It's now possible to use ``setup.py install`` and ``setup.py develop`` on
  systems which do not have a C compiler if you set the environment
  variable "NO_MELD3_EXTENSION_MODULES=1" in the shell in which you invoke
  these commands (versions of meld3 > 0.6.1 respect this envvar and do not
  try to compile optional C extensions when it's set).

- The test suite would fail on Python versions <= 2.3.3 because the
  "assertTrue" and "assertFalse" methods of unittest.TestCase didn't exist
  in those versions.

- The ``supervisorctl`` and ``supervisord`` wrapper scripts were disused in
  favor of using setuptools' ``console_scripts`` entry point settings.

- Documentation files and the sample configuration file are put into the
  generated supervisor egg's ``doc`` directory.

- Using the web interface would cause fairly dramatic memory leakage.  We
  now require a version of meld3 that does not appear to leak memory from
  its C extensions (0.6.3).

3.0a1 (2007-08-16)
------------------

- The default config file comment documented 10 secs as the default for the
  ``startsecs`` value in the process config; in reality it was 1 sec.
  Thanks to Christoph Zwerschke.

- Make note of subprocess environment behavior in README.txt.  Thanks to
  Christoph Zwerschke.

- New "strip_ansi" config file option attempts to strip ANSI escape
  sequences from logs for smaller/more readable logs (submitted by Mike
  Naberezny).

- The XML-RPC method supervisor.getVersion() has been renamed for clarity
  to supervisor.getAPIVersion().  The old name is aliased for compatibility
  but is deprecated and will be removed in a future version (Mike
  Naberezny).

- Improved web interface styling (Mike Naberezny, Derek DeVries).

- The XML-RPC method supervisor.startProcess() now checks that the file
  exists and is executable (Mike Naberezny).
- Two environment variables, "SUPERVISOR_PROCESS_NAME" and
  "SUPERVISOR_PROCESS_GROUP", are set in the environment of child
  processes, representing the name of the process and group in
  supervisor's configuration.

- Process state map change: a process may now move directly from the
  STARTING state to the STOPPING state (as a result of a stop request).

- Behavior change: if ``autorestart`` is true, even if a process exits
  with an "expected" exit code, it will still be restarted.  In the
  immediately prior release of supervisor, this was true anyway, and no
  one complained, so we're going to consider that the "officially
  correct" behavior from now on.

- Supervisor now logs subprocess stdout and stderr independently.  The
  old program config keys "logfile", "logfile_backups" and
  "logfile_maxbytes" are superseded by "stdout_logfile",
  "stdout_logfile_backups", and "stdout_logfile_maxbytes".  Added keys
  include "stderr_logfile", "stderr_logfile_backups", and
  "stderr_logfile_maxbytes".  An additional "redirect_stderr" key is
  used to cause program stderr output to be sent to its stdout channel.
  The keys "log_stderr" and "log_stdout" have been removed.

- ``[program:x]`` config file sections now represent "homogeneous
  process groups" instead of single processes.  A "numprocs" key in the
  section represents the number of processes that are in the group.  A
  "process_name" key in the section allows composition of each
  process' name within the homogeneous group.

- A new kind of config file section, ``[group:x]``, now exists,
  allowing users to group heterogeneous processes together into a
  process group that can be controlled as a unit from a client.

- Supervisord now emits "events" at certain points in its normal
  operation.  These events include supervisor state change events,
  process state change events, and "process communication events".

- A new kind of config file section, ``[eventlistener:x]``, now exists.
  Each section represents an "event listener pool", which is a special
  kind of homogeneous process group.  Each process in the pool is meant
  to receive supervisor "events" via its stdin and perform some
  notification (e.g. send a mail, log, make an http request, etc.)

- Supervisord can now capture data between special tokens in subprocess
  stdout/stderr output and emit a "process communications event" as a
  result.

- Supervisor's XML-RPC interface may be extended arbitrarily by
  programmers.  Additional top-level namespace XML-RPC interfaces can
  be added using the ``[rpcinterface:foo]`` declaration in the
  configuration file.

- New ``supervisor``-namespace XML-RPC methods have been added:
  "getAPIVersion" (returns the XML-RPC API version; the older
  "getVersion" is now deprecated), "startProcessGroup" (starts all
  processes in a supervisor process group), "stopProcessGroup" (stops
  all processes in a supervisor process group), and "sendProcessStdin"
  (sends data to a process' stdin file descriptor).

- ``supervisor``-namespace XML-RPC methods which previously accepted
  only a process name as "name" (startProcess, stopProcess,
  getProcessInfo, readProcessLog, tailProcessLog, and clearProcessLog)
  now accept a "name" which may contain both the process name and the
  process group name in the form ``groupname:procname``.  For backwards
  compatibility purposes, "simple" names will also be accepted but will
  be expanded internally (e.g. if "foo" is sent as a name, it will be
  expanded to "foo:foo", representing the foo process within the foo
  process group).

- 2.X versions of supervisorctl will work against supervisor 3.0
  servers in a degraded fashion, but 3.X versions of supervisorctl will
  not work at all against supervisor 2.X servers.

2.2b1 (2007-03-31)
------------------

- Individual program configuration sections can now specify an
  environment.

- Added a 'version' command to supervisorctl.  This returns the version
  of the supervisor2 package which the remote supervisord process is
  using.
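A sketch tying together the new 3.0a1 config sections described above (all
program names, commands, paths, and the event type are hypothetical):

```ini
; homogeneous process group: four workers named worker_00 .. worker_03
[program:worker]
command=/usr/local/bin/worker
numprocs=4
process_name=%(program_name)s_%(process_num)02d
stdout_logfile=/var/log/worker.out.log   ; replaces the old "logfile" key
stderr_logfile=/var/log/worker.err.log
redirect_stderr=false

; heterogeneous group controllable as a unit from a client
[group:mygroup]
programs=worker

; event listener pool; each process receives events via its stdin
[eventlistener:mylistener]
command=/usr/local/bin/mylistener
events=PROCESS_STATE
```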
2.1 (2007-03-17)
----------------

- When supervisord was invoked more than once, and its configuration
  was set up to use a UNIX domain socket as the HTTP server, the socket
  file would be erased in error.  The symptom of this was that a
  subsequent invocation of supervisorctl could not find the socket
  file, so the process could not be controlled (it and all of its
  subprocesses would need to be killed by hand).

- Close subprocess file descriptors properly when a subprocess exits or
  otherwise dies.  This should result in fewer "too many open files to
  spawn foo" messages when supervisor is left up for long periods of
  time.

- When a process was not killable with a "normal" signal at shutdown
  time, too many "INFO: waiting for x to die" messages would be sent to
  the log until we ended up killing the process with a SIGKILL.  Now a
  maximum of one every three seconds is sent up until SIGKILL time.
  Thanks to Ian Bicking.

- Add an assertion: we never want to try to marshal None to XML-RPC
  callers.  Issue 223 in the collector from vgatto indicates that
  somehow a supervisor XML-RPC method is returning None (which should
  never happen), but I cannot identify how.  Maybe the assertion will
  give us more clues if it happens again.

- Supervisor would crash when run under Python 2.5 because the
  xmlrpclib.Transport class in Python 2.5 changed in a
  backward-incompatible way.  Thanks to Eric Westra for the bug report
  and a fix.

- Tests now pass under Python 2.5.

- Better supervisorctl reporting on stop requests that have a FAILED
  status.

- Removed duplicated code (readLog/readMainLog), thanks to Mike
  Naberezny.

- Added tailProcessLog command to the XML-RPC API.  It provides a more
  efficient way to tail logs than readProcessLog().  Use
  readProcessLog() to read chunks and tailProcessLog() to tail.
  (thanks to Mike Naberezny).
2.1b1 (2006-08-30)
------------------

- "supervisord -h" and "supervisorctl -h" did not work (traceback
  instead of showing the help view).  Thanks to Damjan from Macedonia
  for the bug report.

- Processes which started successfully after failing to start initially
  are no longer reported in BACKOFF state once they are started
  successfully.  Thanks to Damjan from Macedonia for the bug report.

- Add new 'maintail' command to the supervisorctl shell, which allows
  you to tail the 'main' supervisor log.  This uses a new readMainLog
  xmlrpc API.

- Various process-state-transition related changes, all internal.
  README.txt updated with new state transition map.

- startProcess and startAllProcesses xmlrpc APIs changed: instead of
  accepting a timeout integer, these accept a wait boolean (the timeout
  is implied by the process' "startsecs" configuration).  If wait is
  False, do not wait for startsecs.

Known issues:

- Code does not match the state transition map.  Processes which are
  configured as autorestarting and which start "successfully" but
  subsequently die after 'startsecs' go through the transitions
  RUNNING -> BACKOFF -> STARTING instead of the correct transitions
  RUNNING -> EXITED -> STARTING.  This has no real negative effect, but
  should be fixed for correctness.

2.0 (2006-08-30)
----------------

- pidfile written in daemon mode had an incorrect pid.

- supervisorctl: tail (non -f) did not pass through proper error
  messages when supplied by the server.

- Log the signal name used to kill processes at debug level.

- supervisorctl "tail -f" didn't work with supervisorctl sections
  configured with an absolute unix:// URL.

- New "environment" config file option allows you to add environment
  variable values to the supervisord environment from the config file.

2.0b1 (2006-07-12)
------------------

- Fundamental rewrite based on 1.0.7: use distutils (only) for
  installation, use ConfigParser rather than ZConfig, use HTTP for the
  wire protocol, web interface, fewer lies in supervisorctl.
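The ``environment`` option noted in the 2.0 entry above can be sketched as
follows (the variable names and values are hypothetical, and the syntax
shown is that of later supervisor releases):

```ini
[supervisord]
; add variables to the environment inherited by supervisord's children
environment=ANIMAL="cat",TREE="oak"
```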
1.0.7 (2006-07-11)
------------------

- Don't log a waitpid error if the error value is "no children".

- Use select() against child file descriptor pipes and bump up the
  select timeout appropriately.

1.0.6 (2005-11-20)
------------------

- Various tweaks to make supervisor run more effectively on Mac OS X
  (including fixing tests to run there, no more "error reading from fd
  XXX" in logtail output, and reduced disk/CPU usage as a result of not
  writing to the log file unnecessarily on Mac OS).

1.0.5 (2004-07-29)
------------------

- Short description: in previous releases, managed programs that
  created voluminous stdout/stderr output could run more slowly than
  usual when invoked under supervisor; now they do not.

  Long description: supervisord manages child output by polling pipes
  related to child process stderr/stdout.  Polling operations are
  performed in the mainloop, which also performs a 'select' on the
  filedescriptor(s) related to client/server operations.  In prior
  releases, the select timeout was set to 2 seconds.  This release
  changes the timeout to 1/10th of a second in order to keep up with
  child stdout/stderr output.

  Gory description: on Linux, at least, there is a pipe buffer size
  fixed by the kernel of somewhere between 512 and 4096 bytes; when a
  child process writes enough data to fill the pipe buffer, it will
  block on further stdout/stderr output until supervisord comes along
  and clears out the buffer by reading bytes from the pipe within the
  mainloop.  We now clear these buffers much more quickly than we did
  before due to the increased frequency of buffer reads in the
  mainloop; the timeout value of 1/10th of a second seems to be fast
  enough to clear out the buffers of child process pipes when managing
  programs on even a very fast system while still enabling the
  supervisord process to be in a sleeping state for most of the time.
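The pipe-draining approach described in the 1.0.5 entry can be sketched in
a few lines (an illustrative sketch only, not supervisord's actual
mainloop):

```python
import os
import select

# A short select() timeout lets the parent drain a child's output pipe
# before the fixed-size kernel pipe buffer fills and blocks the writer.
r, w = os.pipe()
os.write(w, b"chunk of child output\n")  # stands in for child stdout

TIMEOUT = 0.1  # seconds; 1.0.5 lowered the mainloop timeout from 2s to 0.1s
ready, _, _ = select.select([r], [], [], TIMEOUT)

data = b""
if ready:
    data = os.read(r, 4096)  # drain the pipe buffer

os.close(r)
os.close(w)
```

Because data is already waiting in the pipe, select() returns immediately
here rather than sleeping for the full timeout.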
1.0.4 or "Alpha 4" (2004-06-30)
-------------------------------

- Forgot to update the version tag in configure.py, so the supervisor
  version in a3 is listed as "1.0.1", where it should be "1.0.3".  a4
  will be listed as "1.0.4".

- Instead of preventing a process from starting if setuid() can't be
  called (if supervisord is run as nonroot, for example), just log the
  error and proceed.

1.0.3 or "Alpha 3" (2004-05-26)
-------------------------------

- The daemon could chew up a lot of CPU time trying to select() on real
  files (I didn't know select() failed to block when a file is at EOF).
  Fixed by polling instead of using select().

- Processes could "leak" and become zombies due to a bug in reaping
  dead children.

- supervisord now defaults to daemonizing itself.

- The 'daemon' config file option and -d/--daemon command-line option
  were removed from supervisord's acceptable options.  In place of
  these options, we now have a 'nodaemon' config file option and a
  -n/--nodaemon command-line option.

- logtail now works.

- pidproxy changed slightly to reap children synchronously.

- In the alpha2 changelist, supervisord was reported to have a "noauth"
  command-line option.  This was not accurate.  The way to turn off
  auth on the server is to omit the "passwdfile" config file option
  from the server config file.  The client, however, does indeed still
  have a noauth option, which prevents it from ever attempting to send
  authentication credentials to servers.

- ZPL license added for ZConfig to LICENSE.txt.

1.0.2 or "Alpha 2" (Unreleased)
-------------------------------

- supervisorctl and supervisord no longer need to run on the same
  machine due to the addition of internet socket support.

- supervisorctl and supervisord no longer share a common configuration
  file format.

- supervisorctl now uses a persistent connection to supervisord (as
  opposed to creating a fresh connection for each command).

- SRP (Secure Remote Password) authentication is now a supported form
  of access control for supervisord.
  In supervisorctl interactive mode, by default, users will be asked
  for credentials when attempting to talk to a supervisord that
  requires SRP authentication.

- supervisord has a new command-line option and configuration file
  option for specifying "noauth" mode, which signifies that it should
  not require authentication from clients.

- supervisorctl has a new command-line option and configuration option
  for specifying "noauth" mode, which signifies that it should never
  attempt to send authentication info to servers.

- supervisorctl has new commands: "open" (opens a connection to a new
  supervisord) and "close" (closes the current connection).

- supervisorctl's "logtail" command now retrieves log data from
  supervisord's log file remotely (as opposed to reading it directly
  from a common filesystem).  It also no longer emulates "tail -f"; it
  just returns lines of the server's log file.

- The supervisord/supervisorctl wire protocol now has protocol
  versioning and is documented in "protocol.txt".

- The "configfile" command-line override was changed from -C to -c.

- The top-level section name for the supervisor schema was changed to
  'supervisord' from 'supervisor'.

- Added the 'pidproxy' shim program.

Known issues in alpha 2:

- If supervisorctl loses a connection to a supervisord, or if the
  remote supervisord crashes or shuts down unexpectedly, it is possible
  that any supervisorctl talking to it will "hang" indefinitely waiting
  for data.  Pressing Ctrl-C will allow you to restart supervisorctl.

- Only one supervisorctl process may talk to a given supervisord
  process at a time.  If two supervisorctl processes attempt to talk to
  the same supervisord process, one will "win" and the other will be
  disconnected.

- Sometimes if a pidproxy is used to start a program, the pidproxy
  program itself will "leak".

1.0.0 or "Alpha 1" (Unreleased)
-------------------------------

- Initial release.
supervisor-4.2.5/COPYRIGHT.txt
------------------------------

Supervisor is Copyright (c) 2006-2015 Agendaless Consulting and
Contributors.  (http://www.agendaless.com), All Rights Reserved

medusa was (is?) Copyright (c) Sam Rushing.

http_client.py code Copyright (c) by Daniel Krech, http://eikeon.com/.

supervisor-4.2.5/LICENSES.txt
-----------------------------

Supervisor is licensed under the following license:

A copyright notice accompanies this license document that identifies
the copyright holders.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

1.  Redistributions in source code must retain the accompanying
    copyright notice, this list of conditions, and the following
    disclaimer.

2.  Redistributions in binary form must reproduce the accompanying
    copyright notice, this list of conditions, and the following
    disclaimer in the documentation and/or other materials provided
    with the distribution.

3.  Names of the copyright holders must not be used to endorse or
    promote products derived from this software without prior written
    permission from the copyright holders.

4.  If any files are modified, you must cause the modified files to
    carry prominent notices stating that you changed the files and the
    date of any change.

Disclaimer

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

http_client.py code is based on code by Daniel Krech, which was
released under this license:

LICENSE AGREEMENT FOR RDFLIB 0.9.0 THROUGH 2.3.1
------------------------------------------------

Copyright (c) 2002-2005, Daniel Krech, http://eikeon.com/
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimer in the
  documentation and/or other materials provided with the distribution.

* Neither the name of Daniel Krech nor the names of its contributors
  may be used to endorse or promote products derived from this
  software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Medusa, the asynchronous communications framework upon which
supervisor's server and client code is based, was created by Sam
Rushing:

Medusa was once distributed under a 'free for non-commercial use'
license, but in May of 2000 Sam Rushing changed the license to be
identical to the standard Python license at the time.  The standard
Python license has always applied to the core components of Medusa,
this change just frees up the rest of the system, including the http
server, ftp server, utilities, etc.  Medusa is therefore under the
following license:

==============================
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Sam Rushing not be used
in advertising or publicity pertaining to distribution of the software
without specific, written prior permission.

SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR
CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF
USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
==============================

supervisor-4.2.5/MANIFEST.in
----------------------------

include CHANGES.rst
include COPYRIGHT.txt
include LICENSES.txt
include README.rst
include tox.ini
include supervisor/version.txt
include supervisor/scripts/*.py
include supervisor/skel/*.conf
recursive-include supervisor/tests/fixtures *.conf *.py
recursive-include supervisor/ui *.html *.css *.png *.gif
include docs/Makefile
recursive-include docs *.py *.rst *.css *.gif *.png
recursive-exclude docs/.build *

supervisor-4.2.5/PKG-INFO
-------------------------

Metadata-Version: 2.1
Name: supervisor
Version: 4.2.5
Summary: A system for controlling process state under UNIX
Home-page: http://supervisord.org/
Author: Chris McDonough
Author-email: chrism@plope.com
License: BSD-derived (http://www.repoze.org/LICENSE.txt)
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: No Input/Output (Daemon)
Classifier: Intended Audience :: System Administrators
Classifier: Natural Language :: English
Classifier: Operating System :: POSIX
Classifier: Topic :: System :: Boot
Classifier: Topic :: System :: Monitoring
Classifier: Topic :: System :: Systems Administration
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Provides-Extra: testing
License-File: LICENSES.txt

Supervisor
==========

Supervisor is a client/server system that allows its users to control
a number of processes on UNIX-like operating systems.

Supported Platforms
-------------------

Supervisor has been tested and is known to run on Linux (Ubuntu), Mac
OS X (10.4, 10.5, 10.6), Solaris (10 for Intel), and FreeBSD 6.1.  It
will likely work fine on most UNIX systems.

Supervisor will not run at all under any version of Windows.

Supervisor is intended to work on Python 3 version 3.4 or later and on
Python 2 version 2.7.

Documentation
-------------

You can view the current Supervisor documentation online `in HTML
format `_.  This is where you should go for detailed installation and
configuration documentation.

Reporting Bugs and Viewing the Source Repository
------------------------------------------------

Please report bugs in the `GitHub issue tracker `_.

You can view the source repository for supervisor via
`https://github.com/Supervisor/supervisor `_.

Contributing
------------

We'll review contributions from the community in `pull requests `_ on
GitHub.

4.2.1 (2020-08-20)
------------------

- Fixed a bug on Python 3 where a network error could cause
  ``supervisord`` to crash with the error ``:can't concat str to
  bytes``.  Patch by Vinay Sajip.

- Fixed a bug where a test would fail on systems with glibc 2.31
  because the default value of SOMAXCONN changed.

4.2.0 (2020-04-30)
------------------

- When ``supervisord`` is run in the foreground, a new ``--silent``
  option suppresses the main log from being echoed to ``stdout`` as it
  normally would.  Patch by Trevor Foster.

- Parsing ``command=`` now supports a new expansion, ``%(numprocs)d``,
  that expands to the value of ``numprocs=`` in the same section.
  Patch by Santjago Corkez.

- Web UI buttons no longer use background images.  Patch by Dmytro
  Karpovych.

- The Web UI now has a link to view ``tail -f stderr`` for a process
  in addition to the existing ``tail -f stdout`` link.  Based on a
  patch by OuroborosCoding.

- The HTTP server will now send an ``X-Accel-Buffering: no`` header in
  logtail responses to fix Nginx proxy buffering.  Patch by Weizhao
  Li.

- When ``supervisord`` reaps an unknown PID, it will now log a
  description of the ``waitpid`` status.  Patch by Andrey Zelenchuk.
- Fixed a bug introduced in 4.0.3 where ``supervisorctl tail -f foo |
  grep bar`` would fail with the error ``NoneType object has no
  attribute 'lower'``.  This only occurred on Python 2.7 and only when
  piped.  Patch by Slawa Pidgorny.

4.1.0 (2019-10-19)
------------------

- Fixed a bug on Python 3 only where logging to syslog did not work
  and would log the exception ``TypeError: a bytes-like object is
  required, not 'str'`` to the main ``supervisord`` log file.  Patch
  by Vinay Sajip and Josh Staley.

- Fixed a Python 3.8 compatibility issue caused by the removal of
  ``cgi.escape()``.  Patch by Mattia Procopio.

- The ``meld3`` package is no longer a dependency.  A version of
  ``meld3`` is now included within the ``supervisor`` package itself.

4.0.4 (2019-07-15)
------------------

- Fixed a bug where ``supervisorctl tail stdout`` would actually tail
  ``stderr``.  Note that ``tail `` without the explicit ``stdout``
  correctly tailed ``stdout``.  The bug existed since 3.0a3 (released
  in 2007).  Patch by Arseny Hofman.

- Improved the warning message added in 4.0.3 so it is now emitted for
  both ``tail`` and ``tail -f``.  Patch by Vinay Sajip.

- CVE-2019-12105.  Documentation addition only, no code changes.  This
  CVE states that ``inet_http_server`` does not use authentication by
  default (`details `_).  Note that ``inet_http_server`` is not
  enabled by default, and is also not enabled in the example
  configuration output by ``echo_supervisord_conf``.  The behavior of
  the ``inet_http_server`` options has been correctly documented, and
  has not changed, since the feature was introduced in 2006.  A new
  `warning message `_ was added to the documentation.

4.0.3 (2019-05-22)
------------------

- Fixed an issue on Python 2 where running ``supervisorctl tail -f ``
  would fail with the message ``Cannot connect, error: `` where it may
  have worked on Supervisor 3.x.  The issue was introduced in
  Supervisor 4.0.0 due to new bytes/strings conversions necessary to
  add Python 3 support.
  For ``supervisorctl`` to correctly display logs with Unicode
  characters, the terminal encoding specified by the environment must
  support it.  If not, the ``UnicodeEncodeError`` may still occur on
  either Python 2 or 3.  A new warning message is now printed if a
  problematic terminal encoding is detected.  Patch by Vinay Sajip.

4.0.2 (2019-04-17)
------------------

- Fixed a bug where inline comments in the config file were not parsed
  correctly such that the comments were included as part of the
  values.  This only occurred on Python 2, and only where the
  environment had an extra ``configparser`` module installed.  The bug
  was introduced in Supervisor 4.0.0 because of Python 2/3
  compatibility code that expected a Python 2 environment to only have
  a ``ConfigParser`` module.

4.0.1 (2019-04-10)
------------------

- Fixed an issue on Python 3 where an ``OSError: [Errno 29] Illegal
  seek`` would occur if ``logfile`` in the ``[supervisord]`` section
  was set to a special file like ``/dev/stdout`` that was not
  seekable, even if ``logfile_maxbytes = 0`` was set to disable
  rotation.  The issue only affected the main log and not child logs.
  Patch by Martin Falatic.

4.0.0 (2019-04-05)
------------------

- Support for Python 3 has been added.  On Python 3, Supervisor
  requires Python 3.4 or later.  Many thanks to Vinay Sajip, Scott
  Maxwell, Palm Kevin, Tres Seaver, Marc Abramowitz, Son Nguyen, Shane
  Hathaway, Evan Andrews, and Ethan Hann who all made major
  contributions to the Python 3 porting effort.  Thanks also to all
  contributors who submitted issue reports and patches towards this
  effort.

- Support for Python 2.4, 2.5, and 2.6 has been dropped.  On Python 2,
  Supervisor now requires Python 2.7.

- The ``supervisor`` package is no longer a namespace package.

- The behavior of the config file expansion ``%(here)s`` has changed.
  In previous versions, a bug caused ``%(here)s`` to always expand to
  the directory of the root config file.
  Now, when ``%(here)s`` is used inside a file included via
  ``[include]``, it will expand to the directory of that file.  Thanks
  to Alex Eftimie and Zoltan Toth-Czifra for the patches.

- The default value for the config file setting ``exitcodes=``, the
  expected exit codes of a program, has changed.  In previous
  versions, it was ``0,2``.  This caused issues with Golang programs
  where ``panic()`` causes the exit code to be ``2``.  The default
  value for ``exitcodes`` is now ``0``.

- An undocumented feature where multiple ``supervisorctl`` commands
  could be combined on a single line separated by semicolons has been
  removed.

- ``supervisorctl`` will now set its exit code to a non-zero value
  when an error condition occurs.  Previous versions did not set the
  exit code for most error conditions so it was almost always 0.
  Patch by Luke Weber.

- Added new ``stdout_syslog`` and ``stderr_syslog`` options to the
  config file.  These are boolean options that indicate whether
  process output will be sent to syslog.  Supervisor can now log to
  both files and syslog at the same time.  Specifying a log filename
  of ``syslog`` is still supported but deprecated.  Patch by Jason R.
  Coombs.

3.4.0 (2019-04-05)
------------------

- FastCGI programs (``[fcgi-program:x]`` sections) can now be used in
  groups (``[group:x]``).  Patch by Florian Apolloner.

- Added a new ``socket_backlog`` option to the ``[fcgi-program:x]``
  section to set the listen(2) socket backlog.  Patch by Nenad
  Merdanovic.

- Fixed a bug where ``SupervisorTransport`` (the XML-RPC transport
  used with Unix domain sockets) did not close the connection when
  ``close()`` was called on it.  Patch by Jérome Perrin.

- Fixed a bug where ``supervisorctl start `` could hang for a long
  time if the system clock rolled back.  Patch by Joe LeVeque.

3.3.5 (2018-12-22)
------------------

- Fixed a race condition where ``supervisord`` would cancel a shutdown
  already in progress if it received ``SIGHUP``.
Now, ``supervisord`` will ignore ``SIGHUP`` if shutdown is already in progress. Patch by Livanh. - Fixed a bug where searching for a relative command ignored changes to ``PATH`` made in ``environment=``. Based on a patch by dongweiming. - ``childutils.ProcessCommunicationsProtocol`` now does an explicit ``flush()`` after writing to ``stdout``. - A more descriptive error message is now emitted if a name in the config file contains a disallowed character. Patch by Rick van Hattem. 3.3.4 (2018-02-15) ------------------ - Fixed a bug where rereading the configuration would not detect changes to eventlisteners. Patch by Michael Ihde. - Fixed a bug where the warning ``Supervisord is running as root and it is searching for its config file`` may have been incorrectly shown by ``supervisorctl`` if its executable name was changed. - Fixed a bug where ``supervisord`` would continue starting up if the ``[supervisord]`` section of the config file specified ``user=`` but ``setuid()`` to that user failed. It will now exit immediately if it cannot drop privileges. - Fixed a bug in the web interface where redirect URLs did not have a slash between the host and query string, which caused issues when proxying with Nginx. Patch by Luke Weber. - When ``supervisord`` successfully drops privileges during startup, it is now logged at the ``INFO`` level instead of ``CRIT``. - The HTTP server now returns a Content-Type header specifying UTF-8 encoding. This may fix display issues in some browsers. Patch by Katenkka. 3.3.3 (2017-07-24) ------------------ - Fixed CVE-2017-11610. A vulnerability was found where an authenticated client can send a malicious XML-RPC request to ``supervisord`` that will run arbitrary shell commands on the server. The commands will be run as the same user as ``supervisord``. Depending on how ``supervisord`` has been configured, this may be root. See https://github.com/Supervisor/supervisor/issues/964 for details. 
3.3.2 (2017-06-03) ------------------ - Fixed a bug introduced in 3.3.0 where the ``supervisorctl reload`` command would crash ``supervisord`` with the error ``OSError: [Errno 9] Bad file descriptor`` if the ``kqueue`` poller was used. Patch by Jared Suttles. - Fixed a bug introduced in 3.3.0 where ``supervisord`` could get stuck in a polling loop after the web interface was used, causing high CPU usage. Patch by Jared Suttles. - Fixed a bug where if ``supervisord`` attempted to start but aborted due to another running instance of ``supervisord`` with the same config, the pidfile of the running instance would be deleted. Patch by coldnight. - Fixed a bug where ``supervisorctl fg`` would swallow most XML-RPC faults. ``fg`` now prints the fault and exits. - Parsing the config file will now fail with an error message if a process or group name contains a forward slash character (``/``) since it would break the URLs used by the web interface. - ``supervisorctl reload`` now shows an error message if an argument is given. Patch by Joel Krauska. - ``supervisorctl`` commands ``avail``, ``reread``, and ``version`` now show an error message if an argument is given. 3.3.1 (2016-08-02) ------------------ - Fixed an issue where ``supervisord`` could hang when responding to HTTP requests (including ``supervisorctl`` commands) if the system time was set back after ``supervisord`` was started. - Zope ``trackrefs``, a debugging tool that was included in the ``tests`` directory but hadn't been used for years, has been removed. 3.3.0 (2016-05-14) ------------------ - ``supervisord`` will now use ``kqueue``, ``poll``, or ``select`` to monitor its file descriptors, in that order, depending on what is available on the system. 
Previous versions used ``select`` only and would crash with the error ``ValueError: filedescriptor out of range in select()`` when running a large number of subprocesses (whatever number resulted in enough file descriptors to exceed the fixed-size file descriptor table used by ``select``, which is typically 1024). Patch by Igor Sobreira. - ``/etc/supervisor/supervisord.conf`` has been added to the config file search paths. Many versions of Supervisor packaged for Debian and Ubuntu have included a patch that added this path. This difference was reported in a number of tickets as a source of confusion and upgrade difficulties, so the path has been added. Patch by Kelvin Wong. - Glob patterns in the ``[include]`` section now support the ``host_node_name`` expansion. Patch by Paul Lockaby. - Files included via the ``[include]`` section are now logged at the ``INFO`` level instead of ``WARN``. Patch by Daniel Hahler. 3.2.4 (2017-07-24) ------------------ - Backported from Supervisor 3.3.3: Fixed CVE-2017-11610. A vulnerability was found where an authenticated client can send a malicious XML-RPC request to ``supervisord`` that will run arbitrary shell commands on the server. The commands will be run as the same user as ``supervisord``. Depending on how ``supervisord`` has been configured, this may be root. See https://github.com/Supervisor/supervisor/issues/964 for details. 3.2.3 (2016-03-19) ------------------ - 400 Bad Request is now returned if an XML-RPC request is received with invalid body data. In previous versions, 500 Internal Server Error was returned. 3.2.2 (2016-03-04) ------------------ - Parsing the config file will now fail with an error message if an ``inet_http_server`` or ``unix_http_server`` section contains a ``username=`` but no ``password=``. In previous versions, ``supervisord`` would start with this invalid configuration but the HTTP server would always return a 500 Internal Server Error. Thanks to Chris Ergatides for reporting this issue. 
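The ``host_node_name`` glob expansion described in the 3.3.0 notes above might be used like this (the directory layout and filenames here are hypothetical, for illustration only):

```ini
[include]
; %(host_node_name)s expands to the value of Python's platform.node(),
; so a host named web01 would also pick up conf.d/web01/*.conf
files = conf.d/*.conf conf.d/%(host_node_name)s/*.conf
```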
3.2.1 (2016-02-06) ------------------ - Fixed a server exception ``OverflowError: int exceeds XML-RPC limits`` that made ``supervisorctl status`` unusable if the system time was far into the future. The XML-RPC API returns timestamps as XML-RPC integers, but timestamps will exceed the maximum value of an XML-RPC integer in January 2038 ("Year 2038 Problem"). For now, timestamps exceeding the maximum integer will be capped at the maximum to avoid the exception and retain compatibility with existing API clients. In a future version of the API, the return type for timestamps will be changed. 3.2.0 (2015-11-30) ------------------ - Files included via the ``[include]`` section are read in sorted order. In past versions, the order was undefined. Patch by Ionel Cristian Mărieș. - ``supervisorctl start`` and ``supervisorctl stop`` now complete more quickly when handling many processes. Thanks to Chris McDonough for this patch. See: https://github.com/Supervisor/supervisor/issues/131 - Environment variables are now expanded for all config file options. Patch by Dexter Tad-y. - Added ``signalProcess``, ``signalProcessGroup``, and ``signalAllProcesses`` XML-RPC methods to supervisor RPC interface. Thanks to Casey Callendrello, Marc Abramowitz, and Moriyoshi Koizumi for the patches. - Added ``signal`` command to supervisorctl. Thanks to Moriyoshi Koizumi and Marc Abramowitz for the patches. - Errors caused by bad values in a config file now show the config section to make debugging easier. Patch by Marc Abramowitz. - Setting ``redirect_stderr=true`` in an ``[eventlistener:x]`` section is now disallowed because any messages written to ``stderr`` would interfere with the eventlistener protocol on ``stdout``. - Fixed a bug where spawning a process could cause ``supervisord`` to crash if an ``IOError`` occurred while setting up logging. One way this could happen is if a log filename was accidentally set to a directory instead of a file. 
Thanks to Grzegorz Nosek for reporting this issue. - Fixed a bug introduced in 3.1.0 where ``supervisord`` could crash when attempting to display a resource limit error. - Fixed a bug where ``supervisord`` could crash with the message ``Assertion failed for processname: RUNNING not in STARTING`` if a time change caused the last start time of the process to be in the future. Thanks to Róbert Nagy, Sergey Leschenko, and samhair for the patches. - A warning is now logged if an eventlistener enters the UNKNOWN state, which usually indicates a bug in the eventlistener. Thanks to Steve Winton and detailyang for reporting issues that led to this change. - Errors from the web interface are now logged at the ``ERROR`` level. Previously, they were logged at the ``TRACE`` level and easily missed. Thanks to Thomas Güttler for reporting this issue. - Fixed ``DeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately.`` on setuptools >= 11.3. - If ``redirect_stderr=true`` and ``stderr_logfile=auto``, no stderr log file will be created. In previous versions, an empty stderr log file would be created. Thanks to Łukasz Kożuchowski for the initial patch. - Fixed an issue in Medusa that would cause ``supervisorctl tail -f`` to disconnect if many other ``supervisorctl`` commands were run in parallel. Patch by Stefan Friesel. 3.1.4 (2017-07-24) ------------------ - Backported from Supervisor 3.3.3: Fixed CVE-2017-11610. A vulnerability was found where an authenticated client can send a malicious XML-RPC request to ``supervisord`` that will run arbitrary shell commands on the server. The commands will be run as the same user as ``supervisord``. Depending on how ``supervisord`` has been configured, this may be root. See https://github.com/Supervisor/supervisor/issues/964 for details. 
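The timestamp capping described in the 3.2.1 notes above can be sketched in a few lines (``capped_timestamp`` is an illustrative helper, not Supervisor's actual code):

```python
# Sketch of the 3.2.1 workaround: XML-RPC integers are 32-bit signed,
# so timestamps past 2038-01-19 would overflow. Cap them instead.
XMLRPC_MAXINT = 2**31 - 1  # 2147483647

def capped_timestamp(ts):
    """Return ts as an int no larger than the XML-RPC integer maximum."""
    return min(int(ts), XMLRPC_MAXINT)

print(capped_timestamp(1700000000))   # fits in 32 bits: unchanged
print(capped_timestamp(99999999999))  # capped to 2147483647
```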
3.1.3 (2014-10-28)
------------------

- Fixed an XML-RPC bug where the ElementTree-based parser handled strings
  like ``<string>hello</string>`` but not untyped strings like
  ``<value>hello</value>``, which are also valid in the XML-RPC spec. This
  fixes compatibility with the Apache XML-RPC client for Java and possibly
  other clients.

3.1.2 (2014-09-07)
------------------

- Fixed a bug where ``tail group:*`` in ``supervisorctl`` would show a 500
  Internal Server Error rather than a BAD_NAME fault.

- Fixed a bug where the web interface would show a 500 Internal Server
  Error instead of an error message for some process start faults.

- Removed medusa files not used by Supervisor.

3.1.1 (2014-08-11)
------------------

- Fixed a bug where ``supervisorctl tail -f name`` output would stop if log
  rotation occurred while tailing.

- Prevented a crash when a greater number of file descriptors were
  attempted to be opened than permitted by the environment when starting
  a bunch of programs. Now, a spawn error is logged instead.

- Compute "channel delay" properly, fixing symptoms where a
  ``supervisorctl start`` command would hang for a very long time when a
  process (or many processes) were spewing to their stdout or stderr. See
  comments attached to https://github.com/Supervisor/supervisor/pull/263 .

- Added ``docs/conf.py``, ``docs/Makefile``, and ``supervisor/scripts/*.py``
  to the release package.

3.1.0 (2014-07-29)
------------------

- The output of the ``start``, ``stop``, ``restart``, and ``clear`` commands
  in ``supervisorctl`` has been changed to be consistent with the ``status``
  command. Previously, the ``status`` command would show a process like
  ``foo:foo_01`` but starting that process would show ``foo_01: started``
  (note the group prefix ``foo:`` was missing). Now, starting the process
  will show ``foo:foo_01: started``. Suggested by Chris Wood.

- The ``status`` command in ``supervisorctl`` now supports group name
  syntax: ``status group:*``.
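The spec rule behind the 3.1.3 parser fix above (a ``<value>`` with no inner type element defaults to string) can be demonstrated with Python's own stdlib XML-RPC parser, used here purely for illustration:

```python
# Per the XML-RPC spec, a <value> without a type element defaults to
# string, so both payloads below must decode to the same tuple.
from xmlrpc.client import loads

typed = ("<methodResponse><params><param>"
         "<value><string>hello</string></value>"
         "</param></params></methodResponse>")
untyped = ("<methodResponse><params><param>"
           "<value>hello</value>"
           "</param></params></methodResponse>")

print(loads(typed)[0])    # ('hello',)
print(loads(untyped)[0])  # ('hello',)
```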
- The process column in the table output by the ``status`` command in ``supervisorctl`` now expands to fit the widest name. - The ``update`` command in ``supervisorctl`` now accepts optional group names. When group names are specified, only those groups will be updated. Patch by Gary M. Josack. - Tab completion in ``supervisorctl`` has been improved and now works for more cases. Thanks to Mathieu Longtin and Marc Abramowitz for the patches. - Attempting to start or stop a process group in ``supervisorctl`` with the ``group:*`` syntax will now show the same error message as the ``process`` syntax if the name does not exist. Previously, it would show a Python exception. Patch by George Ang. - Added new ``PROCESS_GROUP_ADDED`` and ``PROCESS_GROUP_REMOVED`` events. These events are fired when process groups are added or removed from Supervisor's runtime configuration when using the ``add`` and ``remove`` commands in ``supervisorctl``. Patch by Brent Tubbs. - Stopping a process in the backoff state now changes it to the stopped state. Previously, an attempt to stop a process in backoff would be ignored. Patch by Pascal Varet. - The ``directory`` option is now expanded separately for each process in a homogeneous process group. This allows each process to have its own working directory. Patch by Perttu Ranta-aho. - Removed ``setuptools`` from the ``requires`` list in ``setup.py`` because it caused installation issues on some systems. - Fixed a bug in Medusa where the HTTP Basic authorizer would cause an exception if the password contained a colon. Thanks to Thomas Güttler for reporting this issue. - Fixed an XML-RPC bug where calling supervisor.clearProcessLogs() with a name like ``group:*`` would cause a 500 Internal Server Error rather than returning a BAD_NAME fault. - Fixed a hang that could occur in ``supervisord`` if log rotation is used and an outside program deletes an active log file. Patch by Magnus Lycka. 
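The per-process ``directory`` expansion described in the 3.1.0 notes above might look like this config fragment (the program name and paths are made up for illustration):

```ini
[program:worker]
command=/usr/bin/worker
process_name=%(program_name)s_%(process_num)02d
numprocs=4
; since 3.1.0, expanded separately for each process: worker_00 gets
; /var/lib/worker/00, worker_01 gets /var/lib/worker/01, and so on
directory=/var/lib/worker/%(process_num)02d
```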
- A warning is now logged if a glob pattern in an ``[include]`` section does not match any files. Patch by Daniel Hahler. 3.0.1 (2017-07-24) ------------------ - Backported from Supervisor 3.3.3: Fixed CVE-2017-11610. A vulnerability was found where an authenticated client can send a malicious XML-RPC request to ``supervisord`` that will run arbitrary shell commands on the server. The commands will be run as the same user as ``supervisord``. Depending on how ``supervisord`` has been configured, this may be root. See https://github.com/Supervisor/supervisor/issues/964 for details. 3.0 (2013-07-30) ---------------- - Parsing the config file will now fail with an error message if a process or group name contains characters that are not compatible with the eventlistener protocol. - Fixed a bug where the ``tail -f`` command in ``supervisorctl`` would fail if the combined length of the username and password was over 56 characters. - Reading the config file now gives a separate error message when the config file exists but can't be read. Previously, any error reading the file would be reported as "could not find config file". Patch by Jens Rantil. - Fixed an XML-RPC bug where array elements after the first would be ignored when using the ElementTree-based XML parser. Patch by Zev Benjamin. - Fixed the usage message output by ``supervisorctl`` to show the correct default config file path. Patch by Alek Storm. 3.0b2 (2013-05-28) ------------------ - The behavior of the program option ``user`` has changed. In all previous versions, if ``supervisord`` failed to switch to the user, a warning would be sent to the stderr log but the child process would still be spawned. This means that a mistake in the config file could result in a child process being unintentionally spawned as root. Now, ``supervisord`` will not spawn the child unless it was able to successfully switch to the user. Thanks to Igor Partola for reporting this issue. 
- If a user specified in the config file does not exist on the system,
  ``supervisord`` will now print an error and refuse to start.

- Reverted a change to logging introduced in 3.0b1 that was intended to
  allow multiple processes to log to the same file with the rotating log
  handler. The implementation caused supervisord to crash during reload
  and to leak file handles. Also, since log rotation options are given on
  a per-program basis, impossible configurations could be created
  (conflicting rotation options for the same file). Given this and that
  supervisord now has syslog support, it was decided to remove this
  feature. A warning was added to the documentation that two processes
  may not log to the same file.

- Fixed a bug where parsing ``command=`` could cause supervisord to crash
  if shlex.split() fails, such as from bad quoting. Patch by Scott Wilson.

- It is now possible to use ``supervisorctl`` on a machine with no
  ``supervisord.conf`` file by supplying the connection information in
  command line options. Patch by Jens Rantil.

- Fixed a bug where supervisord would crash if the syslog handler was used
  and supervisord received SIGUSR2 (log reopen request).

- Fixed an XML-RPC bug where calling supervisor.getProcessInfo() with a
  bad name would cause a 500 Internal Server Error rather than returning
  a BAD_NAME fault.

- Added a favicon to the web interface. Patch by Caio Ariede.

- Fixed a test failure due to incorrect handling of daylight saving time
  in the childutils tests. Patch by Ildar Hizbulin.

- Fixed a number of pyflakes warnings for unused variables, imports, and
  dead code. Patch by Philippe Ombredanne.

3.0b1 (2012-09-10)
------------------

- Fixed a bug where parsing ``environment=`` did not verify that key/value
  pairs were correctly separated. Patch by Martijn Pieters.

- Fixed a bug in the HTTP server code that could cause unnecessary delays
  when sending large responses. Patch by Philip Zeyliger.
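The ``command=`` crash fixed in 3.0b2 above stemmed from ``shlex.split()`` raising on malformed quoting; a minimal sketch of catching it (``parse_command`` is a hypothetical name, and the error message text may vary by Python version):

```python
# shlex.split() raises ValueError on unbalanced quotes; supervisord now
# reports this as a config/spawn error instead of crashing.
import shlex

def parse_command(command):
    try:
        return shlex.split(command)
    except ValueError:  # e.g. "No closing quotation"
        return None

print(parse_command('/usr/bin/prog --greeting="hello world"'))
print(parse_command('/usr/bin/prog --greeting="unterminated'))  # None
```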
- When supervisord starts up as root, if the ``-c`` flag was not provided, a warning is now emitted to the console. Rationale: supervisord looks in the current working directory for a ``supervisord.conf`` file; someone might trick the root user into starting supervisord while cd'ed into a directory that has a rogue ``supervisord.conf``. - A warning was added to the documentation about the security implications of starting supervisord without the ``-c`` flag. - Add a boolean program option ``stopasgroup``, defaulting to false. When true, the flag causes supervisor to send the stop signal to the whole process group. This is useful for programs, such as Flask in debug mode, that do not propagate stop signals to their children, leaving them orphaned. - Python 2.3 is no longer supported. The last version that supported Python 2.3 is Supervisor 3.0a12. - Removed the unused "supervisor_rpc" entry point from setup.py. - Fixed a bug in the rotating log handler that would cause unexpected results when two processes were set to log to the same file. Patch by Whit Morriss. - Fixed a bug in config file reloading where each reload could leak memory because a list of warning messages would be appended but never cleared. Patch by Philip Zeyliger. - Added a new Syslog log handler. Thanks to Denis Bilenko, Nathan L. Smith, and Jason R. Coombs, who each contributed to the patch. - Put all change history into a single file (CHANGES.txt). 3.0a12 (2011-12-06) ------------------- - Released to replace a broken 3.0a11 package where non-Python files were not included in the package. 3.0a11 (2011-12-06) ------------------- - Added a new file, ``PLUGINS.rst``, with a listing of third-party plugins for Supervisor. Contributed by Jens Rantil. - The ``pid`` command in supervisorctl can now be used to retrieve the PIDs of child processes. See ``help pid``. Patch by Gregory Wisniewski. 
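The ``stopasgroup`` option introduced in the 3.0b1 notes above might be used like this (the program shown is a hypothetical Flask app in debug mode, per the rationale given in that entry):

```ini
[program:flaskapp]
command=/usr/bin/python /srv/app/app.py
; debug-mode Flask spawns a reloader child that does not receive
; forwarded stop signals; stopping the whole group avoids orphaning it
stopasgroup=true
```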
- Added a new ``host_node_name`` expansion that will be expanded to the
  value returned by Python's ``platform.node`` (see
  http://docs.python.org/library/platform.html#platform.node).
  Patch by Joseph Kondel.

- Fixed a bug in the web interface where pages over 64K would be truncated.
  Thanks to Drew Perttula and Timothy Jones for reporting this.

- Renamed ``README.txt`` to ``README.rst`` so GitHub renders the file as
  ReStructuredText.

- The XML-RPC server is now compatible with clients that do not send an
  empty ``<params>`` element when there are no parameters for the method
  call. Thanks to Johannes Becker for reporting this.

- Fixed ``supervisorctl --help`` output to show the correct program name.

- The behavior of the configuration options ``minfds`` and ``minprocs`` has
  changed. Previously, if a hard limit was less than ``minfds`` or
  ``minprocs``, supervisord would unconditionally abort with an error. Now,
  supervisord will attempt to raise the hard limit. This may succeed if
  supervisord is run as root, otherwise the error is printed as before.
  Patch by Benoit Sigoure.

- Added a boolean program option ``killasgroup``, defaulting to false. If
  true, when resorting to sending SIGKILL to stop or terminate the process,
  the signal is sent to its whole process group instead, to take care of
  possible children as well and not leave them behind. Patch by Samuele
  Pedroni.

- Environment variables may now be used in the configuration file for
  options that support string expansion. Patch by Aleksey Sivokon.

- Fixed a race condition where supervisord might not act on a signal sent
  to it. Thanks to Adar Dembo for reporting the issue and supplying the
  initial patch.

- Updated the output of ``echo_supervisord_conf`` to fix typos and improve
  comments. Thanks to Jens Rantil for noticing these.

- Fixed a possible 500 Server Error from the web interface. This was
  observed when using Supervisor on a domain socket behind Nginx, where
  Supervisor would raise an exception because REMOTE_ADDR was not set.
  Patch by David Bennett.
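The ``minfds``/``minprocs`` behavior change described in the 3.0a11 notes above can be sketched with the stdlib ``resource`` module (``ensure_minfds`` is an illustrative helper, not Supervisor's code; Unix only):

```python
# Try to raise the hard RLIMIT_NOFILE to minfds instead of aborting;
# raising the hard limit normally succeeds only when running as root.
import resource

def ensure_minfds(minfds):
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard >= minfds:
        if soft < minfds:
            resource.setrlimit(resource.RLIMIT_NOFILE, (minfds, hard))
        return True
    try:
        resource.setrlimit(resource.RLIMIT_NOFILE, (minfds, minfds))
        return True
    except (OSError, ValueError):
        return False  # unprivileged: report the error as before
```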
3.0a10 (2011-03-30) ------------------- - Fixed the stylesheet of the web interface so the footer line won't overlap a long process list. Thanks to Derek DeVries for the patch. - Allow rpc interface plugins to register new events types. - Bug fix for FCGI sockets not getting cleaned up when the ``reload`` command is issued from supervisorctl. Also, the default behavior has changed for FCGI sockets. They are now closed whenever the number of running processes in a group hits zero. Previously, the sockets were kept open unless a group-level stop command was issued. - Better error message when HTTP server cannot reverse-resolve a hostname to an IP address. Previous behavior: show a socket error. Current behavior: spit out a suggestion to stdout. - Environment variables set via ``environment=`` value within ``[supervisord]`` section had no effect. Thanks to Wyatt Baldwin for a patch. - Fix bug where stopping process would cause process output that happened after the stop request was issued to be lost. See https://github.com/Supervisor/supervisor/issues/11. - Moved 2.X change log entries into ``HISTORY.txt``. - Converted ``CHANGES.txt`` and ``README.txt`` into proper ReStructuredText and included them in the ``long_description`` in ``setup.py``. - Added a tox.ini to the package (run via ``tox`` in the package dir). Tests supervisor on multiple Python versions. 3.0a9 (2010-08-13) ------------------ - Use rich comparison methods rather than __cmp__ to sort process configs and process group configs to better straddle Python versions. (thanks to Jonathan Riboux for identifying the problem and supplying an initial patch). - Fixed test_supervisorctl.test_maintail_dashf test for Python 2.7. (thanks to Jonathan Riboux for identifying the problem and supplying an initial patch). - Fixed the way that supervisor.datatypes.url computes a "good" URL for compatibility with Python 2.7 and Python >= 2.6.5. 
URLs with bogus "schemes://" will now be accepted as a version-straddling compromise (previously they were rejected and supervisor would not start). (thanks to Jonathan Riboux for identifying the problem and supplying an initial patch).

- Added a ``-v`` / ``--version`` option to supervisord: print the
  supervisord version number to stdout and exit. (Roger Hoover)

- Import iterparse from xml.etree when available (e.g. Python 2.6).
  Patch by Sidnei da Silva.

- Fixed the url to the supervisor-users mailing list. Patch by Sidnei
  da Silva.

- When parsing "environment=" in the config file, changes introduced in
  3.0a8 prevented Supervisor from parsing some characters commonly found
  in paths unless quoting was used as in this example::

    environment=HOME='/home/auser'

  Supervisor once again allows the above line to be written as::

    environment=HOME=/home/auser

  Alphanumeric characters, "_", "/", ".", "+", "-", "(", ")", and ":" can
  all be used as a value without quoting. If any other characters are
  needed in the value, please quote it as in the first example above.
  Thanks to Paul Heideman for reporting this issue.

- Supervisor will now look for its config file in locations relative to
  the executable path, allowing it to be used more easily in virtual
  environments. If sys.argv[0] is ``/path/to/venv/bin/supervisorctl``,
  supervisor will now look for its config file in
  ``/path/to/venv/etc/supervisord.conf`` and
  ``/path/to/venv/supervisord.conf`` in addition to the other standard
  locations. Patch by Chris Rossi.

3.0a8 (2010-01-20)
------------------

- Don't cleanup file descriptors on first supervisord invocation: this is
  a lame workaround for Snow Leopard systems that use libdispatch and are
  receiving "Illegal instruction" messages at supervisord startup time.
  Restarting supervisord via "supervisorctl restart" may still cause a
  crash on these systems.

- Got rid of Medusa hashbang headers in various files to ease RPM
  packaging.

- Allow umask to be 000 (patch contributed by Rowan Nairn).
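The unquoted-character rule restored in the 3.0a9 notes above can be expressed as a regular expression (a sketch only; Supervisor's actual parser may differ in detail):

```python
# Characters allowed in an unquoted environment= value per the 3.0a9
# notes: alphanumerics, "_", "/", ".", "+", "-", "(", ")", and ":".
import re

UNQUOTED_OK = re.compile(r'^[A-Za-z0-9_/.+\-():]+$')

for value in ('/home/auser', 'needs quoting, has spaces'):
    ok = bool(UNQUOTED_OK.match(value))
    print(value, '->', 'ok unquoted' if ok else 'quote it')
```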
- Fixed a bug introduced in 3.0a7 where supervisorctl wouldn't properly
  ask for a username/password combination from a password-protected
  supervisord if the username/password values weren't filled in within
  the "[supervisorctl]" section. It now properly asks for a username and
  password.

- Fixed a bug introduced in 3.0a7 where setup.py would not detect the
  Python version correctly. Patch by Daniele Paolella.

- Fixed a bug introduced in 3.0a7 where parsing a string of key/value
  pairs failed on Python 2.3 due to use of regular expression syntax
  introduced in Python 2.4.

- Removed the test suite for the ``memmon`` console script, which was
  moved to the Superlance package in 3.0a7.

- Added release dates to CHANGES.txt.

- Reloading the config for an fcgi process group did not close the fcgi
  socket - now, the socket is closed whenever the group is stopped as a
  unit (including during config update). However, if you stop all the
  processes in a group individually, the socket will remain open to allow
  for graceful restarts of FCGI daemons. (Roger Hoover)

- Rereading the config did not pick up changes to the socket parameter in
  a fcgi-program section. (Roger Hoover)

- Made a more friendly exception message when a FCGI socket cannot be
  created. (Roger Hoover)

- Fixed a bug where the --serverurl option of supervisorctl would not
  accept a URL with a "unix" scheme. (Jason Kirtland)

- Running the tests now requires the "mock" package. This dependency has
  been added to "tests_require" in setup.py. (Roger Hoover)

- Added support for setting the ownership and permissions for an FCGI
  socket. This is done using new "socket_owner" and "socket_mode" options
  in an [fcgi-program:x] section. See the manual for details.
  (Roger Hoover)

- Fixed a bug where the FCGI socket reference count was not getting
  decremented on spawn error. (Roger Hoover)

- Fixed a Python 2.6 deprecation warning on use of the "sha" module.

- Updated ez_setup.py to one that knows about setuptools 0.6c11.
- Running "supervisorctl shutdown" no longer dumps a Python backtrace when it can't connect to supervisord on the expected socket. Thanks to Benjamin Smith for reporting this. - Removed use of collections.deque in our bundled version of asynchat because it broke compatibility with Python 2.3. - The sample configuration output by "echo_supervisord_conf" now correctly shows the default for "autorestart" as "unexpected". Thanks to William Dode for noticing it showed the wrong value. 3.0a7 (2009-05-24) ------------------ - We now bundle our own patched version of Medusa contributed by Jason Kirtland to allow Supervisor to run on Python 2.6. This was done because Python 2.6 introduced backwards incompatible changes to asyncore and asynchat in the stdlib. - The console script ``memmon``, introduced in Supervisor 3.0a4, has been moved to Superlance (http://pypi.python.org/pypi/superlance). The Superlance package contains other useful monitoring tools designed to run under Supervisor. - Supervisorctl now correctly interprets all of the error codes that can be returned when starting a process. Patch by Francesc Alted. - New ``stdout_events_enabled`` and ``stderr_events_enabled`` config options have been added to the ``[program:x]``, ``[fcgi-program:x]``, and ``[eventlistener:x]`` sections. These enable the emitting of new PROCESS_LOG events for a program. If unspecified, the default is False. If enabled for a subprocess, and data is received from the stdout or stderr of the subprocess while not in the special capture mode used by PROCESS_COMMUNICATION, an event will be emitted. Event listeners can subscribe to either PROCESS_LOG_STDOUT or PROCESS_LOG_STDERR individually, or PROCESS_LOG for both. - Values for subprocess environment variables specified with environment= in supervisord.conf can now be optionally quoted, allowing them to contain commas. Patch by Tim Godfrey. 
- Added a new event type, REMOTE_COMMUNICATION, that is emitted by a new
  RPC method, supervisor.sendRemoteCommEvent().

- Patch for bug #268 (KeyError on ``here`` expansion for
  stdout/stderr_logfile) from David E. Kindred.

- Add ``reread``, ``update``, and ``avail`` commands based on Anders
  Quist's ``online_config_reload.diff`` patch. This patch extends the
  "add" and "drop" commands with automagical behavior::

    In supervisorctl:

    supervisor> status
    bar          RUNNING    pid 14864, uptime 18:03:42
    baz          RUNNING    pid 23260, uptime 0:10:16
    foo          RUNNING    pid 14866, uptime 18:03:42
    gazonk       RUNNING    pid 23261, uptime 0:10:16

    supervisor> avail
    bar          in use     auto      999:999
    baz          in use     auto      999:999
    foo          in use     auto      999:999
    gazonk       in use     auto      999:999
    quux         avail      auto      999:999

    Now we add this to our conf:

    [group:zegroup]
    programs=baz,gazonk

    Then we reread conf:

    supervisor> reread
    baz: disappeared
    gazonk: disappeared
    quux: available
    zegroup: available

    supervisor> avail
    bar            in use     auto      999:999
    foo            in use     auto      999:999
    quux           avail      auto      999:999
    zegroup:baz    avail      auto      999:999
    zegroup:gazonk avail      auto      999:999

    supervisor> status
    bar          RUNNING    pid 14864, uptime 18:04:18
    baz          RUNNING    pid 23260, uptime 0:10:52
    foo          RUNNING    pid 14866, uptime 18:04:18
    gazonk       RUNNING    pid 23261, uptime 0:10:52

    The magic make-it-so command:

    supervisor> update
    baz: stopped
    baz: removed process group
    gazonk: stopped
    gazonk: removed process group
    zegroup: added process group
    quux: added process group

    supervisor> status
    bar            RUNNING    pid 14864, uptime 18:04:43
    foo            RUNNING    pid 14866, uptime 18:04:43
    quux           RUNNING    pid 23561, uptime 0:00:02
    zegroup:baz    RUNNING    pid 23559, uptime 0:00:02
    zegroup:gazonk RUNNING    pid 23560, uptime 0:00:02

    supervisor> avail
    bar            in use     auto      999:999
    foo            in use     auto      999:999
    quux           in use     auto      999:999
    zegroup:baz    in use     auto      999:999
    zegroup:gazonk in use     auto      999:999

- Fix bug with symptom "KeyError: 'process_name'" when using a logfile
  name including the documented ``process_name`` Python string expansion.
- Tab completions in the supervisorctl shell, and a foreground mode for
  Supervisor, implemented as a part of GSoC. The supervisorctl program now
  has a ``fg`` command, which makes it possible to supply inputs to a
  process, and see its output/error stream in real time.

- Process config reloading implemented by Anders Quist. The supervisorctl
  program now has the commands "add" and "drop". "add <name>" adds the
  process group implied by <name> in the config file. "drop <name>"
  removes the process group from the running configuration (it must
  already be stopped). This makes it possible to add processes to and
  remove processes from a running supervisord without restarting the
  supervisord process.

- Fixed a bug where opening the HTTP servers would fail silently for
  socket errors other than errno.EADDRINUSE.

- Thanks to Dave Peticolas, using "reload" against a supervisord that is
  running in the background no longer causes supervisord to crash.

- Configuration options for logfiles now accept mixed case reserved words
  (e.g. "AUTO" or "auto") for consistency with other options.

- childutils.eventdata was buggy, it could not deal with carriage returns
  in data. See http://www.plope.com/software/collector/257. Thanks to
  Ian Bicking.

- Per-process exitcodes= configuration now will not accept exit codes that
  are not 8-bit unsigned integers (supervisord will not start when one of
  the exit codes is outside the range of 0 - 255).

- Per-process ``directory`` value can now contain expandable values like
  ``%(here)s``. (See http://www.plope.com/software/collector/262).

- Accepted patch from Roger Hoover to allow for a new sort of process
  group: "fcgi-program". Adding one of these to your supervisord.conf
  allows you to control fastcgi programs. FastCGI programs cannot belong
  to heterogeneous groups. The configuration for FastCGI programs is the
  same as regular programs except an additional "socket" parameter.
  Substitution happens on the socket parameter with the ``here`` and
  ``program_name`` variables::

    [fcgi-program:fcgi_test]
    ;socket=tcp://localhost:8002
    socket=unix:///path/to/fcgi/socket

- Supervisorctl now supports a plugin model for supervisorctl commands.

- Added the ability to retrieve supervisord's own pid through
  supervisor.getPID() on the XML-RPC interface or a new "pid" command on
  supervisorctl.

3.0a6 (2008-04-07)
------------------

- The RotatingFileLogger had a race condition in its doRollover method
  whereby a file might not actually exist despite a call to
  os.path.exists on the line above a place where we try to remove it.  We
  catch the exception now and ignore the missing file.

3.0a5 (2008-03-13)
------------------

- Supervisorctl now supports persistent readline history.  To enable, add
  "history_file = <file_path>" to the ``[supervisorctl]`` section in your
  supervisord.conf file.

- Multiple commands may now be issued on one supervisorctl command line,
  e.g. "restart prog; tail -f prog".  Separate commands with a single
  semicolon; they will be executed in order as you would expect.

3.0a4 (2008-01-30)
------------------

- 3.0a3 broke Python 2.3 backwards compatibility.

- On Debian Sarge, one user reported that a call to options.mktempfile
  would fail with an "[Errno 9] Bad file descriptor" at supervisord
  startup time.  I was unable to reproduce this, but we found a
  workaround that seemed to work for him and it's included in this
  release.  See http://www.plope.com/software/collector/252 for more
  information.  Thanks to William Dode.

- The fault ``ALREADY_TERMINATED`` has been removed.  It was only raised
  by supervisor.sendProcessStdin().  That method now returns
  ``NOT_RUNNING`` for parity with the other methods.  (Mike Naberezny)

- The fault TIMED_OUT has been removed.  It was not used.
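The doRollover race described above (a file disappearing between an
``os.path.exists`` check and its removal) is a classic TOCTOU problem; the
fix is simply to attempt the removal and tolerate a missing file.  A
minimal standalone sketch of that pattern (``safe_remove`` is a
hypothetical helper name, not supervisor's actual code):

```python
import os

def safe_remove(path):
    # Attempt to remove a file that may have vanished since we last
    # checked for it; swallow the error instead of crashing, as the
    # RotatingFileLogger fix above does.
    try:
        os.remove(path)
        return True
    except OSError:
        return False
```

Checking ``os.path.exists`` first cannot close the window, because another
process (or another rollover) can delete the file between the check and the
``os.remove`` call.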
- Supervisor now depends on meld3 0.6.4, which does not compile its C
  extensions by default, so there is no more need to faff around with
  NO_MELD3_EXTENSION_MODULES during installation if you don't have a C
  compiler or the Python development libraries on your system.

- Instead of making the user root around for the sample.conf file,
  provide a convenience command "echo_supervisord_conf", which echoes the
  sample.conf to the terminal (and can be redirected to a file as
  appropriate).  This is a convenience for new users (especially those
  with no Python experience).

- Added ``numprocs_start`` config option to ``[program:x]`` and
  ``[eventlistener:x]`` sections.  This is an offset used to compute the
  first integer that ``numprocs`` will begin to start from.  Contributed
  by Antonio Beamud Montero.

- Added capability for ``[include]`` config section to config format.
  This section must contain a single key "files", which must name a
  space-separated list of file globs that will be included in
  supervisor's configuration.  Contributed by Ian Bicking.

- Invoking the ``reload`` supervisorctl command could trigger a bug in
  supervisord which caused it to crash.  See
  http://www.plope.com/software/collector/253 .  Thanks to William Dode
  for a bug report.

- The ``pidproxy`` script was made into a console script.

- The ``password`` value in both the ``[inet_http_server]`` and
  ``[unix_http_server]`` sections can now optionally be specified as a
  SHA hexdigest instead of as cleartext.  Values prefixed with ``{SHA}``
  will be considered SHA hex digests.  To encrypt a password to a form
  suitable for pasting into the configuration file using Python, do,
  e.g.::

    >>> import sha
    >>> '{SHA}' + sha.new('thepassword').hexdigest()
    '{SHA}82ab876d1387bfafe46cc1c8a2ef074eae50cb1d'

- The subtypes of the events PROCESS_STATE_CHANGE (and
  PROCESS_STATE_CHANGE itself) have been removed, replaced with a simpler
  set of PROCESS_STATE subscribable event types.
  The new event types are::

    PROCESS_STATE_STOPPED
    PROCESS_STATE_EXITED
    PROCESS_STATE_STARTING
    PROCESS_STATE_STOPPING
    PROCESS_STATE_BACKOFF
    PROCESS_STATE_FATAL
    PROCESS_STATE_RUNNING
    PROCESS_STATE_UNKNOWN
    PROCESS_STATE # abstract

  PROCESS_STATE_STARTING replaces::

    PROCESS_STATE_CHANGE_STARTING_FROM_STOPPED
    PROCESS_STATE_CHANGE_STARTING_FROM_BACKOFF
    PROCESS_STATE_CHANGE_STARTING_FROM_EXITED
    PROCESS_STATE_CHANGE_STARTING_FROM_FATAL

  PROCESS_STATE_RUNNING replaces PROCESS_STATE_CHANGE_RUNNING_FROM_STARTED.

  PROCESS_STATE_BACKOFF replaces PROCESS_STATE_CHANGE_BACKOFF_FROM_STARTING.

  PROCESS_STATE_STOPPING replaces::

    PROCESS_STATE_CHANGE_STOPPING_FROM_RUNNING
    PROCESS_STATE_CHANGE_STOPPING_FROM_STARTING

  PROCESS_STATE_EXITED replaces PROCESS_STATE_CHANGE_EXITED_FROM_RUNNING.

  PROCESS_STATE_STOPPED replaces PROCESS_STATE_CHANGE_STOPPED_FROM_STOPPING.

  PROCESS_STATE_FATAL replaces PROCESS_STATE_CHANGE_FATAL_FROM_BACKOFF.

  PROCESS_STATE_UNKNOWN replaces PROCESS_STATE_CHANGE_TO_UNKNOWN.

  PROCESS_STATE replaces PROCESS_STATE_CHANGE.

  The PROCESS_STATE_CHANGE_EXITED_OR_STOPPED abstract event is gone.

  All process state changes have at least "processname", "groupname", and
  "from_state" (the name of the previous state) in their serializations.

  PROCESS_STATE_EXITED additionally has "expected" (1 or 0) and "pid"
  (the process id) in its serialization.

  PROCESS_STATE_RUNNING, PROCESS_STATE_STOPPING, and PROCESS_STATE_STOPPED
  additionally have "pid" in their serializations.

  PROCESS_STATE_STARTING and PROCESS_STATE_BACKOFF have "tries" in their
  serializations (initially "0", bumped +1 each time a start retry
  happens).

- Remove documentation from README.txt, point people to
  http://supervisord.org/manual/ .

- The eventlistener request/response protocol has changed.  OK/FAIL must
  now be wrapped in a RESULT envelope so we can use it for more
  specialized communications.

  Previously, to signify success, an event listener would write the
  string ``OK\n`` to its stdout.
  To signify that the event was seen but couldn't be handled by the
  listener and should be rebuffered, an event listener would write the
  string ``FAIL\n`` to its stdout.

  In the new protocol, the listener must write the string::

    RESULT {resultlen}\n{result}

  For example, to signify OK::

    RESULT 2\nOK

  To signify FAIL::

    RESULT 4\nFAIL

  See the scripts/sample_eventlistener.py script for an example.

- To provide a hook point for custom results returned from event handlers
  (see above), the [eventlistener:x] configuration sections now accept a
  "result_handler=" parameter, e.g.
  "result_handler=supervisor.dispatchers:default_handler" (the default)
  or "result_handler=mypackage:myhandler".  The keys are pkgutil "entry
  point" specifications (importable Python function names).  Result
  handlers must be callables which accept two arguments: one named
  "event" which represents the event, and the other named "result",
  which represents the listener's result.  A result handler either
  executes successfully or raises an exception.  If it raises a
  supervisor.dispatchers.RejectEvent exception, the event will be
  rebuffered, and the eventhandler will be placed back into the
  ACKNOWLEDGED state.  If it raises any other exception, the event
  handler will be placed in the UNKNOWN state.  If it does not raise any
  exception, the event is considered successfully processed.  A result
  handler's return value is ignored.  Writing a result handler is an
  "in case of emergency break glass" sort of thing; it is not something
  to be used for arbitrary business code.  In particular, handlers *must
  not block* for any appreciable amount of time.

  The standard eventlistener result handler
  (supervisor.dispatchers:default_handler) does nothing if it receives an
  "OK" and will raise a supervisor.dispatchers.RejectEvent exception if
  it receives any other value.

- Supervisord now emits TICK events, which happen every N seconds.
  Three types of TICK events are available: TICK_5 (every five seconds),
  TICK_60 (every minute), and TICK_3600 (every hour).  Event listeners
  may subscribe to one of these types of events to perform every-so-often
  processing.  TICK events are subtypes of the EVENT type.

- Get rid of the OSX platform-specific memory monitor and replace it with
  memmon.py, which works on both Linux and Mac OS.  This script is now a
  console script named "memmon".

- Allow the "web handler" (the handler which receives http requests from
  browsers visiting the web UI of supervisor) to deal with POST requests.

- RPC interface methods stopProcess(), stopProcessGroup(), and
  stopAllProcesses() now take an optional "wait" argument that defaults
  to True for parity with the start methods.

3.0a3 (2007-10-02)
------------------

- Supervisorctl now reports a better error message when the main
  supervisor XML-RPC namespace is not registered.  Thanks to Mike Orr for
  reporting this.  (Mike Naberezny)

- Create a ``scripts`` directory within the supervisor package, move
  ``pidproxy.py`` there, and place sample event listener and comm event
  programs within the directory.

- When an event notification is buffered (either because a listener
  rejected it or because all listeners were busy when we attempted to
  send it originally), we now rebuffer it in a way that will result in it
  being retried earlier than it used to be.

- When a listener process exits (unexpectedly) before transitioning from
  the BUSY state, rebuffer the event that was being processed.

- The supervisorctl ``tail`` command now accepts a trailing specifier:
  ``stderr`` or ``stdout``, which, respectively, allow a user to tail the
  stderr or stdout of the named process.  When this specifier is not
  provided, tail defaults to stdout.

- The supervisorctl ``clear`` command now clears both stderr and stdout
  logs for the given process.
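The RESULT envelope described in the notes above is simple to construct:
the length prefix counts the bytes of the result payload.  A minimal
helper (``result_envelope`` is a hypothetical name, not part of
supervisor's API):

```python
def result_envelope(result):
    # Wrap an event listener result ("OK", "FAIL", or custom data) in
    # the RESULT envelope the new protocol requires:
    #   RESULT {resultlen}\n{result}
    return 'RESULT %d\n%s' % (len(result), result)

# result_envelope('OK')   -> 'RESULT 2\nOK'
# result_envelope('FAIL') -> 'RESULT 4\nFAIL'
```

A real listener would write this string to its stdout (and flush) after
processing each event, exactly where it previously wrote ``OK\n`` or
``FAIL\n``.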
- When a process encounters a spawn error as a result of a failed execve
  or when it cannot setuid to a given uid, it now puts this info into the
  process' stderr log rather than its stdout log.

- The event listener protocol header now contains the ``server``
  identifier, the ``pool`` that the event emanated from, and the
  ``poolserial`` as well as the values it previously contained (version,
  event name, serial, and length).  The server identifier is taken from
  the config file options value ``identifier``, the ``pool`` value is the
  name of the listener pool that this event emanates from, and the
  ``poolserial`` is a serial number assigned to the event local to the
  pool that is processing it.

- The event listener protocol header is now a sequence of key-value pairs
  rather than a list of positional values.  Previously, a representative
  header looked like::

    SUPERVISOR3.0 PROCESS_COMMUNICATION_STDOUT 30 22\n

  Now it looks like::

    ver:3.0 server:supervisor serial:21 ...

- Specific event payload serializations have changed.  All event types
  that deal with processes now include the pid of the process that the
  event is describing.  In event serialization "header" values, we've
  removed the space between the header name and the value, and headers
  are now separated by a space instead of a line feed.  The names of keys
  in all event types have had underscores removed.

- Abandon the use of the Python stdlib ``logging`` module for speed and
  cleanliness purposes.  We've rolled our own.

- Fix crash on start if AUTO logging is used with a max_bytes of zero for
  a process.

- Improve process communication event performance.

- The process config parameters ``stdout_capturefile`` and
  ``stderr_capturefile`` are no longer valid.  They have been replaced
  with the ``stdout_capture_maxbytes`` and ``stderr_capture_maxbytes``
  parameters, which are meant to be suffix-multiplied integers.  They
  both default to zero.  When they are zero, process communication event
  capturing is not performed.
  When either is nonzero, the value represents the maximum number of
  bytes that will be captured between process event start and end tags.
  This change was made to support the fact that we no longer keep capture
  data in a separate file; we just use a FIFO in RAM to maintain capture
  info.  Users who don't care about process communication events, or who
  haven't changed the defaults for ``stdout_capturefile`` or
  ``stderr_capturefile``, needn't do anything to their configurations to
  deal with this change.

- Log message levels have been normalized.  In particular, process
  stdin/stdout is now logged at ``debug`` level rather than at ``trace``
  level (``trace`` level is now reserved for output typically useful for
  debugging supervisor itself).  See "Supervisor Log Levels" in the
  documentation for more info.

- When an event is rebuffered (because all listeners are busy or a
  listener rejected the event), the rebuffered event is now inserted at
  the head of the listener event queue.  This doesn't guarantee event
  emission in natural ordering, because if a listener rejects an event or
  dies while it's processing an event, it can take an arbitrary amount of
  time for the event to be rebuffered, and other events may be processed
  in the meantime.  But if pool listeners never reject an event or don't
  die while processing an event, this guarantees that events will be
  emitted in the order that they were received, because if all listeners
  are busy, the rebuffered event will be tried again "first" on the next
  go-around.

- Removed the EVENT_BUFFER_OVERFLOW event type.

- The supervisorctl xmlrpc proxy can now communicate with supervisord
  using a persistent HTTP connection.

- A new module "supervisor.childutils" was added.  This module provides
  utilities for Python scripts which act as children of supervisord.
  Most notably, it contains an API method "getRPCInterface" which allows
  you to obtain an xmlrpclib ServerProxy that is willing to communicate
  with the parent supervisor.
  It also contains utility functions that allow for parsing of supervisor
  event listener protocol headers.  A pair of scripts (loop_eventgen.py
  and loop_listener.py) were added to the script directory that serve as
  examples of how to use the childutils module.

- A new envvar is added to child process environments:
  SUPERVISOR_SERVER_URL.  This contains the server URL for the
  supervisord running the child.

- An ``OK`` URL was added at ``/ok.html`` which just returns the string
  ``OK`` (can be used for up checks or speed checks via plain-old-HTTP).

- An additional command-line option ``--profile_options`` is accepted by
  the supervisord script for developer use::

    supervisord -n -c sample.conf --profile_options=cumulative,calls

  The values are sort_stats options that can be passed to the standard
  Python profiler's PStats sort_stats method.  When you exit supervisor,
  it will print Python profiling output to stdout.

- If cElementTree is installed in the Python used to invoke supervisor,
  an alternate (faster, by about 2X) XML parser will be used to parse
  XML-RPC request bodies.  cElementTree was added as an "extras_require"
  option in setup.py.

- Added the ability to start, stop, and restart process groups to
  supervisorctl.  To start a group, use ``start groupname:*``.  To start
  multiple groups, use ``start groupname1:* groupname2:*``.  Equivalent
  commands work for "stop" and "restart".  You can mix and match short
  processnames, fully-specified group:process names, and groupsplats on
  the same line for any of these commands.

- Added a ``directory`` option to the process config.  If you set this
  option, supervisor will chdir to this directory before executing the
  child program (and thus it will be the child's cwd).

- Added a ``umask`` option to the process config.  If you set this
  option, supervisor will set the umask of the child program.  (Thanks to
  Ian Bicking for the suggestion).
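The key:value header format described above is easy to parse by hand when
childutils isn't available.  A standalone sketch (``parse_header`` is a
hypothetical helper, not the childutils API):

```python
def parse_header(line):
    # Parse a key:value event listener header line such as
    #   "ver:3.0 server:supervisor serial:21"
    # into a dict.  Values never contain spaces, so splitting on
    # whitespace and then on the first colon is sufficient.
    return dict(pair.split(':', 1) for pair in line.split())

headers = parse_header('ver:3.0 server:supervisor serial:21')
# headers['server'] == 'supervisor'
```

A real listener would read one such line from its stdin, then read
``int(headers['len'])`` further bytes to obtain the event payload.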
- A pair of scripts ``osx_memmon_eventgen.py`` and
  ``osx_memmon_listener.py`` have been added to the scripts directory.
  If they are used together as described in their comments, processes
  which are consuming "too much" memory will be restarted.  The
  ``eventgen`` script only works on OSX (my main development platform)
  but it should be trivially generalizable to other operating systems.

- The long form ``--configuration`` (-c) command line option for
  supervisord was broken.  Reported by Mike Orr.  (Mike Naberezny)

- New log level: BLAT (blather).  We log all supervisor-internal-related
  debugging info here.  Thanks to Mike Orr for the suggestion.

- We now allow supervisor to listen on both a UNIX domain socket and an
  inet socket instead of making them mutually exclusive.  As a result,
  the options "http_port", "http_username", "http_password", "sockchmod"
  and "sockchown" are no longer part of the ``[supervisord]`` section
  configuration.  These have been supplanted by two other sections:
  ``[unix_http_server]`` and ``[inet_http_server]``.  You'll need to
  insert one or the other (depending on whether you want to listen on a
  UNIX domain socket or a TCP socket respectively) or both into your
  supervisord.conf file.  These sections have their own options (where
  applicable) for port, username, password, chmod, and chown.  See
  README.txt for more information about these sections.

- All supervisord command-line options related to "http_port",
  "http_username", "http_password", "sockchmod" and "sockchown" have
  been removed (see above point for rationale).

- The option that *used* to be ``sockchown`` within the ``[supervisord]``
  section (and is now named ``chown`` within the ``[unix_http_server]``
  section) used to accept a dot-separated user.group value.  The
  separator now must be a colon ":", e.g. "user:group".  Unices allow for
  dots in usernames, so this change is a bugfix.  Thanks to Ian Bicking
  for the bug report.
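The two new server sections described above can coexist in one config
file.  A sketch of what such a fragment might look like (all paths,
names, and credentials here are hypothetical examples):

```ini
[unix_http_server]
file=/tmp/supervisor.sock       ; path to the socket file
chmod=0700
chown=appuser:appgroup          ; note colon separator, not a dot

[inet_http_server]
port=127.0.0.1:9001
username=admin
password=changeme
```

With both sections present, supervisord listens on the UNIX domain socket
and the TCP socket at the same time.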
- If a '-c' option is not specified on the command line, both supervisord
  and supervisorctl will search for one in the paths
  ``./supervisord.conf``, ``./etc/supervisord.conf`` (relative to the
  current working dir when supervisord or supervisorctl is invoked) or in
  ``/etc/supervisord.conf`` (the old default path).  These paths are
  searched in order, and supervisord and supervisorctl will use the first
  one found.  If none are found, supervisor will fail to start.

- The Python string expression ``%(here)s`` (referring to the directory
  in which the configuration file was found) can be used within the
  following sections/options within the config file::

    unix_http_server:file
    supervisor:directory
    supervisor:logfile
    supervisor:pidfile
    supervisor:childlogdir
    supervisor:environment
    program:environment
    program:stdout_logfile
    program:stderr_logfile
    program:process_name
    program:command

- The ``--environment`` aka ``-b`` option was removed from the list of
  available command-line switches to supervisord (use "A=1 B=2
  bin/supervisord" instead).

- If the socket filename (the tail-end of the unix:// URL) was longer
  than 64 characters, supervisorctl would fail with an encoding error at
  startup.

- The ``identifier`` command-line argument was not functional.

- Fixed http://www.plope.com/software/collector/215 (bad error message in
  supervisorctl when program command not found on PATH).

- Some child processes may not have been shut down properly at supervisor
  shutdown time.

- Move to ZPL-derived (but not ZPL) license available from
  http://www.repoze.org/LICENSE.txt; it's slightly less restrictive than
  the ZPL (no servicemark clause).

- Spurious errors related to unclosed files ("bad file descriptor",
  typically) were evident at supervisord "reload" time (when using the
  "reload" command from supervisorctl).

- We no longer bundle ez_setup to bootstrap setuptools installation.
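The ``%(here)s`` expansion described above makes a config file relocatable
along with the tree it lives in.  A hypothetical fragment (paths are
illustrative only):

```ini
; %(here)s expands to the directory containing this config file,
; so the whole tree can be moved without editing the config.
[program:myapp]
command=%(here)s/bin/myapp
stdout_logfile=%(here)s/logs/myapp.log
```

Both ``program:command`` and ``program:stdout_logfile`` are in the list of
options that support the expansion.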
3.0a2 (2007-08-24)
------------------

- Fixed the README.txt example for defining the supervisor RPC interface
  in the configuration file.  Thanks to Drew Perttula.

- Fixed a bug where process communication events would not have the
  proper payload if the payload data was very short.

- When supervisord attempted to kill a process with SIGKILL after the
  process was not killed within "stopwaitsecs" using a "normal" kill
  signal, supervisord would crash with an improper AssertionError.
  Thanks to Calvin Hendryx-Parker.

- On Linux, Supervisor would consume too much CPU in an effective
  "busywait" between the time a subprocess exited and the time at which
  supervisor was notified of its exit status.  Thanks to Drew Perttula.

- RPC interface behavior change: if the RPC method "sendProcessStdin" is
  called against a process that has closed its stdin file descriptor
  (e.g. it has done the equivalent of "sys.stdin.close(); os.close(0)"),
  we return a NO_FILE fault instead of accepting the data.

- Changed the semantics of the process configuration ``autorestart``
  parameter with respect to processes which move between the RUNNING and
  EXITED state.  ``autorestart`` was previously a boolean.  Now it's a
  trinary, accepting one of ``false``, ``unexpected``, or ``true``.  If
  it's ``false``, a process will never be automatically restarted from
  the EXITED state.  If it's ``unexpected``, a process that enters the
  EXITED state will be automatically restarted if it exited with an exit
  code that was not named in the process config's ``exitcodes`` list.  If
  it's ``true``, a process that enters the EXITED state will be
  automatically restarted unconditionally.  The default is now
  ``unexpected`` (it was previously ``true``).  The readdition of this
  feature is a reversion of the behavior change noted in the changelog
  notes for 3.0a1 that asserted we never cared about the process' exit
  status when determining whether to restart it or not.
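The trinary ``autorestart`` semantics above reduce to a small decision
table.  A sketch of that logic (not supervisor's actual code; names are
illustrative):

```python
def should_autorestart(autorestart, exitcode, exitcodes):
    # Decide whether a process entering the EXITED state should be
    # restarted, per the trinary semantics described above.
    if autorestart == 'true':
        return True          # restart unconditionally
    if autorestart == 'unexpected':
        # restart only if the exit code was not declared expected
        return exitcode not in exitcodes
    return False             # 'false': never restart from EXITED
```

With the new default of ``unexpected`` and a typical ``exitcodes=0``
setting, a clean exit stays down while a crash is restarted.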
- setup.py develop (and presumably setup.py install) would fail under
  Python 2.3.3, because setuptools attempted to import ``splituser``
  from urllib2, and it didn't exist.

- It's now possible to use ``setup.py install`` and ``setup.py develop``
  on systems which do not have a C compiler if you set the environment
  variable "NO_MELD3_EXTENSION_MODULES=1" in the shell in which you
  invoke these commands (versions of meld3 > 0.6.1 respect this envvar
  and do not try to compile optional C extensions when it's set).

- The test suite would fail on Python versions <= 2.3.3 because the
  "assertTrue" and "assertFalse" methods of unittest.TestCase didn't
  exist in those versions.

- The ``supervisorctl`` and ``supervisord`` wrapper scripts were disused
  in favor of using setuptools' ``console_scripts`` entry point settings.

- Documentation files and the sample configuration file are put into the
  generated supervisor egg's ``doc`` directory.

- Using the web interface would cause fairly dramatic memory leakage.  We
  now require a version of meld3 that does not appear to leak memory from
  its C extensions (0.6.3).

3.0a1 (2007-08-16)
------------------

- Default config file comment documented 10 secs as default for
  ``startsecs`` value in process config; in reality it was 1 sec.  Thanks
  to Christoph Zwerschke.

- Make note of subprocess environment behavior in README.txt.  Thanks to
  Christoph Zwerschke.

- New "strip_ansi" config file option attempts to strip ANSI escape
  sequences from logs for smaller/more readable logs (submitted by Mike
  Naberezny).

- The XML-RPC method supervisor.getVersion() has been renamed for clarity
  to supervisor.getAPIVersion().  The old name is aliased for
  compatibility but is deprecated and will be removed in a future version
  (Mike Naberezny).

- Improved web interface styling (Mike Naberezny, Derek DeVries)

- The XML-RPC method supervisor.startProcess() now checks that the file
  exists and is executable (Mike Naberezny).
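The "strip_ansi" behavior described above can be approximated with a
small regex.  This is a simplified sketch, assuming only CSI-style escape
sequences; supervisor's own implementation may differ:

```python
import re

# Matches CSI escape sequences such as "\x1b[31m" (set color) or
# "\x1b[0m" (reset).  Other escape forms are not handled by this sketch.
ANSI_RE = re.compile(r'\x1b\[[0-9;]*[A-Za-z]')

def strip_ansi(data):
    # Remove ANSI escape sequences from log data, leaving plain text.
    return ANSI_RE.sub('', data)

# strip_ansi('\x1b[31mred\x1b[0m') -> 'red'
```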
- Two environment variables, "SUPERVISOR_PROCESS_NAME" and
  "SUPERVISOR_PROCESS_GROUP", are set in the environment of child
  processes, representing the name of the process and group in
  supervisor's configuration.

- Process state map change: a process may now move directly from the
  STARTING state to the STOPPING state (as a result of a stop request).

- Behavior change: if ``autorestart`` is true, even if a process exits
  with an "expected" exit code, it will still be restarted.  In the
  immediately prior release of supervisor, this was true anyway, and no
  one complained, so we're going to consider that the "officially
  correct" behavior from now on.

- Supervisor now logs subprocess stdout and stderr independently.  The
  old program config keys "logfile", "logfile_backups" and
  "logfile_maxbytes" are superseded by "stdout_logfile",
  "stdout_logfile_backups", and "stdout_logfile_maxbytes".  Added keys
  include "stderr_logfile", "stderr_logfile_backups", and
  "stderr_logfile_maxbytes".  An additional "redirect_stderr" key is
  used to cause program stderr output to be sent to its stdout channel.
  The keys "log_stderr" and "log_stdout" have been removed.

- ``[program:x]`` config file sections now represent "homogeneous process
  groups" instead of single processes.  A "numprocs" key in the section
  represents the number of processes that are in the group.  A
  "process_name" key in the section allows composition of each process'
  name within the homogeneous group.

- A new kind of config file section, ``[group:x]``, now exists, allowing
  users to group heterogeneous processes together into a process group
  that can be controlled as a unit from a client.

- Supervisord now emits "events" at certain points in its normal
  operation.  These events include supervisor state change events,
  process state change events, and "process communication events".

- A new kind of config file section, ``[eventlistener:x]``, now exists.
  Each section represents an "event listener pool", which is a special
  kind of homogeneous process group.  Each process in the pool is meant
  to receive supervisor "events" via its stdin and perform some
  notification (e.g. send a mail, log, make an http request, etc.)

- Supervisord can now capture data between special tokens in subprocess
  stdout/stderr output and emit a "process communications event" as a
  result.

- Supervisor's XML-RPC interface may be extended arbitrarily by
  programmers.  Additional top-level namespace XML-RPC interfaces can be
  added using the ``[rpcinterface:foo]`` declaration in the
  configuration file.

- New ``supervisor``-namespace XML-RPC methods have been added:
  getAPIVersion (returns the XML-RPC API version; the older "getVersion"
  is now deprecated), "startProcessGroup" (starts all processes in a
  supervisor process group), "stopProcessGroup" (stops all processes in
  a supervisor process group), and "sendProcessStdin" (sends data to a
  process' stdin file descriptor).

- ``supervisor``-namespace XML-RPC methods which previously accepted only
  a process name as "name" (startProcess, stopProcess, getProcessInfo,
  readProcessLog, tailProcessLog, and clearProcessLog) now accept a
  "name" which may contain both the process name and the process group
  name in the form ``groupname:procname``.  For backwards compatibility
  purposes, "simple" names will also be accepted but will be expanded
  internally (e.g. if "foo" is sent as a name, it will be expanded to
  "foo:foo", representing the foo process within the foo process group).

- 2.X versions of supervisorctl will work against supervisor 3.0 servers
  in a degraded fashion, but 3.X versions of supervisorctl will not work
  at all against supervisor 2.X servers.

2.2b1 (2007-03-31)
------------------

- Individual program configuration sections can now specify an
  environment.

- Added a 'version' command to supervisorctl.  This returns the version
  of the supervisor2 package which the remote supervisord process is
  using.
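The backwards-compatible name expansion described above ("foo" becoming
"foo:foo") is a one-liner.  A sketch of the rule (illustrative only, not
supervisor's internal code):

```python
def expand_name(name):
    # Expand a "simple" process name into groupname:procname form; names
    # that already contain a group qualifier pass through unchanged.
    if ':' in name:
        return name
    return '%s:%s' % (name, name)

# expand_name('foo')      -> 'foo:foo'
# expand_name('grp:proc') -> 'grp:proc'
```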
2.1 (2007-03-17)
----------------

- When supervisord was invoked more than once, and its configuration was
  set up to use a UNIX domain socket as the HTTP server, the socket file
  would be erased in error.  The symptom of this was that a subsequent
  invocation of supervisorctl could not find the socket file, so the
  process could not be controlled (it and all of its subprocesses would
  need to be killed by hand).

- Close subprocess file descriptors properly when a subprocess exits or
  otherwise dies.  This should result in fewer "too many open files to
  spawn foo" messages when supervisor is left up for long periods of
  time.

- When a process was not killable with a "normal" signal at shutdown
  time, too many "INFO: waiting for x to die" messages would be sent to
  the log until we ended up killing the process with a SIGKILL.  Now a
  maximum of one every three seconds is sent up until SIGKILL time.
  Thanks to Ian Bicking.

- Add an assertion: we never want to try to marshal None to XML-RPC
  callers.  Issue 223 in the collector from vgatto indicates that
  somehow a supervisor XML-RPC method is returning None (which should
  never happen), but I cannot identify how.  Maybe the assertion will
  give us more clues if it happens again.

- Supervisor would crash when run under Python 2.5 because the
  xmlrpclib.Transport class in Python 2.5 changed in a
  backward-incompatible way.  Thanks to Eric Westra for the bug report
  and a fix.

- Tests now pass under Python 2.5.

- Better supervisorctl reporting on stop requests that have a FAILED
  status.

- Removed duplicated code (readLog/readMainLog); thanks to Mike
  Naberezny.

- Added tailProcessLog command to the XML-RPC API.  It provides a more
  efficient way to tail logs than readProcessLog().  Use
  readProcessLog() to read chunks and tailProcessLog() to tail.  (Thanks
  to Mike Naberezny.)
2.1b1 (2006-08-30)
------------------

- "supervisord -h" and "supervisorctl -h" did not work (traceback
  instead of showing the help view).  Thanks to Damjan from Macedonia
  for the bug report.

- Processes which started successfully after failing to start initially
  are no longer reported in BACKOFF state once they are started
  successfully.  Thanks to Damjan from Macedonia for the bug report.

- Add new 'maintail' command to the supervisorctl shell, which allows
  you to tail the 'main' supervisor log.  This uses a new readMainLog
  xmlrpc API.

- Various process-state-transition related changes, all internal.
  README.txt updated with new state transition map.

- startProcess and startAllProcesses xmlrpc APIs changed: instead of
  accepting a timeout integer, these accept a wait boolean (timeout is
  implied by the process' "startsecs" configuration).  If wait is False,
  do not wait for startsecs.

Known issues:

- Code does not match the state transition map.  Processes which are
  configured as autorestarting and which start "successfully" but
  subsequently die after 'startsecs' go through the transitions RUNNING
  -> BACKOFF -> STARTING instead of the correct transitions RUNNING ->
  EXITED -> STARTING.  This has no real negative effect, but should be
  fixed for correctness.

2.0 (2006-08-30)
----------------

- pidfile written in daemon mode had an incorrect pid.

- supervisorctl: tail (non -f) did not pass through proper error
  messages when supplied by the server.

- Log the signal name used to kill processes at debug level.

- supervisorctl "tail -f" didn't work with supervisorctl sections
  configured with an absolute unix:// URL.

- New "environment" config file option allows you to add environment
  variable values to the supervisord environment from the config file.

2.0b1 (2006-07-12)
------------------

- Fundamental rewrite based on 1.0.7: use distutils (only) for
  installation, use ConfigParser rather than ZConfig, use HTTP for the
  wire protocol, web interface, fewer lies in supervisorctl.
1.0.7 (2006-07-11)
------------------

- Don't log a waitpid error if the error value is "no children".

- Use select() against child file descriptor pipes and bump up the
  select timeout appropriately.

1.0.6 (2005-11-20)
------------------

- Various tweaks to make supervisor run more effectively on Mac OS X
  (including fixing tests to run there, no more "error reading from fd
  XXX" in logtail output, and reduced disk/CPU usage as a result of not
  writing to the log file unnecessarily on Mac OS).

1.0.5 (2004-07-29)
------------------

- Short description: In previous releases, managed programs that created
  voluminous stdout/stderr output could run more slowly than usual when
  invoked under supervisor; now they do not.

  Long description: supervisord manages child output by polling pipes
  related to child process stderr/stdout.  Polling operations are
  performed in the mainloop, which also performs a 'select' on the
  filedescriptor(s) related to client/server operations.  In prior
  releases, the select timeout was set to 2 seconds.  This release
  changes the timeout to 1/10th of a second in order to keep up with
  client stdout/stderr output.

  Gory description: On Linux, at least, there is a pipe buffer size
  fixed by the kernel of somewhere between 512 and 4096 bytes; when a
  child process writes enough data to fill the pipe buffer, it will
  block on further stdout/stderr output until supervisord comes along
  and clears out the buffer by reading bytes from the pipe within the
  mainloop.  We now clear these buffers much more quickly than we did
  before due to the increased frequency of buffer reads in the mainloop;
  the timeout value of 1/10th of a second seems to be fast enough to
  clear out the buffers of child process pipes when managing programs on
  even a very fast system, while still enabling the supervisord process
  to be in a sleeping state for most of the time.
1.0.4 or "Alpha 4" (2004-06-30)
-------------------------------

- Forgot to update the version tag in configure.py, so the supervisor
  version in a3 is listed as "1.0.1", where it should be "1.0.3".  a4
  will be listed as "1.0.4".

- Instead of preventing a process from starting if setuid() can't be
  called (if supervisord is run as nonroot, for example), just log the
  error and proceed.

1.0.3 or "Alpha 3" (2004-05-26)
-------------------------------

- The daemon could chew up a lot of CPU time trying to select() on
  real files (I didn't know select() failed to block when a file is at
  EOF).  Fixed by polling instead of using select().

- Processes could "leak" and become zombies due to a bug in reaping
  dead children.

- supervisord now defaults to daemonizing itself.

- The 'daemon' config file option and the -d/--daemon command-line
  option were removed from supervisord's acceptable options.  In their
  place are a 'nodaemon' config file option and a -n/--nodaemon
  command-line option.

- logtail now works.

- pidproxy changed slightly to reap children synchronously.

- In the alpha 2 changelist, supervisord was reported to have a
  "noauth" command-line option.  This was not accurate.  The way to
  turn off auth on the server is to omit the "passwdfile" config file
  option from the server config file.  The client, however, does still
  have a noauth option, which prevents it from ever attempting to send
  authentication credentials to servers.

- ZPL license added for ZConfig to LICENSE.txt.

1.0.2 or "Alpha 2" (Unreleased)
-------------------------------

- supervisorctl and supervisord no longer need to run on the same
  machine due to the addition of internet socket support.

- supervisorctl and supervisord no longer share a common configuration
  file format.

- supervisorctl now uses a persistent connection to supervisord (as
  opposed to creating a fresh connection for each command).

- SRP (Secure Remote Password) authentication is now a supported form
  of access control for supervisord.
  In supervisorctl interactive mode, by default, users will be asked
  for credentials when attempting to talk to a supervisord that
  requires SRP authentication.

- supervisord has a new command-line option and configuration file
  option for specifying "noauth" mode, which signifies that it should
  not require authentication from clients.

- supervisorctl has a new command-line option and configuration option
  for specifying "noauth" mode, which signifies that it should never
  attempt to send authentication info to servers.

- supervisorctl has new commands: "open" opens a connection to a new
  supervisord; "close" closes the current connection.

- supervisorctl's "logtail" command now retrieves log data from
  supervisord's log file remotely (as opposed to reading it directly
  from a common filesystem).  It also no longer emulates "tail -f";
  it just returns lines of the server's log file.

- The supervisord/supervisorctl wire protocol now has protocol
  versioning and is documented in "protocol.txt".

- The "configfile" command-line override was changed from -C to -c.

- The top-level section name for the supervisor schema changed from
  'supervisor' to 'supervisord'.

- Added the 'pidproxy' shim program.

Known issues in alpha 2:

- If supervisorctl loses a connection to a supervisord, or if the
  remote supervisord crashes or shuts down unexpectedly, it is
  possible that any supervisorctl talking to it will "hang"
  indefinitely waiting for data.  Pressing Ctrl-C will allow you to
  restart supervisorctl.

- Only one supervisorctl process may talk to a given supervisord
  process at a time.  If two supervisorctl processes attempt to talk
  to the same supervisord process, one will "win" and the other will
  be disconnected.

- Sometimes if a pidproxy is used to start a program, the pidproxy
  program itself will "leak".

1.0.0 or "Alpha 1" (Unreleased)
-------------------------------

Initial release.
Supervisor
==========

Supervisor is a client/server system that allows its users to control
a number of processes on UNIX-like operating systems.

Supported Platforms
-------------------

Supervisor has been tested and is known to run on Linux (Ubuntu),
Mac OS X (10.4, 10.5, 10.6), Solaris (10 for Intel), and FreeBSD 6.1.
It will likely work fine on most UNIX systems.

Supervisor will not run at all under any version of Windows.

Supervisor is intended to work on Python 3 version 3.4 or later and on
Python 2 version 2.7.

Documentation
-------------

You can view the current Supervisor documentation online
`in HTML format `_.  This is where you should go for detailed
installation and configuration documentation.

Reporting Bugs and Viewing the Source Repository
------------------------------------------------

Please report bugs in the `GitHub issue tracker `_.

You can view the source repository for supervisor via
`https://github.com/Supervisor/supervisor `_.

Contributing
------------

We'll review contributions from the community in
`pull requests `_ on GitHub.
@import url('default.css');

body { background-color: #006339; }

div.document { background-color: #dad3bd; }

div.sphinxsidebar h3, h4, h5, a { color: #127c56 !important; }

div.related { color: #dad3bd !important; background-color: #00744a; }

div.related a { color: #dad3bd !important; }

/* override the justify text align of the default */
div.body p { text-align: left !important; }

/* fix google chrome pre tag renderings */

pre {
   line-height: normal !important;
}
# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d .build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html web pickle htmlhelp latex changes linkcheck

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html      to make standalone HTML files"
	@echo "  pickle    to make pickle files (usable by e.g. sphinx-web)"
	@echo "  htmlhelp  to make HTML files and a HTML help project"
	@echo "  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  changes   to make an overview over all changed/added/deprecated items"
	@echo "  linkcheck to check all external links for integrity"

clean:
	-rm -rf .build/*

html:
	mkdir -p .build/html .build/doctrees
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) .build/html
	@echo
	@echo "Build finished. The HTML pages are in .build/html."

pickle:
	mkdir -p .build/pickle .build/doctrees
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) .build/pickle
	@echo
	@echo "Build finished; now you can process the pickle files or run"
	@echo "  sphinx-web .build/pickle"
	@echo "to start the sphinx-web server."

web: pickle

htmlhelp:
	mkdir -p .build/htmlhelp .build/doctrees
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) .build/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in .build/htmlhelp."

latex:
	mkdir -p .build/latex .build/doctrees
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) .build/latex
	@echo
	@echo "Build finished; the LaTeX files are in .build/latex."
	@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
	      "run these through (pdf)latex."

changes:
	mkdir -p .build/changes .build/doctrees
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) .build/changes
	@echo
	@echo "The overview file is in .build/changes."

linkcheck:
	mkdir -p .build/linkcheck .build/doctrees
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) .build/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in .build/linkcheck/output.txt."
.. _xml_rpc:

XML-RPC API Documentation
=========================

To use the XML-RPC interface, first make sure you have configured the interface
factory properly by setting the default factory. See :ref:`rpcinterface_factories`.

Then you can connect to supervisor's HTTP port
with any XML-RPC client library and run commands against it.

An example of doing this using Python 2's ``xmlrpclib`` client library
is as follows.

.. code-block:: python

    import xmlrpclib
    server = xmlrpclib.Server('http://localhost:9001/RPC2')

An example of doing this using Python 3's ``xmlrpc.client`` library
is as follows.

.. code-block:: python

    from xmlrpc.client import ServerProxy
    server = ServerProxy('http://localhost:9001/RPC2')

You may call methods against :program:`supervisord` and its
subprocesses by using the ``supervisor`` namespace.  An example is
provided below.

.. code-block:: python

    server.supervisor.getState()

You can get a list of methods supported by the
:program:`supervisord` XML-RPC interface by using the XML-RPC
``system.listMethods`` API:

.. code-block:: python

    server.system.listMethods()

You can see help on a method by using the ``system.methodHelp`` API
against the method:

.. code-block:: python

    server.system.methodHelp('supervisor.shutdown')

The :program:`supervisord` XML-RPC interface also supports the
`XML-RPC multicall API
`_.
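For example, several calls can be batched into a single round trip via
``system.multicall``.  This is a sketch using Python's ``xmlrpc.client``;
the server URL is an assumption and should match your configuration:

```python
from xmlrpc.client import ServerProxy

# Each call is described by a struct with 'methodName' and 'params'.
calls = [
    {'methodName': 'supervisor.getState', 'params': []},
    {'methodName': 'supervisor.getAllProcessInfo', 'params': []},
]
assert all('methodName' in c and 'params' in c for c in calls)

# server = ServerProxy('http://localhost:9001/RPC2')
# results = server.system.multicall(calls)  # one entry per call; a failed
#                                           # call yields a fault struct
```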

You can extend :program:`supervisord` functionality with new XML-RPC
API methods by adding new top-level RPC interfaces as necessary.
See :ref:`rpcinterface_factories`.

.. note::

  Any XML-RPC method call may result in a fault response.  This includes errors caused
  by the client such as bad arguments, and any errors that make :program:`supervisord`
  unable to fulfill the request.  Many XML-RPC client programs will raise an exception
  when a fault response is encountered.
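With Python's ``xmlrpc.client``, a fault surfaces as a ``Fault``
exception carrying a fault code and string.  A minimal sketch of
handling it (the helper function and process name are hypothetical):

```python
from xmlrpc.client import Fault

def try_start(server, name):
    """Attempt to start a process, returning (ok, detail) rather than
    letting an XML-RPC fault propagate to the caller."""
    try:
        server.supervisor.startProcess(name)
        return True, 'started'
    except Fault as f:
        # e.g. a BAD_NAME fault if the process name is unknown
        return False, '%s: %s' % (f.faultCode, f.faultString)

# server = xmlrpc.client.ServerProxy('http://localhost:9001/RPC2')
# ok, detail = try_start(server, 'myprogram')
```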

.. automodule:: supervisor.rpcinterface

Status and Control
------------------

  .. autoclass:: SupervisorNamespaceRPCInterface

    .. automethod:: getAPIVersion

        This API is versioned separately from Supervisor itself. The API version
        returned by ``getAPIVersion`` only changes when the API changes. Its purpose
        is to help the client identify with which version of the Supervisor API it
        is communicating.

        When writing software that communicates with this API, it is highly
        recommended that you first test the API version for compatibility before
        making method calls.

        .. note::

          The ``getAPIVersion`` method replaces ``getVersion`` found in Supervisor
          versions prior to 3.0a1. It is aliased for compatibility but getVersion()
          is deprecated and support will be dropped from Supervisor in a future
          version.

    .. automethod:: getSupervisorVersion

    .. automethod:: getIdentification

        This method allows the client to identify with which Supervisor
        instance it is communicating in the case of environments where
        multiple Supervisors may be running.

        The identification is a string that must be set in Supervisor’s
        configuration file. This method simply returns that value back to the
        client.

    .. automethod:: getState

        This is an internal value maintained by Supervisor that determines what
        Supervisor believes to be its current operational state.

        Some method calls can alter the current state of the
        Supervisor.  For example, calling the method
        supervisor.shutdown() while Supervisor is in the RUNNING state
        places it in the SHUTDOWN state while it is shutting down.

        The supervisor.getState() method provides a means for the client to check
        Supervisor's state, both for informational purposes and to ensure that the
        methods it intends to call will be permitted.

        The return value is a struct:

        .. code-block:: python

            {'statecode': 1,
             'statename': 'RUNNING'}

        The possible return values are:

        +---------+----------+----------------------------------------------+
        |statecode|statename |Description                                   |
        +=========+==========+==============================================+
        | 2       |FATAL     |Supervisor has experienced a serious error.   |
        +---------+----------+----------------------------------------------+
        | 1       |RUNNING   |Supervisor is working normally.               |
        +---------+----------+----------------------------------------------+
        | 0       |RESTARTING|Supervisor is in the process of restarting.   |
        +---------+----------+----------------------------------------------+
        | -1      |SHUTDOWN  |Supervisor is in the process of shutting down.|
        +---------+----------+----------------------------------------------+

        The ``FATAL`` state reports unrecoverable errors, such as internal
        errors inside Supervisor or system runaway conditions. Once set to
        ``FATAL``, the Supervisor can never return to any other state without
        being restarted.

        In the ``FATAL`` state, all future methods except
        supervisor.shutdown() and supervisor.restart() will automatically fail
        without being called and the fault ``FATAL_STATE`` will be raised.

        In the ``SHUTDOWN`` or ``RESTARTING`` states, all method calls are
        ignored and their possible return values are undefined.
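A client can use the struct above to gate its own calls before issuing
them.  A minimal illustrative sketch:

```python
RUNNING = 1  # statecode for RUNNING, per the table above

def is_running(state):
    """Return True if a supervisor.getState() result reports RUNNING."""
    return state['statecode'] == RUNNING

assert is_running({'statecode': 1, 'statename': 'RUNNING'})
assert not is_running({'statecode': -1, 'statename': 'SHUTDOWN'})
```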

    .. automethod:: getPID

    .. automethod:: readLog

        It can either return the entire log, a number of characters from the
        tail of the log, or a slice of the log specified by the offset and
        length parameters:

        +--------+---------+------------------------------------------------+
        | Offset | Length  | Behavior of ``readLog``                        |
        +========+=========+================================================+
        |Negative|Not Zero | Bad arguments. This will raise the fault       |
        |        |         | ``BAD_ARGUMENTS``.                             |
        +--------+---------+------------------------------------------------+
        |Negative|Zero     | This will return the tail of the log, or offset|
        |        |         | number of characters from the end of the log.  |
        |        |         | For example, if ``offset`` = -4 and ``length`` |
        |        |         | = 0, then the last four characters will be     |
        |        |         | returned from the end of the log.              |
        +--------+---------+------------------------------------------------+
        |Zero or |Negative | Bad arguments. This will raise the fault       |
        |Positive|         | ``BAD_ARGUMENTS``.                             |
        +--------+---------+------------------------------------------------+
        |Zero or |Zero     | All characters will be returned from the       |
        |Positive|         | ``offset`` specified.                          |
        +--------+---------+------------------------------------------------+
        |Zero or |Positive | A number of characters length will be returned |
        |Positive|         | from the ``offset``.                           |
        +--------+---------+------------------------------------------------+

        If the log is empty and the entire log is requested, an empty string
        is returned.

        If either offset or length is out of range, the fault
        ``BAD_ARGUMENTS`` will be returned.

        If the log cannot be read, this method will raise either the
        ``NO_FILE`` error if the file does not exist or the ``FAILED`` error
        if any other problem was encountered.
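The offset/length semantics in the table above can be expressed as a
small pure function (a hypothetical helper for illustration, not part
of Supervisor itself):

```python
def read_slice(log, offset, length):
    """Mimic the offset/length semantics of ``readLog`` on an
    in-memory string, raising on the BAD_ARGUMENTS cases."""
    if (offset < 0 and length != 0) or length < 0:
        raise ValueError('BAD_ARGUMENTS')
    if offset < 0:
        return log[offset:]              # tail: last -offset characters
    if length == 0:
        return log[offset:]              # everything from offset onward
    return log[offset:offset + length]   # length characters from offset

assert read_slice('abcdef', -4, 0) == 'cdef'
assert read_slice('abcdef', 2, 0) == 'cdef'
assert read_slice('abcdef', 1, 3) == 'bcd'
```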

        .. note::

          The readLog() method replaces readMainLog() found in Supervisor
          versions prior to 2.1. It is aliased for compatibility but
          readMainLog() is deprecated and support will be dropped from
          Supervisor in a future version.


    .. automethod:: clearLog

        If the log cannot be cleared because the log file does not exist, the
        fault ``NO_FILE`` will be raised. If the log cannot be cleared for any
        other reason, the fault ``FAILED`` will be raised.

    .. automethod:: shutdown

        This method shuts down the Supervisor daemon. If any processes are running,
        they are automatically killed without warning.

        Unlike most other methods, if Supervisor is in the ``FATAL`` state,
        this method will still function.

    .. automethod:: restart

        This method soft restarts the Supervisor daemon. If any processes are
        running, they are automatically killed without warning. Note that the
        actual UNIX process for Supervisor cannot restart; only Supervisor’s
        main program loop. This has the effect of resetting the internal
        states of Supervisor.

        Unlike most other methods, if Supervisor is in the ``FATAL`` state,
        this method will still function.


Process Control
---------------

  .. autoclass:: SupervisorNamespaceRPCInterface
    :noindex:

    .. automethod:: getProcessInfo

        The return value is a struct:

        .. code-block:: python

            {'name':           'process name',
             'group':          'group name',
             'description':    'pid 18806, uptime 0:03:12',
             'start':          1200361776,
             'stop':           0,
             'now':            1200361812,
             'state':          20,
             'statename':      'RUNNING',
             'spawnerr':       '',
             'exitstatus':     0,
             'logfile':        '/path/to/stdout-log', # deprecated, b/c only
             'stdout_logfile': '/path/to/stdout-log',
             'stderr_logfile': '/path/to/stderr-log',
             'pid':            1}

        .. describe:: name

            Name of the process

        .. describe:: group

            Name of the process' group

        .. describe:: description

            If the process state is running, the description's value is
            the process id and uptime, e.g. "pid 18806, uptime 0:03:12".
            If the process state is stopped, the description's value is
            the stop time, e.g. "Jun 5 03:16 PM".

        .. describe:: start

            UNIX timestamp of when the process was started

        .. describe:: stop

            UNIX timestamp of when the process last ended, or 0 if the process
            has never been stopped.

        .. describe:: now

            UNIX timestamp of the current time, which can be used to calculate
            process up-time.

        .. describe:: state

            State code, see :ref:`process_states`.

        .. describe:: statename

            String description of `state`, see :ref:`process_states`.

        .. describe:: logfile

            Deprecated alias for ``stdout_logfile``.  This is provided only
            for compatibility with clients written for Supervisor 2.x and
            may be removed in the future.  Use ``stdout_logfile`` instead.

        .. describe:: stdout_logfile

            Absolute path and filename to the STDOUT logfile

        .. describe:: stderr_logfile

            Absolute path and filename to the STDERR logfile

        .. describe:: spawnerr

            Description of error that occurred during spawn, or empty string
            if none.

        .. describe:: exitstatus

            Exit status (errorlevel) of process, or 0 if the process is still
            running.

        .. describe:: pid

            UNIX process ID (PID) of the process, or 0 if the process is not
            running.
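
The ``start`` and ``now`` timestamps can be combined client-side to
compute uptime, as the field descriptions above suggest.  An
illustrative sketch:

```python
RUNNING = 20  # state code for RUNNING, matching the sample struct above

def uptime_seconds(info):
    """Compute uptime from a getProcessInfo() result struct; a process
    that is not running has no uptime."""
    if info['state'] != RUNNING:
        return 0
    return info['now'] - info['start']

info = {'state': 20, 'start': 1200361776, 'now': 1200361812}
assert uptime_seconds(info) == 36
```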


    .. automethod:: getAllProcessInfo

        Each element contains a struct, and this struct contains the exact
        same elements as the struct returned by ``getProcessInfo``. If the process
        table is empty, an empty array is returned.

    .. automethod:: getAllConfigInfo

    .. automethod:: startProcess

    .. automethod:: startAllProcesses

    .. automethod:: startProcessGroup

    .. automethod:: stopProcess

    .. automethod:: stopProcessGroup

    .. automethod:: stopAllProcesses

    .. automethod:: signalProcess

    .. automethod:: signalProcessGroup

    .. automethod:: signalAllProcesses

    .. automethod:: sendProcessStdin

    .. automethod:: sendRemoteCommEvent

    .. automethod:: reloadConfig

    .. automethod:: addProcessGroup

    .. automethod:: removeProcessGroup

Process Logging
---------------

  .. autoclass:: SupervisorNamespaceRPCInterface
    :noindex:

    .. automethod:: readProcessStdoutLog

    .. automethod:: readProcessStderrLog

    .. automethod:: tailProcessStdoutLog

    .. automethod:: tailProcessStderrLog

    .. automethod:: clearProcessLogs

    .. automethod:: clearAllProcessLogs


.. automodule:: supervisor.xmlrpc

System Methods
--------------

  .. autoclass:: SystemNamespaceRPCInterface

    .. automethod:: listMethods

    .. automethod:: methodHelp

    .. automethod:: methodSignature

    .. automethod:: multicall
# -*- coding: utf-8 -*-
#
# Supervisor documentation build configuration file
#
# This file is execfile()d with the current directory set to its containing
# dir.
#
# The contents of this file are pickled, so don't put values in the
# namespace that aren't pickleable (module imports are okay, they're
# removed automatically).
#
# All configuration values have a default value; values that are commented
# out serve to show the default value.

import sys, os
from datetime import date

# If your extensions are in another directory, add it here. If the
# directory is relative to the documentation root, use os.path.abspath to
# make it absolute, like shown here.
#sys.path.append(os.path.abspath('some/directory'))

parent = os.path.dirname(os.path.dirname(__file__))
sys.path.append(os.path.abspath(parent))

version_txt = os.path.join(parent, 'supervisor/version.txt')
supervisor_version = open(version_txt).read().strip()

# General configuration
# ---------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['.templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General substitutions.
project = 'Supervisor'
year = date.today().year
copyright = '2004-%d, Agendaless Consulting and Contributors' % year

# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
#
# The short X.Y version.
version = supervisor_version
# The full version, including alpha/beta/rc tags.
release = version

# There are two options for replacing |today|: either, you set today to
# some non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'

# List of documents that shouldn't be included in the build.
#unused_docs = []

# List of directories, relative to source directories, that shouldn't be
# searched for source files.
#exclude_dirs = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'


# Options for HTML output
# -----------------------

# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
html_style = 'repoze.css'

# The name for this set of Sphinx documents.  If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar.  Default is the same as
# html_title.
#html_short_title = None

# The name of an image file (within the static path) to place at the top of
# the sidebar.
html_logo = '.static/logo_hi.gif'

# The name of an image file (within the static path) to use as favicon of
# the docs.  This file should be a Windows icon file (.ico) being 16x16 or
# 32x32 pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets)
# here, relative to this directory. They are copied after the builtin
# static files, so a file named "default.css" will overwrite the builtin
# "default.css".
html_static_path = ['.static']

# If not '', a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_use_modindex = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, the reST sources are included in the HTML build as
# _sources/<name>.
#html_copy_source = True

# If true, an OpenSearch description file will be output, and all pages
# will contain a <link> tag referring to it.  The value of this option must
# be the base URL from which the finished HTML is served.
#html_use_opensearch = ''

# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''

# Output file base name for HTML help builder.
htmlhelp_basename = 'supervisor'


# Options for LaTeX output
# ------------------------

# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, document class [howto/manual]).
latex_documents = [
  ('index', 'supervisor.tex', 'supervisor Documentation',
   'Supervisor Developers', 'manual'),
]

# The name of an image file (relative to this directory) to place at the
# top of the title page.
latex_logo = '.static/logo_hi.gif'

# For "manual" documents, if this is true, then toplevel headings are
# parts, not chapters.
#latex_use_parts = False

# Additional stuff for the LaTeX preamble.
#latex_preamble = ''

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_use_modindex = True
Configuration File
==================

The Supervisor configuration file is conventionally named
:file:`supervisord.conf`.  It is used by both :program:`supervisord`
and :program:`supervisorctl`.  If either application is started
without the ``-c`` option (the option which is used to tell the
application the configuration filename explicitly), the application
will look for a file named :file:`supervisord.conf` within the
following locations, in the specified order.  It will use the first
file it finds.

#. :file:`../etc/supervisord.conf` (Relative to the executable)

#. :file:`../supervisord.conf` (Relative to the executable)

#. :file:`$CWD/supervisord.conf`

#. :file:`$CWD/etc/supervisord.conf`

#. :file:`/etc/supervisord.conf`

#. :file:`/etc/supervisor/supervisord.conf` (since Supervisor 3.3.0)

.. note::

  Many versions of Supervisor packaged for Debian and Ubuntu included a patch
  that added ``/etc/supervisor/supervisord.conf`` to the search paths.  The
  first PyPI package of Supervisor to include it was Supervisor 3.3.0.

File Format
-----------

:file:`supervisord.conf` is a Windows-INI-style (Python ConfigParser)
file.  It has sections (each denoted by a ``[header]``) and key / value
pairs within the sections.  The sections and their allowable values
are described below.

Environment Variables
~~~~~~~~~~~~~~~~~~~~~

Environment variables that are present in the environment at the time that
:program:`supervisord` is started can be used in the configuration file
using the Python string expression syntax ``%(ENV_X)s``:

.. code-block:: ini

    [program:example]
    command=/usr/bin/example --loglevel=%(ENV_LOGLEVEL)s

In the example above, the expression ``%(ENV_LOGLEVEL)s`` would be expanded
to the value of the environment variable ``LOGLEVEL``.

.. note::

    In Supervisor 3.2 and later, ``%(ENV_X)s`` expressions are supported in
    all options.  In prior versions, some options support them, but most
    do not.  See the documentation for each option below.


``[unix_http_server]`` Section Settings
---------------------------------------

The :file:`supervisord.conf` file contains a section named
``[unix_http_server]`` under which configuration parameters for an
HTTP server that listens on a UNIX domain socket should be inserted.
If the configuration file has no ``[unix_http_server]`` section, a
UNIX domain socket HTTP server will not be started.  The allowable
configuration values are as follows.

``[unix_http_server]`` Section Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``file``

  A path to a UNIX domain socket on which supervisor will listen for
  HTTP/XML-RPC requests.  :program:`supervisorctl` uses XML-RPC to
  communicate with :program:`supervisord` over this port.  This option
  can include the value ``%(here)s``, which expands to the directory
  in which the :program:`supervisord` configuration file was found.

  *Default*:  None.

  *Required*:  No.

  *Introduced*: 3.0

.. warning::

  The example configuration output by :program:`echo_supervisord_conf` uses
  ``/tmp/supervisor.sock`` as the socket file.  That path is an example only
  and will likely need to be changed to a location more appropriate for your
  system.  Some systems periodically delete older files in ``/tmp``.  If the
  socket file is deleted, :program:`supervisorctl` will be unable to
  connect to :program:`supervisord`.

``chmod``

  Change the UNIX permission mode bits of the UNIX domain socket to
  this value at startup.

  *Default*: ``0700``

  *Required*:  No.

  *Introduced*: 3.0

``chown``

  Change the user and group of the socket file to this value.  May be
  a UNIX username (e.g. ``chrism``) or a UNIX username and group
  separated by a colon (e.g. ``chrism:wheel``).

  *Default*:  Use the username and group of the user who starts supervisord.

  *Required*:  No.

  *Introduced*: 3.0

``username``

  The username required for authentication to this HTTP server.

  *Default*:  No username required.

  *Required*:  No.

  *Introduced*: 3.0

``password``

  The password required for authentication to this HTTP server.  This
  can be a cleartext password, or can be specified as a SHA-1 hash if
  prefixed by the string ``{SHA}``.  For example,
  ``{SHA}82ab876d1387bfafe46cc1c8a2ef074eae50cb1d`` is the SHA-stored
  version of the password "thepassword".

  Note that the hashed password must be in hex format.

  *Default*:  No password required.

  *Required*:  No.

  *Introduced*: 3.0
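
  A ``{SHA}``-prefixed hash can be generated with a short Python
  snippet using only the standard library, for example:

  .. code-block:: python

     import hashlib

     # Hash a cleartext password into the {SHA}-prefixed hex form
     # accepted by the password option.
     password = "thepassword"
     hashed = "{SHA}" + hashlib.sha1(password.encode("utf-8")).hexdigest()
     print(hashed)  # {SHA}82ab876d1387bfafe46cc1c8a2ef074eae50cb1d

  The resulting string can be used directly as the value of the
  ``password`` option.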

``[unix_http_server]`` Section Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: ini

   [unix_http_server]
   file = /tmp/supervisor.sock
   chmod = 0777
   chown = nobody:nogroup
   username = user
   password = 123

``[inet_http_server]`` Section Settings
---------------------------------------

The :file:`supervisord.conf` file contains a section named
``[inet_http_server]`` under which configuration parameters for an
HTTP server that listens on a TCP (internet) socket should be
inserted.  If the configuration file has no ``[inet_http_server]``
section, an inet HTTP server will not be started.  The allowable
configuration values are as follows.

.. warning::

  The inet HTTP server is not enabled by default.  If you choose to enable it,
  please read the following security warning.  The inet HTTP server is intended
  for use within a trusted environment only.  It should only be bound to localhost
  or only accessible from within an isolated, trusted network.  The inet HTTP server
  does not support any form of encryption.  The inet HTTP server does not use
  authentication by default (see the ``username=`` and ``password=`` options).
  The inet HTTP server can be controlled remotely from :program:`supervisorctl`.
  It also serves a web interface that allows subprocesses to be started or stopped,
  and subprocess logs to be viewed.  **Never expose the inet HTTP server to the
  public internet.**

``[inet_http_server]`` Section Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``port``

  A TCP host:port value (e.g. ``127.0.0.1:9001``) on which
  supervisor will listen for HTTP/XML-RPC requests.
  :program:`supervisorctl` will use XML-RPC to communicate with
  :program:`supervisord` over this port.  To listen on all interfaces
  in the machine, use ``:9001`` or ``*:9001``.  Please read the security
  warning above.

  *Default*:  No default.

  *Required*:  Yes.

  *Introduced*: 3.0

``username``

  The username required for authentication to this HTTP server.

  *Default*:  No username required.

  *Required*:  No.

  *Introduced*: 3.0

``password``

  The password required for authentication to this HTTP server.  This
  can be a cleartext password, or can be specified as a SHA-1 hash if
  prefixed by the string ``{SHA}``.  For example,
  ``{SHA}82ab876d1387bfafe46cc1c8a2ef074eae50cb1d`` is the SHA-stored
  version of the password "thepassword".

  Note that the hashed password must be in hex format.

  *Default*:  No password required.

  *Required*:  No.

  *Introduced*: 3.0

``[inet_http_server]`` Section Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: ini

   [inet_http_server]
   port = 127.0.0.1:9001
   username = user
   password = 123

``[supervisord]`` Section Settings
----------------------------------

The :file:`supervisord.conf` file contains a section named
``[supervisord]`` in which global settings related to the
:program:`supervisord` process should be inserted.  These are as
follows.

``[supervisord]`` Section Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``logfile``

  The path to the activity log of the supervisord process.  This
  option can include the value ``%(here)s``, which expands to the
  directory in which the supervisord configuration file was found.

  .. note::

    If ``logfile`` is set to a special file like ``/dev/stdout`` that is
    not seekable, log rotation must be disabled by setting
    ``logfile_maxbytes = 0``.

  *Default*:  :file:`$CWD/supervisord.log`

  *Required*:  No.

  *Introduced*: 3.0

``logfile_maxbytes``

  The maximum number of bytes that may be consumed by the activity log
  file before it is rotated (suffix multipliers like "KB", "MB", and
  "GB" can be used in the value).  Set this value to 0 to indicate an
  unlimited log size.

  *Default*:  50MB

  *Required*:  No.

  *Introduced*: 3.0

``logfile_backups``

  The number of backups to keep around resulting from activity log
  file rotation.  If set to 0, no backups will be kept.

  *Default*:  10

  *Required*:  No.

  *Introduced*: 3.0

``loglevel``

  The logging level, dictating what is written to the supervisord
  activity log.  One of ``critical``, ``error``, ``warn``, ``info``,
  ``debug``, ``trace``, or ``blather``.  Note that at log level
  ``debug``, the supervisord log file will record the stderr/stdout
  output of its child processes and extended info about process
  state changes, which is useful for debugging a process which isn't
  starting properly.  See also: :ref:`activity_log_levels`.

  *Default*:  info

  *Required*:  No.

  *Introduced*: 3.0

``pidfile``

  The location in which supervisord keeps its pid file.  This option
  can include the value ``%(here)s``, which expands to the directory
  in which the supervisord configuration file was found.

  *Default*:  :file:`$CWD/supervisord.pid`

  *Required*:  No.

  *Introduced*: 3.0

``umask``

  The :term:`umask` of the supervisord process.

  *Default*:  ``022``

  *Required*:  No.

  *Introduced*: 3.0

``nodaemon``

  If true, supervisord will start in the foreground instead of
  daemonizing.

  *Default*:  false

  *Required*:  No.

  *Introduced*: 3.0

``silent``

  If true and not daemonized, logs will not be directed to stdout.

  *Default*:  false

  *Required*: No.

  *Introduced*: 4.2.0

``minfds``

  The minimum number of file descriptors that must be available before
  supervisord will start successfully.  A call to setrlimit will be made
  to attempt to raise the soft and hard limits of the supervisord process to
  satisfy ``minfds``.  The hard limit may only be raised if supervisord
  is run as root.  supervisord uses file descriptors liberally, and will
  enter a failure mode when one cannot be obtained from the OS, so it's
  useful to be able to specify a minimum value to ensure it doesn't run out
  of them during execution.  These limits will be inherited by the managed
  subprocesses.  This option is particularly useful on Solaris,
  which has a low per-process fd limit by default.

  *Default*:  1024

  *Required*:  No.

  *Introduced*: 3.0

``minprocs``

  The minimum number of process descriptors that must be available
  before supervisord will start successfully.  A call to setrlimit will be
  made to attempt to raise the soft and hard limits of the supervisord process
  to satisfy ``minprocs``.  The hard limit may only be raised if supervisord
  is run as root.  supervisord will enter a failure mode when the OS runs out
  of process descriptors, so it's useful to ensure that enough process
  descriptors are available upon :program:`supervisord` startup.

  *Default*:  200

  *Required*:  No.

  *Introduced*: 3.0

``nocleanup``

  Prevent supervisord from clearing any existing ``AUTO``
  child log files at startup time.  Useful for debugging.

  *Default*:  false

  *Required*:  No.

  *Introduced*: 3.0

``childlogdir``

  The directory used for ``AUTO`` child log files.  This option can
  include the value ``%(here)s``, which expands to the directory in
  which the :program:`supervisord` configuration file was found.

  *Default*: value of Python's :func:`tempfile.gettempdir`

  *Required*:  No.

  *Introduced*: 3.0

``user``

  Instruct :program:`supervisord` to switch users to this UNIX user
  account before doing any meaningful processing.  The user can only
  be switched if :program:`supervisord` is started as the root user.

  *Default*: do not switch users

  *Required*:  No.

  *Introduced*: 3.0

  *Changed*: 3.3.4.  If :program:`supervisord` can't switch to the
  specified user, it will write an error message to ``stderr`` and
  then exit immediately.  In earlier versions, it would continue to
  run but would log a message at the ``critical`` level.

``directory``

  When :program:`supervisord` daemonizes, switch to this directory.
  This option can include the value ``%(here)s``, which expands to the
  directory in which the :program:`supervisord` configuration file was
  found.

  *Default*: do not cd

  *Required*:  No.

  *Introduced*: 3.0

``strip_ansi``

  Strip all ANSI escape sequences from child log files.

  *Default*: false

  *Required*:  No.

  *Introduced*: 3.0

``environment``

  A list of key/value pairs in the form ``KEY="val",KEY2="val2"`` that
  will be placed in the environment of all child processes.  This does
  not change the environment of :program:`supervisord` itself.  This
  option can include the value ``%(here)s``, which expands to the
  directory in which the supervisord configuration file was found.
  Values containing non-alphanumeric characters should be quoted
  (e.g. ``KEY="val:123",KEY2="val,456"``).  Otherwise, quoting the
  values is optional but recommended.  To escape percent characters,
  simply use two (e.g. ``URI="/first%%20name"``).  **Note** that
  subprocesses will inherit the environment variables of the shell
  used to start :program:`supervisord` except for the ones overridden
  here and within the program's ``environment`` option.  See
  :ref:`subprocess_environment`.

  *Default*: no values

  *Required*:  No.

  *Introduced*: 3.0

``identifier``

  The identifier string for this supervisor process, used by the RPC
  interface.

  *Default*: supervisor

  *Required*:  No.

  *Introduced*: 3.0

``[supervisord]`` Section Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: ini

   [supervisord]
   logfile = /tmp/supervisord.log
   logfile_maxbytes = 50MB
   logfile_backups=10
   loglevel = info
   pidfile = /tmp/supervisord.pid
   nodaemon = false
   minfds = 1024
   minprocs = 200
   umask = 022
   user = chrism
   identifier = supervisor
   directory = /tmp
   nocleanup = true
   childlogdir = /tmp
   strip_ansi = false
   environment = KEY1="value1",KEY2="value2"

``[supervisorctl]`` Section Settings
------------------------------------

The configuration file may contain settings for the
:program:`supervisorctl` interactive shell program.  These options
are listed below.

``[supervisorctl]`` Section Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``serverurl``

  The URL that should be used to access the supervisord server,
  e.g. ``http://localhost:9001``.  For UNIX domain sockets, use
  ``unix:///absolute/path/to/file.sock``.

  *Default*: ``http://localhost:9001``

  *Required*:  No.

  *Introduced*: 3.0

``username``

  The username to pass to the supervisord server for use in
  authentication.  This should be same as ``username`` from the
  supervisord server configuration for the port or UNIX domain socket
  you're attempting to access.

  *Default*: No username

  *Required*:  No.

  *Introduced*: 3.0

``password``

  The password to pass to the supervisord server for use in
  authentication. This should be the cleartext version of ``password``
  from the supervisord server configuration for the port or UNIX
  domain socket you're attempting to access.  This value cannot be
  passed as a SHA hash.  Unlike other passwords specified in this
  file, it must be provided in cleartext.

  *Default*: No password

  *Required*:  No.

  *Introduced*: 3.0

``prompt``

  String used as supervisorctl prompt.

  *Default*: ``supervisor``

  *Required*:  No.

  *Introduced*: 3.0

``history_file``

  A path to use as the ``readline`` persistent history file.  If you
  enable this feature by choosing a path, your supervisorctl commands
  will be kept in the file, and you can use readline (e.g. arrow-up)
  to invoke commands you performed in your last supervisorctl session.

  *Default*: No file

  *Required*:  No.

  *Introduced*: 3.0a5

``[supervisorctl]`` Section Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: ini

   [supervisorctl]
   serverurl = unix:///tmp/supervisor.sock
   username = chris
   password = 123
   prompt = mysupervisor

.. _programx_section:

``[program:x]`` Section Settings
--------------------------------

The configuration file must contain one or more ``program`` sections
in order for supervisord to know which programs it should start and
control.  The header value is a composite value.  It is the word
"program", followed directly by a colon, then the program name.  A
header value of ``[program:foo]`` describes a program with the name of
"foo".  The name is used within client applications that control the
processes that are created as a result of this configuration.  It is
an error to create a ``program`` section that does not have a name.
The name must not include a colon character or a bracket character.
The value of the name is used as the value for the
``%(program_name)s`` string expression expansion within other values
where specified.

.. note::

   A ``[program:x]`` section actually represents a "homogeneous
   process group" to supervisor (as of 3.0).  The members of the group
   are defined by the combination of the ``numprocs`` and
   ``process_name`` parameters in the configuration.  By default, if
   numprocs and process_name are left unchanged from their defaults,
   the group represented by ``[program:x]`` will be named ``x`` and
   will have a single process named ``x`` in it.  This provides a
   modicum of backwards compatibility with older supervisor releases,
   which did not treat program sections as homogeneous process group
   definitions.

   But for instance, if you have a ``[program:foo]`` section with a
   ``numprocs`` of 3 and a ``process_name`` expression of
   ``%(program_name)s_%(process_num)02d``, the "foo" group will
   contain three processes, named ``foo_00``, ``foo_01``, and
   ``foo_02``.  This makes it possible to start a number of very
   similar processes using a single ``[program:x]`` section.  All
   logfile names, all environment strings, and the command of programs
   can also contain similar Python string expressions, to pass
   slightly different parameters to each process.
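
   For example, a hypothetical ``[program:foo]`` section defining such
   a three-process group might look like this:

   .. code-block:: ini

      [program:foo]
      command=/path/to/foo --port=80%(process_num)02d
      numprocs=3
      process_name=%(program_name)s_%(process_num)02d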

``[program:x]`` Section Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``command``

  The command that will be run when this program is started.  The
  command can be either absolute (e.g. ``/path/to/programname``) or
  relative (e.g. ``programname``).  If it is relative, the
  supervisord's environment ``$PATH`` will be searched for the
  executable.  Programs can accept arguments, e.g. ``/path/to/program
  foo bar``.  The command line can use double quotes to group
  arguments with spaces in them to pass to the program,
  e.g. ``/path/to/program/name -p "foo bar"``.  Note that the value of
  ``command`` may include Python string expressions,
  e.g. ``/path/to/programname --port=80%(process_num)02d`` might
  expand to ``/path/to/programname --port=8000`` at runtime.  String
  expressions are evaluated against a dictionary containing the keys
  ``group_name``, ``host_node_name``, ``program_name``, ``process_num``,
  ``numprocs``, ``here`` (the directory of the supervisord config file),
  and all supervisord's environment variables prefixed with ``ENV_``.
  Controlled programs should themselves not be daemons, as supervisord
  assumes it is responsible for daemonizing its subprocesses (see
  :ref:`nondaemonizing_of_subprocesses`).

  .. note::

    The command will be truncated if it looks like a config file comment,
    e.g. ``command=bash -c 'foo ; bar'`` will be truncated to
    ``command=bash -c 'foo``.  Quoting will not prevent this behavior,
    since the configuration file reader does not parse the command like
    a shell would.

  *Default*: No default.

  *Required*:  Yes.

  *Introduced*: 3.0

  *Changed*: 4.2.0.  Added support for the ``numprocs`` expansion.

``process_name``

  A Python string expression that is used to compose the supervisor
  process name for this process.  You usually don't need to worry
  about setting this unless you change ``numprocs``.  The string
  expression is evaluated against a dictionary that includes
  ``group_name``, ``host_node_name``, ``process_num``, ``program_name``,
  and ``here`` (the directory of the supervisord config file).

  *Default*: ``%(program_name)s``

  *Required*:  No.

  *Introduced*: 3.0

``numprocs``

  Supervisor will start as many instances of this program as named by
  numprocs.  Note that if numprocs > 1, the ``process_name``
  expression must include ``%(process_num)s`` (or any other
  valid Python string expression that includes ``process_num``) within
  it.

  *Default*: 1

  *Required*:  No.

  *Introduced*: 3.0

``numprocs_start``

  An integer offset that is used to compute the number at which
  ``process_num`` starts.

  *Default*: 0

  *Required*:  No.

  *Introduced*: 3.0
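
  For example, with the hypothetical settings below, ``process_num``
  counts from 1 instead of 0, yielding processes named ``worker_1``
  and ``worker_2``:

  .. code-block:: ini

     [program:worker]
     command=/path/to/worker
     numprocs=2
     numprocs_start=1
     process_name=%(program_name)s_%(process_num)d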

``priority``

  The relative priority of the program in the start and shutdown
  ordering.  Lower priorities indicate programs that start first and
  shut down last at startup and when aggregate commands are used in
  various clients (e.g. "start all"/"stop all").  Higher priorities
  indicate programs that start last and shut down first.

  *Default*: 999

  *Required*:  No.

  *Introduced*: 3.0

``autostart``

  If true, this program will start automatically when supervisord is
  started.

  *Default*: true

  *Required*:  No.

  *Introduced*: 3.0

``startsecs``

  The total number of seconds which the program needs to stay running
  after a startup to consider the start successful (moving the process
  from the ``STARTING`` state to the ``RUNNING`` state).  Set to ``0``
  to indicate that the program needn't stay running for any particular
  amount of time.

  .. note::

      Even if a process exits with an "expected" exit code (see
      ``exitcodes``), the start will still be considered a failure
      if the process exits quicker than ``startsecs``.

  *Default*: 1

  *Required*:  No.

  *Introduced*: 3.0

``startretries``

  The number of serial failure attempts that :program:`supervisord`
  will allow when attempting to start the program before giving up and
  putting the process into a ``FATAL`` state.

  .. note::

      After each failed restart, the process will be put in the
      ``BACKOFF`` state and each retry attempt will take increasingly
      more time.

      See :ref:`process_states` for explanation of the ``FATAL`` and
      ``BACKOFF`` states.

  *Default*: 3

  *Required*:  No.

  *Introduced*: 3.0

``autorestart``

  Specifies if :program:`supervisord` should automatically restart a
  process if it exits when it is in the ``RUNNING`` state.  May be
  one of ``false``, ``unexpected``, or ``true``.  If ``false``, the
  process will not be autorestarted.  If ``unexpected``, the process
  will be restarted when the program exits with an exit code that is
  not one of the exit codes associated with this process' configuration
  (see ``exitcodes``).  If ``true``, the process will be unconditionally
  restarted when it exits, without regard to its exit code.

  .. note::

      ``autorestart`` controls whether :program:`supervisord` will
      autorestart a program if it exits after it has successfully started
      up (the process is in the ``RUNNING`` state).

      :program:`supervisord` has a different restart mechanism for when the
      process is starting up (the process is in the ``STARTING`` state).
      Retries during process startup are controlled by ``startsecs``
      and ``startretries``.

  *Default*: unexpected

  *Required*:  No.

  *Introduced*: 3.0

``exitcodes``

  The list of "expected" exit codes for this program used with ``autorestart``.
  If the ``autorestart`` parameter is set to ``unexpected``, and the process
  exits in any other way than as a result of a supervisor stop
  request, :program:`supervisord` will restart the process if it exits
  with an exit code that is not defined in this list.

  *Default*: 0

  *Required*:  No.

  *Introduced*: 3.0

  .. note::

      In Supervisor versions prior to 4.0, the default was ``0,2``.  In
      Supervisor 4.0, the default was changed to ``0``.
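
  For example, with the hypothetical settings below, exit codes 0 and
  2 are treated as expected, so with ``autorestart`` set to
  ``unexpected`` the process will only be restarted if it exits with
  some other code:

  .. code-block:: ini

     [program:example]
     command=/path/to/example
     autorestart=unexpected
     exitcodes=0,2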

``stopsignal``

  The signal used to kill the program when a stop is requested.  This can be
  specified using the signal's name or its number.  It is normally one of:
  ``TERM``, ``HUP``, ``INT``, ``QUIT``, ``KILL``, ``USR1``, or ``USR2``.

  *Default*: TERM

  *Required*:  No.

  *Introduced*: 3.0

``stopwaitsecs``

  The number of seconds to wait for the OS to return a SIGCHLD to
  :program:`supervisord` after the program has been sent a stopsignal.
  If this number of seconds elapses before :program:`supervisord`
  receives a SIGCHLD from the process, :program:`supervisord` will
  attempt to kill it with a final SIGKILL.

  *Default*: 10

  *Required*:  No.

  *Introduced*: 3.0

``stopasgroup``

  If true, the flag causes supervisor to send the stop signal to the
  whole process group and implies ``killasgroup`` is true.  This is useful
  for programs, such as Flask in debug mode, that do not propagate
  stop signals to their children, leaving them orphaned.

  *Default*: false

  *Required*:  No.

  *Introduced*: 3.0b1

``killasgroup``

  If true, when supervisord resorts to sending SIGKILL to the program
  to terminate it, the signal will be sent to its whole process group
  instead, taking care of its children as well.  This is useful
  e.g. with Python programs using :mod:`multiprocessing`.

  *Default*: false

  *Required*:  No.

  *Introduced*: 3.0a11

``user``

  Instruct :program:`supervisord` to use this UNIX user account as the
  account which runs the program.  The user can only be switched if
  :program:`supervisord` is run as the root user.  If :program:`supervisord`
  can't switch to the specified user, the program will not be started.

  .. note::

      The user will be changed using ``setuid`` only.  This does not start
      a login shell and does not change environment variables like
      ``USER`` or ``HOME``.  See :ref:`subprocess_environment` for details.

  *Default*: Do not switch users

  *Required*:  No.

  *Introduced*: 3.0
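
  Because only ``setuid`` is performed, environment variables such as
  ``HOME`` and ``USER`` still reflect the account that started
  :program:`supervisord`.  If the program needs them, they can be set
  explicitly with the ``environment`` option, as in this hypothetical
  example:

  .. code-block:: ini

     [program:example]
     command=/path/to/example
     user=www-data
     environment=HOME="/home/www-data",USER="www-data"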

``redirect_stderr``

  If true, cause the process' stderr output to be sent back to
  :program:`supervisord` on its stdout file descriptor (in UNIX shell
  terms, this is the equivalent of executing ``/the/program 2>&1``).

  .. note::

     Do not set ``redirect_stderr=true`` in an ``[eventlistener:x]`` section.
     Eventlisteners use ``stdout`` and ``stdin`` to communicate with
     ``supervisord``.  If ``stderr`` is redirected, output from
     ``stderr`` will interfere with the eventlistener protocol.

  *Default*: false

  *Required*:  No.

  *Introduced*: 3.0, replaces 2.0's ``log_stdout`` and ``log_stderr``

``stdout_logfile``

  Put process stdout output in this file (and if redirect_stderr is
  true, also place stderr output in this file).  If ``stdout_logfile``
  is unset or set to ``AUTO``, supervisor will automatically choose a
  file location.  If this is set to ``NONE``, supervisord will create
  no log file.  ``AUTO`` log files and their backups will be deleted
  when :program:`supervisord` restarts.  The ``stdout_logfile`` value
  can contain Python string expressions that will be evaluated against a
  dictionary that contains the keys ``group_name``, ``host_node_name``,
  ``process_num``, ``program_name``, and ``here`` (the directory of the
  supervisord config file).

  .. note::

     It is not possible for two processes to share a single log file
     (``stdout_logfile``) when rotation (``stdout_logfile_maxbytes``)
     is enabled.  This will result in the file being corrupted.

  .. note::

    If ``stdout_logfile`` is set to a special file like ``/dev/stdout``
    that is not seekable, log rotation must be disabled by setting
    ``stdout_logfile_maxbytes = 0``.

  *Default*: ``AUTO``

  *Required*:  No.

  *Introduced*: 3.0, replaces 2.0's ``logfile``
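
  For example, a process might send its output to supervisord's own
  stdout with rotation disabled, a pattern sometimes used when running
  in containers (hypothetical values):

  .. code-block:: ini

     [program:example]
     command=/path/to/example
     stdout_logfile=/dev/stdout
     stdout_logfile_maxbytes=0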

``stdout_logfile_maxbytes``

  The maximum number of bytes that may be consumed by
  ``stdout_logfile`` before it is rotated (suffix multipliers like
  "KB", "MB", and "GB" can be used in the value).  Set this value to 0
  to indicate an unlimited log size.

  *Default*: 50MB

  *Required*:  No.

  *Introduced*: 3.0, replaces 2.0's ``logfile_maxbytes``

``stdout_logfile_backups``

  The number of ``stdout_logfile`` backups to keep around resulting
  from process stdout log file rotation.  If set to 0, no backups
  will be kept.

  *Default*: 10

  *Required*:  No.

  *Introduced*: 3.0, replaces 2.0's ``logfile_backups``

``stdout_capture_maxbytes``

  Max number of bytes written to capture FIFO when process is in
  "stdout capture mode" (see :ref:`capture_mode`).  Should be an
  integer (suffix multipliers like "KB", "MB" and "GB" can be used in the
  value).  If this value is 0, process capture mode will be off.

  *Default*: 0

  *Required*:  No.

  *Introduced*: 3.0

``stdout_events_enabled``

  If true, PROCESS_LOG_STDOUT events will be emitted when the process
  writes to its stdout file descriptor.  The events will only be
  emitted if the file descriptor is not in capture mode at the time
  the data is received (see :ref:`capture_mode`).

  *Default*: false

  *Required*:  No.

  *Introduced*: 3.0a7

``stdout_syslog``

  If true, stdout will be directed to syslog along with the process name.

  *Default*: false

  *Required*:  No.

  *Introduced*: 4.0.0

``stderr_logfile``

  Put process stderr output in this file unless ``redirect_stderr`` is
  true.  Accepts the same value types as ``stdout_logfile`` and may
  contain the same Python string expressions.

  .. note::

     It is not possible for two processes to share a single log file
     (``stderr_logfile``) when rotation (``stderr_logfile_maxbytes``)
     is enabled.  This will result in the file being corrupted.

  .. note::

    If ``stderr_logfile`` is set to a special file like ``/dev/stderr``
    that is not seekable, log rotation must be disabled by setting
    ``stderr_logfile_maxbytes = 0``.

  *Default*: ``AUTO``

  *Required*:  No.

  *Introduced*: 3.0

``stderr_logfile_maxbytes``

  The maximum number of bytes before logfile rotation for
  ``stderr_logfile``.  Accepts the same value types as
  ``stdout_logfile_maxbytes``.

  *Default*: 50MB

  *Required*:  No.

  *Introduced*: 3.0

``stderr_logfile_backups``

  The number of backups to keep around resulting from process stderr
  log file rotation.  If set to 0, no backups will be kept.

  *Default*: 10

  *Required*:  No.

  *Introduced*: 3.0

``stderr_capture_maxbytes``

  Max number of bytes written to capture FIFO when process is in
  "stderr capture mode" (see :ref:`capture_mode`).  Should be an
  integer (suffix multipliers like "KB", "MB" and "GB" can be used in the
  value).  If this value is 0, process capture mode will be off.

  *Default*: 0

  *Required*:  No.

  *Introduced*: 3.0

``stderr_events_enabled``

  If true, PROCESS_LOG_STDERR events will be emitted when the process
  writes to its stderr file descriptor.  The events will only be
  emitted if the file descriptor is not in capture mode at the time
  the data is received (see :ref:`capture_mode`).

  *Default*: false

  *Required*:  No.

  *Introduced*: 3.0a7

``stderr_syslog``

  If true, stderr will be directed to syslog along with the process name.

  *Default*: false

  *Required*:  No.

  *Introduced*: 4.0.0

``environment``

  A list of key/value pairs in the form ``KEY="val",KEY2="val2"`` that
  will be placed in the child process' environment.  The environment
  string may contain Python string expressions that will be evaluated
  against a dictionary containing ``group_name``, ``host_node_name``,
  ``process_num``, ``program_name``, and ``here`` (the directory of the
  supervisord config file).  Values containing non-alphanumeric characters
  should be quoted (e.g. ``KEY="val:123",KEY2="val,456"``).  Otherwise,
  quoting the values is optional but recommended.  **Note** that the
  subprocess will inherit the environment variables of the shell used to
  start "supervisord" except for the ones overridden here.  See
  :ref:`subprocess_environment`.

  *Default*: No extra environment

  *Required*:  No.

  *Introduced*: 3.0

``directory``

  A file path representing a directory to which :program:`supervisord`
  should temporarily chdir before exec'ing the child.

  *Default*: No chdir (inherit supervisor's)

  *Required*:  No.

  *Introduced*: 3.0

``umask``

  An octal number (e.g. 002, 022) representing the umask of the
  process.

  *Default*: No special umask (inherit supervisor's)

  *Required*:  No.

  *Introduced*: 3.0

``serverurl``

  The URL passed in the environment to the subprocess process as
  ``SUPERVISOR_SERVER_URL`` (see :mod:`supervisor.childutils`) to
  allow the subprocess to easily communicate with the internal HTTP
  server.  If provided, it should have the same syntax and structure
  as the ``[supervisorctl]`` section option of the same name.  If this
  is set to AUTO, or is unset, supervisor will automatically construct
  a server URL, giving preference to a server that listens on UNIX
  domain sockets over one that listens on an internet socket.

  *Default*: AUTO

  *Required*:  No.

  *Introduced*: 3.0

``[program:x]`` Section Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: ini

   [program:cat]
   command=/bin/cat
   process_name=%(program_name)s
   numprocs=1
   directory=/tmp
   umask=022
   priority=999
   autostart=true
   autorestart=unexpected
   startsecs=10
   startretries=3
   exitcodes=0
   stopsignal=TERM
   stopwaitsecs=10
   stopasgroup=false
   killasgroup=false
   user=chrism
   redirect_stderr=false
   stdout_logfile=/a/path
   stdout_logfile_maxbytes=1MB
   stdout_logfile_backups=10
   stdout_capture_maxbytes=1MB
   stdout_events_enabled=false
   stderr_logfile=/a/path
   stderr_logfile_maxbytes=1MB
   stderr_logfile_backups=10
   stderr_capture_maxbytes=1MB
   stderr_events_enabled=false
   environment=A="1",B="2"
   serverurl=AUTO

``[include]`` Section Settings
------------------------------

The :file:`supervisord.conf` file may contain a section named
``[include]``.  If the configuration file contains an ``[include]``
section, it must contain a single key named "files".  The values in
this key specify other configuration files to be included within the
configuration.

.. note::

    The ``[include]`` section is processed only by ``supervisord``.  It is
    ignored by ``supervisorctl``.


``[include]`` Section Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``files``

  A space-separated sequence of file globs.  Each file glob may be
  absolute or relative.  If the file glob is relative, it is
  considered relative to the location of the configuration file which
  includes it.  A "glob" is a file pattern which matches a specified
  pattern according to the rules used by the Unix shell. No tilde
  expansion is done, but ``*``, ``?``, and character ranges expressed
  with ``[]`` will be correctly matched.  The string expression is
  evaluated against a dictionary that includes ``host_node_name``
  and ``here`` (the directory of the supervisord config file).  Recursive
  includes from included files are not supported.

  *Default*: No default (required)

  *Required*:  Yes.

  *Introduced*: 3.0

  *Changed*: 3.3.0.  Added support for the ``host_node_name`` expansion.

``[include]`` Section Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: ini

   [include]
   files = /an/absolute/filename.conf /an/absolute/*.conf foo.conf config??.conf

``[group:x]`` Section Settings
------------------------------

It is often useful to group "homogeneous" process groups (aka
"programs") together into a "heterogeneous" process group so they can
be controlled as a unit from Supervisor's various controller
interfaces.

To place programs into a group so you can treat them as a unit, define
a ``[group:x]`` section in your configuration file.  The group header
value is a composite.  It is the word "group", followed directly by a
colon, then the group name.  A header value of ``[group:foo]``
describes a group with the name of "foo".  The name is used within
client applications that control the processes that are created as a
result of this configuration.  It is an error to create a ``group``
section that does not have a name.  The name must not include a colon
character or a bracket character.

For a ``[group:x]``, there must be one or more ``[program:x]``
sections elsewhere in your configuration file, and the group must
refer to them by name in the ``programs`` value.

If "homogeneous" process groups (represented by program sections) are
placed into a "heterogeneous" group via ``[group:x]`` section's
``programs`` line, the homogeneous groups that are implied by the
program section will not exist at runtime in supervisor.  Instead, all
processes belonging to each of the homogeneous groups will be placed
into the heterogeneous group.  For example, given the following group
configuration:

.. code-block:: ini

   [group:foo]
   programs=bar,baz
   priority=999

Given the above, at supervisord startup, the ``bar`` and ``baz``
homogeneous groups will not exist, and the processes that would have
been under them will now be moved into the ``foo`` group.

``[group:x]`` Section Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``programs``

  A comma-separated list of program names.  The programs which are
  listed become members of the group.

  *Default*: No default (required)

  *Required*:  Yes.

  *Introduced*: 3.0

``priority``

  A priority number analogous to a ``[program:x]`` priority value
  assigned to the group.

  *Default*: 999

  *Required*:  No.

  *Introduced*: 3.0

``[group:x]`` Section Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: ini

   [group:foo]
   programs=bar,baz
   priority=999


``[fcgi-program:x]`` Section Settings
-------------------------------------

Supervisor can manage groups of FastCGI processes that all listen on
the same socket.  Until now, deployment
flexibility for FastCGI was limited.  To get full process management,
you could use mod_fastcgi under Apache but then you were stuck with
Apache's inefficient concurrency model of one process or thread per
connection.  In addition to requiring more CPU and memory resources,
the process/thread per connection model can be quickly saturated by a
slow resource, preventing other resources from being served.  In order
to take advantage of newer event-driven web servers such as lighttpd
or nginx which don't include a built-in process manager, you had to
use scripts like cgi-fcgi or spawn-fcgi.  These can be used in
conjunction with a process manager such as supervisord or daemontools
but require each FastCGI child process to bind to its own socket.
The disadvantages of this are: unnecessarily complicated web server
configuration, ungraceful restarts, and reduced fault tolerance.  With
fewer sockets to configure, web server configurations are much smaller
if groups of FastCGI processes can share sockets.  Shared sockets
allow for graceful restarts because the socket remains bound by the
parent process while any of the child processes are being restarted.
Finally, shared sockets are more fault tolerant because if a given
process fails, other processes can continue to serve inbound
connections.

With integrated FastCGI spawning support, Supervisor gives you the
best of both worlds.  You get full-featured process management with
groups of FastCGI processes sharing sockets without being tied to a
particular web server.  It's a clean separation of concerns, allowing
the web server and the process manager to each do what they do best.

.. note::

   The socket manager in Supervisor was originally developed to support
   FastCGI processes but it is not limited to FastCGI.  Other protocols may
   be used as well with no special configuration.  Any program that can
   access an open socket from a file descriptor (e.g. with
   ``socket.fromfd`` in Python) can use the socket manager.  Supervisor
   will automatically
   create the socket, bind, and listen before forking the first child in a
   group.  The socket will be passed to each child on file descriptor
   number ``0`` (zero).  When the last child in the group exits,
   Supervisor will close the socket.
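A child process can recover the inherited socket in Python roughly as
follows.  This is only a sketch, assuming a TCP ``socket=`` value; the
helper name is hypothetical and is not part of the supervisor API:

.. code-block:: python

   import socket

   def inherited_listener():
       # supervisor passes the already-bound, listening socket on
       # file descriptor 0; the address family must match the
       # socket= value in the config (use AF_UNIX for unix:// sockets)
       return socket.fromfd(0, socket.AF_INET, socket.SOCK_STREAM)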

.. note::

   Prior to Supervisor 3.4.0, FastCGI programs (``[fcgi-program:x]``)
   could not be referenced in groups (``[group:x]``).

All the options available to ``[program:x]`` sections are
also respected by ``fcgi-program`` sections.

``[fcgi-program:x]`` Section Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``[fcgi-program:x]`` sections have a few keys which ``[program:x]``
sections do not have.

``socket``

  The FastCGI socket for this program, either TCP or UNIX domain
  socket. For TCP sockets, use this format: ``tcp://localhost:9002``.
  For UNIX domain sockets, use ``unix:///absolute/path/to/file.sock``.
  String expressions are evaluated against a dictionary containing the
  keys "program_name" and "here" (the directory of the supervisord
  config file).

  *Default*: No default.

  *Required*:  Yes.

  *Introduced*: 3.0

``socket_backlog``

  Sets socket listen(2) backlog.

  *Default*: socket.SOMAXCONN

  *Required*:  No.

  *Introduced*: 3.4.0

``socket_owner``

  For UNIX domain sockets, this parameter can be used to specify the user
  and group for the FastCGI socket. May be a UNIX username (e.g. chrism)
  or a UNIX username and group separated by a colon (e.g. chrism:wheel).

  *Default*: Uses the user and group set for the fcgi-program

  *Required*:  No.

  *Introduced*: 3.0

``socket_mode``

  For UNIX domain sockets, this parameter can be used to specify the
  permission mode.

  *Default*: 0700

  *Required*:  No.

  *Introduced*: 3.0

Consult :ref:`programx_section` for other allowable keys, subject to
the above constraints and additions.

``[fcgi-program:x]`` Section Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: ini

   [fcgi-program:fcgiprogramname]
   command=/usr/bin/example.fcgi
   socket=unix:///var/run/supervisor/%(program_name)s.sock
   socket_owner=chrism
   socket_mode=0700
   process_name=%(program_name)s_%(process_num)02d
   numprocs=5
   directory=/tmp
   umask=022
   priority=999
   autostart=true
   autorestart=unexpected
   startsecs=1
   startretries=3
   exitcodes=0
   stopsignal=QUIT
   stopasgroup=false
   killasgroup=false
   stopwaitsecs=10
   user=chrism
   redirect_stderr=true
   stdout_logfile=/a/path
   stdout_logfile_maxbytes=1MB
   stdout_logfile_backups=10
   stdout_events_enabled=false
   stderr_logfile=/a/path
   stderr_logfile_maxbytes=1MB
   stderr_logfile_backups=10
   stderr_events_enabled=false
   environment=A="1",B="2"
   serverurl=AUTO

``[eventlistener:x]`` Section Settings
--------------------------------------

Supervisor allows specialized homogeneous process groups ("event
listener pools") to be defined within the configuration file.  These
pools contain processes that are meant to receive and respond to event
notifications from supervisor's event system.  See :ref:`events` for
an explanation of how events work and how to implement programs that
can be declared as event listeners.

Note that all the options available to ``[program:x]`` sections are
respected by eventlistener sections *except* for ``stdout_capture_maxbytes``.
Eventlisteners cannot emit process communication events on ``stdout``,
but can emit on ``stderr`` (see :ref:`capture_mode`).

``[eventlistener:x]`` Section Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``[eventlistener:x]`` sections have a few keys which ``[program:x]``
sections do not have.

``buffer_size``

  The event listener pool's event queue buffer size.  When a listener
  pool's event buffer is overflowed (as can happen when an event
  listener pool cannot keep up with all of the events sent to it), the
  oldest event in the buffer is discarded.

``events``

  A comma-separated list of event type names that this listener is
  "interested" in receiving notifications for (see
  :ref:`event_types` for a list of valid event type names).

``result_handler``

  A ``pkg_resources`` "entry point" string that resolves to a Python
  callable.  The default value is
  ``supervisor.dispatchers:default_handler``.  Specifying an alternate
  result handler is a very uncommon thing to need to do, and as a
  result, how to create one is not documented.

Consult :ref:`programx_section` for other allowable keys, subject to
the above constraints and additions.

``[eventlistener:x]`` Section Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: ini

   [eventlistener:theeventlistenername]
   command=/bin/eventlistener
   process_name=%(program_name)s_%(process_num)02d
   numprocs=5
   events=PROCESS_STATE
   buffer_size=10
   directory=/tmp
   umask=022
   priority=-1
   autostart=true
   autorestart=unexpected
   startsecs=1
   startretries=3
   exitcodes=0
   stopsignal=QUIT
   stopwaitsecs=10
   stopasgroup=false
   killasgroup=false
   user=chrism
   redirect_stderr=false
   stdout_logfile=/a/path
   stdout_logfile_maxbytes=1MB
   stdout_logfile_backups=10
   stdout_events_enabled=false
   stderr_logfile=/a/path
   stderr_logfile_maxbytes=1MB
   stderr_logfile_backups=10
   stderr_events_enabled=false
   environment=A="1",B="2"
   serverurl=AUTO

``[rpcinterface:x]`` Section Settings
-------------------------------------

Adding ``rpcinterface:x`` settings in the configuration file is only
useful for people who wish to extend supervisor with additional custom
behavior.

In the sample config file (see :ref:`create_config`), there is a section
which is named ``[rpcinterface:supervisor]``.  By default it looks like the
following.

.. code-block:: ini

   [rpcinterface:supervisor]
   supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

The ``[rpcinterface:supervisor]`` section *must* remain in the
configuration for the standard setup of supervisor to work properly.
If you don't want supervisor to do anything it doesn't already do out
of the box, this is all you need to know about this type of section.

However, if you wish to add rpc interface namespaces in order to
customize supervisor, you may add additional ``[rpcinterface:foo]``
sections, where "foo" represents the namespace of the interface (from
the web root), and the value named by
``supervisor.rpcinterface_factory`` is a factory callable which should
have a function signature that accepts a single positional argument
``supervisord`` and as many keyword arguments as required to perform
configuration.  Any extra key/value pairs defined within the
``[rpcinterface:x]`` section will be passed as keyword arguments to
the factory.

Here's an example of a factory function, created in the
``__init__.py`` file of the Python package ``my.package``.

.. code-block:: python

   from my.package.rpcinterface import AnotherRPCInterface

   def make_another_rpcinterface(supervisord, **config):
       retries = int(config.get('retries', 0))
       another_rpc_interface = AnotherRPCInterface(supervisord, retries)
       return another_rpc_interface

And a section in the config file meant to configure it.

.. code-block:: ini

   [rpcinterface:another]
   supervisor.rpcinterface_factory = my.package:make_another_rpcinterface
   retries = 1

``[rpcinterface:x]`` Section Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``supervisor.rpcinterface_factory``

  ``pkg_resources`` "entry point" dotted name to your RPC interface's
  factory function.

  *Default*: N/A

  *Required*:  No.

  *Introduced*: 3.0

``[rpcinterface:x]`` Section Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: ini

   [rpcinterface:another]
   supervisor.rpcinterface_factory = my.package:make_another_rpcinterface
   retries = 1
Resources and Development
=========================

Bug Tracker
-----------

Supervisor has a bugtracker where you may report any bugs or other
errors you find.  Please report bugs to the `GitHub issues page
`_.

Version Control Repository
--------------------------

You can also view the `Supervisor version control repository
`_.

Contributing
------------

We'll review contributions from the community in
`pull requests `_
on GitHub.

Author Information
------------------

The following people are responsible for creating Supervisor.

Original Author
~~~~~~~~~~~~~~~

- `Chris McDonough `_ is the original author of
  Supervisor.

Contributors
~~~~~~~~~~~~

Contributors are tracked on the `GitHub contributions page
`_.  The two lists
below are included for historical reasons.

This first list recognizes significant contributions that were made
before the repository moved to GitHub.

- Anders Quist: Anders contributed the patch that was the basis for
  Supervisor’s ability to reload parts of its configuration without
  restarting.

- Derek DeVries: Derek did the web design of Supervisor’s internal web
  interface and website logos.

- Guido van Rossum: Guido authored ``zdrun`` and ``zdctl``, the
  programs from Zope that were the original basis for Supervisor.  He
  also created Python, the programming language that Supervisor is
  written in.

- Jason Kirtland: Jason fixed Supervisor to run on Python 2.6 by
  contributing a patched version of Medusa (a Supervisor dependency)
  that we now bundle.

- Roger Hoover: Roger added support for spawning FastCGI programs. He
  has also been one of the most active mailing list users, providing
  his testing and feedback.

- Siddhant Goel: Siddhant worked on :program:`supervisorctl` as our
  Google Summer of Code student for 2008. He implemented the ``fg``
  command and also added tab completion.

This second list records contributors who signed a legal agreement.
The legal agreement was
`introduced `_
in January 2014 but later
`withdrawn `_
in March 2014.  This list is being preserved in case it is useful
later (e.g. if at some point there was a desire to donate the project
to a foundation that required such agreements).

- Chris McDonough, 2006-06-26

- Siddhant Goel, 2008-06-15

- Chris Rossi, 2010-02-02

- Roger Hoover, 2010-08-17

- Benoit Sigoure, 2011-06-21

- John Szakmeister, 2011-09-06

- Gunnlaugur Þór Briem, 2011-11-26

- Jens Rantil, 2011-11-27

- Michael Blume, 2012-01-09

- Philip Zeyliger, 2012-02-21

- Marcelo Vanzin, 2012-05-03

- Martijn Pieters, 2012-06-04

- Marcin Kuźmiński, 2012-06-21

- Jean Jordaan, 2012-06-28

- Perttu Ranta-aho, 2012-09-27

- Chris Streeter, 2013-03-23

- Caio Ariede, 2013-03-25

- David Birdsong, 2013-04-11

- Lukas Rist, 2013-04-18

- Honza Pokorny, 2013-07-23

- Thúlio Costa, 2013-10-31

- Gary M. Josack, 2013-11-12

- Márk Sági-Kazár, 2013-12-16
.. _events:

Events
======

Events are an advanced feature of Supervisor introduced in version
3.0.  You don't need to understand events if you simply want to use
Supervisor as a mechanism to restart crashed processes or as a system
to manually control process state.  You do need to understand events
if you want to use Supervisor as part of a process
monitoring/notification framework.

Event Listeners and Event Notifications
---------------------------------------

Supervisor provides a way for a specially written program (which it
runs as a subprocess) called an "event listener" to subscribe to
"event notifications".  An event notification implies that something
happened related to a subprocess controlled by :program:`supervisord`
or to :program:`supervisord` itself.  Event notifications are grouped
into types in order to make it possible for event listeners to
subscribe to a limited subset of event notifications.  Supervisor
continually emits event notifications as it runs, even if there are
no listeners configured.  If a listener is configured and subscribed
to an event type that is emitted during a :program:`supervisord`
lifetime, that listener will be notified.

The purpose of the event notification/subscription system is to
provide a mechanism for arbitrary code to be run (e.g. send an email,
make an HTTP request, etc) when some condition is met.  That condition
usually has to do with subprocess state.  For instance, you may want
to notify someone via email when a process crashes and is restarted by
Supervisor.

The event notification protocol is based on communication via a
subprocess' stdin and stdout.  Supervisor sends specially-formatted
input to an event listener process' stdin and expects
specially-formatted output from an event listener's stdout, forming a
request-response cycle.  A protocol agreed upon between supervisor and
the listener's implementer allows listeners to process event
notifications.  Event listeners can be written in any language
supported by the platform you're using to run Supervisor.  Although
event listeners may be written in any language, there is special
library support for Python in the form of a
:mod:`supervisor.childutils` module, which makes creating event
listeners in Python slightly easier than in other languages.

Configuring an Event Listener
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A supervisor event listener is specified via a ``[eventlistener:x]``
section in the configuration file.  Supervisor ``[eventlistener:x]``
sections are treated almost exactly like supervisor ``[program:x]``
sections with respect to the keys allowed in their configuration,
except that Supervisor does not respect "capture mode" output from
event listener processes (ie. event listeners cannot be
``PROCESS_COMMUNICATIONS_EVENT`` event generators).  Therefore it is
an error to specify ``stdout_capture_maxbytes`` or
``stderr_capture_maxbytes`` in the configuration of an eventlistener.
There is no artificial constraint on the number of eventlistener
sections that can be placed into the configuration file.

When an ``[eventlistener:x]`` section is defined, it actually defines
a "pool", where the number of event listeners in the pool is
determined by the ``numprocs`` value within the section.

The ``events`` parameter of the ``[eventlistener:x]`` section
specifies the events that will be sent to a listener pool.  A
well-written event listener will ignore events that it cannot process,
but there is no guarantee that a specific event listener won't crash
as a result of receiving an event type it cannot handle.  Therefore,
depending on the listener implementation, it may be important to
specify in the configuration that it may receive only certain types of
events.  The implementor of the event listener is the only person who
can tell you what these are (and therefore what value to put in the
``events`` configuration).  Examples of eventlistener
configurations that can be placed in ``supervisord.conf`` are as
follows.

.. code-block:: ini

   [eventlistener:memmon]
   command=memmon -a 200MB -m bob@example.com
   events=TICK_60

.. code-block:: ini

   [eventlistener:mylistener]
   command=my_custom_listener.py
   events=PROCESS_STATE,TICK_60

.. note::

   An advanced feature, specifying an alternate "result handler" for a
   pool, can be specified via the ``result_handler`` parameter of an
   ``[eventlistener:x]`` section in the form of a ``pkg_resources``
   "entry point" string.  The default result handler is
   ``supervisor.dispatchers:default_handler``.  Creating an alternate
   result handler is not currently documented.

When an event notification is sent by supervisor, all event listener
pools which are subscribed to receive events for the event's type
(filtered by the ``events`` value in the eventlistener
section) will be found.  One of the listeners in each listener pool
will receive the event notification (any "available" listener).

Every process in an event listener pool is treated equally by
supervisor.  If a process in the pool is unavailable (because it is
already processing an event, because it has crashed, or because it has
elected to remove itself from the pool), supervisor will choose
another process from the pool.  If the event cannot be sent because
all listeners in the pool are "busy", the event will be buffered and
notification will be retried later.  "Later" is defined as "the next
time that the :program:`supervisord` select loop executes".  For
satisfactory event processing performance, you should configure a pool
with as many event listener processes as appropriate to handle your
event load.  This can only be determined empirically for any given
workload; there is no "magic number".  To help you determine the
optimal number of listeners in a given pool, Supervisor will emit
warning messages to its activity log when an event cannot be sent
immediately due to pool congestion.  There is no artificial constraint
placed on the number of processes that can be in a pool; it is limited
only by your platform constraints.

A listener pool has an event buffer queue.  The queue is sized via the
listener pool's ``buffer_size`` config file option.  If the queue is
full and supervisor attempts to buffer an event, supervisor will throw
away the oldest event in the buffer and log an error.

Writing an Event Listener
~~~~~~~~~~~~~~~~~~~~~~~~~

An event listener implementation is a program that is willing to
accept structured input on its stdin stream and produce structured
output on its stdout stream.  An event listener implementation should
operate in "unbuffered" mode or should flush its stdout every time it
needs to communicate back to the supervisord process.  Event listeners
can be written to be long-running or may exit after a single request
(depending on the implementation and the ``autorestart`` parameter in
the eventlistener's configuration).

An event listener can send arbitrary output to its stderr, which will
be logged or ignored by supervisord depending on the stderr-related
logfile configuration in its ``[eventlistener:x]`` section.

Event Notification Protocol
+++++++++++++++++++++++++++

When supervisord sends a notification to an event listener process,
the listener will first be sent a single "header" line on its
stdin. The composition of the line is a set of colon-separated tokens
(each of which represents a key-value pair) separated from each other
by a single space.  The line is terminated with a ``\n`` (linefeed)
character.  The tokens on the line are not guaranteed to be in any
particular order.  The types of tokens currently defined are in the
table below.

Header Tokens
@@@@@@@@@@@@@

=========== =============================================   ===================
Key         Description                                     Example
=========== =============================================   ===================
ver         The event system protocol version               3.0
server      The identifier of the supervisord sending the
            event (see config file ``[supervisord]``
            section ``identifier`` value).
serial      An integer assigned to each event.  No two      30
            events generated during the lifetime of
            a :program:`supervisord` process will have
            the same serial number.  The value is useful
            for functional testing and detecting event
            ordering anomalies.
pool        The name of the event listener pool which       myeventpool
            generated this event.
poolserial  An integer assigned to each event by the        30
            eventlistener pool which it is being sent
            from.  No two events generated by the same
            eventlistener pool during the lifetime of a
            :program:`supervisord` process will have the
            same ``poolserial`` number.  This value can
            be used to detect event ordering anomalies.
eventname   The specific event type name (see               TICK_5
            :ref:`event_types`)
len         An integer indicating the number of bytes in    22
            the event payload, aka the ``PAYLOAD_LENGTH``
=========== =============================================   ===================

An example of a complete header line is as follows.

.. code-block:: text

   ver:3.0 server:supervisor serial:21 pool:listener poolserial:10 eventname:PROCESS_COMMUNICATION_STDOUT len:54
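The header tokens can be split into a dictionary with a few lines of
Python.  This is only a sketch of the parsing, not part of the
supervisor API:

.. code-block:: python

   def parse_header(line):
       # each token is "key:value"; split on the first colon only
       return dict(token.split(':', 1) for token in line.split())

   header = parse_header(
       'ver:3.0 server:supervisor serial:21 pool:listener '
       'poolserial:10 eventname:PROCESS_COMMUNICATION_STDOUT len:54'
   )
   # header['eventname'] == 'PROCESS_COMMUNICATION_STDOUT'
   # int(header['len']) == 54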

Directly following the linefeed character in the header is the event
payload.  It consists of ``PAYLOAD_LENGTH`` bytes representing a
serialization of the event data.  See :ref:`event_types` for the
specific event data serialization definitions.

An example payload for a ``PROCESS_COMMUNICATION_STDOUT`` event
notification is as follows.

.. code-block:: text

   processname:foo groupname:bar pid:123
   This is the data that was sent between the tags

The payload structure of any given event is determined only by the
event's type.

Event Listener States
+++++++++++++++++++++

An event listener process has three possible states that are
maintained by supervisord:

=============================   ==============================================
Name                            Description
=============================   ==============================================
ACKNOWLEDGED                    The event listener has acknowledged (accepted
                                or rejected) an event send.
READY                           Event notifications may be sent to this event
                                listener
BUSY                            Event notifications may not be sent to this
                                event listener.
=============================   ==============================================

When an event listener process first starts, supervisor automatically
places it into the ``ACKNOWLEDGED`` state to allow for startup
activities or guard against startup failures (hangs).  Until the
listener sends a ``READY\n`` string to its stdout, it will stay in
this state.

When supervisor sends an event notification to a listener in the
``READY`` state, the listener will be placed into the ``BUSY`` state
until supervisor receives an ``OK`` or ``FAIL`` response from the
listener, at which time the listener will be transitioned back into
the ``ACKNOWLEDGED`` state.

Event Listener Notification Protocol
++++++++++++++++++++++++++++++++++++

Supervisor will notify an event listener in the ``READY`` state of an
event by sending data to the stdin of the process.  Supervisor will
never send anything to the stdin of an event listener process while
that process is in the ``BUSY`` or ``ACKNOWLEDGED`` state.  Supervisor
starts by sending the header.

Once it has processed the header, the event listener implementation
should read ``PAYLOAD_LENGTH`` bytes from its stdin, perform an
arbitrary action based on the values in the header and the data parsed
out of the serialization.  It is free to block for an arbitrary amount
of time while doing this.  Supervisor will continue processing
normally as it waits for a response and it will send other events of
the same type to other listener processes in the same pool as
necessary.

After the event listener has processed the event serialization, in
order to notify supervisord about the result, it should send back a
result structure on its stdout.  A result structure is the word
"RESULT", followed by a space, followed by the result length, followed
by a line feed, followed by the result content.  For example,
``RESULT 2\nOK`` is the result "OK".  Conventionally, an event
listener will use either ``OK`` or ``FAIL`` as the result content.
These strings have special meaning to the default result handler.
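The result structure described above can be built with a small helper
like the following (the function name is hypothetical, shown only for
illustration):

.. code-block:: python

   def encode_result(content):
       # "RESULT ", the content length in bytes, a line feed,
       # then the content itself
       return 'RESULT %d\n%s' % (len(content), content)

   # encode_result('OK')   -> 'RESULT 2\nOK'
   # encode_result('FAIL') -> 'RESULT 4\nFAIL'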

If the default result handler receives ``OK`` as result content, it
will assume that the listener processed the event notification
successfully.  If it receives ``FAIL``, it will assume that the
listener has failed to process the event, and the event will be
rebuffered and sent again at a later time.  The event listener may
reject the event for any reason by returning a ``FAIL`` result.  This
does not indicate a problem with the event data or the event listener.
Once an ``OK`` or ``FAIL`` result is received by supervisord, the
event listener is placed into the ``ACKNOWLEDGED`` state.

Once the listener is in the ``ACKNOWLEDGED`` state, it may either exit
(and subsequently may be restarted by supervisor if its
``autorestart`` config parameter is ``true``), or it may continue
running.  If it continues to run, in order to be placed back into the
``READY`` state by supervisord, it must send a ``READY`` token
followed immediately by a line feed to its stdout.

Example Event Listener Implementation
+++++++++++++++++++++++++++++++++++++

A Python implementation of a "long-running" event listener which
accepts an event notification, prints the header and payload to its
stderr, responds with an ``OK`` result, and then subsequently a
``READY``, is as follows.

.. code-block:: python

   import sys

   def write_stdout(s):
       # only eventlistener protocol messages may be sent to stdout
       sys.stdout.write(s)
       sys.stdout.flush()

   def write_stderr(s):
       sys.stderr.write(s)
       sys.stderr.flush()

   def main():
       while 1:
           # transition from ACKNOWLEDGED to READY
           write_stdout('READY\n')

           # read header line and print it to stderr
           line = sys.stdin.readline()
           write_stderr(line)

           # read event payload and print it to stderr
           headers = dict([ x.split(':') for x in line.split() ])
           data = sys.stdin.read(int(headers['len']))
           write_stderr(data)

           # transition from READY to ACKNOWLEDGED
           write_stdout('RESULT 2\nOK')

   if __name__ == '__main__':
       main()

Other sample event listeners are present within the :term:`Superlance`
package, including one which can monitor supervisor subprocesses and
restart a process if it is using "too much" memory.

Event Listener Error Conditions
+++++++++++++++++++++++++++++++

If the event listener process dies while the event is being
transmitted to its stdin, or if it dies before sending a result
structure back to supervisord, the event is assumed to not be
processed and will be rebuffered by supervisord and sent again later.

If an event listener sends data to its stdout which supervisor does
not recognize as an appropriate response based on the state that the
event listener is in, the event listener will be placed into the
``UNKNOWN`` state, and no further event notifications will be sent to
it.  If an event was being processed by the listener during this time,
it will be rebuffered and sent again later.

Miscellaneous
+++++++++++++

Event listeners may use the Supervisor XML-RPC interface to call "back
in" to Supervisor.  As such, event listeners can impact the state of a
Supervisor subprocess as a result of receiving an event notification.
For example, you may want to generate an event every few minutes
related to process usage of Supervisor-controlled subprocesses, and if
any of those processes exceed some memory threshold, you would like
to restart it.  You would write a program that caused supervisor to
generate ``PROCESS_COMMUNICATION`` events every so often with memory
information in them, and an event listener to perform an action based
on processing the data it receives from these events.
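
The restart action in such a listener might look like the following
sketch, which calls "back in" over XML-RPC.  The server URL here is an
assumption; use the values from your ``[inet_http_server]`` or
``[unix_http_server]`` section, and add credentials if configured:

.. code-block:: python

   from xmlrpc.client import ServerProxy

   def restart_process(name, serverurl='http://localhost:9001/RPC2'):
       # Connect to supervisord's XML-RPC interface and bounce the
       # named process.  Constructing the proxy does not connect;
       # the method calls do.
       proxy = ServerProxy(serverurl)
       proxy.supervisor.stopProcess(name)
       proxy.supervisor.startProcess(name)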

.. _event_types:

Event Types
-----------

The event types are a controlled set, defined by Supervisor itself.
There is no way to add an event type without changing
:program:`supervisord` itself.  This is typically not a problem,
though, because metadata is attached to events that event listeners
can use as additional filter criteria in conjunction with the event
type.

Event types that may be subscribed to by event listeners are
predefined by supervisor and fall into several major categories,
including "process state change", "process communication", and
"supervisor state change" events. Below are tables describing
these event types.

In the list below, we indicate that some event types have a "body"
which is a *token set*.  A token set consists of space-separated
tokens, each representing a key-value pair.  The key and value are
separated by a colon.  For example:

.. code-block:: text

   processname:cat groupname:cat from_state:STOPPED

Token sets do not have a linefeed or carriage return character at
their end.
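
In Python, a token set can be parsed into a dictionary by splitting on
whitespace and then splitting each token on its first colon.  The
helper name is illustrative:

.. code-block:: python

   def parse_token_set(body):
       # Each space-separated token is a key-value pair joined by a colon.
       return dict(token.split(':', 1) for token in body.split())

   headers = parse_token_set('processname:cat groupname:cat from_state:STOPPED')
   # headers['from_state'] is 'STOPPED'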

``EVENT`` Event Type
~~~~~~~~~~~~~~~~~~~~

The base event type.  This event type is abstract.  It will never be
sent directly.  Subscribing to this event type will cause a subscriber
to receive all event notifications emitted by Supervisor.

*Name*: ``EVENT``

*Subtype Of*: N/A

*Body Description*: N/A


``PROCESS_STATE`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This event type indicates a process has moved from one state to
another.  See :ref:`process_states` for a description of the states
that a process moves through during its lifetime.  This event type is
abstract, it will never be sent directly.  Subscribing to this event
type will cause a subscriber to receive event notifications of all the
event types that are subtypes of ``PROCESS_STATE``.

*Name*: ``PROCESS_STATE``

*Subtype Of*: ``EVENT``

Body Description
++++++++++++++++

All subtypes of ``PROCESS_STATE`` have a body which is a token set.
Additionally, each ``PROCESS_STATE`` subtype's token set has a default
set of key/value pairs: ``processname``, ``groupname``, and
``from_state``.  ``processname`` represents the process name which
supervisor knows this process as. ``groupname`` represents the name of
the supervisord group which this process is in.  ``from_state`` is the
name of the state from which this process is transitioning (the new
state is implied by the concrete event type).  Concrete subtypes may
include additional key/value pairs in the token set.

``PROCESS_STATE_STARTING`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Indicates a process has moved from another state to the ``STARTING`` state.

*Name*: ``PROCESS_STATE_STARTING``

*Subtype Of*: ``PROCESS_STATE``

Body Description
++++++++++++++++

This body is a token set.  It has the default set of key/value pairs
plus an additional ``tries`` key.  ``tries`` represents the number of
times this process has entered this state before transitioning to
``RUNNING`` or ``FATAL`` (it will never be larger than the
"startretries" parameter of the process).  For example:

.. code-block:: text

   processname:cat groupname:cat from_state:STOPPED tries:0

``PROCESS_STATE_RUNNING`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has moved from the ``STARTING`` state to the
``RUNNING`` state.  This means that the process has successfully
started as far as Supervisor is concerned.

*Name*: ``PROCESS_STATE_RUNNING``

*Subtype Of*: ``PROCESS_STATE``

Body Description
++++++++++++++++

This body is a token set.  It has the default set of key/value pairs
plus an additional ``pid`` key.  ``pid`` represents the UNIX
process id of the process that was started.  For example:

.. code-block:: text

   processname:cat groupname:cat from_state:STARTING pid:2766

``PROCESS_STATE_BACKOFF`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has moved from the ``STARTING`` state to the
``BACKOFF`` state.  This means that the process did not successfully
enter the ``RUNNING`` state, and Supervisor is going to try to restart it
unless it has exceeded its "startretries" configuration limit.

*Name*: ``PROCESS_STATE_BACKOFF``

*Subtype Of*: ``PROCESS_STATE``

Body Description
++++++++++++++++

This body is a token set.  It has the default set of key/value pairs
plus an additional ``tries`` key.  ``tries`` represents the number of
times this process has entered this state before transitioning to
``RUNNING`` or ``FATAL`` (it will never be larger than the
"startretries" parameter of the process).  For example:

.. code-block:: text

   processname:cat groupname:cat from_state:STOPPED tries:0

``PROCESS_STATE_STOPPING`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has moved from either the ``RUNNING`` state or the
``STARTING`` state to the ``STOPPING`` state.

*Name*: ``PROCESS_STATE_STOPPING``

*Subtype Of*: ``PROCESS_STATE``

Body Description
++++++++++++++++

This body is a token set.  It has the default set of key/value pairs
plus an additional ``pid`` key.  ``pid`` represents the UNIX process
id of the process that is being stopped.  For example:

.. code-block:: text

   processname:cat groupname:cat from_state:STARTING pid:2766

``PROCESS_STATE_EXITED`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has moved from the ``RUNNING`` state to the
``EXITED`` state.

*Name*: ``PROCESS_STATE_EXITED``

*Subtype Of*: ``PROCESS_STATE``

Body Description
++++++++++++++++

This body is a token set.  It has the default set of key/value pairs
plus two additional keys: ``pid`` and ``expected``.  ``pid``
represents the UNIX process id of the process that exited.
``expected`` represents whether the process exited with an expected
exit code or not.  It will be ``0`` if the exit code was unexpected,
or ``1`` if the exit code was expected. For example:

.. code-block:: text

   processname:cat groupname:cat from_state:RUNNING expected:0 pid:2766
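
A listener subscribed to this event type can use the ``expected`` flag
to react only to crashes.  A minimal sketch, assuming the body has
already been parsed into a dictionary of tokens:

.. code-block:: python

   def exited_unexpectedly(body):
       # body is the parsed token set; "expected" is the string "0"
       # when the exit code was not listed in exitcodes=.
       return body.get('expected') == '0'

   tokens = dict(t.split(':', 1) for t in
                 'processname:cat groupname:cat from_state:RUNNING '
                 'expected:0 pid:2766'.split())
   # exited_unexpectedly(tokens) is True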

``PROCESS_STATE_STOPPED`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has moved from the ``STOPPING`` state to the
``STOPPED`` state.

*Name*: ``PROCESS_STATE_STOPPED``

*Subtype Of*: ``PROCESS_STATE``

Body Description
++++++++++++++++

This body is a token set.  It has the default set of key/value pairs
plus an additional ``pid`` key.  ``pid`` represents the UNIX process
id of the process that stopped.  For example:

.. code-block:: text

   processname:cat groupname:cat from_state:STOPPING pid:2766

``PROCESS_STATE_FATAL`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has moved from the ``BACKOFF`` state to the
``FATAL`` state.  This means that Supervisor tried ``startretries``
number of times unsuccessfully to start the process, and gave up
attempting to restart it.

*Name*: ``PROCESS_STATE_FATAL``

*Subtype Of*: ``PROCESS_STATE``

Body Description
++++++++++++++++

This event type is a token set with the default key/value pairs.  For
example:

.. code-block:: text

   processname:cat groupname:cat from_state:BACKOFF

``PROCESS_STATE_UNKNOWN`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has moved from any state to the ``UNKNOWN`` state
(indicates an error in :program:`supervisord`).  This state transition
will only happen if :program:`supervisord` itself has a programming
error.

*Name*: ``PROCESS_STATE_UNKNOWN``

*Subtype Of*: ``PROCESS_STATE``

Body Description
++++++++++++++++

This event type is a token set with the default key/value pairs.  For
example:

.. code-block:: text

   processname:cat groupname:cat from_state:BACKOFF

``REMOTE_COMMUNICATION`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An event type raised when the ``supervisor.sendRemoteCommEvent()``
method is called on Supervisor's RPC interface.  The ``type`` and
``data`` are arguments of the RPC method.

*Name*: ``REMOTE_COMMUNICATION``

*Subtype Of*: ``EVENT``

Body Description
++++++++++++++++

.. code-block:: text

   type:type
   data

``PROCESS_LOG`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~

An event type emitted when a process writes to stdout or stderr.  The
event will only be emitted if the file descriptor is not in capture
mode and if ``stdout_events_enabled`` or ``stderr_events_enabled``
config options are set to ``true``.  This event type is abstract, it
will never be sent directly.  Subscribing to this event type will
cause a subscriber to receive event notifications for all subtypes of
``PROCESS_LOG``.

*Name*: ``PROCESS_LOG``

*Subtype Of*: ``EVENT``

*Body Description*: N/A

``PROCESS_LOG_STDOUT`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has written to its stdout file descriptor.  The
event will only be emitted if the file descriptor is not in capture
mode and if the ``stdout_events_enabled`` config option is set to
``true``.

*Name*: ``PROCESS_LOG_STDOUT``

*Subtype Of*: ``PROCESS_LOG``

Body Description
++++++++++++++++

.. code-block:: text

   processname:name groupname:name pid:pid
   data

``PROCESS_LOG_STDERR`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has written to its stderr file descriptor.  The
event will only be emitted if the file descriptor is not in capture
mode and if the ``stderr_events_enabled`` config option is set to
``true``.

*Name*: ``PROCESS_LOG_STDERR``

*Subtype Of*: ``PROCESS_LOG``

Body Description
++++++++++++++++

.. code-block:: text

   processname:name groupname:name pid:pid
   data

``PROCESS_COMMUNICATION`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An event type raised when any process attempts to send information
between ``<!--XSUPERVISOR:BEGIN-->`` and ``<!--XSUPERVISOR:END-->``
tags in its output.  This event type is abstract, it will never be
sent directly.  Subscribing to this event type will cause a subscriber
to receive event notifications for all subtypes of
``PROCESS_COMMUNICATION``.

*Name*: ``PROCESS_COMMUNICATION``

*Subtype Of*: ``EVENT``

*Body Description*: N/A

``PROCESS_COMMUNICATION_STDOUT`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has sent a message to Supervisor on its stdout
file descriptor.

*Name*: ``PROCESS_COMMUNICATION_STDOUT``

*Subtype Of*: ``PROCESS_COMMUNICATION``

Body Description
++++++++++++++++

.. code-block:: text

   processname:name groupname:name pid:pid
   data

``PROCESS_COMMUNICATION_STDERR`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates a process has sent a message to Supervisor on its stderr
file descriptor.

*Name*: ``PROCESS_COMMUNICATION_STDERR``

*Subtype Of*: ``PROCESS_COMMUNICATION``

Body Description
++++++++++++++++

.. code-block:: text

   processname:name groupname:name pid:pid
   data

``SUPERVISOR_STATE_CHANGE`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An event type raised when the state of the :program:`supervisord`
process changes.  This type is abstract, it will never be sent
directly.  Subscribing to this event type will cause a subscriber to
receive event notifications of all the subtypes of
``SUPERVISOR_STATE_CHANGE``.

*Name*: ``SUPERVISOR_STATE_CHANGE``

*Subtype Of*: ``EVENT``

*Body Description*: N/A

``SUPERVISOR_STATE_CHANGE_RUNNING`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates that :program:`supervisord` has started.

*Name*: ``SUPERVISOR_STATE_CHANGE_RUNNING``

*Subtype Of*: ``SUPERVISOR_STATE_CHANGE``

*Body Description*: Empty string

``SUPERVISOR_STATE_CHANGE_STOPPING`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates that :program:`supervisord` is stopping.

*Name*: ``SUPERVISOR_STATE_CHANGE_STOPPING``

*Subtype Of*: ``SUPERVISOR_STATE_CHANGE``

*Body Description*: Empty string

``TICK`` Event Type
~~~~~~~~~~~~~~~~~~~

An event type that may be subscribed to for event listeners to receive
"wake-up" notifications every N seconds.  This event type is abstract,
it will never be sent directly.  Subscribing to this event type will
cause a subscriber to receive event notifications for all subtypes of
``TICK``.

Note that the only ``TICK`` events available are the ones listed below.
You cannot subscribe to an arbitrary ``TICK`` interval. If you need an
interval not provided below, you can subscribe to one of the shorter
intervals given below and keep track of the time between runs in your
event listener.
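
For example, a listener subscribed to ``TICK_60`` could derive a
five-minute interval by comparing the ``when`` value of successive
ticks.  A sketch, assuming each event body has already been parsed
into a dictionary:

.. code-block:: python

   INTERVAL = 300  # desired interval in seconds

   last_run = 0

   def on_tick(headers):
       # "when" is the epoch time carried in the TICK body.
       global last_run
       now = int(headers['when'])
       if now - last_run >= INTERVAL:
           last_run = now
           return True   # time to run the periodic work
       return False

   # Simulated TICK_60 bodies one minute apart: the work fires on the
   # first tick and again once five minutes have elapsed.
   fired = [on_tick({'when': str(1201063880 + 60 * i)}) for i in range(6)]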

*Name*: ``TICK``

*Subtype Of*: ``EVENT``

*Body Description*: N/A

``TICK_5`` Event Type
~~~~~~~~~~~~~~~~~~~~~

An event type that may be subscribed to for event listeners to receive
"wake-up" notifications every 5 seconds.

*Name*: ``TICK_5``

*Subtype Of*: ``TICK``

Body Description
++++++++++++++++

This event type is a token set with a single key: "when", which
indicates the epoch time for which the tick was sent.

.. code-block:: text

   when:1201063880

``TICK_60`` Event Type
~~~~~~~~~~~~~~~~~~~~~~

An event type that may be subscribed to for event listeners to receive
"wake-up" notifications every 60 seconds.

*Name*: ``TICK_60``

*Subtype Of*: ``TICK``

Body Description
++++++++++++++++

This event type is a token set with a single key: "when", which
indicates the epoch time for which the tick was sent.

.. code-block:: text

   when:1201063880

``TICK_3600`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~

An event type that may be subscribed to for event listeners to receive
"wake-up" notifications every 3600 seconds (1 hour).

*Name*: ``TICK_3600``

*Subtype Of*: ``TICK``

Body Description
++++++++++++++++

This event type is a token set with a single key: "when", which
indicates the epoch time for which the tick was sent.

.. code-block:: text

   when:1201063880

``PROCESS_GROUP`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An event type raised when a process group is added to or removed from
Supervisor.  This type is abstract, it will never be sent
directly.  Subscribing to this event type will cause a subscriber to
receive event notifications of all the subtypes of
``PROCESS_GROUP``.

*Name*: ``PROCESS_GROUP``

*Subtype Of*: ``EVENT``

*Body Description*: N/A

``PROCESS_GROUP_ADDED`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates that a process group has been added to Supervisor's configuration.

*Name*: ``PROCESS_GROUP_ADDED``

*Subtype Of*: ``PROCESS_GROUP``

*Body Description*: This body is a token set with just a groupname key/value.

.. code-block:: text

   groupname:cat

``PROCESS_GROUP_REMOVED`` Event Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Indicates that a process group has been removed from Supervisor's configuration.

*Name*: ``PROCESS_GROUP_REMOVED``

*Subtype Of*: ``PROCESS_GROUP``

*Body Description*: This body is a token set with just a groupname key/value.

.. code-block:: text

   groupname:cat

Frequently Asked Questions
==========================

Q
  My program never starts and supervisor doesn't indicate any error?

A
  Make sure the ``x`` bit is set on the executable file you're using in
  the ``command=`` line of your program section.

Q
  I am a software author and I want my program to behave differently
  when it's running under :program:`supervisord`.  How can I tell if
  my program is running under :program:`supervisord`?

A
  Supervisor and its subprocesses share an environment variable
  :envvar:`SUPERVISOR_ENABLED`.  When your program is run under
  :program:`supervisord`, it can check for the presence of this
  environment variable to determine whether it is running as a
  :program:`supervisord` subprocess.
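
  A minimal check in Python:

  .. code-block:: python

     import os

     def running_under_supervisord():
         # supervisord places SUPERVISOR_ENABLED in the environment
         # of every subprocess it starts.
         return 'SUPERVISOR_ENABLED' in os.environ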

Q
  My command works fine when I invoke it by hand from a shell prompt,
  but when I use the same command line in a supervisor program
  ``command=`` section, the program fails mysteriously.  Why?

A
  This may be due to your process' dependence on environment variable
  settings.  See :ref:`subprocess_environment`.

Q
  How can I make Supervisor restart a process that's using "too much"
  memory automatically?

A
  The :term:`Superlance` package contains a console script that can be
  used as a Supervisor event listener named ``memmon`` which helps
  with this task.  It works on Linux and Mac OS X.
.. _glossary:

Glossary
========

.. glossary::
   :sorted:

   daemontools
     A `process control system by D.J. Bernstein
     <http://cr.yp.to/daemontools.html>`_.

   runit
     A `process control system <http://smarden.org/runit/>`_.

   launchd
     A `process control system used by Apple
     <https://en.wikipedia.org/wiki/Launchd>`_ as process 1 under Mac
     OS X.

   umask
     Abbreviation of *user mask*: sets the file mode creation mask of
     the current process.  See `http://en.wikipedia.org/wiki/Umask
     <http://en.wikipedia.org/wiki/Umask>`_.

   Superlance
     A package which provides various event listener implementations
     that plug into Supervisor which can help monitor process memory
     usage and crash status: `https://pypi.org/pypi/superlance/
     <https://pypi.org/pypi/superlance/>`_.



Supervisor: A Process Control System
====================================

Supervisor is a client/server system that allows its users to monitor
and control a number of processes on UNIX-like operating systems.

It shares some of the same goals of programs like :term:`launchd`,
:term:`daemontools`, and :term:`runit`. Unlike some of these programs,
it is not meant to be run as a substitute for ``init`` as "process id
1". Instead it is meant to be used to control processes related to a
project or a customer, and is meant to start like any other program at
boot time.

Narrative Documentation
-----------------------

.. toctree::
   :maxdepth: 2

   introduction.rst
   installing.rst
   running.rst
   configuration.rst
   subprocess.rst
   logging.rst
   events.rst
   xmlrpc.rst
   upgrading.rst
   faq.rst
   development.rst
   glossary.rst

API Documentation
-----------------

.. toctree::
   :maxdepth: 2

   api.rst

Plugins
-------

.. toctree::
   :maxdepth: 2

   plugins.rst

Indices and tables
------------------

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
Installing
==========

Installation instructions depend on whether the system on which
you're attempting to install Supervisor has internet access.

Installing to A System With Internet Access
-------------------------------------------

Internet-Installing With Pip
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Supervisor can be installed with ``pip install``:

.. code-block:: bash

   pip install supervisor

Depending on the permissions of your system's Python, you might need
to be the root user to install Supervisor successfully using
``pip``.

You can also install supervisor in a virtualenv via ``pip``.

Internet-Installing Without Pip
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If your system does not have ``pip`` installed, you will need to download
the Supervisor distribution and install it by hand.  Current and previous
Supervisor releases may be downloaded from `PyPI
<https://pypi.org/project/supervisor/>`_.  After unpacking the software
archive, run ``python setup.py install``.  This requires internet access.  It
will download and install all distributions depended upon by Supervisor and
finally install Supervisor itself.

.. note::

   Depending on the permissions of your system's Python, you might
   need to be the root user to successfully invoke ``python
   setup.py install``.

Installing To A System Without Internet Access
----------------------------------------------

If the system that you want to install Supervisor to does not have
Internet access, you'll need to perform installation slightly
differently.  Since both ``pip`` and ``python setup.py
install`` depend on internet access to perform downloads of dependent
software, neither will work on machines without internet access until
dependencies are installed.  To install to a machine which is not
internet-connected, obtain the following dependencies on a machine
which is internet-connected:

- setuptools (latest) from `https://pypi.org/pypi/setuptools/
  <https://pypi.org/pypi/setuptools/>`_.

Copy these files to removable media and put them on the target
machine.  Install each onto the target machine as per its
instructions.  This typically just means unpacking each file and
invoking ``python setup.py install`` in the unpacked directory.
Finally, run supervisor's ``python setup.py install``.

.. note::

   Depending on the permissions of your system's Python, you might
   need to be the root user to invoke ``python setup.py install``
   successfully for each package.

Installing a Distribution Package
---------------------------------

Some Linux distributions offer a version of Supervisor that is installable
through the system package manager.  These packages are made by third parties,
not the Supervisor developers, and often include distribution-specific changes
to Supervisor.

Use the package management tools of your distribution to check availability;
e.g. on Ubuntu you can run ``apt-cache show supervisor``, and on CentOS
you can run ``yum info supervisor``.

A feature of distribution packages of Supervisor is that they will usually
include integration into the service management infrastructure of the
distribution, e.g. allowing ``supervisord`` to automatically start when
the system boots.

.. note::

    Distribution packages of Supervisor can lag considerably behind the
    official Supervisor packages released to PyPI.  For example, Ubuntu
    12.04 (released April 2012) offered a package based on Supervisor 3.0a8
    (released January 2010).  Lag is often caused by the software release
    policy set by a given distribution.

.. note::

    Users reported that the distribution package of Supervisor for Ubuntu 16.04
    had different behavior than previous versions.  On Ubuntu 10.04, 12.04, and
    14.04, installing the package will configure the system to start
    ``supervisord`` when the system boots.  On Ubuntu 16.04, this was not done
    by the initial release of the package.  The package was fixed later.  See
    `Ubuntu Bug #1594740 <https://bugs.launchpad.net/bugs/1594740>`_
    for more information.

.. _create_config:

Creating a Configuration File
-----------------------------

Once the Supervisor installation has completed, run
``echo_supervisord_conf``.  This will print a "sample" Supervisor
configuration file to your terminal's stdout.

Once you see the file echoed to your terminal, reinvoke the command as
``echo_supervisord_conf > /etc/supervisord.conf``. This won't work if
you do not have root access.

If you don't have root access, or you'd rather not put the
:file:`supervisord.conf` file in :file:`/etc/supervisord.conf`, you
can place it in the current directory (``echo_supervisord_conf >
supervisord.conf``) and start :program:`supervisord` with the
``-c`` flag in order to specify the configuration file
location.

For example, ``supervisord -c supervisord.conf``.  Using the ``-c``
flag actually is redundant in this case, because
:program:`supervisord` searches the current directory for a
:file:`supervisord.conf` before it searches any other locations for
the file, but it will work.  See :ref:`running` for more information
about the ``-c`` flag.

Once you have a configuration file on your filesystem, you can
begin modifying it to your liking.
Introduction
============

Overview
--------

Supervisor is a client/server system that allows its users to control
a number of processes on UNIX-like operating systems.  It was inspired
by the following:

Convenience

  It is often inconvenient to need to write ``rc.d`` scripts for every
  single process instance.  ``rc.d`` scripts are a great
  lowest-common-denominator form of process
  initialization/autostart/management, but they can be painful to
  write and maintain.  Additionally, ``rc.d`` scripts cannot
  automatically restart a crashed process and many programs do not
  restart themselves properly on a crash.  Supervisord starts
  processes as its subprocesses, and can be configured to
  automatically restart them on a crash.  It can also automatically be
  configured to start processes on its own invocation.

Accuracy

  It's often difficult to get accurate up/down status on processes on
  UNIX.  Pidfiles often lie.  Supervisord starts processes as
  subprocesses, so it always knows the true up/down status of its
  children and can be queried conveniently for this data.

Delegation

  Users who need to control process state often need only to do that.
  They don't want or need full-blown shell access to the machine on
  which the processes are running.  Processes which listen on "low"
  TCP ports often need to be started and restarted as the root user (a
  UNIX misfeature).  It's usually the case that it's perfectly fine to
  allow "normal" people to stop or restart such a process, but
  providing them with shell access is often impractical, and providing
  them with root access or sudo access is often impossible.  It's also
  (rightly) difficult to explain to them why this problem exists.  If
  supervisord is started as root, it is possible to allow "normal"
  users to control such processes without needing to explain the
  intricacies of the problem to them.  Supervisorctl allows a very
  limited form of access to the machine, essentially allowing users to
  see process status and control supervisord-controlled subprocesses
  by emitting "stop", "start", and "restart" commands from a simple
  shell or web UI.

Process Groups

  Processes often need to be started and stopped in groups, sometimes
  even in a "priority order".  It's often difficult to explain to
  people how to do this.  Supervisor allows you to assign priorities
  to processes, and allows users to emit commands via the supervisorctl
  client like "start all", and "restart all", which starts them in the
  preassigned priority order.  Additionally, processes can be grouped
  into "process groups" and a set of logically related processes can
  be stopped and started as a unit.

Features
--------

Simple

  Supervisor is configured through a simple INI-style config file
  that’s easy to learn. It provides many per-process options that make
  your life easier like restarting failed processes and automatic log
  rotation.

Centralized

  Supervisor provides you with one place to start, stop, and monitor
  your processes. Processes can be controlled individually or in
  groups. You can configure Supervisor to provide a local or remote
  command line and web interface.

Efficient

  Supervisor starts its subprocesses via fork/exec and subprocesses
  don’t daemonize. The operating system signals Supervisor immediately
  when a process terminates, unlike some solutions that rely on
  troublesome PID files and periodic polling to restart failed
  processes.

Extensible

  Supervisor has a simple event notification protocol that programs
  written in any language can use to monitor it, and an XML-RPC
  interface for control. It is also built with extension points that
  can be leveraged by Python developers.

Compatible

  Supervisor works on just about everything except for Windows. It is
  tested and supported on Linux, Mac OS X, Solaris, and FreeBSD. It is
  written entirely in Python, so installation does not require a C
  compiler.

Proven

  While Supervisor is very actively developed today, it is not new
  software. Supervisor has been around for years and is already in use
  on many servers.

Supervisor Components
---------------------

:program:`supervisord`

  The server piece of supervisor is named :program:`supervisord`.  It
  is responsible for starting child programs at its own invocation,
  responding to commands from clients, restarting crashed or exited
  subprocesses, logging its subprocess ``stdout`` and ``stderr``
  output, and generating and handling "events" corresponding to points
  in subprocess lifetimes.

  The server process uses a configuration file.  This is typically
  located in :file:`/etc/supervisord.conf`.  This configuration file
  is a "Windows-INI" style config file.  It is important to keep this
  file secure via proper filesystem permissions because it may contain
  unencrypted usernames and passwords.

:program:`supervisorctl`

  The command-line client piece of the supervisor is named
  :program:`supervisorctl`.  It provides a shell-like interface to the
  features provided by :program:`supervisord`.  From
  :program:`supervisorctl`, a user can connect to different
  :program:`supervisord` processes (one at a time), get status on the
  subprocesses a :program:`supervisord` controls, stop and start those
  subprocesses, and get lists of its running processes.

  The command-line client talks to the server across a UNIX domain
  socket or an internet (TCP) socket.  The server can require that the
  user of a client present authentication credentials before it
  allows commands to be performed.
  the same configuration file as the server but any configuration file
  with a ``[supervisorctl]`` section in it will work.

Web Server

  A (sparse) web user interface with functionality comparable to
  :program:`supervisorctl` may be accessed via a browser if you start
  :program:`supervisord` against an internet socket.  Visit the server
  URL (e.g. ``http://localhost:9001/``) to view and control process
  status through the web interface after activating the configuration
  file's ``[inet_http_server]`` section.
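
  For example, a minimal ``[inet_http_server]`` section might look
  like the sketch below (the port and credentials are illustrative):

  .. code-block:: ini

     [inet_http_server]
     ; listen on all interfaces, port 9001
     port=*:9001
     ; optional HTTP basic-auth credentials
     username=user
     password=123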

XML-RPC Interface

  The same HTTP server which serves the web UI serves up an XML-RPC
  interface that can be used to interrogate and control supervisor and
  the programs it runs.  See :ref:`xml_rpc`.
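
  As a minimal sketch, the interface can be reached from Python's
  standard library ``xmlrpc.client`` (the URL assumes a
  :program:`supervisord` listening on ``localhost:9001`` without
  authentication):

  .. code-block:: python

     from xmlrpc.client import ServerProxy

     def supervisor_state(url="http://localhost:9001/RPC2"):
         """Return the state of the supervisord reachable at *url*.

         /RPC2 is the path where supervisord mounts its XML-RPC
         handler; supervisor.getState() returns a dict such as
         {'statecode': 1, 'statename': 'RUNNING'}.
         """
         server = ServerProxy(url)
         return server.supervisor.getState()

  Calling ``supervisor_state()`` requires a running
  :program:`supervisord` with an ``[inet_http_server]`` section
  enabled.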

Platform Requirements
---------------------

Supervisor has been tested and is known to run on Linux (Ubuntu 18.04),
Mac OS X (10.4/10.5/10.6), Solaris (10 for Intel), and FreeBSD 6.1.
It will likely work fine on most UNIX systems.

Supervisor will *not* run at all under any version of Windows.

Supervisor is intended to work on Python 3 version 3.4 or later
and on Python 2 version 2.7.
Logging
=======

One of the main tasks that :program:`supervisord` performs is logging.
:program:`supervisord` logs an activity log detailing what it's doing
as it runs.  It also logs child process stdout and stderr output to
other files if configured to do so.

Activity Log
------------

The activity log is the place where :program:`supervisord` logs
messages about its own health, its subprocesses' state changes, any
messages that result from events, and debug and informational
messages.  The path to the activity log is configured via the
``logfile`` parameter in the ``[supervisord]`` section of the
configuration file, defaulting to :file:`$CWD/supervisord.log`.  If
the value of this option is the special string ``syslog``, the
activity log will be routed to the syslog service instead of being
written to a file.  Sample activity log traffic is shown in the
example below.  Some lines have been broken to better fit the screen.

Sample Activity Log Output
~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: text

   2007-09-08 14:43:22,886 DEBG 127.0.0.1:Medusa (V1.11) started at Sat Sep  8 14:43:22 2007
           Hostname: kingfish
           Port:9001
   2007-09-08 14:43:22,961 INFO RPC interface 'supervisor' initialized
   2007-09-08 14:43:22,961 CRIT Running without any HTTP authentication checking
   2007-09-08 14:43:22,962 INFO supervisord started with pid 27347
   2007-09-08 14:43:23,965 INFO spawned: 'listener_00' with pid 27349
   2007-09-08 14:43:23,970 INFO spawned: 'eventgen' with pid 27350
   2007-09-08 14:43:23,990 INFO spawned: 'grower' with pid 27351
   2007-09-08 14:43:24,059 DEBG 'listener_00' stderr output:
    /Users/chrism/projects/supervisor/supervisor2/dev-sandbox/bin/python:
    can't open file '/Users/chrism/projects/supervisor/supervisor2/src/supervisor/scripts/osx_eventgen_listener.py':
    [Errno 2] No such file or directory
   2007-09-08 14:43:24,060 DEBG fd 7 closed, stopped monitoring  (stdout)>
   2007-09-08 14:43:24,060 INFO exited: listener_00 (exit status 2; not expected)
   2007-09-08 14:43:24,061 DEBG received SIGCHLD indicating a child quit

The activity log "level" is configured in the config file via the
``loglevel`` parameter in the ``[supervisord]`` ini file section.
When ``loglevel`` is set, messages of the specified priority, plus
those with any higher priority, are logged to the activity log.  For
example, if ``loglevel`` is ``error``, messages of ``error`` and
``critical`` priority will be logged.  Similarly, if ``loglevel`` is
``warn``, messages of ``warn``, ``error``, and ``critical`` priority
will be logged.
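
For example, to suppress informational and debug messages in the
activity log (the ``logfile`` path is illustrative):

.. code-block:: ini

   [supervisord]
   logfile=/var/log/supervisord.log
   ; log warn, error, and critical messages only
   loglevel=warn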

.. _activity_log_levels:

Activity Log Levels
~~~~~~~~~~~~~~~~~~~

The table below describes the logging levels in more detail, ordered
from highest priority to lowest.  The "Config File Value" is the
string provided to the ``loglevel`` parameter in the ``[supervisord]``
section of the configuration file, and the "Output Code" is the code
that shows up in activity log output lines.

=================   ===========   ============================================
Config File Value   Output Code   Description
=================   ===========   ============================================
critical            CRIT          Messages that indicate a condition that
                                  requires immediate user attention, a
                                  supervisor state change, or an error in
                                  supervisor itself.
error               ERRO          Messages that indicate a potentially
                                  ignorable error condition (e.g. unable to
                                  clear a log directory).
warn                WARN          Messages that indicate an anomalous
                                  condition which isn't an error.
info                INFO          Normal informational output.  This is the
                                  default log level if none is explicitly
                                  configured.
debug               DEBG          Messages useful for users trying to debug
                                  process configuration and communications
                                  behavior (process output, listener state
                                  changes, event notifications).
trace               TRAC          Messages useful for developers trying to
                                  debug supervisor plugins, and information
                                  about HTTP and RPC requests and responses.
blather             BLAT          Messages useful for developers trying to
                                  debug supervisor itself.
=================   ===========   ============================================

Activity Log Rotation
~~~~~~~~~~~~~~~~~~~~~

The activity log is "rotated" by :program:`supervisord` based on the
combination of the ``logfile_maxbytes`` and the ``logfile_backups``
parameters in the ``[supervisord]`` section of the configuration file.
When the activity log reaches ``logfile_maxbytes`` bytes, the current
log file is moved to a backup file and a new activity log file is
created.  When this happens, if the number of existing backup files is
greater than or equal to ``logfile_backups``, the oldest backup file
is removed and the backup files are renamed accordingly.  If the file
being written to is named :file:`supervisord.log`, when it exceeds
``logfile_maxbytes``, it is closed and renamed to
:file:`supervisord.log.1`, and if files :file:`supervisord.log.1`,
:file:`supervisord.log.2` etc. exist, then they are renamed to
:file:`supervisord.log.2`, :file:`supervisord.log.3` etc.
respectively.  If ``logfile_maxbytes`` is 0, the logfile is never
rotated (and thus backups are never made).  If ``logfile_backups`` is
0, no backups will be kept.
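
The rotation behavior described above is controlled by two settings,
sketched here with illustrative values:

.. code-block:: ini

   [supervisord]
   logfile=/var/log/supervisord.log
   ; rotate when the activity log reaches 50 megabytes...
   logfile_maxbytes=50MB
   ; ...keeping at most ten rotated backup files
   logfile_backups=10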

Child Process Logs
------------------

The stdout of child processes spawned by supervisor, by default, is
captured for redisplay to users of :program:`supervisorctl` and other
clients.  If no specific logfile-related configuration is performed in
a ``[program:x]``, ``[fcgi-program:x]``, or ``[eventlistener:x]``
section in the configuration file, the following is true:

- :program:`supervisord` will capture the child process' stdout and
  stderr output into temporary files.  Each stream is captured to a
  separate file.  This is known as ``AUTO`` log mode.

- ``AUTO`` log files are named automatically and placed in the
  directory configured as ``childlogdir`` of the ``[supervisord]``
  section of the config file.

- The size of each ``AUTO`` log file is bounded by the
  ``{streamname}_logfile_maxbytes`` value of the program section
  (where {streamname} is "stdout" or "stderr").  When it reaches that
  number, it is rotated (like the activity log), based on the
  ``{streamname}_logfile_backups``.

The configuration keys that influence child process logging in
``[program:x]`` and ``[fcgi-program:x]`` sections are these:

``redirect_stderr``, ``stdout_logfile``, ``stdout_logfile_maxbytes``,
``stdout_logfile_backups``, ``stdout_capture_maxbytes``, ``stdout_syslog``,
``stderr_logfile``, ``stderr_logfile_maxbytes``,
``stderr_logfile_backups``, ``stderr_capture_maxbytes``, and
``stderr_syslog``.

``[eventlistener:x]`` sections may not specify
``redirect_stderr``, ``stdout_capture_maxbytes``, or
``stderr_capture_maxbytes``, but otherwise they accept the same values.

The configuration keys that influence child process logging in the
``[supervisord]`` config file section are these:
``childlogdir``, and ``nocleanup``.
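
A sketch of a ``[program:x]`` section using several of the logging
keys above follows; the program name and paths are illustrative:

.. code-block:: ini

   [program:myapp]
   command=/usr/local/bin/myapp
   ; write stdout to a fixed file instead of an AUTO log...
   stdout_logfile=/var/log/myapp.out.log
   stdout_logfile_maxbytes=10MB
   stdout_logfile_backups=5
   ; ...and likewise for stderr
   stderr_logfile=/var/log/myapp.err.log
   stderr_logfile_maxbytes=10MB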

.. _capture_mode:

Capture Mode
~~~~~~~~~~~~

Capture mode is an advanced feature of Supervisor.  You needn't
understand capture mode unless you want to take actions based on data
parsed from subprocess output.

If a ``[program:x]`` section in the configuration file defines a
non-zero ``stdout_capture_maxbytes`` or ``stderr_capture_maxbytes``
parameter, each process represented by the program section may emit
special tokens on its stdout or stderr stream (respectively) which
will effectively cause supervisor to emit a ``PROCESS_COMMUNICATION``
event (see :ref:`events` for a description of events).

The process communications protocol relies on two tags, one which
commands supervisor to enter "capture mode" for the stream and one
which commands it to exit.  When a process stream enters "capture
mode", data sent to the stream will be sent to a separate buffer in
memory, the "capture buffer", which is allowed to contain a maximum of
``capture_maxbytes`` bytes.  During capture mode, when the buffer's
length exceeds ``capture_maxbytes`` bytes, the earliest data in the
buffer is discarded to make room for new data.  When a process stream
exits capture mode, a ``PROCESS_COMMUNICATION`` event subtype is
emitted by supervisor, which may be intercepted by event listeners.

The tag to begin "capture mode" in a process stream is
``<!--XSUPERVISOR:BEGIN-->``.  The tag to exit capture mode is
``<!--XSUPERVISOR:END-->``.  The data between these tags may be
arbitrary, and forms the payload of the ``PROCESS_COMMUNICATION``
event.  For example, if a program is set up with a
``stdout_capture_maxbytes`` of "1MB", and it emits the following on
its stdout stream:

.. code-block:: text

   <!--XSUPERVISOR:BEGIN-->Hello!<!--XSUPERVISOR:END-->

In this circumstance, :program:`supervisord` will emit a
``PROCESS_COMMUNICATION_STDOUT`` event with "Hello!" as the data in
its payload.

An example of a script (written in Python) which emits a process
communication event is in the :file:`scripts` directory of the
supervisor package, named :file:`sample_commevent.py`.
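
In the same spirit, a minimal sketch of such a script is shown below.
It wraps a payload in the begin and end tokens of Supervisor's
process-communication protocol and flushes, so that
:program:`supervisord` sees the complete message promptly:

.. code-block:: python

   import sys

   def emit_comm_event(payload):
       # Wrap the payload in the capture tokens; supervisord turns the
       # captured data into a PROCESS_COMMUNICATION event payload.
       sys.stdout.write('<!--XSUPERVISOR:BEGIN-->')
       sys.stdout.write(payload)
       sys.stdout.write('<!--XSUPERVISOR:END-->')
       sys.stdout.flush()

   if __name__ == '__main__':
       emit_comm_event('Hello!')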

The output of processes specified as "event listeners"
(``[eventlistener:x]`` sections) is not processed this way.
Output from these processes cannot enter capture mode.
Third Party Applications and Libraries
======================================

There are a number of third party applications that can be useful together
with Supervisor. This list aims to summarize them and make them easier
to find.

See README.rst for information on how to contribute to this list.

Dashboards and Tools for Multiple Supervisor Instances
------------------------------------------------------

These are tools that can monitor or control a number of Supervisor
instances running on different servers.

`cesi `_
    Web-based dashboard written in Python.

`Django-Dashvisor `_
    Web-based dashboard written in Python.  Requires Django 1.3 or 1.4.

`Nodervisor `_
    Web-based dashboard written in Node.js.

`Supervisord-Monitor `_
    Web-based dashboard written in PHP.

`SupervisorUI `_
    Another Web-based dashboard written in PHP.

`supervisorclusterctl `_
    Command line tool for controlling multiple Supervisor instances
    using Ansible.

`suponoff `_
    Web-based dashboard written in Python 3.  Requires Django 1.7 or later.

`Supvisors `_
    Designed for distributed applications, written in Python 3.6. Includes an extended XML-RPC API,
    a Web-based dashboard and special features such as staged start and stop.

`multivisor `_
    Centralized supervisor web-based dashboard. The frontend is based on
    `VueJS `_. The backend runs a `flask `_
    web server. It communicates with each supervisor through a specialized supervisor
    event-listener based on `zerorpc `_.

`Dart `_
    Web-based dashboard and command line tool written in Python using PostgreSQL
    with a REST API, event monitoring, and configuration management.

Third Party Plugins and Libraries for Supervisor
------------------------------------------------

These are plugins and libraries that add new functionality to Supervisor.
The list also includes various event listeners.

`superlance `_
    Provides a set of common event listeners that can be used to
    monitor processes and, for example, restart them when they use
    too much memory.
`superhooks `_
    Send Supervisor event notifications to HTTP1.1 webhooks.
`mr.rubber `_
    An event listener that makes it possible to scale the number of
    processes to the number of cores on the supervisor host.
`supervisor-wildcards `_
    Implements start/stop/restart commands with wildcard support for
    Supervisor.  These commands run in parallel and can be much faster
    than the built-in start/stop/restart commands.
`mr.laforge `_
    Lets you easily make sure that ``supervisord`` and specific
    processes controlled by it are running from within shell and
    Python scripts. Also adds a ``kill`` command to supervisor that
    makes it possible to send arbitrary signals to child processes.
`supervisor_cache `_
    An extension for Supervisor that provides the ability to cache
    arbitrary data directly inside a Supervisor instance as key/value
    pairs. Also serves as a reference for how to write Supervisor
    extensions.
`supervisor_twiddler `_
    An RPC extension for Supervisor that allows Supervisor's
    configuration and state to be manipulated in ways that are not
    normally possible at runtime.
`supervisor-stdout `_
    An event listener that sends process output to supervisord's stdout.
`supervisor-serialrestart `_
    Adds a ``serialrestart`` command to ``supervisorctl`` that restarts
    processes one after another rather than all at once.
`supervisor-quick `_
    Adds ``quickstart``, ``quickstop``, and ``quickrestart`` commands to
    ``supervisorctl`` that can be faster than the built-in commands.  It
    works by using the non-blocking mode of the XML-RPC methods and then
    polling ``supervisord``.  The built-in commands use the blocking mode,
    which can be slower due to ``supervisord`` implementation details.
`supervisor-logging `_
    An event listener that sends process log events to an external
    Syslog instance (e.g. Logstash).
`supervisor-logstash-notifier `_
    An event listener plugin to stream state events to a Logstash instance.
`supervisor_cgroups `_
    An event listener that enables tying Supervisor processes to a cgroup
    hierarchy.  It is intended to be used as a replacement for
    `cgrules.conf `_.
`supervisor_checks `_
    Framework to build health checks for Supervisor-based services. Health
    check applications are supposed to run as event listeners in a
    Supervisor environment.  On check failure, Supervisor will attempt
    to restart the monitored process.
`Superfsmon `_
    Watch a directory and restart programs when files change.  It can monitor
    a directory for changes, filter the file paths by glob patterns or regular
    expressions and restart Supervisor programs individually or by group.


Libraries that integrate Third Party Applications with Supervisor
-----------------------------------------------------------------

These are libraries and plugins that make it easier to use Supervisor
with third party applications:

`collective.recipe.supervisor `_
    A buildout recipe to install supervisor.
`puppet-module-supervisor `_
    Puppet module for configuring the supervisor daemon tool.
`puppet-supervisord `_
    Puppet module to manage the supervisord process control system.
`ngx_supervisord `_
    An nginx module providing an API to communicate with supervisord
    and manage (start/stop) backends on-demand.
`Supervisord-Nagios-Plugin `_
    A Nagios/Icinga plugin written in Python to monitor individual supervisord processes.
`nagios-supervisord-processes `_
    A Nagios/Icinga plugin written in PHP to monitor individual supervisord processes.
`supervisord-nagios `_
    A plugin for supervisorctl to allow one to perform nagios-style checks
    against supervisord-managed processes.
`php-supervisor-event `_
    PHP classes for interacting with Supervisor event notifications.
`PHP5 Supervisor wrapper `_
    PHP 5 library to manage Supervisor instances as object.
`Symfony2 SupervisorBundle `_
    Provides full integration of multi-server Supervisor management into a Symfony2 project.
`sd-supervisord `_
    `Server Density `_ plugin for
    supervisor.
`node-supervisord `_
    Node.js client for Supervisor's XML-RPC interface.
`node-supervisord-eventlistener `_
    Node.js implementation of an event listener for Supervisor.
`ruby-supervisor `_
    Ruby client library for Supervisor's XML-RPC interface.
`Sulphite `_
    Sends supervisord events to `Graphite `_.
`supervisord.tmbundle `_
    `TextMate `_ bundle for supervisord.conf.
`capistrano-supervisord `_
    `Capistrano `_ recipe to deploy supervisord based services.
`capistrano-supervisor `_
    Another package to control supervisord from `Capistrano `_.
`chef-supervisor `_
    `Chef `_ cookbook to install and configure supervisord.
`SupervisorPHP `_
    Complete Supervisor suite in PHP: Client using XML-RPC interface, event listener and configuration builder implementation, console application and monitor UI.
`Supervisord-Client `_
    Perl client for the supervisord XML-RPC interface.
`supervisord4j `_
    Java client for Supervisor's XML-RPC interface.
`Supermann `_
    Supermann monitors processes running under Supervisor and sends metrics
    to `Riemann `_.
`gulp-supervisor `_
    Run Supervisor as a `Gulp `_ task.
`Yeebase.Supervisor `_
    Control and monitor Supervisor from a TYPO3 Flow application.
`dokku-supervisord `_
    `Dokku `_ plugin that injects ``supervisord`` to run
    applications.
`dokku-logging-supervisord `_
    `Dokku `_ plugin that injects ``supervisord`` to run
    applications.  It also redirects ``stdout`` and ``stderr`` from processes to log files
    (rather than the Docker default per-container JSON files).
`superslacker `_
    Send Supervisor event notifications to `Slack `_.
`supervisor-alert `_
    Send event notifications over `Telegram `_ or to an
    arbitrary command.
`supervisor-discord `_
    Send event notifications to `Discord `_ via webhooks.
.. _running:

Running Supervisor
==================

This section makes reference to a :envvar:`BINDIR` when explaining how
to run the :command:`supervisord` and :command:`supervisorctl`
commands.  This is the "bindir" directory that your Python
installation has been configured with.  For example, for a Python
installation configured via ``./configure
--prefix=/usr/local/py; make; make install``, :envvar:`BINDIR` would
be :file:`/usr/local/py/bin`. Python interpreters on different
platforms use a different :envvar:`BINDIR`.  Look at the output of
``setup.py install`` if you can't figure out where yours is.

Adding a Program
----------------

Before :program:`supervisord` will do anything useful for you, you'll
need to add at least one ``program`` section to its configuration.
The ``program`` section will define a program that is run and managed
when you invoke the :command:`supervisord` command.  To add a program,
you'll need to edit the :file:`supervisord.conf` file.

One of the simplest possible programs to run is the UNIX
:program:`cat` program.  A ``program`` section that will run ``cat``
when the :program:`supervisord` process starts up is shown below.

.. code-block:: ini

   [program:foo]
   command=/bin/cat

This stanza may be cut and pasted into the :file:`supervisord.conf`
file.  This is the simplest possible program configuration, because it
only names a command.  Program configuration sections have many other
configuration options which aren't shown here.  See
:ref:`programx_section` for more information.

Running :program:`supervisord`
------------------------------

To start :program:`supervisord`, run :file:`$BINDIR/supervisord`.  The
resulting process will daemonize itself and detach from the terminal.
It keeps an operations log at :file:`$CWD/supervisor.log` by default.

You may start the :command:`supervisord` executable in the foreground
by passing the ``-n`` flag on its command line.  This is useful to
debug startup problems.

.. warning::

   When :program:`supervisord` starts up, it will search for its
   configuration file in default locations *including the current working
   directory*.  If you are security-conscious you will probably want to
   specify a "-c" argument after the :program:`supervisord` command
   specifying an absolute path to a configuration file to ensure that someone
   doesn't trick you into running supervisor from within a directory that
   contains a rogue ``supervisord.conf`` file.  A warning is emitted when
   supervisor is started as root without this ``-c`` argument.

To change the set of programs controlled by :program:`supervisord`,
edit the :file:`supervisord.conf` file and ``kill -HUP`` or otherwise
restart the :program:`supervisord` process.  This file has several
example program definitions.

The :command:`supervisord` command accepts a number of command-line
options.  Each of these command line options overrides any equivalent
value in the configuration file.

:command:`supervisord` Command-Line Options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-c FILE, --configuration=FILE

   The path to a :program:`supervisord` configuration file.

-n, --nodaemon

   Run :program:`supervisord` in the foreground.

-s, --silent

   No output directed to stdout.

-h, --help

   Show :command:`supervisord` command help.

-u USER, --user=USER

   UNIX username or numeric user id.  If :program:`supervisord` is
   started as the root user, setuid to this user as soon as possible
   during startup.

-m OCTAL, --umask=OCTAL

   Octal number (e.g. 022) representing the :term:`umask` that should
   be used by :program:`supervisord` after it starts.

-d PATH, --directory=PATH

   When supervisord is run as a daemon, cd to this directory before
   daemonizing.

-l FILE, --logfile=FILE

   Filename path to use as the supervisord activity log.

-y BYTES, --logfile_maxbytes=BYTES

   Max size of the supervisord activity log file before a rotation
   occurs.  The value is suffix-multiplied, e.g. "1" is one byte, "1MB"
   is 1 megabyte, "1GB" is 1 gigabyte.

-z NUM, --logfile_backups=NUM

   Number of backup copies of the supervisord activity log to keep
   around.  Each logfile will be of size ``logfile_maxbytes``.

-e LEVEL, --loglevel=LEVEL

   The logging level at which supervisor should write to the activity
   log.  Valid levels are ``trace``, ``debug``, ``info``, ``warn``,
   ``error``, and ``critical``.

-j FILE, --pidfile=FILE

   The filename to which supervisord should write its pid file.

-i STRING, --identifier=STRING

   Arbitrary string identifier exposed by various client UIs for this
   instance of supervisor.

-q PATH, --childlogdir=PATH

   A path to a directory (it must already exist) where supervisor will
   write its ``AUTO``-mode child process logs.

-k, --nocleanup

   Prevent :program:`supervisord` from performing cleanup (removal of
   old ``AUTO`` process log files) at startup.

-a NUM, --minfds=NUM

   The minimum number of file descriptors that must be available to
   the supervisord process before it will start successfully.

-t, --strip_ansi

   Strip ANSI escape sequences from all child process logs.

-v, --version

   Print the supervisord version number out to stdout and exit.

--profile_options=LIST

   Comma-separated options list for profiling.  Causes
   :program:`supervisord` to run under a profiler, and output results
   based on the options, which is a comma-separated list of the
   following: ``cumulative``, ``calls``, ``callers``.
   E.g. ``cumulative,callers``.

--minprocs=NUM

   The minimum number of OS process slots that must be available to
   the supervisord process before it will start successfully.


Running :program:`supervisorctl`
--------------------------------

To start :program:`supervisorctl`, run ``$BINDIR/supervisorctl``.  A
shell will be presented that will allow you to control the processes
that are currently managed by :program:`supervisord`.  Type "help" at
the prompt to get information about the supported commands.

The :command:`supervisorctl` executable may be invoked with "one time"
commands when invoked with arguments from a command line.  An example:
``supervisorctl stop all``.  If arguments are present on the command
line, the interactive shell will not be invoked; instead, the command
will be executed and ``supervisorctl`` will exit with a code of 0 on
success and non-zero on error.  For example, ``supervisorctl status
all`` returns non-zero if any single process is not running.

If :command:`supervisorctl` is invoked in interactive mode against a
:program:`supervisord` that requires authentication, you will be asked
for authentication credentials.

:command:`supervisorctl` Command-Line Options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-c, --configuration

   Configuration file path (default /etc/supervisord.conf)

-h, --help

   Print usage message and exit

-i, --interactive

   Start an interactive shell after executing commands

-s, --serverurl URL

   URL on which supervisord server is listening (default "http://localhost:9001").

-u, --username

   Username to use for authentication with server

-p, --password

   Password to use for authentication with server

-r, --history-file

   Keep a readline history (if readline is available)

`action [arguments]`

Actions are commands like "tail" or "stop".  If -i is specified or no action is
specified on the command line, a "shell" interpreting actions typed
interactively is started.  Use the action "help" to find out about available
actions.


:command:`supervisorctl` Actions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

help

   Print a list of available actions

help <action>

   Print help for <action>

add <name> [...]

   Activates any updates in config for process/group

remove <name> [...]

   Removes process/group from active config
   
update

   Reload config and add/remove as necessary, and will restart affected programs
   
update all

   Reload config and add/remove as necessary, and will restart affected programs

update <gname> [...]

   Update specific groups, and will restart affected programs

clear <name>

   Clear a process' log files.

clear <name> <name>

   Clear multiple process' log files

clear all

   Clear all process' log files

fg <process>

   Connect to a process in foreground mode
   Press Ctrl+C to exit foreground

pid

   Get the PID of supervisord.

pid <name>

   Get the PID of a single child process by name.

pid all

   Get the PID of every child process, one per line.

reload

   Restarts the remote supervisord

reread

   Reload the daemon's configuration files, without add/remove (no restarts)

restart <name>

   Restart a process
   Note: restart does not reread config files. For that, see reread and update.

restart <gname>:*

   Restart all processes in a group
   Note: restart does not reread config files. For that, see reread and update.

restart <name> <name>

   Restart multiple processes or groups
   Note: restart does not reread config files. For that, see reread and update.

restart all

   Restart all processes
   Note: restart does not reread config files. For that, see reread and update.

signal <signal name> <name>

   Send a signal to a process

start <name>

   Start a process

start <gname>:*

   Start all processes in a group

start <name> <name>

   Start multiple processes or groups

start all

   Start all processes

status

   Get all process status info.

status <name>

   Get status on a single process by name.

status <name> <name>

   Get status on multiple named processes.

stop <name>

   Stop a process

stop <gname>:*

   Stop all processes in a group

stop <name> <name>

   Stop multiple processes or groups

stop all

   Stop all processes

tail [-f] <name> [stdout|stderr] (default stdout)

   Output the last part of process logs
   Ex:
   tail -f <name>       Continuous tail of named process stdout; Ctrl-C to exit.
   tail -100 <name>     last 100 *bytes* of process stdout
   tail <name> stderr   last 1600 *bytes* of process stderr


Signals
-------

The :program:`supervisord` program may be sent signals which cause it
to perform certain actions while it's running.

You can send any of these signals to the single :program:`supervisord`
process id.  This process id can be found in the file represented by
the ``pidfile`` parameter in the ``[supervisord]`` section of the
configuration file (by default it's :file:`$CWD/supervisord.pid`).

Signal Handlers
~~~~~~~~~~~~~~~

``SIGTERM``

  :program:`supervisord` and all its subprocesses will shut down.
  This may take several seconds.

``SIGINT``

  :program:`supervisord` and all its subprocesses will shut down.
  This may take several seconds.

``SIGQUIT``

  :program:`supervisord` and all its subprocesses will shut down.
  This may take several seconds.

``SIGHUP``

  :program:`supervisord` will stop all processes, reload the
  configuration from the first config file it finds, and start all
  processes.

``SIGUSR2``

  :program:`supervisord` will close and reopen the main activity log
  and all child log files.

Runtime Security
----------------

The developers have done their best to assure that use of a
:program:`supervisord` process running as root cannot lead to
unintended privilege escalation.  But **caveat emptor**.  Supervisor
is not as paranoid as something like DJ Bernstein's
:term:`daemontools`, inasmuch as :program:`supervisord` allows for
arbitrary path specifications in its configuration file to which data
may be written.  Allowing arbitrary path selections can create
vulnerabilities from symlink attacks.  Be careful when specifying
paths in your configuration.  Ensure that the :program:`supervisord`
configuration file cannot be read from or written to by unprivileged
users and that all files installed by the supervisor package have
"sane" file permission protection settings.  Additionally, ensure that
your ``PYTHONPATH`` is sane and that all Python standard
library files have adequate file permission protections.

Running :program:`supervisord` automatically on startup
-------------------------------------------------------

If you are using a distribution-packaged version of Supervisor, it should
already be integrated into the service management infrastructure of your
distribution.

There are user-contributed scripts for various operating systems at:
https://github.com/Supervisor/initscripts

There are some answers at Serverfault in case you get stuck:
`How to automatically start supervisord on Linux (Ubuntu)`__

.. __: http://serverfault.com/questions/96499/how-to-automatically-start-supervisord-on-linux-ubuntu
Subprocesses
============

:program:`supervisord`'s primary purpose is to create and manage
processes based on data in its configuration file.  It does this by
creating subprocesses.  Each subprocess spawned by supervisor is
managed for the entirety of its lifetime by supervisord
(:program:`supervisord` is the parent process of each process it
creates).  When a child dies, supervisor is notified of its death via
the ``SIGCHLD`` signal, and it performs the appropriate operation.

.. _nondaemonizing_of_subprocesses:

Nondaemonizing of Subprocesses
------------------------------

Programs meant to be run under supervisor should not daemonize
themselves.  Instead, they should run in the foreground.  They should
not detach from the terminal from which they are started.

The easiest way to tell if a program will run in the foreground is to
run the command that invokes the program from a shell prompt.  If it
gives you control of the terminal back, but continues running, it's
daemonizing itself and that will almost certainly be the wrong way to
run it under supervisor.  You want to run a command that essentially
requires you to press :kbd:`Ctrl-C` to get control of the terminal
back.  If it gives you a shell prompt back after running it without
needing to press :kbd:`Ctrl-C`, it's not useful under supervisor.  Most
programs have an option to run in the foreground, but there's no
"standard way" to do it; you'll need to read the documentation for
each program.

Below are configuration file examples that are known to start
common programs in "foreground" mode under Supervisor.

Examples of Program Configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here are some "real world" program configuration examples:

Apache 2.2.6
++++++++++++

.. code-block:: ini

   [program:apache2]
   command=/path/to/httpd -c "ErrorLog /dev/stdout" -DFOREGROUND
   redirect_stderr=true

Two Zope 2.X instances and one ZEO server
+++++++++++++++++++++++++++++++++++++++++

.. code-block:: ini

   [program:zeo]
   command=/path/to/runzeo
   priority=1

   [program:zope1]
   command=/path/to/instance/home/bin/runzope
   priority=2
   redirect_stderr=true

   [program:zope2]
   command=/path/to/another/instance/home/bin/runzope
   priority=2
   redirect_stderr=true

Postgres 8.X
++++++++++++

.. code-block:: ini

   [program:postgres]
   command=/path/to/postmaster
   ; we use the "fast" shutdown signal SIGINT
   stopsignal=INT
   redirect_stderr=true

OpenLDAP :program:`slapd`
+++++++++++++++++++++++++

.. code-block:: ini

   [program:slapd]
   command=/path/to/slapd -f /path/to/slapd.conf -h ldap://0.0.0.0:8888
   redirect_stderr=true
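
nginx
+++++

nginx daemonizes by default; the ``daemon off;`` directive keeps it in
the foreground (the binary path below is an assumption; adjust it for
your system):

.. code-block:: ini

   [program:nginx]
   command=/usr/sbin/nginx -g "daemon off;"
   redirect_stderr=true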

Other Examples
~~~~~~~~~~~~~~

Other examples of shell scripts that could be used to start services
under :program:`supervisord` can be found at
`http://thedjbway.b0llix.net/services.html
<http://thedjbway.b0llix.net/services.html>`_.  These examples are
actually for :program:`daemontools` but the premise is the same for
supervisor.

Another collection of recipes for starting various programs in the
foreground is available from `http://smarden.org/runit/runscripts.html
<http://smarden.org/runit/runscripts.html>`_.

:program:`pidproxy` Program
---------------------------

Some processes (like :program:`mysqld`) ignore signals sent to the
actual process which is spawned by :program:`supervisord`.  Instead, a
"special" thread/process is created by these kinds of programs which
is responsible for handling signals.  This is problematic because
:program:`supervisord` can only kill a process which it creates
itself.  If a process created by :program:`supervisord` creates its
own child processes, :program:`supervisord` cannot kill them.

Fortunately, these types of programs typically write a "pidfile" which
contains the "special" process' PID, and is meant to be read and used
in order to kill the process.  As a workaround for this case, a
special :program:`pidproxy` program can handle startup of these kinds
of processes.  The :program:`pidproxy` program is a small shim that
starts a process, and upon the receipt of a signal, sends the signal
to the pid provided in a pidfile.  A sample configuration entry for a
pidproxy-enabled program is provided below.

.. code-block:: ini

   [program:mysql]
   command=/path/to/pidproxy /path/to/pidfile /path/to/mysqld_safe

The :program:`pidproxy` program is put into your configuration's
``$BINDIR`` when supervisor is installed (it is a "console script").

.. _subprocess_environment:

Subprocess Environment
----------------------

Subprocesses will inherit the environment of the shell used to start
the :program:`supervisord` program.  Several environment variables
will be set by :program:`supervisord` itself in the child's
environment also, including :envvar:`SUPERVISOR_ENABLED` (a flag
indicating the process is under supervisor control),
:envvar:`SUPERVISOR_PROCESS_NAME` (the config-file-specified process
name for this process) and :envvar:`SUPERVISOR_GROUP_NAME` (the
config-file-specified process group name for the child process).

These environment variables may be overridden within the
``[supervisord]`` section config option named ``environment`` (applies
to all subprocesses) or within the per- ``[program:x]`` section
``environment`` config option (applies only to the subprocess
specified within the ``[program:x]`` section).  These "environment"
settings are additive.  In other words, each subprocess' environment
will consist of:

  The environment variables set within the shell used to start
  supervisord ...

  ... added-to/overridden-by ...

  ... the environment variables set within the "environment" global
  config option ...

  ... added-to/overridden-by ...

  ... supervisor-specific environment variables
  (:envvar:`SUPERVISOR_ENABLED`,
  :envvar:`SUPERVISOR_PROCESS_NAME`,
  :envvar:`SUPERVISOR_GROUP_NAME`) ...

  ... added-to/overridden-by ...

  ... the environment variables set within the per-process
  "environment" config option.
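
For example, this layering means a per-program setting overrides a
global one of the same name (variable names and values below are purely
illustrative):

.. code-block:: ini

   [supervisord]
   environment=TZ="UTC",LANG="en_US.UTF-8"

   [program:worker]
   command=/path/to/worker
   ; TZ overrides the global value; LANG is inherited unchanged
   environment=TZ="Europe/Paris"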

No shell is executed by :program:`supervisord` when it runs a
subprocess, so environment variables such as :envvar:`USER`,
:envvar:`PATH`, :envvar:`HOME`, :envvar:`SHELL`, :envvar:`LOGNAME`,
etc. are not changed from their defaults or otherwise reassigned.
This is particularly important to note when you are running a program
from a :program:`supervisord` run as root with a ``user=`` stanza in
the configuration.  Unlike :program:`cron`, :program:`supervisord`
does not attempt to divine and override "fundamental" environment
variables like :envvar:`USER`, :envvar:`PATH`, :envvar:`HOME`, and
:envvar:`LOGNAME` when it performs a setuid to the user defined within
the ``user=`` program config option.  If you need to set environment
variables for a particular program that might otherwise be set by a
shell invocation for a particular user, you must do it explicitly
within the ``environment=`` program config option.  An
example of setting these environment variables is as below.

.. code-block:: ini

   [program:apache2]
   command=/home/chrism/bin/httpd -c "ErrorLog /dev/stdout" -DFOREGROUND
   user=chrism
   environment=HOME="/home/chrism",USER="chrism"

.. _process_states:

Process States
--------------

A process controlled by supervisord will be in one of the below states
at any given time.  You may see these state names in various user
interface elements in clients.

``STOPPED`` (0)

  The process has been stopped due to a stop request or
  has never been started.

``STARTING`` (10)

  The process is starting due to a start request.

``RUNNING`` (20)

  The process is running.

``BACKOFF`` (30)

  The process entered the ``STARTING`` state but subsequently exited
  too quickly (before the time defined in ``startsecs``) to move to
  the ``RUNNING`` state.

``STOPPING`` (40)

  The process is stopping due to a stop request.

``EXITED`` (100)

  The process exited from the ``RUNNING`` state (expectedly or
  unexpectedly).

``FATAL`` (200)

  The process could not be started successfully.

``UNKNOWN`` (1000)

  The process is in an unknown state (:program:`supervisord`
  programming error).

Each process run under supervisor progresses through these states as
per the following directed graph.

.. figure:: subprocess-transitions.png
   :alt: Subprocess State Transition Graph

   Subprocess State Transition Graph

A process is in the ``STOPPED`` state if it has been stopped
administratively or if it has never been started.

When an autorestarting process is in the ``BACKOFF`` state, it will be
automatically restarted by :program:`supervisord`.  It will switch
between ``STARTING`` and ``BACKOFF`` states until it becomes evident
that it cannot be started because the number of ``startretries`` has
exceeded the maximum, at which point it will transition to the
``FATAL`` state.

.. note::
    Each retry will wait longer than the one before it, adding one
    second per attempt.

    So if you set ``startretries=3``, :program:`supervisord` will wait one,
    two and then three seconds between each restart attempt, for a total of
    6 seconds.
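
The cumulative wait implied by that schedule is just the sum
1 + 2 + ... + ``startretries``, e.g.:

.. code-block:: python

   def total_backoff_wait(startretries):
       """Total seconds spent waiting across all retries, one more each time."""
       return sum(range(1, startretries + 1))

   print(total_backoff_wait(3))  # 1 + 2 + 3 = 6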

When a process is in the ``EXITED`` state, it will
automatically restart:

- never if its ``autorestart`` parameter is set to ``false``.

- unconditionally if its ``autorestart`` parameter is set to ``true``.

- conditionally if its ``autorestart`` parameter is set to
  ``unexpected``.  If it exited with an exit code that doesn't match
  one of the exit codes defined in the ``exitcodes`` configuration
  parameter for the process, it will be restarted.
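
A sketch of the conditional case (the program name and exit codes below
are illustrative):

.. code-block:: ini

   [program:worker]
   command=/path/to/worker
   autorestart=unexpected
   ; exits with code 0 or 2 are "expected" and will not trigger a restart
   exitcodes=0,2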

A process automatically transitions from ``EXITED`` to ``RUNNING`` as
a result of being configured to autorestart conditionally or
unconditionally.  The number of transitions between ``RUNNING`` and
``EXITED`` is not limited in any way: it is possible to create a
configuration that endlessly restarts an exited process.  This is a
feature, not a bug.

An autorestarted process will never be automatically restarted if it
ends up in the ``FATAL`` state (it must be manually restarted from
this state).

A process transitions into the ``STOPPING`` state via an
administrative stop request, and will then end up in the
``STOPPED`` state.

A process that cannot be stopped successfully will stay in the
``STOPPING`` state forever.  This situation should never be reached
during normal operations as it implies that the process did not
respond to a final ``SIGKILL`` signal sent to it by supervisor, which
is "impossible" under UNIX.

State transitions which always require user action to invoke are
these:

``FATAL``   -> ``STARTING``

``RUNNING`` -> ``STOPPING``

State transitions which typically, but not always, require user
action to invoke are these, with exceptions noted:

``STOPPED`` -> ``STARTING`` (except at supervisord startup if process
is configured to autostart)

``EXITED`` -> ``STARTING`` (except if process is configured to
autorestart)

All other state transitions are managed by supervisord automatically.
Upgrading Supervisor 2 to 3
===========================

The following is true when upgrading an installation from Supervisor
2.X to Supervisor 3.X:

#.  In ``[program:x]`` sections, the keys ``logfile``,
    ``logfile_backups``, ``logfile_maxbytes``, ``log_stderr`` and
    ``log_stdout`` are no longer valid.  Supervisor2 logged both
    stderr and stdout to a single log file.  Supervisor 3 logs stderr
    and stdout to separate log files.  You'll need to rename
    ``logfile`` to ``stdout_logfile``, ``logfile_backups`` to
    ``stdout_logfile_backups``, and ``logfile_maxbytes`` to
    ``stdout_logfile_maxbytes`` at the very least to preserve your
    configuration.  If you created program sections where
    ``log_stderr`` was true, to preserve the behavior of sending
    stderr output to the stdout log, use the ``redirect_stderr``
    boolean in the program section instead.
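
    For example, a Supervisor 2 section like this (paths illustrative):

    .. code-block:: ini

       [program:foo]
       command=/path/to/foo
       logfile=/var/log/foo.log
       log_stderr=true

    becomes, in Supervisor 3:

    .. code-block:: ini

       [program:foo]
       command=/path/to/foo
       stdout_logfile=/var/log/foo.log
       redirect_stderr=true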

#.  The supervisor configuration file must include the following
    section verbatim for the XML-RPC interface (and thus the web
    interface and :program:`supervisorctl`) to work properly:

    .. code-block:: ini

       [rpcinterface:supervisor]
       supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

#.  The semantics of the ``autorestart`` parameter within
    ``[program:x]`` sections have changed.  This parameter used to
    accept only ``true`` or ``false``.  It now accepts an additional
    value, ``unexpected``, which indicates that the process should
    restart from the ``EXITED`` state only if its exit code does not
    match any of those represented by the ``exitcodes`` parameter in
    the process' configuration (implying a process crash).  In
    addition, the default for ``autorestart`` is now ``unexpected``
    (it used to be ``true``, which meant restart unconditionally).
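
    For example, to restore the old unconditional-restart behavior for
    a program (``myprog`` is a hypothetical name):

    .. code-block:: ini

       [program:myprog]
       command = /path/to/myprog
       autorestart = true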

#.  We now allow :program:`supervisord` to listen on both a UNIX
    domain socket and an inet socket instead of making listening on
    one mutually exclusive with listening on the other.  As a result,
    the options ``http_port``, ``http_username``, ``http_password``,
    ``sockchmod`` and ``sockchown`` are no longer part of
    the ``[supervisord]`` section configuration. These have been
    supplanted by two other sections: ``[unix_http_server]`` and
    ``[inet_http_server]``.  You'll need to insert one or the other
    (depending on whether you want to listen on a UNIX domain socket
    or a TCP socket respectively) or both into your
    :file:`supervisord.conf` file.  These sections have their own
    options (where applicable) for ``port``, ``username``,
    ``password``, ``chmod``, and ``chown``.

#.  All supervisord command-line options related to ``http_port``,
    ``http_username``, ``http_password``, ``sockchmod`` and
    ``sockchown`` have been removed (see above point for rationale).

#.  The option that used to be ``sockchown`` within the
    ``[supervisord]`` section (and is now named ``chown`` within the
    ``[unix_http_server]`` section) used to accept a dot-separated
    (``user.group``) value.  The separator now must be a
    colon, e.g. ``user:group``.  Unices allow for dots in
    usernames, so this change is a bugfix.

supervisor-4.2.5/docs/xmlrpc.rst:

Extending Supervisor's XML-RPC API
==================================

Supervisor can be extended with new XML-RPC APIs.  Several third-party
plugins already exist that can be wired into your Supervisor
configuration.  You may additionally write your own.  The extensible
XML-RPC interface is an advanced feature, introduced in version 3.0.
You needn't understand it unless you wish to use an existing
third-party RPC interface plugin or write your own.

.. _rpcinterface_factories:

Configuring XML-RPC Interface Factories
---------------------------------------

An additional RPC interface is configured into a supervisor
installation by adding a ``[rpcinterface:x]`` section in the
Supervisor configuration file.

In the sample config file, there is a section named
``[rpcinterface:supervisor]``.  By default it looks like this:

.. code-block:: ini
    
   [rpcinterface:supervisor]
   supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

This section *must* remain in the configuration for the standard setup
of supervisor to work properly.  If you don't want supervisor to do
anything it doesn't already do out of the box, this is all you need to
know about this type of section.

However, if you wish to add additional XML-RPC interface namespaces to
a supervisor configuration, you may add additional
``[rpcinterface:foo]`` sections, where "foo" represents the namespace
of the interface (from the web root).  The value named by
``supervisor.rpcinterface_factory`` must be a factory callable written
in Python which accepts a single positional argument ``supervisord``
and as many keyword arguments as required to perform configuration.
Any key/value pairs defined within the ``[rpcinterface:foo]`` section
will be passed as keyword arguments to the factory.  Here's an example
of a factory function, created in the package ``my.package``.

.. code-block:: python

   def make_another_rpcinterface(supervisord, **config):
       retries = int(config.get('retries', 0))
       another_rpc_interface = AnotherRPCInterface(supervisord, retries)
       return another_rpc_interface

And a section in the config file meant to configure it.

.. code-block:: ini

   [rpcinterface:another]
   supervisor.rpcinterface_factory = my.package:make_another_rpcinterface
   retries = 1

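A minimal sketch of what ``AnotherRPCInterface`` might look like (this
class is hypothetical; only the factory contract above is prescribed by
supervisor):

.. code-block:: python

   class AnotherRPCInterface:
       def __init__(self, supervisord, retries=0):
           self.supervisord = supervisord
           self.retries = retries

       def getRetries(self):
           """Callable over XML-RPC as ``another.getRetries()``."""
           return self.retries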

supervisor-4.2.5/setup.cfg:

[easy_install]
zip_ok = false

[aliases]
dev = develop easy_install supervisor[testing]

[bdist_wheel]
universal = 1

[egg_info]
tag_build = 
tag_date = 0


supervisor-4.2.5/setup.py:

##############################################################################
#
# Copyright (c) 2006-2015 Agendaless Consulting and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the BSD-like license at
# http://www.repoze.org/LICENSE.txt.  A copy of the license should accompany
# this distribution.  THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL
# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND
# FITNESS FOR A PARTICULAR PURPOSE
#
##############################################################################

import os
import sys

py_version = sys.version_info[:2]

if py_version < (2, 7):
    raise RuntimeError('On Python 2, Supervisor requires Python 2.7 or later')
elif (3, 0) <= py_version < (3, 4):
    raise RuntimeError('On Python 3, Supervisor requires Python 3.4 or later')

# pkg_resource is used in several places
requires = ["setuptools"]
tests_require = []
if py_version < (3, 3):
    tests_require.append('mock<4.0.0.dev0')

testing_extras = tests_require + [
    'pytest',
    'pytest-cov',
    ]

from setuptools import setup, find_packages
here = os.path.abspath(os.path.dirname(__file__))
try:
    with open(os.path.join(here, 'README.rst'), 'r') as f:
        README = f.read()
    with open(os.path.join(here, 'CHANGES.rst'), 'r') as f:
        CHANGES = f.read()
except Exception:
    README = """\
Supervisor is a client/server system that allows its users to
control a number of processes on UNIX-like operating systems. """
    CHANGES = ''

CLASSIFIERS = [
    'Development Status :: 5 - Production/Stable',
    'Environment :: No Input/Output (Daemon)',
    'Intended Audience :: System Administrators',
    'Natural Language :: English',
    'Operating System :: POSIX',
    'Topic :: System :: Boot',
    'Topic :: System :: Monitoring',
    'Topic :: System :: Systems Administration',
    "Programming Language :: Python",
    "Programming Language :: Python :: 2",
    "Programming Language :: Python :: 2.7",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.4",
    "Programming Language :: Python :: 3.5",
    "Programming Language :: Python :: 3.6",
    "Programming Language :: Python :: 3.7",
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
]

version_txt = os.path.join(here, 'supervisor/version.txt')
with open(version_txt, 'r') as f:
    supervisor_version = f.read().strip()

dist = setup(
    name='supervisor',
    version=supervisor_version,
    license='BSD-derived (http://www.repoze.org/LICENSE.txt)',
    url='http://supervisord.org/',
    description="A system for controlling process state under UNIX",
    long_description=README + '\n\n' + CHANGES,
    classifiers=CLASSIFIERS,
    author="Chris McDonough",
    author_email="chrism@plope.com",
    packages=find_packages(),
    install_requires=requires,
    extras_require={
        'testing': testing_extras,
        },
    tests_require=tests_require,
    include_package_data=True,
    zip_safe=False,
    test_suite="supervisor.tests",
    entry_points={
        'console_scripts': [
            'supervisord = supervisor.supervisord:main',
            'supervisorctl = supervisor.supervisorctl:main',
            'echo_supervisord_conf = supervisor.confecho:main',
            'pidproxy = supervisor.pidproxy:main',
        ],
    },
)

supervisor-4.2.5/supervisor/__init__.py:

# this is a package

supervisor-4.2.5/supervisor/childutils.py:

import sys
import time

from supervisor.compat import xmlrpclib
from supervisor.compat import long
from supervisor.compat import as_string

from supervisor.xmlrpc import SupervisorTransport
from supervisor.events import ProcessCommunicationEvent
from supervisor.dispatchers import PEventListenerDispatcher

def getRPCTransport(env):
    u = env.get('SUPERVISOR_USERNAME', '')
    p = env.get('SUPERVISOR_PASSWORD', '')
    return SupervisorTransport(u, p, env['SUPERVISOR_SERVER_URL'])

def getRPCInterface(env):
    # dumbass ServerProxy won't allow us to pass in a non-HTTP url,
    # so we fake the url we pass into it and always use the transport's
    # 'serverurl' to figure out what to attach to
    return xmlrpclib.ServerProxy('http://127.0.0.1', getRPCTransport(env))

def get_headers(line):
    return dict([ x.split(':') for x in line.split() ])

def eventdata(payload):
    headerinfo, data = payload.split('\n', 1)
    headers = get_headers(headerinfo)
    return headers, data

def get_asctime(now=None):
    if now is None: # for testing
        now = time.time() # pragma: no cover
    msecs = (now - long(now)) * 1000
    part1 = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(now))
    asctime = '%s,%03d' % (part1, msecs)
    return asctime

class ProcessCommunicationsProtocol:
    def send(self, msg, fp=sys.stdout):
        fp.write(ProcessCommunicationEvent.BEGIN_TOKEN)
        fp.write(msg)
        fp.write(ProcessCommunicationEvent.END_TOKEN)
        fp.flush()

    def stdout(self, msg):
        return self.send(msg, sys.stdout)

    def stderr(self, msg):
        return self.send(msg, sys.stderr)

pcomm = ProcessCommunicationsProtocol()

class EventListenerProtocol:
    def wait(self, stdin=sys.stdin, stdout=sys.stdout):
        self.ready(stdout)
        line = stdin.readline()
        headers = get_headers(line)
        payload = stdin.read(int(headers['len']))
        return headers, payload

    def ready(self, stdout=sys.stdout):
        stdout.write(as_string(PEventListenerDispatcher.READY_FOR_EVENTS_TOKEN))
        stdout.flush()

    def ok(self, stdout=sys.stdout):
        self.send('OK', stdout)

    def fail(self, stdout=sys.stdout):
        self.send('FAIL', stdout)

    def send(self, data, stdout=sys.stdout):
        resultlen = len(data)
        result = '%s%s\n%s' % (as_string(PEventListenerDispatcher.RESULT_TOKEN_START),
                               str(resultlen),
                               data)
        stdout.write(result)
        stdout.flush()

listener = EventListenerProtocol()

supervisor-4.2.5/supervisor/compat.py:

from __future__ import absolute_import

import sys

PY2 = sys.version_info[0] == 2

if PY2: # pragma: no cover
    long = long
    raw_input = raw_input
    unicode = unicode
    unichr = unichr
    basestring = basestring

    def as_bytes(s, encoding='utf-8'):
        if isinstance(s, str):
            return s
        else:
            return s.encode(encoding)

    def as_string(s, encoding='utf-8'):
        if isinstance(s, unicode):
            return s
        else:
            return s.decode(encoding)

    def is_text_stream(stream):
        try:
            if isinstance(stream, file):
                return 'b' not in stream.mode
        except NameError:  # python 3
            pass

        try:
            import _io
            return isinstance(stream, _io._TextIOBase)
        except ImportError:
            import io
            return isinstance(stream, io.TextIOWrapper)

else: # pragma: no cover
    long = int
    basestring = str
    raw_input = input
    unichr = chr

    class unicode(str):
        def __init__(self, string, encoding, errors):
            str.__init__(self, string)

    def as_bytes(s, encoding='utf8'):
        if isinstance(s, bytes):
            return s
        else:
            return s.encode(encoding)

    def as_string(s, encoding='utf8'):
        if isinstance(s, str):
            return s
        else:
            return s.decode(encoding)

    def is_text_stream(stream):
        import _io
        return isinstance(stream, _io._TextIOBase)

try: # pragma: no cover
    import xmlrpc.client as xmlrpclib
except ImportError: # pragma: no cover
    import xmlrpclib

try: # pragma: no cover
    import urllib.parse as urlparse
    import urllib.parse as urllib
except ImportError: # pragma: no cover
    import urlparse
    import urllib

try: # pragma: no cover
    from hashlib import sha1
except ImportError: # pragma: no cover
    from sha import new as sha1

try: # pragma: no cover
    import syslog
except ImportError: # pragma: no cover
    syslog = None

try: # pragma: no cover
    import ConfigParser
except ImportError: # pragma: no cover
    import configparser as ConfigParser

try: # pragma: no cover
    from StringIO import StringIO
except ImportError: # pragma: no cover
    from io import StringIO

try: # pragma: no cover
    from sys import maxint
except ImportError: # pragma: no cover
    from sys import maxsize as maxint

try: # pragma: no cover
    import http.client as httplib
except ImportError: # pragma: no cover
    import httplib

try: # pragma: no cover
    from base64 import decodebytes as decodestring, encodebytes as encodestring
except ImportError: # pragma: no cover
    from base64 import decodestring, encodestring

try: # pragma: no cover
    from xmlrpc.client import Fault
except ImportError: # pragma: no cover
    from xmlrpclib import Fault

try: # pragma: no cover
    from string import ascii_letters as letters
except ImportError: # pragma: no cover
    from string import letters

try: # pragma: no cover
    from hashlib import md5
except ImportError: # pragma: no cover
    from md5 import md5

try: # pragma: no cover
    import thread
except ImportError: # pragma: no cover
    import _thread as thread

try: # pragma: no cover
    from types import StringTypes
except ImportError: # pragma: no cover
    StringTypes = (str,)

try: # pragma: no cover
    from html import escape
except ImportError: # pragma: no cover
    from cgi import escape

try: # pragma: no cover
    import html.entities as htmlentitydefs
except ImportError: # pragma: no cover
    import htmlentitydefs

try: # pragma: no cover
    from html.parser import HTMLParser
except ImportError: # pragma: no cover
    from HTMLParser import HTMLParser

supervisor-4.2.5/supervisor/confecho.py:

import pkg_resources
import sys
from supervisor.compat import as_string

def main(out=sys.stdout):
    config = pkg_resources.resource_string(__name__, 'skel/sample.conf')
    out.write(as_string(config))

supervisor-4.2.5/supervisor/datatypes.py:

import grp
import os
import pwd
import signal
import socket
import shlex

from supervisor.compat import urlparse
from supervisor.compat import long
from supervisor.loggers import getLevelNumByDescription

def process_or_group_name(name):
    """Ensures that a process or group name is not created with
       characters that break the eventlistener protocol or web UI URLs"""
    s = str(name).strip()
    for character in ' :/':
        if character in s:
            raise ValueError("Invalid name: %r because of character: %r" % (name, character))
    return s

def integer(value):
    try:
        return int(value)
    except (ValueError, OverflowError):
        return long(value) # why does this help ValueError? (CM)

TRUTHY_STRINGS = ('yes', 'true', 'on', '1')
FALSY_STRINGS  = ('no', 'false', 'off', '0')

def boolean(s):
    """Convert a string value to a boolean value."""
    ss = str(s).lower()
    if ss in TRUTHY_STRINGS:
        return True
    elif ss in FALSY_STRINGS:
        return False
    else:
        raise ValueError("not a valid boolean value: " + repr(s))

def list_of_strings(arg):
    if not arg:
        return []
    try:
        return [x.strip() for x in arg.split(',')]
    except:
        raise ValueError("not a valid list of strings: " + repr(arg))

def list_of_ints(arg):
    if not arg:
        return []
    else:
        try:
            return list(map(int, arg.split(",")))
        except:
            raise ValueError("not a valid list of ints: " + repr(arg))

def list_of_exitcodes(arg):
    try:
        vals = list_of_ints(arg)
        for val in vals:
            if (val > 255) or (val < 0):
                raise ValueError('Invalid exit code "%s"' % val)
        return vals
    except:
        raise ValueError("not a valid list of exit codes: " + repr(arg))

def dict_of_key_value_pairs(arg):
    """ parse KEY=val,KEY2=val2 into {'KEY':'val', 'KEY2':'val2'}
        Quotes can be used to allow commas in the value
    """
    lexer = shlex.shlex(str(arg))
    lexer.wordchars += '/.+-():'

    tokens = list(lexer)
    tokens_len = len(tokens)

    D = {}
    i = 0
    while i < tokens_len:
        k_eq_v = tokens[i:i+3]
        if len(k_eq_v) != 3 or k_eq_v[1] != '=':
            raise ValueError(
                "Unexpected end of key/value pairs in value '%s'" % arg)
        D[k_eq_v[0]] = k_eq_v[2].strip('\'"')
        i += 4
    return D
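
# Illustration (not part of the supervisor API): commas separate the
# pairs and quotes protect commas inside a value, e.g.
#
#   dict_of_key_value_pairs('HOME=/tmp,PATH="/bin:/usr/bin"')
#     => {'HOME': '/tmp', 'PATH': '/bin:/usr/bin'}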

class Automatic:
    pass

class Syslog:
    """TODO deprecated; remove this special 'syslog' filename in the future"""
    pass

LOGFILE_NONES = ('none', 'off', None)
LOGFILE_AUTOS = (Automatic, 'auto')
LOGFILE_SYSLOGS = (Syslog, 'syslog')

def logfile_name(val):
    if hasattr(val, 'lower'):
        coerced = val.lower()
    else:
        coerced = val

    if coerced in LOGFILE_NONES:
        return None
    elif coerced in LOGFILE_AUTOS:
        return Automatic
    elif coerced in LOGFILE_SYSLOGS:
        return Syslog
    else:
        return existing_dirpath(val)

class RangeCheckedConversion:
    """Conversion helper that range checks another conversion."""

    def __init__(self, conversion, min=None, max=None):
        self._min = min
        self._max = max
        self._conversion = conversion

    def __call__(self, value):
        v = self._conversion(value)
        if self._min is not None and v < self._min:
            raise ValueError("%s is below lower bound (%s)"
                             % (repr(v), repr(self._min)))
        if self._max is not None and v > self._max:
            raise ValueError("%s is above upper bound (%s)"
                             % (repr(v), repr(self._max)))
        return v

port_number = RangeCheckedConversion(integer, min=1, max=0xffff).__call__

def inet_address(s):
    # returns (host, port) tuple
    host = ''
    if ":" in s:
        host, s = s.rsplit(":", 1)
        if not s:
            raise ValueError("no port number specified in %r" % s)
        port = port_number(s)
        host = host.lower()
    else:
        try:
            port = port_number(s)
        except ValueError:
            raise ValueError("not a valid port number: %r " %s)
    if not host or host == '*':
        host = ''
    return host, port

class SocketAddress:
    def __init__(self, s):
        # returns (family, address) tuple
        if "/" in s or s.find(os.sep) >= 0 or ":" not in s:
            self.family = getattr(socket, "AF_UNIX", None)
            self.address = s
        else:
            self.family = socket.AF_INET
            self.address = inet_address(s)

class SocketConfig:
    """ Abstract base class which provides a uniform abstraction
    for TCP vs Unix sockets """
    url = '' # socket url
    addr = None #socket addr
    backlog = None # socket listen backlog

    def __repr__(self):
        return '<%s at %s for %s>' % (self.__class__,
                                      id(self),
                                      self.url)

    def __str__(self):
        return str(self.url)

    def __eq__(self, other):
        if not isinstance(other, SocketConfig):
            return False

        if self.url != other.url:
            return False

        return True

    def __ne__(self, other):
        return not self.__eq__(other)

    def get_backlog(self):
        return self.backlog

    def addr(self): # pragma: no cover
        raise NotImplementedError

    def create_and_bind(self): # pragma: no cover
        raise NotImplementedError

class InetStreamSocketConfig(SocketConfig):
    """ TCP socket config helper """

    host = None # host name or ip to bind to
    port = None # integer port to bind to

    def __init__(self, host, port, **kwargs):
        self.host = host.lower()
        self.port = port_number(port)
        self.url = 'tcp://%s:%d' % (self.host, self.port)
        self.backlog = kwargs.get('backlog', None)

    def addr(self):
        return self.host, self.port

    def create_and_bind(self):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(self.addr())
        except:
            sock.close()
            raise
        return sock

class UnixStreamSocketConfig(SocketConfig):
    """ Unix domain socket config helper """

    path = None # Unix domain socket path
    mode = None # Unix permission mode bits for socket
    owner = None # Tuple (uid, gid) for Unix ownership of socket
    sock = None # socket object

    def __init__(self, path, **kwargs):
        self.path = path
        self.url = 'unix://%s' % path
        self.mode = kwargs.get('mode', None)
        self.owner = kwargs.get('owner', None)
        self.backlog = kwargs.get('backlog', None)

    def addr(self):
        return self.path

    def create_and_bind(self):
        if os.path.exists(self.path):
            os.unlink(self.path)
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.bind(self.addr())
            self._chown()
            self._chmod()
        except:
            sock.close()
            if os.path.exists(self.path):
                os.unlink(self.path)
            raise
        return sock

    def get_mode(self):
        return self.mode

    def get_owner(self):
        return self.owner

    def _chmod(self):
        if self.mode is not None:
            try:
                os.chmod(self.path, self.mode)
            except Exception as e:
                raise ValueError("Could not change permissions of socket "
                                    + "file: %s" % e)

    def _chown(self):
        if self.owner is not None:
            try:
                os.chown(self.path, self.owner[0], self.owner[1])
            except Exception as e:
                raise ValueError("Could not change ownership of socket file: "
                                    + "%s" % e)

def colon_separated_user_group(arg):
    """ Find a user ID and group ID from a string like 'user:group'.  Returns
        a tuple (uid, gid).  If the string only contains a user like 'user'
        then (uid, -1) will be returned.  Raises ValueError if either
        the user or group can't be resolved to valid IDs on the system. """
    try:
        parts = arg.split(':', 1)
        if len(parts) == 1:
            uid = name_to_uid(parts[0])
            gid = -1
        else:
            uid = name_to_uid(parts[0])
            gid = name_to_gid(parts[1])
        return (uid, gid)
    except:
        raise ValueError('Invalid user:group definition %s' % arg)

def name_to_uid(name):
    """ Find a user ID from a string containing a user name or ID.
        Raises ValueError if the string can't be resolved to a valid
        user ID on the system. """
    try:
        uid = int(name)
    except ValueError:
        try:
            pwdrec = pwd.getpwnam(name)
        except KeyError:
            raise ValueError("Invalid user name %s" % name)
        uid = pwdrec[2]
    else:
        try:
            pwd.getpwuid(uid) # check if uid is valid
        except KeyError:
            raise ValueError("Invalid user id %s" % name)
    return uid

def name_to_gid(name):
    """ Find a group ID from a string containing a group name or ID.
        Raises ValueError if the string can't be resolved to a valid
        group ID on the system. """
    try:
        gid = int(name)
    except ValueError:
        try:
            grprec = grp.getgrnam(name)
        except KeyError:
            raise ValueError("Invalid group name %s" % name)
        gid = grprec[2]
    else:
        try:
            grp.getgrgid(gid) # check if gid is valid
        except KeyError:
            raise ValueError("Invalid group id %s" % name)
    return gid

def gid_for_uid(uid):
    pwrec = pwd.getpwuid(uid)
    return pwrec[3]

def octal_type(arg):
    try:
        return int(arg, 8)
    except (TypeError, ValueError):
        raise ValueError('%s can not be converted to an octal type' % arg)

def existing_directory(v):
    nv = os.path.expanduser(v)
    if os.path.isdir(nv):
        return nv
    raise ValueError('%s is not an existing directory' % v)

def existing_dirpath(v):
    nv = os.path.expanduser(v)
    dir = os.path.dirname(nv)
    if not dir:
        # relative pathname with no directory component
        return nv
    if os.path.isdir(dir):
        return nv
    raise ValueError('The directory named as part of the path %s '
                     'does not exist' % v)

def logging_level(value):
    s = str(value).lower()
    level = getLevelNumByDescription(s)
    if level is None:
        raise ValueError('bad logging level name %r' % value)
    return level

class SuffixMultiplier:
    # d is a dictionary of suffixes to integer multipliers.  If no suffixes
    # match, default is the multiplier.  Matches are case insensitive.  Return
    # values are in the fundamental unit.
    def __init__(self, d, default=1):
        self._d = d
        self._default = default
        # all keys must be the same size
        self._keysz = None
        for k in d.keys():
            if self._keysz is None:
                self._keysz = len(k)
            else:
                assert self._keysz == len(k)

    def __call__(self, v):
        v = v.lower()
        for s, m in self._d.items():
            if v[-self._keysz:] == s:
                return int(v[:-self._keysz]) * m
        return int(v) * self._default

byte_size = SuffixMultiplier({'kb': 1024,
                              'mb': 1024*1024,
                              'gb': 1024*1024*long(1024),})
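
# Illustration: with the suffixes above,
#   byte_size('1KB') => 1024      (suffix matching is case-insensitive)
#   byte_size('2MB') => 2097152
#   byte_size('100') => 100       (no suffix: default multiplier of 1)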

def url(value):
    scheme, netloc, path, params, query, fragment = urlparse.urlparse(value)
    if scheme and (netloc or path):
        return value
    raise ValueError("value %r is not a URL" % value)

# all valid signal numbers
SIGNUMS = [ getattr(signal, k) for k in dir(signal) if k.startswith('SIG') ]

def signal_number(value):
    try:
        num = int(value)
    except (ValueError, TypeError):
        name = value.strip().upper()
        if not name.startswith('SIG'):
            name = 'SIG' + name
        num = getattr(signal, name, None)
        if num is None:
            raise ValueError('value %r is not a valid signal name' % value)
    if num not in SIGNUMS:
        raise ValueError('value %r is not a valid signal number' % value)
    return num

class RestartWhenExitUnexpected:
    pass

class RestartUnconditionally:
    pass

def auto_restart(value):
    value = str(value.lower())
    computed_value  = value
    if value in TRUTHY_STRINGS:
        computed_value = RestartUnconditionally
    elif value in FALSY_STRINGS:
        computed_value = False
    elif value == 'unexpected':
        computed_value = RestartWhenExitUnexpected
    if computed_value not in (RestartWhenExitUnexpected,
                              RestartUnconditionally, False):
        raise ValueError("invalid 'autorestart' value %r" % value)
    return computed_value

def profile_options(value):
    options = [x.lower() for x in list_of_strings(value) ]
    sort_options = []
    callers = False
    for thing in options:
        if thing != 'callers':
            sort_options.append(thing)
        else:
            callers = True
    return sort_options, callers

supervisor-4.2.5/supervisor/dispatchers.py:

import errno
from supervisor.medusa.asynchat_25 import find_prefix_at_end
from supervisor.medusa.asyncore_25 import compact_traceback

from supervisor.compat import as_string
from supervisor.events import notify
from supervisor.events import EventRejectedEvent
from supervisor.events import ProcessLogStderrEvent
from supervisor.events import ProcessLogStdoutEvent
from supervisor.states import EventListenerStates
from supervisor.states import getEventListenerStateDescription
from supervisor import loggers

class PDispatcher:
    """ Asyncore dispatcher for mainloop, representing a process channel
    (stdin, stdout, or stderr).  This class is abstract. """

    closed = False # True if close() has been called

    def __init__(self, process, channel, fd):
        self.process = process  # process which "owns" this dispatcher
        self.channel = channel  # 'stderr' or 'stdout'
        self.fd = fd
        self.closed = False     # True if close() has been called

    def __repr__(self):
        return '<%s at %s for %s (%s)>' % (self.__class__.__name__,
                                           id(self),
                                           self.process,
                                           self.channel)

    def readable(self):
        raise NotImplementedError

    def writable(self):
        raise NotImplementedError

    def handle_read_event(self):
        raise NotImplementedError

    def handle_write_event(self):
        raise NotImplementedError

    def handle_error(self):
        nil, t, v, tbinfo = compact_traceback()

        self.process.config.options.logger.critical(
            'uncaptured python exception, closing channel %s (%s:%s %s)' % (
                repr(self),
                t,
                v,
                tbinfo
                )
            )
        self.close()

    def close(self):
        if not self.closed:
            self.process.config.options.logger.debug(
                'fd %s closed, stopped monitoring %s' % (self.fd, self))
            self.closed = True

    def flush(self):
        pass
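The `readable()`/`writable()`/`handle_read_event()` contract above mirrors asyncore: the main loop asks each dispatcher which events it wants, waits on the file descriptors with `select()`, and invokes the matching handler. A minimal standalone sketch of that loop (POSIX only; `PipeDispatcher` is a hypothetical stand-in for illustration, not a supervisor class):

```python
import os
import select

class PipeDispatcher:
    # Hypothetical stand-in for a PDispatcher subclass.
    def __init__(self, fd):
        self.fd = fd
        self.data = b''
        self.closed = False

    def readable(self):
        return not self.closed

    def writable(self):
        return False

    def handle_read_event(self):
        chunk = os.read(self.fd, 1024)
        if not chunk:
            self.closed = True  # EOF: the writer closed its end of the pipe
        else:
            self.data += chunk

r, w = os.pipe()
dispatcher = PipeDispatcher(r)
os.write(w, b'hello')
os.close(w)

while not dispatcher.closed:
    fds = [d.fd for d in (dispatcher,) if d.readable()]
    ready, _, _ = select.select(fds, [], [], 1.0)
    for fd in ready:
        dispatcher.handle_read_event()
os.close(r)
assert dispatcher.data == b'hello'
```

The EOF-means-exit convention here is the same one `handle_read_event()` relies on in the dispatchers below.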

class POutputDispatcher(PDispatcher):
    """
    Dispatcher for one channel (stdout or stderr) of one process.
    Serves several purposes:

    - capture output sent within <!--XSUPERVISOR:BEGIN--> and
      <!--XSUPERVISOR:END--> tags and signal a ProcessCommunicationEvent
      by calling notify(event).
    - route the output to the appropriate log handlers as specified in the
      config.
    """

    childlog = None # the current logger (normallog or capturelog)
    normallog = None # the "normal" (non-capture) logger
    capturelog = None # the logger used while we're in capturemode
    capturemode = False # are we capturing process event data
    output_buffer = b'' # data waiting to be logged

    def __init__(self, process, event_type, fd):
        """
        Initialize the dispatcher.

        `event_type` should be one of ProcessLogStdoutEvent or
        ProcessLogStderrEvent
        """
        self.process = process
        self.event_type = event_type
        self.fd = fd
        self.channel = self.event_type.channel

        self._init_normallog()
        self._init_capturelog()

        self.childlog = self.normallog

        # all code below is purely for minor speedups
        begintoken = self.event_type.BEGIN_TOKEN
        endtoken = self.event_type.END_TOKEN
        self.begintoken_data = (begintoken, len(begintoken))
        self.endtoken_data = (endtoken, len(endtoken))
        self.mainlog_level = loggers.LevelsByName.DEBG
        config = self.process.config
        self.log_to_mainlog = config.options.loglevel <= self.mainlog_level
        self.stdout_events_enabled = config.stdout_events_enabled
        self.stderr_events_enabled = config.stderr_events_enabled

    def _init_normallog(self):
        """
        Configure the "normal" (non-capture) log for this channel of this
        process.  Sets self.normallog if logging is enabled.
        """
        config = self.process.config
        channel = self.channel

        logfile = getattr(config, '%s_logfile' % channel)
        maxbytes = getattr(config, '%s_logfile_maxbytes' % channel)
        backups = getattr(config, '%s_logfile_backups' % channel)
        to_syslog = getattr(config, '%s_syslog' % channel)

        if logfile or to_syslog:
            self.normallog = config.options.getLogger()

        if logfile:
            loggers.handle_file(
                self.normallog,
                filename=logfile,
                fmt='%(message)s',
                rotating=not not maxbytes, # optimization
                maxbytes=maxbytes,
                backups=backups
            )

        if to_syslog:
            loggers.handle_syslog(
                self.normallog,
                fmt=config.name + ' %(message)s'
            )

    def _init_capturelog(self):
        """
        Configure the capture log for this process.  This log is used to
        temporarily capture output when special output is detected.
        Sets self.capturelog if capturing is enabled.
        """
        capture_maxbytes = getattr(self.process.config,
                                   '%s_capture_maxbytes' % self.channel)
        if capture_maxbytes:
            self.capturelog = self.process.config.options.getLogger()
            loggers.handle_boundIO(
                self.capturelog,
                fmt='%(message)s',
                maxbytes=capture_maxbytes,
                )

    def removelogs(self):
        for log in (self.normallog, self.capturelog):
            if log is not None:
                for handler in log.handlers:
                    handler.remove()
                    handler.reopen()

    def reopenlogs(self):
        for log in (self.normallog, self.capturelog):
            if log is not None:
                for handler in log.handlers:
                    handler.reopen()

    def _log(self, data):
        if data:
            config = self.process.config
            if config.options.strip_ansi:
                data = stripEscapes(data)
            if self.childlog:
                self.childlog.info(data)
            if self.log_to_mainlog:
                if not isinstance(data, bytes):
                    text = data
                else:
                    try:
                        text = data.decode('utf-8')
                    except UnicodeDecodeError:
                        text = 'Undecodable: %r' % data
                msg = '%(name)r %(channel)s output:\n%(data)s'
                config.options.logger.log(
                    self.mainlog_level, msg, name=config.name,
                    channel=self.channel, data=text)
            if self.channel == 'stdout':
                if self.stdout_events_enabled:
                    notify(
                        ProcessLogStdoutEvent(self.process,
                            self.process.pid, data)
                    )
            else: # channel == stderr
                if self.stderr_events_enabled:
                    notify(
                        ProcessLogStderrEvent(self.process,
                            self.process.pid, data)
                    )

    def record_output(self):
        if self.capturelog is None:
            # shortcut trying to find capture data
            data = self.output_buffer
            self.output_buffer = b''
            self._log(data)
            return

        if self.capturemode:
            token, tokenlen = self.endtoken_data
        else:
            token, tokenlen = self.begintoken_data

        if len(self.output_buffer) <= tokenlen:
            return # not enough data

        data = self.output_buffer
        self.output_buffer = b''

        try:
            before, after = data.split(token, 1)
        except ValueError:
            after = None
            index = find_prefix_at_end(data, token)
            if index:
                self.output_buffer = self.output_buffer + data[-index:]
                data = data[:-index]
            self._log(data)
        else:
            self._log(before)
            self.toggle_capturemode()
            self.output_buffer = after

        if after:
            self.record_output()

    def toggle_capturemode(self):
        self.capturemode = not self.capturemode

        if self.capturelog is not None:
            if self.capturemode:
                self.childlog = self.capturelog
            else:
                for handler in self.capturelog.handlers:
                    handler.flush()
                data = self.capturelog.getvalue()
                channel = self.channel
                procname = self.process.config.name
                event = self.event_type(self.process, self.process.pid, data)
                notify(event)

                msg = "%(procname)r %(channel)s emitted a comm event"
                self.process.config.options.logger.debug(msg,
                                                         procname=procname,
                                                         channel=channel)
                for handler in self.capturelog.handlers:
                    handler.remove()
                    handler.reopen()
                self.childlog = self.normallog

    def writable(self):
        return False

    def readable(self):
        if self.closed:
            return False
        return True

    def handle_read_event(self):
        data = self.process.config.options.readfd(self.fd)
        self.output_buffer += data
        self.record_output()
        if not data:
            # if we get no data back from the pipe, it means that the
            # child process has ended.  See
            # mail.python.org/pipermail/python-dev/2004-August/046850.html
            self.close()
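From the child process's side, a ProcessCommunicationEvent is triggered by wrapping a message in the begin/end tokens on stdout (or stderr). A minimal sketch of the framing; `frame()` is an illustrative helper, not part of supervisor:

```python
BEGIN_TOKEN = '<!--XSUPERVISOR:BEGIN-->'
END_TOKEN = '<!--XSUPERVISOR:END-->'

def frame(data):
    # Everything between the tokens becomes the payload of a
    # PROCESS_COMMUNICATION event once record_output() sees both tokens.
    return BEGIN_TOKEN + data + END_TOKEN

assert frame('done') == '<!--XSUPERVISOR:BEGIN-->done<!--XSUPERVISOR:END-->'
```

In a real child the framed string would be written to `sys.stdout` followed by a `flush()` so the dispatcher sees it promptly.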

class PEventListenerDispatcher(PDispatcher):
    """ An output dispatcher that monitors and changes a process'
    listener_state """
    childlog = None # the logger
    state_buffer = b''  # data waiting to be reviewed for state changes

    READY_FOR_EVENTS_TOKEN = b'READY\n'
    RESULT_TOKEN_START = b'RESULT '
    READY_FOR_EVENTS_LEN = len(READY_FOR_EVENTS_TOKEN)
    RESULT_TOKEN_START_LEN = len(RESULT_TOKEN_START)

    def __init__(self, process, channel, fd):
        PDispatcher.__init__(self, process, channel, fd)
        # the initial state of our listener is ACKNOWLEDGED; this is a
        # "busy" state that implies we're awaiting a READY_FOR_EVENTS_TOKEN
        self.process.listener_state = EventListenerStates.ACKNOWLEDGED
        self.process.event = None
        self.result = b''
        self.resultlen = None

        logfile = getattr(process.config, '%s_logfile' % channel)

        if logfile:
            maxbytes = getattr(process.config, '%s_logfile_maxbytes' % channel)
            backups = getattr(process.config, '%s_logfile_backups' % channel)
            self.childlog = process.config.options.getLogger()
            loggers.handle_file(
                self.childlog,
                logfile,
                '%(message)s',
                rotating=not not maxbytes, # optimization
                maxbytes=maxbytes,
                backups=backups,
            )

    def removelogs(self):
        if self.childlog is not None:
            for handler in self.childlog.handlers:
                handler.remove()
                handler.reopen()

    def reopenlogs(self):
        if self.childlog is not None:
            for handler in self.childlog.handlers:
                handler.reopen()


    def writable(self):
        return False

    def readable(self):
        if self.closed:
            return False
        return True

    def handle_read_event(self):
        data = self.process.config.options.readfd(self.fd)
        if data:
            self.state_buffer += data
            procname = self.process.config.name
            msg = '%r %s output:\n%s' % (procname, self.channel, data)
            self.process.config.options.logger.debug(msg)

            if self.childlog:
                if self.process.config.options.strip_ansi:
                    data = stripEscapes(data)
                self.childlog.info(data)
        else:
            # if we get no data back from the pipe, it means that the
            # child process has ended.  See
            # mail.python.org/pipermail/python-dev/2004-August/046850.html
            self.close()

        self.handle_listener_state_change()

    def handle_listener_state_change(self):
        data = self.state_buffer

        if not data:
            return

        process = self.process
        procname = process.config.name
        state = process.listener_state

        if state == EventListenerStates.UNKNOWN:
            # this is a fatal state
            self.state_buffer = b''
            return

        if state == EventListenerStates.ACKNOWLEDGED:
            if len(data) < self.READY_FOR_EVENTS_LEN:
                # not enough info to make a decision
                return
            elif data.startswith(self.READY_FOR_EVENTS_TOKEN):
                self._change_listener_state(EventListenerStates.READY)
                tokenlen = self.READY_FOR_EVENTS_LEN
                self.state_buffer = self.state_buffer[tokenlen:]
                process.event = None
            else:
                self._change_listener_state(EventListenerStates.UNKNOWN)
                self.state_buffer = b''
                process.event = None
            if self.state_buffer:
                # keep going until it's too short
                self.handle_listener_state_change()
            else:
                return

        elif state == EventListenerStates.READY:
            # the process sent some spurious data, be strict about it
            self._change_listener_state(EventListenerStates.UNKNOWN)
            self.state_buffer = b''
            process.event = None
            return

        elif state == EventListenerStates.BUSY:
            if self.resultlen is None:
                # we haven't begun gathering result data yet
                pos = data.find(b'\n')
                if pos == -1:
                    # we can't make a determination yet, we don't have a
                    # full result line
                    return

                result_line = self.state_buffer[:pos]
                self.state_buffer = self.state_buffer[pos+1:] # rid LF
                resultlen = result_line[self.RESULT_TOKEN_START_LEN:]
                try:
                    self.resultlen = int(resultlen)
                except ValueError:
                    try:
                        result_line = as_string(result_line)
                    except UnicodeDecodeError:
                        result_line = 'Undecodable: %r' % result_line
                    process.config.options.logger.warn(
                        '%s: bad result line: \'%s\'' % (procname, result_line)
                        )
                    self._change_listener_state(EventListenerStates.UNKNOWN)
                    self.state_buffer = b''
                    notify(EventRejectedEvent(process, process.event))
                    process.event = None
                    return

            else:
                needed = self.resultlen - len(self.result)

                if needed:
                    self.result += self.state_buffer[:needed]
                    self.state_buffer = self.state_buffer[needed:]
                    needed = self.resultlen - len(self.result)

                if not needed:
                    self.handle_result(self.result)
                    self.process.event = None
                    self.result = b''
                    self.resultlen = None

            if self.state_buffer:
                # keep going until it's too short
                self.handle_listener_state_change()

    def handle_result(self, result):
        process = self.process
        procname = process.config.name
        logger = process.config.options.logger

        try:
            self.process.group.config.result_handler(process.event, result)
            logger.debug('%s: event was processed' % procname)
            self._change_listener_state(EventListenerStates.ACKNOWLEDGED)
        except RejectEvent:
            logger.warn('%s: event was rejected' % procname)
            self._change_listener_state(EventListenerStates.ACKNOWLEDGED)
            notify(EventRejectedEvent(process, process.event))
        except:
            logger.warn('%s: event caused an error' % procname)
            self._change_listener_state(EventListenerStates.UNKNOWN)
            notify(EventRejectedEvent(process, process.event))

    def _change_listener_state(self, new_state):
        process = self.process
        procname = process.config.name
        old_state = process.listener_state

        msg = '%s: %s -> %s' % (
            procname,
            getEventListenerStateDescription(old_state),
            getEventListenerStateDescription(new_state)
            )
        process.config.options.logger.debug(msg)

        process.listener_state = new_state
        if new_state == EventListenerStates.UNKNOWN:
            msg = ('%s: has entered the UNKNOWN state and will no longer '
                   'receive events, this usually indicates the process '
                   'violated the eventlistener protocol' % procname)
            process.config.options.logger.warn(msg)
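The listener side of this protocol is simple: emit `READY\n` when idle, then answer each event with `RESULT <len>\n<body>`, where `<len>` is the byte length of the body. A standalone sketch using an in-memory stream (`write_result` is illustrative, not part of supervisor):

```python
import io

def write_result(stream, body):
    # Reply in the form handle_listener_state_change() parses:
    # "RESULT <len>\n" followed by exactly <len> bytes of body.
    stream.write('RESULT %d\n%s' % (len(body), body))
    stream.flush()

out = io.StringIO()
out.write('READY\n')      # token the ACKNOWLEDGED state waits for
write_result(out, 'OK')   # a body of 'OK' is what default_handler accepts
assert out.getvalue() == 'READY\nRESULT 2\nOK'
```

Any other body causes the result handler to reject the event, moving the listener back through ACKNOWLEDGED with an EventRejectedEvent.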

class PInputDispatcher(PDispatcher):
    """ Input (stdin) dispatcher """

    def __init__(self, process, channel, fd):
        PDispatcher.__init__(self, process, channel, fd)
        self.input_buffer = b''

    def writable(self):
        if self.input_buffer and not self.closed:
            return True
        return False

    def readable(self):
        return False

    def flush(self):
        # other code depends on this raising EPIPE if the pipe is closed
        sent = self.process.config.options.write(self.fd,
                                                 self.input_buffer)
        self.input_buffer = self.input_buffer[sent:]

    def handle_write_event(self):
        if self.input_buffer:
            try:
                self.flush()
            except OSError as why:
                if why.args[0] == errno.EPIPE:
                    self.input_buffer = b''
                    self.close()
                else:
                    raise

ANSI_ESCAPE_BEGIN = b'\x1b['
ANSI_TERMINATORS = (b'H', b'f', b'A', b'B', b'C', b'D', b'R', b's', b'u', b'J',
                    b'K', b'h', b'l', b'p', b'm')

def stripEscapes(s):
    """
    Remove all ANSI color escapes from the given string.
    """
    result = b''
    show = 1
    i = 0
    L = len(s)
    while i < L:
        if show == 0 and s[i:i + 1] in ANSI_TERMINATORS:
            show = 1
        elif show:
            n = s.find(ANSI_ESCAPE_BEGIN, i)
            if n == -1:
                return result + s[i:]
            else:
                result = result + s[i:n]
                i = n
                show = 0
        i += 1
    return result
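For comparison, roughly the same stripping can be expressed as a regular expression over the same terminator set (a sketch only; unlike `stripEscapes()`, it leaves an unterminated trailing escape sequence in place rather than discarding it):

```python
import re

# ESC [ ... terminated by one of the ANSI_TERMINATORS characters above
ANSI_RE = re.compile(rb'\x1b\[[^HfABCDRsuJKhlpm]*[HfABCDRsuJKhlpm]')

assert ANSI_RE.sub(b'', b'\x1b[31mred\x1b[0m') == b'red'
```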

class RejectEvent(Exception):
    """ The exception type expected by a dispatcher when a handler wants
    to reject an event """

def default_handler(event, response):
    if response != b'OK':
        raise RejectEvent(response)
# supervisor/events.py
from supervisor.states import getProcessStateDescription
from supervisor.compat import as_string

callbacks = []

def subscribe(type, callback):
    callbacks.append((type, callback))

def unsubscribe(type, callback):
    callbacks.remove((type, callback))

def notify(event):
    for type, callback in callbacks:
        if isinstance(event, type):
            callback(event)

def clear():
    callbacks[:] = []
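`notify()` dispatches on `isinstance()`, so subscribing to an abstract type receives every one of its subclasses. A self-contained sketch of the same pattern (the event classes here are stand-ins, not the real types defined below):

```python
callbacks = []

class Event: pass
class ProcessStateEvent(Event): pass
class ProcessStateStoppedEvent(ProcessStateEvent): pass

def subscribe(type, callback):
    callbacks.append((type, callback))

def notify(event):
    for type, callback in callbacks:
        if isinstance(event, type):
            callback(event)

seen = []
subscribe(ProcessStateEvent, seen.append)  # matches every subclass
notify(ProcessStateStoppedEvent())
notify(Event())                            # not a ProcessStateEvent: ignored
assert len(seen) == 1
```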

class Event:
    """ Abstract event type """
    pass

class ProcessLogEvent(Event):
    """ Abstract """
    channel = None
    def __init__(self, process, pid, data):
        self.process = process
        self.pid = pid
        self.data = data

    def payload(self):
        groupname = ''
        if self.process.group is not None:
            groupname = self.process.group.config.name
        try:
            data = as_string(self.data)
        except UnicodeDecodeError:
            data = 'Undecodable: %r' % self.data
        # On Python 2, stuff needs to be in Unicode before invoking the
        # % operator, otherwise implicit encodings to ASCII can cause
        # failures
        fmt = as_string('processname:%s groupname:%s pid:%s channel:%s\n%s')
        result = fmt % (as_string(self.process.config.name),
                        as_string(groupname), self.pid,
                        as_string(self.channel), data)
        return result

class ProcessLogStdoutEvent(ProcessLogEvent):
    channel = 'stdout'

class ProcessLogStderrEvent(ProcessLogEvent):
    channel = 'stderr'

class ProcessCommunicationEvent(Event):
    """ Abstract """
    # event mode tokens
    BEGIN_TOKEN = b'<!--XSUPERVISOR:BEGIN-->'
    END_TOKEN   = b'<!--XSUPERVISOR:END-->'

    def __init__(self, process, pid, data):
        self.process = process
        self.pid = pid
        self.data = data

    def payload(self):
        groupname = ''
        if self.process.group is not None:
            groupname = self.process.group.config.name
        try:
            data = as_string(self.data)
        except UnicodeDecodeError:
            data = 'Undecodable: %r' % self.data
        return 'processname:%s groupname:%s pid:%s\n%s' % (
            self.process.config.name,
            groupname,
            self.pid,
            data)

class ProcessCommunicationStdoutEvent(ProcessCommunicationEvent):
    channel = 'stdout'

class ProcessCommunicationStderrEvent(ProcessCommunicationEvent):
    channel = 'stderr'

class RemoteCommunicationEvent(Event):
    def __init__(self, type, data):
        self.type = type
        self.data = data

    def payload(self):
        return 'type:%s\n%s' % (self.type, self.data)

class SupervisorStateChangeEvent(Event):
    """ Abstract class """
    def payload(self):
        return ''

class SupervisorRunningEvent(SupervisorStateChangeEvent):
    pass

class SupervisorStoppingEvent(SupervisorStateChangeEvent):
    pass

class EventRejectedEvent: # purposely does not subclass Event
    def __init__(self, process, event):
        self.process = process
        self.event = event

class ProcessStateEvent(Event):
    """ Abstract class, never raised directly """
    frm = None
    to = None
    def __init__(self, process, from_state, expected=True):
        self.process = process
        self.from_state = from_state
        self.expected = expected
        # we eagerly render these so if the process pid, etc changes beneath
        # us, we stash the values at the time the event was sent
        self.extra_values = self.get_extra_values()

    def payload(self):
        groupname = ''
        if self.process.group is not None:
            groupname = self.process.group.config.name
        L = [('processname', self.process.config.name), ('groupname', groupname),
             ('from_state', getProcessStateDescription(self.from_state))]
        L.extend(self.extra_values)
        s = ' '.join( [ '%s:%s' % (name, val) for (name, val) in L ] )
        return s

    def get_extra_values(self):
        return []

class ProcessStateFatalEvent(ProcessStateEvent):
    pass

class ProcessStateUnknownEvent(ProcessStateEvent):
    pass

class ProcessStateStartingOrBackoffEvent(ProcessStateEvent):
    def get_extra_values(self):
        return [('tries', int(self.process.backoff))]

class ProcessStateBackoffEvent(ProcessStateStartingOrBackoffEvent):
    pass

class ProcessStateStartingEvent(ProcessStateStartingOrBackoffEvent):
    pass

class ProcessStateExitedEvent(ProcessStateEvent):
    def get_extra_values(self):
        return [('expected', int(self.expected)), ('pid', self.process.pid)]

class ProcessStateRunningEvent(ProcessStateEvent):
    def get_extra_values(self):
        return [('pid', self.process.pid)]

class ProcessStateStoppingEvent(ProcessStateEvent):
    def get_extra_values(self):
        return [('pid', self.process.pid)]

class ProcessStateStoppedEvent(ProcessStateEvent):
    def get_extra_values(self):
        return [('pid', self.process.pid)]

class ProcessGroupEvent(Event):
    def __init__(self, group):
        self.group = group

    def payload(self):
        return 'groupname:%s\n' % self.group

class ProcessGroupAddedEvent(ProcessGroupEvent):
    pass

class ProcessGroupRemovedEvent(ProcessGroupEvent):
    pass

class TickEvent(Event):
    """ Abstract """
    def __init__(self, when, supervisord):
        self.when = when
        self.supervisord = supervisord

    def payload(self):
        return 'when:%s' % self.when

class Tick5Event(TickEvent):
    period = 5

class Tick60Event(TickEvent):
    period = 60

class Tick3600Event(TickEvent):
    period = 3600

TICK_EVENTS = [ Tick5Event, Tick60Event, Tick3600Event ] # imported elsewhere

class EventTypes:
    EVENT = Event # abstract
    PROCESS_STATE = ProcessStateEvent # abstract
    PROCESS_STATE_STOPPED = ProcessStateStoppedEvent
    PROCESS_STATE_EXITED = ProcessStateExitedEvent
    PROCESS_STATE_STARTING = ProcessStateStartingEvent
    PROCESS_STATE_STOPPING = ProcessStateStoppingEvent
    PROCESS_STATE_BACKOFF = ProcessStateBackoffEvent
    PROCESS_STATE_FATAL = ProcessStateFatalEvent
    PROCESS_STATE_RUNNING = ProcessStateRunningEvent
    PROCESS_STATE_UNKNOWN = ProcessStateUnknownEvent
    PROCESS_COMMUNICATION = ProcessCommunicationEvent # abstract
    PROCESS_COMMUNICATION_STDOUT = ProcessCommunicationStdoutEvent
    PROCESS_COMMUNICATION_STDERR = ProcessCommunicationStderrEvent
    PROCESS_LOG = ProcessLogEvent
    PROCESS_LOG_STDOUT = ProcessLogStdoutEvent
    PROCESS_LOG_STDERR = ProcessLogStderrEvent
    REMOTE_COMMUNICATION = RemoteCommunicationEvent
    SUPERVISOR_STATE_CHANGE = SupervisorStateChangeEvent # abstract
    SUPERVISOR_STATE_CHANGE_RUNNING = SupervisorRunningEvent
    SUPERVISOR_STATE_CHANGE_STOPPING = SupervisorStoppingEvent
    TICK = TickEvent # abstract
    TICK_5 = Tick5Event
    TICK_60 = Tick60Event
    TICK_3600 = Tick3600Event
    PROCESS_GROUP = ProcessGroupEvent # abstract
    PROCESS_GROUP_ADDED = ProcessGroupAddedEvent
    PROCESS_GROUP_REMOVED = ProcessGroupRemovedEvent

def getEventNameByType(requested):
    for name, typ in EventTypes.__dict__.items():
        if typ is requested:
            return name

def register(name, event):
    setattr(EventTypes, name, event)
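`getEventNameByType()` does a reverse lookup over the `EventTypes` namespace, and `register()` is the hook extensions can use to add their own event types. A minimal demonstration of both (a standalone re-creation with dummy classes, not an import of the real module):

```python
class Tick5Event: pass

class EventTypes:
    TICK_5 = Tick5Event

def getEventNameByType(requested):
    # Scan the class namespace for the attribute bound to this type.
    for name, typ in EventTypes.__dict__.items():
        if typ is requested:
            return name

def register(name, event):
    setattr(EventTypes, name, event)

class MyEvent: pass
register('MY_EVENT', MyEvent)
assert getEventNameByType(Tick5Event) == 'TICK_5'
assert getEventNameByType(MyEvent) == 'MY_EVENT'
```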
# supervisor/http.py
import os
import stat
import time
import sys
import socket
import errno
import weakref
import traceback

try:
    import pwd
except ImportError:  # Windows
    import getpass as pwd

from supervisor.compat import urllib
from supervisor.compat import sha1
from supervisor.compat import as_bytes
from supervisor.compat import as_string
from supervisor.medusa import asyncore_25 as asyncore
from supervisor.medusa import http_date
from supervisor.medusa import http_server
from supervisor.medusa import producers
from supervisor.medusa import filesys
from supervisor.medusa import default_handler

from supervisor.medusa.auth_handler import auth_handler

class NOT_DONE_YET:
    pass

class deferring_chunked_producer:
    """A producer that implements the 'chunked' transfer coding for HTTP/1.1.
    Here is a sample usage:
            request['Transfer-Encoding'] = 'chunked'
            request.push (
                    producers.chunked_producer (your_producer)
                    )
            request.done()
    """

    def __init__ (self, producer, footers=None):
        self.producer = producer
        self.footers = footers
        self.delay = 0.1

    def more (self):
        if self.producer:
            data = self.producer.more()
            if data is NOT_DONE_YET:
                return NOT_DONE_YET
            elif data:
                s = '%x' % len(data)
                return as_bytes(s) + b'\r\n' + data + b'\r\n'
            else:
                self.producer = None
                if self.footers:
                    return b'\r\n'.join([b'0'] + self.footers) + b'\r\n\r\n'
                else:
                    return b'0\r\n\r\n'
        else:
            return b''
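The framing `more()` produces is standard HTTP/1.1 chunked transfer coding: a hex byte count, CRLF, the payload, CRLF, with a zero-length chunk terminating the body. A standalone sketch of one frame (`chunk()` is illustrative, not part of this module):

```python
def chunk(data):
    # One chunked-coding frame: hex length, CRLF, payload, CRLF.
    # An empty payload produces the terminating "0" chunk.
    if data:
        return b'%x' % len(data) + b'\r\n' + data + b'\r\n'
    return b'0\r\n\r\n'

assert chunk(b'hello') == b'5\r\nhello\r\n'
assert chunk(b'') == b'0\r\n\r\n'
```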

class deferring_composite_producer:
    """combine a fifo of producers into one"""
    def __init__ (self, producers):
        self.producers = producers
        self.delay = 0.1

    def more (self):
        while len(self.producers):
            p = self.producers[0]
            d = p.more()
            if d is NOT_DONE_YET:
                return NOT_DONE_YET
            if d:
                return d
            else:
                self.producers.pop(0)
        else:
            return b''


class deferring_globbing_producer:
    """
    'glob' the output from a producer into a particular buffer size.
    helps reduce the number of calls to send().  [this appears to
    gain about 30% performance on requests to a single channel]
    """

    def __init__ (self, producer, buffer_size=1<<16):
        self.producer = producer
        self.buffer = b''
        self.buffer_size = buffer_size
        self.delay = 0.1

    def more (self):
        while len(self.buffer) < self.buffer_size:
            data = self.producer.more()
            if data is NOT_DONE_YET:
                return NOT_DONE_YET
            if data:
                try:
                    self.buffer = self.buffer + data
                except TypeError:
                    self.buffer = as_bytes(self.buffer) + as_bytes(data)
            else:
                break
        r = self.buffer
        self.buffer = b''
        return r


class deferring_hooked_producer:
    """
    A producer that will call  when it empties,.
    with an argument of the number of bytes produced.  Useful
    for logging/instrumentation purposes.
    """

    def __init__ (self, producer, function):
        self.producer = producer
        self.function = function
        self.bytes = 0
        self.delay = 0.1

    def more (self):
        if self.producer:
            result = self.producer.more()
            if result is NOT_DONE_YET:
                return NOT_DONE_YET
            if not result:
                self.producer = None
                self.function (self.bytes)
            else:
                self.bytes += len(result)
            return result
        else:
            return b''


class deferring_http_request(http_server.http_request):
    """ The medusa http_request class uses the default set of producers in
    medusa.producers.  We can't use these because they don't know anything
    about deferred responses, so we override various methods here.  This was
    added to support tail -f like behavior on the logtail handler """

    def done(self, *arg, **kw):

        """ I didn't want to override this, but there's no way around
        it in order to support deferreds - CM

        finalize this transaction - send output to the http channel"""

        # ----------------------------------------
        # persistent connection management
        # ----------------------------------------

        #  --- BUCKLE UP! ----

        connection = http_server.get_header(http_server.CONNECTION,self.header)
        connection = connection.lower()

        close_it = 0
        wrap_in_chunking = 0
        globbing = 1

        if self.version == '1.0':
            if connection == 'keep-alive':
                if not 'Content-Length' in self:
                    close_it = 1
                else:
                    self['Connection'] = 'Keep-Alive'
            else:
                close_it = 1
        elif self.version == '1.1':
            if connection == 'close':
                close_it = 1
            elif not 'Content-Length' in self:
                if 'Transfer-Encoding' in self:
                    if not self['Transfer-Encoding'] == 'chunked':
                        close_it = 1
                elif self.use_chunked:
                    self['Transfer-Encoding'] = 'chunked'
                    wrap_in_chunking = 1
                    # globbing slows down tail -f output, so only use it if
                    # we're not in chunked mode
                    globbing = 0
                else:
                    close_it = 1
        elif self.version is None:
            # Although we don't *really* support http/0.9 (because
            # we'd have to use \r\n as a terminator, and it would just
            # yuck up a lot of stuff) it's very common for developers
            # to not want to type a version number when using telnet
            # to debug a server.
            close_it = 1

        outgoing_header = producers.simple_producer(self.build_reply_header())

        if close_it:
            self['Connection'] = 'close'

        if wrap_in_chunking:
            outgoing_producer = deferring_chunked_producer(
                    deferring_composite_producer(self.outgoing)
                    )
            # prepend the header
            outgoing_producer = deferring_composite_producer(
                [outgoing_header, outgoing_producer]
                )
        else:
            # prepend the header
            self.outgoing.insert(0, outgoing_header)
            outgoing_producer = deferring_composite_producer(self.outgoing)

        # hook logging into the output
        outgoing_producer = deferring_hooked_producer(outgoing_producer,
                                                      self.log)

        if globbing:
            outgoing_producer = deferring_globbing_producer(outgoing_producer)

        self.channel.push_with_producer(outgoing_producer)

        self.channel.current_request = None

        if close_it:
            self.channel.close_when_done()

    def log (self, bytes):
        """ We need to override this because UNIX domain sockets return
        an empty string for the addr rather than a (host, port) combination """
        if self.channel.addr:
            host = self.channel.addr[0]
            port = self.channel.addr[1]
        else:
            host = 'localhost'
            port = 0
        self.channel.server.logger.log (
                host,
                '%d - - [%s] "%s" %d %d\n' % (
                        port,
                        self.log_date_string (time.time()),
                        self.request,
                        self.reply_code,
                        bytes
                        )
                )

    def cgi_environment(self):
        env = {}

        # map some request headers to environment variables
        # (those that don't start with 'HTTP_')
        header2env= {'content-length'    : 'CONTENT_LENGTH',
                     'content-type'      : 'CONTENT_TYPE',
                     'connection'        : 'CONNECTION_TYPE'}

        workdir = os.getcwd()
        (path, params, query, fragment) = self.split_uri()

        if params:
            path = path + params # undo medusa bug!

        while path and path[0] == '/':
            path = path[1:]
        if '%' in path:
            path = http_server.unquote(path)
        if query:
            query = query[1:]

        server = self.channel.server
        env['REQUEST_METHOD'] = self.command.upper()
        env['SERVER_PORT'] = str(server.port)
        env['SERVER_NAME'] = server.server_name
        env['SERVER_SOFTWARE'] = server.SERVER_IDENT
        env['SERVER_PROTOCOL'] = "HTTP/" + self.version
        env['channel.creation_time'] = self.channel.creation_time
        env['SCRIPT_NAME'] = ''
        env['PATH_INFO'] = '/' + path
        env['PATH_TRANSLATED'] = os.path.normpath(os.path.join(
                workdir, env['PATH_INFO']))
        if query:
            env['QUERY_STRING'] = query
        env['GATEWAY_INTERFACE'] = 'CGI/1.1'
        if self.channel.addr:
            env['REMOTE_ADDR'] = self.channel.addr[0]
        else:
            env['REMOTE_ADDR'] = '127.0.0.1'

        for header in self.header:
            key, value = header.split(":", 1)
            key = key.lower()
            value = value.strip()
            if key in header2env and value:
                env[header2env[key]] = value
            else:
                key = 'HTTP_%s' % ("_".join(key.split("-"))).upper()
                if value and key not in env:
                    env[key] = value
        return env
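The HTTP_* fallback in the loop above upper-cases the header name and swaps hyphens for underscores, per the CGI convention. A minimal standalone sketch of that transform (the helper name is illustrative, not part of supervisor):

```python
def header_to_env_key(name):
    # mirror cgi_environment's fallback:
    # 'X-Forwarded-For' -> 'HTTP_X_FORWARDED_FOR'
    return 'HTTP_%s' % ("_".join(name.split("-"))).upper()

assert header_to_env_key('X-Forwarded-For') == 'HTTP_X_FORWARDED_FOR'
assert header_to_env_key('accept') == 'HTTP_ACCEPT'
```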

    def get_server_url(self):
        """ Functionality that medusa's http request doesn't have; set an
        attribute named 'server_url' on the request based on the Host: header
        """
        default_port={'http': '80', 'https': '443'}
        environ = self.cgi_environment()
        if (environ.get('HTTPS') in ('on', 'ON') or
            environ.get('SERVER_PORT_SECURE') == "1"):
            # XXX this will currently never be true
            protocol = 'https'
        else:
            protocol = 'http'

        if 'HTTP_HOST' in environ:
            host = environ['HTTP_HOST'].strip()
            hostname, port = urllib.splitport(host)
        else:
            hostname = environ['SERVER_NAME'].strip()
            port = environ['SERVER_PORT']

        if port is None or default_port[protocol] == port:
            host = hostname
        else:
            host = hostname + ':' + port
        server_url = '%s://%s' % (protocol, host)
        if server_url[-1:]=='/':
            server_url=server_url[:-1]
        return server_url
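The tail end of get_server_url above drops the port only when it matches the scheme's default. A sketch of that logic in isolation, assuming string ports as produced by splitport and SERVER_PORT (the function name is illustrative):

```python
def build_server_url(hostname, port, protocol='http'):
    # 'port' is a string or None, matching the values get_server_url
    # derives from the Host: header or the server environment
    default_port = {'http': '80', 'https': '443'}
    if port is None or port == default_port[protocol]:
        host = hostname
    else:
        host = hostname + ':' + port
    return '%s://%s' % (protocol, host)

assert build_server_url('example.com', '80') == 'http://example.com'
assert build_server_url('example.com', '9001') == 'http://example.com:9001'
assert build_server_url('example.com', None, 'https') == 'https://example.com'
```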

class deferring_http_channel(http_server.http_channel):

    # use a 4096-byte buffer size instead of the default 65536-byte buffer in
    # order to spew tail -f output faster (speculative)
    ac_out_buffer_size = 4096

    delay = 0 # seconds
    last_writable_check = 0 # timestamp of last writable check; 0 if never

    def writable(self, now=None):
        if now is None:  # for unit tests
            now = time.time()

        if self.delay:
            # we called a deferred producer via this channel (see refill_buffer)
            elapsed = now - self.last_writable_check
            if (elapsed > self.delay) or (elapsed < 0):
                self.last_writable_check = now
                return True
            else:
                return False

        return http_server.http_channel.writable(self)
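The writable override above rate-limits polling while a deferred producer is pending: the channel only reports writable once per `delay` seconds. The timing logic can be sketched in isolation (the class name here is illustrative, not part of supervisor):

```python
class Throttle:
    # mirrors the delay branch of deferring_http_channel.writable:
    # report True at most once per 'delay' seconds; the (elapsed < 0)
    # check guards against the clock moving backwards
    def __init__(self, delay):
        self.delay = delay
        self.last = 0  # timestamp of last True result; 0 if never

    def writable(self, now):
        elapsed = now - self.last
        if (elapsed > self.delay) or (elapsed < 0):
            self.last = now
            return True
        return False

t = Throttle(0.1)
assert t.writable(now=100.0)       # first check always passes
assert not t.writable(now=100.05)  # within the delay window
assert t.writable(now=100.2)       # window elapsed
```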

    def refill_buffer (self):
        """ Implement deferreds """
        while 1:
            if len(self.producer_fifo):
                p = self.producer_fifo.first()
                # a 'None' in the producer fifo is a sentinel,
                # telling us to close the channel.
                if p is None:
                    if not self.ac_out_buffer:
                        self.producer_fifo.pop()
                        self.close()
                    return
                elif isinstance(p, bytes):
                    self.producer_fifo.pop()
                    self.ac_out_buffer += p
                    return

                data = p.more()

                if data is NOT_DONE_YET:
                    self.delay = p.delay
                    return

                elif data:
                    self.ac_out_buffer = self.ac_out_buffer + data
                    self.delay = False
                    return
                else:
                    self.producer_fifo.pop()
            else:
                return

    def found_terminator (self):
        """ We only override this to use 'deferring_http_request' class
        instead of the normal http_request class; it sucks to need to override
        this """
        if self.current_request:
            self.current_request.found_terminator()
        else:
            # we convert the header to text to facilitate processing.
            # some of the underlying APIs (such as splitquery)
            # expect text rather than bytes.
            header = as_string(self.in_buffer)
            self.in_buffer = b''
            lines = header.split('\r\n')

            # --------------------------------------------------
            # crack the request header
            # --------------------------------------------------

            while lines and not lines[0]:
                # as per the suggestion of http-1.1 section 4.1 (and
                # Eric Parker ), ignore leading blank lines (buggy
                # browsers tack them onto the end of POST requests)
                lines = lines[1:]

            if not lines:
                self.close_when_done()
                return

            request = lines[0]

            command, uri, version = http_server.crack_request (request)
            header = http_server.join_headers (lines[1:])

            # unquote path if necessary (thanks to Skip Montanaro for pointing
            # out that we must unquote in piecemeal fashion).
            rpath, rquery = http_server.splitquery(uri)
            if '%' in rpath:
                if rquery:
                    uri = http_server.unquote(rpath) + '?' + rquery
                else:
                    uri = http_server.unquote(rpath)

            r = deferring_http_request(self, request, command, uri, version,
                                       header)
            self.request_counter.increment()
            self.server.total_requests.increment()

            if command is None:
                self.log_info ('Bad HTTP request: %s' % repr(request), 'error')
                r.error (400)
                return

            # --------------------------------------------------
            # handler selection and dispatch
            # --------------------------------------------------
            for h in self.server.handlers:
                if h.match (r):
                    try:
                        self.current_request = r
                        # This isn't used anywhere.
                        # r.handler = h # CYCLE
                        h.handle_request (r)
                    except:
                        self.server.exceptions.increment()
                        (file, fun, line), t, v, tbinfo = \
                               asyncore.compact_traceback()
                        self.server.log_info(
                            'Server Error: %s, %s: file: %s line: %s' %
                            (t,v,file,line),
                            'error')
                        try:
                            r.error (500)
                        except:
                            pass
                    return

            # no handlers, so complain
            r.error (404)

class supervisor_http_server(http_server.http_server):
    channel_class = deferring_http_channel
    ip = None

    def prebind(self, sock, logger_object):
        """ Override __init__ to do logger setup earlier so it can
        go to our logger object instead of stdout """
        from supervisor.medusa import logger

        if not logger_object:
            logger_object = logger.file_logger(sys.stdout)

        logger_object = logger.unresolving_logger(logger_object)
        self.logger = logger_object

        asyncore.dispatcher.__init__ (self)
        self.set_socket(sock)

        self.handlers = []

        sock.setblocking(0)
        self.set_reuse_addr()

    def postbind(self):
        from supervisor.medusa.counter import counter
        from supervisor.medusa.http_server import VERSION_STRING

        self.listen(1024)

        self.total_clients = counter()
        self.total_requests = counter()
        self.exceptions = counter()
        self.bytes_out = counter()
        self.bytes_in  = counter()

        self.log_info (
                'Medusa (V%s) started at %s'
                '\n\tHostname: %s'
                '\n\tPort:%s'
                '\n' % (
                        VERSION_STRING,
                        time.ctime(time.time()),
                        self.server_name,
                        self.port,
                        )
                )

    def log_info(self, message, type='info'):
        ip = ''
        if getattr(self, 'ip', None) is not None:
            ip = self.ip
        self.logger.log(ip, message)

class supervisor_af_inet_http_server(supervisor_http_server):
    """ AF_INET version of supervisor HTTP server """

    def __init__(self, ip, port, logger_object):
        self.ip = ip
        self.port = port
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.prebind(sock, logger_object)
        self.bind((ip, port))

        if not ip:
            self.log_info('Computing default hostname', 'warning')
            hostname = socket.gethostname()
            try:
                ip = socket.gethostbyname(hostname)
            except socket.error:
                raise ValueError(
                    'Could not determine IP address for hostname %s, '
                    'please try setting an explicit IP address in the "port" '
                    'setting of your [inet_http_server] section.  For example, '
                    'instead of "port = 9001", try "port = 127.0.0.1:9001."'
                    % hostname)
        try:
            self.server_name = socket.gethostbyaddr (ip)[0]
        except socket.error:
            self.log_info('Cannot do reverse lookup', 'warning')
            self.server_name = ip       # use the IP address as the "hostname"

        self.postbind()

class supervisor_af_unix_http_server(supervisor_http_server):
    """ AF_UNIX version of supervisor HTTP server """

    def __init__(self, socketname, sockchmod, sockchown, logger_object):
        self.ip = socketname
        self.port = socketname

        # XXX this is insecure.  We really should do something like
        # http://developer.apple.com/samplecode/CFLocalServer/listing6.html
        # (see also http://developer.apple.com/technotes/tn2005/tn2083.html#SECUNIXDOMAINSOCKETS)
        # but it would be very inconvenient for the user to need to get all
        # the directory setup right.

        tempname = "%s.%d" % (socketname, os.getpid())

        try:
            os.unlink(tempname)
        except OSError:
            pass

        while 1:
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                sock.bind(tempname)
                os.chmod(tempname, sockchmod)
                try:
                    # hard link
                    os.link(tempname, socketname)
                except OSError:
                    # Lock contention, or stale socket.
                    used = self.checkused(socketname)
                    if used:
                        # cooperate with 'openhttpserver' in supervisord
                        raise socket.error(errno.EADDRINUSE)

                    # Stale socket -- delete, sleep, and try again.
                    msg = "Unlinking stale socket %s\n" % socketname
                    sys.stderr.write(msg)
                    try:
                        os.unlink(socketname)
                    except:
                        pass
                    sock.close()
                    time.sleep(.3)
                    continue
                else:
                    try:
                        os.chown(socketname, sockchown[0], sockchown[1])
                    except OSError as why:
                        if why.args[0] == errno.EPERM:
                            msg = ('Not permitted to chown %s to uid/gid %s; '
                                   'adjust "sockchown" value in config file or '
                                   'on command line to values that the '
                                   'current user (%s) can successfully chown')
                            raise ValueError(msg % (socketname,
                                                    repr(sockchown),
                                                    pwd.getpwuid(
                                                        os.geteuid())[0],
                                                    ),
                                             )
                        else:
                            raise
                    self.prebind(sock, logger_object)
                    break

            finally:
                try:
                    os.unlink(tempname)
                except OSError:
                    pass

        self.server_name = ''
        self.postbind()

    def checkused(self, socketname):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(socketname)
            s.send(as_bytes("GET / HTTP/1.0\r\n\r\n"))
            s.recv(1)
            s.close()
        except socket.error:
            return False
        else:
            return True
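checkused above decides whether a live server still owns the socket path by attempting to talk to it. A connect-only sketch of the probe (the real method also sends a minimal GET and reads one byte before concluding the socket is live; POSIX-only, names illustrative):

```python
import os
import socket
import tempfile

def probe(socketname):
    # a successful connect means something is listening on the path;
    # ECONNREFUSED or ENOENT means the socket file is stale or absent
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(socketname)
    except socket.error:
        return False
    else:
        s.close()
        return True

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'supervisor.sock')
assert not probe(path)             # no socket file yet
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(1)
assert probe(path)                 # a live listener owns the path
srv.close()
```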

class tail_f_producer:
    def __init__(self, request, filename, head):
        self.request = weakref.ref(request)
        self.filename = filename
        self.delay = 0.1

        self._open()
        sz = self._fsize()
        if sz >= head:
            self.sz = sz - head

    def __del__(self):
        self._close()

    def more(self):
        self._follow()
        try:
            newsz = self._fsize()
        except (OSError, ValueError):
            # file descriptor was closed
            return b''
        bytes_added = newsz - self.sz
        if bytes_added < 0:
            self.sz = 0
            return b"==> File truncated <==\n"
        if bytes_added > 0:
            self.file.seek(-bytes_added, 2)
            bytes = self.file.read(bytes_added)
            self.sz = newsz
            return bytes
        return NOT_DONE_YET

    def _open(self):
        self.file = open(self.filename, 'rb')
        self.ino = os.fstat(self.file.fileno())[stat.ST_INO]
        self.sz = 0

    def _close(self):
        self.file.close()

    def _follow(self):
        try:
            ino = os.stat(self.filename)[stat.ST_INO]
        except (OSError, ValueError):
            # file was unlinked
            return

        if self.ino != ino: # log rotation occurred
            self._close()
            self._open()

    def _fsize(self):
        return os.fstat(self.file.fileno())[stat.ST_SIZE]
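tail_f_producer._follow above detects log rotation by comparing the inode of the path against the inode of the descriptor it holds open. The same comparison can be demonstrated standalone (POSIX-only sketch; names are illustrative):

```python
import os
import stat
import tempfile

def rotated(path, open_file):
    # the inode check _follow relies on: a mismatch between the path's
    # inode and the open descriptor's inode means the log file was
    # replaced underneath us and must be reopened
    return os.stat(path)[stat.ST_INO] != os.fstat(open_file.fileno())[stat.ST_INO]

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'app.log')
with open(path, 'wb') as w:
    w.write(b'one\n')
reader = open(path, 'rb')
assert not rotated(path, reader)
os.rename(path, path + '.1')       # simulate logrotate's rename step
with open(path, 'wb') as w:
    w.write(b'two\n')
assert rotated(path, reader)       # inode changed; reopen required
reader.close()
```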

class logtail_handler:
    IDENT = 'Logtail HTTP Request Handler'
    path = '/logtail'

    def __init__(self, supervisord):
        self.supervisord = supervisord

    def match(self, request):
        return request.uri.startswith(self.path)

    def handle_request(self, request):
        if request.command != 'GET':
            request.error (400) # bad request
            return

        path, params, query, fragment = request.split_uri()

        if '%' in path:
            path = http_server.unquote(path)

        # strip off all leading slashes
        while path and path[0] == '/':
            path = path[1:]

        path, process_name_and_channel = path.split('/', 1)

        try:
            process_name, channel = process_name_and_channel.split('/', 1)
        except ValueError:
            # no channel specified, default channel to stdout
            process_name = process_name_and_channel
            channel = 'stdout'

        from supervisor.options import split_namespec
        group_name, process_name = split_namespec(process_name)

        group = self.supervisord.process_groups.get(group_name)
        if group is None:
            request.error(404) # not found
            return

        process = group.processes.get(process_name)
        if process is None:
            request.error(404) # not found
            return

        logfile = getattr(process.config, '%s_logfile' % channel, None)

        if logfile is None or not os.path.exists(logfile):
            # we return 404 because no logfile is a temporary condition.
            # if the process has never been started, no logfile will exist
            # on disk.  a logfile of None is also a temporary condition,
            # since the config file can be reloaded.
            request.error(404) # not found
            return

        mtime = os.stat(logfile)[stat.ST_MTIME]
        request['Last-Modified'] = http_date.build_http_date(mtime)
        request['Content-Type'] = 'text/plain;charset=utf-8'
        # the lack of a Content-Length header makes the outputter
        # send a 'Transfer-Encoding: chunked' response
        request['X-Accel-Buffering'] = 'no'
        # tell reverse proxy server (e.g., nginx) to disable proxy buffering
        # (see also http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering)

        request.push(tail_f_producer(request, logfile, 1024))

        request.done()

class mainlogtail_handler:
    IDENT = 'Main Logtail HTTP Request Handler'
    path = '/mainlogtail'

    def __init__(self, supervisord):
        self.supervisord = supervisord

    def match(self, request):
        return request.uri.startswith(self.path)

    def handle_request(self, request):
        if request.command != 'GET':
            request.error (400) # bad request
            return

        logfile = self.supervisord.options.logfile

        if logfile is None or not os.path.exists(logfile):
            # we return 404 because no logfile is a temporary condition.
            # even if a log file of None is configured, the config file
            # may be reloaded, and the new config may have a logfile.
            request.error(404) # not found
            return

        mtime = os.stat(logfile)[stat.ST_MTIME]
        request['Last-Modified'] = http_date.build_http_date(mtime)
        request['Content-Type'] = 'text/plain;charset=utf-8'
        # the lack of a Content-Length header makes the outputter
        # send a 'Transfer-Encoding: chunked' response

        request.push(tail_f_producer(request, logfile, 1024))

        request.done()

def make_http_servers(options, supervisord):
    servers = []
    wrapper = LogWrapper(options.logger)

    for config in options.server_configs:
        family = config['family']

        if family == socket.AF_INET:
            host, port = config['host'], config['port']
            hs = supervisor_af_inet_http_server(host, port,
                                                logger_object=wrapper)
        elif family == socket.AF_UNIX:
            socketname = config['file']
            sockchmod = config['chmod']
            sockchown = config['chown']
            hs = supervisor_af_unix_http_server(socketname,sockchmod, sockchown,
                                                logger_object=wrapper)
        else:
            raise ValueError('Cannot determine socket type %r' % family)

        from supervisor.xmlrpc import supervisor_xmlrpc_handler
        from supervisor.xmlrpc import SystemNamespaceRPCInterface
        from supervisor.web import supervisor_ui_handler

        subinterfaces = []
        for name, factory, d in options.rpcinterface_factories:
            try:
                inst = factory(supervisord, **d)
            except:
                tb = traceback.format_exc()
                options.logger.warn(tb)
                raise ValueError('Could not make %s rpc interface' % name)
            subinterfaces.append((name, inst))
            options.logger.info('RPC interface %r initialized' % name)

        subinterfaces.append(('system',
                              SystemNamespaceRPCInterface(subinterfaces)))
        xmlrpchandler = supervisor_xmlrpc_handler(supervisord, subinterfaces)
        tailhandler = logtail_handler(supervisord)
        maintailhandler = mainlogtail_handler(supervisord)
        uihandler = supervisor_ui_handler(supervisord)
        here = os.path.abspath(os.path.dirname(__file__))
        templatedir = os.path.join(here, 'ui')
        filesystem = filesys.os_filesystem(templatedir)
        defaulthandler = default_handler.default_handler(filesystem)

        username = config['username']
        password = config['password']

        if username:
            # wrap the xmlrpc handler and tailhandler in an authentication
            # handler
            users = {username:password}
            xmlrpchandler = supervisor_auth_handler(users, xmlrpchandler)
            tailhandler = supervisor_auth_handler(users, tailhandler)
            maintailhandler = supervisor_auth_handler(users, maintailhandler)
            uihandler = supervisor_auth_handler(users, uihandler)
            defaulthandler = supervisor_auth_handler(users, defaulthandler)
        else:
            options.logger.critical(
                'Server %r running without any HTTP '
                'authentication checking' % config['section'])
        # defaulthandler must be consulted last as its match method matches
        # everything, so it's first here (indicating last checked)
        hs.install_handler(defaulthandler)
        hs.install_handler(uihandler)
        hs.install_handler(maintailhandler)
        hs.install_handler(tailhandler)
        hs.install_handler(xmlrpchandler) # last for speed (first checked)
        servers.append((config, hs))

    return servers

class LogWrapper:
    '''Receives log messages from the Medusa servers and forwards
    them to the Supervisor logger'''
    def __init__(self, logger):
        self.logger = logger

    def log(self, msg):
        '''Medusa servers call this method.  There is no log level so
        we have to sniff the message.  We want "Server Error" messages
        from medusa.http_server logged as errors at least.'''
        if msg.endswith('\n'):
            msg = msg[:-1]
        if 'error' in msg.lower():
            self.logger.error(msg)
        else:
            self.logger.trace(msg)
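Because medusa's loggers carry no level, LogWrapper.log above has to sniff the message text. That routing can be exercised against a stand-in logger (both names below are hypothetical, for illustration only):

```python
class ListLogger:
    # stand-in for the supervisor logger, collecting calls for inspection
    def __init__(self):
        self.errors = []
        self.traces = []
    def error(self, msg):
        self.errors.append(msg)
    def trace(self, msg):
        self.traces.append(msg)

def route(logger, msg):
    # mirrors LogWrapper.log: strip one trailing newline, then sniff
    # the text for 'error' since there is no log level to inspect
    if msg.endswith('\n'):
        msg = msg[:-1]
    if 'error' in msg.lower():
        logger.error(msg)
    else:
        logger.trace(msg)

log = ListLogger()
route(log, 'Server Error: something broke\n')
route(log, 'GET /logtail/foo HTTP/1.1\n')
assert log.errors == ['Server Error: something broke']
assert log.traces == ['GET /logtail/foo HTTP/1.1']
```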

class encrypted_dictionary_authorizer:
    def __init__ (self, dict):
        self.dict = dict

    def authorize(self, auth_info):
        username, password = auth_info
        if username in self.dict:
            stored_password = self.dict[username]
            if stored_password.startswith('{SHA}'):
                password_hash = sha1(as_bytes(password)).hexdigest()
                return stored_password[5:] == password_hash
            else:
                return stored_password == password
        else:
            return False

class supervisor_auth_handler(auth_handler):
    def __init__(self, dict, handler, realm='default'):
        auth_handler.__init__(self, dict, handler, realm)
        # override the authorizer with one that knows about SHA hashes too
        self.authorizer = encrypted_dictionary_authorizer(dict)
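The authorizer above accepts either plaintext passwords or values prefixed with '{SHA}' followed by a hex SHA-1 digest. A sketch of producing such an entry and checking it the same way (helper names are illustrative, not supervisor APIs):

```python
from hashlib import sha1

def make_sha_entry(password):
    # '{SHA}' + hex SHA-1 digest: the stored form the authorizer accepts
    return '{SHA}' + sha1(password.encode('utf-8')).hexdigest()

def authorize(stored, presented):
    # mirror encrypted_dictionary_authorizer: hashed compare when the
    # stored value carries the '{SHA}' prefix, plain compare otherwise
    if stored.startswith('{SHA}'):
        return stored[5:] == sha1(presented.encode('utf-8')).hexdigest()
    return stored == presented

entry = make_sha_entry('thepassword')
assert authorize(entry, 'thepassword')
assert not authorize(entry, 'wrong')
assert authorize('plaintext', 'plaintext')
```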
# supervisor-4.2.5/supervisor/http_client.py

# this code based on Daniel Krech's RDFLib HTTP client code (see rdflib.net)

import sys
import socket

from supervisor.compat import as_bytes
from supervisor.compat import as_string
from supervisor.compat import encodestring
from supervisor.compat import PY2
from supervisor.compat import urlparse
from supervisor.medusa import asynchat_25 as asynchat

CR = b'\x0d'
LF = b'\x0a'
CRLF = CR+LF

class Listener(object):

    def status(self, url, status):
        pass

    def error(self, url, error):
        sys.stderr.write("%s %s\n" % (url, error))

    def response_header(self, url, name, value):
        pass

    def done(self, url):
        pass

    def feed(self, url, data):
        try:
            sdata = as_string(data)
        except UnicodeDecodeError:
            sdata = 'Undecodable: %r' % data
        # We've got Unicode data in sdata now, but writing to stdout sometimes
        # fails - see issue #1231.
        try:
            sys.stdout.write(sdata)
        except UnicodeEncodeError:
            if PY2:
                # This might seem like The Wrong Thing To Do (writing bytes
                # rather than text to an output stream), but it seems to work
                # OK for Python 2.7.
                sys.stdout.write(data)
            else:
                s = ('Unable to write Unicode to stdout because it has '
                     'encoding %s' % sys.stdout.encoding)
                raise ValueError(s)
        sys.stdout.flush()

    def close(self, url):
        pass

class HTTPHandler(asynchat.async_chat):
    def __init__(
        self,
        listener,
        username='',
        password=None,
        conn=None,
        map=None
        ):
        asynchat.async_chat.__init__(self, conn, map)
        self.listener = listener
        self.user_agent = 'Supervisor HTTP Client'
        self.buffer = b''
        self.set_terminator(CRLF)
        self.connected = 0
        self.part = self.status_line
        self.chunk_size = 0
        self.chunk_read = 0
        self.length_read = 0
        self.length = 0
        self.encoding = None
        self.username = username
        self.password = password
        self.url = None
        self.error_handled = False

    def get(self, serverurl, path=''):
        if self.url is not None:
            raise AssertionError('Already doing a get')
        self.url = serverurl + path
        scheme, host, path_ignored, params, query, fragment = urlparse.urlparse(
            self.url)
        if scheme not in ("http", "unix"):
            raise NotImplementedError
        self.host = host
        if ":" in host:
            hostname, port = host.split(":", 1)
            port = int(port)
        else:
            hostname = host
            port = 80

        self.path = path
        self.port = port

        if scheme == "http":
            ip = hostname
            self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
            self.connect((ip, self.port))
        elif scheme == "unix":
            socketname = serverurl[7:]
            self.create_socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.connect(socketname)

    def close(self):
        self.listener.close(self.url)
        self.connected = 0
        self.del_channel()
        self.socket.close()
        self.url = "CLOSED"

    def header(self, name, value):
        self.push('%s: %s' % (name, value))
        self.push(CRLF)

    def handle_error(self):
        if self.error_handled:
            return
        if 1 or self.connected:
            t,v,tb = sys.exc_info()
            msg = 'Cannot connect, error: %s (%s)' % (t, v)
            self.listener.error(self.url, msg)
            self.part = self.ignore
            self.close()
            self.error_handled = True
            del t
            del v
            del tb

    def handle_connect(self):
        self.connected = 1
        method = "GET"
        version = "HTTP/1.1"
        self.push("%s %s %s" % (method, self.path, version))
        self.push(CRLF)
        self.header("Host", self.host)

        self.header('Accept-Encoding', 'chunked')
        self.header('Accept', '*/*')
        self.header('User-agent', self.user_agent)
        if self.password:
            auth = '%s:%s' % (self.username, self.password)
            auth = as_string(encodestring(as_bytes(auth))).strip()
            self.header('Authorization', 'Basic %s' % auth)
        self.push(CRLF)
        self.push(CRLF)


    def feed(self, data):
        self.listener.feed(self.url, data)

    def collect_incoming_data(self, bytes):
        self.buffer = self.buffer + bytes
        if self.part==self.body:
            self.feed(self.buffer)
            self.buffer = b''

    def found_terminator(self):
        self.part()
        self.buffer = b''

    def ignore(self):
        self.buffer = b''

    def status_line(self):
        line = self.buffer

        version, status, reason = line.split(None, 2)
        if not version.startswith(b'HTTP/'):
            raise ValueError(line)
        status = int(status)

        self.listener.status(self.url, status)

        if status == 200:
            self.part = self.headers
        else:
            self.part = self.ignore
            msg = 'Cannot read, status code %s' % status
            self.listener.error(self.url, msg)
            self.close()
        return version, status, reason

    def headers(self):
        line = self.buffer
        if not line:
            if self.encoding == b'chunked':
                self.part = self.chunked_size
            else:
                self.part = self.body
                self.set_terminator(self.length)
        else:
            name, value = line.split(b':', 1)
            if name and value:
                name = name.lower()
                value = value.strip()
                if name == b'transfer-encoding':
                    self.encoding = value
                elif name == b'content-length':
                    self.length = int(value)
                self.response_header(name, value)

    def response_header(self, name, value):
        self.listener.response_header(self.url, name, value)

    def body(self):
        self.done()
        self.close()

    def done(self):
        self.listener.done(self.url)

    def chunked_size(self):
        line = self.buffer
        if not line:
            return
        chunk_size = int(line.split()[0], 16)
        if chunk_size==0:
            self.part = self.trailer
        else:
            self.set_terminator(chunk_size)
            self.part = self.chunked_body
        self.length += chunk_size

    def chunked_body(self):
        line = self.buffer
        self.set_terminator(CRLF)
        self.part = self.chunked_size
        self.feed(line)

    def trailer(self):
        # http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.6.1
        # trailer        = *(entity-header CRLF)
        line = self.buffer
        if line == CRLF:
            self.done()
            self.close()
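chunked_size above parses each chunk-size line of a chunked response body. The size is hexadecimal; splitting on whitespace and parsing the first token is the same move HTTPHandler makes (the function name is illustrative):

```python
def parse_chunk_size(line):
    # the chunk-size line is hexadecimal; a size of 0 marks the final
    # chunk, after which only an optional trailer and CRLF remain
    return int(line.split()[0], 16)

assert parse_chunk_size(b'1a') == 26
assert parse_chunk_size(b'0') == 0    # last-chunk marker
```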
# supervisor-4.2.5/supervisor/loggers.py

"""
Logger implementation loosely modeled on PEP 282.  We don't use the
PEP 282 logger implementation in the stdlib ('logging') because it's
idiosyncratic and a bit slow for our purposes (we don't use threads).
"""

# This module must not depend on any non-stdlib modules to
# avoid circular import problems

import os
import errno
import sys
import time
import traceback

from supervisor.compat import syslog
from supervisor.compat import long
from supervisor.compat import is_text_stream
from supervisor.compat import as_string

class LevelsByName:
    CRIT = 50   # messages that probably require immediate user attention
    ERRO = 40   # messages that indicate a potentially ignorable error condition
    WARN = 30   # messages that indicate issues which aren't errors
    INFO = 20   # normal informational output
    DEBG = 10   # messages useful for users trying to debug configurations
    TRAC = 5    # messages useful to developers trying to debug plugins
    BLAT = 3    # messages useful for developers trying to debug supervisor

class LevelsByDescription:
    critical = LevelsByName.CRIT
    error = LevelsByName.ERRO
    warn = LevelsByName.WARN
    info = LevelsByName.INFO
    debug = LevelsByName.DEBG
    trace = LevelsByName.TRAC
    blather = LevelsByName.BLAT

def _levelNumbers():
    bynumber = {}
    for name, number in LevelsByName.__dict__.items():
        if not name.startswith('_'):
            bynumber[number] = name
    return bynumber

LOG_LEVELS_BY_NUM = _levelNumbers()

def getLevelNumByDescription(description):
    num = getattr(LevelsByDescription, description, None)
    return num
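A minimal standalone sketch of the two lookup directions defined above: description string to number via ``LevelsByDescription``, and number back to name via the inverted ``LevelsByName`` dict (classes reduced here so the example runs on its own).

```python
class LevelsByName:
    CRIT = 50; ERRO = 40; WARN = 30; INFO = 20; DEBG = 10; TRAC = 5; BLAT = 3

class LevelsByDescription:
    warn = LevelsByName.WARN

# Invert name -> number into number -> name, skipping dunder attributes.
LOG_LEVELS_BY_NUM = {number: name for name, number in LevelsByName.__dict__.items()
                     if not name.startswith('_')}

print(getattr(LevelsByDescription, 'warn', None))  # 30
print(LOG_LEVELS_BY_NUM[30])                       # WARN
```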

class Handler:
    fmt = '%(message)s'
    level = LevelsByName.INFO

    def __init__(self, stream=None):
        self.stream = stream
        self.closed = False

    def setFormat(self, fmt):
        self.fmt = fmt

    def setLevel(self, level):
        self.level = level

    def flush(self):
        try:
            self.stream.flush()
        except IOError as why:
            # if supervisor output is piped, EPIPE can be raised at exit
            if why.args[0] != errno.EPIPE:
                raise

    def close(self):
        if not self.closed:
            if hasattr(self.stream, 'fileno'):
                try:
                    fd = self.stream.fileno()
                except IOError:
                    # on python 3, io.IOBase objects always have fileno()
                    # but calling it may raise io.UnsupportedOperation
                    pass
                else:
                    if fd < 3: # don't ever close stdout or stderr
                        return
            self.stream.close()
            self.closed = True

    def emit(self, record):
        try:
            binary = (self.fmt == '%(message)s' and
                      isinstance(record.msg, bytes) and
                      (not record.kw or record.kw == {'exc_info': None}))
            binary_stream = not is_text_stream(self.stream)
            if binary:
                msg = record.msg
            else:
                msg = self.fmt % record.asdict()
                if binary_stream:
                    msg = msg.encode('utf-8')
            try:
                self.stream.write(msg)
            except UnicodeError:
                # TODO sort out later
                # this only occurs because of a test stream type
                # which deliberately raises an exception the first
                # time it's called. So just do it again
                self.stream.write(msg)
            self.flush()
        except:
            self.handleError()

    def handleError(self):
        ei = sys.exc_info()
        traceback.print_exception(ei[0], ei[1], ei[2], None, sys.stderr)
        del ei

class StreamHandler(Handler):
    def __init__(self, strm=None):
        Handler.__init__(self, strm)

    def remove(self):
        if hasattr(self.stream, 'clear'):
            self.stream.clear()

    def reopen(self):
        pass

class BoundIO:
    def __init__(self, maxbytes, buf=b''):
        self.maxbytes = maxbytes
        self.buf = buf

    def flush(self):
        pass

    def close(self):
        self.clear()

    def write(self, b):
        blen = len(b)
        if len(self.buf) + blen > self.maxbytes:
            self.buf = self.buf[blen:]
        self.buf += b

    def getvalue(self):
        return self.buf

    def clear(self):
        self.buf = b''
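``BoundIO.write()`` above keeps the buffer bounded by dropping the oldest bytes from the front whenever an append would exceed ``maxbytes``. A self-contained demonstration (class reproduced verbatim in the parts exercised):

```python
class BoundIO:
    def __init__(self, maxbytes, buf=b''):
        self.maxbytes = maxbytes
        self.buf = buf
    def write(self, b):
        blen = len(b)
        if len(self.buf) + blen > self.maxbytes:
            # Drop as many old bytes as we are about to append.
            self.buf = self.buf[blen:]
        self.buf += b
    def getvalue(self):
        return self.buf

io = BoundIO(8)
io.write(b'aaaa')
io.write(b'bbbb')   # exactly 8 bytes: still fits
io.write(b'cc')     # would be 10 > 8: the 2 oldest bytes are dropped
print(io.getvalue())  # b'aabbbbcc'
```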

class FileHandler(Handler):
    """File handler which supports reopening of logs.
    """

    def __init__(self, filename, mode='ab'):
        Handler.__init__(self)

        try:
            self.stream = open(filename, mode)
        except OSError as e:
            if mode == 'ab' and e.errno == errno.ESPIPE:
                # Python 3 can't open special files like
                # /dev/stdout in 'a' mode due to an implicit seek call
                # that fails with ESPIPE. Retry in 'w' mode.
                # See: http://bugs.python.org/issue27805
                mode = 'wb'
                self.stream = open(filename, mode)
            else:
                raise

        self.baseFilename = filename
        self.mode = mode

    def reopen(self):
        self.close()
        self.stream = open(self.baseFilename, self.mode)
        self.closed = False

    def remove(self):
        self.close()
        try:
            os.remove(self.baseFilename)
        except OSError as why:
            if why.args[0] != errno.ENOENT:
                raise

class RotatingFileHandler(FileHandler):
    def __init__(self, filename, mode='ab', maxBytes=512*1024*1024,
                 backupCount=10):
        """
        Open the specified file and use it as the stream for logging.

        By default, the file grows indefinitely. You can specify particular
        values of maxBytes and backupCount to allow the file to rollover at
        a predetermined size.

        Rollover occurs whenever the current log file is nearly maxBytes in
        length. If backupCount is >= 1, the system will successively create
        new files with the same pathname as the base file, but with extensions
        ".1", ".2" etc. appended to it. For example, with a backupCount of 5
        and a base file name of "app.log", you would get "app.log",
        "app.log.1", "app.log.2", ... through to "app.log.5". The file being
        written to is always "app.log" - when it gets filled up, it is closed
        and renamed to "app.log.1", and if files "app.log.1", "app.log.2" etc.
        exist, then they are renamed to "app.log.2", "app.log.3" etc.
        respectively.

        If maxBytes is zero, rollover never occurs.
        """
        if maxBytes > 0:
            mode = 'ab' # doesn't make sense otherwise!
        FileHandler.__init__(self, filename, mode)
        self.maxBytes = maxBytes
        self.backupCount = backupCount
        self.counter = 0
        self.every = 10

    def emit(self, record):
        """
        Emit a record.

        Output the record to the file, catering for rollover as described
        in doRollover().
        """
        FileHandler.emit(self, record)
        self.doRollover()

    def _remove(self, fn): # pragma: no cover
        # this is here to service stubbing in unit tests
        return os.remove(fn)

    def _rename(self, src, tgt): # pragma: no cover
        # this is here to service stubbing in unit tests
        return os.rename(src, tgt)

    def _exists(self, fn): # pragma: no cover
        # this is here to service stubbing in unit tests
        return os.path.exists(fn)

    def removeAndRename(self, sfn, dfn):
        if self._exists(dfn):
            try:
                self._remove(dfn)
            except OSError as why:
                # catch race condition (destination already deleted)
                if why.args[0] != errno.ENOENT:
                    raise
        try:
            self._rename(sfn, dfn)
        except OSError as why:
            # catch exceptional condition (source deleted)
            # E.g. cleanup script removes active log.
            if why.args[0] != errno.ENOENT:
                raise

    def doRollover(self):
        """
        Do a rollover, as described in __init__().
        """
        if self.maxBytes <= 0:
            return

        if not (self.stream.tell() >= self.maxBytes):
            return

        self.stream.close()
        if self.backupCount > 0:
            for i in range(self.backupCount - 1, 0, -1):
                sfn = "%s.%d" % (self.baseFilename, i)
                dfn = "%s.%d" % (self.baseFilename, i + 1)
                if os.path.exists(sfn):
                    self.removeAndRename(sfn, dfn)
            dfn = self.baseFilename + ".1"
            self.removeAndRename(self.baseFilename, dfn)
        self.stream = open(self.baseFilename, 'wb')
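A dry run of the rename sequence ``doRollover()`` performs, on filenames only (the helper name ``rollover_renames`` is illustrative; no real files are touched): backups shift up from highest to lowest, then the active file becomes ``.1``.

```python
def rollover_renames(base, backup_count):
    ops = []
    # Shift existing backups: .N-1 -> .N, down to .1 -> .2.
    for i in range(backup_count - 1, 0, -1):
        ops.append(('%s.%d' % (base, i), '%s.%d' % (base, i + 1)))
    # Finally the active log becomes the newest backup.
    ops.append((base, base + '.1'))
    return ops

print(rollover_renames('app.log', 3))
```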

class LogRecord:
    def __init__(self, level, msg, **kw):
        self.level = level
        self.msg = msg
        self.kw = kw
        self.dictrepr = None

    def asdict(self):
        if self.dictrepr is None:
            now = time.time()
            msecs = (now - long(now)) * 1000
            part1 = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(now))
            asctime = '%s,%03d' % (part1, msecs)
            levelname = LOG_LEVELS_BY_NUM[self.level]
            msg = as_string(self.msg)
            if self.kw:
                msg = msg % self.kw
            self.dictrepr = {'message':msg, 'levelname':levelname,
                             'asctime':asctime}
        return self.dictrepr
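The timestamp built in ``asdict()`` above has the shape ``YYYY-MM-DD HH:MM:SS,mmm``. A sketch of just that computation; 0.25s is chosen so the millisecond fraction is exact in floating point:

```python
import time

now = 1000000000.25
msecs = (now - int(now)) * 1000
part1 = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(now))
asctime = '%s,%03d' % (part1, msecs)
print(asctime)  # e.g. "2001-09-09 01:46:40,250" (date part is timezone-dependent)
```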

class Logger:
    def __init__(self, level=None, handlers=None):
        if level is None:
            level = LevelsByName.INFO
        self.level = level

        if handlers is None:
            handlers = []
        self.handlers = handlers

    def close(self):
        for handler in self.handlers:
            handler.close()

    def blather(self, msg, **kw):
        if LevelsByName.BLAT >= self.level:
            self.log(LevelsByName.BLAT, msg, **kw)

    def trace(self, msg, **kw):
        if LevelsByName.TRAC >= self.level:
            self.log(LevelsByName.TRAC, msg, **kw)

    def debug(self, msg, **kw):
        if LevelsByName.DEBG >= self.level:
            self.log(LevelsByName.DEBG, msg, **kw)

    def info(self, msg, **kw):
        if LevelsByName.INFO >= self.level:
            self.log(LevelsByName.INFO, msg, **kw)

    def warn(self, msg, **kw):
        if LevelsByName.WARN >= self.level:
            self.log(LevelsByName.WARN, msg, **kw)

    def error(self, msg, **kw):
        if LevelsByName.ERRO >= self.level:
            self.log(LevelsByName.ERRO, msg, **kw)

    def critical(self, msg, **kw):
        if LevelsByName.CRIT >= self.level:
            self.log(LevelsByName.CRIT, msg, **kw)

    def log(self, level, msg, **kw):
        record = LogRecord(level, msg, **kw)
        for handler in self.handlers:
            if level >= handler.level:
                handler.emit(record)

    def addHandler(self, hdlr):
        self.handlers.append(hdlr)

    def getvalue(self):
        raise NotImplementedError
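The classes above gate a record twice: the named methods (``info()``, ``warn()``, ...) check the logger's own level, and ``log()`` additionally checks each handler's level. A minimal sketch of the second gate (``MiniLogger``/``ListHandler`` are illustrative stand-ins, not part of the module):

```python
class Record:
    def __init__(self, level, msg):
        self.level = level
        self.msg = msg

class MiniLogger:
    def __init__(self, level, handlers):
        self.level = level
        self.handlers = handlers
    def log(self, level, msg):
        record = Record(level, msg)
        for handler in self.handlers:
            # Only handlers at or below the record's level see it.
            if level >= handler.level:
                handler.emit(record)

seen = []
class ListHandler:
    level = 30  # WARN
    def emit(self, record):
        seen.append(record.msg)

logger = MiniLogger(20, [ListHandler()])
logger.log(20, 'info message')   # below the handler's WARN level: dropped
logger.log(40, 'error message')  # at ERRO: emitted
print(seen)
```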

class SyslogHandler(Handler):
    def __init__(self):
        Handler.__init__(self)
        assert syslog is not None, "Syslog module not present"

    def close(self):
        pass

    def reopen(self):
        pass

    def _syslog(self, msg): # pragma: no cover
        # this exists only for unit test stubbing
        syslog.syslog(msg)

    def emit(self, record):
        try:
            params = record.asdict()
            message = params['message']
            for line in message.rstrip('\n').split('\n'):
                params['message'] = line
                msg = self.fmt % params
                try:
                    self._syslog(msg)
                except UnicodeError:
                    self._syslog(msg.encode("UTF-8"))
        except:
            self.handleError()

def getLogger(level=None):
    return Logger(level)

_2MB = 1<<21

def handle_boundIO(logger, fmt, maxbytes=_2MB):
    """Attach a new BoundIO handler to an existing Logger"""
    io = BoundIO(maxbytes)
    handler = StreamHandler(io)
    handler.setLevel(logger.level)
    handler.setFormat(fmt)
    logger.addHandler(handler)
    logger.getvalue = io.getvalue

def handle_stdout(logger, fmt):
    """Attach a new StreamHandler with stdout handler to an existing Logger"""
    handler = StreamHandler(sys.stdout)
    handler.setFormat(fmt)
    handler.setLevel(logger.level)
    logger.addHandler(handler)

def handle_syslog(logger, fmt):
    """Attach a new Syslog handler to an existing Logger"""
    handler = SyslogHandler()
    handler.setFormat(fmt)
    handler.setLevel(logger.level)
    logger.addHandler(handler)

def handle_file(logger, filename, fmt, rotating=False, maxbytes=0, backups=0):
    """Attach a new file handler to an existing Logger. If the filename
    is the magic name of 'syslog' then make it a syslog handler instead."""
    if filename == 'syslog': # TODO remove this
        handler = SyslogHandler()
    else:
        if rotating is False:
            handler = FileHandler(filename)
        else:
            handler = RotatingFileHandler(filename, 'a', maxbytes, backups)
    handler.setFormat(fmt)
    handler.setLevel(logger.level)
    logger.addHandler(handler)
# supervisor-4.2.5/supervisor/medusa/__init__.py
"""medusa.__init__
"""

# created 2002/03/19, AMK

__revision__ = "$Id: __init__.py,v 1.2 2002/03/19 22:49:34 amk Exp $"
# supervisor-4.2.5/supervisor/medusa/asynchat_25.py
# -*- Mode: Python; tab-width: 4 -*-
#       Id: asynchat.py,v 2.26 2000/09/07 22:29:26 rushing Exp
#       Author: Sam Rushing 

# ======================================================================
# Copyright 1996 by Sam Rushing
#
#                         All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of Sam
# Rushing not be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior
# permission.
#
# SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
# ======================================================================

r"""A class supporting chat-style (command/response) protocols.

This class adds support for 'chat' style protocols - where one side
sends a 'command', and the other sends a response (examples would be
the common internet protocols - smtp, nntp, ftp, etc..).

The handle_read() method looks at the input stream for the current
'terminator' (usually '\r\n' for single-line responses, '\r\n.\r\n'
for multi-line output), calling self.found_terminator() on its
receipt.

for example:
Say you build an async nntp client using this class.  At the start
of the connection, you'll have self.terminator set to '\r\n', in
order to process the single-line greeting.  Just before issuing a
'LIST' command you'll set it to '\r\n.\r\n'.  The output of the LIST
command will be accumulated (using your own 'collect_incoming_data'
method) up to the terminator, and then control will be returned to
you - by calling your self.found_terminator() method.
"""

import socket
from supervisor.medusa import asyncore_25 as asyncore
from supervisor.compat import long
from supervisor.compat import as_bytes

class async_chat (asyncore.dispatcher):
    """This is an abstract class.  You must derive from this class, and add
    the two methods collect_incoming_data() and found_terminator()"""

    # these are overridable defaults

    ac_in_buffer_size       = 4096
    ac_out_buffer_size      = 4096

    def __init__ (self, conn=None, map=None):
        self.ac_in_buffer = b''
        self.ac_out_buffer = b''
        self.producer_fifo = fifo()
        asyncore.dispatcher.__init__ (self, conn, map)

    def collect_incoming_data(self, data):
        raise NotImplementedError("must be implemented in subclass")

    def found_terminator(self):
        raise NotImplementedError("must be implemented in subclass")

    def set_terminator (self, term):
        """Set the input delimiter.  Can be a fixed string of any length, an integer, or None"""
        self.terminator = term

    def get_terminator (self):
        return self.terminator

    # grab some more data from the socket,
    # throw it to the collector method,
    # check for the terminator,
    # if found, transition to the next state.

    def handle_read (self):
        try:
            data = self.recv (self.ac_in_buffer_size)
        except socket.error:
            self.handle_error()
            return

        self.ac_in_buffer += data

        # Continue to search for self.terminator in self.ac_in_buffer,
        # while calling self.collect_incoming_data.  The while loop
        # is necessary because we might read several data+terminator
        # combos with a single recv(1024).

        while self.ac_in_buffer:
            lb = len(self.ac_in_buffer)
            terminator = self.get_terminator()
            if not terminator:
                # no terminator, collect it all
                self.collect_incoming_data (self.ac_in_buffer)
                self.ac_in_buffer = b''
            elif isinstance(terminator, int) or isinstance(terminator, long):
                # numeric terminator
                n = terminator
                if lb < n:
                    self.collect_incoming_data (self.ac_in_buffer)
                    self.ac_in_buffer = b''
                    self.terminator -= lb
                else:
                    self.collect_incoming_data (self.ac_in_buffer[:n])
                    self.ac_in_buffer = self.ac_in_buffer[n:]
                    self.terminator = 0
                    self.found_terminator()
            else:
                # 3 cases:
                # 1) end of buffer matches terminator exactly:
                #    collect data, transition
                # 2) end of buffer matches some prefix:
                #    collect data to the prefix
                # 3) end of buffer does not match any prefix:
                #    collect data
                terminator_len = len(terminator)
                index = self.ac_in_buffer.find(terminator)
                if index != -1:
                    # we found the terminator
                    if index > 0:
                        # don't bother reporting the empty string (source of subtle bugs)
                        self.collect_incoming_data (self.ac_in_buffer[:index])
                    self.ac_in_buffer = self.ac_in_buffer[index+terminator_len:]
                    # This does the Right Thing if the terminator is changed here.
                    self.found_terminator()
                else:
                    # check for a prefix of the terminator
                    index = find_prefix_at_end (self.ac_in_buffer, terminator)
                    if index:
                        if index != lb:
                            # we found a prefix, collect up to the prefix
                            self.collect_incoming_data (self.ac_in_buffer[:-index])
                            self.ac_in_buffer = self.ac_in_buffer[-index:]
                        break
                    else:
                        # no prefix, collect it all
                        self.collect_incoming_data (self.ac_in_buffer)
                        self.ac_in_buffer = b''
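    # A standalone sketch of the string-terminator case handled above,
    # minus the prefix-at-end bookkeeping: emit each complete message
    # and return whatever partial data remains buffered. The helper
    # name split_on_terminator is illustrative, not part of the module.
    #
    #     def split_on_terminator(data, term):
    #         msgs = []
    #         while True:
    #             idx = data.find(term)
    #             if idx == -1:
    #                 break
    #             msgs.append(data[:idx])
    #             data = data[idx + len(term):]
    #         return msgs, data
    #
    #     split_on_terminator(b'HELO\r\nQUIT\r\npart', b'\r\n')
    #     # -> ([b'HELO', b'QUIT'], b'part')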

    def handle_write (self):
        self.initiate_send ()

    def handle_close (self):
        self.close()

    def push (self, data):
        data = as_bytes(data)
        self.producer_fifo.push(simple_producer(data))
        self.initiate_send()

    def push_with_producer (self, producer):
        self.producer_fifo.push (producer)
        self.initiate_send()

    def readable (self):
        """predicate for inclusion in the readable for select()"""
        return len(self.ac_in_buffer) <= self.ac_in_buffer_size

    def writable (self):
        """predicate for inclusion in the writable for select()"""
        # return len(self.ac_out_buffer) or len(self.producer_fifo) or (not self.connected)
        # this is about twice as fast, though not as clear.
        return not (
                (self.ac_out_buffer == b'') and
                self.producer_fifo.is_empty() and
                self.connected
                )

    def close_when_done (self):
        """automatically close this channel once the outgoing queue is empty"""
        self.producer_fifo.push (None)

    # refill the outgoing buffer by calling the more() method
    # of the first producer in the queue
    def refill_buffer (self):
        while 1:
            if len(self.producer_fifo):
                p = self.producer_fifo.first()
                # a 'None' in the producer fifo is a sentinel,
                # telling us to close the channel.
                if p is None:
                    if not self.ac_out_buffer:
                        self.producer_fifo.pop()
                        self.close()
                    return
                elif isinstance(p, bytes):
                    self.producer_fifo.pop()
                    self.ac_out_buffer += p
                    return
                data = p.more()
                if data:
                    self.ac_out_buffer = self.ac_out_buffer + data
                    return
                else:
                    self.producer_fifo.pop()
            else:
                return

    def initiate_send (self):
        obs = self.ac_out_buffer_size
        # try to refill the buffer
        if len (self.ac_out_buffer) < obs:
            self.refill_buffer()

        if self.ac_out_buffer and self.connected:
            # try to send the buffer
            try:
                num_sent = self.send (self.ac_out_buffer[:obs])
                if num_sent:
                    self.ac_out_buffer = self.ac_out_buffer[num_sent:]

            except socket.error:
                self.handle_error()
                return

    def discard_buffers (self):
        # Emergencies only!
        self.ac_in_buffer = b''
        self.ac_out_buffer = b''
        while self.producer_fifo:
            self.producer_fifo.pop()


class simple_producer:

    def __init__ (self, data, buffer_size=512):
        self.data = data
        self.buffer_size = buffer_size

    def more (self):
        if len (self.data) > self.buffer_size:
            result = self.data[:self.buffer_size]
            self.data = self.data[self.buffer_size:]
            return result
        else:
            result = self.data
            self.data = b''
            return result

class fifo:
    def __init__ (self, list=None):
        if not list:
            self.list = []
        else:
            self.list = list

    def __len__ (self):
        return len(self.list)

    def is_empty (self):
        return self.list == []

    def first (self):
        return self.list[0]

    def push (self, data):
        self.list.append(data)

    def pop (self):
        if self.list:
            return 1, self.list.pop(0)
        else:
            return 0, None
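``fifo.pop()`` above returns a ``(found, item)`` pair instead of raising on empty, so callers can pop unconditionally. A self-contained demonstration (class reproduced in the parts exercised):

```python
class fifo:
    def __init__(self, list=None):
        self.list = list or []
    def push(self, data):
        self.list.append(data)
    def pop(self):
        # (1, item) on success; (0, None) when the queue is empty.
        if self.list:
            return 1, self.list.pop(0)
        return 0, None

q = fifo()
q.push(b'a')
q.push(b'b')
print(q.pop())  # (1, b'a')
print(q.pop())  # (1, b'b')
print(q.pop())  # (0, None)
```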

# Given 'haystack', see if any prefix of 'needle' is at its end.  This
# assumes an exact match has already been checked.  Return the number of
# characters matched.
# for example:
# f_p_a_e ("qwerty\r", "\r\n") => 1
# f_p_a_e ("qwertydkjf", "\r\n") => 0
# f_p_a_e ("qwerty\r\n", "\r\n") => 0

# this could maybe be made faster with a computed regex?
# [answer: no; circa Python-2.0, Jan 2001]
# new python:   28961/s
# old python:   18307/s
# re:        12820/s
# regex:     14035/s

def find_prefix_at_end (haystack, needle):
    l = len(needle) - 1
    while l and not haystack.endswith(needle[:l]):
        l -= 1
    return l
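The examples from the comment block above, checked directly (function reproduced verbatim so the snippet runs standalone; the full-terminator case returns 0 because an exact match is assumed to have been handled earlier):

```python
def find_prefix_at_end(haystack, needle):
    l = len(needle) - 1
    while l and not haystack.endswith(needle[:l]):
        l -= 1
    return l

print(find_prefix_at_end("qwerty\r", "\r\n"))    # 1
print(find_prefix_at_end("qwertydkjf", "\r\n"))  # 0
print(find_prefix_at_end("qwerty\r\n", "\r\n"))  # 0
```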
# supervisor-4.2.5/supervisor/medusa/asyncore_25.py
# -*- Mode: Python -*-
#   Id: asyncore.py,v 2.51 2000/09/07 22:29:26 rushing Exp
#   Author: Sam Rushing 

# ======================================================================
# Copyright 1996 by Sam Rushing
#
#                         All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of Sam
# Rushing not be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior
# permission.
#
# SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
# ======================================================================

"""Basic infrastructure for asynchronous socket service clients and servers.

There are only two ways to have a program on a single processor do "more
than one thing at a time".  Multi-threaded programming is the simplest and
most popular way to do it, but there is another very different technique,
that lets you have nearly all the advantages of multi-threading, without
actually using multiple threads. It's really only practical if your program
is largely I/O bound. If your program is CPU bound, then preemptive
scheduled threads are probably what you really need. Network servers are
rarely CPU-bound, however.

If your operating system supports the select() system call in its I/O
library (and nearly all do), then you can use it to juggle multiple
communication channels at once; doing other work while your I/O is taking
place in the "background."  Although this strategy can seem strange and
complex, especially at first, it is in many ways easier to understand and
control than multi-threaded programming. The module documented here solves
many of the difficult problems for you, making the task of building
sophisticated high-performance network servers and clients a snap.
"""

import select
import socket
import sys
import time

import os
from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, \
     ENOTCONN, ESHUTDOWN, EINTR, EISCONN, errorcode

from supervisor.compat import as_string, as_bytes

try:
    socket_map
except NameError:
    socket_map = {}

class ExitNow(Exception):
    pass

def read(obj):
    try:
        obj.handle_read_event()
    except ExitNow:
        raise
    except:
        obj.handle_error()

def write(obj):
    try:
        obj.handle_write_event()
    except ExitNow:
        raise
    except:
        obj.handle_error()

def _exception (obj):
    try:
        obj.handle_expt_event()
    except ExitNow:
        raise
    except:
        obj.handle_error()

def readwrite(obj, flags):
    try:
        if flags & (select.POLLIN | select.POLLPRI):
            obj.handle_read_event()
        if flags & select.POLLOUT:
            obj.handle_write_event()
        if flags & (select.POLLERR | select.POLLHUP | select.POLLNVAL):
            obj.handle_expt_event()
    except ExitNow:
        raise
    except:
        obj.handle_error()

def poll(timeout=0.0, map=None):
    if map is None:
        map = socket_map
    if map:
        r = []; w = []; e = []
        for fd, obj in map.items():
            is_r = obj.readable()
            is_w = obj.writable()
            if is_r:
                r.append(fd)
            if is_w:
                w.append(fd)
            if is_r or is_w:
                e.append(fd)
        if [] == r == w == e:
            time.sleep(timeout)
        else:
            try:
                r, w, e = select.select(r, w, e, timeout)
            except select.error as err:
                if err.args[0] != EINTR:
                    raise
                else:
                    return

        for fd in r:
            obj = map.get(fd)
            if obj is None:
                continue
            read(obj)

        for fd in w:
            obj = map.get(fd)
            if obj is None:
                continue
            write(obj)

        for fd in e:
            obj = map.get(fd)
            if obj is None:
                continue
            _exception(obj)
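A minimal illustration of the readiness check ``poll()`` builds on: ``select.select()`` accepts any objects with a ``fileno()`` method, here a plain ``socketpair`` instead of dispatcher instances (POSIX only).

```python
import select
import socket

a, b = socket.socketpair()
b.send(b'x')                        # make `a` readable
r, w, e = select.select([a], [a], [], 1.0)
print(a in r, a in w)               # data pending, and the send buffer has room
```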

def poll2(timeout=0.0, map=None):
    # Use the poll() support added to the select module in Python 2.0
    if map is None:
        map = socket_map
    if timeout is not None:
        # timeout is in milliseconds
        timeout = int(timeout*1000)
    pollster = select.poll()
    if map:
        for fd, obj in map.items():
            flags = 0
            if obj.readable():
                flags |= select.POLLIN | select.POLLPRI
            if obj.writable():
                flags |= select.POLLOUT
            if flags:
                # Only check for exceptions if object was either readable
                # or writable.
                flags |= select.POLLERR | select.POLLHUP | select.POLLNVAL
                pollster.register(fd, flags)
        try:
            r = pollster.poll(timeout)
        except select.error as err:
            if err.args[0] != EINTR:
                raise
            r = []
        for fd, flags in r:
            obj = map.get(fd)
            if obj is None:
                continue
            readwrite(obj, flags)

poll3 = poll2                           # Alias for backward compatibility

def loop(timeout=30.0, use_poll=False, map=None, count=None):
    if map is None:
        map = socket_map

    if use_poll and hasattr(select, 'poll'):
        poll_fun = poll2
    else:
        poll_fun = poll

    if count is None:
        while map:
            poll_fun(timeout, map)

    else:
        while map and count > 0:
            poll_fun(timeout, map)
            count -= 1

class dispatcher:

    debug = False
    connected = False
    accepting = False
    closing = False
    addr = None

    def __init__(self, sock=None, map=None):
        if map is None:
            self._map = socket_map
        else:
            self._map = map

        if sock:
            self.set_socket(sock, map)
            # I think it should inherit this anyway
            self.socket.setblocking(0)
            self.connected = True
            # XXX Does the constructor require that the socket passed
            # be connected?
            try:
                self.addr = sock.getpeername()
            except socket.error:
                # The addr isn't crucial
                pass
        else:
            self.socket = None

    def __repr__(self):
        status = [self.__class__.__module__+"."+self.__class__.__name__]
        if self.accepting and self.addr:
            status.append('listening')
        elif self.connected:
            status.append('connected')
        if self.addr is not None:
            try:
                status.append('%s:%d' % self.addr)
            except TypeError:
                status.append(repr(self.addr))
        return '<%s at %#x>' % (' '.join(status), id(self))

    def add_channel(self, map=None):
        #self.log_info('adding channel %s' % self)
        if map is None:
            map = self._map
        map[self._fileno] = self

    def del_channel(self, map=None):
        fd = self._fileno
        if map is None:
            map = self._map
        if fd in map:
            #self.log_info('closing channel %d:%s' % (fd, self))
            del map[fd]
        self._fileno = None

    def create_socket(self, family, type):
        self.family_and_type = family, type
        self.socket = socket.socket(family, type)
        self.socket.setblocking(0)
        self._fileno = self.socket.fileno()
        self.add_channel()

    def set_socket(self, sock, map=None):
        self.socket = sock
##        self.__dict__['socket'] = sock
        self._fileno = sock.fileno()
        self.add_channel(map)

    def set_reuse_addr(self):
        # try to re-use a server port if possible
        try:
            self.socket.setsockopt(
                socket.SOL_SOCKET, socket.SO_REUSEADDR,
                self.socket.getsockopt(socket.SOL_SOCKET,
                                       socket.SO_REUSEADDR) | 1
                )
        except socket.error:
            pass

    # ==================================================
    # predicates for select()
    # these are used as filters for the lists of sockets
    # to pass to select().
    # ==================================================

    def readable(self):
        return True

    def writable(self):
        return True

    # ==================================================
    # socket object methods.
    # ==================================================

    def listen(self, num):
        self.accepting = True
        if os.name == 'nt' and num > 5:
            num = 1
        return self.socket.listen(num)

    def bind(self, addr):
        self.addr = addr
        return self.socket.bind(addr)

    def connect(self, address):
        self.connected = False
        err = self.socket.connect_ex(address)
        # XXX Should interpret Winsock return values
        if err in (EINPROGRESS, EALREADY, EWOULDBLOCK):
            return
        if err in (0, EISCONN):
            self.addr = address
            self.connected = True
            self.handle_connect()
        else:
            raise socket.error(err, errorcode[err])
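The non-blocking connect dance above can be exercised directly with the socket module. A hedged sketch, using a throwaway listener on an OS-assigned localhost port: connect_ex() returning EINPROGRESS or EWOULDBLOCK means the handshake is in flight, and writability plus a clean SO_ERROR signals completion.

```python
import errno
import select
import socket

# Throwaway listener on an ephemeral localhost port.
listener = socket.socket()
listener.bind(('127.0.0.1', 0))
listener.listen(1)

client = socket.socket()
client.setblocking(False)
err = client.connect_ex(listener.getsockname())

# Wait (up to 5s) for the socket to become writable, then read SO_ERROR to
# learn the outcome of the asynchronous connect.
select.select([], [client], [], 5.0)
so_err = client.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)

client.close()
listener.close()

assert err in (0, errno.EINPROGRESS, errno.EWOULDBLOCK)
assert so_err == 0
```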

    def accept(self):
        # XXX can return either an address pair or None
        try:
            conn, addr = self.socket.accept()
            return conn, addr
        except socket.error as why:
            if why.args[0] == EWOULDBLOCK:
                pass
            else:
                raise

    def send(self, data):
        try:
            result = self.socket.send(data)
            return result
        except socket.error as why:
            if why.args[0] == EWOULDBLOCK:
                return 0
            else:
                raise

    def recv(self, buffer_size):
        try:
            data = self.socket.recv(buffer_size)
            if not data:
                # a closed connection is indicated by signaling
                # a read condition, and having recv() return 0.
                self.handle_close()
                return b''
            else:
                return data
        except socket.error as why:
            # winsock sometimes throws ENOTCONN
            if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN]:
                self.handle_close()
                return b''
            else:
                raise
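The convention recv() relies on, that a peer close shows up as a read event whose recv() returns an empty byte string, can be demonstrated with a socketpair:

```python
import socket

# A connected pair: data flows normally, then closing one side makes the
# other side's recv() return b'' (EOF) instead of raising.
a, b = socket.socketpair()
a.sendall(b'ping')
assert b.recv(16) == b'ping'

a.close()                   # peer closes the connection
eof = b.recv(16)            # EOF is signaled by an empty read, not an error
assert eof == b''
b.close()
```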

    def close(self):
        self.del_channel()
        self.socket.close()

    # cheap inheritance, used to pass all other attribute
    # references to the underlying socket object.
    def __getattr__(self, attr):
        return getattr(self.socket, attr)
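The "cheap inheritance" trick works because __getattr__ only fires when normal attribute lookup fails, so unknown lookups fall through to the wrapped object. A miniature illustration (wrapper and target are illustrative names, not medusa APIs):

```python
# Delegation via __getattr__: attributes not found on the wrapper are
# forwarded to the wrapped object, just as dispatcher forwards unknown
# attribute references to its underlying socket.
class wrapper:
    def __init__(self, target):
        self.target = target

    def __getattr__(self, attr):
        # only called when normal lookup fails
        return getattr(self.target, attr)

buf = wrapper([])
buf.append(1)               # 'append' is delegated to the wrapped list
buf.append(2)
assert buf.target == [1, 2]
```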

    # log and log_info may be overridden to provide more sophisticated
    # logging and warning methods. In general, log is for 'hit' logging
    # and 'log_info' is for informational, warning and error logging.

    def log(self, message):
        sys.stderr.write('log: %s\n' % str(message))

    def log_info(self, message, type='info'):
        if __debug__ or type != 'info':
            print('%s: %s' % (type, message))

    def handle_read_event(self):
        if self.accepting:
            # for an accepting socket, getting a read implies
            # that we are connected
            if not self.connected:
                self.connected = True
            self.handle_accept()
        elif not self.connected:
            self.handle_connect()
            self.connected = True
            self.handle_read()
        else:
            self.handle_read()

    def handle_write_event(self):
        # getting a write implies that we are connected
        if not self.connected:
            self.handle_connect()
            self.connected = True
        self.handle_write()

    def handle_expt_event(self):
        self.handle_expt()

    def handle_error(self):
        nil, t, v, tbinfo = compact_traceback()

        # sometimes a user repr method will crash.
        try:
            self_repr = repr(self)
        except:
            self_repr = '<__repr__(self) failed for object at %0x>' % id(self)

        self.log_info(
            'uncaptured python exception, closing channel %s (%s:%s %s)' % (
                self_repr,
                t,
                v,
                tbinfo
                ),
            'error'
            )
        self.close()

    def handle_expt(self):
        self.log_info('unhandled exception', 'warning')

    def handle_read(self):
        self.log_info('unhandled read event', 'warning')

    def handle_write(self):
        self.log_info('unhandled write event', 'warning')

    def handle_connect(self):
        self.log_info('unhandled connect event', 'warning')

    def handle_accept(self):
        self.log_info('unhandled accept event', 'warning')

    def handle_close(self):
        self.log_info('unhandled close event', 'warning')
        self.close()

# ---------------------------------------------------------------------------
# adds simple buffered output capability, useful for simple clients.
# [for more sophisticated usage use asynchat.async_chat]
# ---------------------------------------------------------------------------

class dispatcher_with_send(dispatcher):

    def __init__(self, sock=None, map=None):
        dispatcher.__init__(self, sock, map)
        self.out_buffer = b''

    def initiate_send(self):
        num_sent = dispatcher.send(self, self.out_buffer[:512])
        self.out_buffer = self.out_buffer[num_sent:]

    def handle_write(self):
        self.initiate_send()

    def writable(self):
        return (not self.connected) or len(self.out_buffer)

    def send(self, data):
        if self.debug:
            self.log_info('sending %s' % repr(data))
        self.out_buffer = self.out_buffer + data
        self.initiate_send()
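The out_buffer discipline of dispatcher_with_send can be shown in isolation: send() appends to the buffer, and each write event drains at most 512 bytes. A list stands in for the network here; sketch_buffer is an illustrative name, not a medusa class.

```python
# Minimal model of dispatcher_with_send's buffering: each initiate_send()
# moves at most 512 bytes from out_buffer onto the "wire".
class sketch_buffer:
    def __init__(self):
        self.out_buffer = b''
        self.wire = []                  # stands in for the socket

    def send(self, data):
        self.out_buffer += data
        self.initiate_send()

    def initiate_send(self):
        chunk = self.out_buffer[:512]   # never more than 512 per write event
        self.wire.append(chunk)
        self.out_buffer = self.out_buffer[len(chunk):]

s = sketch_buffer()
s.send(b'a' * 1000)
while s.out_buffer:                     # later write events drain the rest
    s.initiate_send()

assert b''.join(s.wire) == b'a' * 1000
assert max(len(c) for c in s.wire) == 512
```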

# ---------------------------------------------------------------------------
# used for debugging.
# ---------------------------------------------------------------------------

def compact_traceback():
    t, v, tb = sys.exc_info()
    tbinfo = []
    assert tb # Must have a traceback
    while tb:
        tbinfo.append((
            tb.tb_frame.f_code.co_filename,
            tb.tb_frame.f_code.co_name,
            str(tb.tb_lineno)
            ))
        tb = tb.tb_next

    # just to be safe
    del tb

    file, function, line = tbinfo[-1]
    info = ' '.join(['[%s|%s|%s]' % x for x in tbinfo])
    return (file, function, line), t, v, info
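What compact_traceback() gathers is one (file, function, line) triple per frame of the active exception's traceback. A self-contained sketch of the same walk (collect_frames and boom are illustrative names):

```python
import sys

# Walk the active exception's traceback, innermost frame last, collecting
# the same (filename, function, line) triples compact_traceback records.
def collect_frames():
    t, v, tb = sys.exc_info()
    frames = []
    while tb:
        frames.append((tb.tb_frame.f_code.co_filename,
                       tb.tb_frame.f_code.co_name,
                       tb.tb_lineno))
        tb = tb.tb_next
    return frames

def boom():
    raise ValueError('example')

try:
    boom()
except ValueError:
    frames = collect_frames()

assert frames[-1][1] == 'boom'   # innermost frame is the raising function
```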

def close_all(map=None):
    if map is None:
        map = socket_map
    for x in map.values():
        x.socket.close()
    map.clear()

# Asynchronous File I/O:
#
# After a little research (reading man pages on various unixen, and
# digging through the linux kernel), I've determined that select()
# isn't meant for doing asynchronous file i/o.
# Heartening, though - reading linux/mm/filemap.c shows that linux
# supports asynchronous read-ahead.  So _MOST_ of the time, the data
# will be sitting in memory for us already when we go to read it.
#
# What other OS's (besides NT) support async file i/o?  [VMS?]
#
# Regardless, this is useful for pipes, and stdin/stdout...

if os.name == 'posix':
    import fcntl

    class file_wrapper:
        # here we override just enough to make a file
        # look like a socket for the purposes of asyncore.

        def __init__(self, fd):
            self.fd = fd

        def recv(self, buffersize):
            return as_string(os.read(self.fd, buffersize))

        def send(self, s):
            return os.write(self.fd, as_bytes(s))

        read = recv
        write = send

        def close(self):
            os.close(self.fd)

        def fileno(self):
            return self.fd

    class file_dispatcher(dispatcher):

        def __init__(self, fd, map=None):
            dispatcher.__init__(self, None, map)
            self.connected = True
            self.set_file(fd)
            # set it to non-blocking mode
            flags = fcntl.fcntl(fd, fcntl.F_GETFL, 0)
            flags |= os.O_NONBLOCK
            fcntl.fcntl(fd, fcntl.F_SETFL, flags)

        def set_file(self, fd):
            self._fileno = fd
            self.socket = file_wrapper(fd)
            self.add_channel()
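The O_NONBLOCK toggle that file_dispatcher applies in its constructor can be shown on a bare pipe: once the flag is set, reading an empty pipe fails fast with BlockingIOError instead of blocking.

```python
import fcntl
import os

# Same F_GETFL/F_SETFL sequence as file_dispatcher.__init__, applied to a
# pipe's read end, then a read on the empty pipe to show it no longer blocks.
r, w = os.pipe()
flags = fcntl.fcntl(r, fcntl.F_GETFL, 0)
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    os.read(r, 1)            # nothing buffered: would normally block
    raised = False
except BlockingIOError:
    raised = True

assert raised
os.close(r)
os.close(w)
```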
# ---------------------------------------------------------------------------
# File: supervisor-4.2.5/supervisor/medusa/auth_handler.py
# ---------------------------------------------------------------------------
# -*- Mode: Python -*-
#
#       Author: Sam Rushing 
#       Copyright 1996-2000 by Sam Rushing
#                                                All Rights Reserved.
#

RCS_ID =  '$Id: auth_handler.py,v 1.6 2002/11/25 19:40:23 akuchling Exp $'

# support for 'basic' authentication.

import re
import sys
import time

from supervisor.compat import as_string, as_bytes
from supervisor.compat import encodestring, decodestring
from supervisor.compat import long
from supervisor.compat import md5

import supervisor.medusa.counter as counter
import supervisor.medusa.default_handler as default_handler

get_header = default_handler.get_header

import supervisor.medusa.producers as producers

# This is a 'handler' that wraps an authorization method
# around access to the resources normally served up by
# another handler.

# does anyone support digest authentication? (rfc2069)

class auth_handler:
    def __init__ (self, dict, handler, realm='default'):
        self.authorizer = dictionary_authorizer (dict)
        self.handler = handler
        self.realm = realm
        self.pass_count = counter.counter()
        self.fail_count = counter.counter()

    def match (self, request):
        # by default, use the given handler's matcher
        return self.handler.match (request)

    def handle_request (self, request):
        # authorize a request before handling it...
        scheme = get_header (AUTHORIZATION, request.header)

        if scheme:
            scheme = scheme.lower()
            if scheme == 'basic':
                cookie = get_header (AUTHORIZATION, request.header, 2)
                try:
                    decoded = as_string(decodestring(as_bytes(cookie)))
                except:
                    sys.stderr.write('malformed authorization info <%s>\n' % cookie)
                    request.error (400)
                    return
                auth_info = decoded.split(':', 1)
                if self.authorizer.authorize (auth_info):
                    self.pass_count.increment()
                    request.auth_info = auth_info
                    self.handler.handle_request (request)
                else:
                    self.handle_unauthorized (request)
            #elif scheme == 'digest':
            #       print 'digest: ',AUTHORIZATION.group(2)
            else:
                sys.stderr.write('unknown/unsupported auth method: %s\n' % scheme)
                self.handle_unauthorized(request)
        else:
            # list both?  prefer one or the other?
            # you could also use a 'nonce' here. [see below]
            #auth = 'Basic realm="%s" Digest realm="%s"' % (self.realm, self.realm)
            #nonce = self.make_nonce (request)
            #auth = 'Digest realm="%s" nonce="%s"' % (self.realm, nonce)
            #request['WWW-Authenticate'] = auth
            #print 'sending header: %s' % request['WWW-Authenticate']
            self.handle_unauthorized (request)
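The 'basic' branch above recovers credentials by base64-decoding the header payload and splitting on the first colon. A sketch using the standard base64 module in place of supervisor.compat's encodestring/decodestring helpers (the username and password are made up for illustration):

```python
import base64

# An Authorization payload is base64("username:password"); split on the
# first ':' only, since passwords may themselves contain colons.
cookie = base64.b64encode(b'alice:s3cr:et').decode('ascii')

decoded = base64.b64decode(cookie).decode('ascii')
auth_info = decoded.split(':', 1)
assert auth_info == ['alice', 's3cr:et']
```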

    def handle_unauthorized (self, request):
        # We are now going to receive data that we want to ignore.
        # Clear the terminator so the channel discards any request body
        # (e.g. file data from a POST) that we're not interested in.
        self.fail_count.increment()
        request.channel.set_terminator (None)
        request['Connection'] = 'close'
        request['WWW-Authenticate'] = 'Basic realm="%s"' % self.realm
        request.error (401)

    def make_nonce (self, request):
        """A digest-authentication , constructed as suggested in RFC 2069"""
        ip = request.channel.server.ip
        now = str(long(time.time()))
        if now[-1:] == 'L':
            now = now[:-1]
        private_key = str (id (self))
        nonce = ':'.join([ip, now, private_key])
        return self.apply_hash (nonce)

    def apply_hash (self, s):
        """Apply MD5 to a string, then wrap the digest in base64 encoding."""
        m = md5()
        m.update (as_bytes(s))  # md5 requires bytes on Python 3
        d = m.digest()
        # base64.encodestring tacks on an extra linefeed.
        return encodestring (d)[:-1]
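An equivalent of apply_hash() using hashlib and base64 directly, hedged as a sketch (the nonce string below is a made-up example of the ip:time:key format make_nonce builds): MD5 the input, base64 the 16-byte digest, and drop the trailing newline that base64.encodebytes appends.

```python
import base64
from hashlib import md5

# MD5 then base64, trimming the newline encodebytes tacks on -- the same
# shape as apply_hash() above, but with stdlib calls.
def apply_hash(s):
    d = md5(s.encode('ascii')).digest()
    return base64.encodebytes(d)[:-1]

h = apply_hash('127.0.0.1:1700000000:12345')
assert len(h) == 24        # a 16-byte digest encodes to 24 base64 characters
```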

    def status (self):
        # Thanks to mwm@contessa.phone.net (Mike Meyer)
        r = [
                producers.simple_producer (
                        '<li>Authorization Extension : '
                        '<b>Unauthorized requests:</b> %s<ul>' % self.fail_count
                        )
                ]
        if hasattr (self.handler, 'status'):
            r.append (self.handler.status())
        r.append (
                producers.simple_producer ('</ul>')
                )
        return producers.composite_producer(r)

class dictionary_authorizer:
    def __init__ (self, dict):
        self.dict = dict

    def authorize (self, auth_info):
        [username, password] = auth_info
        if username in self.dict and self.dict[username] == password:
            return 1
        else:
            return 0

AUTHORIZATION = re.compile (
        #               scheme  challenge
        'Authorization: ([^ ]+) (.*)',
        re.IGNORECASE
        )
# ---------------------------------------------------------------------------
# File: supervisor-4.2.5/supervisor/medusa/counter.py
# ---------------------------------------------------------------------------
# -*- Mode: Python -*-

# It is tempting to add an __int__ method to this class, but it's not
# a good idea.  This class tries to gracefully handle integer
# overflow, and to hide this detail from both the programmer and the
# user.  Note that the __str__ method can be relied on for printing out
# the value of a counter:
#
# >>> print 'Total Client: %s' % self.total_clients
#
# If you need to do arithmetic with the value, then use the 'as_long'
# method, the use of long arithmetic is a reminder that the counter
# will overflow.
from supervisor.compat import long

class counter:
    """general-purpose counter"""

    def __init__ (self, initial_value=0):
        self.value = initial_value

    def increment (self, delta=1):
        result = self.value
        try:
            self.value = self.value + delta
        except OverflowError:
            self.value = long(self.value) + delta
        return result

    def decrement (self, delta=1):
        result = self.value
        try:
            self.value = self.value - delta
        except OverflowError:
            self.value = long(self.value) - delta
        return result

    def as_long (self):
        return long(self.value)

    def __nonzero__ (self):
        return self.value != 0

    __bool__ = __nonzero__

    def __repr__ (self):
        return '<counter value=%s at %x>' % (self.value, id(self))

    def __str__ (self):
        s = str(long(self.value))
        if s[-1:] == 'L':
            s = s[:-1]
        return s
# ---------------------------------------------------------------------------
# File: supervisor-4.2.5/supervisor/medusa/default_handler.py
# ---------------------------------------------------------------------------
# -*- Mode: Python -*-
#
# Author: Sam Rushing
# Copyright 1997 by Sam Rushing
#                                        All Rights Reserved.
#

RCS_ID = '$Id: default_handler.py,v 1.8 2002/08/01 18:15:45 akuchling Exp $'

# standard python modules
import mimetypes
import re
import stat

# medusa modules
import supervisor.medusa.http_date as http_date
import supervisor.medusa.http_server as http_server
import supervisor.medusa.producers as producers
from supervisor.medusa.util import html_repr

unquote = http_server.unquote

# This is the 'default' handler.  it implements the base set of
# features expected of a simple file-delivering HTTP server.  file
# services are provided through a 'filesystem' object, the very same
# one used by the FTP server.
#
# You can replace or modify this handler if you want a non-standard
# HTTP server.  You can also derive your own handler classes from
# it.
#
# support for handling POST requests is available in the derived
# class <default_with_post_handler>, defined below.
#

from supervisor.medusa.counter import counter

class default_handler:

    valid_commands = ['GET', 'HEAD']

    IDENT = 'Default HTTP Request Handler'

    # Pathnames that are tried when a URI resolves to a directory name
    directory_defaults = [
            'index.html',
            'default.html'
            ]

    default_file_producer = producers.file_producer

    def __init__ (self, filesystem):
        self.filesystem = filesystem
        # count total hits
        self.hit_counter = counter()
        # count file deliveries
        self.file_counter = counter()
        # count cache hits
        self.cache_counter = counter()

    hit_counter = 0

    def __repr__ (self):
        return '<%s (%s hits) at %x>' % (
                self.IDENT,
                self.hit_counter,
                id (self)
                )

    # always match, since this is a default
    def match (self, request):
        return 1

    # handle a file request, with caching.

    def handle_request (self, request):

        if request.command not in self.valid_commands:
            request.error (400) # bad request
            return

        self.hit_counter.increment()

        path, params, query, fragment = request.split_uri()

        if '%' in path:
            path = unquote (path)

        # strip off all leading slashes
        while path and path[0] == '/':
            path = path[1:]

        if self.filesystem.isdir (path):
            if path and path[-1] != '/':
                request['Location'] = 'http://%s/%s/' % (
                        request.channel.server.server_name,
                        path
                        )
                request.error (301)
                return

            # we could also generate a directory listing here,
            # may want to move this into another method for that
            # purpose
            found = 0
            if path and path[-1] != '/':
                path += '/'
            for default in self.directory_defaults:
                p = path + default
                if self.filesystem.isfile (p):
                    path = p
                    found = 1
                    break
            if not found:
                request.error (404) # Not Found
                return

        elif not self.filesystem.isfile (path):
            request.error (404) # Not Found
            return

        file_length = self.filesystem.stat (path)[stat.ST_SIZE]

        ims = get_header_match (IF_MODIFIED_SINCE, request.header)

        length_match = 1
        if ims:
            length = ims.group (4)
            if length:
                try:
                    length = int(length)
                    if length != file_length:
                        length_match = 0
                except:
                    pass

        ims_date = 0
        if ims:
            ims_date = http_date.parse_http_date (ims.group (1))

        try:
            mtime = self.filesystem.stat (path)[stat.ST_MTIME]
        except:
            request.error (404)
            return

        if length_match and ims_date:
            if mtime <= ims_date:
                request.reply_code = 304
                request.done()
                self.cache_counter.increment()
                return
        try:
            file = self.filesystem.open (path, 'rb')
        except IOError:
            request.error (404)
            return

        request['Last-Modified'] = http_date.build_http_date (mtime)
        request['Content-Length'] = file_length
        self.set_content_type (path, request)

        if request.command == 'GET':
            request.push (self.default_file_producer (file))

        self.file_counter.increment()
        request.done()

    def set_content_type (self, path, request):
        typ, encoding = mimetypes.guess_type(path)
        if typ is not None:
            request['Content-Type'] = typ
        else:
            # TODO: test a chunk off the front of the file for 8-bit
            # characters, and use application/octet-stream instead.
            request['Content-Type'] = 'text/plain'

    def status (self):
        return producers.simple_producer (
                '<li>%s' % html_repr (self)
                + '<ul>'
                + '  <li><b>Total Hits:</b> %s' % self.hit_counter
                + '  <li><b>Files Delivered:</b> %s' % self.file_counter
                + '  <li><b>Cache Hits:</b> %s' % self.cache_counter
                + '</ul>'
                )

# HTTP/1.0 doesn't say anything about the "; length=nnnn" addition
# to this header.  I suppose its purpose is to avoid the overhead
# of parsing dates...
IF_MODIFIED_SINCE = re.compile (
        'If-Modified-Since: ([^;]+)((; length=([0-9]+)$)|$)',
        re.IGNORECASE
        )

USER_AGENT = re.compile ('User-Agent: (.*)', re.IGNORECASE)

CONTENT_TYPE = re.compile (
        r'Content-Type: ([^;]+)((; boundary=([A-Za-z0-9\'\(\)+_,./:=?-]+)$)|$)',
        re.IGNORECASE
        )

get_header = http_server.get_header
get_header_match = http_server.get_header_match

def get_extension (path):
    dirsep = path.rfind('/')
    dotsep = path.rfind('.')
    if dotsep > dirsep:
        return path[dotsep+1:]
    else:
        return ''
# ---------------------------------------------------------------------------
# File: supervisor-4.2.5/supervisor/medusa/filesys.py
# ---------------------------------------------------------------------------
# -*- Mode: Python -*-

# $Id: filesys.py,v 1.9 2003/12/24 16:10:56 akuchling Exp $
# Author: Sam Rushing
#
# Generic filesystem interface.
#

# We want to provide a complete wrapper around any and all
# filesystem operations.

# this class is really just for documentation,
# identifying the API for a filesystem object.

# opening files for reading, and listing directories, should
# return a producer.

from supervisor.compat import long

class abstract_filesystem:
    def __init__ (self):
        pass

    def current_directory (self):
        """Return a string representing the current directory."""
        pass

    def listdir (self, path, long=0):
        """Return a listing of the directory at 'path'

        The empty string indicates the current directory.
        If 'long' is set, instead return a list of (name, stat_info)
        tuples
        """
        pass

    def open (self, path, mode):
        """Return an open file object"""
        pass

    def stat (self, path):
        """Return the equivalent of os.stat() on the given path."""
        pass

    def isdir (self, path):
        """Does the path represent a directory?"""
        pass

    def isfile (self, path):
        """Does the path represent a plain file?"""
        pass

    def cwd (self, path):
        """Change the working directory."""
        pass

    def cdup (self):
        """Change to the parent of the current directory."""
        pass

    def longify (self, path):
        """Return a 'long' representation of the filename
        [for the output of the LIST command]"""
        pass

# standard wrapper around a unix-like filesystem, with a 'false root'
# capability.

# security considerations: can symbolic links be used to 'escape' the
# root?  should we allow it?  if not, then we could scan the
# filesystem on startup, but that would not help if they were added
# later.  We will probably need to check for symlinks in the cwd method.

# what to do if wd is an invalid directory?

import os
import stat
import re

def safe_stat (path):
    try:
        return path, os.stat (path)
    except:
        return None

class os_filesystem:
    path_module = os.path

    # set this to zero if you want to disable pathname globbing.
    # [we currently don't glob, anyway]
    do_globbing = 1

    def __init__ (self, root, wd='/'):
        self.root = root
        self.wd = wd

    def current_directory (self):
        return self.wd

    def isfile (self, path):
        p = self.normalize (self.path_module.join (self.wd, path))
        return self.path_module.isfile (self.translate(p))

    def isdir (self, path):
        p = self.normalize (self.path_module.join (self.wd, path))
        return self.path_module.isdir (self.translate(p))

    def cwd (self, path):
        p = self.normalize (self.path_module.join (self.wd, path))
        translated_path = self.translate(p)
        if not self.path_module.isdir (translated_path):
            return 0
        else:
            old_dir = os.getcwd()
            # temporarily change to that directory, in order
            # to see if we have permission to do so.
            can = 0
            try:
                try:
                    os.chdir (translated_path)
                    can = 1
                    self.wd = p
                except:
                    pass
            finally:
                if can:
                    os.chdir (old_dir)
            return can

    def cdup (self):
        return self.cwd ('..')

    def listdir (self, path, long=0):
        p = self.translate (path)
        # I think we should glob, but limit it to the current
        # directory only.
        ld = os.listdir (p)
        if not long:
            return list_producer (ld, None)
        else:
            old_dir = os.getcwd()
            try:
                os.chdir (p)
                # if os.stat fails we ignore that file.
                result = [_f for _f in map (safe_stat, ld) if _f]
            finally:
                os.chdir (old_dir)
            return list_producer (result, self.longify)

    # TODO: implement a cache w/timeout for stat()
    def stat (self, path):
        p = self.translate (path)
        return os.stat (p)

    def open (self, path, mode):
        p = self.translate (path)
        return open (p, mode)

    def unlink (self, path):
        p = self.translate (path)
        return os.unlink (p)

    def mkdir (self, path):
        p = self.translate (path)
        return os.mkdir (p)

    def rmdir (self, path):
        p = self.translate (path)
        return os.rmdir (p)

    def rename(self, src, dst):
        return os.rename(self.translate(src), self.translate(dst))

    # utility methods

    def normalize (self, path):
        # watch for the ever-sneaky '/+' path element
        path = re.sub('/+', '/', path)
        p = self.path_module.normpath (path)
        # remove 'dangling' cdup's.
        if len(p) > 2 and p[:3] == '/..':
            p = '/'
        return p

    def translate (self, path):
        # we need to join together three separate
        # path components, and do it safely.
        # <real_root>/<current_directory>/<path>
        # use the operating system's path separator.
        path = os.sep.join(path.split('/'))
        p = self.normalize (self.path_module.join (self.wd, path))
        p = self.normalize (self.path_module.join (self.root, p[1:]))
        return p

    def longify (self, path_stat_info_tuple):
        (path, stat_info) = path_stat_info_tuple
        return unix_longify (path, stat_info)

    def __repr__ (self):
        return '<unix-style fs root:%s wd:%s>' % (
                self.root,
                self.wd
                )

if os.name == 'posix':

    class unix_filesystem (os_filesystem):
        pass

    class schizophrenic_unix_filesystem (os_filesystem):
        PROCESS_UID = os.getuid()
        PROCESS_EUID = os.geteuid()
        PROCESS_GID = os.getgid()
        PROCESS_EGID = os.getegid()

        def __init__ (self, root, wd='/', persona=(None, None)):
            os_filesystem.__init__ (self, root, wd)
            self.persona = persona

        def become_persona (self):
            if self.persona != (None, None):
                uid, gid = self.persona
                # the order of these is important!
                os.setegid (gid)
                os.seteuid (uid)

        def become_nobody (self):
            if self.persona != (None, None):
                os.seteuid (self.PROCESS_UID)
                os.setegid (self.PROCESS_GID)

        # cwd, cdup, open, listdir
        def cwd (self, path):
            try:
                self.become_persona()
                return os_filesystem.cwd (self, path)
            finally:
                self.become_nobody()

        def cdup (self):
            try:
                self.become_persona()
                return os_filesystem.cdup (self)
            finally:
                self.become_nobody()

        def open (self, filename, mode):
            try:
                self.become_persona()
                return os_filesystem.open (self, filename, mode)
            finally:
                self.become_nobody()

        def listdir (self, path, long=0):
            try:
                self.become_persona()
                return os_filesystem.listdir (self, path, long)
            finally:
                self.become_nobody()

# For the 'real' root, we could obtain a list of drives, and then
# use that.  Doesn't win32 provide such a 'real' filesystem?
# [yes, I think something like this "\\.\c\windows"]

class msdos_filesystem (os_filesystem):
    def longify (self, path_stat_info_tuple):
        (path, stat_info) = path_stat_info_tuple
        return msdos_longify (path, stat_info)

# A merged filesystem will let you plug other filesystems together.
# We really need the equivalent of a 'mount' capability - this seems
# to be the most general idea.  So you'd use a 'mount' method to place
# another filesystem somewhere in the hierarchy.

# Note: this is most likely how I will handle ~user directories
# with the http server.

class merged_filesystem:
    def __init__ (self, *fsys):
        pass

# this matches the output of NT's ftp server (when in
# MSDOS mode) exactly.

def msdos_longify (file, stat_info):
    if stat.S_ISDIR (stat_info[stat.ST_MODE]):
        dir = '<DIR>'
    else:
        dir = '     '
    date = msdos_date (stat_info[stat.ST_MTIME])
    return '%s %s %8d %s' % (
            date,
            dir,
            stat_info[stat.ST_SIZE],
            file
            )

def msdos_date (t):
    try:
        info = time.gmtime (t)
    except:
        info = time.gmtime (0)
    # year, month, day, hour, minute, second, ...
    hour = info[3]
    if hour > 11:
        merid = 'PM'
        hour -= 12
    else:
        merid = 'AM'
    return '%02d-%02d-%02d %02d:%02d%s' % (
            info[1],
            info[2],
            info[0]%100,
            hour,
            info[4],
            merid
            )

months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
          'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']

mode_table = {
        '0':'---',
        '1':'--x',
        '2':'-w-',
        '3':'-wx',
        '4':'r--',
        '5':'r-x',
        '6':'rw-',
        '7':'rwx'
        }

import time

def unix_longify (file, stat_info):
    # for now, only pay attention to the lower bits
    mode = ('%o' % stat_info[stat.ST_MODE])[-3:]
    mode = ''.join([mode_table[x] for x in mode])
    if stat.S_ISDIR (stat_info[stat.ST_MODE]):
        dirchar = 'd'
    else:
        dirchar = '-'
    date = ls_date (long(time.time()), stat_info[stat.ST_MTIME])
    return '%s%s %3d %-8d %-8d %8d %s %s' % (
            dirchar,
            mode,
            stat_info[stat.ST_NLINK],
            stat_info[stat.ST_UID],
            stat_info[stat.ST_GID],
            stat_info[stat.ST_SIZE],
            date,
            file
            )

# Emulate the unix 'ls' command's date field.
# it has two formats - if the date is more than 180
# days in the past, then it's like this:
# Oct 19  1995
# otherwise, it looks like this:
# Oct 19 17:33

def ls_date (now, t):
    try:
        info = time.gmtime (t)
    except:
        info = time.gmtime (0)
    # 15,600,000 == 86,400 * 180
    if (now - t) > 15600000:
        return '%s %2d %d' % (
                months[info[1]-1],
                info[2],
                info[0]
                )
    else:
        return '%s %2d %02d:%02d' % (
                months[info[1]-1],
                info[2],
                info[3],
                info[4]
                )

# ===========================================================================
# Producers
# ===========================================================================

class list_producer:
    def __init__ (self, list, func=None):
        self.list = list
        self.func = func

    # this should do a pushd/popd
    def more (self):
        if not self.list:
            return ''
        else:
            # do a few at a time
            bunch = self.list[:50]
            if self.func is not None:
                bunch = map (self.func, bunch)
            self.list = self.list[50:]
            return '\r\n'.join(bunch) + '\r\n'
# ---------------------------------------------------------------------------
# File: supervisor-4.2.5/supervisor/medusa/http_date.py
# ---------------------------------------------------------------------------
# -*- Mode: Python -*-

import re
import time

def concat (*args):
    return ''.join (args)

def join (seq, field=' '):
    return field.join (seq)

def group (s):
    return '(' + s + ')'

short_days = ['sun','mon','tue','wed','thu','fri','sat']
long_days = ['sunday','monday','tuesday','wednesday',
             'thursday','friday','saturday']

short_day_reg = group (join (short_days, '|'))
long_day_reg = group (join (long_days, '|'))

daymap = {}
for i in range(7):
    daymap[short_days[i]] = i
    daymap[long_days[i]] = i

hms_reg = join (3 * [group('[0-9][0-9]')], ':')

months = ['jan','feb','mar','apr','may','jun',
          'jul','aug','sep','oct','nov','dec']

monmap = {}
for i in range(12):
    monmap[months[i]] = i+1

months_reg = group (join (months, '|'))

# From draft-ietf-http-v11-spec-07.txt/3.3.1
#       Sun, 06 Nov 1994 08:49:37 GMT  ; RFC 822, updated by RFC 1123
#       Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
#       Sun Nov  6 08:49:37 1994       ; ANSI C's asctime() format

# rfc822 format
rfc822_date = join (
        [concat (short_day_reg, ','),   # day
         group('[0-9][0-9]?'),          # date
         months_reg,                    # month
         group('[0-9]+'),               # year
         hms_reg,                       # hour minute second
         'gmt'
         ],
        ' '
        )

rfc822_reg = re.compile (rfc822_date)

def unpack_rfc822(m):
    g = m.group
    i = int
    return (
            i(g(4)),            # year
            monmap[g(3)],       # month
            i(g(2)),            # day
            i(g(5)),            # hour
            i(g(6)),            # minute
            i(g(7)),            # second
            0,
            0,
            0
            )

# rfc850 format
rfc850_date = join (
        [concat (long_day_reg, ','),
         join (
                 [group ('[0-9][0-9]?'),
                  months_reg,
                  group ('[0-9]+')
                  ],
                 '-'
                 ),
         hms_reg,
         'gmt'
         ],
        ' '
        )

rfc850_reg = re.compile (rfc850_date)

# they actually unpack the same way
def unpack_rfc850(m):
    g = m.group
    i = int
    return (
            i(g(4)),            # year
            monmap[g(3)],       # month
            i(g(2)),            # day
            i(g(5)),            # hour
            i(g(6)),            # minute
            i(g(7)),            # second
            0,
            0,
            0
            )

# parsedate.parsedate - ~700/sec.
# parse_http_date     - ~1333/sec.

def build_http_date (when):
    return time.strftime ('%a, %d %b %Y %H:%M:%S GMT', time.gmtime(when))

def parse_http_date (d):
    d = d.lower()
    tz = time.timezone
    m = rfc850_reg.match (d)
    if m and m.end() == len(d):
        retval = int (time.mktime (unpack_rfc850(m)) - tz)
    else:
        m = rfc822_reg.match (d)
        if m and m.end() == len(d):
            retval = int (time.mktime (unpack_rfc822(m)) - tz)
        else:
            return 0
    # Thanks to Craig Silverstein for pointing
    # out the DST discrepancy
    if time.daylight and time.localtime(retval)[-1] == 1: # DST correction
        retval += tz - time.altzone
    return retval
# ---------------------------------------------------------------------------
# File: supervisor-4.2.5/supervisor/medusa/http_server.py
# ---------------------------------------------------------------------------
# -*- Mode: Python -*-
#
#       Author: Sam Rushing
#       Copyright 1996-2000 by Sam Rushing
#                                                All Rights Reserved.
# RCS_ID = '$Id: http_server.py,v 1.12 2004/04/21 15:11:44 akuchling Exp $' # python modules import re import socket import sys import time from supervisor.compat import as_bytes # async modules import supervisor.medusa.asyncore_25 as asyncore import supervisor.medusa.asynchat_25 as asynchat # medusa modules import supervisor.medusa.http_date as http_date import supervisor.medusa.producers as producers import supervisor.medusa.logger as logger VERSION_STRING = RCS_ID.split()[2] from supervisor.medusa.counter import counter try: from urllib import unquote, splitquery except ImportError: from urllib.parse import unquote, splitquery # =========================================================================== # Request Object # =========================================================================== class http_request: # default reply code reply_code = 200 request_counter = counter() # Whether to automatically use chunked encoding when # # HTTP version is 1.1 # Content-Length is not set # Chunked encoding is not already in effect # # If your clients are having trouble, you might want to disable this. use_chunked = 1 # by default, this request object ignores user data. 
collector = None def __init__ (self, *args): # unpack information about the request (self.channel, self.request, self.command, self.uri, self.version, self.header) = args self.outgoing = [] self.reply_headers = { 'Server' : 'Medusa/%s' % VERSION_STRING, 'Date' : http_date.build_http_date (time.time()) } # New reply header list (to support multiple # headers with same name) self.__reply_header_list = [] self.request_number = http_request.request_counter.increment() self._split_uri = None self._header_cache = {} # -------------------------------------------------- # reply header management # -------------------------------------------------- def __setitem__ (self, key, value): self.reply_headers[key] = value def __getitem__ (self, key): return self.reply_headers[key] def __contains__(self, key): return key in self.reply_headers def has_key (self, key): return key in self.reply_headers def build_reply_header (self): header_items = ['%s: %s' % item for item in self.reply_headers.items()] result = '\r\n'.join ( [self.response(self.reply_code)] + header_items) + '\r\n\r\n' return as_bytes(result) #################################################### # multiple reply header management #################################################### # These are intended for allowing multiple occurrences # of the same header. # Usually you can fold such headers together, separating # their contents by a comma (e.g. Accept: text/html, text/plain) # but the big exception is the Set-Cookie header. # dictionary centric. #--------------------------------------------------- def add_header(self, name, value): """ Adds a header to the reply headers """ self.__reply_header_list.append((name, value)) def clear_headers(self): """ Clears the reply header list """ # Remove things from the old dict as well self.reply_headers.clear() self.__reply_header_list[:] = [] def remove_header(self, name, value=None): """ Removes the specified header. 
If a value is provided, the name and value must match to remove the header. If the value is None, removes all headers with that name.""" found_it = 0 # Remove things from the old dict as well if (name in self.reply_headers and (value is None or self.reply_headers[name] == value)): del self.reply_headers[name] found_it = 1 removed_headers = [] if not value is None: if (name, value) in self.__reply_header_list: removed_headers = [(name, value)] found_it = 1 else: for h in self.__reply_header_list: if h[0] == name: removed_headers.append(h) found_it = 1 if not found_it: if value is None: search_value = "%s" % name else: search_value = "%s: %s" % (name, value) raise LookupError("Header '%s' not found" % search_value) for h in removed_headers: self.__reply_header_list.remove(h) def get_reply_headers(self): """ Get the tuple of headers that will be used for generating reply headers""" header_tuples = self.__reply_header_list[:] # The idea here is to insert the headers from # the old header dict into the new header list, # UNLESS there's already an entry in the list # that would have overwritten the dict entry # if the dict was the only storage... header_names = [n for n,v in header_tuples] for n,v in self.reply_headers.items(): if n not in header_names: header_tuples.append((n,v)) header_names.append(n) # Ok, that should do it. Now, if there were any # headers in the dict that weren't in the list, # they should have been copied in. If the name # was already in the list, we didn't copy it, # because the value from the dict has been # 'overwritten' by the one in the list. 
return header_tuples def get_reply_header_text(self): """ Gets the reply header (including status and additional crlf)""" header_tuples = self.get_reply_headers() headers = [self.response(self.reply_code)] headers += ["%s: %s" % h for h in header_tuples] return '\r\n'.join(headers) + '\r\n\r\n' #--------------------------------------------------- # This is the end of the new reply header # management section. #################################################### # -------------------------------------------------- # split a uri # -------------------------------------------------- # ;?# path_regex = re.compile ( # path params query fragment r'([^;?#]*)(;[^?#]*)?(\?[^#]*)?(#.*)?' ) def split_uri (self): if self._split_uri is None: m = self.path_regex.match (self.uri) if m.end() != len(self.uri): raise ValueError("Broken URI") else: self._split_uri = m.groups() return self._split_uri def get_header_with_regex (self, head_reg, group): for line in self.header: m = head_reg.match (line) if m.end() == len(line): return m.group (group) return '' def get_header (self, header): header = header.lower() hc = self._header_cache if header not in hc: h = header + ': ' hl = len(h) for line in self.header: if line[:hl].lower() == h: r = line[hl:] hc[header] = r return r hc[header] = None return None else: return hc[header] # -------------------------------------------------- # user data # -------------------------------------------------- def collect_incoming_data (self, data): if self.collector: self.collector.collect_incoming_data (data) else: self.log_info( 'Dropping %d bytes of incoming request data' % len(data), 'warning' ) def found_terminator (self): if self.collector: self.collector.found_terminator() else: self.log_info ( 'Unexpected end-of-record for incoming request', 'warning' ) def push (self, thing): # Sometimes, text gets pushed by XMLRPC logic for later # processing. 
if isinstance(thing, str): thing = as_bytes(thing) if isinstance(thing, bytes): thing = producers.simple_producer(thing, buffer_size=len(thing)) self.outgoing.append(thing) def response (self, code=200): message = self.responses[code] self.reply_code = code return 'HTTP/%s %d %s' % (self.version, code, message) def error (self, code): self.reply_code = code message = self.responses[code] s = self.DEFAULT_ERROR_MESSAGE % { 'code': code, 'message': message, } s = as_bytes(s) self['Content-Length'] = len(s) self['Content-Type'] = 'text/html' # make an error reply self.push(s) self.done() # can also be used for empty replies reply_now = error def done (self): """finalize this transaction - send output to the http channel""" # ---------------------------------------- # persistent connection management # ---------------------------------------- # --- BUCKLE UP! ---- connection = get_header(CONNECTION, self.header).lower() close_it = 0 wrap_in_chunking = 0 if self.version == '1.0': if connection == 'keep-alive': if 'Content-Length' not in self: close_it = 1 else: self['Connection'] = 'Keep-Alive' else: close_it = 1 elif self.version == '1.1': if connection == 'close': close_it = 1 elif 'Content-Length' not in self: if 'Transfer-Encoding' in self: if not self['Transfer-Encoding'] == 'chunked': close_it = 1 elif self.use_chunked: self['Transfer-Encoding'] = 'chunked' wrap_in_chunking = 1 else: close_it = 1 elif self.version is None: # Although we don't *really* support http/0.9 (because we'd have to # use \r\n as a terminator, and it would just yuck up a lot of stuff) # it's very common for developers to not want to type a version number # when using telnet to debug a server. 
close_it = 1 outgoing_header = producers.simple_producer(self.get_reply_header_text()) if close_it: self['Connection'] = 'close' if wrap_in_chunking: outgoing_producer = producers.chunked_producer ( producers.composite_producer (self.outgoing) ) # prepend the header outgoing_producer = producers.composite_producer( [outgoing_header, outgoing_producer] ) else: # prepend the header self.outgoing.insert(0, outgoing_header) outgoing_producer = producers.composite_producer (self.outgoing) # apply a few final transformations to the output self.channel.push_with_producer ( # globbing gives us large packets producers.globbing_producer ( # hooking lets us log the number of bytes sent producers.hooked_producer ( outgoing_producer, self.log ) ) ) self.channel.current_request = None if close_it: self.channel.close_when_done() def log_date_string (self, when): gmt = time.gmtime(when) if time.daylight and gmt[8]: tz = time.altzone else: tz = time.timezone if tz > 0: neg = 1 else: neg = 0 tz = -tz h, rem = divmod (tz, 3600) m, rem = divmod (rem, 60) if neg: offset = '-%02d%02d' % (h, m) else: offset = '+%02d%02d' % (h, m) return time.strftime ( '%d/%b/%Y:%H:%M:%S ', gmt) + offset def log (self, bytes): self.channel.server.logger.log ( self.channel.addr[0], '%d - - [%s] "%s" %d %d\n' % ( self.channel.addr[1], self.log_date_string (time.time()), self.request, self.reply_code, bytes ) ) responses = { 100: "Continue", 101: "Switching Protocols", 200: "OK", 201: "Created", 202: "Accepted", 203: "Non-Authoritative Information", 204: "No Content", 205: "Reset Content", 206: "Partial Content", 300: "Multiple Choices", 301: "Moved Permanently", 302: "Moved Temporarily", 303: "See Other", 304: "Not Modified", 305: "Use Proxy", 400: "Bad Request", 401: "Unauthorized", 402: "Payment Required", 403: "Forbidden", 404: "Not Found", 405: "Method Not Allowed", 406: "Not Acceptable", 407: "Proxy Authentication Required", 408: "Request Time-out", 409: "Conflict", 410: "Gone", 411: "Length 
Required",
                  412: "Precondition Failed",
                  413: "Request Entity Too Large",
                  414: "Request-URI Too Large",
                  415: "Unsupported Media Type",
                  500: "Internal Server Error",
                  501: "Not Implemented",
                  502: "Bad Gateway",
                  503: "Service Unavailable",
                  504: "Gateway Time-out",
                  505: "HTTP Version not supported"
                  }

    # Default error message
    DEFAULT_ERROR_MESSAGE = '\r\n'.join(
        ('<head>',
         '<title>Error response</title>',
         '</head>',
         '<body>',
         '<h1>Error response</h1>',
         '<p>Error code %(code)d.',
         '<p>Message: %(message)s.',
         '</body>',
         ''
         )
    )

    def log_info(self, msg, level):
        pass

# ===========================================================================
#                          HTTP Channel Object
# ===========================================================================

class http_channel (asynchat.async_chat):

    # use a larger default output buffer
    ac_out_buffer_size = 1<<16

    current_request = None
    channel_counter = counter()

    def __init__ (self, server, conn, addr):
        self.channel_number = http_channel.channel_counter.increment()
        self.request_counter = counter()
        asynchat.async_chat.__init__ (self, conn)
        self.server = server
        self.addr = addr
        self.set_terminator (b'\r\n\r\n')
        self.in_buffer = b''
        self.creation_time = int (time.time())
        self.last_used = self.creation_time
        self.check_maintenance()

    def __repr__ (self):
        ar = asynchat.async_chat.__repr__(self)[1:-1]
        return '<%s channel#: %s requests:%s>' % (
            ar,
            self.channel_number,
            self.request_counter
        )

    # Channel Counter, Maintenance Interval...
    maintenance_interval = 500

    def check_maintenance (self):
        if not self.channel_number % self.maintenance_interval:
            self.maintenance()

    def maintenance (self):
        self.kill_zombies()

    # 30-minute zombie timeout.  status_handler also knows how to kill zombies.
    zombie_timeout = 30 * 60

    def kill_zombies (self):
        now = int (time.time())
        for channel in list(asyncore.socket_map.values()):
            if channel.__class__ == self.__class__:
                if (now - channel.last_used) > channel.zombie_timeout:
                    channel.close()

    # --------------------------------------------------
    # send/recv overrides, good place for instrumentation.
    # --------------------------------------------------

    # this information needs to get into the request object,
    # so that it may log correctly.
def send (self, data): result = asynchat.async_chat.send (self, data) self.server.bytes_out.increment (len(data)) self.last_used = int (time.time()) return result def recv (self, buffer_size): try: result = asynchat.async_chat.recv (self, buffer_size) self.server.bytes_in.increment (len(result)) self.last_used = int (time.time()) return result except MemoryError: # --- Save a Trip to Your Service Provider --- # It's possible for a process to eat up all the memory of # the machine, and put it in an extremely wedged state, # where medusa keeps running and can't be shut down. This # is where MemoryError tends to get thrown, though of # course it could get thrown elsewhere. sys.exit ("Out of Memory!") def handle_error (self): t, v = sys.exc_info()[:2] if t is SystemExit: raise t(v) else: asynchat.async_chat.handle_error (self) def log (self, *args): pass # -------------------------------------------------- # async_chat methods # -------------------------------------------------- def collect_incoming_data (self, data): if self.current_request: # we are receiving data (probably POST data) for a request self.current_request.collect_incoming_data (data) else: # we are receiving header (request) data self.in_buffer = self.in_buffer + data def found_terminator (self): if self.current_request: self.current_request.found_terminator() else: header = self.in_buffer self.in_buffer = b'' lines = header.split(b'\r\n') # -------------------------------------------------- # crack the request header # -------------------------------------------------- while lines and not lines[0]: # as per the suggestion of http-1.1 section 4.1, (and # Eric Parker ), ignore a leading # blank lines (buggy browsers tack it onto the end of # POST requests) lines = lines[1:] if not lines: self.close_when_done() return request = lines[0] command, uri, version = crack_request (request) header = join_headers (lines[1:]) # unquote path if necessary (thanks to Skip Montanaro for pointing # out that we must 
unquote in piecemeal fashion). rpath, rquery = splitquery(uri) if '%' in rpath: if rquery: uri = unquote (rpath) + '?' + rquery else: uri = unquote (rpath) r = http_request (self, request, command, uri, version, header) self.request_counter.increment() self.server.total_requests.increment() if command is None: self.log_info ('Bad HTTP request: %s' % repr(request), 'error') r.error (400) return # -------------------------------------------------- # handler selection and dispatch # -------------------------------------------------- for h in self.server.handlers: if h.match (r): try: self.current_request = r # This isn't used anywhere. # r.handler = h # CYCLE h.handle_request (r) except: self.server.exceptions.increment() (file, fun, line), t, v, tbinfo = asyncore.compact_traceback() self.log_info( 'Server Error: %s, %s: file: %s line: %s' % (t,v,file,line), 'error') try: r.error (500) except: pass return # no handlers, so complain r.error (404) def writable_for_proxy (self): # this version of writable supports the idea of a 'stalled' producer # [i.e., it's not ready to produce any output yet] This is needed by # the proxy, which will be waiting for the magic combination of # 1) hostname resolved # 2) connection made # 3) data available. 
if self.ac_out_buffer: return 1 elif len(self.producer_fifo): p = self.producer_fifo.first() if hasattr (p, 'stalled'): return not p.stalled() else: return 1 # =========================================================================== # HTTP Server Object # =========================================================================== class http_server (asyncore.dispatcher): SERVER_IDENT = 'HTTP Server (V%s)' % VERSION_STRING channel_class = http_channel def __init__ (self, ip, port, resolver=None, logger_object=None): self.ip = ip self.port = port asyncore.dispatcher.__init__ (self) self.create_socket (socket.AF_INET, socket.SOCK_STREAM) self.handlers = [] if not logger_object: logger_object = logger.file_logger (sys.stdout) self.set_reuse_addr() self.bind ((ip, port)) # lower this to 5 if your OS complains self.listen (1024) host, port = self.socket.getsockname() if not ip: self.log_info('Computing default hostname', 'warning') ip = socket.gethostbyname (socket.gethostname()) try: self.server_name = socket.gethostbyaddr (ip)[0] except socket.error: self.log_info('Cannot do reverse lookup', 'warning') self.server_name = ip # use the IP address as the "hostname" self.server_port = port self.total_clients = counter() self.total_requests = counter() self.exceptions = counter() self.bytes_out = counter() self.bytes_in = counter() if not logger_object: logger_object = logger.file_logger (sys.stdout) if resolver: self.logger = logger.resolving_logger (resolver, logger_object) else: self.logger = logger.unresolving_logger (logger_object) self.log_info ( 'Medusa (V%s) started at %s' '\n\tHostname: %s' '\n\tPort:%d' '\n' % ( VERSION_STRING, time.ctime(time.time()), self.server_name, port, ) ) def writable (self): return 0 def handle_read (self): pass def readable (self): return self.accepting def handle_connect (self): pass def handle_accept (self): self.total_clients.increment() try: conn, addr = self.accept() except socket.error: # linux: on rare occasions we get a bogus 
socket back from # accept. socketmodule.c:makesockaddr complains that the # address family is unknown. We don't want the whole server # to shut down because of this. self.log_info ('warning: server accept() threw an exception', 'warning') return except TypeError: # unpack non-sequence. this can happen when a read event # fires on a listening socket, but when we call accept() # we get EWOULDBLOCK, so dispatcher.accept() returns None. # Seen on FreeBSD3. self.log_info ('warning: server accept() threw EWOULDBLOCK', 'warning') return self.channel_class (self, conn, addr) def install_handler (self, handler, back=0): if back: self.handlers.append (handler) else: self.handlers.insert (0, handler) def remove_handler (self, handler): self.handlers.remove (handler) def status (self): from supervisor.medusa.util import english_bytes def nice_bytes (n): return ''.join(english_bytes (n)) handler_stats = [_f for _f in map (maybe_status, self.handlers) if _f] if self.total_clients: ratio = self.total_requests.as_long() / float(self.total_clients.as_long()) else: ratio = 0.0 return producers.composite_producer ( [producers.lines_producer ( ['
<h2>%s</h2>' % self.SERVER_IDENT,
                 '<br>Listening on: <b>Host:</b> %s' % self.server_name,
                 '<b>Port:</b> %d' % self.port,
                 '<p><ul>'
                 '<li>Total <b>Clients:</b> %s' % self.total_clients,
                 '<b>Requests:</b> %s' % self.total_requests,
                 '<b>Requests/Client:</b> %.1f' % ratio,
                 '<li>Total <b>Bytes In:</b> %s' % (nice_bytes (self.bytes_in.as_long())),
                 '<b>Bytes Out:</b> %s' % (nice_bytes (self.bytes_out.as_long())),
                 '<li>Total <b>Exceptions:</b> %s' % self.exceptions,
                 '</ul><p>'
                 '<b>Extension List</b><ul>',
                 ])] + handler_stats + [producers.simple_producer('</ul>
    ')] ) def maybe_status (thing): if hasattr (thing, 'status'): return thing.status() else: return None CONNECTION = re.compile ('Connection: (.*)', re.IGNORECASE) # merge multi-line headers # [486dx2: ~500/sec] def join_headers (headers): r = [] for i in range(len(headers)): if headers[i][0] in ' \t': r[-1] = r[-1] + headers[i][1:] else: r.append (headers[i]) return r def get_header (head_reg, lines, group=1): for line in lines: m = head_reg.match (line) if m and m.end() == len(line): return m.group (group) return '' def get_header_match (head_reg, lines): for line in lines: m = head_reg.match (line) if m and m.end() == len(line): return m return '' REQUEST = re.compile ('([^ ]+) ([^ ]+)(( HTTP/([0-9.]+))$|$)') def crack_request (r): m = REQUEST.match (r) if m and m.end() == len(r): if m.group(3): version = m.group(5) else: version = None return m.group(1), m.group(2), version else: return None, None, None if __name__ == '__main__': if len(sys.argv) < 2: print('usage: %s ' % (sys.argv[0])) else: import supervisor.medusa.monitor as monitor import supervisor.medusa.filesys as filesys import supervisor.medusa.default_handler as default_handler import supervisor.medusa.ftp_server as ftp_server import supervisor.medusa.chat_server as chat_server import supervisor.medusa.resolver as resolver rs = resolver.caching_resolver ('127.0.0.1') lg = logger.file_logger (sys.stdout) ms = monitor.secure_monitor_server ('fnord', '127.0.0.1', 9999) fs = filesys.os_filesystem (sys.argv[1]) dh = default_handler.default_handler (fs) hs = http_server('', int(sys.argv[2]), rs, lg) hs.install_handler (dh) ftp = ftp_server.ftp_server ( ftp_server.dummy_authorizer(sys.argv[1]), port=8021, resolver=rs, logger_object=lg ) cs = chat_server.chat_server ('', 7777) if '-p' in sys.argv: def profile_loop (): try: asyncore.loop() except KeyboardInterrupt: pass import profile profile.run ('profile_loop()', 'profile.out') else: asyncore.loop() 
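# As a quick check of the request-line parser defined above, the following
# standalone sketch copies the REQUEST regex and crack_request logic out of
# http_server.py and exercises it with and without an HTTP-version token.

```python
import re

# Same pattern http_server.py uses to crack the HTTP request line.
REQUEST = re.compile('([^ ]+) ([^ ]+)(( HTTP/([0-9.]+))$|$)')

def crack_request(r):
    m = REQUEST.match(r)
    if m and m.end() == len(r):
        # group(3) is empty for a version-less (HTTP/0.9-style) request line
        version = m.group(5) if m.group(3) else None
        return m.group(1), m.group(2), version
    return None, None, None

assert crack_request('GET /index.html HTTP/1.1') == ('GET', '/index.html', '1.1')
assert crack_request('GET /index.html') == ('GET', '/index.html', None)
assert crack_request('no-uri-here') == (None, None, None)
```

# A bad request line yields command=None, which found_terminator() above
# turns into a 400 response.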
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/medusa/logger.py0000644000076500000240000001564314340177153021037 0ustar00mnaberezstaff# -*- Mode: Python -*- import supervisor.medusa.asynchat_25 as asynchat import socket import time # these three are for the rotating logger import os # | import stat # v # # two types of log: # 1) file # with optional flushing. Also, one that rotates the log. # 2) socket # dump output directly to a socket connection. [how do we # keep it open?] # # The 'standard' interface to a logging object is simply # log_object.log (message) # # a file-like object that captures output, and # makes sure to flush it always... this could # be connected to: # o stdio file # o low-level file # o socket channel # o syslog output... class file_logger: # pass this either a path or a file object. def __init__ (self, file, flush=1, mode='a'): if isinstance(file, str): if file == '-': import sys self.file = sys.stdout else: self.file = open (file, mode) else: self.file = file self.do_flush = flush def __repr__ (self): return '' % self.file def write (self, data): self.file.write (data) self.maybe_flush() def writeline (self, line): self.file.writeline (line) self.maybe_flush() def writelines (self, lines): self.file.writelines (lines) self.maybe_flush() def maybe_flush (self): if self.do_flush: self.file.flush() def flush (self): self.file.flush() def softspace (self, *args): pass def log (self, message): if message[-1] not in ('\r', '\n'): self.write (message + '\n') else: self.write (message) # like a file_logger, but it must be attached to a filename. # When the log gets too full, or a certain time has passed, # it backs up the log and starts a new one. Note that backing # up the log is done via "mv" because anything else (cp, gzip) # would take time, during which medusa would do nothing else. 
class rotating_file_logger (file_logger): # If freq is non-None we back up "daily", "weekly", or "monthly". # Else if maxsize is non-None we back up whenever the log gets # to big. If both are None we never back up. def __init__ (self, file, freq=None, maxsize=None, flush=1, mode='a'): file_logger.__init__ (self, file, flush, mode) self.filename = file self.mode = mode self.freq = freq self.maxsize = maxsize self.rotate_when = self.next_backup(self.freq) def __repr__ (self): return '' % self.file # We back up at midnight every 1) day, 2) monday, or 3) 1st of month def next_backup (self, freq): (yr, mo, day, hr, min, sec, wd, jday, dst) = time.localtime(time.time()) if freq == 'daily': return time.mktime((yr,mo,day+1, 0,0,0, 0,0,-1)) elif freq == 'weekly': return time.mktime((yr,mo,day-wd+7, 0,0,0, 0,0,-1)) # wd(monday)==0 elif freq == 'monthly': return time.mktime((yr,mo+1,1, 0,0,0, 0,0,-1)) else: return None # not a date-based backup def maybe_flush (self): # rotate first if necessary self.maybe_rotate() if self.do_flush: # from file_logger() self.file.flush() def maybe_rotate (self): if self.freq and time.time() > self.rotate_when: self.rotate() self.rotate_when = self.next_backup(self.freq) elif self.maxsize: # rotate when we get too big try: if os.stat(self.filename)[stat.ST_SIZE] > self.maxsize: self.rotate() except os.error: # file not found, probably self.rotate() # will create a new file def rotate (self): (yr, mo, day, hr, min, sec, wd, jday, dst) = time.localtime(time.time()) try: self.file.close() newname = '%s.ends%04d%02d%02d' % (self.filename, yr, mo, day) try: open(newname, "r").close() # check if file exists newname += "-%02d%02d%02d" % (hr, min, sec) except: # YEAR_MONTH_DAY is unique pass os.rename(self.filename, newname) self.file = open(self.filename, self.mode) except: pass # log to a stream socket, asynchronously class socket_logger (asynchat.async_chat): def __init__ (self, address): asynchat.async_chat.__init__(self) if isinstance(address, 
str): self.create_socket (socket.AF_UNIX, socket.SOCK_STREAM) else: self.create_socket (socket.AF_INET, socket.SOCK_STREAM) self.connect (address) self.address = address def __repr__ (self): return '' % self.address def log (self, message): if message[-2:] != '\r\n': self.socket.push (message + '\r\n') else: self.socket.push (message) # log to multiple places class multi_logger: def __init__ (self, loggers): self.loggers = loggers def __repr__ (self): return '' % (repr(self.loggers)) def log (self, message): for logger in self.loggers: logger.log (message) class resolving_logger: """Feed (ip, message) combinations into this logger to get a resolved hostname in front of the message. The message will not be logged until the PTR request finishes (or fails).""" def __init__ (self, resolver, logger): self.resolver = resolver self.logger = logger class logger_thunk: def __init__ (self, message, logger): self.message = message self.logger = logger def __call__ (self, host, ttl, answer): if not answer: answer = host self.logger.log ('%s:%s' % (answer, self.message)) def log (self, ip, message): self.resolver.resolve_ptr ( ip, self.logger_thunk ( message, self.logger ) ) class unresolving_logger: """Just in case you don't want to resolve""" def __init__ (self, logger): self.logger = logger def log (self, ip, message): self.logger.log ('%s:%s' % (ip, message)) def strip_eol (line): while line and line[-1] in '\r\n': line = line[:-1] return line class tail_logger: """Keep track of the last log messages""" def __init__ (self, logger, size=500): self.size = size self.logger = logger self.messages = [] def log (self, message): self.messages.append (strip_eol (message)) if len (self.messages) > self.size: del self.messages[0] self.logger.log (message) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671840025.0 supervisor-4.2.5/supervisor/medusa/producers.py0000644000076500000240000002144114351440431021551 0ustar00mnaberezstaff# -*- Mode: Python 
-*- RCS_ID = '$Id: producers.py,v 1.9 2004/04/21 13:56:28 akuchling Exp $' """ A collection of producers. Each producer implements a particular feature: They can be combined in various ways to get interesting and useful behaviors. For example, you can feed dynamically-produced output into the compressing producer, then wrap this with the 'chunked' transfer-encoding producer. """ from supervisor.medusa.asynchat_25 import find_prefix_at_end from supervisor.compat import as_bytes class simple_producer: """producer for a string""" def __init__ (self, data, buffer_size=1024): self.data = data self.buffer_size = buffer_size def more (self): if len (self.data) > self.buffer_size: result = self.data[:self.buffer_size] self.data = self.data[self.buffer_size:] return result else: result = self.data self.data = b'' return result class scanning_producer: """like simple_producer, but more efficient for large strings""" def __init__ (self, data, buffer_size=1024): self.data = data self.buffer_size = buffer_size self.pos = 0 def more (self): if self.pos < len(self.data): lp = self.pos rp = min ( len(self.data), self.pos + self.buffer_size ) result = self.data[lp:rp] self.pos += len(result) return result else: return b'' class lines_producer: """producer for a list of lines""" def __init__ (self, lines): self.lines = lines def more (self): if self.lines: chunk = self.lines[:50] self.lines = self.lines[50:] return '\r\n'.join(chunk) + '\r\n' else: return '' class buffer_list_producer: """producer for a list of strings""" # i.e., data == ''.join(buffers) def __init__ (self, buffers): self.index = 0 self.buffers = buffers def more (self): if self.index >= len(self.buffers): return b'' else: data = self.buffers[self.index] self.index += 1 return data class file_producer: """producer wrapper for file[-like] objects""" # match http_channel's outgoing buffer size out_buffer_size = 1<<16 def __init__ (self, file): self.done = 0 self.file = file def more (self): if self.done: return b'' 
else: data = self.file.read (self.out_buffer_size) if not data: self.file.close() del self.file self.done = 1 return b'' else: return data # A simple output producer. This one does not [yet] have # the safety feature builtin to the monitor channel: runaway # output will not be caught. # don't try to print from within any of the methods # of this object. class output_producer: """Acts like an output file; suitable for capturing sys.stdout""" def __init__ (self): self.data = b'' def write (self, data): lines = data.split('\n') data = '\r\n'.join(lines) self.data += data def writeline (self, line): self.data = self.data + line + '\r\n' def writelines (self, lines): self.data = self.data + '\r\n'.join(lines) + '\r\n' def flush (self): pass def softspace (self, *args): pass def more (self): if self.data: result = self.data[:512] self.data = self.data[512:] return result else: return '' class composite_producer: """combine a fifo of producers into one""" def __init__ (self, producers): self.producers = producers def more (self): while len(self.producers): p = self.producers[0] d = p.more() if d: return d else: self.producers.pop(0) else: return b'' class globbing_producer: """ 'glob' the output from a producer into a particular buffer size. helps reduce the number of calls to send(). [this appears to gain about 30% performance on requests to a single channel] """ def __init__ (self, producer, buffer_size=1<<16): self.producer = producer self.buffer = b'' self.buffer_size = buffer_size def more (self): while len(self.buffer) < self.buffer_size: data = self.producer.more() if data: self.buffer = self.buffer + data else: break r = self.buffer self.buffer = b'' return r class hooked_producer: """ A producer that will call when it empties,. with an argument of the number of bytes produced. Useful for logging/instrumentation purposes. 
""" def __init__ (self, producer, function): self.producer = producer self.function = function self.bytes = 0 def more (self): if self.producer: result = self.producer.more() if not result: self.producer = None self.function (self.bytes) else: self.bytes += len(result) return result else: return '' # HTTP 1.1 emphasizes that an advertised Content-Length header MUST be # correct. In the face of Strange Files, it is conceivable that # reading a 'file' may produce an amount of data not matching that # reported by os.stat() [text/binary mode issues, perhaps the file is # being appended to, etc..] This makes the chunked encoding a True # Blessing, and it really ought to be used even with normal files. # How beautifully it blends with the concept of the producer. class chunked_producer: """A producer that implements the 'chunked' transfer coding for HTTP/1.1. Here is a sample usage: request['Transfer-Encoding'] = 'chunked' request.push ( producers.chunked_producer (your_producer) ) request.done() """ def __init__ (self, producer, footers=None): self.producer = producer self.footers = footers def more (self): if self.producer: data = self.producer.more() if data: s = '%x' % len(data) return as_bytes(s) + b'\r\n' + data + b'\r\n' else: self.producer = None if self.footers: return b'\r\n'.join([b'0'] + self.footers) + b'\r\n\r\n' else: return b'0\r\n\r\n' else: return b'' try: import zlib except ImportError: zlib = None class compressed_producer: """ Compress another producer on-the-fly, using ZLIB """ # Note: It's not very efficient to have the server repeatedly # compressing your outgoing files: compress them ahead of time, or # use a compress-once-and-store scheme. However, if you have low # bandwidth and low traffic, this may make more sense than # maintaining your source files compressed. # # Can also be used for compressing dynamically-produced output. 
    def __init__(self, producer, level=5):
        self.producer = producer
        self.compressor = zlib.compressobj(level)

    def more(self):
        if self.producer:
            cdata = b''
            # feed until we get some output
            while not cdata:
                data = self.producer.more()
                if not data:
                    self.producer = None
                    return self.compressor.flush()
                else:
                    cdata = self.compressor.compress(data)
            return cdata
        else:
            return b''

class escaping_producer:
    """A producer that escapes a sequence of characters"""
    # Common usage: escaping the CRLF.CRLF sequence in SMTP, NNTP, etc...
    def __init__(self, producer, esc_from=b'\r\n.', esc_to=b'\r\n..'):
        # esc_from/esc_to must be bytes: more() concatenates and
        # replaces them against the byte chunks the producer returns
        self.producer = producer
        self.esc_from = esc_from
        self.esc_to = esc_to
        self.buffer = b''
        self.find_prefix_at_end = find_prefix_at_end

    def more(self):
        esc_from = self.esc_from
        esc_to = self.esc_to

        buffer = self.buffer + self.producer.more()

        if buffer:
            buffer = buffer.replace(esc_from, esc_to)
            i = self.find_prefix_at_end(buffer, esc_from)
            if i:
                # we found a prefix
                self.buffer = buffer[-i:]
                return buffer[:-i]
            else:
                # no prefix, return it all
                self.buffer = b''
                return buffer
        else:
            return buffer

# ===========================================================================
# File: supervisor-4.2.5/supervisor/medusa/util.py
# ===========================================================================

from supervisor.compat import escape

def html_repr(object):
    so = escape(repr(object))
    if hasattr(object, 'hyper_respond'):
        # the anchor markup was lost in extraction; restored from
        # medusa's util.py
        return '<a href="/status/object/%d/">%s</a>' % (id(object), so)
    else:
        return so

# for example, tera, giga, mega, kilo
# p_d (n, (1024, 1024, 1024, 1024))
# smallest divider goes first - for example
# minutes, hours, days
# p_d (n, (60, 60, 24))

def progressive_divide(n, parts):
    result = []
    for part in parts:
        n, rem = divmod(n, part)
        result.append(rem)
    result.append(n)
    return result

# b,k,m,g,t
def split_by_units(n, units, dividers, format_string):
    divs = progressive_divide(n, dividers)
    result = []
    for i in range(len(units)):
        if divs[i]:
            result.append(format_string % (divs[i], units[i]))
    result.reverse()
    if not result:
        return [format_string % (0, units[0])]
    else:
        return result

def english_bytes(n):
    return split_by_units(
        n,
        ('', 'K', 'M', 'G', 'T'),
        (1024, 1024, 1024, 1024, 1024),
        '%d %sB'
    )

def english_time(n):
    return split_by_units(
        n,
        ('secs', 'mins', 'hours', 'days', 'weeks', 'years'),
        (60, 60, 24, 7, 52),
        '%d %s'
    )

# ===========================================================================
# File: supervisor-4.2.5/supervisor/medusa/xmlrpc_handler.py
# ===========================================================================

# -*- Mode: Python -*-

# See http://www.xml-rpc.com/
#     http://www.pythonware.com/products/xmlrpc/

# Based on "xmlrpcserver.py" by Fredrik Lundh (fredrik@pythonware.com)

VERSION = "$Id: xmlrpc_handler.py,v 1.6 2004/04/21 14:09:24 akuchling Exp $"

from supervisor.compat import as_string

import supervisor.medusa.http_server as http_server
try:
    import xmlrpclib
except ImportError:
    import xmlrpc.client as xmlrpclib

import sys

class xmlrpc_handler:

    def match(self, request):
        # Note: /RPC2 is not required by the spec, so you may override
        # this method.
        if request.uri[:5] == '/RPC2':
            return 1
        else:
            return 0

    def handle_request(self, request):
        if request.command == 'POST':
            request.collector = collector(self, request)
        else:
            request.error(400)

    def continue_request(self, data, request):
        params, method = xmlrpclib.loads(data)
        try:
            # generate response
            try:
                response = self.call(method, params)
                if type(response) != type(()):
                    response = (response,)
            except:
                # report exception back to server
                response = xmlrpclib.dumps(
                    xmlrpclib.Fault(1, "%s:%s" % (sys.exc_info()[0],
                                                  sys.exc_info()[1]))
                )
            else:
                response = xmlrpclib.dumps(response, methodresponse=1)
        except:
            # internal error, report as HTTP server error
            request.error(500)
        else:
            # got a valid XML RPC response
            request['Content-Type'] = 'text/xml'
            request.push(response)
            request.done()

    def call(self, method, params):
        # override this method to implement RPC methods
        raise Exception("NotYetImplemented")

class collector:
    """gathers input for POST and PUT requests"""

    def __init__(self, handler, request):
        self.handler = handler
        self.request = request
        self.data = []

        # make sure there's a content-length header
        cl = request.get_header('content-length')

        if not cl:
            request.error(411)
        else:
            cl = int(cl)
            # using a 'numeric' terminator
            self.request.channel.set_terminator(cl)

    def collect_incoming_data(self, data):
        self.data.append(data)

    def found_terminator(self):
        # set the terminator back to the default
        self.request.channel.set_terminator(b'\r\n\r\n')
        # convert the data back to text for processing
        data = as_string(b''.join(self.data))
        self.handler.continue_request(data, self.request)

if __name__ == '__main__':

    class rpc_demo(xmlrpc_handler):

        def call(self, method, params):
            print('method="%s" params=%s' % (method, params))
            return "Sure, that works"

    import supervisor.medusa.asyncore_25 as asyncore

    hs = http_server.http_server('', 8000)
    rpc = rpc_demo()
    hs.install_handler(rpc)
    asyncore.loop()
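`continue_request()` above decodes a POST body with `xmlrpclib.loads()` and encodes the reply with `xmlrpclib.dumps(..., methodresponse=1)`. A round trip through those two standard-library calls looks like this (the `sample.add` method name is made up for illustration):

```python
import xmlrpc.client as xmlrpclib

# what a client POSTs to the handler
payload = xmlrpclib.dumps((2, 3), 'sample.add')

# what the handler sees after loads()
params, method = xmlrpclib.loads(payload)
assert method == 'sample.add' and params == (2, 3)

# what the handler sends back; loads() on a response yields (params, None)
response = xmlrpclib.dumps((params[0] + params[1],), methodresponse=1)
(result,), _ = xmlrpclib.loads(response)
print(result)  # 5
```

This is also why `continue_request()` wraps single return values into a one-tuple before calling `dumps()`: the function only accepts a tuple of parameters.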
# ===========================================================================
# File: supervisor-4.2.5/supervisor/options.py
# ===========================================================================

import socket
import getopt
import os
import sys
import tempfile
import errno
import signal
import re
import pwd
import grp
import resource
import stat
import pkg_resources
import glob
import platform
import warnings
import fcntl

from supervisor.compat import PY2
from supervisor.compat import ConfigParser
from supervisor.compat import as_bytes, as_string
from supervisor.compat import xmlrpclib
from supervisor.compat import StringIO
from supervisor.compat import basestring

from supervisor.medusa import asyncore_25 as asyncore

from supervisor.datatypes import process_or_group_name
from supervisor.datatypes import boolean
from supervisor.datatypes import integer
from supervisor.datatypes import name_to_uid
from supervisor.datatypes import gid_for_uid
from supervisor.datatypes import existing_dirpath
from supervisor.datatypes import byte_size
from supervisor.datatypes import signal_number
from supervisor.datatypes import list_of_exitcodes
from supervisor.datatypes import dict_of_key_value_pairs
from supervisor.datatypes import logfile_name
from supervisor.datatypes import list_of_strings
from supervisor.datatypes import octal_type
from supervisor.datatypes import existing_directory
from supervisor.datatypes import logging_level
from supervisor.datatypes import colon_separated_user_group
from supervisor.datatypes import inet_address
from supervisor.datatypes import InetStreamSocketConfig
from supervisor.datatypes import UnixStreamSocketConfig
from supervisor.datatypes import url
from supervisor.datatypes import Automatic
from supervisor.datatypes import Syslog
from supervisor.datatypes import auto_restart
from supervisor.datatypes import profile_options

from supervisor import loggers
from supervisor import states
from supervisor import xmlrpc
from supervisor import poller

def _read_version_txt():
    mydir = \
os.path.abspath(os.path.dirname(__file__)) version_txt = os.path.join(mydir, 'version.txt') with open(version_txt, 'r') as f: return f.read().strip() VERSION = _read_version_txt() def normalize_path(v): return os.path.normpath(os.path.abspath(os.path.expanduser(v))) class Dummy: pass class Options: stderr = sys.stderr stdout = sys.stdout exit = sys.exit warnings = warnings uid = gid = None progname = sys.argv[0] configfile = None schemadir = None configroot = None here = None # Class variable deciding whether positional arguments are allowed. # If you want positional arguments, set this to 1 in your subclass. positional_args_allowed = 0 def __init__(self, require_configfile=True): """Constructor. Params: require_configfile -- whether we should fail on no config file. """ self.names_list = [] self.short_options = [] self.long_options = [] self.options_map = {} self.default_map = {} self.required_map = {} self.environ_map = {} self.attr_priorities = {} self.require_configfile = require_configfile self.add(None, None, "h", "help", self.help) self.add(None, None, "?", None, self.help) self.add("configfile", None, "c:", "configuration=") here = os.path.dirname(os.path.dirname(sys.argv[0])) searchpaths = [os.path.join(here, 'etc', 'supervisord.conf'), os.path.join(here, 'supervisord.conf'), 'supervisord.conf', 'etc/supervisord.conf', '/etc/supervisord.conf', '/etc/supervisor/supervisord.conf', ] self.searchpaths = searchpaths self.environ_expansions = {} for k, v in os.environ.items(): self.environ_expansions['ENV_%s' % k] = v def default_configfile(self): """Return the name of the found config file or print usage/exit.""" config = None for path in self.searchpaths: if os.path.exists(path): config = path break if config is None and self.require_configfile: self.usage('No config file found at default paths (%s); ' 'use the -c option to specify a config file ' 'at a different path' % ', '.join(self.searchpaths)) return config def help(self, dummy): """Print a long help 
message to stdout and exit(0). Occurrences of "%s" in are replaced by self.progname. """ help = self.doc + "\n" if help.find("%s") > 0: help = help.replace("%s", self.progname) self.stdout.write(help) self.exit(0) def usage(self, msg): """Print a brief error message to stderr and exit(2).""" self.stderr.write("Error: %s\n" % str(msg)) self.stderr.write("For help, use %s -h\n" % self.progname) self.exit(2) def add(self, name=None, # attribute name on self confname=None, # dotted config path name short=None, # short option name long=None, # long option name handler=None, # handler (defaults to string) default=None, # default value required=None, # message if not provided flag=None, # if not None, flag value env=None, # if not None, environment variable ): """Add information about a configuration option. This can take several forms: add(name, confname) Configuration option 'confname' maps to attribute 'name' add(name, None, short, long) Command line option '-short' or '--long' maps to 'name' add(None, None, short, long, handler) Command line option calls handler add(name, None, short, long, handler) Assign handler return value to attribute 'name' In addition, one of the following keyword arguments may be given: default=... -- if not None, the default value required=... -- if nonempty, an error message if no value provided flag=... -- if not None, flag value for command line option env=... 
-- if not None, name of environment variable that overrides the configuration file or default """ if flag is not None: if handler is not None: raise ValueError("use at most one of flag= and handler=") if not long and not short: raise ValueError("flag= requires a command line flag") if short and short.endswith(":"): raise ValueError("flag= requires a command line flag") if long and long.endswith("="): raise ValueError("flag= requires a command line flag") handler = lambda arg, flag=flag: flag if short and long: if short.endswith(":") != long.endswith("="): raise ValueError("inconsistent short/long options: %r %r" % ( short, long)) if short: if short[0] == "-": raise ValueError("short option should not start with '-'") key, rest = short[:1], short[1:] if rest not in ("", ":"): raise ValueError("short option should be 'x' or 'x:'") key = "-" + key if key in self.options_map: raise ValueError("duplicate short option key '%s'" % key) self.options_map[key] = (name, handler) self.short_options.append(short) if long: if long[0] == "-": raise ValueError("long option should not start with '-'") key = long if key[-1] == "=": key = key[:-1] key = "--" + key if key in self.options_map: raise ValueError("duplicate long option key '%s'" % key) self.options_map[key] = (name, handler) self.long_options.append(long) if env: self.environ_map[env] = (name, handler) if name: if not hasattr(self, name): setattr(self, name, None) self.names_list.append((name, confname)) if default is not None: self.default_map[name] = default if required: self.required_map[name] = required def _set(self, attr, value, prio): current = self.attr_priorities.get(attr, -1) if prio >= current: setattr(self, attr, value) self.attr_priorities[attr] = prio def realize(self, args=None, doc=None, progname=None): """Realize a configuration. 
Optional arguments: args -- the command line arguments, less the program name (default is sys.argv[1:]) doc -- usage message (default is __main__.__doc__) """ # Provide dynamic default method arguments if args is None: args = sys.argv[1:] if progname is None: progname = sys.argv[0] if doc is None: try: import __main__ doc = __main__.__doc__ except Exception: pass self.progname = progname self.doc = doc self.options = [] self.args = [] # Call getopt try: self.options, self.args = getopt.getopt( args, "".join(self.short_options), self.long_options) except getopt.error as exc: self.usage(str(exc)) # Check for positional args if self.args and not self.positional_args_allowed: self.usage("positional arguments are not supported: %s" % (str(self.args))) # Process options returned by getopt for opt, arg in self.options: name, handler = self.options_map[opt] if handler is not None: try: arg = handler(arg) except ValueError as msg: self.usage("invalid value for %s %r: %s" % (opt, arg, msg)) if name and arg is not None: if getattr(self, name) is not None: self.usage("conflicting command line option %r" % opt) self._set(name, arg, 2) # Process environment variables for envvar in self.environ_map.keys(): name, handler = self.environ_map[envvar] if envvar in os.environ: value = os.environ[envvar] if handler is not None: try: value = handler(value) except ValueError as msg: self.usage("invalid environment value for %s %r: %s" % (envvar, value, msg)) if name and value is not None: self._set(name, value, 1) if self.configfile is None: self.configfile = self.default_configfile() self.process_config() def process_config(self, do_usage=True): """Process configuration data structure. This includes reading config file if necessary, setting defaults etc. """ if self.configfile: self.process_config_file(do_usage) # Copy config options to attributes of self. This only fills # in options that aren't already set from the command line. 
for name, confname in self.names_list: if confname: parts = confname.split(".") obj = self.configroot for part in parts: if obj is None: break # Here AttributeError is not a user error! obj = getattr(obj, part) self._set(name, obj, 0) # Process defaults for name, value in self.default_map.items(): if getattr(self, name) is None: setattr(self, name, value) # Process required options for name, message in self.required_map.items(): if getattr(self, name) is None: self.usage(message) def process_config_file(self, do_usage): # Process config file if not hasattr(self.configfile, 'read'): self.here = os.path.abspath(os.path.dirname(self.configfile)) try: self.read_config(self.configfile) except ValueError as msg: if do_usage: # if this is not called from an RPC method, run usage and exit. self.usage(str(msg)) else: # if this is called from an RPC method, raise an error raise ValueError(msg) def exists(self, path): return os.path.exists(path) def open(self, fn, mode='r'): return open(fn, mode) def get_plugins(self, parser, factory_key, section_prefix): factories = [] for section in parser.sections(): if not section.startswith(section_prefix): continue name = section.split(':', 1)[1] factory_spec = parser.saneget(section, factory_key, None) if factory_spec is None: raise ValueError('section [%s] does not specify a %s' % (section, factory_key)) try: factory = self.import_spec(factory_spec) except ImportError: raise ValueError('%s cannot be resolved within [%s]' % ( factory_spec, section)) extras = {} for k in parser.options(section): if k != factory_key: extras[k] = parser.saneget(section, k) factories.append((name, factory, extras)) return factories def import_spec(self, spec): ep = pkg_resources.EntryPoint.parse("x=" + spec) if hasattr(ep, 'resolve'): # this is available on setuptools >= 10.2 return ep.resolve() else: # this causes a DeprecationWarning on setuptools >= 11.3 return ep.load(False) class ServerOptions(Options): user = None sockchown = None sockchmod = None 
logfile = None loglevel = None pidfile = None passwdfile = None nodaemon = None silent = None httpservers = () unlink_pidfile = False unlink_socketfiles = False mood = states.SupervisorStates.RUNNING def __init__(self): Options.__init__(self) self.configroot = Dummy() self.configroot.supervisord = Dummy() self.add(None, None, "v", "version", self.version) self.add("nodaemon", "supervisord.nodaemon", "n", "nodaemon", flag=1, default=0) self.add("user", "supervisord.user", "u:", "user=") self.add("umask", "supervisord.umask", "m:", "umask=", octal_type, default='022') self.add("directory", "supervisord.directory", "d:", "directory=", existing_directory) self.add("logfile", "supervisord.logfile", "l:", "logfile=", existing_dirpath, default="supervisord.log") self.add("logfile_maxbytes", "supervisord.logfile_maxbytes", "y:", "logfile_maxbytes=", byte_size, default=50 * 1024 * 1024) # 50MB self.add("logfile_backups", "supervisord.logfile_backups", "z:", "logfile_backups=", integer, default=10) self.add("loglevel", "supervisord.loglevel", "e:", "loglevel=", logging_level, default="info") self.add("pidfile", "supervisord.pidfile", "j:", "pidfile=", existing_dirpath, default="supervisord.pid") self.add("identifier", "supervisord.identifier", "i:", "identifier=", str, default="supervisor") self.add("childlogdir", "supervisord.childlogdir", "q:", "childlogdir=", existing_directory, default=tempfile.gettempdir()) self.add("minfds", "supervisord.minfds", "a:", "minfds=", int, default=1024) self.add("minprocs", "supervisord.minprocs", "", "minprocs=", int, default=200) self.add("nocleanup", "supervisord.nocleanup", "k", "nocleanup", flag=1, default=0) self.add("strip_ansi", "supervisord.strip_ansi", "t", "strip_ansi", flag=1, default=0) self.add("profile_options", "supervisord.profile_options", "", "profile_options=", profile_options, default=None) self.add("silent", "supervisord.silent", "s", "silent", flag=1, default=0) self.pidhistory = {} self.process_group_configs = [] 
self.parse_criticals = [] self.parse_warnings = [] self.parse_infos = [] self.signal_receiver = SignalReceiver() self.poller = poller.Poller(self) def version(self, dummy): """Print version to stdout and exit(0). """ self.stdout.write('%s\n' % VERSION) self.exit(0) # TODO: not covered by any test, but used by dispatchers def getLogger(self, *args, **kwargs): return loggers.getLogger(*args, **kwargs) def default_configfile(self): if os.getuid() == 0: self.warnings.warn( 'Supervisord is running as root and it is searching ' 'for its configuration file in default locations ' '(including its current working directory); you ' 'probably want to specify a "-c" argument specifying an ' 'absolute path to a configuration file for improved ' 'security.' ) return Options.default_configfile(self) def realize(self, *arg, **kw): Options.realize(self, *arg, **kw) section = self.configroot.supervisord # Additional checking of user option; set uid and gid if self.user is not None: try: uid = name_to_uid(self.user) except ValueError as msg: self.usage(msg) # invalid user self.uid = uid self.gid = gid_for_uid(uid) if not self.loglevel: self.loglevel = section.loglevel if self.logfile: logfile = self.logfile else: logfile = section.logfile if logfile != 'syslog': # if the value for logfile is "syslog", we don't want to # normalize the path to something like $CWD/syslog.log, but # instead use the syslog service. 
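The fallback-serverurl selection performed in `realize()` prefers the first configured unix domain socket and only then falls back to the first inet socket, substituting `localhost` for an empty host. A standalone sketch of that decision (the `fallback_serverurl` helper is hypothetical, written for illustration; the real code iterates `section.server_configs` inline):

```python
import socket

def fallback_serverurl(server_configs):
    # prefer a unix domain socket
    for cfg in server_configs:
        if cfg['family'] is socket.AF_UNIX:
            return 'unix://%s' % cfg['file']
    # fall back to an inet socket
    for cfg in server_configs:
        if cfg['family'] is socket.AF_INET:
            host = cfg['host'] or 'localhost'
            return 'http://%s:%s' % (host, cfg['port'])
    return None  # no servers configured at all

print(fallback_serverurl([
    {'family': socket.AF_INET, 'host': '', 'port': 9001},
]))  # http://localhost:9001
```

The resulting URL is what `process.spawn` exports to children as `SUPERVISOR_SERVER_URL` when no explicit `serverurl` is configured, which is why it may legitimately remain `None` if no servers are configured.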
self.logfile = normalize_path(logfile) if self.pidfile: pidfile = self.pidfile else: pidfile = section.pidfile self.pidfile = normalize_path(pidfile) self.rpcinterface_factories = section.rpcinterface_factories self.serverurl = None self.server_configs = sconfigs = section.server_configs # we need to set a fallback serverurl that process.spawn can use # prefer a unix domain socket for config in [ config for config in sconfigs if config['family'] is socket.AF_UNIX ]: path = config['file'] self.serverurl = 'unix://%s' % path break # fall back to an inet socket if self.serverurl is None: for config in [ config for config in sconfigs if config['family'] is socket.AF_INET]: host = config['host'] port = config['port'] if not host: host = 'localhost' self.serverurl = 'http://%s:%s' % (host, port) # self.serverurl may still be None if no servers at all are # configured in the config file def process_config(self, do_usage=True): Options.process_config(self, do_usage=do_usage) new = self.configroot.supervisord.process_group_configs self.process_group_configs = new def read_config(self, fp): # Clear parse messages, since we may be re-reading the # config a second time after a reload. 
self.parse_criticals = [] self.parse_warnings = [] self.parse_infos = [] section = self.configroot.supervisord need_close = False if not hasattr(fp, 'read'): if not self.exists(fp): raise ValueError("could not find config file %s" % fp) try: fp = self.open(fp, 'r') need_close = True except (IOError, OSError): raise ValueError("could not read config file %s" % fp) parser = UnhosedConfigParser() parser.expansions = self.environ_expansions try: try: parser.read_file(fp) except AttributeError: parser.readfp(fp) except ConfigParser.ParsingError as why: raise ValueError(str(why)) finally: if need_close: fp.close() host_node_name = platform.node() expansions = {'here':self.here, 'host_node_name':host_node_name} expansions.update(self.environ_expansions) if parser.has_section('include'): parser.expand_here(self.here) if not parser.has_option('include', 'files'): raise ValueError(".ini file has [include] section, but no " "files setting") files = parser.get('include', 'files') files = expand(files, expansions, 'include.files') files = files.split() if hasattr(fp, 'name'): base = os.path.dirname(os.path.abspath(fp.name)) else: base = '.' 
for pattern in files: pattern = os.path.join(base, pattern) filenames = glob.glob(pattern) if not filenames: self.parse_warnings.append( 'No file matches via include "%s"' % pattern) continue for filename in sorted(filenames): self.parse_infos.append( 'Included extra file "%s" during parsing' % filename) try: parser.read(filename) except ConfigParser.ParsingError as why: raise ValueError(str(why)) else: parser.expand_here( os.path.abspath(os.path.dirname(filename)) ) sections = parser.sections() if not 'supervisord' in sections: raise ValueError('.ini file does not include supervisord section') common_expansions = {'here':self.here} def get(opt, default, **kwargs): expansions = kwargs.get('expansions', {}) expansions.update(common_expansions) kwargs['expansions'] = expansions return parser.getdefault(opt, default, **kwargs) section.minfds = integer(get('minfds', 1024)) section.minprocs = integer(get('minprocs', 200)) directory = get('directory', None) if directory is None: section.directory = None else: section.directory = existing_directory(directory) section.user = get('user', None) section.umask = octal_type(get('umask', '022')) section.logfile = existing_dirpath(get('logfile', 'supervisord.log')) section.logfile_maxbytes = byte_size(get('logfile_maxbytes', '50MB')) section.logfile_backups = integer(get('logfile_backups', 10)) section.loglevel = logging_level(get('loglevel', 'info')) section.pidfile = existing_dirpath(get('pidfile', 'supervisord.pid')) section.identifier = get('identifier', 'supervisor') section.nodaemon = boolean(get('nodaemon', 'false')) section.silent = boolean(get('silent', 'false')) tempdir = tempfile.gettempdir() section.childlogdir = existing_directory(get('childlogdir', tempdir)) section.nocleanup = boolean(get('nocleanup', 'false')) section.strip_ansi = boolean(get('strip_ansi', 'false')) environ_str = get('environment', '') environ_str = expand(environ_str, expansions, 'environment') section.environment = 
dict_of_key_value_pairs(environ_str) # extend expansions for global from [supervisord] environment definition for k, v in section.environment.items(): self.environ_expansions['ENV_%s' % k ] = v # Process rpcinterface plugins before groups to allow custom events to # be registered. section.rpcinterface_factories = self.get_plugins( parser, 'supervisor.rpcinterface_factory', 'rpcinterface:' ) section.process_group_configs = self.process_groups_from_parser(parser) for group in section.process_group_configs: for proc in group.process_configs: env = section.environment.copy() env.update(proc.environment) proc.environment = env section.server_configs = self.server_configs_from_parser(parser) section.profile_options = None return section def process_groups_from_parser(self, parser): groups = [] all_sections = parser.sections() homogeneous_exclude = [] common_expansions = {'here':self.here} def get(section, opt, default, **kwargs): expansions = kwargs.get('expansions', {}) expansions.update(common_expansions) kwargs['expansions'] = expansions return parser.saneget(section, opt, default, **kwargs) # process heterogeneous groups for section in all_sections: if not section.startswith('group:'): continue group_name = process_or_group_name(section.split(':', 1)[1]) programs = list_of_strings(get(section, 'programs', None)) priority = integer(get(section, 'priority', 999)) group_processes = [] for program in programs: program_section = "program:%s" % program fcgi_section = "fcgi-program:%s" % program if not program_section in all_sections and not fcgi_section in all_sections: raise ValueError( '[%s] names unknown program or fcgi-program %s' % (section, program)) if program_section in all_sections and fcgi_section in all_sections: raise ValueError( '[%s] name %s is ambiguous (exists as program and fcgi-program)' % (section, program)) section = program_section if program_section in all_sections else fcgi_section homogeneous_exclude.append(section) processes = 
self.processes_from_section(parser, section, group_name, ProcessConfig) group_processes.extend(processes) groups.append( ProcessGroupConfig(self, group_name, priority, group_processes) ) # process "normal" homogeneous groups for section in all_sections: if ( (not section.startswith('program:') ) or section in homogeneous_exclude ): continue program_name = process_or_group_name(section.split(':', 1)[1]) priority = integer(get(section, 'priority', 999)) processes=self.processes_from_section(parser, section, program_name, ProcessConfig) groups.append( ProcessGroupConfig(self, program_name, priority, processes) ) # process "event listener" homogeneous groups for section in all_sections: if not section.startswith('eventlistener:'): continue pool_name = section.split(':', 1)[1] # give listeners a "high" default priority so they are started first # and stopped last at mainloop exit priority = integer(get(section, 'priority', -1)) buffer_size = integer(get(section, 'buffer_size', 10)) if buffer_size < 1: raise ValueError('[%s] section sets invalid buffer_size (%d)' % (section, buffer_size)) result_handler = get(section, 'result_handler', 'supervisor.dispatchers:default_handler') try: result_handler = self.import_spec(result_handler) except ImportError: raise ValueError('%s cannot be resolved within [%s]' % ( result_handler, section)) pool_event_names = [x.upper() for x in list_of_strings(get(section, 'events', ''))] pool_event_names = set(pool_event_names) if not pool_event_names: raise ValueError('[%s] section requires an "events" line' % section) from supervisor.events import EventTypes pool_events = [] for pool_event_name in pool_event_names: pool_event = getattr(EventTypes, pool_event_name, None) if pool_event is None: raise ValueError('Unknown event type %s in [%s] events' % (pool_event_name, section)) pool_events.append(pool_event) redirect_stderr = boolean(get(section, 'redirect_stderr', 'false')) if redirect_stderr: raise ValueError('[%s] section sets 
redirect_stderr=true ' 'but this is not allowed because it will interfere ' 'with the eventlistener protocol' % section) processes=self.processes_from_section(parser, section, pool_name, EventListenerConfig) groups.append( EventListenerPoolConfig(self, pool_name, priority, processes, buffer_size, pool_events, result_handler) ) # process fastcgi homogeneous groups for section in all_sections: if ( (not section.startswith('fcgi-program:') ) or section in homogeneous_exclude ): continue program_name = process_or_group_name(section.split(':', 1)[1]) priority = integer(get(section, 'priority', 999)) fcgi_expansions = {'program_name': program_name} # find proc_uid from "user" option proc_user = get(section, 'user', None) if proc_user is None: proc_uid = None else: proc_uid = name_to_uid(proc_user) socket_backlog = get(section, 'socket_backlog', None) if socket_backlog is not None: socket_backlog = integer(socket_backlog) if (socket_backlog < 1 or socket_backlog > 65535): raise ValueError('Invalid socket_backlog value %s' % socket_backlog) socket_owner = get(section, 'socket_owner', None) if socket_owner is not None: try: socket_owner = colon_separated_user_group(socket_owner) except ValueError: raise ValueError('Invalid socket_owner value %s' % socket_owner) socket_mode = get(section, 'socket_mode', None) if socket_mode is not None: try: socket_mode = octal_type(socket_mode) except (TypeError, ValueError): raise ValueError('Invalid socket_mode value %s' % socket_mode) socket = get(section, 'socket', None, expansions=fcgi_expansions) if not socket: raise ValueError('[%s] section requires a "socket" line' % section) try: socket_config = self.parse_fcgi_socket(socket, proc_uid, socket_owner, socket_mode, socket_backlog) except ValueError as e: raise ValueError('%s in [%s] socket' % (str(e), section)) processes=self.processes_from_section(parser, section, program_name, FastCGIProcessConfig) groups.append( FastCGIGroupConfig(self, program_name, priority, processes, 
                                socket_config)
                )

        groups.sort()
        return groups

    def parse_fcgi_socket(self, sock, proc_uid, socket_owner, socket_mode,
                          socket_backlog):
        if sock.startswith('unix://'):
            path = sock[7:]
            # check it's an absolute path
            if not os.path.isabs(path):
                raise ValueError("Unix socket path %s is not an absolute path"
                                 % path)
            path = normalize_path(path)

            if socket_owner is None:
                uid = os.getuid()
                if proc_uid is not None and proc_uid != uid:
                    socket_owner = (proc_uid, gid_for_uid(proc_uid))

            if socket_mode is None:
                socket_mode = 0o700

            return UnixStreamSocketConfig(path, owner=socket_owner,
                                          mode=socket_mode,
                                          backlog=socket_backlog)

        if socket_owner is not None or socket_mode is not None:
            raise ValueError("socket_owner and socket_mode params should"
                             " only be used with a Unix domain socket")

        m = re.match(r'tcp://([^\s:]+):(\d+)$', sock)
        if m:
            host = m.group(1)
            port = int(m.group(2))
            return InetStreamSocketConfig(host, port,
                                          backlog=socket_backlog)

        raise ValueError("Bad socket format %s" % sock)

    def processes_from_section(self, parser, section, group_name,
                               klass=None):
        try:
            return self._processes_from_section(
                parser, section, group_name, klass)
        except ValueError as e:
            filename = parser.section_to_file.get(section, self.configfile)
            raise ValueError('%s in section %r (file: %r)'
                             % (e, section, filename))

    def _processes_from_section(self, parser, section, group_name,
                                klass=None):
        if klass is None:
            klass = ProcessConfig
        programs = []

        program_name = process_or_group_name(section.split(':', 1)[1])
        host_node_name = platform.node()
        common_expansions = {'here':self.here,
                             'program_name':program_name,
                             'host_node_name':host_node_name,
                             'group_name':group_name}
        def get(section, opt, *args, **kwargs):
            expansions = kwargs.get('expansions', {})
            expansions.update(common_expansions)
            kwargs['expansions'] = expansions
            return parser.saneget(section, opt, *args, **kwargs)

        priority = integer(get(section, 'priority', 999))
        autostart = boolean(get(section, 'autostart', 'true'))
        autorestart = auto_restart(get(section,
'autorestart', 'unexpected')) startsecs = integer(get(section, 'startsecs', 1)) startretries = integer(get(section, 'startretries', 3)) stopsignal = signal_number(get(section, 'stopsignal', 'TERM')) stopwaitsecs = integer(get(section, 'stopwaitsecs', 10)) stopasgroup = boolean(get(section, 'stopasgroup', 'false')) killasgroup = boolean(get(section, 'killasgroup', stopasgroup)) exitcodes = list_of_exitcodes(get(section, 'exitcodes', '0')) # see also redirect_stderr check in process_groups_from_parser() redirect_stderr = boolean(get(section, 'redirect_stderr','false')) numprocs = integer(get(section, 'numprocs', 1)) numprocs_start = integer(get(section, 'numprocs_start', 0)) environment_str = get(section, 'environment', '', do_expand=False) stdout_cmaxbytes = byte_size(get(section,'stdout_capture_maxbytes','0')) stdout_events = boolean(get(section, 'stdout_events_enabled','false')) stderr_cmaxbytes = byte_size(get(section,'stderr_capture_maxbytes','0')) stderr_events = boolean(get(section, 'stderr_events_enabled','false')) serverurl = get(section, 'serverurl', None) if serverurl and serverurl.strip().upper() == 'AUTO': serverurl = None # find uid from "user" option user = get(section, 'user', None) if user is None: uid = None else: uid = name_to_uid(user) umask = get(section, 'umask', None) if umask is not None: umask = octal_type(umask) process_name = process_or_group_name( get(section, 'process_name', '%(program_name)s', do_expand=False)) if numprocs > 1: if not '%(process_num)' in process_name: # process_name needs to include process_num when we # represent a group of processes raise ValueError( '%(process_num) must be present within process_name when ' 'numprocs > 1') if stopasgroup and not killasgroup: raise ValueError( "Cannot set stopasgroup=true and killasgroup=false" ) for process_num in range(numprocs_start, numprocs + numprocs_start): expansions = common_expansions expansions.update({'process_num': process_num, 'numprocs': numprocs}) 
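The `process_num`/`program_name` machinery above ultimately rests on Python's `%`-style mapping interpolation; supervisor's `expand()` helper (defined elsewhere in this module) wraps it with friendlier error messages. A minimal illustration with hypothetical values:

```python
# these keys mirror the expansions dict built above; values are made up
expansions = {'program_name': 'worker', 'process_num': 3, 'numprocs': 4}

# a typical process_name template for numprocs > 1
name = '%(program_name)s_%(process_num)02d' % expansions
print(name)  # worker_03
```

This is also why `process_name` must contain `%(process_num)` when `numprocs > 1`: without it, every process in the group would expand to the same name.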
expansions.update(self.environ_expansions) environment = dict_of_key_value_pairs( expand(environment_str, expansions, 'environment')) # extend expansions for process from [program:x] environment definition for k, v in environment.items(): expansions['ENV_%s' % k] = v directory = get(section, 'directory', None) logfiles = {} for k in ('stdout', 'stderr'): lf_key = '%s_logfile' % k lf_val = get(section, lf_key, Automatic) if isinstance(lf_val, basestring): lf_val = expand(lf_val, expansions, lf_key) lf_val = logfile_name(lf_val) logfiles[lf_key] = lf_val bu_key = '%s_logfile_backups' % k backups = integer(get(section, bu_key, 10)) logfiles[bu_key] = backups mb_key = '%s_logfile_maxbytes' % k maxbytes = byte_size(get(section, mb_key, '50MB')) logfiles[mb_key] = maxbytes sy_key = '%s_syslog' % k syslog = boolean(get(section, sy_key, False)) logfiles[sy_key] = syslog # rewrite deprecated "syslog" magic logfile into the equivalent # TODO remove this in a future version if lf_val is Syslog: self.parse_warnings.append( 'For [%s], %s=syslog but this is deprecated and will ' 'be removed. Use %s=true to enable syslog instead.' 
% ( section, lf_key, sy_key)) logfiles[lf_key] = lf_val = None logfiles[sy_key] = True if lf_val is Automatic and not maxbytes: self.parse_warnings.append( 'For [%s], AUTO logging used for %s without ' 'rollover, set maxbytes > 0 to avoid filling up ' 'filesystem unintentionally' % (section, lf_key)) if redirect_stderr: if logfiles['stderr_logfile'] not in (Automatic, None): self.parse_warnings.append( 'For [%s], redirect_stderr=true but stderr_logfile has ' 'also been set to a filename, the filename has been ' 'ignored' % section) # never create an stderr logfile when redirected logfiles['stderr_logfile'] = None command = get(section, 'command', None, expansions=expansions) if command is None: raise ValueError( 'program section %s does not specify a command' % section) pconfig = klass( self, name=expand(process_name, expansions, 'process_name'), command=command, directory=directory, umask=umask, priority=priority, autostart=autostart, autorestart=autorestart, startsecs=startsecs, startretries=startretries, uid=uid, stdout_logfile=logfiles['stdout_logfile'], stdout_capture_maxbytes = stdout_cmaxbytes, stdout_events_enabled = stdout_events, stdout_logfile_backups=logfiles['stdout_logfile_backups'], stdout_logfile_maxbytes=logfiles['stdout_logfile_maxbytes'], stdout_syslog=logfiles['stdout_syslog'], stderr_logfile=logfiles['stderr_logfile'], stderr_capture_maxbytes = stderr_cmaxbytes, stderr_events_enabled = stderr_events, stderr_logfile_backups=logfiles['stderr_logfile_backups'], stderr_logfile_maxbytes=logfiles['stderr_logfile_maxbytes'], stderr_syslog=logfiles['stderr_syslog'], stopsignal=stopsignal, stopwaitsecs=stopwaitsecs, stopasgroup=stopasgroup, killasgroup=killasgroup, exitcodes=exitcodes, redirect_stderr=redirect_stderr, environment=environment, serverurl=serverurl) programs.append(pconfig) programs.sort() # asc by priority return programs def _parse_servernames(self, parser, stype): options = [] for section in parser.sections(): if 
section.startswith(stype): parts = section.split(':', 1) if len(parts) > 1: name = parts[1] else: name = None # default sentinel options.append((name, section)) return options def _parse_username_and_password(self, parser, section): get = parser.saneget username = get(section, 'username', None) password = get(section, 'password', None) if username is not None or password is not None: if username is None or password is None: raise ValueError( 'Section [%s] contains incomplete authentication: ' 'If a username or a password is specified, both the ' 'username and password must be specified' % section) return {'username':username, 'password':password} def server_configs_from_parser(self, parser): configs = [] inet_serverdefs = self._parse_servernames(parser, 'inet_http_server') for name, section in inet_serverdefs: config = {} get = parser.saneget config.update(self._parse_username_and_password(parser, section)) config['name'] = name config['family'] = socket.AF_INET port = get(section, 'port', None) if port is None: raise ValueError('section [%s] has no port value' % section) host, port = inet_address(port) config['host'] = host config['port'] = port config['section'] = section configs.append(config) unix_serverdefs = self._parse_servernames(parser, 'unix_http_server') for name, section in unix_serverdefs: config = {} get = parser.saneget sfile = get(section, 'file', None, expansions={'here': self.here}) if sfile is None: raise ValueError('section [%s] has no file value' % section) sfile = sfile.strip() config['name'] = name config['family'] = socket.AF_UNIX config['file'] = normalize_path(sfile) config.update(self._parse_username_and_password(parser, section)) chown = get(section, 'chown', None) if chown is not None: try: chown = colon_separated_user_group(chown) except ValueError: raise ValueError('Invalid sockchown value %s' % chown) else: chown = (-1, -1) config['chown'] = chown chmod = get(section, 'chmod', None) if chmod is not None: try: chmod = 
octal_type(chmod) except (TypeError, ValueError): raise ValueError('Invalid chmod value %s' % chmod) else: chmod = 0o700 config['chmod'] = chmod config['section'] = section configs.append(config) return configs def daemonize(self): self.poller.before_daemonize() self._daemonize() self.poller.after_daemonize() def _daemonize(self): # To daemonize, we need to become the leader of our own session # (process) group. If we do not, signals sent to our # parent process will also be sent to us. This might be bad because # signals such as SIGINT can be sent to our parent process during # normal (uninteresting) operations such as when we press Ctrl-C in the # parent terminal window to escape from a logtail command. # To disassociate ourselves from our parent's session group we use # os.setsid. It means "set session id", which has the effect of # disassociating a process from is current session and process group # and setting itself up as a new session leader. # # Unfortunately we cannot call setsid if we're already a session group # leader, so we use "fork" to make a copy of ourselves that is # guaranteed to not be a session group leader. # # We also change directories, set stderr and stdout to null, and # change our umask. 
        #
        # This explanation was (gratefully) garnered from
        # http://www.cems.uwe.ac.uk/~irjohnso/coursenotes/lrc/system/daemons/d3.htm

        pid = os.fork()
        if pid != 0:
            # Parent
            self.logger.blather("supervisord forked; parent exiting")
            os._exit(0)
        # Child
        self.logger.info("daemonizing the supervisord process")
        if self.directory:
            try:
                os.chdir(self.directory)
            except OSError as err:
                self.logger.critical("can't chdir into %r: %s"
                                     % (self.directory, err))
            else:
                self.logger.info("set current directory: %r"
                                 % self.directory)
        os.close(0)
        self.stdin = sys.stdin = sys.__stdin__ = open("/dev/null")
        os.close(1)
        self.stdout = sys.stdout = sys.__stdout__ = open("/dev/null", "w")
        os.close(2)
        self.stderr = sys.stderr = sys.__stderr__ = open("/dev/null", "w")
        os.setsid()
        os.umask(self.umask)
        # XXX Stevens, in his Advanced Unix book, section 13.3 (page
        # 417) recommends calling umask(0) and closing unused
        # file descriptors.  In his Network Programming book, he
        # additionally recommends ignoring SIGHUP and forking again
        # after the setsid() call, for obscure SVR4 reasons.
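The fork-then-setsid sequence described in the comments above can be sketched as a standalone function.  This is an illustrative sketch only, not part of supervisor's API; the `fork`/`exit_parent`/`setsid` parameters exist purely so the sequence can be exercised without actually detaching the calling process.

```python
import os

def daemonize_sketch(fork=os.fork, exit_parent=os._exit, setsid=os.setsid):
    """Sketch of the core steps of _daemonize(): fork so the child is
    guaranteed not to be a session group leader, exit the parent, then
    setsid() in the child to start a new session detached from the
    controlling terminal."""
    pid = fork()
    if pid != 0:
        exit_parent(0)   # parent: exit, leaving the child to carry on
        return 'parent'  # reached only when exit_parent is stubbed out
    setsid()             # child: become leader of a brand-new session
    return 'child'
```

In real use the child would also chdir, reopen fds 0-2 on `/dev/null`, and set its umask, exactly as `_daemonize()` does above.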
    def write_pidfile(self):
        pid = os.getpid()
        try:
            with open(self.pidfile, 'w') as f:
                f.write('%s\n' % pid)
        except (IOError, OSError):
            self.logger.critical('could not write pidfile %s' % self.pidfile)
        else:
            self.unlink_pidfile = True
            self.logger.info('supervisord started with pid %s' % pid)

    def cleanup(self):
        for config, server in self.httpservers:
            if config['family'] == socket.AF_UNIX:
                if self.unlink_socketfiles:
                    socketname = config['file']
                    self._try_unlink(socketname)
        if self.unlink_pidfile:
            self._try_unlink(self.pidfile)
        self.poller.close()

    def _try_unlink(self, path):
        try:
            os.unlink(path)
        except OSError:
            pass

    def close_httpservers(self):
        dispatcher_servers = []
        for config, server in self.httpservers:
            server.close()
            # server._map is a reference to the asyncore socket_map
            for dispatcher in self.get_socket_map().values():
                dispatcher_server = getattr(dispatcher, 'server', None)
                if dispatcher_server is server:
                    dispatcher_servers.append(dispatcher)

        for server in dispatcher_servers:
            # TODO: try to remove this entirely.
            # For unknown reasons, sometimes an http_channel
            # dispatcher in the socket map related to servers
            # remains open *during a reload*.  If one of these
            # exists at this point, we need to close it by hand
            # (thus removing it from the asyncore.socket_map).  If
            # we don't do this, 'cleanup_fds' will cause its file
            # descriptor to be closed, but it will still remain in
            # the socket_map, and eventually its file descriptor
            # will be passed to select(), which will bomb.  See
            # also https://web.archive.org/web/20160729222427/http://www.plope.com/software/collector/253
            server.close()

    def close_logger(self):
        self.logger.close()

    def setsignals(self):
        receive = self.signal_receiver.receive
        signal.signal(signal.SIGTERM, receive)
        signal.signal(signal.SIGINT, receive)
        signal.signal(signal.SIGQUIT, receive)
        signal.signal(signal.SIGHUP, receive)
        signal.signal(signal.SIGCHLD, receive)
        signal.signal(signal.SIGUSR2, receive)

    def get_signal(self):
        return self.signal_receiver.get_signal()

    def openhttpservers(self, supervisord):
        try:
            self.httpservers = self.make_http_servers(supervisord)
            self.unlink_socketfiles = True
        except socket.error as why:
            if why.args[0] == errno.EADDRINUSE:
                self.usage('Another program is already listening on '
                           'a port that one of our HTTP servers is '
                           'configured to use.  Shut this program '
                           'down first before starting supervisord.')
            else:
                help = 'Cannot open an HTTP server: socket.error reported'
                errorname = errno.errorcode.get(why.args[0])
                if errorname is None:
                    self.usage('%s %s' % (help, why.args[0]))
                else:
                    self.usage('%s errno.%s (%d)' %
                               (help, errorname, why.args[0]))
        except ValueError as why:
            self.usage(why.args[0])

    def get_autochildlog_name(self, name, identifier, channel):
        prefix = '%s-%s---%s-' % (name, channel, identifier)
        logfile = self.mktempfile(
            suffix='.log',
            prefix=prefix,
            dir=self.childlogdir)
        return logfile

    def clear_autochildlogdir(self):
        # must be called after realize()
        childlogdir = self.childlogdir
        fnre = re.compile(r'.+?---%s-\S+\.log\.{0,1}\d{0,4}' % self.identifier)
        try:
            filenames = os.listdir(childlogdir)
        except (IOError, OSError):
            self.logger.warn('Could not clear childlog dir')
            return

        for filename in filenames:
            if fnre.match(filename):
                pathname = os.path.join(childlogdir, filename)
                try:
                    self.remove(pathname)
                except (OSError, IOError):
                    self.logger.warn('Failed to clean up %r' % pathname)

    def get_socket_map(self):
        return asyncore.socket_map

    def cleanup_fds(self):
        # try to close any leaked file descriptors (for reload)
        start = 5
        os.closerange(start, self.minfds)

    def kill(self, pid, signal):
        os.kill(pid, signal)

    def waitpid(self):
        # Need pthread_sigmask here to avoid concurrent sigchld, but Python
        # doesn't offer it in Python < 3.4.  There is still a race condition
        # here; we can get a sigchld while we're sitting in the waitpid call.
        # However, AFAICT, if waitpid is interrupted by SIGCHLD, as long as we
        # call waitpid again (which happens every so often during the normal
        # course in the mainloop), we'll eventually reap the child that we
        # tried to reap during the interrupted call.  At least on Linux, this
        # appears to be true, or at least stopping 50 processes at once never
        # left zombies laying around.
        try:
            pid, sts = os.waitpid(-1, os.WNOHANG)
        except OSError as exc:
            code = exc.args[0]
            if code not in (errno.ECHILD, errno.EINTR):
                self.logger.critical(
                    'waitpid error %r; '
                    'a process may not be cleaned up properly' % code
                )
            if code == errno.EINTR:
                self.logger.blather('EINTR during reap')
            pid, sts = None, None
        return pid, sts

    def drop_privileges(self, user):
        """Drop privileges to become the specified user, which may be a
        username or uid.  Called for supervisord startup and when spawning
        subprocesses.  Returns None on success or a string error message if
        privileges could not be dropped."""
        if user is None:
            return "No user specified to setuid to!"

        # get uid for user, which can be a number or username
        try:
            uid = int(user)
        except ValueError:
            try:
                pwrec = pwd.getpwnam(user)
            except KeyError:
                return "Can't find username %r" % user
            uid = pwrec[2]
        else:
            try:
                pwrec = pwd.getpwuid(uid)
            except KeyError:
                return "Can't find uid %r" % uid

        current_uid = os.getuid()

        if current_uid == uid:
            # do nothing and return successfully if the uid is already the
            # current one.  this allows a supervisord running as an
            # unprivileged user "foo" to start a process where the config
            # has "user=foo" (same user) in it.
            return

        if current_uid != 0:
            return "Can't drop privilege as nonroot user"

        gid = pwrec[3]
        if hasattr(os, 'setgroups'):
            user = pwrec[0]
            groups = [grprec[2] for grprec in grp.getgrall()
                      if user in grprec[3]]

            # always put our primary gid first in this list, otherwise we can
            # lose group info since sometimes the first group in the setgroups
            # list gets overwritten on the subsequent setgid call (at least on
            # freebsd 9 with python 2.7 - this will be safe though for all
            # unix/python version combos)
            groups.insert(0, gid)
            try:
                os.setgroups(groups)
            except OSError:
                return 'Could not set groups of effective user'
        try:
            os.setgid(gid)
        except OSError:
            return 'Could not set group id of effective user'
        os.setuid(uid)

    def set_uid_or_exit(self):
        """Set the uid of the supervisord process.  Called during supervisord
        startup only.  No return value.  Exits the process via usage() if
        privileges could not be dropped."""
        if self.uid is None:
            if os.getuid() == 0:
                self.parse_criticals.append('Supervisor is running as root.  '
                    'Privileges were not dropped because no user is '
                    'specified in the config file.  If you intend to run '
                    'as root, you can set user=root in the config file '
                    'to avoid this message.')
        else:
            msg = self.drop_privileges(self.uid)
            if msg is None:
                self.parse_infos.append('Set uid to user %s succeeded' %
                                        self.uid)
            else:  # failed to drop privileges
                self.usage(msg)

    def set_rlimits_or_exit(self):
        """Set the rlimits of the supervisord process.  Called during
        supervisord startup only.  No return value.  Exits the process via
        usage() if any rlimits could not be set."""
        limits = []

        if hasattr(resource, 'RLIMIT_NOFILE'):
            limits.append(
                {
                'msg':('The minimum number of file descriptors required '
                       'to run this process is %(min_limit)s as per the '
                       '"minfds" command-line argument or config file '
                       'setting. The current environment will only allow '
                       'you to open %(hard)s file descriptors.  Either raise '
                       'the number of usable file descriptors in your '
                       'environment (see README.rst) or lower the '
                       'minfds setting in the config file to allow '
                       'the process to start.'),
                'min':self.minfds,
                'resource':resource.RLIMIT_NOFILE,
                'name':'RLIMIT_NOFILE',
                })

        if hasattr(resource, 'RLIMIT_NPROC'):
            limits.append(
                {
                'msg':('The minimum number of available processes required '
                       'to run this program is %(min_limit)s as per the '
                       '"minprocs" command-line argument or config file '
                       'setting. The current environment will only allow '
                       'you to open %(hard)s processes.  Either raise '
                       'the number of usable processes in your '
                       'environment (see README.rst) or lower the '
                       'minprocs setting in the config file to allow '
                       'the program to start.'),
                'min':self.minprocs,
                'resource':resource.RLIMIT_NPROC,
                'name':'RLIMIT_NPROC',
                })

        for limit in limits:
            min_limit = limit['min']
            res = limit['resource']
            msg = limit['msg']
            name = limit['name']
            name = name  # name is used below by locals()

            soft, hard = resource.getrlimit(res)

            # -1 means unlimited
            if (soft < min_limit) and (soft != -1):
                if (hard < min_limit) and (hard != -1):
                    # setrlimit should increase the hard limit if we are
                    # root, if not then setrlimit raises and we print usage
                    hard = min_limit

                try:
                    resource.setrlimit(res, (min_limit, hard))
                    self.parse_infos.append('Increased %(name)s limit to '
                                            '%(min_limit)s' % locals())
                except (resource.error, ValueError):
                    self.usage(msg % locals())

    def make_logger(self):
        # must be called after realize() and after supervisor does setuid()
        format = '%(asctime)s %(levelname)s %(message)s\n'
        self.logger = loggers.getLogger(self.loglevel)
        if self.nodaemon and not self.silent:
            loggers.handle_stdout(self.logger, format)
        loggers.handle_file(
            self.logger,
            self.logfile,
            format,
            rotating=not not self.logfile_maxbytes,
            maxbytes=self.logfile_maxbytes,
            backups=self.logfile_backups,
        )
        for msg in self.parse_criticals:
            self.logger.critical(msg)
        for msg in self.parse_warnings:
            self.logger.warn(msg)
        for msg in self.parse_infos:
            self.logger.info(msg)

    def make_http_servers(self, supervisord):
        from supervisor.http import make_http_servers
        return make_http_servers(self, supervisord)

    def close_fd(self, fd):
        try:
            os.close(fd)
        except OSError:
            pass

    def fork(self):
        return os.fork()

    def dup2(self, frm, to):
        return os.dup2(frm, to)

    def setpgrp(self):
        return os.setpgrp()

    def stat(self, filename):
        return os.stat(filename)

    def write(self, fd, data):
        return os.write(fd, as_bytes(data))

    def execve(self, filename, argv, env):
        return os.execve(filename, argv, env)

    def mktempfile(self, suffix, prefix, dir):
        # set os._urandomfd as a hack around bad file descriptor bug
        # seen in the wild, see
        # https://web.archive.org/web/20160729044005/http://www.plope.com/software/collector/252
        os._urandomfd = None
        fd, filename = tempfile.mkstemp(suffix, prefix, dir)
        os.close(fd)
        return filename

    def remove(self, path):
        os.remove(path)

    def _exit(self, code):
        os._exit(code)

    def setumask(self, mask):
        os.umask(mask)

    def get_path(self):
        """Return a list corresponding to $PATH, or a default."""
        path = ["/bin", "/usr/bin", "/usr/local/bin"]
        if "PATH" in os.environ:
            p = os.environ["PATH"]
            if p:
                path = p.split(os.pathsep)
        return path

    def get_pid(self):
        return os.getpid()

    def check_execv_args(self, filename, argv, st):
        if st is None:
            raise NotFound("can't find command %r" % filename)
        elif stat.S_ISDIR(st[stat.ST_MODE]):
            raise NotExecutable("command at %r is a directory" % filename)
        elif not (stat.S_IMODE(st[stat.ST_MODE]) & 0o111):
            raise NotExecutable("command at %r is not executable" % filename)
        elif not os.access(filename, os.X_OK):
            raise NoPermission("no permission to run command %r" % filename)

    def reopenlogs(self):
        self.logger.info('supervisord logreopen')
        for handler in self.logger.handlers:
            if hasattr(handler, 'reopen'):
                handler.reopen()

    def readfd(self, fd):
        try:
            data = os.read(fd, 2 << 16)  # 128K
        except OSError as why:
            if why.args[0] not in (errno.EWOULDBLOCK, errno.EBADF,
                                   errno.EINTR):
                raise
            data = b''
        return data
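The non-blocking read pattern used by `readfd()` and `make_pipes()` (below) can be shown in isolation: set `O_NONBLOCK` on the read end of a pipe, then treat `EWOULDBLOCK` as "no data yet" instead of an error.  This is a standalone sketch, not supervisor API; `read_nonblocking` is an illustrative name.

```python
import errno
import fcntl
import os

def read_nonblocking(fd, maxbytes=2 << 16):
    """Read up to maxbytes from fd, returning b'' if no data is ready."""
    try:
        return os.read(fd, maxbytes)
    except OSError as why:
        if why.errno != errno.EWOULDBLOCK:
            raise
        return b''

# put the read end of a pipe into non-blocking mode, as make_pipes() does
r, w = os.pipe()
flags = fcntl.fcntl(r, fcntl.F_GETFL) | os.O_NONBLOCK
fcntl.fcntl(r, fcntl.F_SETFL, flags)
```

With nothing written, `read_nonblocking(r)` returns `b''` immediately rather than blocking the main loop; once the writer produces data, the next call returns it.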
    def chdir(self, dir):
        os.chdir(dir)

    def make_pipes(self, stderr=True):
        """ Create pipes for parent to child stdin/stdout/stderr
        communications.  Open fd in non-blocking mode so we can read them in
        the mainloop without blocking.  If stderr is False, don't
        create a pipe for stderr. """
        pipes = {'child_stdin':None,
                 'stdin':None,
                 'stdout':None,
                 'child_stdout':None,
                 'stderr':None,
                 'child_stderr':None}
        try:
            stdin, child_stdin = os.pipe()
            pipes['child_stdin'], pipes['stdin'] = stdin, child_stdin
            stdout, child_stdout = os.pipe()
            pipes['stdout'], pipes['child_stdout'] = stdout, child_stdout
            if stderr:
                stderr, child_stderr = os.pipe()
                pipes['stderr'], pipes['child_stderr'] = stderr, child_stderr
            for fd in (pipes['stdout'], pipes['stderr'], pipes['stdin']):
                if fd is not None:
                    flags = fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NDELAY
                    fcntl.fcntl(fd, fcntl.F_SETFL, flags)
            return pipes
        except OSError:
            for fd in pipes.values():
                if fd is not None:
                    self.close_fd(fd)
            raise

    def close_parent_pipes(self, pipes):
        for fdname in ('stdin', 'stdout', 'stderr'):
            fd = pipes.get(fdname)
            if fd is not None:
                self.close_fd(fd)

    def close_child_pipes(self, pipes):
        for fdname in ('child_stdin', 'child_stdout', 'child_stderr'):
            fd = pipes.get(fdname)
            if fd is not None:
                self.close_fd(fd)

class ClientOptions(Options):
    positional_args_allowed = 1

    interactive = None
    prompt = None
    serverurl = None
    username = None
    password = None
    history_file = None

    def __init__(self):
        Options.__init__(self, require_configfile=False)
        self.configroot = Dummy()
        self.configroot.supervisorctl = Dummy()
        self.configroot.supervisorctl.interactive = None
        self.configroot.supervisorctl.prompt = 'supervisor'
        self.configroot.supervisorctl.serverurl = None
        self.configroot.supervisorctl.username = None
        self.configroot.supervisorctl.password = None
        self.configroot.supervisorctl.history_file = None

        from supervisor.supervisorctl import DefaultControllerPlugin
        default_factory = ('default', DefaultControllerPlugin, {})
        # we always add the default factory.  If you want a supervisorctl
        # without the default plugin, please write your own supervisorctl.
        self.plugin_factories = [default_factory]

        self.add("interactive", "supervisorctl.interactive", "i",
                 "interactive", flag=1, default=0)
        self.add("prompt", "supervisorctl.prompt", default="supervisor")
        self.add("serverurl", "supervisorctl.serverurl", "s:", "serverurl=",
                 url, default="http://localhost:9001")
        self.add("username", "supervisorctl.username", "u:", "username=")
        self.add("password", "supervisorctl.password", "p:", "password=")
        self.add("history", "supervisorctl.history_file", "r:",
                 "history_file=")

    def realize(self, *arg, **kw):
        Options.realize(self, *arg, **kw)
        if not self.args:
            self.interactive = 1

    def read_config(self, fp):
        section = self.configroot.supervisorctl
        need_close = False
        if not hasattr(fp, 'read'):
            self.here = os.path.dirname(normalize_path(fp))
            if not self.exists(fp):
                raise ValueError("could not find config file %s" % fp)
            try:
                fp = self.open(fp, 'r')
                need_close = True
            except (IOError, OSError):
                raise ValueError("could not read config file %s" % fp)

        parser = UnhosedConfigParser()
        parser.expansions = self.environ_expansions
        parser.mysection = 'supervisorctl'
        try:
            parser.read_file(fp)
        except AttributeError:
            parser.readfp(fp)
        if need_close:
            fp.close()
        sections = parser.sections()
        if not 'supervisorctl' in sections:
            raise ValueError(
                '.ini file does not include supervisorctl section')

        serverurl = parser.getdefault('serverurl', 'http://localhost:9001',
                                      expansions={'here': self.here})
        if serverurl.startswith('unix://'):
            path = normalize_path(serverurl[7:])
            serverurl = 'unix://%s' % path
        section.serverurl = serverurl

        # The defaults used below are really set in __init__ (since
        # section==self.configroot.supervisorctl)
        section.prompt = parser.getdefault('prompt', section.prompt)
        section.username = parser.getdefault('username', section.username)
        section.password = parser.getdefault('password', section.password)
        history_file = parser.getdefault('history_file', section.history_file,
                                         expansions={'here': self.here})

        if history_file:
            history_file = normalize_path(history_file)
            section.history_file = history_file
            self.history_file = history_file
        else:
            section.history_file = None
            self.history_file = None

        self.plugin_factories += self.get_plugins(
            parser,
            'supervisor.ctl_factory',
            'ctlplugin:'
        )

        return section

    # TODO: not covered by any test, but used by supervisorctl
    def getServerProxy(self):
        return xmlrpclib.ServerProxy(
            # dumbass ServerProxy won't allow us to pass in a non-HTTP url,
            # so we fake the url we pass into it and always use the transport's
            # 'serverurl' to figure out what to attach to
            'http://127.0.0.1',
            transport=xmlrpc.SupervisorTransport(self.username,
                                                 self.password,
                                                 self.serverurl)
        )

_marker = []

class UnhosedConfigParser(ConfigParser.RawConfigParser):
    mysection = 'supervisord'

    def __init__(self, *args, **kwargs):
        # inline_comment_prefixes and strict were added in Python 3 but their
        # defaults make RawConfigParser behave differently than it did on
        # Python 2.  We make it work like 2 by default for backwards compat.
        if not PY2:
            if 'inline_comment_prefixes' not in kwargs:
                kwargs['inline_comment_prefixes'] = (';', '#')

            if 'strict' not in kwargs:
                kwargs['strict'] = False

        ConfigParser.RawConfigParser.__init__(self, *args, **kwargs)

        self.section_to_file = {}
        self.expansions = {}

    def read_string(self, string, source='<string>'):
        '''Parse configuration data from a string.  This is intended
        to be used in tests only.  We add this method for Py 2/3 compat.'''
        try:
            return ConfigParser.RawConfigParser.read_string(
                self, string, source)  # Python 3.2 or later
        except AttributeError:
            return self.readfp(StringIO(string))

    def read(self, filenames, **kwargs):
        '''Attempt to read and parse a list of filenames, returning a list
        of filenames which were successfully parsed.  This is a method of
        RawConfigParser that is overridden to build self.section_to_file,
        which is a mapping of section names to the files they came from.
        '''
        if isinstance(filenames, basestring):  # RawConfigParser compat
            filenames = [filenames]

        ok_filenames = []
        for filename in filenames:
            sections_orig = self._sections.copy()

            ok_filenames.extend(
                ConfigParser.RawConfigParser.read(self, [filename], **kwargs))

            diff = frozenset(self._sections) - frozenset(sections_orig)
            for section in diff:
                self.section_to_file[section] = filename
        return ok_filenames

    def saneget(self, section, option, default=_marker, do_expand=True,
                expansions={}):
        try:
            optval = self.get(section, option)
        except ConfigParser.NoOptionError:
            if default is _marker:
                raise
            else:
                optval = default

        if do_expand and isinstance(optval, basestring):
            combined_expansions = dict(
                list(self.expansions.items()) + list(expansions.items()))
            optval = expand(optval, combined_expansions,
                            "%s.%s" % (section, option))

        return optval

    def getdefault(self, option, default=_marker, expansions={}, **kwargs):
        return self.saneget(self.mysection, option, default=default,
                            expansions=expansions, **kwargs)

    def expand_here(self, here):
        HERE_FORMAT = '%(here)s'
        for section in self.sections():
            for key, value in self.items(section):
                if HERE_FORMAT in value:
                    assert here is not None, "here has not been set to a path"
                    value = value.replace(HERE_FORMAT, here)
                    self.set(section, key, value)

class Config(object):
    def __ne__(self, other):
        return not self.__eq__(other)

    def __lt__(self, other):
        if self.priority == other.priority:
            return self.name < other.name
        return self.priority < other.priority

    def __le__(self, other):
        if self.priority == other.priority:
            return self.name <= other.name
        return self.priority <= other.priority

    def __gt__(self, other):
        if self.priority == other.priority:
            return self.name > other.name
        return self.priority > other.priority

    def __ge__(self, other):
        if self.priority == other.priority:
            return self.name >= other.name
        return self.priority >= other.priority
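The rich comparisons on `Config` order configs by priority, breaking ties by name, which is what makes `programs.sort()` and `groups.sort()` above deterministic.  A minimal sketch of that ordering (the `_Cfg` class and its names are illustrative, not supervisor API):

```python
class _Cfg:
    """Toy stand-in for Config: sortable by (priority, name)."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

    def __lt__(self, other):
        # same tie-breaking rule as Config.__lt__ above
        if self.priority == other.priority:
            return self.name < other.name
        return self.priority < other.priority

cfgs = [_Cfg('b', 999), _Cfg('a', 999), _Cfg('z', 1)]
cfgs.sort()  # priority 1 first; equal priorities fall back to name order
```

After sorting, the config with priority 1 comes first, and the two priority-999 configs are ordered alphabetically.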
self.priority >= other.priority def __repr__(self): return '<%s instance at %s named %s>' % (self.__class__, id(self), self.name) class ProcessConfig(Config): req_param_names = [ 'name', 'uid', 'command', 'directory', 'umask', 'priority', 'autostart', 'autorestart', 'startsecs', 'startretries', 'stdout_logfile', 'stdout_capture_maxbytes', 'stdout_events_enabled', 'stdout_syslog', 'stdout_logfile_backups', 'stdout_logfile_maxbytes', 'stderr_logfile', 'stderr_capture_maxbytes', 'stderr_logfile_backups', 'stderr_logfile_maxbytes', 'stderr_events_enabled', 'stderr_syslog', 'stopsignal', 'stopwaitsecs', 'stopasgroup', 'killasgroup', 'exitcodes', 'redirect_stderr' ] optional_param_names = [ 'environment', 'serverurl' ] def __init__(self, options, **params): self.options = options for name in self.req_param_names: setattr(self, name, params[name]) for name in self.optional_param_names: setattr(self, name, params.get(name, None)) def __eq__(self, other): if not isinstance(other, ProcessConfig): return False for name in self.req_param_names + self.optional_param_names: if Automatic in [getattr(self, name), getattr(other, name)] : continue if getattr(self, name) != getattr(other, name): return False return True def get_path(self): '''Return a list corresponding to $PATH that is configured to be set in the process environment, or the system default.''' if self.environment is not None: path = self.environment.get('PATH') if path is not None: return path.split(os.pathsep) return self.options.get_path() def create_autochildlogs(self): # temporary logfiles which are erased at start time get_autoname = self.options.get_autochildlog_name sid = self.options.identifier name = self.name if self.stdout_logfile is Automatic: self.stdout_logfile = get_autoname(name, sid, 'stdout') if self.stderr_logfile is Automatic: self.stderr_logfile = get_autoname(name, sid, 'stderr') def make_process(self, group=None): from supervisor.process import Subprocess process = Subprocess(self) 
process.group = group return process def make_dispatchers(self, proc): use_stderr = not self.redirect_stderr p = self.options.make_pipes(use_stderr) stdout_fd,stderr_fd,stdin_fd = p['stdout'],p['stderr'],p['stdin'] dispatchers = {} from supervisor.dispatchers import POutputDispatcher from supervisor.dispatchers import PInputDispatcher from supervisor import events if stdout_fd is not None: etype = events.ProcessCommunicationStdoutEvent dispatchers[stdout_fd] = POutputDispatcher(proc, etype, stdout_fd) if stderr_fd is not None: etype = events.ProcessCommunicationStderrEvent dispatchers[stderr_fd] = POutputDispatcher(proc,etype, stderr_fd) if stdin_fd is not None: dispatchers[stdin_fd] = PInputDispatcher(proc, 'stdin', stdin_fd) return dispatchers, p class EventListenerConfig(ProcessConfig): def make_dispatchers(self, proc): # always use_stderr=True for eventlisteners because mixing stderr # messages into stdout would break the eventlistener protocol use_stderr = True p = self.options.make_pipes(use_stderr) stdout_fd,stderr_fd,stdin_fd = p['stdout'],p['stderr'],p['stdin'] dispatchers = {} from supervisor.dispatchers import PEventListenerDispatcher from supervisor.dispatchers import PInputDispatcher from supervisor.dispatchers import POutputDispatcher from supervisor import events if stdout_fd is not None: dispatchers[stdout_fd] = PEventListenerDispatcher(proc, 'stdout', stdout_fd) if stderr_fd is not None: etype = events.ProcessCommunicationStderrEvent dispatchers[stderr_fd] = POutputDispatcher(proc, etype, stderr_fd) if stdin_fd is not None: dispatchers[stdin_fd] = PInputDispatcher(proc, 'stdin', stdin_fd) return dispatchers, p class FastCGIProcessConfig(ProcessConfig): def make_process(self, group=None): if group is None: raise NotImplementedError('FastCGI programs require a group') from supervisor.process import FastCGISubprocess process = FastCGISubprocess(self) process.group = group return process def make_dispatchers(self, proc): dispatchers, p = 
ProcessConfig.make_dispatchers(self, proc) #FastCGI child processes expect the FastCGI socket set to #file descriptor 0, so supervisord cannot use stdin #to communicate with the child process stdin_fd = p['stdin'] if stdin_fd is not None: dispatchers[stdin_fd].close() return dispatchers, p class ProcessGroupConfig(Config): def __init__(self, options, name, priority, process_configs): self.options = options self.name = name self.priority = priority self.process_configs = process_configs def __eq__(self, other): if not isinstance(other, ProcessGroupConfig): return False if self.name != other.name: return False if self.priority != other.priority: return False if self.process_configs != other.process_configs: return False return True def after_setuid(self): for config in self.process_configs: config.create_autochildlogs() def make_group(self): from supervisor.process import ProcessGroup return ProcessGroup(self) class EventListenerPoolConfig(Config): def __init__(self, options, name, priority, process_configs, buffer_size, pool_events, result_handler): self.options = options self.name = name self.priority = priority self.process_configs = process_configs self.buffer_size = buffer_size self.pool_events = pool_events self.result_handler = result_handler def __eq__(self, other): if not isinstance(other, EventListenerPoolConfig): return False if ((self.name == other.name) and (self.priority == other.priority) and (self.process_configs == other.process_configs) and (self.buffer_size == other.buffer_size) and (self.pool_events == other.pool_events) and (self.result_handler == other.result_handler)): return True return False def after_setuid(self): for config in self.process_configs: config.create_autochildlogs() def make_group(self): from supervisor.process import EventListenerPool return EventListenerPool(self) class FastCGIGroupConfig(ProcessGroupConfig): def __init__(self, options, name, priority, process_configs, socket_config): ProcessGroupConfig.__init__( self, 
options, name, priority, process_configs, ) self.socket_config = socket_config def __eq__(self, other): if not isinstance(other, FastCGIGroupConfig): return False if self.socket_config != other.socket_config: return False return ProcessGroupConfig.__eq__(self, other) def make_group(self): from supervisor.process import FastCGIProcessGroup return FastCGIProcessGroup(self) def readFile(filename, offset, length): """ Read length bytes from the file named by filename starting at offset """ absoffset = abs(offset) abslength = abs(length) try: with open(filename, 'rb') as f: if absoffset != offset: # negative offset returns offset bytes from tail of the file if length: raise ValueError('BAD_ARGUMENTS') f.seek(0, 2) sz = f.tell() pos = int(sz - absoffset) if pos < 0: pos = 0 f.seek(pos) data = f.read(absoffset) else: if abslength != length: raise ValueError('BAD_ARGUMENTS') if length == 0: f.seek(offset) data = f.read() else: f.seek(offset) data = f.read(length) except (OSError, IOError): raise ValueError('FAILED') return data def tailFile(filename, offset, length): """ Read length bytes from the file named by filename starting at offset, automatically increasing offset and setting overflow flag if log size has grown beyond (offset + length). If length bytes are not available, as many bytes as are available are returned. """ try: with open(filename, 'rb') as f: overflow = False f.seek(0, 2) sz = f.tell() if sz > (offset + length): overflow = True offset = sz - 1 if (offset + length) > sz: if offset > (sz - 1): length = 0 offset = sz - length if offset < 0: offset = 0 if length < 0: length = 0 if length == 0: data = b'' else: f.seek(offset) data = f.read(length) offset = sz return [as_string(data), offset, overflow] except (OSError, IOError): return ['', offset, False] # Helpers for dealing with signals and exit status def decode_wait_status(sts): """Decode the status returned by wait() or waitpid(). 
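The readFile() helper above treats a negative offset as "return the last abs(offset) bytes of the file" (and requires length == 0 in that case). A standalone sketch of that convention follows; read_tail is a hypothetical name, not part of supervisor's API:

```python
import os
import tempfile

def read_tail(filename, offset):
    """Return the last abs(offset) bytes of filename, like readFile(f, -n, 0)."""
    with open(filename, 'rb') as f:
        f.seek(0, 2)                         # seek to end to learn the size
        size = f.tell()
        f.seek(max(size - abs(offset), 0))   # clamp at start of file
        return f.read()

with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b'0123456789')
path = tf.name
assert read_tail(path, -4) == b'6789'
assert read_tail(path, -100) == b'0123456789'   # clamped: whole file
os.unlink(path)
```

tailFile() builds on the same seek-from-end idea but additionally advances the caller's offset and sets an overflow flag when the log grew past offset + length.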
Return a tuple (exitstatus, message) where exitstatus is the exit status, or -1 if the process was killed by a signal; and message is a message telling what happened. It is the caller's responsibility to display the message. """ if os.WIFEXITED(sts): es = os.WEXITSTATUS(sts) & 0xffff msg = "exit status %s" % es return es, msg elif os.WIFSIGNALED(sts): sig = os.WTERMSIG(sts) msg = "terminated by %s" % signame(sig) if hasattr(os, "WCOREDUMP"): iscore = os.WCOREDUMP(sts) else: iscore = sts & 0x80 if iscore: msg += " (core dumped)" return -1, msg else: msg = "unknown termination cause 0x%04x" % sts return -1, msg _signames = None def signame(sig): """Return a symbolic name for a signal. Return "signal NNN" if there is no corresponding SIG name in the signal module. """ if _signames is None: _init_signames() return _signames.get(sig) or "signal %d" % sig def _init_signames(): global _signames d = {} for k, v in signal.__dict__.items(): k_startswith = getattr(k, "startswith", None) if k_startswith is None: continue if k_startswith("SIG") and not k_startswith("SIG_"): d[v] = k _signames = d class SignalReceiver: def __init__(self): self._signals_recvd = [] def receive(self, sig, frame): if sig not in self._signals_recvd: self._signals_recvd.append(sig) def get_signal(self): if self._signals_recvd: sig = self._signals_recvd.pop(0) else: sig = None return sig # miscellaneous utility functions def expand(s, expansions, name): try: return s % expansions except KeyError as ex: available = list(expansions.keys()) available.sort() raise ValueError( 'Format string %r for %r contains names (%s) which cannot be ' 'expanded. Available names: %s' % (s, name, str(ex), ", ".join(available))) except Exception as ex: raise ValueError( 'Format string %r for %r is badly formatted: %s' % (s, name, str(ex)) ) def make_namespec(group_name, process_name): # we want to refer to the process by its "short name" (a process named # process1 in the group process1 has a name "process1"). 
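signame()/_init_signames() above lazily build a number-to-name map out of the signal module's SIG* constants, falling back to "signal N" for unknown numbers. A minimal standalone equivalent (coercing through int() to avoid enum-vs-int dictionary-key mismatches):

```python
import signal

# Build the number -> "SIGxxx" map the same way _init_signames() does:
# keep SIG* names but skip SIG_DFL / SIG_IGN style handler constants.
_signames = {}
for k, v in signal.__dict__.items():
    if isinstance(k, str) and k.startswith("SIG") and not k.startswith("SIG_"):
        _signames[int(v)] = k

def signame(sig):
    return _signames.get(int(sig)) or "signal %d" % sig

assert signame(signal.SIGTERM) == "SIGTERM"
assert signame(999) == "signal 999"   # no such signal: generic fallback
```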
    # This is for backwards compatibility
    if group_name == process_name:
        name = process_name
    else:
        name = '%s:%s' % (group_name, process_name)
    return name

def split_namespec(namespec):
    names = namespec.split(':', 1)
    if len(names) == 2:
        # group and process name differ
        group_name, process_name = names
        if not process_name or process_name == '*':
            process_name = None
    else:
        # group name is same as process name
        group_name, process_name = namespec, namespec
    return group_name, process_name

# exceptions

class ProcessException(Exception):
    """ Specialized exceptions used when attempting to start a process """

class BadCommand(ProcessException):
    """ Indicates the command could not be parsed properly. """

class NotExecutable(ProcessException):
    """ Indicates that the filespec cannot be executed because its path
    resolves to a file which is not executable, or which is a directory. """

class NotFound(ProcessException):
    """ Indicates that the filespec cannot be executed because it could not
    be found """

class NoPermission(ProcessException):
    """ Indicates that the file cannot be executed because the supervisor
    process does not possess the appropriate UNIX filesystem permission
    to execute the file. """

# file: supervisor-4.2.5/supervisor/pidproxy.py

#!/usr/bin/env python -u

"""pidproxy -- run command and proxy signals to it via its pidfile.

This executable runs a command and then monitors a pidfile.  When this
executable receives a signal, it sends the same signal to the pid in the
pidfile.

Usage: %s <pidfile name> <command> [<cmdarg1> ...]
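make_namespec()/split_namespec() implement supervisor's group:process notation, collapsing to the short name when group and process names match. A self-contained sketch mirroring that behavior:

```python
# Standalone copies of the namespec helpers (same logic, no dependencies).
def make_namespec(group_name, process_name):
    # a process named "web" in group "web" is referred to simply as "web"
    if group_name == process_name:
        return process_name
    return '%s:%s' % (group_name, process_name)

def split_namespec(namespec):
    names = namespec.split(':', 1)
    if len(names) == 2:
        group_name, process_name = names
        if not process_name or process_name == '*':
            process_name = None   # "group:" or "group:*" means the whole group
        return group_name, process_name
    return namespec, namespec     # short name: group == process

assert make_namespec('web', 'web') == 'web'
assert make_namespec('web', 'worker') == 'web:worker'
assert split_namespec('web:worker') == ('web', 'worker')
assert split_namespec('web:*') == ('web', None)
assert split_namespec('web') == ('web', 'web')
```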
""" import os import sys import signal import time class PidProxy: pid = None def __init__(self, args): try: self.pidfile, cmdargs = args[1], args[2:] self.abscmd = os.path.abspath(cmdargs[0]) self.cmdargs = cmdargs except (ValueError, IndexError): self.usage() sys.exit(1) def go(self): self.setsignals() self.pid = os.spawnv(os.P_NOWAIT, self.abscmd, self.cmdargs) while 1: time.sleep(5) try: pid = os.waitpid(-1, os.WNOHANG)[0] except OSError: pid = None if pid: break def usage(self): print(__doc__ % sys.argv[0]) def setsignals(self): signal.signal(signal.SIGTERM, self.passtochild) signal.signal(signal.SIGHUP, self.passtochild) signal.signal(signal.SIGINT, self.passtochild) signal.signal(signal.SIGUSR1, self.passtochild) signal.signal(signal.SIGUSR2, self.passtochild) signal.signal(signal.SIGQUIT, self.passtochild) signal.signal(signal.SIGCHLD, self.reap) def reap(self, sig, frame): # do nothing, we reap our child synchronously pass def passtochild(self, sig, frame): try: with open(self.pidfile, 'r') as f: pid = int(f.read().strip()) except: print("Can't read child pidfile %s!" 
% self.pidfile) return os.kill(pid, sig) if sig in [signal.SIGTERM, signal.SIGINT, signal.SIGQUIT]: sys.exit(0) def main(): pp = PidProxy(sys.argv) pp.go() if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/poller.py0000644000076500000240000001505714340177153017576 0ustar00mnaberezstaffimport select import errno class BasePoller: def __init__(self, options): self.options = options self.initialize() def initialize(self): pass def register_readable(self, fd): raise NotImplementedError def register_writable(self, fd): raise NotImplementedError def unregister_readable(self, fd): raise NotImplementedError def unregister_writable(self, fd): raise NotImplementedError def poll(self, timeout): raise NotImplementedError def before_daemonize(self): pass def after_daemonize(self): pass def close(self): pass class SelectPoller(BasePoller): def initialize(self): self._select = select self._init_fdsets() def register_readable(self, fd): self.readables.add(fd) def register_writable(self, fd): self.writables.add(fd) def unregister_readable(self, fd): self.readables.discard(fd) def unregister_writable(self, fd): self.writables.discard(fd) def unregister_all(self): self._init_fdsets() def poll(self, timeout): try: r, w, x = self._select.select( self.readables, self.writables, [], timeout ) except select.error as err: if err.args[0] == errno.EINTR: self.options.logger.blather('EINTR encountered in poll') return [], [] if err.args[0] == errno.EBADF: self.options.logger.blather('EBADF encountered in poll') self.unregister_all() return [], [] raise return r, w def _init_fdsets(self): self.readables = set() self.writables = set() class PollPoller(BasePoller): def initialize(self): self._poller = select.poll() self.READ = select.POLLIN | select.POLLPRI | select.POLLHUP self.WRITE = select.POLLOUT self.readables = set() self.writables = set() def register_readable(self, fd): 
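pidproxy's passtochild() re-reads the pidfile on every received signal, so a child that has restarted (and rewritten its pidfile) is picked up automatically. A sketch of just the pidfile parse; read_child_pid is a hypothetical name, not part of pidproxy:

```python
import os
import tempfile

def read_child_pid(pidfile):
    """Parse the child's pid out of its pidfile, as passtochild() does."""
    with open(pidfile) as f:
        return int(f.read().strip())

with tempfile.NamedTemporaryFile('w', delete=False) as tf:
    tf.write(' 4242\n')          # whitespace is tolerated via strip()
assert read_child_pid(tf.name) == 4242
# forwarding a received signal would then be:
#   os.kill(read_child_pid(pidfile), sig)
os.unlink(tf.name)
```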
self._poller.register(fd, self.READ) self.readables.add(fd) def register_writable(self, fd): self._poller.register(fd, self.WRITE) self.writables.add(fd) def unregister_readable(self, fd): self.readables.discard(fd) self._poller.unregister(fd) if fd in self.writables: self._poller.register(fd, self.WRITE) def unregister_writable(self, fd): self.writables.discard(fd) self._poller.unregister(fd) if fd in self.readables: self._poller.register(fd, self.READ) def poll(self, timeout): fds = self._poll_fds(timeout) readables, writables = [], [] for fd, eventmask in fds: if self._ignore_invalid(fd, eventmask): continue if eventmask & self.READ: readables.append(fd) if eventmask & self.WRITE: writables.append(fd) return readables, writables def _poll_fds(self, timeout): try: return self._poller.poll(timeout * 1000) except select.error as err: if err.args[0] == errno.EINTR: self.options.logger.blather('EINTR encountered in poll') return [] raise def _ignore_invalid(self, fd, eventmask): if eventmask & select.POLLNVAL: # POLLNVAL means `fd` value is invalid, not open. 
# When a process quits it's `fd`s are closed so there # is no more reason to keep this `fd` registered # If the process restarts it's `fd`s are registered again self._poller.unregister(fd) self.readables.discard(fd) self.writables.discard(fd) return True return False class KQueuePoller(BasePoller): ''' Wrapper for select.kqueue()/kevent() ''' max_events = 1000 def initialize(self): self._kqueue = select.kqueue() self.readables = set() self.writables = set() def register_readable(self, fd): self.readables.add(fd) kevent = select.kevent(fd, filter=select.KQ_FILTER_READ, flags=select.KQ_EV_ADD) self._kqueue_control(fd, kevent) def register_writable(self, fd): self.writables.add(fd) kevent = select.kevent(fd, filter=select.KQ_FILTER_WRITE, flags=select.KQ_EV_ADD) self._kqueue_control(fd, kevent) def unregister_readable(self, fd): kevent = select.kevent(fd, filter=select.KQ_FILTER_READ, flags=select.KQ_EV_DELETE) self.readables.discard(fd) self._kqueue_control(fd, kevent) def unregister_writable(self, fd): kevent = select.kevent(fd, filter=select.KQ_FILTER_WRITE, flags=select.KQ_EV_DELETE) self.writables.discard(fd) self._kqueue_control(fd, kevent) def _kqueue_control(self, fd, kevent): try: self._kqueue.control([kevent], 0) except OSError as error: if error.errno == errno.EBADF: self.options.logger.blather('EBADF encountered in kqueue. 
' 'Invalid file descriptor %s' % fd) else: raise def poll(self, timeout): readables, writables = [], [] try: kevents = self._kqueue.control(None, self.max_events, timeout) except OSError as error: if error.errno == errno.EINTR: self.options.logger.blather('EINTR encountered in poll') return readables, writables raise for kevent in kevents: if kevent.filter == select.KQ_FILTER_READ: readables.append(kevent.ident) if kevent.filter == select.KQ_FILTER_WRITE: writables.append(kevent.ident) return readables, writables def before_daemonize(self): self.close() def after_daemonize(self): self._kqueue = select.kqueue() for fd in self.readables: self.register_readable(fd) for fd in self.writables: self.register_writable(fd) def close(self): self._kqueue.close() self._kqueue = None def implements_poll(): return hasattr(select, 'poll') def implements_kqueue(): return hasattr(select, 'kqueue') if implements_kqueue(): Poller = KQueuePoller elif implements_poll(): Poller = PollPoller else: Poller = SelectPoller ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671836081.0 supervisor-4.2.5/supervisor/process.py0000644000076500000240000011502414351430661017750 0ustar00mnaberezstaffimport errno import functools import os import signal import shlex import time import traceback from supervisor.compat import maxint from supervisor.compat import as_bytes from supervisor.compat import as_string from supervisor.compat import PY2 from supervisor.medusa import asyncore_25 as asyncore from supervisor.states import ProcessStates from supervisor.states import SupervisorStates from supervisor.states import getProcessStateDescription from supervisor.states import STOPPED_STATES from supervisor.options import decode_wait_status from supervisor.options import signame from supervisor.options import ProcessException, BadCommand from supervisor.dispatchers import EventListenerStates from supervisor import events from supervisor.datatypes import RestartUnconditionally 
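The SelectPoller fallback above ultimately reduces to select.select() over sets of file descriptors. A minimal standalone demonstration with a pipe (not supervisor code):

```python
import os
import select

# register a pipe's read end, then poll: with data already buffered,
# select() reports it readable immediately.
r, w = os.pipe()
os.write(w, b'ping')
readable, writable, _ = select.select([r], [w], [], 1.0)
assert r in readable
assert w in writable             # a non-full pipe buffer is writable
assert os.read(r, 4) == b'ping'
os.close(r)
os.close(w)
```

PollPoller and KQueuePoller expose the same register/unregister/poll surface but scale better with many descriptors, which is why the module picks kqueue, then poll, then select at import time.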
from supervisor.socket_manager import SocketManager @functools.total_ordering class Subprocess(object): """A class to manage a subprocess.""" # Initial state; overridden by instance variables pid = 0 # Subprocess pid; 0 when not running config = None # ProcessConfig instance state = None # process state code listener_state = None # listener state code (if we're an event listener) event = None # event currently being processed (if we're an event listener) laststart = 0 # Last time the subprocess was started; 0 if never laststop = 0 # Last time the subprocess was stopped; 0 if never laststopreport = 0 # Last time "waiting for x to stop" logged, to throttle delay = 0 # If nonzero, delay starting or killing until this time administrative_stop = False # true if process has been stopped by an admin system_stop = False # true if process has been stopped by the system killing = False # true if we are trying to kill this process backoff = 0 # backoff counter (to startretries) dispatchers = None # asyncore output dispatchers (keyed by fd) pipes = None # map of channel name to file descriptor # exitstatus = None # status attached to dead process by finish() spawnerr = None # error message attached by spawn() if any group = None # ProcessGroup instance if process is in the group def __init__(self, config): """Constructor. Argument is a ProcessConfig instance. """ self.config = config self.dispatchers = {} self.pipes = {} self.state = ProcessStates.STOPPED def removelogs(self): for dispatcher in self.dispatchers.values(): if hasattr(dispatcher, 'removelogs'): dispatcher.removelogs() def reopenlogs(self): for dispatcher in self.dispatchers.values(): if hasattr(dispatcher, 'reopenlogs'): dispatcher.reopenlogs() def drain(self): for dispatcher in self.dispatchers.values(): # note that we *must* call readable() for every # dispatcher, as it may have side effects for a given # dispatcher (eg. 
call handle_listener_state_change for # event listener processes) if dispatcher.readable(): dispatcher.handle_read_event() if dispatcher.writable(): dispatcher.handle_write_event() def write(self, chars): if not self.pid or self.killing: raise OSError(errno.EPIPE, "Process already closed") stdin_fd = self.pipes['stdin'] if stdin_fd is None: raise OSError(errno.EPIPE, "Process has no stdin channel") dispatcher = self.dispatchers[stdin_fd] if dispatcher.closed: raise OSError(errno.EPIPE, "Process' stdin channel is closed") dispatcher.input_buffer += chars dispatcher.flush() # this must raise EPIPE if the pipe is closed def get_execv_args(self): """Internal: turn a program name into a file name, using $PATH, make sure it exists / is executable, raising a ProcessException if not """ try: commandargs = shlex.split(self.config.command) except ValueError as e: raise BadCommand("can't parse command %r: %s" % \ (self.config.command, str(e))) if commandargs: program = commandargs[0] else: raise BadCommand("command is empty") if "/" in program: filename = program try: st = self.config.options.stat(filename) except OSError: st = None else: path = self.config.get_path() found = None st = None for dir in path: found = os.path.join(dir, program) try: st = self.config.options.stat(found) except OSError: pass else: break if st is None: filename = program else: filename = found # check_execv_args will raise a ProcessException if the execv # args are bogus, we break it out into a separate options # method call here only to service unit tests self.config.options.check_execv_args(filename, commandargs, st) return filename, commandargs event_map = { ProcessStates.BACKOFF: events.ProcessStateBackoffEvent, ProcessStates.FATAL: events.ProcessStateFatalEvent, ProcessStates.UNKNOWN: events.ProcessStateUnknownEvent, ProcessStates.STOPPED: events.ProcessStateStoppedEvent, ProcessStates.EXITED: events.ProcessStateExitedEvent, ProcessStates.RUNNING: events.ProcessStateRunningEvent, 
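get_execv_args() resolves the configured command in two steps: shlex-split it, then search $PATH only when the program name contains no '/'. A sketch of that flow, using shutil.which as a stand-in for the manual PATH walk in the source (resolve_command is a hypothetical name):

```python
import shlex
import shutil

def resolve_command(command):
    """Split a command string and resolve its program to a filename."""
    args = shlex.split(command)      # raises ValueError on unbalanced quotes
    if not args:
        raise ValueError("command is empty")
    program = args[0]
    if '/' in program:
        filename = program                           # explicit path: use as-is
    else:
        filename = shutil.which(program) or program  # search $PATH
    return filename, args

assert shlex.split("echo 'hello world'") == ['echo', 'hello world']
assert resolve_command('/bin/true -v')[0] == '/bin/true'
```

The real method additionally stat()s the candidate and calls check_execv_args() so a bad command surfaces as a ProcessException (BadCommand, NotFound, NotExecutable, NoPermission) rather than a failed execve in the child.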
ProcessStates.STARTING: events.ProcessStateStartingEvent, ProcessStates.STOPPING: events.ProcessStateStoppingEvent, } def change_state(self, new_state, expected=True): old_state = self.state if new_state is old_state: # exists for unit tests return False self.state = new_state if new_state == ProcessStates.BACKOFF: now = time.time() self.backoff += 1 self.delay = now + self.backoff event_class = self.event_map.get(new_state) if event_class is not None: event = event_class(self, old_state, expected) events.notify(event) def _assertInState(self, *states): if self.state not in states: current_state = getProcessStateDescription(self.state) allowable_states = ' '.join(map(getProcessStateDescription, states)) processname = as_string(self.config.name) raise AssertionError('Assertion failed for %s: %s not in %s' % ( processname, current_state, allowable_states)) def record_spawnerr(self, msg): self.spawnerr = msg self.config.options.logger.info("spawnerr: %s" % msg) def spawn(self): """Start the subprocess. It must not be running already. Return the process id. If the fork() call fails, return None. 
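change_state()'s BACKOFF branch uses a linear backoff: each failed start increments the retry counter and pushes the next attempt out by that many seconds. The arithmetic, sketched standalone (next_retry_delay is a hypothetical name):

```python
def next_retry_delay(now, backoff):
    """Return (new_backoff, absolute time of the next start attempt)."""
    backoff += 1                 # one more failed start
    return backoff, now + backoff

# first failure at t=100: retry no earlier than t=101
backoff, delay = next_retry_delay(100.0, 0)
assert (backoff, delay) == (1, 101.0)
# second failure at t=101: retry no earlier than t=103
backoff, delay = next_retry_delay(delay, backoff)
assert (backoff, delay) == (2, 103.0)
```

transition() stops retrying once backoff exceeds startretries, moving the process BACKOFF -> FATAL via give_up().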
""" options = self.config.options processname = as_string(self.config.name) if self.pid: msg = 'process \'%s\' already running' % processname options.logger.warn(msg) return self.killing = False self.spawnerr = None self.exitstatus = None self.system_stop = False self.administrative_stop = False self.laststart = time.time() self._assertInState(ProcessStates.EXITED, ProcessStates.FATAL, ProcessStates.BACKOFF, ProcessStates.STOPPED) self.change_state(ProcessStates.STARTING) try: filename, argv = self.get_execv_args() except ProcessException as what: self.record_spawnerr(what.args[0]) self._assertInState(ProcessStates.STARTING) self.change_state(ProcessStates.BACKOFF) return try: self.dispatchers, self.pipes = self.config.make_dispatchers(self) except (OSError, IOError) as why: code = why.args[0] if code == errno.EMFILE: # too many file descriptors open msg = 'too many open files to spawn \'%s\'' % processname else: msg = 'unknown error making dispatchers for \'%s\': %s' % ( processname, errno.errorcode.get(code, code)) self.record_spawnerr(msg) self._assertInState(ProcessStates.STARTING) self.change_state(ProcessStates.BACKOFF) return try: pid = options.fork() except OSError as why: code = why.args[0] if code == errno.EAGAIN: # process table full msg = ('Too many processes in process table to spawn \'%s\'' % processname) else: msg = 'unknown error during fork for \'%s\': %s' % ( processname, errno.errorcode.get(code, code)) self.record_spawnerr(msg) self._assertInState(ProcessStates.STARTING) self.change_state(ProcessStates.BACKOFF) options.close_parent_pipes(self.pipes) options.close_child_pipes(self.pipes) return if pid != 0: return self._spawn_as_parent(pid) else: return self._spawn_as_child(filename, argv) def _spawn_as_parent(self, pid): # Parent self.pid = pid options = self.config.options options.close_child_pipes(self.pipes) options.logger.info('spawned: \'%s\' with pid %s' % (as_string(self.config.name), pid)) self.spawnerr = None self.delay = time.time() + 
self.config.startsecs options.pidhistory[pid] = self return pid def _prepare_child_fds(self): options = self.config.options options.dup2(self.pipes['child_stdin'], 0) options.dup2(self.pipes['child_stdout'], 1) if self.config.redirect_stderr: options.dup2(self.pipes['child_stdout'], 2) else: options.dup2(self.pipes['child_stderr'], 2) for i in range(3, options.minfds): options.close_fd(i) def _spawn_as_child(self, filename, argv): options = self.config.options try: # prevent child from receiving signals sent to the # parent by calling os.setpgrp to create a new process # group for the child; this prevents, for instance, # the case of child processes being sent a SIGINT when # running supervisor in foreground mode and Ctrl-C in # the terminal window running supervisord is pressed. # Presumably it also prevents HUP, etc received by # supervisord from being sent to children. options.setpgrp() self._prepare_child_fds() # sending to fd 2 will put this output in the stderr log # set user setuid_msg = self.set_uid() if setuid_msg: uid = self.config.uid msg = "couldn't setuid to %s: %s\n" % (uid, setuid_msg) options.write(2, "supervisor: " + msg) return # finally clause will exit the child process # set environment env = os.environ.copy() env['SUPERVISOR_ENABLED'] = '1' serverurl = self.config.serverurl if serverurl is None: # unset serverurl = self.config.options.serverurl # might still be None if serverurl: env['SUPERVISOR_SERVER_URL'] = serverurl env['SUPERVISOR_PROCESS_NAME'] = self.config.name if self.group: env['SUPERVISOR_GROUP_NAME'] = self.group.config.name if self.config.environment is not None: env.update(self.config.environment) # change directory cwd = self.config.directory try: if cwd is not None: options.chdir(cwd) except OSError as why: code = errno.errorcode.get(why.args[0], why.args[0]) msg = "couldn't chdir to %s: %s\n" % (cwd, code) options.write(2, "supervisor: " + msg) return # finally clause will exit the child process # set umask, then execve try: 
if self.config.umask is not None: options.setumask(self.config.umask) options.execve(filename, argv, env) except OSError as why: code = errno.errorcode.get(why.args[0], why.args[0]) msg = "couldn't exec %s: %s\n" % (argv[0], code) options.write(2, "supervisor: " + msg) except: (file, fun, line), t,v,tbinfo = asyncore.compact_traceback() error = '%s, %s: file: %s line: %s' % (t, v, file, line) msg = "couldn't exec %s: %s\n" % (filename, error) options.write(2, "supervisor: " + msg) # this point should only be reached if execve failed. # the finally clause will exit the child process. finally: options.write(2, "supervisor: child process was not spawned\n") options._exit(127) # exit process with code for spawn failure def _check_and_adjust_for_system_clock_rollback(self, test_time): """ Check if system clock has rolled backward beyond test_time. If so, set affected timestamps to test_time. """ if self.state == ProcessStates.STARTING: if test_time < self.laststart: self.laststart = test_time; if self.delay > 0 and test_time < (self.delay - self.config.startsecs): self.delay = test_time + self.config.startsecs elif self.state == ProcessStates.RUNNING: if test_time > self.laststart and test_time < (self.laststart + self.config.startsecs): self.laststart = test_time - self.config.startsecs elif self.state == ProcessStates.STOPPING: if test_time < self.laststopreport: self.laststopreport = test_time; if self.delay > 0 and test_time < (self.delay - self.config.stopwaitsecs): self.delay = test_time + self.config.stopwaitsecs elif self.state == ProcessStates.BACKOFF: if self.delay > 0 and test_time < (self.delay - self.backoff): self.delay = test_time + self.backoff def stop(self): """ Administrative stop """ self.administrative_stop = True self.laststopreport = 0 return self.kill(self.config.stopsignal) def stop_report(self): """ Log a 'waiting for x to stop' message with throttling. 
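_check_and_adjust_for_system_clock_rollback() clamps recorded deadlines when time.time() jumps backward, so a pending delay is re-anchored to the new clock rather than stranded far in the future. The core comparison, sketched standalone (adjust_delay is a hypothetical name):

```python
def adjust_delay(now, delay, interval):
    """Re-anchor an absolute deadline after a backward clock jump.

    `delay` was originally computed as some past time plus `interval`;
    if `now` is earlier than that original start time, the clock rolled
    back and the deadline is rebuilt from `now`.
    """
    if delay > 0 and now < (delay - interval):
        return now + interval
    return delay

# normal case: clock moved forward, deadline unchanged
assert adjust_delay(now=105.0, delay=110.0, interval=10.0) == 110.0
# rollback: clock jumped back to t=50, deadline re-anchored to 50 + 10
assert adjust_delay(now=50.0, delay=110.0, interval=10.0) == 60.0
```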
""" if self.state == ProcessStates.STOPPING: now = time.time() self._check_and_adjust_for_system_clock_rollback(now) if now > (self.laststopreport + 2): # every 2 seconds self.config.options.logger.info( 'waiting for %s to stop' % as_string(self.config.name)) self.laststopreport = now def give_up(self): self.delay = 0 self.backoff = 0 self.system_stop = True self._assertInState(ProcessStates.BACKOFF) self.change_state(ProcessStates.FATAL) def kill(self, sig): """Send a signal to the subprocess with the intention to kill it (to make it exit). This may or may not actually kill it. Return None if the signal was sent, or an error message string if an error occurred or if the subprocess is not running. """ now = time.time() options = self.config.options processname = as_string(self.config.name) # If the process is in BACKOFF and we want to stop or kill it, then # BACKOFF -> STOPPED. This is needed because if startretries is a # large number and the process isn't starting successfully, the stop # request would be blocked for a long time waiting for the retries. if self.state == ProcessStates.BACKOFF: msg = ("Attempted to kill %s, which is in BACKOFF state." 
% processname) options.logger.debug(msg) self.change_state(ProcessStates.STOPPED) return None if not self.pid: msg = ("attempted to kill %s with sig %s but it wasn't running" % (processname, signame(sig))) options.logger.debug(msg) return msg # If we're in the stopping state, then we've already sent the stop # signal and this is the kill signal if self.state == ProcessStates.STOPPING: killasgroup = self.config.killasgroup else: killasgroup = self.config.stopasgroup as_group = "" if killasgroup: as_group = "process group " options.logger.debug('killing %s (pid %s) %swith signal %s' % (processname, self.pid, as_group, signame(sig)) ) # RUNNING/STARTING/STOPPING -> STOPPING self.killing = True self.delay = now + self.config.stopwaitsecs # we will already be in the STOPPING state if we're doing a # SIGKILL as a result of overrunning stopwaitsecs self._assertInState(ProcessStates.RUNNING, ProcessStates.STARTING, ProcessStates.STOPPING) self.change_state(ProcessStates.STOPPING) pid = self.pid if killasgroup: # send to the whole process group instead pid = -self.pid try: try: options.kill(pid, sig) except OSError as exc: if exc.errno == errno.ESRCH: msg = ("unable to signal %s (pid %s), it probably just exited " "on its own: %s" % (processname, self.pid, str(exc))) options.logger.debug(msg) # we could change the state here but we intentionally do # not. we will do it during normal SIGCHLD processing. return None raise except: tb = traceback.format_exc() msg = 'unknown problem killing %s (%s):%s' % (processname, self.pid, tb) options.logger.critical(msg) self.change_state(ProcessStates.UNKNOWN) self.killing = False self.delay = 0 return msg return None def signal(self, sig): """Send a signal to the subprocess, without intending to kill it. Return None if the signal was sent, or an error message string if an error occurred or if the subprocess is not running. 
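When killasgroup is in effect, kill() negates the pid before calling os.kill(): POSIX delivers a signal sent to a negative pid to every member of process group abs(pid), which is why the child called setpgrp() at spawn time. Sketched trivially:

```python
def target_pid(pid, killasgroup):
    """Pid to pass to os.kill(): negative targets the whole process group."""
    return -pid if killasgroup else pid

assert target_pid(1234, False) == 1234
assert target_pid(1234, True) == -1234   # delivered to process group 1234
```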
""" options = self.config.options processname = as_string(self.config.name) if not self.pid: msg = ("attempted to send %s sig %s but it wasn't running" % (processname, signame(sig))) options.logger.debug(msg) return msg options.logger.debug('sending %s (pid %s) sig %s' % (processname, self.pid, signame(sig)) ) self._assertInState(ProcessStates.RUNNING, ProcessStates.STARTING, ProcessStates.STOPPING) try: try: options.kill(self.pid, sig) except OSError as exc: if exc.errno == errno.ESRCH: msg = ("unable to signal %s (pid %s), it probably just now exited " "on its own: %s" % (processname, self.pid, str(exc))) options.logger.debug(msg) # we could change the state here but we intentionally do # not. we will do it during normal SIGCHLD processing. return None raise except: tb = traceback.format_exc() msg = 'unknown problem sending sig %s (%s):%s' % ( processname, self.pid, tb) options.logger.critical(msg) self.change_state(ProcessStates.UNKNOWN) return msg return None def finish(self, pid, sts): """ The process was reaped and we need to report and manage its state """ self.drain() es, msg = decode_wait_status(sts) now = time.time() self._check_and_adjust_for_system_clock_rollback(now) self.laststop = now processname = as_string(self.config.name) if now > self.laststart: too_quickly = now - self.laststart < self.config.startsecs else: too_quickly = False self.config.options.logger.warn( "process \'%s\' (%s) laststart time is in the future, don't " "know how long process was running so assuming it did " "not exit too quickly" % (processname, self.pid)) exit_expected = es in self.config.exitcodes if self.killing: # likely the result of a stop request # implies STOPPING -> STOPPED self.killing = False self.delay = 0 self.exitstatus = es msg = "stopped: %s (%s)" % (processname, msg) self._assertInState(ProcessStates.STOPPING) self.change_state(ProcessStates.STOPPED) if exit_expected: self.config.options.logger.info(msg) else: self.config.options.logger.warn(msg) elif 
too_quickly: # the program did not stay up long enough to make it to RUNNING # implies STARTING -> BACKOFF self.exitstatus = None self.spawnerr = 'Exited too quickly (process log may have details)' msg = "exited: %s (%s)" % (processname, msg + "; not expected") self._assertInState(ProcessStates.STARTING) self.change_state(ProcessStates.BACKOFF) self.config.options.logger.warn(msg) else: # this finish was not the result of a stop request, the # program was in the RUNNING state but exited # implies RUNNING -> EXITED normally but see next comment self.delay = 0 self.backoff = 0 self.exitstatus = es # if the process was STARTING but a system time change causes # self.laststart to be in the future, the normal STARTING->RUNNING # transition can be subverted so we perform the transition here. if self.state == ProcessStates.STARTING: self.change_state(ProcessStates.RUNNING) self._assertInState(ProcessStates.RUNNING) if exit_expected: # expected exit code msg = "exited: %s (%s)" % (processname, msg + "; expected") self.change_state(ProcessStates.EXITED, expected=True) self.config.options.logger.info(msg) else: # unexpected exit code self.spawnerr = 'Bad exit code %s' % es msg = "exited: %s (%s)" % (processname, msg + "; not expected") self.change_state(ProcessStates.EXITED, expected=False) self.config.options.logger.warn(msg) self.pid = 0 self.config.options.close_parent_pipes(self.pipes) self.pipes = {} self.dispatchers = {} # if we died before we processed the current event (only happens # if we're an event listener), notify the event system that this # event was rejected so it can be processed again. if self.event is not None: # Note: this should only be true if we were in the BUSY # state when finish() was called. 
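finish() decides between STARTING -> BACKOFF and the normal exit paths partly via this "exited too quickly" test, which is deliberately disabled when laststart is in the future (a clock rollback). Sketched standalone (exited_too_quickly is a hypothetical name):

```python
def exited_too_quickly(now, laststart, startsecs):
    """True when the process died before surviving startsecs seconds."""
    # a laststart in the future (clock rollback) disables the check
    return now > laststart and (now - laststart) < startsecs

# died 1s after a start with startsecs=5: too quick, goes to BACKOFF
assert exited_too_quickly(now=10.0, laststart=9.0, startsecs=5.0) is True
# survived 11s: a normal EXITED transition
assert exited_too_quickly(now=20.0, laststart=9.0, startsecs=5.0) is False
# laststart in the future: check disabled
assert exited_too_quickly(now=5.0, laststart=9.0, startsecs=5.0) is False
```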
            events.notify(events.EventRejectedEvent(self, self.event))
            self.event = None

    def set_uid(self):
        if self.config.uid is None:
            return
        msg = self.config.options.drop_privileges(self.config.uid)
        return msg

    def __lt__(self, other):
        return self.config.priority < other.config.priority

    def __eq__(self, other):
        # sort by priority
        return self.config.priority == other.config.priority

    def __repr__(self):
        # repr can't return anything other than a native string,
        # but the name might be unicode - a problem on Python 2.
        name = self.config.name
        if PY2:
            name = as_string(name).encode('unicode-escape')
        return '<Subprocess at %s with name %s in state %s>' % (
            id(self), name, getProcessStateDescription(self.get_state()))

    def get_state(self):
        return self.state

    def transition(self):
        now = time.time()
        state = self.state

        self._check_and_adjust_for_system_clock_rollback(now)

        logger = self.config.options.logger

        if self.config.options.mood > SupervisorStates.RESTARTING:
            # dont start any processes if supervisor is shutting down
            if state == ProcessStates.EXITED:
                if self.config.autorestart:
                    if self.config.autorestart is RestartUnconditionally:
                        # EXITED -> STARTING
                        self.spawn()
                    else: # autorestart is RestartWhenExitUnexpected
                        if self.exitstatus not in self.config.exitcodes:
                            # EXITED -> STARTING
                            self.spawn()
            elif state == ProcessStates.STOPPED and not self.laststart:
                if self.config.autostart:
                    # STOPPED -> STARTING
                    self.spawn()
            elif state == ProcessStates.BACKOFF:
                if self.backoff <= self.config.startretries:
                    if now > self.delay:
                        # BACKOFF -> STARTING
                        self.spawn()

        processname = as_string(self.config.name)
        if state == ProcessStates.STARTING:
            if now - self.laststart > self.config.startsecs:
                # STARTING -> RUNNING if the proc has started
                # successfully and it has stayed up for at least
                # proc.config.startsecs,
                self.delay = 0
                self.backoff = 0
                self._assertInState(ProcessStates.STARTING)
                self.change_state(ProcessStates.RUNNING)
                msg = (
                    'entered RUNNING state, process has stayed up for '
                    '> than %s seconds (startsecs)' % self.config.startsecs)
logger.info('success: %s %s' % (processname, msg)) if state == ProcessStates.BACKOFF: if self.backoff > self.config.startretries: # BACKOFF -> FATAL if the proc has exceeded its number # of retries self.give_up() msg = ('entered FATAL state, too many start retries too ' 'quickly') logger.info('gave up: %s %s' % (processname, msg)) elif state == ProcessStates.STOPPING: time_left = self.delay - now if time_left <= 0: # kill processes which are taking too long to stop with a final # sigkill. if this doesn't kill it, the process will be stuck # in the STOPPING state forever. self.config.options.logger.warn( 'killing \'%s\' (%s) with SIGKILL' % (processname, self.pid)) self.kill(signal.SIGKILL) class FastCGISubprocess(Subprocess): """Extends Subprocess class to handle FastCGI subprocesses""" def __init__(self, config): Subprocess.__init__(self, config) self.fcgi_sock = None def before_spawn(self): """ The FastCGI socket needs to be created by the parent before we fork """ if self.group is None: raise NotImplementedError('No group set for FastCGISubprocess') if not hasattr(self.group, 'socket_manager'): raise NotImplementedError('No SocketManager set for ' '%s:%s' % (self.group, dir(self.group))) self.fcgi_sock = self.group.socket_manager.get_socket() def spawn(self): """ Overrides Subprocess.spawn() so we can hook in before it happens """ self.before_spawn() pid = Subprocess.spawn(self) if pid is None: #Remove object reference to decrement the reference count on error self.fcgi_sock = None return pid def after_finish(self): """ Releases reference to FastCGI socket when process is reaped """ #Remove object reference to decrement the reference count self.fcgi_sock = None def finish(self, pid, sts): """ Overrides Subprocess.finish() so we can hook in after it happens """ retval = Subprocess.finish(self, pid, sts) self.after_finish() return retval def _prepare_child_fds(self): """ Overrides Subprocess._prepare_child_fds() The FastCGI socket needs to be set to file 
descriptor 0 in the child """ sock_fd = self.fcgi_sock.fileno() options = self.config.options options.dup2(sock_fd, 0) options.dup2(self.pipes['child_stdout'], 1) if self.config.redirect_stderr: options.dup2(self.pipes['child_stdout'], 2) else: options.dup2(self.pipes['child_stderr'], 2) for i in range(3, options.minfds): options.close_fd(i) @functools.total_ordering class ProcessGroupBase(object): def __init__(self, config): self.config = config self.processes = {} for pconfig in self.config.process_configs: self.processes[pconfig.name] = pconfig.make_process(self) def __lt__(self, other): return self.config.priority < other.config.priority def __eq__(self, other): return self.config.priority == other.config.priority def __repr__(self): # repr can't return anything other than a native string, # but the name might be unicode - a problem on Python 2. name = self.config.name if PY2: name = as_string(name).encode('unicode-escape') return '<%s instance at %s named %s>' % (self.__class__, id(self), name) def removelogs(self): for process in self.processes.values(): process.removelogs() def reopenlogs(self): for process in self.processes.values(): process.reopenlogs() def stop_all(self): processes = list(self.processes.values()) processes.sort() processes.reverse() # stop in desc priority order for proc in processes: state = proc.get_state() if state == ProcessStates.RUNNING: # RUNNING -> STOPPING proc.stop() elif state == ProcessStates.STARTING: # STARTING -> STOPPING proc.stop() elif state == ProcessStates.BACKOFF: # BACKOFF -> FATAL proc.give_up() def get_unstopped_processes(self): """ Processes which aren't in a state that is considered 'stopped' """ return [ x for x in self.processes.values() if x.get_state() not in STOPPED_STATES ] def get_dispatchers(self): dispatchers = {} for process in self.processes.values(): dispatchers.update(process.dispatchers) return dispatchers def before_remove(self): pass class ProcessGroup(ProcessGroupBase): def transition(self): for 
proc in self.processes.values(): proc.transition() class FastCGIProcessGroup(ProcessGroup): def __init__(self, config, **kwargs): ProcessGroup.__init__(self, config) sockManagerKlass = kwargs.get('socketManager', SocketManager) self.socket_manager = sockManagerKlass(config.socket_config, logger=config.options.logger) # It's not required to call get_socket() here but we want # to fail early during start up if there is a config error try: self.socket_manager.get_socket() except Exception as e: raise ValueError( 'Could not create FastCGI socket %s: %s' % ( self.socket_manager.config(), e) ) class EventListenerPool(ProcessGroupBase): def __init__(self, config): ProcessGroupBase.__init__(self, config) self.event_buffer = [] self.serial = -1 self.last_dispatch = 0 self.dispatch_throttle = 0 # in seconds: .00195 is an interesting one self._subscribe() def handle_rejected(self, event): process = event.process procs = self.processes.values() if process in procs: # this is one of our processes # rebuffer the event self._acceptEvent(event.event, head=True) def transition(self): processes = self.processes.values() dispatch_capable = False for process in processes: process.transition() # this is redundant, we do it in _dispatchEvent too, but we # want to reduce function call overhead if process.state == ProcessStates.RUNNING: if process.listener_state == EventListenerStates.READY: dispatch_capable = True if dispatch_capable: if self.dispatch_throttle: now = time.time() if now < self.last_dispatch: # The system clock appears to have moved backward # Reset self.last_dispatch accordingly self.last_dispatch = now; if now - self.last_dispatch < self.dispatch_throttle: return self.dispatch() def before_remove(self): self._unsubscribe() def dispatch(self): while self.event_buffer: # dispatch the oldest event event = self.event_buffer.pop(0) ok = self._dispatchEvent(event) if not ok: # if we can't dispatch an event, rebuffer it and stop trying # to process any further events in the 
buffer self._acceptEvent(event, head=True) break self.last_dispatch = time.time() def _acceptEvent(self, event, head=False): # events are required to be instances # this has a side effect to fail with an attribute error on 'old style' # classes processname = as_string(self.config.name) if not hasattr(event, 'serial'): event.serial = new_serial(GlobalSerial) if not hasattr(event, 'pool_serials'): event.pool_serials = {} if self.config.name not in event.pool_serials: event.pool_serials[self.config.name] = new_serial(self) else: self.config.options.logger.debug( 'rebuffering event %s for pool %s (buf size=%d, max=%d)' % ( (event.serial, processname, len(self.event_buffer), self.config.buffer_size))) if len(self.event_buffer) >= self.config.buffer_size: if self.event_buffer: # discard the oldest event discarded_event = self.event_buffer.pop(0) self.config.options.logger.error( 'pool %s event buffer overflowed, discarding event %s' % ( (processname, discarded_event.serial))) if head: self.event_buffer.insert(0, event) else: self.event_buffer.append(event) def _dispatchEvent(self, event): pool_serial = event.pool_serials[self.config.name] for process in self.processes.values(): if process.state != ProcessStates.RUNNING: continue if process.listener_state == EventListenerStates.READY: processname = as_string(process.config.name) payload = event.payload() try: event_type = event.__class__ serial = event.serial envelope = self._eventEnvelope(event_type, serial, pool_serial, payload) process.write(as_bytes(envelope)) except OSError as why: if why.args[0] != errno.EPIPE: raise self.config.options.logger.debug( 'epipe occurred while sending event %s ' 'to listener %s, listener state unchanged' % ( event.serial, processname)) continue process.listener_state = EventListenerStates.BUSY process.event = event self.config.options.logger.debug( 'event %s sent to listener %s' % ( event.serial, processname)) return True return False def _eventEnvelope(self, event_type, serial, 
                       pool_serial, payload):
        event_name = events.getEventNameByType(event_type)
        payload_len = len(payload)
        D = {
            'ver':'3.0',
            'sid':self.config.options.identifier,
            'serial':serial,
            'pool_name':self.config.name,
            'pool_serial':pool_serial,
            'event_name':event_name,
            'len':payload_len,
            'payload':payload,
            }
        return ('ver:%(ver)s server:%(sid)s serial:%(serial)s '
                'pool:%(pool_name)s poolserial:%(pool_serial)s '
                'eventname:%(event_name)s len:%(len)s\n%(payload)s' % D)

    def _subscribe(self):
        for event_type in self.config.pool_events:
            events.subscribe(event_type, self._acceptEvent)
        events.subscribe(events.EventRejectedEvent, self.handle_rejected)

    def _unsubscribe(self):
        for event_type in self.config.pool_events:
            events.unsubscribe(event_type, self._acceptEvent)
        events.unsubscribe(events.EventRejectedEvent, self.handle_rejected)


class GlobalSerial(object):
    def __init__(self):
        self.serial = -1

GlobalSerial = GlobalSerial() # singleton

def new_serial(inst):
    if inst.serial == maxint:
        inst.serial = -1
    inst.serial += 1
    return inst.serial


# supervisor-4.2.5/supervisor/rpcinterface.py

import os
import time
import datetime
import errno
import types

from supervisor.compat import as_string
from supervisor.compat import as_bytes
from supervisor.compat import unicode
from supervisor.datatypes import (
    Automatic,
    signal_number,
    )
from supervisor.options import readFile
from supervisor.options import tailFile
from supervisor.options import BadCommand
from supervisor.options import NotExecutable
from supervisor.options import NotFound
from supervisor.options import NoPermission
from supervisor.options import make_namespec
from supervisor.options import split_namespec
from supervisor.options import VERSION
from supervisor.events import notify
from supervisor.events import RemoteCommunicationEvent
from supervisor.http import NOT_DONE_YET
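The envelope built by `_eventEnvelope` is a single header line of space-separated `key:value` tokens, followed by exactly `len` bytes of payload. A minimal sketch of the listener-side parse (the helper name here is mine, not supervisor's; `supervisor.childutils` provides the real tooling):

```python
def parse_event_envelope(envelope):
    """Split a supervisor event envelope into (headers, payload).

    The header is the first line; each token is 'key:value'.  The
    payload is exactly int(headers['len']) bytes after the newline.
    """
    header_line, _, rest = envelope.partition('\n')
    headers = dict(token.split(':', 1) for token in header_line.split())
    payload = rest[:int(headers['len'])]
    return headers, payload

envelope = ('ver:3.0 server:supervisor serial:21 pool:listener '
            'poolserial:10 eventname:PROCESS_STATE_RUNNING len:17\n'
            'processname:cat x')
headers, payload = parse_event_envelope(envelope)
print(headers['eventname'], repr(payload))
```

Note the payload length is taken from the `len` header rather than the remainder of the stream, since a real listener reads a continuous stdin stream where the next envelope follows immediately.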
from supervisor.xmlrpc import ( capped_int, Faults, RPCError, ) from supervisor.states import SupervisorStates from supervisor.states import getSupervisorStateDescription from supervisor.states import ProcessStates from supervisor.states import getProcessStateDescription from supervisor.states import ( RUNNING_STATES, STOPPED_STATES, SIGNALLABLE_STATES ) API_VERSION = '3.0' class SupervisorNamespaceRPCInterface: def __init__(self, supervisord): self.supervisord = supervisord def _update(self, text): self.update_text = text # for unit tests, mainly if ( isinstance(self.supervisord.options.mood, int) and self.supervisord.options.mood < SupervisorStates.RUNNING ): raise RPCError(Faults.SHUTDOWN_STATE) # RPC API methods def getAPIVersion(self): """ Return the version of the RPC API used by supervisord @return string version id """ self._update('getAPIVersion') return API_VERSION getVersion = getAPIVersion # b/w compatibility with releases before 3.0 def getSupervisorVersion(self): """ Return the version of the supervisor package in use by supervisord @return string version id """ self._update('getSupervisorVersion') return VERSION def getIdentification(self): """ Return identifying string of supervisord @return string identifier identifying string """ self._update('getIdentification') return self.supervisord.options.identifier def getState(self): """ Return current state of supervisord as a struct @return struct A struct with keys int statecode, string statename """ self._update('getState') state = self.supervisord.options.mood statename = getSupervisorStateDescription(state) data = { 'statecode':state, 'statename':statename, } return data def getPID(self): """ Return the PID of supervisord @return int PID """ self._update('getPID') return self.supervisord.options.get_pid() def readLog(self, offset, length): """ Read length bytes from the main log starting at offset @param int offset offset to start reading from. @param int length number of bytes to read from the log. 
@return string result Bytes of log """ self._update('readLog') logfile = self.supervisord.options.logfile if logfile is None or not os.path.exists(logfile): raise RPCError(Faults.NO_FILE, logfile) try: return as_string(readFile(logfile, int(offset), int(length))) except ValueError as inst: why = inst.args[0] raise RPCError(getattr(Faults, why)) readMainLog = readLog # b/w compatibility with releases before 2.1 def clearLog(self): """ Clear the main log. @return boolean result always returns True unless error """ self._update('clearLog') logfile = self.supervisord.options.logfile if logfile is None or not self.supervisord.options.exists(logfile): raise RPCError(Faults.NO_FILE) # there is a race condition here, but ignore it. try: self.supervisord.options.remove(logfile) except (OSError, IOError): raise RPCError(Faults.FAILED) for handler in self.supervisord.options.logger.handlers: if hasattr(handler, 'reopen'): self.supervisord.options.logger.info('reopening log file') handler.reopen() return True def shutdown(self): """ Shut down the supervisor process @return boolean result always returns True unless error """ self._update('shutdown') self.supervisord.options.mood = SupervisorStates.SHUTDOWN return True def restart(self): """ Restart the supervisor process @return boolean result always return True unless error """ self._update('restart') self.supervisord.options.mood = SupervisorStates.RESTARTING return True def reloadConfig(self): """ Reload the configuration. 
The result contains three arrays containing names of process groups: * `added` gives the process groups that have been added * `changed` gives the process groups whose contents have changed * `removed` gives the process groups that are no longer in the configuration @return array result [[added, changed, removed]] """ self._update('reloadConfig') try: self.supervisord.options.process_config(do_usage=False) except ValueError as msg: raise RPCError(Faults.CANT_REREAD, msg) added, changed, removed = self.supervisord.diff_to_active() added = [group.name for group in added] changed = [group.name for group in changed] removed = [group.name for group in removed] return [[added, changed, removed]] # cannot return len > 1, apparently def addProcessGroup(self, name): """ Update the config for a running process from config file. @param string name name of process group to add @return boolean result true if successful """ self._update('addProcessGroup') for config in self.supervisord.options.process_group_configs: if config.name == name: result = self.supervisord.add_process_group(config) if not result: raise RPCError(Faults.ALREADY_ADDED, name) return True raise RPCError(Faults.BAD_NAME, name) def removeProcessGroup(self, name): """ Remove a stopped process from the active configuration. 
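Methods such as `reloadConfig()` and `addProcessGroup()` are normally invoked over XML-RPC. A minimal client sketch, assuming an `inet_http_server` listening on `http://localhost:9001` (that address is an assumption, not part of this file):

```python
from xmlrpc.client import ServerProxy  # xmlrpclib on Python 2

# Constructing the proxy does not contact the server yet.
server = ServerProxy('http://localhost:9001/RPC2')  # hypothetical address

# With supervisord running, reloadConfig() returns [[added, changed, removed]]
# (a single-element outer array), so the useful triple is element 0:
# added, changed, removed = server.supervisor.reloadConfig()[0]
# for name in added:
#     server.supervisor.addProcessGroup(name)
```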
@param string name name of process group to remove @return boolean result Indicates whether the removal was successful """ self._update('removeProcessGroup') if name not in self.supervisord.process_groups: raise RPCError(Faults.BAD_NAME, name) result = self.supervisord.remove_process_group(name) if not result: raise RPCError(Faults.STILL_RUNNING, name) return True def _getAllProcesses(self, lexical=False): # if lexical is true, return processes sorted in lexical order, # otherwise, sort in priority order all_processes = [] if lexical: group_names = list(self.supervisord.process_groups.keys()) group_names.sort() for group_name in group_names: group = self.supervisord.process_groups[group_name] process_names = list(group.processes.keys()) process_names.sort() for process_name in process_names: process = group.processes[process_name] all_processes.append((group, process)) else: groups = list(self.supervisord.process_groups.values()) groups.sort() # asc by priority for group in groups: processes = list(group.processes.values()) processes.sort() # asc by priority for process in processes: all_processes.append((group, process)) return all_processes def _getGroupAndProcess(self, name): # get process to start from name group_name, process_name = split_namespec(name) group = self.supervisord.process_groups.get(group_name) if group is None: raise RPCError(Faults.BAD_NAME, name) if process_name is None: return group, None process = group.processes.get(process_name) if process is None: raise RPCError(Faults.BAD_NAME, name) return group, process def startProcess(self, name, wait=True): """ Start a process @param string name Process name (or ``group:name``, or ``group:*``) @param boolean wait Wait for process to be fully started @return boolean result Always true unless error """ self._update('startProcess') group, process = self._getGroupAndProcess(name) if process is None: group_name, process_name = split_namespec(name) return self.startProcessGroup(group_name, wait) # test 
filespec, don't bother trying to spawn if we know it will # eventually fail try: filename, argv = process.get_execv_args() except NotFound as why: raise RPCError(Faults.NO_FILE, why.args[0]) except (BadCommand, NotExecutable, NoPermission) as why: raise RPCError(Faults.NOT_EXECUTABLE, why.args[0]) if process.get_state() in RUNNING_STATES: raise RPCError(Faults.ALREADY_STARTED, name) if process.get_state() == ProcessStates.UNKNOWN: raise RPCError(Faults.FAILED, "%s is in an unknown process state" % name) process.spawn() # We call reap() in order to more quickly obtain the side effects of # process.finish(), which reap() eventually ends up calling. This # might be the case if the spawn() was successful but then the process # died before its startsecs elapsed or it exited with an unexpected # exit code. In particular, finish() may set spawnerr, which we can # check and immediately raise an RPCError, avoiding the need to # defer by returning a callback. self.supervisord.reap() if process.spawnerr: raise RPCError(Faults.SPAWN_ERROR, name) # We call process.transition() in order to more quickly obtain its # side effects. In particular, it might set the process' state from # STARTING->RUNNING if the process has a startsecs==0. process.transition() if wait and process.get_state() != ProcessStates.RUNNING: # by default, this branch will almost always be hit for processes # with default startsecs configurations, because the default number # of startsecs for a process is "1", and the process will not have # entered the RUNNING state yet even though we've called # transition() on it. This is because a process is not considered # RUNNING until it has stayed up > startsecs. 
def onwait(): if process.spawnerr: raise RPCError(Faults.SPAWN_ERROR, name) state = process.get_state() if state not in (ProcessStates.STARTING, ProcessStates.RUNNING): raise RPCError(Faults.ABNORMAL_TERMINATION, name) if state == ProcessStates.RUNNING: return True return NOT_DONE_YET onwait.delay = 0.05 onwait.rpcinterface = self return onwait # deferred return True def startProcessGroup(self, name, wait=True): """ Start all processes in the group named 'name' @param string name The group name @param boolean wait Wait for each process to be fully started @return array result An array of process status info structs """ self._update('startProcessGroup') group = self.supervisord.process_groups.get(name) if group is None: raise RPCError(Faults.BAD_NAME, name) processes = list(group.processes.values()) processes.sort() processes = [ (group, process) for process in processes ] startall = make_allfunc(processes, isNotRunning, self.startProcess, wait=wait) startall.delay = 0.05 startall.rpcinterface = self return startall # deferred def startAllProcesses(self, wait=True): """ Start all processes listed in the configuration file @param boolean wait Wait for each process to be fully started @return array result An array of process status info structs """ self._update('startAllProcesses') processes = self._getAllProcesses() startall = make_allfunc(processes, isNotRunning, self.startProcess, wait=wait) startall.delay = 0.05 startall.rpcinterface = self return startall # deferred def stopProcess(self, name, wait=True): """ Stop a process named by name @param string name The name of the process to stop (or 'group:name') @param boolean wait Wait for the process to be fully stopped @return boolean result Always return True unless error """ self._update('stopProcess') group, process = self._getGroupAndProcess(name) if process is None: group_name, process_name = split_namespec(name) return self.stopProcessGroup(group_name, wait) if process.get_state() not in RUNNING_STATES: raise 
RPCError(Faults.NOT_RUNNING, name) msg = process.stop() if msg is not None: raise RPCError(Faults.FAILED, msg) # We'll try to reap any killed child. FWIW, reap calls waitpid, and # then, if waitpid returns a pid, calls finish() on the process with # that pid, which drains any I/O from the process' dispatchers and # changes the process' state. I chose to call reap without once=True # because we don't really care if we reap more than one child. Even if # we only reap one child. we may not even be reaping the child that we # just stopped (this is all async, and process.stop() may not work, and # we'll need to wait for SIGKILL during process.transition() as the # result of normal select looping). self.supervisord.reap() if wait and process.get_state() not in STOPPED_STATES: def onwait(): # process will eventually enter a stopped state by # virtue of the supervisord.reap() method being called # during normal operations process.stop_report() if process.get_state() not in STOPPED_STATES: return NOT_DONE_YET return True onwait.delay = 0 onwait.rpcinterface = self return onwait # deferred return True def stopProcessGroup(self, name, wait=True): """ Stop all processes in the process group named 'name' @param string name The group name @param boolean wait Wait for each process to be fully stopped @return array result An array of process status info structs """ self._update('stopProcessGroup') group = self.supervisord.process_groups.get(name) if group is None: raise RPCError(Faults.BAD_NAME, name) processes = list(group.processes.values()) processes.sort() processes = [ (group, process) for process in processes ] killall = make_allfunc(processes, isRunning, self.stopProcess, wait=wait) killall.delay = 0.05 killall.rpcinterface = self return killall # deferred def stopAllProcesses(self, wait=True): """ Stop all processes in the process list @param boolean wait Wait for each process to be fully stopped @return array result An array of process status info structs """ 
self._update('stopAllProcesses') processes = self._getAllProcesses() killall = make_allfunc(processes, isRunning, self.stopProcess, wait=wait) killall.delay = 0.05 killall.rpcinterface = self return killall # deferred def signalProcess(self, name, signal): """ Send an arbitrary UNIX signal to the process named by name @param string name Name of the process to signal (or 'group:name') @param string signal Signal to send, as name ('HUP') or number ('1') @return boolean """ self._update('signalProcess') group, process = self._getGroupAndProcess(name) if process is None: group_name, process_name = split_namespec(name) return self.signalProcessGroup(group_name, signal=signal) try: sig = signal_number(signal) except ValueError: raise RPCError(Faults.BAD_SIGNAL, signal) if process.get_state() not in SIGNALLABLE_STATES: raise RPCError(Faults.NOT_RUNNING, name) msg = process.signal(sig) if not msg is None: raise RPCError(Faults.FAILED, msg) return True def signalProcessGroup(self, name, signal): """ Send a signal to all processes in the group named 'name' @param string name The group name @param string signal Signal to send, as name ('HUP') or number ('1') @return array """ group = self.supervisord.process_groups.get(name) self._update('signalProcessGroup') if group is None: raise RPCError(Faults.BAD_NAME, name) processes = list(group.processes.values()) processes.sort() processes = [(group, process) for process in processes] sendall = make_allfunc(processes, isSignallable, self.signalProcess, signal=signal) result = sendall() self._update('signalProcessGroup') return result def signalAllProcesses(self, signal): """ Send a signal to all processes in the process list @param string signal Signal to send, as name ('HUP') or number ('1') @return array An array of process status info structs """ processes = self._getAllProcesses() signalall = make_allfunc(processes, isSignallable, self.signalProcess, signal=signal) result = signalall() self._update('signalAllProcesses') return 
result def getAllConfigInfo(self): """ Get info about all available process configurations. Each struct represents a single process (i.e. groups get flattened). @return array result An array of process config info structs """ self._update('getAllConfigInfo') configinfo = [] for gconfig in self.supervisord.options.process_group_configs: inuse = gconfig.name in self.supervisord.process_groups for pconfig in gconfig.process_configs: d = {'autostart': pconfig.autostart, 'directory': pconfig.directory, 'uid': pconfig.uid, 'command': pconfig.command, 'exitcodes': pconfig.exitcodes, 'group': gconfig.name, 'group_prio': gconfig.priority, 'inuse': inuse, 'killasgroup': pconfig.killasgroup, 'name': pconfig.name, 'process_prio': pconfig.priority, 'redirect_stderr': pconfig.redirect_stderr, 'startretries': pconfig.startretries, 'startsecs': pconfig.startsecs, 'stdout_capture_maxbytes': pconfig.stdout_capture_maxbytes, 'stdout_events_enabled': pconfig.stdout_events_enabled, 'stdout_logfile': pconfig.stdout_logfile, 'stdout_logfile_backups': pconfig.stdout_logfile_backups, 'stdout_logfile_maxbytes': pconfig.stdout_logfile_maxbytes, 'stdout_syslog': pconfig.stdout_syslog, 'stopsignal': int(pconfig.stopsignal), # enum on py3 'stopwaitsecs': pconfig.stopwaitsecs, 'stderr_capture_maxbytes': pconfig.stderr_capture_maxbytes, 'stderr_events_enabled': pconfig.stderr_events_enabled, 'stderr_logfile': pconfig.stderr_logfile, 'stderr_logfile_backups': pconfig.stderr_logfile_backups, 'stderr_logfile_maxbytes': pconfig.stderr_logfile_maxbytes, 'stderr_syslog': pconfig.stderr_syslog, 'serverurl': pconfig.serverurl, } # no support for these types in xml-rpc d.update((k, 'auto') for k, v in d.items() if v is Automatic) d.update((k, 'none') for k, v in d.items() if v is None) configinfo.append(d) configinfo.sort(key=lambda r: r['name']) return configinfo def _interpretProcessInfo(self, info): state = info['state'] if state == ProcessStates.RUNNING: start = info['start'] now = info['now'] 
            start_dt = datetime.datetime(*time.gmtime(start)[:6])
            now_dt = datetime.datetime(*time.gmtime(now)[:6])
            uptime = now_dt - start_dt
            if _total_seconds(uptime) < 0:   # system time set back
                uptime = datetime.timedelta(0)
            desc = 'pid %s, uptime %s' % (info['pid'], uptime)

        elif state in (ProcessStates.FATAL, ProcessStates.BACKOFF):
            desc = info['spawnerr']
            if not desc:
                desc = 'unknown error (try "tail %s")' % info['name']

        elif state in (ProcessStates.STOPPED, ProcessStates.EXITED):
            if info['start']:
                stop = info['stop']
                # [:6] passes only year..second; a [:7] slice would also
                # pass tm_wday as the microseconds argument
                stop_dt = datetime.datetime(*time.localtime(stop)[:6])
                desc = stop_dt.strftime('%b %d %I:%M %p')
            else:
                desc = 'Not started'

        else:
            desc = ''

        return desc

    def getProcessInfo(self, name):
        """ Get info about a process named name

        @param string name The name of the process (or 'group:name')
        @return struct result A structure containing data about the process
        """
        self._update('getProcessInfo')

        group, process = self._getGroupAndProcess(name)
        if process is None:
            raise RPCError(Faults.BAD_NAME, name)

        # TODO timestamps are returned as xml-rpc integers for b/c but will
        # saturate the xml-rpc integer type in jan 2038 ("year 2038 problem").
        # future api versions should return timestamps as a different type.
start = capped_int(process.laststart) stop = capped_int(process.laststop) now = capped_int(self._now()) state = process.get_state() spawnerr = process.spawnerr or '' exitstatus = process.exitstatus or 0 stdout_logfile = process.config.stdout_logfile or '' stderr_logfile = process.config.stderr_logfile or '' info = { 'name':process.config.name, 'group':group.config.name, 'start':start, 'stop':stop, 'now':now, 'state':state, 'statename':getProcessStateDescription(state), 'spawnerr':spawnerr, 'exitstatus':exitstatus, 'logfile':stdout_logfile, # b/c alias 'stdout_logfile':stdout_logfile, 'stderr_logfile':stderr_logfile, 'pid':process.pid, } description = self._interpretProcessInfo(info) info['description'] = description return info def _now(self): # pragma: no cover # this is here to service stubbing in unit tests return time.time() def getAllProcessInfo(self): """ Get info about all processes @return array result An array of process status results """ self._update('getAllProcessInfo') all_processes = self._getAllProcesses(lexical=True) output = [] for group, process in all_processes: name = make_namespec(group.config.name, process.config.name) output.append(self.getProcessInfo(name)) return output def _readProcessLog(self, name, offset, length, channel): group, process = self._getGroupAndProcess(name) if process is None: raise RPCError(Faults.BAD_NAME, name) logfile = getattr(process.config, '%s_logfile' % channel) if logfile is None or not os.path.exists(logfile): raise RPCError(Faults.NO_FILE, logfile) try: return as_string(readFile(logfile, int(offset), int(length))) except ValueError as inst: why = inst.args[0] raise RPCError(getattr(Faults, why)) def readProcessStdoutLog(self, name, offset, length): """ Read length bytes from name's stdout log starting at offset @param string name the name of the process (or 'group:name') @param int offset offset to start reading from. @param int length number of bytes to read from the log. 
@return string result Bytes of log """ self._update('readProcessStdoutLog') return self._readProcessLog(name, offset, length, 'stdout') readProcessLog = readProcessStdoutLog # b/c alias def readProcessStderrLog(self, name, offset, length): """ Read length bytes from name's stderr log starting at offset @param string name the name of the process (or 'group:name') @param int offset offset to start reading from. @param int length number of bytes to read from the log. @return string result Bytes of log """ self._update('readProcessStderrLog') return self._readProcessLog(name, offset, length, 'stderr') def _tailProcessLog(self, name, offset, length, channel): group, process = self._getGroupAndProcess(name) if process is None: raise RPCError(Faults.BAD_NAME, name) logfile = getattr(process.config, '%s_logfile' % channel) if logfile is None or not os.path.exists(logfile): return ['', 0, False] return tailFile(logfile, int(offset), int(length)) def tailProcessStdoutLog(self, name, offset, length): """ Provides a more efficient way to tail the (stdout) log than readProcessStdoutLog(). Use readProcessStdoutLog() to read chunks and tailProcessStdoutLog() to tail. Requests (length) bytes from the (name)'s log, starting at (offset). If the total log size is greater than (offset + length), the overflow flag is set and the (offset) is automatically increased to position the buffer at the end of the log. If less than (length) bytes are available, the maximum number of available bytes will be returned. (offset) returned is always the last offset in the log +1. 
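The overflow contract described in the docstring above can be modeled on a plain byte string. This stand-alone sketch (my helper, not supervisor's `tailFile`) returns the same `[bytes, offset, overflow]` triple:

```python
def tail_bytes(data, offset, length):
    """Model the tail contract on an in-memory buffer: if more than
    `length` bytes lie past `offset`, jump to the end of the buffer
    and set the overflow flag; the returned offset is always the last
    offset in the buffer + 1 (i.e. len(data))."""
    overflow = len(data) > offset + length
    if overflow:
        offset = max(len(data) - length, 0)
    chunk = data[offset:offset + length]
    return [chunk, offset + len(chunk), overflow]

print(tail_bytes(b'0123456789', 0, 4))  # [b'6789', 10, True]
```

Passing the returned offset back in on the next call yields only bytes appended since the previous read, which is what makes tailing cheaper than repeated full reads.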
@param string name the name of the process (or 'group:name') @param int offset offset to start reading from @param int length maximum number of bytes to return @return array result [string bytes, int offset, bool overflow] """ self._update('tailProcessStdoutLog') return self._tailProcessLog(name, offset, length, 'stdout') tailProcessLog = tailProcessStdoutLog # b/c alias def tailProcessStderrLog(self, name, offset, length): """ Provides a more efficient way to tail the (stderr) log than readProcessStderrLog(). Use readProcessStderrLog() to read chunks and tailProcessStderrLog() to tail. Requests (length) bytes from the (name)'s log, starting at (offset). If the total log size is greater than (offset + length), the overflow flag is set and the (offset) is automatically increased to position the buffer at the end of the log. If less than (length) bytes are available, the maximum number of available bytes will be returned. (offset) returned is always the last offset in the log +1. @param string name the name of the process (or 'group:name') @param int offset offset to start reading from @param int length maximum number of bytes to return @return array result [string bytes, int offset, bool overflow] """ self._update('tailProcessStderrLog') return self._tailProcessLog(name, offset, length, 'stderr') def clearProcessLogs(self, name): """ Clear the stdout and stderr logs for the named process and reopen them. 
@param string name The name of the process (or 'group:name') @return boolean result Always True unless error """ self._update('clearProcessLogs') group, process = self._getGroupAndProcess(name) if process is None: raise RPCError(Faults.BAD_NAME, name) try: # implies a reopen process.removelogs() except (IOError, OSError): raise RPCError(Faults.FAILED, name) return True clearProcessLog = clearProcessLogs # b/c alias def clearAllProcessLogs(self): """ Clear all process log files @return array result An array of process status info structs """ self._update('clearAllProcessLogs') results = [] callbacks = [] all_processes = self._getAllProcesses() for group, process in all_processes: callbacks.append((group, process, self.clearProcessLog)) def clearall(): if not callbacks: return results group, process, callback = callbacks.pop(0) name = make_namespec(group.config.name, process.config.name) try: callback(name) except RPCError as e: results.append( {'name':process.config.name, 'group':group.config.name, 'status':e.code, 'description':e.text}) else: results.append( {'name':process.config.name, 'group':group.config.name, 'status':Faults.SUCCESS, 'description':'OK'} ) if callbacks: return NOT_DONE_YET return results clearall.delay = 0.05 clearall.rpcinterface = self return clearall # deferred def sendProcessStdin(self, name, chars): """ Send a string of chars to the stdin of the process name. If non-7-bit data is sent (unicode), it is encoded to utf-8 before being sent to the process' stdin. If chars is not a string or is not unicode, raise INCORRECT_PARAMETERS. If the process is not running, raise NOT_RUNNING. If the process' stdin cannot accept input (e.g. it was closed by the child process), raise NO_FILE. 
@param string name The process name to send to (or 'group:name') @param string chars The character data to send to the process @return boolean result Always return True unless error """ self._update('sendProcessStdin') if not isinstance(chars, (str, bytes, unicode)): raise RPCError(Faults.INCORRECT_PARAMETERS, chars) chars = as_bytes(chars) group, process = self._getGroupAndProcess(name) if process is None: raise RPCError(Faults.BAD_NAME, name) if not process.pid or process.killing: raise RPCError(Faults.NOT_RUNNING, name) try: process.write(chars) except OSError as why: if why.args[0] == errno.EPIPE: raise RPCError(Faults.NO_FILE, name) else: raise return True def sendRemoteCommEvent(self, type, data): """ Send an event that will be received by event listener subprocesses subscribing to the RemoteCommunicationEvent. @param string type String for the "type" key in the event header @param string data Data for the event body @return boolean Always return True unless error """ if isinstance(type, unicode): type = type.encode('utf-8') if isinstance(data, unicode): data = data.encode('utf-8') notify( RemoteCommunicationEvent(type, data) ) return True def _total_seconds(timedelta): return ((timedelta.days * 86400 + timedelta.seconds) * 10**6 + timedelta.microseconds) / 10**6 def make_allfunc(processes, predicate, func, **extra_kwargs): """ Return a closure representing a function that calls a function for every process, and returns a result """ callbacks = [] results = [] def allfunc( processes=processes, predicate=predicate, func=func, extra_kwargs=extra_kwargs, callbacks=callbacks, # used only to fool scoping, never passed by caller results=results, # used only to fool scoping, never passed by caller ): if not callbacks: for group, process in processes: name = make_namespec(group.config.name, process.config.name) if predicate(process): try: callback = func(name, **extra_kwargs) except RPCError as e: results.append({'name':process.config.name, 'group':group.config.name, 
'status':e.code, 'description':e.text}) continue if isinstance(callback, types.FunctionType): callbacks.append((group, process, callback)) else: results.append( {'name':process.config.name, 'group':group.config.name, 'status':Faults.SUCCESS, 'description':'OK'} ) if not callbacks: return results for struct in callbacks[:]: group, process, cb = struct try: value = cb() except RPCError as e: results.append( {'name':process.config.name, 'group':group.config.name, 'status':e.code, 'description':e.text}) callbacks.remove(struct) else: if value is not NOT_DONE_YET: results.append( {'name':process.config.name, 'group':group.config.name, 'status':Faults.SUCCESS, 'description':'OK'} ) callbacks.remove(struct) if callbacks: return NOT_DONE_YET return results # XXX the above implementation has a weakness inasmuch as the # first call into each individual process callback will always # return NOT_DONE_YET, so they need to be called twice. The # symptom of this is that calling this method causes the # client to block for much longer than it actually requires to # kill all of the running processes. After the first call to # the killit callback, the process is actually dead, but the # above killall method processes the callbacks one at a time # during the select loop, which, because there is no output # from child processes after e.g. stopAllProcesses is called, # is not busy, so hits the timeout for each callback. I # attempted to make this better, but the only way to make it # better assumes totally synchronous reaping of child # processes, which requires infrastructure changes to # supervisord that are scary at the moment as it could take a # while to pin down all of the platform differences and might # require a C extension to the Python signal module to allow # the setting of ignore flags to signals. 
return allfunc def isRunning(process): return process.get_state() in RUNNING_STATES def isNotRunning(process): return not isRunning(process) def isSignallable(process): if process.get_state() in SIGNALLABLE_STATES: return True # this is not used in code but referenced via an entry point in the conf file def make_main_rpcinterface(supervisord): return SupervisorNamespaceRPCInterface(supervisord) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1671843145.384721 supervisor-4.2.5/supervisor/scripts/0000755000076500000240000000000014351446511017405 5ustar00mnaberezstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/scripts/loop_eventgen.py0000755000076500000240000000141314340177153022626 0ustar00mnaberezstaff#!/usr/bin/env python # A process which emits a process communications event on its stdout, # and subsequently waits for a line to be sent back to its stdin by # loop_listener.py. import sys import time from supervisor import childutils def main(max): start = time.time() report = open('/tmp/report', 'w') i = 0 while 1: childutils.pcomm.stdout('the_data') sys.stdin.readline() report.write(str(i) + ' @ %s\n' % childutils.get_asctime()) report.flush() i+=1 if max and i >= max: end = time.time() report.write('%s per second\n' % (i / (end - start))) sys.exit(0) if __name__ == '__main__': max = 0 if len(sys.argv) > 1: max = int(sys.argv[1]) main(max) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/scripts/loop_listener.py0000755000076500000240000000131414340177153022640 0ustar00mnaberezstaff#!/usr/bin/env python -u # An event listener that listens for process communications events # from loop_eventgen.py and uses RPC to write data to the event # generator's stdin. 
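The `NOT_DONE_YET` deferred pattern used by `make_allfunc()` and `clearall()` above can be sketched in isolation. The sentinel, the `delay` attribute, and the `drive` loop below are simplified stand-ins for supervisord's actual sentinel object and select-loop machinery:

```python
NOT_DONE_YET = object()  # stand-in for supervisor's "call me again" sentinel

def make_deferred(jobs):
    """Return a closure that finishes one job per poll, like clearall()."""
    results = []
    def deferred():
        if jobs:
            results.append(jobs.pop(0)())  # do one unit of work per call
        if jobs:
            return NOT_DONE_YET            # ask the loop to call us again
        return results                     # all work done: return results
    deferred.delay = 0.05                  # poll-interval hint for the loop
    return deferred

def drive(deferred):
    """Minimal driver: poll until the deferred stops returning the sentinel.
    supervisord does this once per pass through its select loop, which is
    exactly the weakness the comment above describes: an idle loop only
    reaches each callback once per select timeout."""
    while True:
        value = deferred()
        if value is not NOT_DONE_YET:
            return value
```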
import os from supervisor import childutils def main(): rpcinterface = childutils.getRPCInterface(os.environ) while 1: headers, payload = childutils.listener.wait() if headers['eventname'].startswith('PROCESS_COMMUNICATION'): pheaders, pdata = childutils.eventdata(payload) pname = '%s:%s' % (pheaders['processname'], pheaders['groupname']) rpcinterface.supervisor.sendProcessStdin(pname, 'Got it yo\n') childutils.listener.ok() if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/scripts/sample_commevent.py0000755000076500000240000000106214340177153023320 0ustar00mnaberezstaff#!/usr/bin/env python # An example process which emits a stdout process communication event every # second (or every number of seconds specified as a single argument). import sys import time def write_stdout(s): sys.stdout.write(s) sys.stdout.flush() def main(sleep): while 1: write_stdout('<!--XSUPERVISOR:BEGIN-->') write_stdout('the data') write_stdout('<!--XSUPERVISOR:END-->') time.sleep(sleep) if __name__ == '__main__': if len(sys.argv) > 1: main(float(sys.argv[1])) else: main(1) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/scripts/sample_eventlistener.py0000755000076500000240000000243414340177153024216 0ustar00mnaberezstaff#!/usr/bin/env python -u # A sample long-running supervisor event listener which demonstrates # how to accept event notifications from supervisor and how to respond # properly. This demonstration does *not* use the # supervisor.childutils module, which wraps the specifics of # communications in higher-level API functions. If your listeners are # implemented using Python, it is recommended that you use the # childutils module API instead of modeling your scripts on the # lower-level protocol example below.
import sys def write_stdout(s): sys.stdout.write(s) sys.stdout.flush() def write_stderr(s): sys.stderr.write(s) sys.stderr.flush() def main(): while 1: write_stdout('READY\n') # transition from ACKNOWLEDGED to READY line = sys.stdin.readline() # read header line from stdin write_stderr(line) # print it out to stderr (testing only) headers = dict([ x.split(':') for x in line.split() ]) data = sys.stdin.read(int(headers['len'])) # read the event payload write_stderr(data) # print the event payload to stderr (testing only) write_stdout('RESULT 2\nOK') # transition from BUSY to ACKNOWLEDGED #write_stdout('RESULT 4\nFAIL') # transition from BUSY TO ACKNOWLEDGED if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/scripts/sample_exiting_eventlistener.py0000755000076500000240000000265414340177153025751 0ustar00mnaberezstaff#!/usr/bin/env python # A sample long-running supervisor event listener which demonstrates # how to accept event notifications from supervisor and how to respond # properly. It is the same as the sample_eventlistener.py script # except it exits after each request (presumably to be restarted by # supervisor). This demonstration does *not* use the # supervisor.childutils module, which wraps the specifics of # communications in higher-level API functions. If your listeners are # implemented using Python, it is recommended that you use the # childutils module API instead of modeling your scripts on the # lower-level protocol example below. 
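The header line the listener reads from stdin is a single line of space-separated `key:value` pairs. A quick sketch of that parsing step follows; the header values shown are illustrative, though the key names match the eventlistener protocol, and `parse_headers` uses the same expression as the sample scripts:

```python
def parse_headers(line):
    """Parse a supervisor event header line into a dict, as the samples do."""
    return dict(x.split(':') for x in line.split())

# Illustrative header line in the eventlistener protocol format.
header_line = ('ver:3.0 server:supervisor serial:21 pool:listener '
               'poolserial:10 eventname:PROCESS_COMMUNICATION_STDOUT len:54')
headers = parse_headers(header_line)
payload_len = int(headers['len'])  # number of payload bytes to read next
```

After parsing, the listener reads exactly `len` bytes of payload from stdin before writing its `RESULT` line back.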
import sys def write_stdout(s): sys.stdout.write(s) sys.stdout.flush() def write_stderr(s): sys.stderr.write(s) sys.stderr.flush() def main(): write_stdout('READY\n') # transition from ACKNOWLEDGED to READY line = sys.stdin.readline() # read a line from stdin from supervisord write_stderr(line) # print it out to stderr (testing only) headers = dict([ x.split(':') for x in line.split() ]) data = sys.stdin.read(int(headers['len'])) # read the event payload write_stderr(data) # print the event payload to stderr (testing only) write_stdout('RESULT 2\nOK') # transition from READY to ACKNOWLEDGED # exit, if the eventlistener process config has autorestart=true, # it will be restarted by supervisord. if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1671843145.384848 supervisor-4.2.5/supervisor/skel/0000755000076500000240000000000014351446511016654 5ustar00mnaberezstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/skel/sample.conf0000644000076500000240000002455714340177153021022 0ustar00mnaberezstaff; Sample supervisor config file. ; ; For more information on the config file, please see: ; http://supervisord.org/configuration.html ; ; Notes: ; - Shell expansion ("~" or "$HOME") is not supported. Environment ; variables can be expanded using this syntax: "%(ENV_HOME)s". ; - Quotes around values are not supported, except in the case of ; the environment= options as shown below. ; - Comments must have a leading space: "a=b ;comment" not "a=b;comment". ; - Command will be truncated if it looks like a config file comment, e.g. ; "command=bash -c 'foo ; bar'" will truncate to "command=bash -c 'foo ". ; ; Warning: ; Paths throughout this example file use /tmp because it is available on most ; systems. You will likely need to change these to locations more appropriate ; for your system. Some systems periodically delete older files in /tmp. 
; Notably, if the socket file defined in the [unix_http_server] section below ; is deleted, supervisorctl will be unable to connect to supervisord. [unix_http_server] file=/tmp/supervisor.sock ; the path to the socket file ;chmod=0700 ; socket file mode (default 0700) ;chown=nobody:nogroup ; socket file uid:gid owner ;username=user ; default is no username (open server) ;password=123 ; default is no password (open server) ; Security Warning: ; The inet HTTP server is not enabled by default. The inet HTTP server is ; enabled by uncommenting the [inet_http_server] section below. The inet ; HTTP server is intended for use within a trusted environment only. It ; should only be bound to localhost or only accessible from within an ; isolated, trusted network. The inet HTTP server does not support any ; form of encryption. The inet HTTP server does not use authentication ; by default (see the username= and password= options to add authentication). ; Never expose the inet HTTP server to the public internet. ;[inet_http_server] ; inet (TCP) server disabled by default ;port=127.0.0.1:9001 ; ip_address:port specifier, *:port for all iface ;username=user ; default is no username (open server) ;password=123 ; default is no password (open server) [supervisord] logfile=/tmp/supervisord.log ; main log file; default $CWD/supervisord.log logfile_maxbytes=50MB ; max main logfile bytes b4 rotation; default 50MB logfile_backups=10 ; # of main logfile backups; 0 means none, default 10 loglevel=info ; log level; default info; others: debug,warn,trace pidfile=/tmp/supervisord.pid ; supervisord pidfile; default supervisord.pid nodaemon=false ; start in foreground if true; default false silent=false ; no logs to stdout if true; default false minfds=1024 ; min. avail startup file descriptors; default 1024 minprocs=200 ; min. 
avail process descriptors;default 200 ;umask=022 ; process file creation umask; default 022 ;user=supervisord ; setuid to this UNIX account at startup; recommended if root ;identifier=supervisor ; supervisord identifier, default is 'supervisor' ;directory=/tmp ; default is not to cd during start ;nocleanup=true ; don't clean up tempfiles at start; default false ;childlogdir=/tmp ; 'AUTO' child log dir, default $TEMP ;environment=KEY="value" ; key value pairs to add to environment ;strip_ansi=false ; strip ansi escape codes in logs; def. false ; The rpcinterface:supervisor section must remain in the config file for ; RPC (supervisorctl/web interface) to work. Additional interfaces may be ; added by defining them in separate [rpcinterface:x] sections. [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface ; The supervisorctl section configures how supervisorctl will connect to ; supervisord. Configure it to match the settings in either the unix_http_server ; or inet_http_server section. [supervisorctl] serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket ;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket ;username=chris ; should be same as in [*_http_server] if set ;password=123 ; should be same as in [*_http_server] if set ;prompt=mysupervisor ; cmd line prompt (default "supervisor") ;history_file=~/.sc_history ; use readline history if available ; The sample program section below shows all possible program subsection values. ; Create one or more 'real' program: sections to be able to control them under ; supervisor.
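For example, a minimal uncommented `[program:x]` section built from the commented template might look like the following; the program name and paths are purely illustrative:

```ini
[program:myapp]
command=/usr/local/bin/myapp --port 8080
directory=/srv/myapp
autostart=true
autorestart=unexpected
stdout_logfile=/var/log/myapp/stdout.log
stderr_logfile=/var/log/myapp/stderr.log
```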
;[program:theprogramname] ;command=/bin/cat ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=999 ; the relative start priority (default 999) ;autostart=true ; start at supervisord start (default: true) ;startsecs=1 ; # of secs prog must stay up to be running (def. 1) ;startretries=3 ; max # of serial start failures when starting (default 3) ;autorestart=unexpected ; when to restart if exited after running (def: unexpected) ;exitcodes=0 ; 'expected' exit codes used with autorestart (default 0) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;stopasgroup=false ; send stop signal to the UNIX process group (default false) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (0 means none, default 10) ;stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stdout_syslog=false ; send stdout to syslog with process name (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups=10 ; # of stderr logfile backups (0 means none, default 10) ;stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stderr_events_enabled=false ; emit events on stderr writes (default 
false) ;stderr_syslog=false ; send stderr to syslog with process name (default false) ;environment=A="1",B="2" ; process environment additions (def no adds) ;serverurl=AUTO ; override serverurl computation (childutils) ; The sample eventlistener section below shows all possible eventlistener ; subsection values. Create one or more 'real' eventlistener: sections to be ; able to handle event notifications sent by supervisord. ;[eventlistener:theeventlistenername] ;command=/bin/eventlistener ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;events=EVENT ; event notif. types to subscribe to (req'd) ;buffer_size=10 ; event buffer queue size (default 10) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=-1 ; the relative start priority (default -1) ;autostart=true ; start at supervisord start (default: true) ;startsecs=1 ; # of secs prog must stay up to be running (def. 
1) ;startretries=3 ; max # of serial start failures when starting (default 3) ;autorestart=unexpected ; autorestart if exited after running (def: unexpected) ;exitcodes=0 ; 'expected' exit codes used with autorestart (default 0) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;stopasgroup=false ; send stop signal to the UNIX process group (default false) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=false ; redirect_stderr=true is not allowed for eventlisteners ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (0 means none, default 10) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stdout_syslog=false ; send stdout to syslog with process name (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups=10 ; # of stderr logfile backups (0 means none, default 10) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;stderr_syslog=false ; send stderr to syslog with process name (default false) ;environment=A="1",B="2" ; process environment additions ;serverurl=AUTO ; override serverurl computation (childutils) ; The sample group section below shows all possible group values. Create one ; or more 'real' group: sections to create "heterogeneous" process groups. ;[group:thegroupname] ;programs=progname1,progname2 ; each refers to 'x' in [program:x] definitions ;priority=999 ; the relative start priority (default 999) ; The [include] section can just contain the "files" setting. This ; setting can list multiple files (separated by whitespace or ; newlines). 
It can also contain wildcards. The filenames are ; interpreted as relative to this file. Included files *cannot* ; include files themselves. ;[include] ;files = relative/directory/*.ini ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/socket_manager.py0000644000076500000240000000602714340177153021260 0ustar00mnaberezstaffimport socket class Proxy: """ Class for wrapping a shared resource object and getting notified when it's deleted """ def __init__(self, object, **kwargs): self.object = object self.on_delete = kwargs.get('on_delete', None) def __del__(self): if self.on_delete: self.on_delete() def __getattr__(self, name): return getattr(self.object, name) def _get(self): return self.object class ReferenceCounter: """ Class for tracking references to a shared resource """ def __init__(self, **kwargs): self.on_non_zero = kwargs['on_non_zero'] self.on_zero = kwargs['on_zero'] self.ref_count = 0 def get_count(self): return self.ref_count def increment(self): if self.ref_count == 0: self.on_non_zero() self.ref_count += 1 def decrement(self): if self.ref_count <= 0: raise Exception('Illegal operation: cannot decrement below zero') self.ref_count -= 1 if self.ref_count == 0: self.on_zero() class SocketManager: """ Class for managing sockets in servers that create/bind/listen before forking multiple child processes to accept() Sockets are managed at the process group level and referenced counted at the process level b/c that's really the only place to hook in """ def __init__(self, socket_config, **kwargs): self.logger = kwargs.get('logger', None) self.socket = None self.prepared = False self.socket_config = socket_config self.ref_ctr = ReferenceCounter( on_zero=self._close, on_non_zero=self._prepare_socket ) def __repr__(self): return '<%s at %s for %s>' % (self.__class__, id(self), self.socket_config.url) def config(self): return self.socket_config def is_prepared(self): return self.prepared def 
get_socket(self): self.ref_ctr.increment() self._require_prepared() return Proxy(self.socket, on_delete=self.ref_ctr.decrement) def get_socket_ref_count(self): self._require_prepared() return self.ref_ctr.get_count() def _require_prepared(self): if not self.prepared: raise Exception('Socket has not been prepared') def _prepare_socket(self): if not self.prepared: if self.logger: self.logger.info('Creating socket %s' % self.socket_config) self.socket = self.socket_config.create_and_bind() if self.socket_config.get_backlog(): self.socket.listen(self.socket_config.get_backlog()) else: self.socket.listen(socket.SOMAXCONN) self.prepared = True def _close(self): self._require_prepared() if self.logger: self.logger.info('Closing socket %s' % self.socket_config) self.socket.close() self.prepared = False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/states.py0000644000076500000240000000337714340177153017606 0ustar00mnaberezstaff# This module must not depend on any other non-stdlib module to prevent # circular import problems. 
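The first-reference/last-reference callback pattern that `SocketManager` builds on can be exercised standalone. `ReferenceCounter` is restated below in simplified form (the original takes its callbacks via `**kwargs`) so the sketch runs on its own; the `events` list stands in for the prepare/close side effects:

```python
class ReferenceCounter:
    """Simplified restatement of supervisor's ReferenceCounter: fire
    on_non_zero on the first reference and on_zero on the last release."""
    def __init__(self, on_non_zero, on_zero):
        self.on_non_zero = on_non_zero
        self.on_zero = on_zero
        self.ref_count = 0
    def increment(self):
        if self.ref_count == 0:
            self.on_non_zero()      # first reference: prepare the resource
        self.ref_count += 1
    def decrement(self):
        if self.ref_count <= 0:
            raise Exception('Illegal operation: cannot decrement below zero')
        self.ref_count -= 1
        if self.ref_count == 0:
            self.on_zero()          # last reference released: close it

events = []
ctr = ReferenceCounter(on_non_zero=lambda: events.append('open'),
                       on_zero=lambda: events.append('close'))
ctr.increment()   # first ref: 'open' fires (socket would be created/bound)
ctr.increment()   # second ref: no callback
ctr.decrement()
ctr.decrement()   # last ref released: 'close' fires (socket would be closed)
```

In `SocketManager`, the `Proxy` returned by `get_socket()` calls `decrement` from its `__del__`, which is how a child process dropping its socket reference eventually triggers `_close`.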
class ProcessStates: STOPPED = 0 STARTING = 10 RUNNING = 20 BACKOFF = 30 STOPPING = 40 EXITED = 100 FATAL = 200 UNKNOWN = 1000 STOPPED_STATES = (ProcessStates.STOPPED, ProcessStates.EXITED, ProcessStates.FATAL, ProcessStates.UNKNOWN) RUNNING_STATES = (ProcessStates.RUNNING, ProcessStates.BACKOFF, ProcessStates.STARTING) SIGNALLABLE_STATES = (ProcessStates.RUNNING, ProcessStates.STARTING, ProcessStates.STOPPING) def getProcessStateDescription(code): return _process_states_by_code.get(code) class SupervisorStates: FATAL = 2 RUNNING = 1 RESTARTING = 0 SHUTDOWN = -1 def getSupervisorStateDescription(code): return _supervisor_states_by_code.get(code) class EventListenerStates: READY = 10 # the process ready to be sent an event from supervisor BUSY = 20 # event listener is processing an event sent to it by supervisor ACKNOWLEDGED = 30 # the event listener processed an event UNKNOWN = 40 # the event listener is in an unknown state def getEventListenerStateDescription(code): return _eventlistener_states_by_code.get(code) # below is an optimization for internal use in this module only def _names_by_code(states): d = {} for name in states.__dict__: if not name.startswith('__'): code = getattr(states, name) d[code] = name return d _process_states_by_code = _names_by_code(ProcessStates) _supervisor_states_by_code = _names_by_code(SupervisorStates) _eventlistener_states_by_code = _names_by_code(EventListenerStates) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671840025.0 supervisor-4.2.5/supervisor/supervisorctl.py0000755000076500000240000015314514351440431021223 0ustar00mnaberezstaff#!/usr/bin/env python -u """supervisorctl -- control applications run by supervisord from the cmd line. 
Usage: %s [options] [action [arguments]] Options: -c/--configuration FILENAME -- configuration file path (searches if not given) -h/--help -- print usage message and exit -i/--interactive -- start an interactive shell after executing commands -s/--serverurl URL -- URL on which supervisord server is listening (default "http://localhost:9001"). -u/--username USERNAME -- username to use for authentication with server -p/--password PASSWORD -- password to use for authentication with server -r/--history-file -- keep a readline history (if readline is available) action [arguments] -- see below Actions are commands like "tail" or "stop". If -i is specified or no action is specified on the command line, a "shell" interpreting actions typed interactively is started. Use the action "help" to find out about available actions. """ import cmd import errno import getpass import socket import sys import threading from supervisor.compat import xmlrpclib from supervisor.compat import urlparse from supervisor.compat import unicode from supervisor.compat import raw_input from supervisor.compat import as_string from supervisor.medusa import asyncore_25 as asyncore from supervisor.options import ClientOptions from supervisor.options import make_namespec from supervisor.options import split_namespec from supervisor import xmlrpc from supervisor import states from supervisor import http_client class LSBInitExitStatuses: SUCCESS = 0 GENERIC = 1 INVALID_ARGS = 2 UNIMPLEMENTED_FEATURE = 3 INSUFFICIENT_PRIVILEGES = 4 NOT_INSTALLED = 5 NOT_RUNNING = 7 class LSBStatusExitStatuses: NOT_RUNNING = 3 UNKNOWN = 4 DEAD_PROGRAM_FAULTS = (xmlrpc.Faults.SPAWN_ERROR, xmlrpc.Faults.ABNORMAL_TERMINATION, xmlrpc.Faults.NOT_RUNNING) class fgthread(threading.Thread): """ A subclass of threading.Thread, with a kill() method. To be used for foreground output/error streaming. 
http://mail.python.org/pipermail/python-list/2004-May/260937.html """ def __init__(self, program, ctl): threading.Thread.__init__(self) self.killed = False self.program = program self.ctl = ctl self.listener = http_client.Listener() self.output_handler = http_client.HTTPHandler(self.listener, self.ctl.options.username, self.ctl.options.password) self.error_handler = http_client.HTTPHandler(self.listener, self.ctl.options.username, self.ctl.options.password) def start(self): # pragma: no cover # Start the thread self.__run_backup = self.run self.run = self.__run threading.Thread.start(self) def run(self): # pragma: no cover self.output_handler.get(self.ctl.options.serverurl, '/logtail/%s/stdout' % self.program) self.error_handler.get(self.ctl.options.serverurl, '/logtail/%s/stderr' % self.program) asyncore.loop() def __run(self): # pragma: no cover # Hacked run function, which installs the trace sys.settrace(self.globaltrace) self.__run_backup() self.run = self.__run_backup def globaltrace(self, frame, why, arg): if why == 'call': return self.localtrace else: return None def localtrace(self, frame, why, arg): if self.killed: if why == 'line': raise SystemExit() return self.localtrace def kill(self): self.output_handler.close() self.error_handler.close() self.killed = True class Controller(cmd.Cmd): def __init__(self, options, completekey='tab', stdin=None, stdout=None): self.options = options self.prompt = self.options.prompt + '> ' self.options.plugins = [] self.vocab = ['help'] self._complete_info = None self.exitstatus = LSBInitExitStatuses.SUCCESS cmd.Cmd.__init__(self, completekey, stdin, stdout) for name, factory, kwargs in self.options.plugin_factories: plugin = factory(self, **kwargs) for a in dir(plugin): if a.startswith('do_') and callable(getattr(plugin, a)): self.vocab.append(a[3:]) self.options.plugins.append(plugin) plugin.name = name def emptyline(self): # We don't want a blank line to repeat the last command. 
        return

    def default(self, line):
        self.output('*** Unknown syntax: %s' % line)
        self.exitstatus = LSBInitExitStatuses.GENERIC

    def exec_cmdloop(self, args, options):
        try:
            import readline
            delims = readline.get_completer_delims()
            delims = delims.replace(':', '')  # "group:process" as one word
            delims = delims.replace('*', '')  # "group:*" as one word
            delims = delims.replace('-', '')  # names with "-" as one word
            readline.set_completer_delims(delims)

            if options.history_file:
                try:
                    readline.read_history_file(options.history_file)
                except IOError:
                    pass

                def save():
                    try:
                        readline.write_history_file(options.history_file)
                    except IOError:
                        pass

                import atexit
                atexit.register(save)
        except ImportError:
            pass
        try:
            self.cmdqueue.append('status')
            self.cmdloop()
        except KeyboardInterrupt:
            self.output('')
            pass

    def set_exitstatus_from_xmlrpc_fault(self, faultcode, ignored_faultcode=None):
        if faultcode in (ignored_faultcode, xmlrpc.Faults.SUCCESS):
            pass
        elif faultcode in DEAD_PROGRAM_FAULTS:
            self.exitstatus = LSBInitExitStatuses.NOT_RUNNING
        else:
            self.exitstatus = LSBInitExitStatuses.GENERIC

    def onecmd(self, line):
        """ Override the onecmd method to:
          - catch and print all exceptions
          - call 'do_foo' on plugins rather than ourself
        """
        cmd, arg, line = self.parseline(line)
        if not line:
            return self.emptyline()
        if cmd is None:
            return self.default(line)

        self._complete_info = None
        self.lastcmd = line

        if cmd == '':
            return self.default(line)
        else:
            do_func = self._get_do_func(cmd)
            if do_func is None:
                return self.default(line)
            try:
                try:
                    return do_func(arg)
                except xmlrpclib.ProtocolError as e:
                    if e.errcode == 401:
                        if self.options.interactive:
                            self.output('Server requires authentication')
                            username = raw_input('Username:')
                            password = getpass.getpass(prompt='Password:')
                            self.output('')
                            self.options.username = username
                            self.options.password = password
                            return self.onecmd(line)
                        else:
                            self.output('Server requires authentication')
                            self.exitstatus = LSBInitExitStatuses.GENERIC
                    else:
                        self.exitstatus = LSBInitExitStatuses.GENERIC
                        raise
                do_func(arg)
            except Exception:
                (file, fun, line), t, v, tbinfo = asyncore.compact_traceback()
                error = 'error: %s, %s: file: %s line: %s' % (t, v, file, line)
                self.output(error)
                self.exitstatus = LSBInitExitStatuses.GENERIC

    def _get_do_func(self, cmd):
        func_name = 'do_' + cmd
        func = getattr(self, func_name, None)
        if not func:
            for plugin in self.options.plugins:
                func = getattr(plugin, func_name, None)
                if func is not None:
                    break
        return func

    def output(self, message):
        if isinstance(message, unicode):
            message = message.encode('utf-8')
        self.stdout.write(message + '\n')

    def get_supervisor(self):
        return self.get_server_proxy('supervisor')

    def get_server_proxy(self, namespace=None):
        proxy = self.options.getServerProxy()
        if namespace is None:
            return proxy
        else:
            return getattr(proxy, namespace)

    def upcheck(self):
        try:
            supervisor = self.get_supervisor()
            api = supervisor.getVersion() # deprecated
            from supervisor import rpcinterface
            if api != rpcinterface.API_VERSION:
                self.output(
                    'Sorry, this version of supervisorctl expects to '
                    'talk to a server with API version %s, but the '
                    'remote version is %s.' % (rpcinterface.API_VERSION, api))
                self.exitstatus = LSBInitExitStatuses.NOT_INSTALLED
                return False
        except xmlrpclib.Fault as e:
            if e.faultCode == xmlrpc.Faults.UNKNOWN_METHOD:
                self.output(
                    'Sorry, supervisord responded but did not recognize '
                    'the supervisor namespace commands that supervisorctl '
                    'uses to control it.  Please check that the '
                    '[rpcinterface:supervisor] section is enabled in the '
                    'configuration file (see sample.conf).')
                self.exitstatus = LSBInitExitStatuses.UNIMPLEMENTED_FEATURE
                return False
            self.exitstatus = LSBInitExitStatuses.GENERIC
            raise
        except socket.error as e:
            if e.args[0] == errno.ECONNREFUSED:
                self.output('%s refused connection' % self.options.serverurl)
                self.exitstatus = LSBInitExitStatuses.INSUFFICIENT_PRIVILEGES
                return False
            elif e.args[0] == errno.ENOENT:
                self.output('%s no such file' % self.options.serverurl)
                self.exitstatus = LSBInitExitStatuses.NOT_RUNNING
                return False
            self.exitstatus = LSBInitExitStatuses.GENERIC
            raise
        return True

    def complete(self, text, state, line=None):
        """Completer function that Cmd will register with readline
        using readline.set_completer().  This function will be
        called by readline as complete(text, state) where text is a
        fragment to complete and state is an integer (0..n).  Each
        call returns a string with a new completion.  When no more
        are available, None is returned."""
        if line is None: # line is only set in tests
            import readline
            line = readline.get_line_buffer()

        matches = []
        # blank line completes to action list
        if not line.strip():
            matches = self._complete_actions(text)
        else:
            words = line.split()
            action = words[0]
            # incomplete action completes to action list
            if len(words) == 1 and not line.endswith(' '):
                matches = self._complete_actions(text)
            # actions that accept an action name
            elif action in ('help',):
                matches = self._complete_actions(text)
            # actions that accept a group name
            elif action in ('add', 'remove', 'update'):
                matches = self._complete_groups(text)
            # actions that accept a process name
            elif action in ('clear', 'fg', 'pid', 'restart', 'signal',
                            'start', 'status', 'stop', 'tail'):
                matches = self._complete_processes(text)
        if len(matches) > state:
            return matches[state]

    def _complete_actions(self, text):
        """Build a completion list of action names matching text"""
        return [ a + ' ' for a in self.vocab if a.startswith(text)]

    def _complete_groups(self, text):
        """Build a completion list of group names matching text"""
        groups = []
        for info in self._get_complete_info():
            if info['group'] not in groups:
                groups.append(info['group'])
        return [ g + ' ' for g in groups if g.startswith(text) ]

    def _complete_processes(self, text):
        """Build a completion list of process names matching text"""
        processes = []
        for info in self._get_complete_info():
            if ':' in text or info['name'] != info['group']:
                processes.append('%s:%s' % (info['group'], info['name']))
                if '%s:*' % info['group'] not in processes:
                    processes.append('%s:*' % info['group'])
            else:
                processes.append(info['name'])
        return [ p + ' ' for p in processes if p.startswith(text) ]

    def _get_complete_info(self):
        """Get all process info used for completion.  We cache this
        between commands to reduce XML-RPC calls because readline may
        call complete() many times if the user hits tab only once."""
        if self._complete_info is None:
            self._complete_info = self.get_supervisor().getAllProcessInfo()
        return self._complete_info

    def do_help(self, arg):
        if arg.strip() == 'help':
            self.help_help()
        else:
            for plugin in self.options.plugins:
                plugin.do_help(arg)

    def help_help(self):
        self.output("help\t\tPrint a list of available actions")
        self.output("help <action>\tPrint help for <action>")

    def do_EOF(self, arg):
        self.output('')
        return 1

    def help_EOF(self):
        self.output("To quit, type ^D or use the quit command")

def get_names(inst):
    names = []
    classes = [inst.__class__]
    while classes:
        aclass = classes.pop(0)
        if aclass.__bases__:
            classes = classes + list(aclass.__bases__)
        names = names + dir(aclass)
    return names

class ControllerPluginBase:
    name = 'unnamed'

    def __init__(self, controller):
        self.ctl = controller

    def _doc_header(self):
        return "%s commands (type help <topic>):" % self.name
    doc_header = property(_doc_header)

    def do_help(self, arg):
        if arg:
            # XXX check arg syntax
            try:
                func = getattr(self, 'help_' + arg)
            except AttributeError:
                try:
                    doc = getattr(self, 'do_' + arg).__doc__
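The `_complete_processes` helper above can be exercised standalone. A sketch with a canned `getAllProcessInfo()` result (the group and process names below are made up for illustration):

```python
def complete_processes(text, infos):
    # Mirrors Controller._complete_processes: "group:name" forms (plus a
    # "group:*" wildcard) are offered when the typed text contains ':' or
    # when a process name differs from its group; bare names otherwise.
    processes = []
    for info in infos:
        if ':' in text or info['name'] != info['group']:
            processes.append('%s:%s' % (info['group'], info['name']))
            if '%s:*' % info['group'] not in processes:
                processes.append('%s:*' % info['group'])
        else:
            processes.append(info['name'])
    return [p + ' ' for p in processes if p.startswith(text)]

infos = [
    {'group': 'web', 'name': 'web'},           # homonymous single process
    {'group': 'workers', 'name': 'worker_0'},  # member of a multi-process group
    {'group': 'workers', 'name': 'worker_1'},
]

print(complete_processes('web', infos))       # ['web ']
print(complete_processes('workers:', infos))  # group-qualified completions
```

Note the trailing space appended to each match: it lets readline advance past a completed word, which is also why `:`, `*`, and `-` are removed from the completer delimiters in `exec_cmdloop`.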
                    if doc:
                        self.ctl.output(doc)
                        return
                except AttributeError:
                    pass
                self.ctl.output(self.ctl.nohelp % (arg,))
                return
            func()
        else:
            names = get_names(self)
            cmds_doc = []
            cmds_undoc = []
            help = {}
            for name in names:
                if name[:5] == 'help_':
                    help[name[5:]] = 1
            names.sort()
            # There can be duplicates if routines overridden
            prevname = ''
            for name in names:
                if name[:3] == 'do_':
                    if name == prevname:
                        continue
                    prevname = name
                    cmd = name[3:]
                    if cmd in help:
                        cmds_doc.append(cmd)
                        del help[cmd]
                    elif getattr(self, name).__doc__:
                        cmds_doc.append(cmd)
                    else:
                        cmds_undoc.append(cmd)
            self.ctl.output('')
            self.ctl.print_topics(self.doc_header, cmds_doc, 15, 80)

def not_all_langs():
    enc = getattr(sys.stdout, 'encoding', None) or ''
    return None if enc.lower().startswith('utf') else sys.stdout.encoding

def check_encoding(ctl):
    problematic_enc = not_all_langs()
    if problematic_enc:
        ctl.output('Warning: sys.stdout.encoding is set to %s, so Unicode '
                   'output may fail. Check your LANG and PYTHONIOENCODING '
                   'environment settings.' % problematic_enc)

class DefaultControllerPlugin(ControllerPluginBase):
    name = 'default'
    listener = None # for unit tests

    def _tailf(self, path):
        check_encoding(self.ctl)
        self.ctl.output('==> Press Ctrl-C to exit <==')

        username = self.ctl.options.username
        password = self.ctl.options.password
        handler = None
        try:
            # Python's urllib2 (at least as of Python 2.4.2) isn't up
            # to this task; it doesn't actually implement a proper
            # HTTP/1.1 client that deals with chunked responses (it
            # always sends a Connection: close header).  We use a
            # homegrown client based on asyncore instead.  This makes
            # me sad.
if self.listener is None: listener = http_client.Listener() else: listener = self.listener # for unit tests handler = http_client.HTTPHandler(listener, username, password) handler.get(self.ctl.options.serverurl, path) asyncore.loop() except KeyboardInterrupt: if handler: handler.close() self.ctl.output('') return def do_tail(self, arg): if not self.ctl.upcheck(): return args = arg.split() if len(args) < 1: self.ctl.output('Error: too few arguments') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_tail() return elif len(args) > 3: self.ctl.output('Error: too many arguments') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_tail() return modifier = None if args[0].startswith('-'): modifier = args.pop(0) if len(args) == 1: name = args[-1] channel = 'stdout' else: if args: name = args[0] channel = args[-1].lower() if channel not in ('stderr', 'stdout'): self.ctl.output('Error: bad channel %r' % channel) self.ctl.exitstatus = LSBInitExitStatuses.GENERIC return else: self.ctl.output('Error: tail requires process name') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC return bytes = 1600 if modifier is not None: what = modifier[1:] if what == 'f': bytes = None else: try: bytes = int(what) except: self.ctl.output('Error: bad argument %s' % modifier) self.ctl.exitstatus = LSBInitExitStatuses.GENERIC return supervisor = self.ctl.get_supervisor() if bytes is None: return self._tailf('/logtail/%s/%s' % (name, channel)) else: check_encoding(self.ctl) try: if channel == 'stdout': output = supervisor.readProcessStdoutLog(name, -bytes, 0) else: output = supervisor.readProcessStderrLog(name, -bytes, 0) except xmlrpclib.Fault as e: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC template = '%s: ERROR (%s)' if e.faultCode == xmlrpc.Faults.NO_FILE: self.ctl.output(template % (name, 'no log file')) elif e.faultCode == xmlrpc.Faults.FAILED: self.ctl.output(template % (name, 'unknown error reading log')) elif e.faultCode == xmlrpc.Faults.BAD_NAME: 
self.ctl.output(template % (name, 'no such process name')) else: raise else: self.ctl.output(output) def help_tail(self): self.ctl.output( "tail [-f] [stdout|stderr] (default stdout)\n" "Ex:\n" "tail -f \t\tContinuous tail of named process stdout\n" "\t\t\tCtrl-C to exit.\n" "tail -100 \tlast 100 *bytes* of process stdout\n" "tail stderr\tlast 1600 *bytes* of process stderr" ) def do_maintail(self, arg): if not self.ctl.upcheck(): return args = arg.split() if len(args) > 1: self.ctl.output('Error: too many arguments') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_maintail() return elif len(args) == 1: if args[0].startswith('-'): what = args[0][1:] if what == 'f': path = '/mainlogtail' return self._tailf(path) try: what = int(what) except: self.ctl.output('Error: bad argument %s' % args[0]) self.ctl.exitstatus = LSBInitExitStatuses.GENERIC return else: bytes = what else: self.ctl.output('Error: bad argument %s' % args[0]) self.ctl.exitstatus = LSBInitExitStatuses.GENERIC return else: bytes = 1600 supervisor = self.ctl.get_supervisor() try: output = supervisor.readLog(-bytes, 0) except xmlrpclib.Fault as e: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC template = '%s: ERROR (%s)' if e.faultCode == xmlrpc.Faults.NO_FILE: self.ctl.output(template % ('supervisord', 'no log file')) elif e.faultCode == xmlrpc.Faults.FAILED: self.ctl.output(template % ('supervisord', 'unknown error reading log')) else: raise else: self.ctl.output(output) def help_maintail(self): self.ctl.output( "maintail -f \tContinuous tail of supervisor main log file" " (Ctrl-C to exit)\n" "maintail -100\tlast 100 *bytes* of supervisord main log file\n" "maintail\tlast 1600 *bytes* of supervisor main log file\n" ) def do_quit(self, arg): return self.ctl.do_EOF(arg) def help_quit(self): self.ctl.output("quit\tExit the supervisor shell.") do_exit = do_quit def help_exit(self): self.ctl.output("exit\tExit the supervisor shell.") def _show_statuses(self, process_infos): namespecs, maxlen = 
[], 30 for i, info in enumerate(process_infos): namespecs.append(make_namespec(info['group'], info['name'])) if len(namespecs[i]) > maxlen: maxlen = len(namespecs[i]) template = '%(namespec)-' + str(maxlen+3) + 's%(state)-10s%(desc)s' for i, info in enumerate(process_infos): line = template % {'namespec': namespecs[i], 'state': info['statename'], 'desc': info['description']} self.ctl.output(line) def do_status(self, arg): # XXX In case upcheck fails, we override the exitstatus which # should only return 4 for do_status # TODO review this if not self.ctl.upcheck(): self.ctl.exitstatus = LSBStatusExitStatuses.UNKNOWN return supervisor = self.ctl.get_supervisor() all_infos = supervisor.getAllProcessInfo() names = as_string(arg).split() if not names or "all" in names: matching_infos = all_infos else: matching_infos = [] for name in names: bad_name = True group_name, process_name = split_namespec(name) for info in all_infos: matched = info['group'] == group_name if process_name is not None: matched = matched and info['name'] == process_name if matched: bad_name = False matching_infos.append(info) if bad_name: if process_name is None: msg = "%s: ERROR (no such group)" % group_name else: msg = "%s: ERROR (no such process)" % name self.ctl.output(msg) self.ctl.exitstatus = LSBStatusExitStatuses.UNKNOWN self._show_statuses(matching_infos) for info in matching_infos: if info['state'] in states.STOPPED_STATES: self.ctl.exitstatus = LSBStatusExitStatuses.NOT_RUNNING def help_status(self): self.ctl.output("status \t\tGet status for a single process") self.ctl.output("status :*\tGet status for all " "processes in a group") self.ctl.output("status \tGet status for multiple named " "processes") self.ctl.output("status\t\t\tGet all process status info") def do_pid(self, arg): supervisor = self.ctl.get_supervisor() if not self.ctl.upcheck(): return names = arg.split() if not names: pid = supervisor.getPID() self.ctl.output(str(pid)) elif 'all' in names: for info in 
supervisor.getAllProcessInfo(): self.ctl.output(str(info['pid'])) else: for name in names: try: info = supervisor.getProcessInfo(name) except xmlrpclib.Fault as e: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC if e.faultCode == xmlrpc.Faults.BAD_NAME: self.ctl.output('No such process %s' % name) else: raise else: pid = info['pid'] self.ctl.output(str(pid)) if pid == 0: self.ctl.exitstatus = LSBInitExitStatuses.NOT_RUNNING def help_pid(self): self.ctl.output("pid\t\t\tGet the PID of supervisord.") self.ctl.output("pid \t\tGet the PID of a single " "child process by name.") self.ctl.output("pid all\t\t\tGet the PID of every child " "process, one per line.") def _startresult(self, result): name = make_namespec(result['group'], result['name']) code = result['status'] template = '%s: ERROR (%s)' if code == xmlrpc.Faults.BAD_NAME: return template % (name, 'no such process') elif code == xmlrpc.Faults.NO_FILE: return template % (name, 'no such file') elif code == xmlrpc.Faults.NOT_EXECUTABLE: return template % (name, 'file is not executable') elif code == xmlrpc.Faults.ALREADY_STARTED: return template % (name, 'already started') elif code == xmlrpc.Faults.SPAWN_ERROR: return template % (name, 'spawn error') elif code == xmlrpc.Faults.ABNORMAL_TERMINATION: return template % (name, 'abnormal termination') elif code == xmlrpc.Faults.SUCCESS: return '%s: started' % name # assertion raise ValueError('Unknown result code %s for %s' % (code, name)) def do_start(self, arg): if not self.ctl.upcheck(): return names = arg.split() supervisor = self.ctl.get_supervisor() if not names: self.ctl.output("Error: start requires a process name") self.ctl.exitstatus = LSBInitExitStatuses.INVALID_ARGS self.help_start() return if 'all' in names: results = supervisor.startAllProcesses() for result in results: self.ctl.output(self._startresult(result)) self.ctl.set_exitstatus_from_xmlrpc_fault(result['status'], xmlrpc.Faults.ALREADY_STARTED) else: for name in names: group_name, process_name = 
split_namespec(name) if process_name is None: try: results = supervisor.startProcessGroup(group_name) for result in results: self.ctl.output(self._startresult(result)) self.ctl.set_exitstatus_from_xmlrpc_fault(result['status'], xmlrpc.Faults.ALREADY_STARTED) except xmlrpclib.Fault as e: if e.faultCode == xmlrpc.Faults.BAD_NAME: error = "%s: ERROR (no such group)" % group_name self.ctl.output(error) self.ctl.exitstatus = LSBInitExitStatuses.INVALID_ARGS else: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC raise else: try: result = supervisor.startProcess(name) except xmlrpclib.Fault as e: error = {'status': e.faultCode, 'name': process_name, 'group': group_name, 'description': e.faultString} self.ctl.output(self._startresult(error)) self.ctl.set_exitstatus_from_xmlrpc_fault(error['status'], xmlrpc.Faults.ALREADY_STARTED) else: name = make_namespec(group_name, process_name) self.ctl.output('%s: started' % name) def help_start(self): self.ctl.output("start \t\tStart a process") self.ctl.output("start :*\t\tStart all processes in a group") self.ctl.output( "start \tStart multiple processes or groups") self.ctl.output("start all\t\tStart all processes") def _signalresult(self, result, success='signalled'): name = make_namespec(result['group'], result['name']) code = result['status'] fault_string = result['description'] template = '%s: ERROR (%s)' if code == xmlrpc.Faults.BAD_NAME: return template % (name, 'no such process') elif code == xmlrpc.Faults.BAD_SIGNAL: return template % (name, 'bad signal name') elif code == xmlrpc.Faults.NOT_RUNNING: return template % (name, 'not running') elif code == xmlrpc.Faults.SUCCESS: return '%s: %s' % (name, success) elif code == xmlrpc.Faults.FAILED: return fault_string # assertion raise ValueError('Unknown result code %s for %s' % (code, name)) def _stopresult(self, result): return self._signalresult(result, success='stopped') def do_stop(self, arg): if not self.ctl.upcheck(): return names = arg.split() supervisor = 
self.ctl.get_supervisor() if not names: self.ctl.output('Error: stop requires a process name') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_stop() return if 'all' in names: results = supervisor.stopAllProcesses() for result in results: self.ctl.output(self._stopresult(result)) self.ctl.set_exitstatus_from_xmlrpc_fault(result['status'], xmlrpc.Faults.NOT_RUNNING) else: for name in names: group_name, process_name = split_namespec(name) if process_name is None: try: results = supervisor.stopProcessGroup(group_name) for result in results: self.ctl.output(self._stopresult(result)) self.ctl.set_exitstatus_from_xmlrpc_fault(result['status'], xmlrpc.Faults.NOT_RUNNING) except xmlrpclib.Fault as e: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC if e.faultCode == xmlrpc.Faults.BAD_NAME: error = "%s: ERROR (no such group)" % group_name self.ctl.output(error) else: raise else: try: supervisor.stopProcess(name) except xmlrpclib.Fault as e: error = {'status': e.faultCode, 'name': process_name, 'group': group_name, 'description':e.faultString} self.ctl.output(self._stopresult(error)) self.ctl.set_exitstatus_from_xmlrpc_fault(error['status'], xmlrpc.Faults.NOT_RUNNING) else: name = make_namespec(group_name, process_name) self.ctl.output('%s: stopped' % name) def help_stop(self): self.ctl.output("stop \t\tStop a process") self.ctl.output("stop :*\t\tStop all processes in a group") self.ctl.output("stop \tStop multiple processes or groups") self.ctl.output("stop all\t\tStop all processes") def do_signal(self, arg): if not self.ctl.upcheck(): return args = arg.split() if len(args) < 2: self.ctl.output( 'Error: signal requires a signal name and a process name') self.help_signal() self.ctl.exitstatus = LSBInitExitStatuses.GENERIC return sig = args[0] names = args[1:] supervisor = self.ctl.get_supervisor() if 'all' in names: results = supervisor.signalAllProcesses(sig) for result in results: self.ctl.output(self._signalresult(result)) 
self.ctl.set_exitstatus_from_xmlrpc_fault(result['status']) else: for name in names: group_name, process_name = split_namespec(name) if process_name is None: try: results = supervisor.signalProcessGroup( group_name, sig ) for result in results: self.ctl.output(self._signalresult(result)) self.ctl.set_exitstatus_from_xmlrpc_fault(result['status']) except xmlrpclib.Fault as e: if e.faultCode == xmlrpc.Faults.BAD_NAME: error = "%s: ERROR (no such group)" % group_name self.ctl.output(error) self.ctl.exitstatus = LSBInitExitStatuses.GENERIC else: raise else: try: supervisor.signalProcess(name, sig) except xmlrpclib.Fault as e: error = {'status': e.faultCode, 'name': process_name, 'group': group_name, 'description':e.faultString} self.ctl.output(self._signalresult(error)) self.ctl.set_exitstatus_from_xmlrpc_fault(error['status']) else: name = make_namespec(group_name, process_name) self.ctl.output('%s: signalled' % name) def help_signal(self): self.ctl.output("signal \t\tSignal a process") self.ctl.output("signal :*\t\tSignal all processes in a group") self.ctl.output("signal \tSignal multiple processes or groups") self.ctl.output("signal all\t\tSignal all processes") def do_restart(self, arg): if not self.ctl.upcheck(): return names = arg.split() if not names: self.ctl.output('Error: restart requires a process name') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_restart() return self.do_stop(arg) self.do_start(arg) def help_restart(self): self.ctl.output("restart \t\tRestart a process") self.ctl.output("restart :*\tRestart all processes in a group") self.ctl.output("restart \tRestart multiple processes or " "groups") self.ctl.output("restart all\t\tRestart all processes") self.ctl.output("Note: restart does not reread config files. 
For that," " see reread and update.") def do_shutdown(self, arg): if arg: self.ctl.output('Error: shutdown accepts no arguments') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_shutdown() return if self.ctl.options.interactive: yesno = raw_input('Really shut the remote supervisord process ' 'down y/N? ') really = yesno.lower().startswith('y') else: really = 1 if really: supervisor = self.ctl.get_supervisor() try: supervisor.shutdown() except xmlrpclib.Fault as e: if e.faultCode == xmlrpc.Faults.SHUTDOWN_STATE: self.ctl.output('ERROR: already shutting down') else: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC raise except socket.error as e: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC if e.args[0] == errno.ECONNREFUSED: msg = 'ERROR: %s refused connection (already shut down?)' self.ctl.output(msg % self.ctl.options.serverurl) elif e.args[0] == errno.ENOENT: msg = 'ERROR: %s no such file (already shut down?)' self.ctl.output(msg % self.ctl.options.serverurl) else: raise else: self.ctl.output('Shut down') def help_shutdown(self): self.ctl.output("shutdown \tShut the remote supervisord down.") def do_reload(self, arg): if arg: self.ctl.output('Error: reload accepts no arguments') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_reload() return if self.ctl.options.interactive: yesno = raw_input('Really restart the remote supervisord process ' 'y/N? 
') really = yesno.lower().startswith('y') else: really = 1 if really: supervisor = self.ctl.get_supervisor() try: supervisor.restart() except xmlrpclib.Fault as e: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC if e.faultCode == xmlrpc.Faults.SHUTDOWN_STATE: self.ctl.output('ERROR: already shutting down') else: raise else: self.ctl.output('Restarted supervisord') def help_reload(self): self.ctl.output("reload \t\tRestart the remote supervisord.") def _formatChanges(self, added_changed_dropped_tuple): added, changed, dropped = added_changed_dropped_tuple changedict = {} for n, t in [(added, 'available'), (changed, 'changed'), (dropped, 'disappeared')]: changedict.update(dict(zip(n, [t] * len(n)))) if changedict: names = list(changedict.keys()) names.sort() for name in names: self.ctl.output("%s: %s" % (name, changedict[name])) else: self.ctl.output("No config updates to processes") def _formatConfigInfo(self, configinfo): name = make_namespec(configinfo['group'], configinfo['name']) formatted = { 'name': name } if configinfo['inuse']: formatted['inuse'] = 'in use' else: formatted['inuse'] = 'avail' if configinfo['autostart']: formatted['autostart'] = 'auto' else: formatted['autostart'] = 'manual' formatted['priority'] = "%s:%s" % (configinfo['group_prio'], configinfo['process_prio']) template = '%(name)-32s %(inuse)-9s %(autostart)-9s %(priority)s' return template % formatted def do_avail(self, arg): if arg: self.ctl.output('Error: avail accepts no arguments') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_avail() return supervisor = self.ctl.get_supervisor() try: configinfo = supervisor.getAllConfigInfo() except xmlrpclib.Fault as e: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC if e.faultCode == xmlrpc.Faults.SHUTDOWN_STATE: self.ctl.output('ERROR: supervisor shutting down') else: raise else: for pinfo in configinfo: self.ctl.output(self._formatConfigInfo(pinfo)) def help_avail(self): self.ctl.output("avail\t\t\tDisplay all configured 
processes") def do_reread(self, arg): if arg: self.ctl.output('Error: reread accepts no arguments') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_reread() return supervisor = self.ctl.get_supervisor() try: result = supervisor.reloadConfig() except xmlrpclib.Fault as e: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC if e.faultCode == xmlrpc.Faults.SHUTDOWN_STATE: self.ctl.output('ERROR: supervisor shutting down') elif e.faultCode == xmlrpc.Faults.CANT_REREAD: self.ctl.output("ERROR: %s" % e.faultString) else: raise else: self._formatChanges(result[0]) def help_reread(self): self.ctl.output("reread \t\t\tReload the daemon's configuration files without add/remove") def do_add(self, arg): names = arg.split() supervisor = self.ctl.get_supervisor() for name in names: try: supervisor.addProcessGroup(name) except xmlrpclib.Fault as e: if e.faultCode == xmlrpc.Faults.SHUTDOWN_STATE: self.ctl.output('ERROR: shutting down') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC elif e.faultCode == xmlrpc.Faults.ALREADY_ADDED: self.ctl.output('ERROR: process group already active') elif e.faultCode == xmlrpc.Faults.BAD_NAME: self.ctl.output("ERROR: no such process/group: %s" % name) self.ctl.exitstatus = LSBInitExitStatuses.GENERIC else: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC raise else: self.ctl.output("%s: added process group" % name) def help_add(self): self.ctl.output("add [...]\tActivates any updates in config " "for process/group") def do_remove(self, arg): names = arg.split() supervisor = self.ctl.get_supervisor() for name in names: try: supervisor.removeProcessGroup(name) except xmlrpclib.Fault as e: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC if e.faultCode == xmlrpc.Faults.STILL_RUNNING: self.ctl.output('ERROR: process/group still running: %s' % name) elif e.faultCode == xmlrpc.Faults.BAD_NAME: self.ctl.output("ERROR: no such process/group: %s" % name) else: raise else: self.ctl.output("%s: removed process group" % name) def 
help_remove(self): self.ctl.output("remove [...]\tRemoves process/group from " "active config") def do_update(self, arg): def log(name, message): self.ctl.output("%s: %s" % (name, message)) supervisor = self.ctl.get_supervisor() try: result = supervisor.reloadConfig() except xmlrpclib.Fault as e: self.ctl.exitstatus = LSBInitExitStatuses.GENERIC if e.faultCode == xmlrpc.Faults.SHUTDOWN_STATE: self.ctl.output('ERROR: already shutting down') return else: raise added, changed, removed = result[0] valid_gnames = set(arg.split()) # If all is specified treat it as if nothing was specified. if "all" in valid_gnames: valid_gnames = set() # If any gnames are specified we need to verify that they are # valid in order to print a useful error message. if valid_gnames: groups = set() for info in supervisor.getAllProcessInfo(): groups.add(info['group']) # New gnames would not currently exist in this set so # add those as well. groups.update(added) for gname in valid_gnames: if gname not in groups: self.ctl.output('ERROR: no such group: %s' % gname) self.ctl.exitstatus = LSBInitExitStatuses.GENERIC for gname in removed: if valid_gnames and gname not in valid_gnames: continue results = supervisor.stopProcessGroup(gname) log(gname, "stopped") fails = [res for res in results if res['status'] == xmlrpc.Faults.FAILED] if fails: self.ctl.output("%s: %s" % (gname, "has problems; not removing")) self.ctl.exitstatus = LSBInitExitStatuses.GENERIC continue supervisor.removeProcessGroup(gname) log(gname, "removed process group") for gname in changed: if valid_gnames and gname not in valid_gnames: continue supervisor.stopProcessGroup(gname) log(gname, "stopped") supervisor.removeProcessGroup(gname) supervisor.addProcessGroup(gname) log(gname, "updated process group") for gname in added: if valid_gnames and gname not in valid_gnames: continue supervisor.addProcessGroup(gname) log(gname, "added process group") def help_update(self): self.ctl.output("update\t\t\tReload config and add/remove as 
necessary, and will restart affected programs") self.ctl.output("update all\t\tReload config and add/remove as necessary, and will restart affected programs") self.ctl.output("update [...]\tUpdate specific groups") def _clearresult(self, result): name = make_namespec(result['group'], result['name']) code = result['status'] template = '%s: ERROR (%s)' if code == xmlrpc.Faults.BAD_NAME: return template % (name, 'no such process') elif code == xmlrpc.Faults.FAILED: return template % (name, 'failed') elif code == xmlrpc.Faults.SUCCESS: return '%s: cleared' % name raise ValueError('Unknown result code %s for %s' % (code, name)) def do_clear(self, arg): if not self.ctl.upcheck(): return names = arg.split() if not names: self.ctl.output('Error: clear requires a process name') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_clear() return supervisor = self.ctl.get_supervisor() if 'all' in names: results = supervisor.clearAllProcessLogs() for result in results: self.ctl.output(self._clearresult(result)) self.ctl.set_exitstatus_from_xmlrpc_fault(result['status']) else: for name in names: group_name, process_name = split_namespec(name) try: supervisor.clearProcessLogs(name) except xmlrpclib.Fault as e: error = {'status': e.faultCode, 'name': process_name, 'group': group_name, 'description': e.faultString} self.ctl.output(self._clearresult(error)) self.ctl.set_exitstatus_from_xmlrpc_fault(error['status']) else: name = make_namespec(group_name, process_name) self.ctl.output('%s: cleared' % name) def help_clear(self): self.ctl.output("clear \t\tClear a process' log files.") self.ctl.output( "clear \tClear multiple process' log files") self.ctl.output("clear all\t\tClear all process' log files") def do_open(self, arg): url = arg.strip() parts = urlparse.urlparse(url) if parts[0] not in ('unix', 'http'): self.ctl.output('ERROR: url must be http:// or unix://') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC return self.ctl.options.serverurl = url # TODO review this 
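The commands above set `self.ctl.exitstatus` from two families of constants whose definitions fall outside this excerpt. A sketch of their assumed values, following the LSB init-script exit-code conventions (the names match the constants referenced throughout; the values here are my reading of the LSB spec, not quoted from this file):

```python
# Assumed values, mirroring LSB init-script conventions.
class LSBInitExitStatuses:
    SUCCESS = 0
    GENERIC = 1
    INVALID_ARGS = 2
    UNIMPLEMENTED_FEATURE = 3
    INSUFFICIENT_PRIVILEGES = 4
    NOT_INSTALLED = 5
    NOT_RUNNING = 7

# "status" has its own exit-code table in the LSB spec.
class LSBStatusExitStatuses:
    NOT_RUNNING = 3
    UNKNOWN = 4
```

This is why, for example, `do_status` uses `LSBStatusExitStatuses.UNKNOWN` while the start/stop commands use the `LSBInitExitStatuses` family: the LSB defines separate code tables for the `status` action and for all other init-script actions.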
old_exitstatus = self.ctl.exitstatus self.do_status('') self.ctl.exitstatus = old_exitstatus def help_open(self): self.ctl.output("open \tConnect to a remote supervisord process.") self.ctl.output("\t\t(for UNIX domain socket, use unix:///socket/path)") def do_version(self, arg): if arg: self.ctl.output('Error: version accepts no arguments') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_version() return if not self.ctl.upcheck(): return supervisor = self.ctl.get_supervisor() self.ctl.output(supervisor.getSupervisorVersion()) def help_version(self): self.ctl.output( "version\t\t\tShow the version of the remote supervisord " "process") def do_fg(self, arg): if not self.ctl.upcheck(): return names = arg.split() if not names: self.ctl.output('ERROR: no process name supplied') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC self.help_fg() return if len(names) > 1: self.ctl.output('ERROR: too many process names supplied') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC return name = names[0] supervisor = self.ctl.get_supervisor() try: info = supervisor.getProcessInfo(name) except xmlrpclib.Fault as e: if e.faultCode == xmlrpc.Faults.BAD_NAME: self.ctl.output('ERROR: bad process name supplied') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC else: self.ctl.output('ERROR: ' + str(e)) return if info['state'] != states.ProcessStates.RUNNING: self.ctl.output('ERROR: process not running') self.ctl.exitstatus = LSBInitExitStatuses.GENERIC return self.ctl.output('==> Press Ctrl-C to exit <==') a = None try: # this thread takes care of the output/error messages a = fgthread(name, self.ctl) a.start() # this takes care of the user input while True: inp = raw_input() + '\n' try: supervisor.sendProcessStdin(name, inp) except xmlrpclib.Fault as e: if e.faultCode == xmlrpc.Faults.NOT_RUNNING: self.ctl.output('Process got killed') else: self.ctl.output('ERROR: ' + str(e)) self.ctl.output('Exiting foreground') a.kill() return info = supervisor.getProcessInfo(name) if 
info['state'] != states.ProcessStates.RUNNING: self.ctl.output('Process got killed') self.ctl.output('Exiting foreground') a.kill() return except (KeyboardInterrupt, EOFError): self.ctl.output('Exiting foreground') if a: a.kill() def help_fg(self, args=None): self.ctl.output('fg <process>\tConnect to a process in foreground mode') self.ctl.output("\t\tCtrl-C to exit") def main(args=None, options=None): if options is None: options = ClientOptions() options.realize(args, doc=__doc__) c = Controller(options) if options.args: c.onecmd(" ".join(options.args)) sys.exit(c.exitstatus) if options.interactive: c.exec_cmdloop(args, options) sys.exit(0) # exitstatus always 0 for interactive mode if __name__ == "__main__": main()
supervisor-4.2.5/supervisor/supervisord.py
#!/usr/bin/env python """supervisord -- run a set of applications as daemons.
Usage: %s [options] Options: -c/--configuration FILENAME -- configuration file path (searches if not given) -n/--nodaemon -- run in the foreground (same as 'nodaemon=true' in config file) -s/--silent -- no logs to stdout (maps to 'silent=true' in config file) -h/--help -- print this usage message and exit -v/--version -- print supervisord version number and exit -u/--user USER -- run supervisord as this user (or numeric uid) -m/--umask UMASK -- use this umask for daemon subprocess (default is 022) -d/--directory DIRECTORY -- directory to chdir to when daemonized -l/--logfile FILENAME -- use FILENAME as logfile path -y/--logfile_maxbytes BYTES -- use BYTES to limit the max size of logfile -z/--logfile_backups NUM -- number of backups to keep when max bytes reached -e/--loglevel LEVEL -- use LEVEL as log level (debug,info,warn,error,critical) -j/--pidfile FILENAME -- write a pid file for the daemon process to FILENAME -i/--identifier STR -- identifier used for this instance of supervisord -q/--childlogdir DIRECTORY -- the log directory for child process logs -k/--nocleanup -- prevent the process from performing cleanup (removal of old automatic child log files) at startup. -a/--minfds NUM -- the minimum number of file descriptors for start success -t/--strip_ansi -- strip ansi escape codes from process output --minprocs NUM -- the minimum number of processes available for start success --profile_options OPTIONS -- run supervisord under profiler and output results based on OPTIONS, which is a comma-sep'd list of 'cumulative', 'calls', and/or 'callers', e.g. 
'cumulative,callers') """ import os import time import signal from supervisor.medusa import asyncore_25 as asyncore from supervisor.compat import as_string from supervisor.options import ServerOptions from supervisor.options import decode_wait_status from supervisor.options import signame from supervisor import events from supervisor.states import SupervisorStates from supervisor.states import getProcessStateDescription class Supervisor: stopping = False # set after we detect that we are handling a stop request lastshutdownreport = 0 # throttle for delayed process error reports at stop process_groups = None # map of process group name to process group object stop_groups = None # list used for priority ordered shutdown def __init__(self, options): self.options = options self.process_groups = {} self.ticks = {} def main(self): if not self.options.first: # prevent crash on libdispatch-based systems, at least for the # first request self.options.cleanup_fds() self.options.set_uid_or_exit() if self.options.first: self.options.set_rlimits_or_exit() # this sets the options.logger object # delay logger instantiation until after setuid self.options.make_logger() if not self.options.nocleanup: # clean up old automatic logs self.options.clear_autochildlogdir() self.run() def run(self): self.process_groups = {} # clear self.stop_groups = None # clear events.clear() try: for config in self.options.process_group_configs: self.add_process_group(config) self.options.openhttpservers(self) self.options.setsignals() if (not self.options.nodaemon) and self.options.first: self.options.daemonize() # writing pid file needs to come *after* daemonizing or pid # will be wrong self.options.write_pidfile() self.runforever() finally: self.options.cleanup() def diff_to_active(self): new = self.options.process_group_configs cur = [group.config for group in self.process_groups.values()] curdict = dict(zip([cfg.name for cfg in cur], cur)) newdict = dict(zip([cfg.name for cfg in new], new)) added = 
[cand for cand in new if cand.name not in curdict] removed = [cand for cand in cur if cand.name not in newdict] changed = [cand for cand in new if cand != curdict.get(cand.name, cand)] return added, changed, removed def add_process_group(self, config): name = config.name if name not in self.process_groups: config.after_setuid() self.process_groups[name] = config.make_group() events.notify(events.ProcessGroupAddedEvent(name)) return True return False def remove_process_group(self, name): if self.process_groups[name].get_unstopped_processes(): return False self.process_groups[name].before_remove() del self.process_groups[name] events.notify(events.ProcessGroupRemovedEvent(name)) return True def get_process_map(self): process_map = {} for group in self.process_groups.values(): process_map.update(group.get_dispatchers()) return process_map def shutdown_report(self): unstopped = [] for group in self.process_groups.values(): unstopped.extend(group.get_unstopped_processes()) if unstopped: # throttle 'waiting for x to die' reports now = time.time() if now > (self.lastshutdownreport + 3): # every 3 secs names = [ as_string(p.config.name) for p in unstopped ] namestr = ', '.join(names) self.options.logger.info('waiting for %s to die' % namestr) self.lastshutdownreport = now for proc in unstopped: state = getProcessStateDescription(proc.get_state()) self.options.logger.blather( '%s state: %s' % (proc.config.name, state)) return unstopped def ordered_stop_groups_phase_1(self): if self.stop_groups: # stop the last group (the one with the "highest" priority) self.stop_groups[-1].stop_all() def ordered_stop_groups_phase_2(self): # after phase 1 we've transitioned and reaped, let's see if we # can remove the group we stopped from the stop_groups queue. 
if self.stop_groups: # pop the last group (the one with the "highest" priority) group = self.stop_groups.pop() if group.get_unstopped_processes(): # if any processes in the group aren't yet in a # stopped state, we're not yet done shutting this # group down, so push it back on to the end of the # stop group queue self.stop_groups.append(group) def runforever(self): events.notify(events.SupervisorRunningEvent()) timeout = 1 # this cannot be fewer than the smallest TickEvent (5) socket_map = self.options.get_socket_map() while 1: combined_map = {} combined_map.update(socket_map) combined_map.update(self.get_process_map()) pgroups = list(self.process_groups.values()) pgroups.sort() if self.options.mood < SupervisorStates.RUNNING: if not self.stopping: # first time, set the stopping flag, do a # notification and set stop_groups self.stopping = True self.stop_groups = pgroups[:] events.notify(events.SupervisorStoppingEvent()) self.ordered_stop_groups_phase_1() if not self.shutdown_report(): # if there are no unstopped processes (we're done # killing everything), it's OK to shutdown or reload raise asyncore.ExitNow for fd, dispatcher in combined_map.items(): if dispatcher.readable(): self.options.poller.register_readable(fd) if dispatcher.writable(): self.options.poller.register_writable(fd) r, w = self.options.poller.poll(timeout) for fd in r: if fd in combined_map: try: dispatcher = combined_map[fd] self.options.logger.blather( 'read event caused by %(dispatcher)r', dispatcher=dispatcher) dispatcher.handle_read_event() if not dispatcher.readable(): self.options.poller.unregister_readable(fd) except asyncore.ExitNow: raise except: combined_map[fd].handle_error() for fd in w: if fd in combined_map: try: dispatcher = combined_map[fd] self.options.logger.blather( 'write event caused by %(dispatcher)r', dispatcher=dispatcher) dispatcher.handle_write_event() if not dispatcher.writable(): self.options.poller.unregister_writable(fd) except asyncore.ExitNow: raise except: 
combined_map[fd].handle_error() for group in pgroups: group.transition() self.reap() self.handle_signal() self.tick() if self.options.mood < SupervisorStates.RUNNING: self.ordered_stop_groups_phase_2() if self.options.test: break def tick(self, now=None): """ Send one or more 'tick' events when the timeslice related to the period for the event type rolls over """ if now is None: # now won't be None in unit tests now = time.time() for event in events.TICK_EVENTS: period = event.period last_tick = self.ticks.get(period) if last_tick is None: # we just started up last_tick = self.ticks[period] = timeslice(period, now) this_tick = timeslice(period, now) if this_tick != last_tick: self.ticks[period] = this_tick events.notify(event(this_tick, self)) def reap(self, once=False, recursionguard=0): if recursionguard == 100: return pid, sts = self.options.waitpid() if pid: process = self.options.pidhistory.get(pid, None) if process is None: _, msg = decode_wait_status(sts) self.options.logger.info('reaped unknown pid %s (%s)' % (pid, msg)) else: process.finish(pid, sts) del self.options.pidhistory[pid] if not once: # keep reaping until no more kids to reap, but don't recurse # infinitely self.reap(once=False, recursionguard=recursionguard+1) def handle_signal(self): sig = self.options.get_signal() if sig: if sig in (signal.SIGTERM, signal.SIGINT, signal.SIGQUIT): self.options.logger.warn( 'received %s indicating exit request' % signame(sig)) self.options.mood = SupervisorStates.SHUTDOWN elif sig == signal.SIGHUP: if self.options.mood == SupervisorStates.SHUTDOWN: self.options.logger.warn( 'ignored %s indicating restart request (shutdown in progress)' % signame(sig)) else: self.options.logger.warn( 'received %s indicating restart request' % signame(sig)) self.options.mood = SupervisorStates.RESTARTING elif sig == signal.SIGCHLD: self.options.logger.debug( 'received %s indicating a child quit' % signame(sig)) elif sig == signal.SIGUSR2: self.options.logger.info( 'received %s 
indicating log reopen request' % signame(sig)) self.options.reopenlogs() for group in self.process_groups.values(): group.reopenlogs() else: self.options.logger.blather( 'received %s indicating nothing' % signame(sig)) def get_state(self): return self.options.mood def timeslice(period, when): return int(when - (when % period)) # profile entry point def profile(cmd, globals, locals, sort_order, callers): # pragma: no cover try: import cProfile as profile except ImportError: import profile import pstats import tempfile fd, fn = tempfile.mkstemp() try: profile.runctx(cmd, globals, locals, fn) stats = pstats.Stats(fn) stats.strip_dirs() # calls,time,cumulative and cumulative,calls,time are useful stats.sort_stats(*sort_order or ('cumulative', 'calls', 'time')) if callers: stats.print_callers(.3) else: stats.print_stats(.3) finally: os.remove(fn) # Main program def main(args=None, test=False): assert os.name == "posix", "This code makes Unix-specific assumptions" # if we hup, restart by making a new Supervisor() first = True while 1: options = ServerOptions() options.realize(args, doc=__doc__) options.first = first options.test = test if options.profile_options: sort_order, callers = options.profile_options profile('go(options)', globals(), locals(), sort_order, callers) else: go(options) options.close_httpservers() options.close_logger() first = False if test or (options.mood < SupervisorStates.RESTARTING): break def go(options): # pragma: no cover d = Supervisor(options) try: d.main() except asyncore.ExitNow: pass if __name__ == "__main__": # pragma: no cover main()
supervisor-4.2.5/supervisor/templating.py
# This file was originally based on the meld3 package version 2.0.0 # (https://pypi.org/project/meld3/2.0.0/).
The meld3 package is not # called out separately in Supervisor's license or copyright files # because meld3 had the same authors, copyright, and license as # Supervisor at the time this file was bundled with Supervisor. import email import re from xml.etree.ElementTree import ( Comment, ElementPath, ProcessingInstruction, QName, TreeBuilder, XMLParser, parse as et_parse ) from supervisor.compat import ( PY2, htmlentitydefs, HTMLParser, StringIO, StringTypes, unichr, as_bytes, as_string, ) AUTOCLOSE = "p", "li", "tr", "th", "td", "head", "body" IGNOREEND = "img", "hr", "meta", "link", "br" _BLANK = as_bytes('', encoding='latin1') _SPACE = as_bytes(' ', encoding='latin1') _EQUAL = as_bytes('=', encoding='latin1') _QUOTE = as_bytes('"', encoding='latin1') _OPEN_TAG_START = as_bytes("<", encoding='latin1') _CLOSE_TAG_START = as_bytes("</", encoding='latin1') _OPEN_TAG_END = _CLOSE_TAG_END = as_bytes(">", encoding='latin1') _SELF_CLOSE = as_bytes(" />", encoding='latin1') _OMITTED_TEXT = as_bytes(' [...]\n', encoding='latin1') _COMMENT_START = as_bytes('<!-- ', encoding='latin1') _COMMENT_END = as_bytes(' -->', encoding='latin1') _PI_START = as_bytes('<?', encoding='latin1') _PI_END = as_bytes('?>', encoding='latin1') _AMPER_ESCAPED = as_bytes('&amp;', encoding='latin1') _LT = as_bytes('<', encoding='latin1') _LT_ESCAPED = as_bytes('&lt;', encoding='latin1') _QUOTE_ESCAPED = as_bytes("&quot;", encoding='latin1') _XML_PROLOG_BEGIN = as_bytes('<?xml version="1.0"', encoding='latin1') _ENCODING = as_bytes('encoding', encoding='latin1') _XML_PROLOG_END = as_bytes('?>\n', encoding='latin1') _DOCTYPE_BEGIN = as_bytes('<!DOCTYPE', encoding='latin1') _DOCTYPE_END = as_bytes('>\n', encoding='latin1') if PY2: def encode(text, encoding): return text.encode(encoding) else: def encode(text, encoding): if not isinstance(text, bytes): text = text.encode(encoding) return text # replace element factory def Replace(text, structure=False): element = _MeldElementInterface(Replace, {}) element.text = text element.structure = structure return element class PyHelper: def findmeld(self, node, name, default=None): iterator = self.getiterator(node) for element in iterator: val = element.attrib.get(_MELD_ID) if val == name: return element return default def clone(self, node, parent=None): element = _MeldElementInterface(node.tag, node.attrib.copy())
element.text = node.text element.tail = node.tail element.structure = node.structure if parent is not None: # avoid calling self.append to reduce function call overhead parent._children.append(element) element.parent = parent for child in node._children: self.clone(child, element) return element def _bfclone(self, nodes, parent): L = [] for node in nodes: element = _MeldElementInterface(node.tag, node.attrib.copy()) element.parent = parent element.text = node.text element.tail = node.tail element.structure = node.structure if node._children: self._bfclone(node._children, element) L.append(element) parent._children = L def bfclone(self, node, parent=None): element = _MeldElementInterface(node.tag, node.attrib.copy()) element.text = node.text element.tail = node.tail element.structure = node.structure element.parent = parent if parent is not None: parent._children.append(element) if node._children: self._bfclone(node._children, element) return element def getiterator(self, node, tag=None): nodes = [] if tag == "*": tag = None if tag is None or node.tag == tag: nodes.append(node) for element in node._children: nodes.extend(self.getiterator(element, tag)) return nodes def content(self, node, text, structure=False): node.text = None replacenode = Replace(text, structure) replacenode.parent = node replacenode.text = text replacenode.structure = structure node._children = [replacenode] helper = PyHelper() _MELD_NS_URL = 'https://github.com/Supervisor/supervisor' _MELD_PREFIX = '{%s}' % _MELD_NS_URL _MELD_LOCAL = 'id' _MELD_ID = '%s%s' % (_MELD_PREFIX, _MELD_LOCAL) _MELD_SHORT_ID = 'meld:%s' % _MELD_LOCAL _XHTML_NS_URL = 'http://www.w3.org/1999/xhtml' _XHTML_PREFIX = '{%s}' % _XHTML_NS_URL _XHTML_PREFIX_LEN = len(_XHTML_PREFIX) _marker = [] class doctype: # lookup table for ease of use in external code html_strict = ('HTML', '-//W3C//DTD HTML 4.01//EN', 'http://www.w3.org/TR/html4/strict.dtd') html = ('HTML', '-//W3C//DTD HTML 4.01 Transitional//EN', 
'http://www.w3.org/TR/html4/loose.dtd') xhtml_strict = ('html', '-//W3C//DTD XHTML 1.0 Strict//EN', 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd') xhtml = ('html', '-//W3C//DTD XHTML 1.0 Transitional//EN', 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd') class _MeldElementInterface: parent = None attrib = None text = None tail = None structure = None # overrides to reduce MRU lookups def __init__(self, tag, attrib): self.tag = tag self.attrib = attrib self._children = [] def __repr__(self): return "<MeldElement %s at %x>" % (self.tag, id(self)) def __len__(self): return len(self._children) def __getitem__(self, index): return self._children[index] def __getslice__(self, start, stop): return self._children[start:stop] def getchildren(self): return self._children def find(self, path): return ElementPath.find(self, path) def findtext(self, path, default=None): return ElementPath.findtext(self, path, default) def findall(self, path): return ElementPath.findall(self, path) def clear(self): self.attrib.clear() self._children = [] self.text = self.tail = None def get(self, key, default=None): return self.attrib.get(key, default) def set(self, key, value): self.attrib[key] = value def keys(self): return list(self.attrib.keys()) def items(self): return list(self.attrib.items()) def getiterator(self, *ignored_args, **ignored_kw): # we ignore any tag= passed in to us, originally because it was too # painful to support in the old C extension, now for b/w compat return helper.getiterator(self) # overrides to support parent pointers and factories def __setitem__(self, index, element): if isinstance(index, slice): for e in element: e.parent = self else: element.parent = self self._children[index] = element # TODO: Can __setslice__ be removed now?
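The container overrides above exist to keep each child's `parent` pointer in sync with the `_children` list. A minimal stand-in shows the pattern; the `Node` class below is illustrative only and is not part of supervisor.templating.

```python
# Minimal stand-in for the parent-pointer bookkeeping performed by
# _MeldElementInterface's container overrides. "Node" is hypothetical,
# for illustration only.
class Node:
    def __init__(self, tag):
        self.tag = tag
        self.parent = None
        self._children = []

    def append(self, element):
        # as in the override above: store the child, then point it back at us
        self._children.append(element)
        element.parent = self

    def __setitem__(self, index, element):
        # a replacement child must be adopted too
        element.parent = self
        self._children[index] = element

root = Node('root')
child = Node('child')
root.append(child)
replacement = Node('replacement')
root[0] = replacement
print(child.parent is root, replacement.parent is root)  # True True
```

Keeping the back-pointer current on every mutation is what lets methods like `deparent()` and `parentindex()` work without walking the whole tree.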
def __setslice__(self, start, stop, elements): for element in elements: element.parent = self self._children[start:stop] = list(elements) def append(self, element): self._children.append(element) element.parent = self def insert(self, index, element): self._children.insert(index, element) element.parent = self def __delitem__(self, index): if isinstance(index, slice): for ob in self._children[index]: ob.parent = None else: self._children[index].parent = None ob = self._children[index] del self._children[index] # TODO: Can __delslice__ be removed now? def __delslice__(self, start, stop): obs = self._children[start:stop] for ob in obs: ob.parent = None del self._children[start:stop] def remove(self, element): self._children.remove(element) element.parent = None def makeelement(self, tag, attrib): return self.__class__(tag, attrib) # meld-specific def __mod__(self, other): """ Fill in the text values of meld nodes in tree; only support dictionarylike operand (sequence operand doesn't seem to make sense here)""" return self.fillmelds(**other) def fillmelds(self, **kw): """ Fill in the text values of meld nodes in tree using the keyword arguments passed in; use the keyword keys as meld ids and the keyword values as text that should fill in the node text on which that meld id is found. Return a list of keys from **kw that were not able to be found anywhere in the tree. Never raises an exception. """ unfilled = [] for k in kw: node = self.findmeld(k) if node is None: unfilled.append(k) else: node.text = kw[k] return unfilled def fillmeldhtmlform(self, **kw): """ Perform magic to 'fill in' HTML form element values from a dictionary. Unlike 'fillmelds', the type of element being 'filled' is taken into consideration. 
Perform a 'findmeld' on each key in the dictionary and use the value that corresponds to the key to perform mutation of the tree, changing data in what is presumed to be one or more HTML form elements according to the following rules:: If the found element is an 'input group' (its meld id ends with the string ':inputgroup'), set the 'checked' attribute on the appropriate subelement which has a 'value' attribute which matches the dictionary value. Also remove the 'checked' attribute from every other 'input' subelement of the input group. If no input subelement's value matches the dictionary value, this key is treated as 'unfilled'. If the found element is an 'input type=text', 'input type=hidden', 'input type=submit', 'input type=password', 'input type=reset' or 'input type=file' element, replace its 'value' attribute with the value. If the found element is an 'input type=checkbox' or 'input type='radio' element, set its 'checked' attribute to true if the dict value is true, or remove its 'checked' attribute if the dict value is false. If the found element is a 'select' element and the value exists in the 'value=' attribute of one of its 'option' subelements, change that option's 'selected' attribute to true and mark all other option elements as unselected. If the select element does not contain an option with a value that matches the dictionary value, do nothing and return this key as unfilled. If the found element is a 'textarea' or any other kind of element, replace its text with the value. If the element corresponding to the key is not found, do nothing and treat the key as 'unfilled'. Return a list of 'unfilled' keys, representing meld ids present in the dictionary but not present in the element tree or meld ids which could not be filled due to the lack of any matching subelements for 'select' nodes or 'inputgroup' nodes. 
""" unfilled = [] for k in kw: node = self.findmeld(k) if node is None: unfilled.append(k) continue val = kw[k] if k.endswith(':inputgroup'): # an input group is a list of input type="checkbox" or # input type="radio" elements that can be treated as a group # because they attempt to specify the same value found = [] unfound = [] for child in node.findall('input'): input_type = child.attrib.get('type', '').lower() if input_type not in ('checkbox', 'radio'): continue input_val = child.attrib.get('value', '') if val == input_val: found.append(child) else: unfound.append(child) if not found: unfilled.append(k) else: for option in found: option.attrib['checked'] = 'checked' for option in unfound: try: del option.attrib['checked'] except KeyError: pass else: tag = node.tag.lower() if tag == 'input': input_type = node.attrib.get('type', 'text').lower() # fill in value attrib for most input types if input_type in ('hidden', 'submit', 'text', 'password', 'reset', 'file'): node.attrib['value'] = val # unless it's a checkbox or radio attribute, then we # fill in its checked attribute elif input_type in ('checkbox', 'radio'): if val: node.attrib['checked'] = 'checked' else: try: del node.attrib['checked'] except KeyError: pass else: unfilled.append(k) elif tag == 'select': # if the node is a select node, we want to select # the value matching val, otherwise it's unfilled found = [] unfound = [] for option in node.findall('option'): if option.attrib.get('value', '') == val: found.append(option) else: unfound.append(option) if not found: unfilled.append(k) else: for option in found: option.attrib['selected'] = 'selected' for option in unfound: try: del option.attrib['selected'] except KeyError: pass else: node.text = kw[k] return unfilled def findmeld(self, name, default=None): """ Find a node in the tree that has a 'meld id' corresponding to 'name'. Iterate over all subnodes recursively looking for a node which matches. 
If we can't find the node, return None.""" # this could be faster if we indexed all the meld nodes in the # tree; we just walk the whole hierarchy now. result = helper.findmeld(self, name) if result is None: return default return result def findmelds(self): """ Find all nodes that have a meld id attribute and return the found nodes in a list""" return self.findwithattrib(_MELD_ID) def findwithattrib(self, attrib, value=None): """ Find all nodes that have an attribute named 'attrib'. If 'value' is not None, omit nodes on which the attribute value does not compare equally to 'value'. Return the found nodes in a list.""" iterator = helper.getiterator(self) elements = [] for element in iterator: attribval = element.attrib.get(attrib) if attribval is not None: if value is None: elements.append(element) else: if value == attribval: elements.append(element) return elements # ZPT-alike methods def repeat(self, iterable, childname=None): """repeats an element with values from an iterable. If 'childname' is not None, repeat the element on which the repeat is called, otherwise find the child element with a 'meld:id' matching 'childname' and repeat that. The element is repeated within its parent element (nodes that are created as a result of a repeat share the same parent). This method returns an iterable; the value of each iteration is a two-sequence in the form (newelement, data). 'newelement' is a clone of the template element (including clones of its children) which has already been seated in its parent element in the template. 'data' is a value from the passed in iterable. 
Changing 'newelement' (typically based on values from 'data') mutates the element 'in place'.""" if childname: element = self.findmeld(childname) else: element = self parent = element.parent # creating a list is faster than yielding a generator (py 2.4) L = [] first = True for thing in iterable: if first is True: clone = element else: clone = helper.bfclone(element, parent) L.append((clone, thing)) first = False return L def replace(self, text, structure=False): """ Replace this element with a Replace node in our parent with the text 'text' and return the index of our position in our parent. If we have no parent, do nothing, and return None. Pass the 'structure' flag to the replace node so it can do the right thing at render time. """ parent = self.parent i = self.deparent() if i is not None: # reduce function call overhead by not calling self.insert node = Replace(text, structure) parent._children.insert(i, node) node.parent = parent return i def content(self, text, structure=False): """ Delete this node's children and append a Replace node that contains text. Always return None. Pass the 'structure' flag to the replace node so it can do the right thing at render time.""" helper.content(self, text, structure) def attributes(self, **kw): """ Set attributes on this node. 
""" for k, v in kw.items(): # prevent this from getting to the parser if possible if not isinstance(k, StringTypes): raise ValueError('do not set non-stringtype as key: %s' % k) if not isinstance(v, StringTypes): raise ValueError('do not set non-stringtype as val: %s' % v) self.attrib[k] = kw[k] # output methods def write_xmlstring(self, encoding=None, doctype=None, fragment=False, declaration=True, pipeline=False): data = [] write = data.append if not fragment: if declaration: _write_declaration(write, encoding) if doctype: _write_doctype(write, doctype) _write_xml(write, self, encoding, {}, pipeline) return _BLANK.join(data) def write_xml(self, file, encoding=None, doctype=None, fragment=False, declaration=True, pipeline=False): """ Write XML to 'file' (which can be a filename or filelike object) encoding - encoding string (if None, 'utf-8' encoding is assumed) Must be a recognizable Python encoding type. doctype - 3-tuple indicating name, pubid, system of doctype. The default is to prevent a doctype from being emitted. fragment - True if a 'fragment' should be emitted for this node (no declaration, no doctype). This causes both the 'declaration' and 'doctype' parameters to become ignored if provided. declaration - emit an xml declaration header (including an encoding if it's not None). The default is to emit the doctype. 
pipeline - preserve 'meld' namespace identifiers in output for use in pipelining """ if not hasattr(file, "write"): file = open(file, "wb") data = self.write_xmlstring(encoding, doctype, fragment, declaration, pipeline) file.write(data) def write_htmlstring(self, encoding=None, doctype=doctype.html, fragment=False): data = [] write = data.append if encoding is None: encoding = 'utf8' if not fragment: if doctype: _write_doctype(write, doctype) _write_html(write, self, encoding, {}) joined = _BLANK.join(data) return joined def write_html(self, file, encoding=None, doctype=doctype.html, fragment=False): """ Write HTML to 'file' (which can be a filename or filelike object) encoding - encoding string (if None, 'utf-8' encoding is assumed). Unlike XML output, this is not used in a declaration, but it is used to do actual character encoding during output. Must be a recognizable Python encoding type. doctype - 3-tuple indicating name, pubid, system of doctype. The default is the value of doctype.html (HTML 4.0 'loose') fragment - True if a "fragment" should be omitted (no doctype). This overrides any provided "doctype" parameter if provided. Namespace'd elements and attributes have their namespaces removed during output when writing HTML, so pipelining cannot be performed. HTML is not valid XML, so an XML declaration header is never emitted. 
""" if not hasattr(file, "write"): file = open(file, "wb") page = self.write_htmlstring(encoding, doctype, fragment) file.write(page) def write_xhtmlstring(self, encoding=None, doctype=doctype.xhtml, fragment=False, declaration=False, pipeline=False): data = [] write = data.append if not fragment: if declaration: _write_declaration(write, encoding) if doctype: _write_doctype(write, doctype) _write_xml(write, self, encoding, {}, pipeline, xhtml=True) return _BLANK.join(data) def write_xhtml(self, file, encoding=None, doctype=doctype.xhtml, fragment=False, declaration=False, pipeline=False): """ Write XHTML to 'file' (which can be a filename or filelike object) encoding - encoding string (if None, 'utf-8' encoding is assumed) Must be a recognizable Python encoding type. doctype - 3-tuple indicating name, pubid, system of doctype. The default is the value of doctype.xhtml (XHTML 'loose'). fragment - True if a 'fragment' should be emitted for this node (no declaration, no doctype). This causes both the 'declaration' and 'doctype' parameters to be ignored. declaration - emit an xml declaration header (including an encoding string if 'encoding' is not None) pipeline - preserve 'meld' namespace identifiers in output for use in pipelining """ if not hasattr(file, "write"): file = open(file, "wb") page = self.write_xhtmlstring(encoding, doctype, fragment, declaration, pipeline) file.write(page) def clone(self, parent=None): """ Create a clone of an element. If parent is not None, append the element to the parent. Recurse as necessary to create a deep clone of the element. """ return helper.bfclone(self, parent) def deparent(self): """ Remove ourselves from our parent node (de-parent) and return the index of the parent which was deleted. 
""" i = self.parentindex() if i is not None: del self.parent[i] return i def parentindex(self): """ Return the parent node index in which we live """ parent = self.parent if parent is not None: return parent._children.index(self) def shortrepr(self, encoding=None): data = [] _write_html(data.append, self, encoding, {}, maxdepth=2) return _BLANK.join(data) def diffmeld(self, other): """ Compute the meld element differences from this node (the source) to 'other' (the target). Return a dictionary of sequences in the form {'unreduced: {'added':[], 'removed':[], 'moved':[]}, 'reduced': {'added':[], 'removed':[], 'moved':[]},} """ srcelements = self.findmelds() tgtelements = other.findmelds() srcids = [ x.meldid() for x in srcelements ] tgtids = [ x.meldid() for x in tgtelements ] removed = [] for srcelement in srcelements: if srcelement.meldid() not in tgtids: removed.append(srcelement) added = [] for tgtelement in tgtelements: if tgtelement.meldid() not in srcids: added.append(tgtelement) moved = [] for srcelement in srcelements: srcid = srcelement.meldid() if srcid in tgtids: i = tgtids.index(srcid) tgtelement = tgtelements[i] if not sharedlineage(srcelement, tgtelement): moved.append(tgtelement) unreduced = {'added':added, 'removed':removed, 'moved':moved} moved_reduced = diffreduce(moved) added_reduced = diffreduce(added) removed_reduced = diffreduce(removed) reduced = {'moved':moved_reduced, 'added':added_reduced, 'removed':removed_reduced} return {'unreduced':unreduced, 'reduced':reduced} def meldid(self): return self.attrib.get(_MELD_ID) def lineage(self): L = [] parent = self while parent is not None: L.append(parent) parent = parent.parent return L class MeldTreeBuilder(TreeBuilder): def __init__(self): TreeBuilder.__init__(self, element_factory=_MeldElementInterface) self.meldids = {} def start(self, tag, attrs): elem = TreeBuilder.start(self, tag, attrs) for key, value in attrs.items(): if key == _MELD_ID: if value in self.meldids: raise ValueError('Repeated 
meld id "%s" in source' % value) self.meldids[value] = 1 break return elem def comment(self, data): self.start(Comment, {}) self.data(data) self.end(Comment) def doctype(self, name, pubid, system): pass class HTMLXMLParser(HTMLParser): """ A mostly-cut-and-paste of ElementTree's HTMLTreeBuilder that does special meld3 things (like preserve comments and munge meld ids). Subclassing is not possible due to private attributes. :-(""" def __init__(self, builder=None, encoding=None): self.__stack = [] if builder is None: builder = MeldTreeBuilder() self.builder = builder self.encoding = encoding or "iso-8859-1" try: # ``convert_charrefs`` was added in Python 3.4. Set it to avoid # "DeprecationWarning: The value of convert_charrefs will become # True in 3.5. You are encouraged to set the value explicitly." HTMLParser.__init__(self, convert_charrefs=False) except TypeError: HTMLParser.__init__(self) self.meldids = {} def close(self): HTMLParser.close(self) self.meldids = {} return self.builder.close() def handle_starttag(self, tag, attrs): if tag == "meta": # look for encoding directives http_equiv = content = None for k, v in attrs: if k == "http-equiv": http_equiv = v.lower() elif k == "content": content = v if http_equiv == "content-type" and content: # use email to parse the http header msg = email.message_from_string( "%s: %s\n\n" % (http_equiv, content) ) encoding = msg.get_param("charset") if encoding: self.encoding = encoding if tag in AUTOCLOSE: if self.__stack and self.__stack[-1] == tag: self.handle_endtag(tag) self.__stack.append(tag) attrib = {} if attrs: for k, v in attrs: if k == _MELD_SHORT_ID: k = _MELD_ID if self.meldids.get(v): raise ValueError('Repeated meld id "%s" in source' % v) self.meldids[v] = 1 else: k = k.lower() attrib[k] = v self.builder.start(tag, attrib) if tag in IGNOREEND: self.__stack.pop() self.builder.end(tag) def handle_endtag(self, tag): if tag in IGNOREEND: return lasttag = self.__stack.pop() if tag != lasttag and lasttag in 
AUTOCLOSE: self.handle_endtag(lasttag) self.builder.end(tag) def handle_charref(self, char): if char[:1] == "x": char = int(char[1:], 16) else: char = int(char) self.builder.data(unichr(char)) def handle_entityref(self, name): entity = htmlentitydefs.entitydefs.get(name) if entity: if len(entity) == 1: entity = ord(entity) else: entity = int(entity[2:-1]) self.builder.data(unichr(entity)) else: self.unknown_entityref(name) def handle_data(self, data): if isinstance(data, bytes): data = as_string(data, self.encoding) self.builder.data(data) def unknown_entityref(self, name): pass # ignore by default; override if necessary def handle_comment(self, data): self.builder.start(Comment, {}) self.builder.data(data) self.builder.end(Comment) def do_parse(source, parser): root = et_parse(source, parser=parser).getroot() iterator = root.getiterator() for p in iterator: for c in p: c.parent = p return root def parse_xml(source): """ Parse source (a filelike object) into an element tree. If html is true, use a parser that can resolve somewhat ambiguous HTML into XHTML. Otherwise use a 'normal' parser only.""" builder = MeldTreeBuilder() parser = XMLParser(target=builder) return do_parse(source, parser) def parse_html(source, encoding=None): builder = MeldTreeBuilder() parser = HTMLXMLParser(builder, encoding) return do_parse(source, parser) def parse_xmlstring(text): source = StringIO(text) return parse_xml(source) def parse_htmlstring(text, encoding=None): source = StringIO(text) return parse_html(source, encoding) attrib_needs_escaping = re.compile(r'[&"<]').search cdata_needs_escaping = re.compile(r'[&<]').search def _both_case(mapping): # Add equivalent upper-case keys to mapping. 
lc_keys = list(mapping.keys()) for k in lc_keys: mapping[k.upper()] = mapping[k] _HTMLTAGS_UNBALANCED = {'area':1, 'base':1, 'basefont':1, 'br':1, 'col':1, 'frame':1, 'hr':1, 'img':1, 'input':1, 'isindex':1, 'link':1, 'meta':1, 'param':1} _both_case(_HTMLTAGS_UNBALANCED) _HTMLTAGS_NOESCAPE = {'script':1, 'style':1} _both_case(_HTMLTAGS_NOESCAPE) _HTMLATTRS_BOOLEAN = {'selected':1, 'checked':1, 'compact':1, 'declare':1, 'defer':1, 'disabled':1, 'ismap':1, 'multiple':1, 'nohref':1, 'noresize':1, 'noshade':1, 'nowrap':1} _both_case(_HTMLATTRS_BOOLEAN) def _write_html(write, node, encoding, namespaces, depth=-1, maxdepth=None): """ Walk 'node', calling 'write' with bytes(?). """ if encoding is None: encoding = 'utf-8' tag = node.tag tail = node.tail text = node.text to_write = _BLANK if tag is Replace: if not node.structure: if cdata_needs_escaping(text): text = _escape_cdata(text) write(encode(text, encoding)) elif tag is Comment: if cdata_needs_escaping(text): text = _escape_cdata(text) write(encode('<!-- ' + text + ' -->', encoding)) elif tag is ProcessingInstruction: if cdata_needs_escaping(text): text = _escape_cdata(text) write(encode('<?' + text + '?>', encoding)) else: xmlns_items = [] # new namespaces in this scope try: if tag[:1] == "{": if tag[:_XHTML_PREFIX_LEN] == _XHTML_PREFIX: tag = tag[_XHTML_PREFIX_LEN:] else: tag, xmlns = fixtag(tag, namespaces) if xmlns: xmlns_items.append(xmlns) except TypeError: _raise_serialization_error(tag) to_write += _OPEN_TAG_START + encode(tag, encoding) attrib = node.attrib if attrib is not None: if len(attrib) > 1: attrib_keys = list(attrib.keys()) attrib_keys.sort() else: attrib_keys = attrib for k in attrib_keys: try: if k[:1] == "{": continue except TypeError: _raise_serialization_error(k) if k in _HTMLATTRS_BOOLEAN: to_write += _SPACE + encode(k, encoding) else: v = attrib[k] to_write += _encode_attrib(k, v, encoding) for k, v in xmlns_items: to_write += _encode_attrib(k, v, encoding) to_write += _OPEN_TAG_END if text is not None and 
text: if tag in _HTMLTAGS_NOESCAPE: to_write += encode(text, encoding) elif cdata_needs_escaping(text): to_write += _escape_cdata(text) else: to_write += encode(text,encoding) write(to_write) for child in node._children: if maxdepth is not None: depth = depth + 1 if depth < maxdepth: _write_html(write, child, encoding, namespaces, depth, maxdepth) elif depth == maxdepth and text: write(_OMITTED_TEXT) else: _write_html(write, child, encoding, namespaces, depth, maxdepth) if text or node._children or tag not in _HTMLTAGS_UNBALANCED: write(_CLOSE_TAG_START + encode(tag, encoding) + _CLOSE_TAG_END) if tail: if cdata_needs_escaping(tail): write(_escape_cdata(tail)) else: write(encode(tail,encoding)) def _write_xml(write, node, encoding, namespaces, pipeline, xhtml=False): """ Write XML to a file """ if encoding is None: encoding = 'utf-8' tag = node.tag if tag is Comment: write(_COMMENT_START + _escape_cdata(node.text, encoding) + _COMMENT_END) elif tag is ProcessingInstruction: write(_PI_START + _escape_cdata(node.text, encoding) + _PI_END) elif tag is Replace: if node.structure: # this may produce invalid xml write(encode(node.text, encoding)) else: write(_escape_cdata(node.text, encoding)) else: if xhtml: if tag[:_XHTML_PREFIX_LEN] == _XHTML_PREFIX: tag = tag[_XHTML_PREFIX_LEN:] if node.attrib: items = list(node.attrib.items()) else: items = [] # must always be sortable. 
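Both serializers defer to the module's entity-aware escaping helpers, which requote a bare `&` only when it does not already begin an entity reference, and always escape `<`. A minimal standalone sketch of that rule (the function name here is illustrative, not meld3's API):

```python
import re

# Sketch of meld3-style cdata escaping: "&" is escaped only when it is not
# already the start of an entity like "&amp;" or "&#65;"; "<" is always escaped.
NONENTITY_RE = re.compile(r'&(?!([#\w]*;))')  # "&" not followed by "name;"

def escape_cdata(text):
    text = NONENTITY_RE.sub('&amp;', text)   # requote bare ampersands only
    return text.replace('<', '&lt;')         # never emit a raw "<"

print(escape_cdata('a < b & c &amp; d'))  # a &lt; b &amp; c &amp; d
```

The negative lookahead is what lets already-escaped input pass through unchanged, so escaping is idempotent.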
xmlns_items = [] # new namespaces in this scope try: if tag[:1] == "{": tag, xmlns = fixtag(tag, namespaces) if xmlns: xmlns_items.append(xmlns) except TypeError: _raise_serialization_error(tag) write(_OPEN_TAG_START + encode(tag, encoding)) if items or xmlns_items: items.sort() # lexical order for k, v in items: try: if k[:1] == "{": if not pipeline: if k == _MELD_ID: continue k, xmlns = fixtag(k, namespaces) if xmlns: xmlns_items.append(xmlns) if not pipeline: # special-case for HTML input if k == 'xmlns:meld': continue except TypeError: _raise_serialization_error(k) write(_encode_attrib(k, v, encoding)) for k, v in xmlns_items: write(_encode_attrib(k, v, encoding)) if node.text or node._children: write(_OPEN_TAG_END) if node.text: write(_escape_cdata(node.text, encoding)) for n in node._children: _write_xml(write, n, encoding, namespaces, pipeline, xhtml) write(_CLOSE_TAG_START + encode(tag, encoding) + _CLOSE_TAG_END) else: write(_SELF_CLOSE) for k, v in xmlns_items: del namespaces[v] if node.tail: write(_escape_cdata(node.tail, encoding)) def _encode_attrib(k, v, encoding): return _BLANK.join((_SPACE, encode(k, encoding), _EQUAL, _QUOTE, _escape_attrib(v, encoding), _QUOTE, )) # overrides to elementtree to increase speed and get entity quoting correct. # negative lookahead assertion _NONENTITY_RE = re.compile(as_bytes(r'&(?!([#\w]*;))', encoding='latin1')) def _escape_cdata(text, encoding=None): # Return escaped character data as bytes. try: if encoding: try: encoded = encode(text, encoding) except UnicodeError: return _encode_entity(text) else: encoded = as_bytes(text, encoding='latin1') encoded = _NONENTITY_RE.sub(_AMPER_ESCAPED, encoded) encoded = encoded.replace(_LT, _LT_ESCAPED) return encoded except (TypeError, AttributeError): _raise_serialization_error(text) def _escape_attrib(text, encoding): # Return escaped attribute value as bytes. 
try: if encoding: try: encoded = encode(text, encoding) except UnicodeError: return _encode_entity(text) else: encoded = as_bytes(text, encoding='latin1') # don't requote properly-quoted entities encoded = _NONENTITY_RE.sub(_AMPER_ESCAPED, encoded) encoded = encoded.replace(_LT, _LT_ESCAPED) encoded = encoded.replace(_QUOTE, _QUOTE_ESCAPED) return encoded except (TypeError, AttributeError): _raise_serialization_error(text) # utility functions def _write_declaration(write, encoding): # Write as bytes. if not encoding: write(_XML_PROLOG_BEGIN + _XML_PROLOG_END) else: write(_XML_PROLOG_BEGIN + _SPACE + _ENCODING + _EQUAL + _QUOTE + as_bytes(encoding, encoding='latin1') + _QUOTE + _XML_PROLOG_END) def _write_doctype(write, doctype): # Write as bytes. try: name, pubid, system = doctype except (ValueError, TypeError): raise ValueError("doctype must be supplied as a 3-tuple in the form " "(name, pubid, system) e.g. '%s'" % doctype.xhtml) write(_DOCTYPE_BEGIN + _SPACE + as_bytes(name, encoding='latin1') + _SPACE + _PUBLIC + _SPACE + _QUOTE + as_bytes(pubid, encoding='latin1') + _QUOTE + _SPACE + _QUOTE + as_bytes(system, encoding='latin1') + _QUOTE + _DOCTYPE_END) _XML_DECL_RE = re.compile(r'<\?xml .*?\?>') _BEGIN_TAG_RE = re.compile(r'<[^/?!]?\w+') def insert_doctype(data, doctype=doctype.xhtml): # jam an html doctype declaration into 'data' if it # doesn't already contain a doctype declaration match = _XML_DECL_RE.search(data) dt_string = '<!DOCTYPE %s PUBLIC "%s" "%s">' % doctype if match is not None: start, end = match.span(0) before = data[:start] tag = data[start:end] after = data[end:] return before + tag + dt_string + after else: return dt_string + data def insert_meld_ns_decl(data): match = _BEGIN_TAG_RE.search(data) if match is not None: start, end = match.span(0) before = data[:start] tag = data[start:end] + ' xmlns:meld="%s"' % _MELD_NS_URL after = data[end:] data = before + tag + after return data def prefeed(data, doctype=doctype.xhtml): if data.find('<!DOCTYPE') == -1: data = insert_doctype(data, doctype) if data.find('xmlns:meld') == -1: data = insert_meld_ns_decl(data) return data _escape_map = { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", } _namespace_map = 
{ # "well-known" namespace prefixes "http://www.w3.org/XML/1998/namespace": "xml", "http://www.w3.org/1999/xhtml": "html", "http://www.w3.org/1999/02/22-rdf-syntax-ns#": "rdf", "http://schemas.xmlsoap.org/wsdl/": "wsdl", } def _encode(s, encoding): try: return s.encode(encoding) except AttributeError: return s def _raise_serialization_error(text): raise TypeError( "cannot serialize %r (type %s)" % (text, type(text).__name__) ) _pattern = None def _encode_entity(text): # map reserved and non-ascii characters to numerical entities global _pattern if _pattern is None: _ptxt = r'[&<>\"' + _NON_ASCII_MIN + '-' + _NON_ASCII_MAX + ']+' #_pattern = re.compile(eval(r'u"[&<>\"\u0080-\uffff]+"')) _pattern = re.compile(_ptxt) def _escape_entities(m): out = [] append = out.append for char in m.group(): text = _escape_map.get(char) if text is None: text = "&#%d;" % ord(char) append(text) return ''.join(out) try: return _encode(_pattern.sub(_escape_entities, text), "ascii") except TypeError: _raise_serialization_error(text) def fixtag(tag, namespaces): # given a decorated tag (of the form {uri}tag), return prefixed # tag and namespace declaration, if any if isinstance(tag, QName): tag = tag.text namespace_uri, tag = tag[1:].split("}", 1) prefix = namespaces.get(namespace_uri) if prefix is None: prefix = _namespace_map.get(namespace_uri) if prefix is None: prefix = "ns%d" % len(namespaces) namespaces[namespace_uri] = prefix if prefix == "xml": xmlns = None else: xmlns = ("xmlns:%s" % prefix, namespace_uri) else: xmlns = None return "%s:%s" % (prefix, tag), xmlns #----------------------------------------------------------------------------- # End fork from Python 2.6.8 stdlib #-----------------------------------------------------------------------------
supervisor-4.2.5/supervisor/tests/__init__.py:
# this is a package
supervisor-4.2.5/supervisor/tests/base.py:
_NOW = 1151365354 _TIMEFORMAT = '%b %d %I:%M %p' import functools from supervisor.compat import Fault from supervisor.compat import as_bytes # mock is imported here for py2/3 compat. we only declare mock as a dependency # via tests_require so it is not available on all supervisor installs. the # modules imported in supervisor.compat must always be available. try: # pragma: no cover from unittest.mock import Mock, patch, sentinel except ImportError: # pragma: no cover from mock import Mock, patch, sentinel try: # pragma: no cover import unittest.mock as mock except ImportError: # pragma: no cover import mock class DummyOptions: loglevel = 20 minfds = 5 chdir_exception = None fork_exception = None execv_exception = None kill_exception = None make_pipes_exception = None remove_exception = None write_exception = None def __init__(self): self.identifier = 'supervisor' self.childlogdir = '/tmp' self.uid = 999 self.logger = self.getLogger() self.backofflimit = 10 self.logfile = '/tmp/logfile' self.nocleanup = False self.strip_ansi = False self.pidhistory = {} self.process_group_configs = [] self.nodaemon = False self.socket_map = {} self.mood = 1 self.mustreopen = False self.realizeargs = None self.fds_cleaned_up = False self.rlimit_set = False self.setuid_called = False self.httpservers_opened = False self.signals_set = False self.daemonized = False self.make_logger_messages = None self.autochildlogdir_cleared = False self.cleaned_up = False self.pidfile_written = False self.directory = None self.waitpid_return = None, 
None self.kills = {} self._signal = None self.parent_pipes_closed = None self.child_pipes_closed = None self.forkpid = 0 self.pgrp_set = None self.duped = {} self.written = {} self.fds_closed = [] self._exitcode = None self.execve_called = False self.execv_args = None self.setuid_msg = None self.privsdropped = None self.logs_reopened = False self.write_accept = None self.tempfile_name = '/foo/bar' self.removed = [] self.existing = [] self.openreturn = None self.readfd_result = '' self.parse_criticals = [] self.parse_warnings = [] self.parse_infos = [] self.serverurl = 'http://localhost:9001' self.changed_directory = False self.umaskset = None self.poller = DummyPoller(self) self.silent = False def getLogger(self, *args, **kw): logger = DummyLogger() logger.handlers = [DummyLogger()] logger.args = args, kw return logger def realize(self, args, **kw): self.realizeargs = args self.realizekw = kw def process_config(self, do_usage=True): pass def cleanup_fds(self): self.fds_cleaned_up = True def set_rlimits_or_exit(self): self.rlimits_set = True self.parse_infos.append('rlimits_set') def set_uid_or_exit(self): self.setuid_called = True self.parse_criticals.append('setuid_called') def openhttpservers(self, supervisord): self.httpservers_opened = True def daemonize(self): self.daemonized = True def setsignals(self): self.signals_set = True def get_signal(self): return self._signal def get_socket_map(self): return self.socket_map def make_logger(self): pass def clear_autochildlogdir(self): self.autochildlogdir_cleared = True def get_autochildlog_name(self, *ignored): return self.tempfile_name def cleanup(self): self.cleaned_up = True def write_pidfile(self): self.pidfile_written = True def waitpid(self): return self.waitpid_return def kill(self, pid, sig): if self.kill_exception is not None: raise self.kill_exception self.kills[pid] = sig def stat(self, filename): import os return os.stat(filename) def get_path(self): return ["/bin", "/usr/bin", "/usr/local/bin"] def 
get_pid(self): import os return os.getpid() def check_execv_args(self, filename, argv, st): if filename == '/bad/filename': from supervisor.options import NotFound raise NotFound('bad filename') def make_pipes(self, stderr=True): if self.make_pipes_exception is not None: raise self.make_pipes_exception pipes = {'child_stdin': 3, 'stdin': 4, 'stdout': 5, 'child_stdout': 6} if stderr: pipes['stderr'], pipes['child_stderr'] = (7, 8) else: pipes['stderr'], pipes['child_stderr'] = None, None return pipes def write(self, fd, chars): if self.write_exception is not None: raise self.write_exception if self.write_accept: chars = chars[self.write_accept] data = self.written.setdefault(fd, '') data += chars self.written[fd] = data return len(chars) def fork(self): if self.fork_exception is not None: raise self.fork_exception return self.forkpid def close_fd(self, fd): self.fds_closed.append(fd) def close_parent_pipes(self, pipes): self.parent_pipes_closed = pipes def close_child_pipes(self, pipes): self.child_pipes_closed = pipes def setpgrp(self): self.pgrp_set = True def dup2(self, frm, to): self.duped[frm] = to def _exit(self, code): self._exitcode = code def execve(self, filename, argv, environment): self.execve_called = True if self.execv_exception is not None: raise self.execv_exception self.execv_args = (filename, argv) self.execv_environment = environment def drop_privileges(self, uid): if self.setuid_msg: return self.setuid_msg self.privsdropped = uid def readfd(self, fd): return self.readfd_result def reopenlogs(self): self.logs_reopened = True def mktempfile(self, prefix, suffix, dir): return self.tempfile_name def remove(self, path): if self.remove_exception is not None: raise self.remove_exception self.removed.append(path) def exists(self, path): if path in self.existing: return True return False def open(self, name, mode='r'): if self.openreturn: return self.openreturn return open(name, mode) def chdir(self, dir): if self.chdir_exception is not None: raise 
self.chdir_exception self.changed_directory = True def setumask(self, mask): self.umaskset = mask class DummyLogger: level = None def __init__(self): self.reopened = False self.removed = False self.closed = False self.data = [] def info(self, msg, **kw): if kw: msg = msg % kw self.data.append(msg) warn = debug = critical = trace = error = blather = info def log(self, level, msg, **kw): if kw: msg = msg % kw self.data.append(msg) def addHandler(self, handler): handler.close() def reopen(self): self.reopened = True def close(self): self.closed = True def remove(self): self.removed = True def flush(self): self.flushed = True def getvalue(self): return ''.join(self.data) class DummySupervisor: def __init__(self, options=None, state=None, process_groups=None): if options is None: self.options = DummyOptions() else: self.options = options if state is None: from supervisor.supervisord import SupervisorStates self.options.mood = SupervisorStates.RUNNING else: self.options.mood = state if process_groups is None: self.process_groups = {} else: self.process_groups = process_groups def get_state(self): return self.options.mood class DummySocket: bind_called = False bind_addr = None listen_called = False listen_backlog = None close_called = False def __init__(self, fd): self.fd = fd def fileno(self): return self.fd def bind(self, addr): self.bind_called = True self.bind_addr = addr def listen(self, backlog): self.listen_called = True self.listen_backlog = backlog def close(self): self.close_called = True def __str__(self): return 'dummy socket' class DummySocketConfig: def __init__(self, fd, backlog=128): self.fd = fd self.backlog = backlog self.url = 'unix:///sock' def addr(self): return 'dummy addr' def __eq__(self, other): return self.fd == other.fd def __ne__(self, other): return not self.__eq__(other) def get_backlog(self): return self.backlog def create_and_bind(self): return DummySocket(self.fd) class DummySocketManager: def __init__(self, config, **kwargs): self._config 
= config def config(self): return self._config def get_socket(self): return DummySocket(self._config.fd) @functools.total_ordering class DummyProcess(object): write_exception = None # Initial state; overridden by instance variables pid = 0 # Subprocess pid; 0 when not running laststart = 0 # Last time the subprocess was started; 0 if never laststop = 0 # Last time the subprocess was stopped; 0 if never delay = 0 # If nonzero, delay starting or killing until this time administrative_stop = False # true if the process stopped by an admin system_stop = False # true if the process has been stopped by the system killing = False # flag determining whether we are trying to kill this proc backoff = 0 # backoff counter (to backofflimit) waitstatus = None exitstatus = None pipes = None rpipes = None dispatchers = None stdout_logged = '' stderr_logged = '' spawnerr = None stdout_buffer = '' # buffer of characters from child stdout output to log stderr_buffer = '' # buffer of characters from child stderr output to log stdin_buffer = '' # buffer of characters to send to child process' stdin listener_state = None group = None sent_signal = None def __init__(self, config, state=None): self.config = config self.logsremoved = False self.stop_called = False self.stop_report_called = True self.backoff_secs = None self.spawned = False if state is None: from supervisor.process import ProcessStates state = ProcessStates.RUNNING self.state = state self.error_at_clear = False self.killed_with = None self.drained = False self.stdout_buffer = as_bytes('') self.stderr_buffer = as_bytes('') self.stdout_logged = as_bytes('') self.stderr_logged = as_bytes('') self.stdin_buffer = as_bytes('') self.pipes = {} self.rpipes = {} self.dispatchers = {} self.finished = None self.logs_reopened = False self.execv_arg_exception = None self.input_fd_drained = None self.output_fd_drained = None self.transitioned = False def reopenlogs(self): self.logs_reopened = True def removelogs(self): if 
self.error_at_clear: raise IOError('whatever') self.logsremoved = True def get_state(self): return self.state def stop(self): self.stop_called = True self.killing = False from supervisor.process import ProcessStates self.state = ProcessStates.STOPPED def stop_report(self): self.stop_report_called = True def kill(self, signal): self.killed_with = signal def signal(self, signal): self.sent_signal = signal def spawn(self): self.spawned = True from supervisor.process import ProcessStates self.state = ProcessStates.RUNNING def drain(self): self.drained = True def readable_fds(self): return [] def record_output(self): self.stdout_logged += self.stdout_buffer self.stdout_buffer = '' self.stderr_logged += self.stderr_buffer self.stderr_buffer = '' def finish(self, pid, sts): self.finished = pid, sts def give_up(self): from supervisor.process import ProcessStates self.state = ProcessStates.FATAL def get_execv_args(self): if self.execv_arg_exception: raise self.execv_arg_exception('whatever') import shlex commandargs = shlex.split(self.config.command) program = commandargs[0] return program, commandargs def drain_output_fd(self, fd): self.output_fd_drained = fd def drain_input_fd(self, fd): self.input_fd_drained = fd def write(self, chars): if self.write_exception is not None: raise self.write_exception self.stdin_buffer += chars def transition(self): self.transitioned = True def __eq__(self, other): return self.config.priority == other.config.priority def __lt__(self, other): return self.config.priority < other.config.priority class DummyPConfig: def __init__(self, options, name, command, directory=None, umask=None, priority=999, autostart=True, autorestart=True, startsecs=10, startretries=999, uid=None, stdout_logfile=None, stdout_capture_maxbytes=0, stdout_events_enabled=False, stdout_logfile_backups=0, stdout_logfile_maxbytes=0, stdout_syslog=False, stderr_logfile=None, stderr_capture_maxbytes=0, stderr_events_enabled=False, stderr_logfile_backups=0, 
stderr_logfile_maxbytes=0, stderr_syslog=False, redirect_stderr=False, stopsignal=None, stopwaitsecs=10, stopasgroup=False, killasgroup=False, exitcodes=(0,), environment=None, serverurl=None): self.options = options self.name = name self.command = command self.priority = priority self.autostart = autostart self.autorestart = autorestart self.startsecs = startsecs self.startretries = startretries self.uid = uid self.stdout_logfile = stdout_logfile self.stdout_capture_maxbytes = stdout_capture_maxbytes self.stdout_events_enabled = stdout_events_enabled self.stdout_logfile_backups = stdout_logfile_backups self.stdout_logfile_maxbytes = stdout_logfile_maxbytes self.stdout_syslog = stdout_syslog self.stderr_logfile = stderr_logfile self.stderr_capture_maxbytes = stderr_capture_maxbytes self.stderr_events_enabled = stderr_events_enabled self.stderr_logfile_backups = stderr_logfile_backups self.stderr_logfile_maxbytes = stderr_logfile_maxbytes self.stderr_syslog = stderr_syslog self.redirect_stderr = redirect_stderr if stopsignal is None: import signal stopsignal = signal.SIGTERM self.stopsignal = stopsignal self.stopwaitsecs = stopwaitsecs self.stopasgroup = stopasgroup self.killasgroup = killasgroup self.exitcodes = exitcodes self.environment = environment self.directory = directory self.umask = umask self.autochildlogs_created = False self.serverurl = serverurl def get_path(self): return ["/bin", "/usr/bin", "/usr/local/bin"] def create_autochildlogs(self): self.autochildlogs_created = True def make_process(self, group=None): process = DummyProcess(self) process.group = group return process def make_dispatchers(self, proc): use_stderr = not self.redirect_stderr pipes = self.options.make_pipes(use_stderr) stdout_fd,stderr_fd,stdin_fd = (pipes['stdout'],pipes['stderr'], pipes['stdin']) dispatchers = {} if stdout_fd is not None: dispatchers[stdout_fd] = DummyDispatcher(readable=True) if stderr_fd is not None: dispatchers[stderr_fd] = DummyDispatcher(readable=True) if 
stdin_fd is not None: dispatchers[stdin_fd] = DummyDispatcher(writable=True) return dispatchers, pipes def makeExecutable(file, substitutions=None): import os import sys import tempfile if substitutions is None: substitutions = {} data = open(file).read() last = os.path.split(file)[1] substitutions['PYTHON'] = sys.executable for key in substitutions.keys(): data = data.replace('<<%s>>' % key.upper(), substitutions[key]) with tempfile.NamedTemporaryFile(prefix=last, delete=False) as f: tmpnam = f.name f.write(data) os.chmod(tmpnam, 0o755) return tmpnam def makeSpew(unkillable=False): import os here = os.path.dirname(__file__) if not unkillable: return makeExecutable(os.path.join(here, 'fixtures/spew.py')) return makeExecutable(os.path.join(here, 'fixtures/unkillable_spew.py')) class DummyMedusaServerLogger: def __init__(self): self.logged = [] def log(self, category, msg): self.logged.append((category, msg)) class DummyMedusaServer: def __init__(self): self.logger = DummyMedusaServerLogger() class DummyMedusaChannel: def __init__(self): self.server = DummyMedusaServer() self.producer = None def push_with_producer(self, producer): self.producer = producer def close_when_done(self): pass def set_terminator(self, terminator): pass class DummyRequest(object): command = 'GET' _error = None _done = False version = '1.0' def __init__(self, path, params, query, fragment, env=None): self.args = path, params, query, fragment self.producers = [] self.headers = {} self.header = [] self.outgoing = [] self.channel = DummyMedusaChannel() if env is None: self.env = {} else: self.env = env def split_uri(self): return self.args def error(self, code): self._error = code def push(self, producer): self.producers.append(producer) def __setitem__(self, header, value): self.headers[header] = value def __getitem__(self, header): return self.headers[header] def __delitem__(self, header): del self.headers[header] def has_key(self, header): return header in self.headers def __contains__(self, 
item): return item in self.headers def done(self): self._done = True def build_reply_header(self): return '' def log(self, *arg, **kw): pass def cgi_environment(self): return self.env def get_server_url(self): return 'http://example.com' class DummyRPCInterfaceFactory: def __init__(self, supervisord, **config): self.supervisord = supervisord self.config = config class DummyRPCServer: def __init__(self): self.supervisor = DummySupervisorRPCNamespace() self.system = DummySystemRPCNamespace() class DummySystemRPCNamespace: pass class DummySupervisorRPCNamespace: _restartable = True _restarted = False _shutdown = False _readlog_error = False from supervisor.process import ProcessStates all_process_info = [ { 'name':'foo', 'group':'foo', 'pid':11, 'state':ProcessStates.RUNNING, 'statename':'RUNNING', 'start':_NOW - 100, 'stop':0, 'spawnerr':'', 'now':_NOW, 'description':'foo description', }, { 'name':'bar', 'group':'bar', 'pid':12, 'state':ProcessStates.FATAL, 'statename':'FATAL', 'start':_NOW - 100, 'stop':_NOW - 50, 'spawnerr':'screwed', 'now':_NOW, 'description':'bar description', }, { 'name':'baz_01', 'group':'baz', 'pid':13, 'state':ProcessStates.STOPPED, 'statename':'STOPPED', 'start':_NOW - 100, 'stop':_NOW - 25, 'spawnerr':'', 'now':_NOW, 'description':'baz description', }, ] def getAPIVersion(self): return '3.0' getVersion = getAPIVersion # deprecated def getPID(self): return 42 def _read_log(self, channel, name, offset, length): from supervisor import xmlrpc if name == 'BAD_NAME': raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME') elif name == 'FAILED': raise Fault(xmlrpc.Faults.FAILED, 'FAILED') elif name == 'NO_FILE': raise Fault(xmlrpc.Faults.NO_FILE, 'NO_FILE') a = (channel + ' line\n') * 10 return a[offset:] def readProcessStdoutLog(self, name, offset, length): return self._read_log('stdout', name, offset, length) readProcessLog = readProcessStdoutLog def readProcessStderrLog(self, name, offset, length): return self._read_log('stderr', name, offset, length) 
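`DummySupervisorRPCNamespace._read_log` above fakes the log-reading API with ten fixed lines sliced from `offset` (the `length` argument is ignored by the test double). A minimal standalone sketch of that behavior, with an illustrative name rather than the class's own method:

```python
def dummy_read_log(channel, offset, length):
    # Mirrors the test double: ten fixed "<channel> line" entries,
    # returned from `offset` onward; `length` is ignored here.
    data = (channel + ' line\n') * 10
    return data[offset:]

print(dummy_read_log('stdout', 108, 0))  # stdout line
```

Each line is 12 bytes long ("stdout line" plus the newline), so an offset of 108 into the 120-byte buffer yields exactly the last line.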
    def getAllProcessInfo(self):
        return self.all_process_info

    def getProcessInfo(self, name):
        from supervisor import xmlrpc
        for i in self.all_process_info:
            if i['name']==name:
                info=i
                return info
        if name == 'BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME')
        if name == 'FAILED':
            raise Fault(xmlrpc.Faults.FAILED, 'FAILED')
        if name == 'NO_FILE':
            raise Fault(xmlrpc.Faults.NO_FILE, 'NO_FILE')

    def startProcess(self, name):
        from supervisor import xmlrpc
        if name == 'BAD_NAME:BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME:BAD_NAME')
        if name == 'BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME')
        if name == 'NO_FILE':
            raise Fault(xmlrpc.Faults.NO_FILE, 'NO_FILE')
        if name == 'NOT_EXECUTABLE':
            raise Fault(xmlrpc.Faults.NOT_EXECUTABLE, 'NOT_EXECUTABLE')
        if name == 'ALREADY_STARTED':
            raise Fault(xmlrpc.Faults.ALREADY_STARTED, 'ALREADY_STARTED')
        if name == 'SPAWN_ERROR':
            raise Fault(xmlrpc.Faults.SPAWN_ERROR, 'SPAWN_ERROR')
        if name == 'ABNORMAL_TERMINATION':
            raise Fault(xmlrpc.Faults.ABNORMAL_TERMINATION,
                        'ABNORMAL_TERMINATION')
        return True

    def startProcessGroup(self, name):
        from supervisor import xmlrpc
        from supervisor.compat import Fault
        if name == 'BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME')
        return [
            {'name':'foo_00', 'group':'foo',
             'status': xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'foo_01', 'group':'foo',
             'status':xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            ]

    def startAllProcesses(self):
        from supervisor import xmlrpc
        return [
            {'name':'foo', 'group':'foo',
             'status': xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'foo2', 'group':'foo2',
             'status':xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'failed', 'group':'failed_group',
             'status':xmlrpc.Faults.SPAWN_ERROR,
             'description':'SPAWN_ERROR'}
            ]

    def stopProcessGroup(self, name):
        from supervisor import xmlrpc
        from supervisor.compat import Fault
        if name == 'BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME')
        return [
            {'name':'foo_00', 'group':'foo',
             'status': xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'foo_01', 'group':'foo',
             'status':xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            ]

    def stopProcess(self, name):
        from supervisor import xmlrpc
        if name == 'BAD_NAME:BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME:BAD_NAME')
        if name == 'BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME')
        if name == 'NOT_RUNNING':
            raise Fault(xmlrpc.Faults.NOT_RUNNING, 'NOT_RUNNING')
        if name == 'FAILED':
            raise Fault(xmlrpc.Faults.FAILED, 'FAILED')
        return True

    def stopAllProcesses(self):
        from supervisor import xmlrpc
        return [
            {'name':'foo','group':'foo',
             'status': xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'foo2', 'group':'foo2',
             'status':xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'failed', 'group':'failed_group',
             'status':xmlrpc.Faults.BAD_NAME,
             'description':'FAILED'}
            ]

    def restart(self):
        if self._restartable:
            self._restarted = True
            return
        from supervisor import xmlrpc
        raise Fault(xmlrpc.Faults.SHUTDOWN_STATE, '')

    def shutdown(self):
        if self._restartable:
            self._shutdown = True
            return
        from supervisor import xmlrpc
        raise Fault(xmlrpc.Faults.SHUTDOWN_STATE, '')

    def reloadConfig(self):
        return [[['added'], ['changed'], ['removed']]]

    def addProcessGroup(self, name):
        from supervisor import xmlrpc
        if name == 'ALREADY_ADDED':
            raise Fault(xmlrpc.Faults.ALREADY_ADDED, '')
        if name == 'BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, '')
        if name == 'FAILED':
            raise Fault(xmlrpc.Faults.FAILED, '')
        if name == 'SHUTDOWN_STATE':
            raise Fault(xmlrpc.Faults.SHUTDOWN_STATE, '')
        if hasattr(self, 'processes'):
            self.processes.append(name)
        else:
            self.processes = [name]

    def removeProcessGroup(self, name):
        from supervisor import xmlrpc
        if name == 'STILL_RUNNING':
            raise Fault(xmlrpc.Faults.STILL_RUNNING, '')
        if name == 'BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, '')
        if name == 'FAILED':
            raise Fault(xmlrpc.Faults.FAILED, '')
        self.processes.remove(name)

    def clearProcessStdoutLog(self, name):
        from supervisor import xmlrpc
        if name == 'BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME')
        return True
    clearProcessLog = clearProcessStdoutLog
    clearProcessStderrLog = clearProcessStdoutLog
    clearProcessLogs = clearProcessStdoutLog

    def clearAllProcessLogs(self):
        from supervisor import xmlrpc
        return [
            {'name':'foo', 'group':'foo',
             'status':xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'foo2', 'group':'foo2',
             'status':xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'failed', 'group':'failed_group',
             'status':xmlrpc.Faults.FAILED,
             'description':'FAILED'}
            ]

    def raiseError(self):
        raise ValueError('error')

    def getXmlRpcUnmarshallable(self):
        return {'stdout_logfile': None} # None is unmarshallable

    def getSupervisorVersion(self):
        return '3000'

    def readLog(self, whence, offset):
        if self._readlog_error:
            raise Fault(self._readlog_error, '')
        return 'mainlogdata'

    def signalProcessGroup(self, name, signal):
        from supervisor import xmlrpc
        if name == 'BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME')
        return [
            {'name':'foo_00', 'group':'foo',
             'status': xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'foo_01', 'group':'foo',
             'status':xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            ]

    def signalProcess(self, name, signal):
        from supervisor import xmlrpc
        if signal == 'BAD_SIGNAL':
            raise Fault(xmlrpc.Faults.BAD_SIGNAL, 'BAD_SIGNAL')
        if name == 'BAD_NAME:BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME:BAD_NAME')
        if name == 'BAD_NAME':
            raise Fault(xmlrpc.Faults.BAD_NAME, 'BAD_NAME')
        if name == 'NOT_RUNNING':
            raise Fault(xmlrpc.Faults.NOT_RUNNING, 'NOT_RUNNING')
        if name == 'FAILED':
            raise Fault(xmlrpc.Faults.FAILED, 'FAILED')
        return True

    def signalAllProcesses(self, signal):
        from supervisor import xmlrpc
        return [
            {'name':'foo', 'group':'foo',
             'status': xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'foo2', 'group':'foo2',
             'status':xmlrpc.Faults.SUCCESS,
             'description': 'OK'},
            {'name':'failed', 'group':'failed_group',
             'status':xmlrpc.Faults.BAD_NAME,
             'description':'FAILED'}
            ]

class DummyPGroupConfig:
    def __init__(self, options, name='whatever', priority=999, pconfigs=None):
        self.options = options
        self.name = name
        self.priority = priority
        if pconfigs is None:
            pconfigs = []
        self.process_configs = pconfigs
        self.after_setuid_called = False
        self.pool_events = []
        self.buffer_size = 10

    def after_setuid(self):
        self.after_setuid_called = True

    def make_group(self):
        return DummyProcessGroup(self)

    def __repr__(self):
        return '<%s instance at %s named %s>' % (self.__class__, id(self),
                                                 self.name)

class DummyFCGIGroupConfig(DummyPGroupConfig):
    def __init__(self, options, name='whatever', priority=999, pconfigs=None,
                 socket_config=DummySocketConfig(1)):
        DummyPGroupConfig.__init__(self, options, name, priority, pconfigs)
        self.socket_config = socket_config

@functools.total_ordering
class DummyProcessGroup(object):
    def __init__(self, config):
        self.config = config
        self.transitioned = False
        self.all_stopped = False
        self.dispatchers = {}
        self.unstopped_processes = []
        self.before_remove_called = False

    def transition(self):
        self.transitioned = True

    def before_remove(self):
        self.before_remove_called = True

    def stop_all(self):
        self.all_stopped = True

    def get_unstopped_processes(self):
        return self.unstopped_processes

    def get_dispatchers(self):
        return self.dispatchers

    def __lt__(self, other):
        return self.config.priority < other.config.priority

    def __eq__(self, other):
        return self.config.priority == other.config.priority

    def reopenlogs(self):
        self.logs_reopened = True

class DummyFCGIProcessGroup(DummyProcessGroup):
    def __init__(self, config):
        DummyProcessGroup.__init__(self, config)
        self.socket_manager = DummySocketManager(config.socket_config)

class PopulatedDummySupervisor(DummySupervisor):
    def __init__(self, options, group_name, *pconfigs):
        DummySupervisor.__init__(self, options)
        self.process_groups = {}
        processes = {}
        self.group_name = group_name
        gconfig = DummyPGroupConfig(options, group_name, pconfigs=pconfigs)
        pgroup = DummyProcessGroup(gconfig)
        self.process_groups[group_name] = pgroup
        for pconfig in pconfigs:
            process = DummyProcess(pconfig)
            processes[pconfig.name] = process
        pgroup.processes = processes

    def set_procattr(self, process_name, attr_name, val, group_name=None):
        if group_name is None:
            group_name = self.group_name
        process = self.process_groups[group_name].processes[process_name]
        setattr(process, attr_name, val)

    def reap(self):
        self.reaped = True

class DummyDispatcher:
    flush_exception = None
    write_event_handled = False
    read_event_handled = False
    error_handled = False
    logs_reopened = False
    logs_removed = False
    closed = False
    flushed = False

    def __init__(self, readable=False, writable=False, error=False):
        self._readable = readable
        self._writable = writable
        self._error = error
        self.input_buffer = ''
        if readable:
            # only readable dispatchers should have these methods
            def reopenlogs():
                self.logs_reopened = True
            self.reopenlogs = reopenlogs
            def removelogs():
                self.logs_removed = True
            self.removelogs = removelogs

    def readable(self):
        return self._readable

    def writable(self):
        return self._writable

    def handle_write_event(self):
        if self._error:
            raise self._error
        self.write_event_handled = True

    def handle_read_event(self):
        if self._error:
            raise self._error
        self.read_event_handled = True

    def handle_error(self):
        self.error_handled = True

    def close(self):
        self.closed = True

    def flush(self):
        if self.flush_exception:
            raise self.flush_exception
        self.flushed = True

class DummyStream:
    def __init__(self, error=None, fileno=20):
        self.error = error
        self.closed = False
        self.flushed = False
        self.written = b''
        self._fileno = fileno

    def close(self):
        if self.error:
            raise self.error
        self.closed = True

    def flush(self):
        if self.error:
            raise self.error
        self.flushed = True

    def write(self, msg):
        if self.error:
            error = self.error
            self.error = None
            raise error
        self.written += as_bytes(msg)

    def seek(self, num, whence=0):
        pass

    def tell(self):
        return len(self.written)

    def fileno(self):
        return self._fileno

class DummyEvent:
    def __init__(self, serial='abc'):
        if serial is not None:
            self.serial = serial

    def payload(self):
        return 'dummy event'

class DummyPoller:
    def __init__(self, options):
        self.result = [], []
        self.closed = False

    def register_readable(self, fd):
        pass

    def register_writable(self, fd):
        pass

    def poll(self, timeout):
        return self.result

    def close(self):
        self.closed = True

def dummy_handler(event, result):
    pass

def rejecting_handler(event, result):
    from supervisor.dispatchers import RejectEvent
    raise RejectEvent(result)

def exception_handler(event, result):
    raise ValueError(result)

def lstrip(s):
    strings = [x.strip() for x in s.split('\n')]
    return '\n'.join(strings)

supervisor-4.2.5/supervisor/tests/fixtures/donothing.conf:

[supervisord]
logfile=/tmp/donothing.log ; (main log file;default $CWD/supervisord.log)
pidfile=/tmp/donothing.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=true              ; (start in foreground if true;default false)

supervisor-4.2.5/supervisor/tests/fixtures/example/included.conf:

[supervisord]
childlogdir = %(here)s
supervisor-4.2.5/supervisor/tests/fixtures/include.conf:

[include]
files = ./example/included.conf

[supervisord]
logfile = %(here)s

supervisor-4.2.5/supervisor/tests/fixtures/issue-1054.conf:

[supervisord]
loglevel = debug
logfile=/tmp/issue-1054.log
pidfile=/tmp/issue-1054.pid
nodaemon = true

[unix_http_server]
file=/tmp/issue-1054.sock ; the path to the socket file

[supervisorctl]
serverurl=unix:///tmp/issue-1054.sock ; use a unix:// URL for a unix socket

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[program:cat]
command = /bin/cat
startsecs = 0

supervisor-4.2.5/supervisor/tests/fixtures/issue-1170a.conf:

[supervisord]
nodaemon=true                ; start in foreground if true; default false
loglevel=debug               ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-1170a.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-1170a.pid ; supervisord pidfile; default supervisord.pid
environment=FOO="set from [supervisord] section"

[program:echo]
command=bash -c "echo '%(ENV_FOO)s'"
startsecs=0
startretries=0
autorestart=false

supervisor-4.2.5/supervisor/tests/fixtures/issue-1170b.conf:

[supervisord]
nodaemon=true                ; start in foreground if true; default false
loglevel=debug               ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-1170b.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-1170b.pid ; supervisord pidfile;
; default supervisord.pid
environment=FOO="set from [supervisord] section"

[program:echo]
command=bash -c "echo '%(ENV_FOO)s'"
environment=FOO="set from [program] section"
startsecs=0
startretries=0
autorestart=false

supervisor-4.2.5/supervisor/tests/fixtures/issue-1170c.conf:

[supervisord]
nodaemon=true                ; start in foreground if true; default false
loglevel=debug               ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-1170c.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-1170c.pid ; supervisord pidfile; default supervisord.pid
environment=FOO="set from [supervisord] section"

[eventlistener:echo]
command=bash -c "echo '%(ENV_FOO)s' >&2"
environment=FOO="set from [eventlistener] section"
events=PROCESS_STATE_FATAL
startsecs=0
startretries=0
autorestart=false

supervisor-4.2.5/supervisor/tests/fixtures/issue-1224.conf:

[supervisord]
nodaemon = true
pidfile = /tmp/issue-1224.pid
nodaemon = true
logfile = /dev/stdout
logfile_maxbytes = 0

[program:cat]
command = /bin/cat
startsecs = 0

supervisor-4.2.5/supervisor/tests/fixtures/issue-1231a.conf:

[supervisord]
loglevel=info                ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-1231a.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-1231a.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true                ; start in foreground if true; default false

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-1231a.sock ; the path to the socket file

[supervisorctl]
serverurl=unix:///tmp/issue-1231a.sock ; use a unix:// URL for a unix socket

[program:hello]
command=python %(here)s/test_1231.py

supervisor-4.2.5/supervisor/tests/fixtures/issue-1231b.conf:

[supervisord]
loglevel=info                ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-1231b.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-1231b.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true                ; start in foreground if true; default false

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-1231b.sock ; the path to the socket file

[supervisorctl]
serverurl=unix:///tmp/issue-1231b.sock ; use a unix:// URL for a unix socket

[program:hello]
command=python %(here)s/test_1231.py

supervisor-4.2.5/supervisor/tests/fixtures/issue-1231c.conf:

[supervisord]
loglevel=info                ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-1231c.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-1231c.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true                ; start in foreground if true; default false

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-1231c.sock ; the path to the socket file

[supervisorctl]
serverurl=unix:///tmp/issue-1231c.sock ; use a unix:// URL for a unix socket

[program:hello]
command=python %(here)s/test_1231.py
supervisor-4.2.5/supervisor/tests/fixtures/issue-1298.conf:

[supervisord]
nodaemon=true ; start in foreground if true; default false

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-1298.sock ; the path to the socket file

[supervisorctl]
serverurl=unix:///tmp/issue-1298.sock ; use a unix:// URL for a unix socket

[program:spew]
command=python %(here)s/spew.py

supervisor-4.2.5/supervisor/tests/fixtures/issue-1483a.conf:

[supervisord]
loglevel=info                ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-1483a.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-1483a.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true                ; start in foreground if true; default false

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-1483a.sock ; the path to the socket file

supervisor-4.2.5/supervisor/tests/fixtures/issue-1483b.conf:

[supervisord]
loglevel=info                ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-1483b.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-1483b.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true                ; start in foreground if true; default false
identifier=from_config_file

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-1483b.sock ; the path to the socket file
supervisor-4.2.5/supervisor/tests/fixtures/issue-1483c.conf:

[supervisord]
loglevel=info                ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-1483c.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-1483c.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true                ; start in foreground if true; default false
identifier=from_config_file

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-1483c.sock ; the path to the socket file

supervisor-4.2.5/supervisor/tests/fixtures/issue-291a.conf:

[supervisord]
loglevel=debug              ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-291a.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-291a.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true               ; start in foreground if true; default false

[program:print_env]
command=python %(here)s/print_env.py
startsecs=0
autorestart=false

supervisor-4.2.5/supervisor/tests/fixtures/issue-550.conf:

[supervisord]
loglevel=info              ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-550.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-550.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true              ; start in foreground if true; default false
environment=THIS_SHOULD=BE_IN_CHILD_ENV

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-550.sock ; the path to the socket file

[supervisorctl]
serverurl=unix:///tmp/issue-550.sock ; use a unix:// URL for a unix socket

[program:print_env]
command=python %(here)s/print_env.py
startsecs=0
startretries=0
autorestart=false

supervisor-4.2.5/supervisor/tests/fixtures/issue-565.conf:

[supervisord]
loglevel=info              ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-565.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-565.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true              ; start in foreground if true; default false

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-565.sock ; the path to the socket file

[supervisorctl]
serverurl=unix:///tmp/issue-565.sock ; use a unix:// URL for a unix socket

[program:hello]
command=bash %(here)s/hello.sh
stdout_events_enabled=true
startretries=0
autorestart=false

[eventlistener:listener]
command=python %(here)s/listener.py
events=PROCESS_LOG
startretries=0
autorestart=false

supervisor-4.2.5/supervisor/tests/fixtures/issue-638.conf:

[supervisord]
loglevel=debug             ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-638.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-638.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true              ; start in foreground if true; default false

[program:produce-unicode-error]
command=bash -c 'echo -e "\x88"'
startretries=0
autorestart=false
supervisor-4.2.5/supervisor/tests/fixtures/issue-663.conf:

[supervisord]
loglevel=debug
logfile=/tmp/issue-663.log
pidfile=/tmp/issue-663.pid
nodaemon=true

[eventlistener:listener]
command=python %(here)s/listener.py
events=TICK_5
startretries=0
autorestart=false

supervisor-4.2.5/supervisor/tests/fixtures/issue-664.conf:

[supervisord]
loglevel=debug
logfile=/tmp/issue-664.log
pidfile=/tmp/issue-664.pid
nodaemon=true

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-664.sock ; the path to the socket file

[supervisorctl]
serverurl=unix:///tmp/issue-664.sock ; use a unix:// URL for a unix socket

[program:test_öäü]
command = /bin/cat
startretries = 0
autorestart = false

supervisor-4.2.5/supervisor/tests/fixtures/issue-733.conf:

[supervisord]
loglevel=debug             ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-733.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-733.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true              ; start in foreground if true; default false

;
; This command does not exist so the process will enter the FATAL state.
;
[program:nonexistent]
command=%(here)s/nonexistent
startsecs=0
startretries=0
autorestart=false

;
; The one-line eventlistener below will cause supervisord to exit when any process
; enters the FATAL state.  Based on:
; https://github.com/Supervisor/supervisor/issues/733#issuecomment-781254766
;
; Differences from that example:
;  1. $PPID is used instead of a hardcoded PID 1.
;     Child processes are always forked
;     from supervisord, so their PPID is the PID of supervisord.
;  2. "printf" is used instead of "echo".  The result "OK" must not have a newline
;     or else the protocol will be violated and supervisord will log a warning.
;
[eventlistener:fatalexit]
events=PROCESS_STATE_FATAL
command=sh -c 'while true; do printf "READY\n"; read line; kill -15 $PPID; printf "RESULT 2\n"; printf "OK"; done'
startsecs=0
startretries=0

supervisor-4.2.5/supervisor/tests/fixtures/issue-835.conf:

[supervisord]
loglevel = debug
logfile=/tmp/issue-835.log
pidfile=/tmp/issue-835.pid
nodaemon = true

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-835.sock ; the path to the socket file

[program:cat]
command = /bin/cat
startretries = 0
autorestart = false

supervisor-4.2.5/supervisor/tests/fixtures/issue-836.conf:

[supervisord]
loglevel = debug
logfile=/tmp/supervisord.log
pidfile=/tmp/supervisord.pid
nodaemon = true

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-565.sock ; the path to the socket file

[supervisorctl]
serverurl=unix:///tmp/issue-565.sock ; use a unix:// URL for a unix socket

[program:cat]
command = /bin/cat
startretries = 0
autorestart = false

supervisor-4.2.5/supervisor/tests/fixtures/issue-986.conf:

[supervisord]
loglevel=debug             ; log level; default info; others: debug,warn,trace
logfile=/tmp/issue-986.log ; main log file; default $CWD/supervisord.log
pidfile=/tmp/issue-986.pid ; supervisord pidfile; default supervisord.pid
nodaemon=true              ; start in foreground if true; default false

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[unix_http_server]
file=/tmp/issue-986.sock ; the path to the socket file

[supervisorctl]
serverurl=unix:///tmp/issue-986.sock ; use a unix:// URL for a unix socket

[program:echo]
command=bash -c "echo 'dhcrelay -d -q -a %%h:%%p %%P -i Vlan1000 192.168.0.1'"
startsecs=0
autorestart=false

supervisor-4.2.5/supervisor/tests/fixtures/listener.py:

import sys

def write_and_flush(stream, s):
    stream.write(s)
    stream.flush()

def write_stdout(s):
    # only eventlistener protocol messages may be sent to stdout
    sys.stdout.write(s)
    sys.stdout.flush()

def write_stderr(s):
    sys.stderr.write(s)
    sys.stderr.flush()

def main():
    stdin = sys.stdin
    stdout = sys.stdout
    stderr = sys.stderr
    while True:
        # transition from ACKNOWLEDGED to READY
        write_and_flush(stdout, 'READY\n')

        # read header line and print it to stderr
        line = stdin.readline()
        write_and_flush(stderr, line)

        # read event payload and print it to stderr
        headers = dict([ x.split(':') for x in line.split() ])
        data = stdin.read(int(headers['len']))
        write_and_flush(stderr, data)

        # transition from READY to ACKNOWLEDGED
        write_and_flush(stdout, 'RESULT 2\nOK')

if __name__ == '__main__':
    main()

supervisor-4.2.5/supervisor/tests/fixtures/print_env.py:

#!<>
import os

for k, v in os.environ.items():
    print("%s=%s" % (k,v))
supervisor-4.2.5/supervisor/tests/fixtures/spew.py:

#!<>
import sys
import time

counter = 0
while counter < 30000:
    sys.stdout.write("more spewage %d\n" % counter)
    sys.stdout.flush()
    time.sleep(0.01)
    counter += 1

supervisor-4.2.5/supervisor/tests/fixtures/test_1231.py:

# -*- coding: utf-8 -*-
import logging
import random
import sys
import time

def main():
    logging.basicConfig(level=logging.INFO, stream=sys.stdout,
                        format='%(levelname)s [%(asctime)s] %(message)s',
                        datefmt='%m-%d|%H:%M:%S')
    i = 1
    while i < 500:
        delay = random.randint(400, 1200)
        time.sleep(delay / 1000.0)
        logging.info('%d - hash=57d94b…381088', i)
        i += 1

if __name__ == '__main__':
    main()

supervisor-4.2.5/supervisor/tests/fixtures/unkillable_spew.py:

#!<>
import time
import signal

signal.signal(signal.SIGTERM, signal.SIG_IGN)

counter = 0
while 1:
    time.sleep(0.01)
    print("more spewage %s" % counter)
    counter += 1

supervisor-4.2.5/supervisor/tests/test_childutils.py:

from io import BytesIO
import sys
import time
import unittest
from supervisor.compat import StringIO
from supervisor.compat import as_string

class ChildUtilsTests(unittest.TestCase):
    def test_getRPCInterface(self):
        from supervisor.childutils import getRPCInterface
        rpc = getRPCInterface({'SUPERVISOR_SERVER_URL':'http://localhost:9001'})
        # we can't really test this thing; its a magic object
        self.assertTrue(rpc is not None)

    def test_getRPCTransport_no_uname_pass(self):
        from supervisor.childutils import getRPCTransport
        t = getRPCTransport({'SUPERVISOR_SERVER_URL':'http://localhost:9001'})
        self.assertEqual(t.username, '')
        self.assertEqual(t.password, '')
        self.assertEqual(t.serverurl, 'http://localhost:9001')

    def test_getRPCTransport_with_uname_pass(self):
        from supervisor.childutils import getRPCTransport
        env = {'SUPERVISOR_SERVER_URL':'http://localhost:9001',
               'SUPERVISOR_USERNAME':'chrism',
               'SUPERVISOR_PASSWORD':'abc123'}
        t = getRPCTransport(env)
        self.assertEqual(t.username, 'chrism')
        self.assertEqual(t.password, 'abc123')
        self.assertEqual(t.serverurl, 'http://localhost:9001')

    def test_get_headers(self):
        from supervisor.childutils import get_headers
        line = 'a:1 b:2'
        result = get_headers(line)
        self.assertEqual(result, {'a':'1', 'b':'2'})

    def test_eventdata(self):
        from supervisor.childutils import eventdata
        payload = 'a:1 b:2\nthedata\n'
        headers, data = eventdata(payload)
        self.assertEqual(headers, {'a':'1', 'b':'2'})
        self.assertEqual(data, 'thedata\n')

    def test_get_asctime(self):
        from supervisor.childutils import get_asctime
        timestamp = time.mktime((2009, 1, 18, 22, 14, 7, 0, 0, -1))
        result = get_asctime(timestamp)
        self.assertEqual(result, '2009-01-18 22:14:07,000')

class TestProcessCommunicationsProtocol(unittest.TestCase):
    def test_send(self):
        from supervisor.childutils import pcomm
        stdout = BytesIO()
        pcomm.send(b'hello', stdout)
        from supervisor.events import ProcessCommunicationEvent
        begin = ProcessCommunicationEvent.BEGIN_TOKEN
        end = ProcessCommunicationEvent.END_TOKEN
        self.assertEqual(stdout.getvalue(), begin + b'hello' + end)

    def test_stdout(self):
        from supervisor.childutils import pcomm
        old = sys.stdout
        try:
            io = sys.stdout = BytesIO()
            pcomm.stdout(b'hello')
            from supervisor.events import ProcessCommunicationEvent
            begin = ProcessCommunicationEvent.BEGIN_TOKEN
            end = ProcessCommunicationEvent.END_TOKEN
            self.assertEqual(io.getvalue(), begin + b'hello' + end)
        finally:
            sys.stdout = old

    def test_stderr(self):
        from supervisor.childutils import pcomm
        old = sys.stderr
        try:
            io = sys.stderr = BytesIO()
            pcomm.stderr(b'hello')
            from supervisor.events import ProcessCommunicationEvent
            begin = ProcessCommunicationEvent.BEGIN_TOKEN
            end = ProcessCommunicationEvent.END_TOKEN
            self.assertEqual(io.getvalue(), begin + b'hello' + end)
        finally:
            sys.stderr = old

class TestEventListenerProtocol(unittest.TestCase):
    def test_wait(self):
        from supervisor.childutils import listener
        class Dummy:
            def readline(self):
                return 'len:5'
            def read(self, *ignored):
                return 'hello'
        stdin = Dummy()
        stdout = StringIO()
        headers, payload = listener.wait(stdin, stdout)
        self.assertEqual(headers, {'len':'5'})
        self.assertEqual(payload, 'hello')
        self.assertEqual(stdout.getvalue(), 'READY\n')

    def test_token(self):
        from supervisor.childutils import listener
        from supervisor.dispatchers import PEventListenerDispatcher
        token = as_string(PEventListenerDispatcher.READY_FOR_EVENTS_TOKEN)
        stdout = StringIO()
        listener.ready(stdout)
        self.assertEqual(stdout.getvalue(), token)

    def test_ok(self):
        from supervisor.childutils import listener
        from supervisor.dispatchers import PEventListenerDispatcher
        begin = as_string(PEventListenerDispatcher.RESULT_TOKEN_START)
        stdout = StringIO()
        listener.ok(stdout)
        self.assertEqual(stdout.getvalue(), begin + '2\nOK')

    def test_fail(self):
        from supervisor.childutils import listener
        from supervisor.dispatchers import PEventListenerDispatcher
        begin = as_string(PEventListenerDispatcher.RESULT_TOKEN_START)
        stdout = StringIO()
        listener.fail(stdout)
        self.assertEqual(stdout.getvalue(), begin + '4\nFAIL')

    def test_send(self):
        from supervisor.childutils import listener
        from supervisor.dispatchers import PEventListenerDispatcher
        begin = as_string(PEventListenerDispatcher.RESULT_TOKEN_START)
        stdout = StringIO()
        msg = 'the body data ya fool\n'
        listener.send(msg, stdout)
        expected = '%s%s\n%s' % (begin, len(msg), msg)
        self.assertEqual(stdout.getvalue(), expected)

def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
unittest.main(defaultTest='test_suite') ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/tests/test_confecho.py0000644000076500000240000000104514340177153022256 0ustar00mnaberezstaff"""Test suite for supervisor.confecho""" import sys import unittest from supervisor.compat import StringIO from supervisor import confecho class TopLevelFunctionTests(unittest.TestCase): def test_main_writes_data_out_that_looks_like_a_config_file(self): sio = StringIO() confecho.main(out=sio) output = sio.getvalue() self.assertTrue("[supervisord]" in output) def test_suite(): return unittest.findTestCases(sys.modules[__name__]) if __name__ == '__main__': unittest.main(defaultTest='test_suite') ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671840025.0 supervisor-4.2.5/supervisor/tests/test_datatypes.py0000644000076500000240000006674114351440431022500 0ustar00mnaberezstaff"""Test suite for supervisor.datatypes""" import os import signal import socket import tempfile import unittest from supervisor.tests.base import Mock, patch, sentinel from supervisor.compat import maxint from supervisor import datatypes class ProcessOrGroupName(unittest.TestCase): def _callFUT(self, arg): return datatypes.process_or_group_name(arg) def test_strips_surrounding_whitespace(self): name = " foo\t" self.assertEqual(self._callFUT(name), "foo") def test_disallows_inner_spaces_for_eventlistener_protocol(self): name = "foo bar" self.assertRaises(ValueError, self._callFUT, name) def test_disallows_colons_for_eventlistener_protocol(self): name = "foo:bar" self.assertRaises(ValueError, self._callFUT, name) def test_disallows_slashes_for_web_ui_urls(self): name = "foo/bar" self.assertRaises(ValueError, self._callFUT, name) class IntegerTests(unittest.TestCase): def _callFUT(self, arg): return datatypes.integer(arg) def test_converts_numeric(self): self.assertEqual(self._callFUT('1'), 1) def 
test_converts_numeric_overflowing_int(self): self.assertEqual(self._callFUT(str(maxint+1)), maxint+1) def test_raises_for_non_numeric(self): self.assertRaises(ValueError, self._callFUT, 'abc') class BooleanTests(unittest.TestCase): def _callFUT(self, arg): return datatypes.boolean(arg) def test_returns_true_for_truthy_values(self): for s in datatypes.TRUTHY_STRINGS: self.assertEqual(self._callFUT(s), True) def test_returns_true_for_upper_truthy_values(self): for s in map(str.upper, datatypes.TRUTHY_STRINGS): self.assertEqual(self._callFUT(s), True) def test_returns_false_for_falsy_values(self): for s in datatypes.FALSY_STRINGS: self.assertEqual(self._callFUT(s), False) def test_returns_false_for_upper_falsy_values(self): for s in map(str.upper, datatypes.FALSY_STRINGS): self.assertEqual(self._callFUT(s), False) def test_braises_value_error_for_bad_value(self): self.assertRaises(ValueError, self._callFUT, 'not-a-value') class ListOfStringsTests(unittest.TestCase): def _callFUT(self, arg): return datatypes.list_of_strings(arg) def test_returns_empty_list_for_empty_string(self): self.assertEqual(self._callFUT(''), []) def test_returns_list_of_strings_by_comma_split(self): self.assertEqual(self._callFUT('foo,bar'), ['foo', 'bar']) def test_returns_strings_with_whitespace_stripped(self): self.assertEqual(self._callFUT(' foo , bar '), ['foo', 'bar']) def test_raises_value_error_when_comma_split_fails(self): self.assertRaises(ValueError, self._callFUT, 42) class ListOfIntsTests(unittest.TestCase): def _callFUT(self, arg): return datatypes.list_of_ints(arg) def test_returns_empty_list_for_empty_string(self): self.assertEqual(self._callFUT(''), []) def test_returns_list_of_ints_by_comma_split(self): self.assertEqual(self._callFUT('1,42'), [1,42]) def test_returns_ints_even_if_whitespace_in_string(self): self.assertEqual(self._callFUT(' 1 , 42 '), [1,42]) def test_raises_value_error_when_comma_split_fails(self): self.assertRaises(ValueError, self._callFUT, 42) def 
test_raises_value_error_when_one_value_is_bad(self): self.assertRaises(ValueError, self._callFUT, '1, bad, 42') class ListOfExitcodesTests(unittest.TestCase): def _callFUT(self, arg): return datatypes.list_of_exitcodes(arg) def test_returns_list_of_ints_from_csv(self): self.assertEqual(self._callFUT('1,2,3'), [1,2,3]) def test_returns_list_of_ints_from_one(self): self.assertEqual(self._callFUT('1'), [1]) def test_raises_for_invalid_exitcode_values(self): self.assertRaises(ValueError, self._callFUT, 'a,b,c') self.assertRaises(ValueError, self._callFUT, '1024') self.assertRaises(ValueError, self._callFUT, '-1,1') class DictOfKeyValuePairsTests(unittest.TestCase): def _callFUT(self, arg): return datatypes.dict_of_key_value_pairs(arg) def test_returns_empty_dict_for_empty_str(self): actual = self._callFUT('') self.assertEqual({}, actual) def test_returns_dict_from_single_pair_str(self): actual = self._callFUT('foo=bar') expected = {'foo': 'bar'} self.assertEqual(actual, expected) def test_returns_dict_from_multi_pair_str(self): actual = self._callFUT('foo=bar,baz=qux') expected = {'foo': 'bar', 'baz': 'qux'} self.assertEqual(actual, expected) def test_returns_dict_even_if_whitespace(self): actual = self._callFUT(' foo = bar , baz = qux ') expected = {'foo': 'bar', 'baz': 'qux'} self.assertEqual(actual, expected) def test_returns_dict_even_if_newlines(self): actual = self._callFUT('foo\n=\nbar\n,\nbaz\n=\nqux') expected = {'foo': 'bar', 'baz': 'qux'} self.assertEqual(actual, expected) def test_handles_commas_inside_apostrophes(self): actual = self._callFUT("foo='bar,baz',baz='q,ux'") expected = {'foo': 'bar,baz', 'baz': 'q,ux'} self.assertEqual(actual, expected) def test_handles_commas_inside_quotes(self): actual = self._callFUT('foo="bar,baz",baz="q,ux"') expected = {'foo': 'bar,baz', 'baz': 'q,ux'} self.assertEqual(actual, expected) def test_handles_newlines_inside_quotes(self): actual = datatypes.dict_of_key_value_pairs('foo="a\nb\nc"') expected = {'foo': 'a\nb\nc'} 
        self.assertEqual(actual, expected)

    def test_handles_empty_inside_quotes(self):
        actual = datatypes.dict_of_key_value_pairs('foo=""')
        expected = {'foo': ''}
        self.assertEqual(actual, expected)

    def test_handles_empty_inside_quotes_with_second_unquoted_pair(self):
        actual = datatypes.dict_of_key_value_pairs('foo="",bar=a')
        expected = {'foo': '', 'bar': 'a'}
        self.assertEqual(actual, expected)

    def test_handles_unquoted_non_alphanum(self):
        actual = self._callFUT(
            'HOME=/home/auser,FOO=/.foo+(1.2)-_/,'
            'SUPERVISOR_SERVER_URL=http://127.0.0.1:9001')
        expected = {'HOME': '/home/auser',
                    'FOO': '/.foo+(1.2)-_/',
                    'SUPERVISOR_SERVER_URL': 'http://127.0.0.1:9001'}
        self.assertEqual(actual, expected)

    def test_allows_trailing_comma(self):
        actual = self._callFUT('foo=bar,')
        expected = {'foo': 'bar'}
        self.assertEqual(actual, expected)

    def test_raises_value_error_on_too_short(self):
        self.assertRaises(ValueError, self._callFUT, 'foo')
        self.assertRaises(ValueError, self._callFUT, 'foo=')
        self.assertRaises(ValueError, self._callFUT, 'foo=bar,baz')
        self.assertRaises(ValueError, self._callFUT, 'foo=bar,baz=')

    def test_raises_when_comma_is_missing(self):
        kvp = 'KEY1=no-comma KEY2=ends-with-comma,'
        self.assertRaises(ValueError, self._callFUT, kvp)

class LogfileNameTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.logfile_name(arg)

    def test_returns_none_for_none_values(self):
        for thing in datatypes.LOGFILE_NONES:
            actual = self._callFUT(thing)
            self.assertEqual(actual, None)

    def test_returns_none_for_uppered_none_values(self):
        for thing in datatypes.LOGFILE_NONES:
            if hasattr(thing, 'upper'):
                thing = thing.upper()
            actual = self._callFUT(thing)
            self.assertEqual(actual, None)

    def test_returns_automatic_for_auto_values(self):
        for thing in datatypes.LOGFILE_AUTOS:
            actual = self._callFUT(thing)
            self.assertEqual(actual, datatypes.Automatic)

    def test_returns_automatic_for_uppered_auto_values(self):
        for thing in datatypes.LOGFILE_AUTOS:
            if hasattr(thing, 'upper'):
                thing = thing.upper()
            actual = self._callFUT(thing)
            self.assertEqual(actual, datatypes.Automatic)

    def test_returns_syslog_for_syslog_values(self):
        for thing in datatypes.LOGFILE_SYSLOGS:
            actual = self._callFUT(thing)
            self.assertEqual(actual, datatypes.Syslog)

    def test_returns_syslog_for_uppered_syslog_values(self):
        for thing in datatypes.LOGFILE_SYSLOGS:
            if hasattr(thing, 'upper'):
                thing = thing.upper()
            actual = self._callFUT(thing)
            self.assertEqual(actual, datatypes.Syslog)

    def test_returns_existing_dirpath_for_other_values(self):
        func = datatypes.existing_dirpath
        datatypes.existing_dirpath = lambda path: path
        try:
            path = '/path/to/logfile/With/Case/Preserved'
            actual = self._callFUT(path)
            self.assertEqual(actual, path)
        finally:
            datatypes.existing_dirpath = func

class RangeCheckedConversionTests(unittest.TestCase):
    def _getTargetClass(self):
        return datatypes.RangeCheckedConversion

    def _makeOne(self, conversion, min=None, max=None):
        return self._getTargetClass()(conversion, min, max)

    def test_below_lower_bound(self):
        conversion = self._makeOne(lambda *arg: -1, 0)
        self.assertRaises(ValueError, conversion, None)

    def test_above_upper_bound(self):
        conversion = self._makeOne(lambda *arg: 1, 0, 0)
        self.assertRaises(ValueError, conversion, None)

    def test_passes(self):
        conversion = self._makeOne(lambda *arg: 0, 0, 0)
        self.assertEqual(conversion(0), 0)

class NameToGidTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.name_to_gid(arg)

    @patch("grp.getgrnam", Mock(return_value=[0,0,42]))
    def test_gets_gid_from_group_name(self):
        gid = self._callFUT("foo")
        self.assertEqual(gid, 42)

    @patch("grp.getgrgid", Mock(return_value=[0,0,42]))
    def test_gets_gid_from_group_id(self):
        gid = self._callFUT("42")
        self.assertEqual(gid, 42)

    @patch("grp.getgrnam", Mock(side_effect=KeyError("bad group name")))
    def test_raises_for_bad_group_name(self):
        self.assertRaises(ValueError, self._callFUT, "foo")

    @patch("grp.getgrgid", Mock(side_effect=KeyError("bad group id")))
    def test_raises_for_bad_group_id(self):
        self.assertRaises(ValueError, self._callFUT, "42")

class NameToUidTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.name_to_uid(arg)

    @patch("pwd.getpwnam", Mock(return_value=[0,0,42]))
    def test_gets_uid_from_username(self):
        uid = self._callFUT("foo")
        self.assertEqual(uid, 42)

    @patch("pwd.getpwuid", Mock(return_value=[0,0,42]))
    def test_gets_uid_from_user_id(self):
        uid = self._callFUT("42")
        self.assertEqual(uid, 42)

    @patch("pwd.getpwnam", Mock(side_effect=KeyError("bad username")))
    def test_raises_for_bad_username(self):
        self.assertRaises(ValueError, self._callFUT, "foo")

    @patch("pwd.getpwuid", Mock(side_effect=KeyError("bad user id")))
    def test_raises_for_bad_user_id(self):
        self.assertRaises(ValueError, self._callFUT, "42")

class OctalTypeTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.octal_type(arg)

    def test_success(self):
        self.assertEqual(self._callFUT('10'), 8)

    def test_raises_for_non_numeric(self):
        try:
            self._callFUT('bad')
            self.fail()
        except ValueError as e:
            expected = 'bad can not be converted to an octal type'
            self.assertEqual(e.args[0], expected)

    def test_raises_for_unconvertable_numeric(self):
        try:
            self._callFUT('1.2')
            self.fail()
        except ValueError as e:
            expected = '1.2 can not be converted to an octal type'
            self.assertEqual(e.args[0], expected)

class ExistingDirectoryTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.existing_directory(arg)

    def test_dir_exists(self):
        path = os.path.dirname(__file__)
        self.assertEqual(path, self._callFUT(path))

    def test_dir_does_not_exist(self):
        path = os.path.join(os.path.dirname(__file__), 'nonexistent')
        try:
            self._callFUT(path)
            self.fail()
        except ValueError as e:
            expected = "%s is not an existing directory" % path
            self.assertEqual(e.args[0], expected)

    def test_not_a_directory(self):
        path = __file__
        try:
            self._callFUT(path)
            self.fail()
        except ValueError as e:
            expected = "%s is not an existing directory" % path
            self.assertEqual(e.args[0], expected)

    def test_expands_home(self):
        home = os.path.expanduser('~')
        if os.path.exists(home):
            path = self._callFUT('~')
            self.assertEqual(home, path)

class ExistingDirpathTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.existing_dirpath(arg)

    def test_returns_existing_dirpath(self):
        self.assertEqual(self._callFUT(__file__), __file__)

    def test_returns_dirpath_if_relative(self):
        self.assertEqual(self._callFUT('foo'), 'foo')

    def test_raises_if_dir_does_not_exist(self):
        path = os.path.join(os.path.dirname(__file__), 'nonexistent', 'foo')
        try:
            self._callFUT(path)
            self.fail()
        except ValueError as e:
            expected = ('The directory named as part of the path %s '
                        'does not exist' % path)
            self.assertEqual(e.args[0], expected)

    def test_raises_if_exists_but_not_a_dir(self):
        path = os.path.join(os.path.dirname(__file__),
                            os.path.basename(__file__), 'foo')
        try:
            self._callFUT(path)
            self.fail()
        except ValueError as e:
            expected = ('The directory named as part of the path %s '
                        'does not exist' % path)
            self.assertEqual(e.args[0], expected)

    def test_expands_home(self):
        home = os.path.expanduser('~')
        if os.path.exists(home):
            path = self._callFUT('~/foo')
            self.assertEqual(os.path.join(home, 'foo'), path)

class LoggingLevelTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.logging_level(arg)

    def test_returns_level_from_name_case_insensitive(self):
        from supervisor.loggers import LevelsByName
        self.assertEqual(self._callFUT("wArN"), LevelsByName.WARN)

    def test_raises_for_bad_level_name(self):
        self.assertRaises(ValueError, self._callFUT, "foo")

class UrlTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.url(arg)

    def test_accepts_urlparse_recognized_scheme_with_netloc(self):
        good_url = 'http://localhost:9001'
        self.assertEqual(self._callFUT(good_url), good_url)

    def test_rejects_urlparse_recognized_scheme_but_no_netloc(self):
        bad_url = 'http://'
        self.assertRaises(ValueError, self._callFUT, bad_url)

    def test_accepts_unix_scheme_with_path(self):
        good_url = "unix://somepath"
        self.assertEqual(good_url, self._callFUT(good_url))

    def test_rejects_unix_scheme_with_no_slashes_or_path(self):
        bad_url = "unix:"
        self.assertRaises(ValueError, self._callFUT, bad_url)

    def test_rejects_unix_scheme_with_slashes_but_no_path(self):
        bad_url = "unix://"
        self.assertRaises(ValueError, self._callFUT, bad_url)

class InetStreamSocketConfigTests(unittest.TestCase):
    def _getTargetClass(self):
        return datatypes.InetStreamSocketConfig

    def _makeOne(self, *args, **kw):
        return self._getTargetClass()(*args, **kw)

    def test_url(self):
        conf = self._makeOne('127.0.0.1', 8675)
        self.assertEqual(conf.url, 'tcp://127.0.0.1:8675')

    def test___str__(self):
        cfg = self._makeOne('localhost', 65531)
        self.assertEqual(str(cfg), 'tcp://localhost:65531')

    def test_repr(self):
        conf = self._makeOne('127.0.0.1', 8675)
        s = repr(conf)
        self.assertTrue('supervisor.datatypes.InetStreamSocketConfig' in s)
        self.assertTrue(s.endswith('for tcp://127.0.0.1:8675>'), s)

    def test_addr(self):
        conf = self._makeOne('127.0.0.1', 8675)
        addr = conf.addr()
        self.assertEqual(addr, ('127.0.0.1', 8675))

    def test_port_as_string(self):
        conf = self._makeOne('localhost', '5001')
        addr = conf.addr()
        self.assertEqual(addr, ('localhost', 5001))

    def test_create_and_bind(self):
        conf = self._makeOne('127.0.0.1', 8675)
        sock = conf.create_and_bind()
        reuse = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
        self.assertTrue(reuse)
        # verifies that bind was called
        self.assertEqual(conf.addr(), sock.getsockname())
        sock.close()

    def test_same_urls_are_equal(self):
        conf1 = self._makeOne('localhost', 5001)
        conf2 = self._makeOne('localhost', 5001)
        self.assertTrue(conf1 == conf2)
        self.assertFalse(conf1 != conf2)

    def test_diff_urls_are_not_equal(self):
        conf1 = self._makeOne('localhost', 5001)
        conf2 = self._makeOne('localhost', 5002)
        self.assertTrue(conf1 != conf2)
        self.assertFalse(conf1 == conf2)

    def test_diff_objs_are_not_equal(self):
        conf1 = self._makeOne('localhost', 5001)
        conf2 = 'blah'
        self.assertTrue(conf1 != conf2)
        self.assertFalse(conf1 == conf2)

class UnixStreamSocketConfigTests(unittest.TestCase):
    def _getTargetClass(self):
        return datatypes.UnixStreamSocketConfig

    def _makeOne(self, *args, **kw):
        return self._getTargetClass()(*args, **kw)

    def test_url(self):
        conf = self._makeOne('/tmp/foo.sock')
        self.assertEqual(conf.url, 'unix:///tmp/foo.sock')

    def test___str__(self):
        cfg = self._makeOne('foo/bar')
        self.assertEqual(str(cfg), 'unix://foo/bar')

    def test_repr(self):
        conf = self._makeOne('/tmp/foo.sock')
        s = repr(conf)
        self.assertTrue('supervisor.datatypes.UnixStreamSocketConfig' in s)
        self.assertTrue(s.endswith('for unix:///tmp/foo.sock>'), s)

    def test_get_addr(self):
        conf = self._makeOne('/tmp/foo.sock')
        addr = conf.addr()
        self.assertEqual(addr, '/tmp/foo.sock')

    def test_create_and_bind(self):
        (tf_fd, tf_name) = tempfile.mkstemp()
        owner = (sentinel.uid, sentinel.gid)
        mode = sentinel.mode
        conf = self._makeOne(tf_name, owner=owner, mode=mode)
        # Patch the os.chmod and os.chown functions with mock
        # objects so that the test does not depend on
        # any specific system users or permissions
        chown_mock = Mock()
        chmod_mock = Mock()
        @patch('os.chown', chown_mock)
        @patch('os.chmod', chmod_mock)
        def call_create_and_bind(conf):
            return conf.create_and_bind()
        sock = call_create_and_bind(conf)
        self.assertTrue(os.path.exists(tf_name))
        # verifies that bind was called
        self.assertEqual(conf.addr(), sock.getsockname())
        sock.close()
        self.assertTrue(os.path.exists(tf_name))
        os.unlink(tf_name)
        # Verify that os.chown was called with correct args
        self.assertEqual(1, chown_mock.call_count)
        path_arg = chown_mock.call_args[0][0]
        uid_arg = chown_mock.call_args[0][1]
        gid_arg = chown_mock.call_args[0][2]
        self.assertEqual(tf_name, path_arg)
        self.assertEqual(owner[0], uid_arg)
        self.assertEqual(owner[1], gid_arg)
        # Verify that os.chmod was called with correct args
        self.assertEqual(1, chmod_mock.call_count)
        path_arg = chmod_mock.call_args[0][0]
        mode_arg = chmod_mock.call_args[0][1]
        self.assertEqual(tf_name, path_arg)
        self.assertEqual(mode, mode_arg)

    def test_create_and_bind_when_chown_fails(self):
        (tf_fd, tf_name) = tempfile.mkstemp()
        owner = (sentinel.uid, sentinel.gid)
        mode = sentinel.mode
        conf = self._makeOne(tf_name, owner=owner, mode=mode)
        @patch('os.chown', Mock(side_effect=OSError("msg")))
        @patch('os.chmod', Mock())
        def call_create_and_bind(conf):
            return conf.create_and_bind()
        try:
            call_create_and_bind(conf)
            self.fail()
        except ValueError as e:
            expected = "Could not change ownership of socket file: msg"
            self.assertEqual(e.args[0], expected)
        self.assertFalse(os.path.exists(tf_name))

    def test_create_and_bind_when_chmod_fails(self):
        (tf_fd, tf_name) = tempfile.mkstemp()
        owner = (sentinel.uid, sentinel.gid)
        mode = sentinel.mode
        conf = self._makeOne(tf_name, owner=owner, mode=mode)
        @patch('os.chown', Mock())
        @patch('os.chmod', Mock(side_effect=OSError("msg")))
        def call_create_and_bind(conf):
            return conf.create_and_bind()
        try:
            call_create_and_bind(conf)
            self.fail()
        except ValueError as e:
            expected = "Could not change permissions of socket file: msg"
            self.assertEqual(e.args[0], expected)
        self.assertFalse(os.path.exists(tf_name))

    def test_same_paths_are_equal(self):
        conf1 = self._makeOne('/tmp/foo.sock')
        conf2 = self._makeOne('/tmp/foo.sock')
        self.assertTrue(conf1 == conf2)
        self.assertFalse(conf1 != conf2)

    def test_diff_paths_are_not_equal(self):
        conf1 = self._makeOne('/tmp/foo.sock')
        conf2 = self._makeOne('/tmp/bar.sock')
        self.assertTrue(conf1 != conf2)
        self.assertFalse(conf1 == conf2)

    def test_diff_objs_are_not_equal(self):
        conf1 = self._makeOne('/tmp/foo.sock')
        conf2 = 'blah'
        self.assertTrue(conf1 != conf2)
        self.assertFalse(conf1 == conf2)

class InetAddressTests(unittest.TestCase):
    def _callFUT(self, s):
        return datatypes.inet_address(s)

    def test_no_port_number(self):
        self.assertRaises(ValueError, self._callFUT, 'a:')

    def test_bad_port_number(self):
        self.assertRaises(ValueError, self._callFUT, 'a')

    def test_default_host(self):
        host, port = self._callFUT('*:9001')
        self.assertEqual(host, '')
        self.assertEqual(port, 9001)

    def test_hostname_and_port(self):
        host, port = self._callFUT('localhost:9001')
        self.assertEqual(host, 'localhost')
        self.assertEqual(port, 9001)

    def test_ipv4_address_and_port(self):
        host, port = self._callFUT('127.0.0.1:9001')
        self.assertEqual(host, '127.0.0.1')
        self.assertEqual(port, 9001)

    def test_ipv6_address_and_port(self):
        host, port = self._callFUT('2001:db8:ff:55:0:0:0:138:9001')
        self.assertEqual(host, '2001:db8:ff:55:0:0:0:138')
        self.assertEqual(port, 9001)

class SocketAddressTests(unittest.TestCase):
    def _getTargetClass(self):
        return datatypes.SocketAddress

    def _makeOne(self, s):
        return self._getTargetClass()(s)

    def test_unix_socket(self):
        addr = self._makeOne('/foo/bar')
        self.assertEqual(addr.family, socket.AF_UNIX)
        self.assertEqual(addr.address, '/foo/bar')

    def test_inet_socket(self):
        addr = self._makeOne('localhost:8080')
        self.assertEqual(addr.family, socket.AF_INET)
        self.assertEqual(addr.address, ('localhost', 8080))

class ColonSeparatedUserGroupTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.colon_separated_user_group(arg)

    def test_ok_username(self):
        self.assertEqual(self._callFUT('root')[0], 0)

    def test_missinguser_username(self):
        self.assertRaises(ValueError, self._callFUT,
                          'godihopethisuserdoesntexist')

    def test_missinguser_username_and_groupname(self):
        self.assertRaises(ValueError, self._callFUT,
                          'godihopethisuserdoesntexist:foo')

    def test_separated_user_group_returns_both(self):
        name_to_uid = Mock(return_value=12)
        name_to_gid = Mock(return_value=34)
        @patch("supervisor.datatypes.name_to_uid", name_to_uid)
        @patch("supervisor.datatypes.name_to_gid", name_to_gid)
        def colon_separated(value):
            return self._callFUT(value)
        uid, gid = colon_separated("foo:bar")
        name_to_uid.assert_called_with("foo")
        self.assertEqual(12, uid)
        name_to_gid.assert_called_with("bar")
        self.assertEqual(34, gid)

    def test_separated_user_group_returns_user_only(self):
        name_to_uid = Mock(return_value=42)
        @patch("supervisor.datatypes.name_to_uid", name_to_uid)
        def colon_separated(value):
            return self._callFUT(value)
        uid, gid = colon_separated("foo")
        name_to_uid.assert_called_with("foo")
        self.assertEqual(42, uid)
        self.assertEqual(-1, gid)

class SignalNumberTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.signal_number(arg)

    def test_converts_number(self):
        self.assertEqual(self._callFUT(signal.SIGTERM), signal.SIGTERM)

    def test_converts_name(self):
        self.assertEqual(self._callFUT(' term '), signal.SIGTERM)

    def test_converts_signame(self):
        self.assertEqual(self._callFUT('SIGTERM'), signal.SIGTERM)

    def test_raises_for_bad_number(self):
        try:
            self._callFUT('12345678')
            self.fail()
        except ValueError as e:
            expected = "value '12345678' is not a valid signal number"
            self.assertEqual(e.args[0], expected)

    def test_raises_for_bad_name(self):
        try:
            self._callFUT('BADSIG')
            self.fail()
        except ValueError as e:
            expected = "value 'BADSIG' is not a valid signal name"
            self.assertEqual(e.args[0], expected)

class AutoRestartTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.auto_restart(arg)

    def test_converts_truthy(self):
        for s in datatypes.TRUTHY_STRINGS:
            result = self._callFUT(s)
            self.assertEqual(result, datatypes.RestartUnconditionally)

    def test_converts_falsy(self):
        for s in datatypes.FALSY_STRINGS:
            self.assertFalse(self._callFUT(s))

    def test_converts_unexpected(self):
        for s in ('unexpected', 'UNEXPECTED'):
            result = self._callFUT(s)
            self.assertEqual(result, datatypes.RestartWhenExitUnexpected)

    def test_raises_for_bad_value(self):
        try:
            self._callFUT('bad')
            self.fail()
        except ValueError as e:
            self.assertEqual(e.args[0], "invalid 'autorestart' value 'bad'")

class ProfileOptionsTests(unittest.TestCase):
    def _callFUT(self, arg):
        return datatypes.profile_options(arg)

    def test_empty(self):
        sort_options, callers = self._callFUT('')
        self.assertEqual([], sort_options)
        self.assertFalse(callers)

    def test_without_callers(self):
        sort_options, callers = self._callFUT('CUMULATIVE,calls')
        self.assertEqual(['cumulative', 'calls'], sort_options)
        self.assertFalse(callers)

    def test_with_callers(self):
        sort_options, callers = self._callFUT('cumulative, callers')
        self.assertEqual(['cumulative'], sort_options)
        self.assertTrue(callers)

# supervisor-4.2.5/supervisor/tests/test_dispatchers.py

import unittest
import os
import sys
from supervisor.compat import as_bytes
from supervisor.tests.base import DummyOptions
from supervisor.tests.base import DummyProcess
from supervisor.tests.base import DummyPConfig
from supervisor.tests.base import DummyLogger
from supervisor.tests.base import DummyEvent

class PDispatcherTests(unittest.TestCase):
    def setUp(self):
        from supervisor.events import clear
        clear()

    def tearDown(self):
        from supervisor.events import clear
        clear()

    def _getTargetClass(self):
        from supervisor.dispatchers import PDispatcher
        return PDispatcher

    def _makeOne(self, process=None, channel='stdout', fd=0):
        return self._getTargetClass()(process, channel, fd)

    def test_readable(self):
        inst = self._makeOne()
        self.assertRaises(NotImplementedError, inst.readable)

    def test_writable(self):
        inst = self._makeOne()
        self.assertRaises(NotImplementedError, inst.writable)

    def test_flush(self):
        inst = self._makeOne()
        self.assertEqual(inst.flush(), None)

class POutputDispatcherTests(unittest.TestCase):
    def setUp(self):
        from supervisor.events import clear
        clear()

    def tearDown(self):
        from supervisor.events import clear
        clear()

    def _getTargetClass(self):
        from supervisor.dispatchers import POutputDispatcher
        return POutputDispatcher

    def _makeOne(self, process, channel='stdout'):
        from supervisor import events
        events = {'stdout': events.ProcessCommunicationStdoutEvent,
                  'stderr': events.ProcessCommunicationStderrEvent}
        # dispatcher derives its channel from event class
        return self._getTargetClass()(process, events[channel], 0)

    def test_writable(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.writable(), False)

    def test_readable_open(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.closed = False
        self.assertEqual(dispatcher.readable(), True)

    def test_readable_closed(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.closed = True
        self.assertEqual(dispatcher.readable(), False)

    def test_handle_write_event(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertRaises(NotImplementedError, dispatcher.handle_write_event)

    def test_handle_read_event(self):
        options = DummyOptions()
        options.readfd_result = b'abc'
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_capture_maxbytes=100)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.handle_read_event(), None)
        self.assertEqual(dispatcher.output_buffer, b'abc')

    def test_handle_read_event_no_data_closes(self):
        options = DummyOptions()
        options.readfd_result = b''
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_capture_maxbytes=100)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertFalse(dispatcher.closed)
        self.assertEqual(dispatcher.handle_read_event(), None)
        self.assertEqual(dispatcher.output_buffer, b'')
        self.assertTrue(dispatcher.closed)

    def test_handle_error(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        try:
            raise ValueError('foo')
        except:
            dispatcher.handle_error()
        result = options.logger.data[0]
        self.assertTrue(result.startswith(
            'uncaptured python exception, closing channel'), result)

    def test_toggle_capturemode_sends_event(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo',
                              stdout_capture_maxbytes=500)
        process = DummyProcess(config)
        process.pid = 4000
        dispatcher = self._makeOne(process)
        dispatcher.capturemode = True
        dispatcher.capturelog.getvalue = lambda: 'hallooo'
        L = []
        def doit(event):
            L.append(event)
        from supervisor import events
        events.subscribe(events.EventTypes.PROCESS_COMMUNICATION, doit)
        dispatcher.toggle_capturemode()
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.process, process)
        self.assertEqual(event.pid, 4000)
        self.assertEqual(event.data, 'hallooo')

    def test_removelogs(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.removelogs()
        self.assertEqual(dispatcher.normallog.handlers[0].reopened, True)
        self.assertEqual(dispatcher.normallog.handlers[0].removed, True)
        self.assertEqual(dispatcher.childlog.handlers[0].reopened, True)
        self.assertEqual(dispatcher.childlog.handlers[0].removed, True)

    def test_reopenlogs(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.reopenlogs()
        self.assertEqual(dispatcher.childlog.handlers[0].reopened, True)
        self.assertEqual(dispatcher.normallog.handlers[0].reopened, True)

    def test_record_output_log_non_capturemode(self):
        # stdout/stderr goes to the process log and the main log,
        # in non-capturemode, the data length doesn't matter
        options = DummyOptions()
        from supervisor import loggers
        options.loglevel = loggers.LevelsByName.TRAC
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.output_buffer = 'a'
        dispatcher.record_output()
        self.assertEqual(dispatcher.childlog.data, ['a'])
        self.assertEqual(options.logger.data[0],
                         "'process1' stdout output:\na")
        self.assertEqual(dispatcher.output_buffer, b'')

    def test_record_output_emits_stdout_event_when_enabled(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_events_enabled=True)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process, 'stdout')
        dispatcher.output_buffer = b'hello from stdout'
        L = []
        def doit(event):
            L.append(event)
        from supervisor import events
        events.subscribe(events.EventTypes.PROCESS_LOG_STDOUT, doit)
        dispatcher.record_output()
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.process, process)
        self.assertEqual(event.data, b'hello from stdout')

    def test_record_output_does_not_emit_stdout_event_when_disabled(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_events_enabled=False)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process, 'stdout')
        dispatcher.output_buffer = b'hello from stdout'
        L = []
        def doit(event):
            L.append(event)
        from supervisor import events
        events.subscribe(events.EventTypes.PROCESS_LOG_STDOUT, doit)
        dispatcher.record_output()
        self.assertEqual(len(L), 0)

    def test_record_output_emits_stderr_event_when_enabled(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stderr_events_enabled=True)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process, 'stderr')
        dispatcher.output_buffer = b'hello from stderr'
        L = []
        def doit(event):
            L.append(event)
        from supervisor import events
        events.subscribe(events.EventTypes.PROCESS_LOG_STDERR, doit)
        dispatcher.record_output()
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.process, process)
        self.assertEqual(event.data, b'hello from stderr')

    def test_record_output_does_not_emit_stderr_event_when_disabled(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stderr_events_enabled=False)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process, 'stderr')
        dispatcher.output_buffer = b'hello from stderr'
        L = []
        def doit(event):
            L.append(event)
        from supervisor import events
        events.subscribe(events.EventTypes.PROCESS_LOG_STDERR, doit)
        dispatcher.record_output()
        self.assertEqual(len(L), 0)

    def test_record_output_capturemode_string_longer_than_token(self):
        # stdout/stderr goes to the process log and the main log,
        # in capturemode, the length of the data needs to be longer
        # than the capture token to make it out.
        options = DummyOptions()
        from supervisor import loggers
        options.loglevel = loggers.LevelsByName.TRAC
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo',
                              stdout_capture_maxbytes=100)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.output_buffer = b'stdout string longer than a token'
        dispatcher.record_output()
        self.assertEqual(dispatcher.childlog.data,
                         [b'stdout string longer than a token'])
        self.assertEqual(options.logger.data[0],
            "'process1' stdout output:\nstdout string longer than a token")

    def test_record_output_capturemode_string_not_longer_than_token(self):
        # stdout/stderr goes to the process log and the main log,
        # in capturemode, the length of the data needs to be longer
        # than the capture token to make it out.
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo',
                              stdout_capture_maxbytes=100)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.output_buffer = 'a'
        dispatcher.record_output()
        self.assertEqual(dispatcher.childlog.data, [])
        self.assertEqual(dispatcher.output_buffer, 'a')

    def test_stdout_capturemode_single_buffer(self):
        # mike reported that comm events that took place within a single
        # output buffer were broken 8/20/2007
        from supervisor.events import ProcessCommunicationEvent
        from supervisor.events import subscribe
        events = []
        def doit(event):
            events.append(event)
        subscribe(ProcessCommunicationEvent, doit)
        BEGIN_TOKEN = ProcessCommunicationEvent.BEGIN_TOKEN
        END_TOKEN = ProcessCommunicationEvent.END_TOKEN
        data = BEGIN_TOKEN + b'hello' + END_TOKEN
        options = DummyOptions()
        from supervisor.loggers import getLogger
        options.getLogger = getLogger # actually use real logger
        logfile = '/tmp/log'
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile=logfile,
                              stdout_capture_maxbytes=1000)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)

        try:
            dispatcher.output_buffer = data
            dispatcher.record_output()
            self.assertEqual(os.path.getsize(logfile), 0)
            self.assertEqual(len(dispatcher.output_buffer), 0)
            self.assertEqual(len(events), 1)

            event = events[0]
            from supervisor.events import ProcessCommunicationStdoutEvent
            self.assertEqual(event.__class__,
                             ProcessCommunicationStdoutEvent)
            self.assertEqual(event.process, process)
            self.assertEqual(event.channel, 'stdout')
            self.assertEqual(event.data, b'hello')
        finally:
            try:
                dispatcher.capturelog.close()
                dispatcher.childlog.close()
                os.remove(logfile)
            except (OSError, IOError):
                pass

    def test_stdout_capturemode_multiple_buffers(self):
        from supervisor.events import ProcessCommunicationEvent
        from supervisor.events import subscribe
        events = []
        def doit(event):
            events.append(event)
        subscribe(ProcessCommunicationEvent, doit)
        import string # ascii_letters for python 3
        letters = as_bytes(getattr(string, "letters", string.ascii_letters))
        digits = as_bytes(string.digits) * 4
        BEGIN_TOKEN = ProcessCommunicationEvent.BEGIN_TOKEN
        END_TOKEN = ProcessCommunicationEvent.END_TOKEN
        data = (letters + BEGIN_TOKEN + digits + END_TOKEN + letters)

        # boundaries that split tokens
        colon = b':'
        broken = data.split(colon)
        first = broken[0] + colon
        second = broken[1] + colon
        third = broken[2]

        options = DummyOptions()
        from supervisor.loggers import getLogger
        options.getLogger = getLogger # actually use real logger
        logfile = '/tmp/log'
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile=logfile,
                              stdout_capture_maxbytes=10000)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        try:
            dispatcher.output_buffer = first
            dispatcher.record_output()
            [ x.flush() for x in dispatcher.childlog.handlers ]
            with open(logfile, 'rb') as f:
                self.assertEqual(f.read(), letters)
            self.assertEqual(dispatcher.output_buffer, first[len(letters):])
            self.assertEqual(len(events), 0)

            dispatcher.output_buffer += second
            dispatcher.record_output()
            self.assertEqual(len(events), 0)
            [ x.flush() for x in dispatcher.childlog.handlers ]
            with open(logfile, 'rb') as f:
                self.assertEqual(f.read(), letters)
            self.assertEqual(dispatcher.output_buffer, first[len(letters):])
            self.assertEqual(len(events), 0)

            dispatcher.output_buffer += third
            dispatcher.record_output()
            [ x.flush() for x in dispatcher.childlog.handlers ]
            with open(logfile, 'rb') as f:
                self.assertEqual(f.read(), letters * 2)
            self.assertEqual(len(events), 1)
            event = events[0]
            from supervisor.events import ProcessCommunicationStdoutEvent
            self.assertEqual(event.__class__,
                             ProcessCommunicationStdoutEvent)
            self.assertEqual(event.process, process)
            self.assertEqual(event.channel, 'stdout')
            self.assertEqual(event.data, digits)
        finally:
            try:
                dispatcher.capturelog.close()
                dispatcher.childlog.close()
                os.remove(logfile)
            except (OSError, IOError):
                pass

    def test_strip_ansi(self):
        options = DummyOptions()
        options.strip_ansi = True
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)

        ansi = b'\x1b[34mHello world... this is longer than a token!\x1b[0m'
        noansi = b'Hello world... this is longer than a token!'

        dispatcher.output_buffer = ansi
        dispatcher.record_output()
        self.assertEqual(len(dispatcher.childlog.data), 1)
        self.assertEqual(dispatcher.childlog.data[0], noansi)

        options.strip_ansi = False
        dispatcher.output_buffer = ansi
        dispatcher.record_output()
        self.assertEqual(len(dispatcher.childlog.data), 2)
        self.assertEqual(dispatcher.childlog.data[1], ansi)

    def test_ctor_no_logfiles(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.process, process)
        self.assertEqual(dispatcher.channel, 'stdout')
        self.assertEqual(dispatcher.fd, 0)
        self.assertEqual(dispatcher.capturelog, None)
        self.assertEqual(dispatcher.normallog, None)
        self.assertEqual(dispatcher.childlog, None)

    def test_ctor_logfile_only(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.process, process)
        self.assertEqual(dispatcher.channel, 'stdout')
        self.assertEqual(dispatcher.fd, 0)
        self.assertEqual(dispatcher.capturelog, None)
        self.assertEqual(dispatcher.normallog.__class__, DummyLogger)
        self.assertEqual(dispatcher.childlog, dispatcher.normallog)

    def test_ctor_capturelog_only(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_capture_maxbytes=300)
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.process, process)
        self.assertEqual(dispatcher.channel, 'stdout')
        self.assertEqual(dispatcher.fd, 0)
        self.assertEqual(dispatcher.capturelog.__class__, DummyLogger)
        self.assertEqual(dispatcher.normallog, None)
        self.assertEqual(dispatcher.childlog, None)

    def test_ctor_stdout_logfile_is_empty_string(self):
        from supervisor.datatypes import logfile_name
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile=logfile_name(''))
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.process, process)
        self.assertEqual(dispatcher.channel, 'stdout')
        self.assertEqual(dispatcher.fd, 0)
        self.assertEqual(dispatcher.normallog, None)

    def test_ctor_stdout_logfile_none_and_stdout_syslog_false(self):
        from supervisor.datatypes import boolean, logfile_name
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile=logfile_name('NONE'),
                              stdout_syslog=boolean('false'))
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.process, process)
        self.assertEqual(dispatcher.channel, 'stdout')
        self.assertEqual(dispatcher.fd, 0)
        self.assertEqual(dispatcher.normallog, None)

    def test_ctor_stdout_logfile_none_and_stdout_syslog_true(self):
        from supervisor.datatypes import boolean, logfile_name
        from supervisor.loggers import LevelsByName, SyslogHandler
        from supervisor.options import ServerOptions
        options = ServerOptions() # need real options to get a real logger
        options.loglevel = LevelsByName.TRAC
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile=logfile_name('NONE'),
                              stdout_syslog=boolean('true'))
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.process, process)
        self.assertEqual(dispatcher.channel, 'stdout')
        self.assertEqual(dispatcher.fd, 0)
        self.assertEqual(len(dispatcher.normallog.handlers), 1)
        self.assertEqual(dispatcher.normallog.handlers[0].__class__,
                         SyslogHandler)

    def test_ctor_stdout_logfile_str_and_stdout_syslog_false(self):
        from supervisor.datatypes import boolean, logfile_name
        from supervisor.loggers import FileHandler, LevelsByName
        from supervisor.options import ServerOptions
        options = ServerOptions() # need real options to get a real logger
        options.loglevel = LevelsByName.TRAC
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile=logfile_name('/tmp/foo'),
                              stdout_syslog=boolean('false'))
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.process, process)
        self.assertEqual(dispatcher.channel, 'stdout')
        self.assertEqual(dispatcher.fd, 0)
        self.assertEqual(len(dispatcher.normallog.handlers), 1)
        self.assertEqual(dispatcher.normallog.handlers[0].__class__,
                         FileHandler)
        dispatcher.normallog.close()

    def test_ctor_stdout_logfile_str_and_stdout_syslog_true(self):
        from supervisor.datatypes import boolean, logfile_name
        from supervisor.loggers import FileHandler, LevelsByName, SyslogHandler
        from supervisor.options import ServerOptions
        options = ServerOptions() # need real options to get a real logger
        options.loglevel = LevelsByName.TRAC
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile=logfile_name('/tmp/foo'),
                              stdout_syslog=boolean('true'))
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.process, process)
        self.assertEqual(dispatcher.channel, 'stdout')
        self.assertEqual(dispatcher.fd, 0)
        self.assertEqual(len(dispatcher.normallog.handlers), 2)
        self.assertTrue(any(isinstance(h, FileHandler)
                            for h in dispatcher.normallog.handlers))
        self.assertTrue(any(isinstance(h, SyslogHandler)
                            for h in dispatcher.normallog.handlers))
        dispatcher.normallog.close()

    def test_repr(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        drepr = repr(dispatcher)
        self.assertTrue('POutputDispatcher' in drepr)
        self.assertNotEqual(
            drepr.find('supervisor.tests.base.DummyProcess'), -1)
        self.assertTrue(drepr.endswith('(stdout)>'), drepr)

    def test_close(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.close()
        self.assertEqual(dispatcher.closed, True)
        dispatcher.close() # make sure we don't error if we try to close twice
        self.assertEqual(dispatcher.closed, True)


class PInputDispatcherTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.dispatchers import PInputDispatcher
        return PInputDispatcher

    def _makeOne(self, process):
        channel = 'stdin'
        return self._getTargetClass()(process, channel, 0)

    def test_writable_open_nodata(self):
        process = DummyProcess(None)
        dispatcher = self._makeOne(process)
        dispatcher.input_buffer = 'a'
        dispatcher.closed = False
        self.assertEqual(dispatcher.writable(), True)

    def test_writable_open_withdata(self):
        process = DummyProcess(None)
        dispatcher = self._makeOne(process)
        dispatcher.input_buffer = ''
        dispatcher.closed = False
        self.assertEqual(dispatcher.writable(), False)

    def test_writable_closed_nodata(self):
        process = DummyProcess(None)
        dispatcher = self._makeOne(process)
        dispatcher.input_buffer = 'a'
        dispatcher.closed = True
        self.assertEqual(dispatcher.writable(), False)

    def test_writable_closed_withdata(self):
        process = DummyProcess(None)
        dispatcher = self._makeOne(process)
        dispatcher.input_buffer = ''
        dispatcher.closed = True
        self.assertEqual(dispatcher.writable(), False)

    def test_readable(self):
        process = DummyProcess(None)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.readable(), False)

    def test_handle_write_event(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.input_buffer = 'halloooo'
        self.assertEqual(dispatcher.handle_write_event(), None)
        self.assertEqual(options.written[0], 'halloooo')

    def test_handle_write_event_nodata(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.input_buffer, b'')
        dispatcher.handle_write_event()
        self.assertEqual(dispatcher.input_buffer, b'')
        self.assertEqual(options.written, {})

    def test_handle_write_event_epipe_raised(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.input_buffer = 'halloooo'
        import errno
        options.write_exception = OSError(errno.EPIPE,
                                          os.strerror(errno.EPIPE))
        dispatcher.handle_write_event()
        self.assertEqual(dispatcher.input_buffer, b'')
        self.assertTrue(options.logger.data[0].startswith(
            'fd 0 closed, stopped monitoring'))
        self.assertTrue(options.logger.data[0].endswith('(stdin)>'))

    def test_handle_write_event_uncaught_raised(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.input_buffer = 'halloooo'
        import errno
        options.write_exception = OSError(errno.EBADF,
                                          os.strerror(errno.EBADF))
        self.assertRaises(OSError, dispatcher.handle_write_event)

    def test_handle_write_event_over_os_limit(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        options.write_accept = 1
        dispatcher.input_buffer = 'a' * 50
        dispatcher.handle_write_event()
        self.assertEqual(len(dispatcher.input_buffer), 49)
        self.assertEqual(options.written[0], 'a')

    def test_handle_read_event(self):
        process = DummyProcess(None)
        dispatcher = self._makeOne(process)
        self.assertRaises(NotImplementedError, dispatcher.handle_read_event)

    def test_handle_error(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        try:
            raise ValueError('foo')
        except:
            dispatcher.handle_error()
        result = options.logger.data[0]
        self.assertTrue(result.startswith(
            'uncaptured python exception, closing channel'), result)

    def test_repr(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        drepr = repr(dispatcher)
        self.assertTrue('PInputDispatcher' in drepr)
        self.assertNotEqual(
            drepr.find('supervisor.tests.base.DummyProcess'), -1)
        self.assertTrue(drepr.endswith('(stdin)>'), drepr)

    def test_close(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.close()
        self.assertEqual(dispatcher.closed, True)
        dispatcher.close() # make sure we don't error if we try to close twice
        self.assertEqual(dispatcher.closed, True)


class PEventListenerDispatcherTests(unittest.TestCase):
    def setUp(self):
        from supervisor.events import clear
        clear()

    def tearDown(self):
        from supervisor.events import clear
        clear()

    def _getTargetClass(self):
        from supervisor.dispatchers import PEventListenerDispatcher
        return PEventListenerDispatcher

    def _makeOne(self, process):
        channel = 'stdout'
        return self._getTargetClass()(process, channel, 0)

    def test_writable(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.writable(), False)

    def test_readable_open(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.closed = False
        self.assertEqual(dispatcher.readable(), True)

    def test_readable_closed(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.closed = True
        self.assertEqual(dispatcher.readable(), False)

    def test_handle_write_event(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertRaises(NotImplementedError, dispatcher.handle_write_event)

    def test_handle_read_event_calls_handle_listener_state_change(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        process.listener_state = EventListenerStates.ACKNOWLEDGED
        dispatcher = self._makeOne(process)
        options.readfd_result = dispatcher.READY_FOR_EVENTS_TOKEN
        self.assertEqual(dispatcher.handle_read_event(), None)
        self.assertEqual(process.listener_state, EventListenerStates.READY)
        self.assertEqual(dispatcher.state_buffer, b'')
        self.assertEqual(len(dispatcher.childlog.data), 1)
        self.assertEqual(dispatcher.childlog.data[0],
                         dispatcher.READY_FOR_EVENTS_TOKEN)

    def test_handle_read_event_nodata(self):
        options = DummyOptions()
        options.readfd_result = ''
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.handle_read_event(), None)
        self.assertEqual(dispatcher.state_buffer, b'')
        from supervisor.dispatchers import EventListenerStates
        self.assertEqual(dispatcher.process.listener_state,
                         EventListenerStates.ACKNOWLEDGED)

    def test_handle_read_event_logging_nologs(self):
        options = DummyOptions()
        options.readfd_result = b'supercalifragilisticexpialidocious'
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        # just make sure there are no errors if a child logger doesn't
        # exist
        self.assertEqual(dispatcher.handle_read_event(), None)
        self.assertEqual(dispatcher.childlog, None)

    def test_handle_read_event_logging_childlog(self):
        options = DummyOptions()
        options.readfd_result = b'supercalifragilisticexpialidocious'
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        self.assertEqual(dispatcher.handle_read_event(), None)
        self.assertEqual(len(dispatcher.childlog.data), 1)
        self.assertEqual(dispatcher.childlog.data[0],
                         b'supercalifragilisticexpialidocious')

    def test_handle_listener_state_change_from_unknown(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.UNKNOWN
        dispatcher.state_buffer = b'whatever'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'')
        self.assertEqual(options.logger.data, [])
        self.assertEqual(process.listener_state, EventListenerStates.UNKNOWN)

    def test_handle_listener_state_change_acknowledged_to_ready(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.ACKNOWLEDGED
        dispatcher.state_buffer = b'READY\n'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'')
        self.assertEqual(options.logger.data[0],
                         'process1: ACKNOWLEDGED -> READY')
        self.assertEqual(process.listener_state, EventListenerStates.READY)

    def test_handle_listener_state_change_acknowledged_gobbles(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.ACKNOWLEDGED
        dispatcher.state_buffer = b'READY\ngarbage\n'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'')
        self.assertEqual(options.logger.data[0],
                         'process1: ACKNOWLEDGED -> READY')
        self.assertEqual(options.logger.data[1],
                         'process1: READY -> UNKNOWN')
        self.assertEqual(process.listener_state, EventListenerStates.UNKNOWN)

    def test_handle_listener_state_change_acknowledged_to_insufficient(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.ACKNOWLEDGED
        dispatcher.state_buffer = b'RE'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'RE')
        self.assertEqual(options.logger.data, [])
        self.assertEqual(process.listener_state,
                         EventListenerStates.ACKNOWLEDGED)

    def test_handle_listener_state_change_acknowledged_to_unknown(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.ACKNOWLEDGED
        dispatcher.state_buffer = b'bogus data yo'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'')
        self.assertEqual(options.logger.data[0],
                         'process1: ACKNOWLEDGED -> UNKNOWN')
        self.assertEqual(options.logger.data[1],
                         'process1: has entered the UNKNOWN state and will '
                         'no longer receive events, this usually indicates '
                         'the process violated the eventlistener protocol')
        self.assertEqual(process.listener_state, EventListenerStates.UNKNOWN)

    def test_handle_listener_state_change_ready_to_unknown(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.READY
        dispatcher.state_buffer = b'bogus data yo'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'')
        self.assertEqual(options.logger.data[0],
                         'process1: READY -> UNKNOWN')
        self.assertEqual(options.logger.data[1],
                         'process1: has entered the UNKNOWN state and will '
                         'no longer receive events, this usually indicates '
                         'the process violated the eventlistener protocol')
        self.assertEqual(process.listener_state, EventListenerStates.UNKNOWN)

    def test_handle_listener_state_change_busy_to_insufficient(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.BUSY
        dispatcher.state_buffer = b'bogus data yo'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'bogus data yo')
        self.assertEqual(process.listener_state, EventListenerStates.BUSY)

    def test_handle_listener_state_change_busy_to_acknowledged_procd(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.BUSY
        class Dummy:
            pass
        process.group = Dummy()
        process.group.config = Dummy()
        from supervisor.dispatchers import default_handler
        process.group.config.result_handler = default_handler
        dispatcher.state_buffer = b'RESULT 2\nOKabc'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'abc')
        self.assertEqual(options.logger.data[0],
                         'process1: event was processed')
        self.assertEqual(options.logger.data[1],
                         'process1: BUSY -> ACKNOWLEDGED')
        self.assertEqual(process.listener_state,
                         EventListenerStates.ACKNOWLEDGED)

    def test_handle_listener_state_change_busy_to_acknowledged_rejected(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.BUSY
        class Dummy:
            pass
        process.group = Dummy()
        process.group.config = Dummy()
        from supervisor.dispatchers import default_handler
        process.group.config.result_handler = default_handler
        dispatcher.state_buffer = b'RESULT 4\nFAILabc'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'abc')
        self.assertEqual(options.logger.data[0],
                         'process1: event was rejected')
        self.assertEqual(options.logger.data[1],
                         'process1: BUSY -> ACKNOWLEDGED')
        self.assertEqual(process.listener_state,
                         EventListenerStates.ACKNOWLEDGED)

    def test_handle_listener_state_change_busy_to_unknown(self):
        from supervisor.events import EventRejectedEvent
        from supervisor.events import subscribe
        events = []
        def doit(event):
            events.append(event)
        subscribe(EventRejectedEvent, doit)
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.BUSY
        current_event = DummyEvent()
        process.event = current_event
        dispatcher.state_buffer = b'bogus data\n'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'')
        self.assertEqual(options.logger.data[0],
                         "process1: bad result line: 'bogus data'")
        self.assertEqual(options.logger.data[1],
                         'process1: BUSY -> UNKNOWN')
        self.assertEqual(options.logger.data[2],
                         'process1: has entered the UNKNOWN state and will '
                         'no longer receive events, this usually indicates '
                         'the process violated the eventlistener protocol')
        self.assertEqual(process.listener_state, EventListenerStates.UNKNOWN)
        self.assertEqual(events[0].process, process)
        self.assertEqual(events[0].event, current_event)

    def test_handle_listener_state_busy_gobbles(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        process.listener_state = EventListenerStates.BUSY
        class Dummy:
            pass
        process.group = Dummy()
        process.group.config = Dummy()
        from supervisor.dispatchers import default_handler
        process.group.config.result_handler = default_handler
        dispatcher.state_buffer = b'RESULT 2\nOKbogus data\n'
        self.assertEqual(dispatcher.handle_listener_state_change(), None)
        self.assertEqual(dispatcher.state_buffer, b'')
        self.assertEqual(options.logger.data[0],
                         'process1: event was processed')
        self.assertEqual(options.logger.data[1],
                         'process1: BUSY -> ACKNOWLEDGED')
        self.assertEqual(options.logger.data[2],
                         'process1: ACKNOWLEDGED -> UNKNOWN')
        self.assertEqual(options.logger.data[3],
                         'process1: has entered the UNKNOWN state and will '
                         'no longer receive events, this usually indicates '
                         'the process violated the eventlistener protocol')
        self.assertEqual(process.listener_state, EventListenerStates.UNKNOWN)

    def test_handle_result_accept(self):
        from supervisor.events import subscribe
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        L = []
        def doit(event):
            L.append(event)
        from supervisor import events
        subscribe(events.EventRejectedEvent, doit)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        def handle(event, result):
            pass
        class Dummy:
            pass
        process.group = Dummy()
        process.group.config = Dummy()
        process.group.config.result_handler = handle
        process.listener_state = EventListenerStates.BUSY
        dispatcher.handle_result('foo')
        self.assertEqual(len(L), 0)
        self.assertEqual(process.listener_state,
                         EventListenerStates.ACKNOWLEDGED)
        self.assertEqual(options.logger.data[0],
                         'process1: event was processed')
        self.assertEqual(options.logger.data[1],
                         'process1: BUSY -> ACKNOWLEDGED')

    def test_handle_result_rejectevent(self):
        from supervisor.events import subscribe
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        L = []
        def doit(event):
            L.append(event)
        from supervisor import events
        subscribe(events.EventRejectedEvent, doit)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        def rejected(event, result):
            from supervisor.dispatchers import RejectEvent
            raise RejectEvent(result)
        class Dummy:
            pass
        process.group = Dummy()
        process.group.config = Dummy()
        process.group.config.result_handler = rejected
        process.listener_state = EventListenerStates.BUSY
        dispatcher.handle_result('foo')
        self.assertEqual(len(L), 1)
        self.assertEqual(L[0].__class__, events.EventRejectedEvent)
        self.assertEqual(process.listener_state,
                         EventListenerStates.ACKNOWLEDGED)
        self.assertEqual(options.logger.data[0],
                         'process1: event was rejected')
        self.assertEqual(options.logger.data[1],
                         'process1: BUSY -> ACKNOWLEDGED')

    def test_handle_result_exception(self):
        from supervisor.events import subscribe
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1')
        process = DummyProcess(config)
        L = []
        def doit(event):
            L.append(event)
        from supervisor import events
        subscribe(events.EventRejectedEvent, doit)
        from supervisor.dispatchers import EventListenerStates
        dispatcher = self._makeOne(process)
        def exception(event, result):
            raise ValueError
        class Dummy:
            pass
        process.group = Dummy()
        process.group.config = Dummy()
        process.group.config.result_handler = exception
        process.group.result_handler = exception
        process.listener_state = EventListenerStates.BUSY
        dispatcher.handle_result('foo')
        self.assertEqual(len(L), 1)
        self.assertEqual(L[0].__class__, events.EventRejectedEvent)
        self.assertEqual(process.listener_state,
                         EventListenerStates.UNKNOWN)
        self.assertEqual(options.logger.data[0],
                         'process1: event caused an error')
        self.assertEqual(options.logger.data[1],
                         'process1: BUSY -> UNKNOWN')
        self.assertEqual(options.logger.data[2],
                         'process1: has entered the UNKNOWN state and will '
                         'no longer receive events, this usually indicates '
                         'the process violated the eventlistener protocol')

    def test_handle_error(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        try:
            raise ValueError('foo')
        except:
            dispatcher.handle_error()
        result = options.logger.data[0]
        self.assertTrue(result.startswith(
            'uncaptured python exception, closing channel'), result)

    def test_removelogs(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.removelogs()
        self.assertEqual(dispatcher.childlog.handlers[0].reopened, True)
        self.assertEqual(dispatcher.childlog.handlers[0].removed, True)

    def test_reopenlogs(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)
        dispatcher.reopenlogs()
        self.assertEqual(dispatcher.childlog.handlers[0].reopened, True)

    def test_strip_ansi(self):
        options = DummyOptions()
        options.strip_ansi = True
        config = DummyPConfig(options, 'process1', '/bin/process1',
                              stdout_logfile='/tmp/foo')
        process = DummyProcess(config)
        dispatcher = self._makeOne(process)

        ansi = b'\x1b[34mHello world... this is longer than a token!\x1b[0m'
        noansi = b'Hello world... this is longer than a token!'
options.readfd_result = ansi dispatcher.handle_read_event() self.assertEqual(len(dispatcher.childlog.data), 1) self.assertEqual(dispatcher.childlog.data[0], noansi) options.strip_ansi = False options.readfd_result = ansi dispatcher.handle_read_event() self.assertEqual(len(dispatcher.childlog.data), 2) self.assertEqual(dispatcher.childlog.data[1], ansi) def test_ctor_nologfiles(self): options = DummyOptions() config = DummyPConfig(options, 'process1', '/bin/process1') process = DummyProcess(config) dispatcher = self._makeOne(process) self.assertEqual(dispatcher.process, process) self.assertEqual(dispatcher.channel, 'stdout') self.assertEqual(dispatcher.fd, 0) self.assertEqual(dispatcher.childlog, None) def test_ctor_logfile_only(self): options = DummyOptions() config = DummyPConfig(options, 'process1', '/bin/process1', stdout_logfile='/tmp/foo') process = DummyProcess(config) dispatcher = self._makeOne(process) self.assertEqual(dispatcher.process, process) self.assertEqual(dispatcher.channel, 'stdout') self.assertEqual(dispatcher.fd, 0) self.assertEqual(dispatcher.childlog.__class__, DummyLogger) def test_repr(self): options = DummyOptions() config = DummyPConfig(options, 'process1', '/bin/process1') process = DummyProcess(config) dispatcher = self._makeOne(process) drepr = repr(dispatcher) self.assertTrue('PEventListenerDispatcher' in drepr) self.assertNotEqual( drepr.find('supervisor.tests.base.DummyProcess'), -1) self.assertTrue(drepr.endswith('(stdout)>'), drepr) def test_close(self): options = DummyOptions() config = DummyPConfig(options, 'process1', '/bin/process1') process = DummyProcess(config) dispatcher = self._makeOne(process) dispatcher.close() self.assertEqual(dispatcher.closed, True) dispatcher.close() # make sure we don't error if we try to close twice self.assertEqual(dispatcher.closed, True) class stripEscapeTests(unittest.TestCase): def _callFUT(self, s): from supervisor.dispatchers import stripEscapes return stripEscapes(s) def 
test_zero_length_string(self):
        self.assertEqual(self._callFUT(b''), b'')

    def test_ansi(self):
        ansi = b'\x1b[34mHello world... this is longer than a token!\x1b[0m'
        noansi = b'Hello world... this is longer than a token!'
        self.assertEqual(self._callFUT(ansi), noansi)

    def test_noansi(self):
        noansi = b'Hello world... this is longer than a token!'
        self.assertEqual(self._callFUT(noansi), noansi)

def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

supervisor-4.2.5/supervisor/tests/test_end_to_end.py

# ~*~ coding: utf-8 ~*~
from __future__ import unicode_literals
import os
import signal
import sys
import unittest
import pkg_resources
from supervisor.compat import xmlrpclib
from supervisor.xmlrpc import SupervisorTransport

# end-to-end tests are slow so only run them when asked
if 'END_TO_END' in os.environ:
    import pexpect
    BaseTestCase = unittest.TestCase
else:
    BaseTestCase = object

class EndToEndTests(BaseTestCase):
    def test_issue_291a_percent_signs_in_original_env_are_preserved(self):
        """When an environment variable whose value contains a percent sign
        is present in the environment before supervisord starts, the value
        is passed to the child without the percent sign being mangled."""
        key = "SUPERVISOR_TEST_1441B"
        val = "foo_%s_%_%%_%%%_%2_bar"
        filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-291a.conf')
        args = ['-m', 'supervisor.supervisord', '-c', filename]
        try:
            os.environ[key] = val
            supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8')
            self.addCleanup(supervisord.kill, signal.SIGINT)
            supervisord.expect_exact(key + "=" + val)
        finally:
            del os.environ[key]

    def test_issue_550(self):
        """When an environment variable is set in the [supervisord]
        section, it should be put into the environment of the
subprocess.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-550.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('success: print_env entered RUNNING state') supervisord.expect_exact('exited: print_env (exit status 0; expected)') args = ['-m', 'supervisor.supervisorctl', '-c', filename, 'tail -100000', 'print_env'] supervisorctl = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisorctl.kill, signal.SIGINT) supervisorctl.expect_exact("THIS_SHOULD=BE_IN_CHILD_ENV") supervisorctl.expect(pexpect.EOF) def test_issue_565(self): """When a log file has Unicode characters in it, 'supervisorctl tail -f name' should still work.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-565.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('success: hello entered RUNNING state') args = ['-m', 'supervisor.supervisorctl', '-c', filename, 'tail', '-f', 'hello'] supervisorctl = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisorctl.kill, signal.SIGINT) for i in range(1, 4): line = 'The Øresund bridge ends in Malmö - %d' % i supervisorctl.expect_exact(line, timeout=30) def test_issue_638(self): """When a process outputs something on its stdout or stderr file descriptor that is not valid UTF-8, supervisord should not crash.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-638.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) is_py2 = sys.version_info[0] < 3 if is_py2: b_prefix = '' else: b_prefix = 'b' 
supervisord.expect_exact(r"Undecodable: %s'\x88\n'" % b_prefix, timeout=30) supervisord.expect('received SIGCH?LD indicating a child quit', timeout=30) if is_py2: # need to investigate why this message is only printed under 2.x supervisord.expect_exact('gave up: produce-unicode-error entered FATAL state, ' 'too many start retries too quickly', timeout=60) def test_issue_663(self): """When Supervisor is run on Python 3, the eventlistener protocol should work.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-663.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) for i in range(2): supervisord.expect_exact('OKREADY', timeout=60) supervisord.expect_exact('BUSY -> ACKNOWLEDGED', timeout=30) def test_issue_664(self): """When a subprocess name has Unicode characters, 'supervisord' should not send incomplete XML-RPC responses and 'supervisorctl status' should work.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-664.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('test_öäü entered RUNNING state', timeout=60) args = ['-m', 'supervisor.supervisorctl', '-c', filename, 'status'] supervisorctl = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisorctl.kill, signal.SIGINT) try: supervisorctl.expect('test_öäü\\s+RUNNING', timeout=30) seen = True except pexpect.ExceptionPexpect: seen = False self.assertTrue(seen) def test_issue_733(self): """When a subprocess enters the FATAL state, a one-line eventlistener can be used to signal supervisord to shut down.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-733.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = 
pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('gave up: nonexistent entered FATAL state') supervisord.expect_exact('received SIGTERM indicating exit request') supervisord.expect(pexpect.EOF) def test_issue_835(self): filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-835.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('cat entered RUNNING state', timeout=60) transport = SupervisorTransport('', '', 'unix:///tmp/issue-835.sock') server = xmlrpclib.ServerProxy('http://anything/RPC2', transport) try: for s in ('The Øresund bridge ends in Malmö', 'hello'): result = server.supervisor.sendProcessStdin('cat', s) self.assertTrue(result) supervisord.expect_exact(s, timeout=30) finally: transport.connection.close() def test_issue_836(self): filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-836.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('cat entered RUNNING state', timeout=60) args = ['-m', 'supervisor.supervisorctl', '-c', filename, 'fg', 'cat'] supervisorctl = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisorctl.kill, signal.SIGINT) try: for s in ('Hi', 'Hello', 'The Øresund bridge ends in Malmö'): supervisorctl.sendline(s) supervisord.expect_exact(s, timeout=60) supervisorctl.expect_exact(s) # echoed locally supervisorctl.expect_exact(s) # sent back by supervisord seen = True except pexpect.ExceptionPexpect: seen = False self.assertTrue(seen) def test_issue_986_command_string_with_double_percent(self): """A percent sign can be used in a command= string without being expanded if it is escaped by a second percent 
sign."""
        filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-986.conf')
        args = ['-m', 'supervisor.supervisord', '-c', filename]
        supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8')
        self.addCleanup(supervisord.kill, signal.SIGINT)
        supervisord.expect_exact('dhcrelay -d -q -a %h:%p %P -i Vlan1000 192.168.0.1')

    def test_issue_1054(self):
        """When run on Python 3, the 'supervisorctl avail' command should
        work."""
        filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1054.conf')
        args = ['-m', 'supervisor.supervisord', '-c', filename]
        supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8')
        self.addCleanup(supervisord.kill, signal.SIGINT)
        supervisord.expect_exact('cat entered RUNNING state', timeout=60)
        args = ['-m', 'supervisor.supervisorctl', '-c', filename, 'avail']
        supervisorctl = pexpect.spawn(sys.executable, args, encoding='utf-8')
        try:
            supervisorctl.expect('cat\\s+in use\\s+auto', timeout=30)
            seen = True
        except pexpect.ExceptionPexpect:
            seen = False
        self.assertTrue(seen)

    def test_issue_1170a(self):
        """When the [supervisord] section has a variable defined in
        environment=, that variable should be able to be used in an
        %(ENV_x) expansion in a [program] section."""
        filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1170a.conf')
        args = ['-m', 'supervisor.supervisord', '-c', filename]
        supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8')
        self.addCleanup(supervisord.kill, signal.SIGINT)
        supervisord.expect_exact("set from [supervisord] section")

    def test_issue_1170b(self):
        """When the [supervisord] section has a variable defined in
        environment=, and a variable by the same name is defined in
        environment= of a [program] section, the one in the [program]
        section should be used."""
        filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1170b.conf')
        args = ['-m', 'supervisor.supervisord', '-c', filename]
        supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8')
        self.addCleanup(supervisord.kill, signal.SIGINT)
        supervisord.expect_exact("set from [program] section")

    def test_issue_1170c(self):
        """When the [supervisord] section has a variable defined in
        environment=, and a variable by the same name is defined in
        environment= of an [eventlistener] section, the one in the
        [eventlistener] section should be used."""
        filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1170c.conf')
        args = ['-m', 'supervisor.supervisord', '-c', filename]
        supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8')
        self.addCleanup(supervisord.kill, signal.SIGINT)
        supervisord.expect_exact("set from [eventlistener] section")

    def test_issue_1224(self):
        """When the main log file does not need rotation (logfile_maxbytes=0)
        then the non-rotating logger will be used to avoid an IllegalSeekError
        in the case that the user has configured a non-seekable file like
        /dev/stdout."""
        filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1224.conf')
        args = ['-m', 'supervisor.supervisord', '-c', filename]
        supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8')
        self.addCleanup(supervisord.kill, signal.SIGINT)
        supervisord.expect_exact('cat entered RUNNING state', timeout=60)

    def test_issue_1231a(self):
        """When 'supervisorctl tail -f name' is run and the log contains
        unicode, it should not fail."""
        filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1231a.conf')
        args = ['-m', 'supervisor.supervisord', '-c', filename]
        supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8')
        self.addCleanup(supervisord.kill, signal.SIGINT)
        supervisord.expect_exact('success: hello entered RUNNING state')
        args = ['-m', 'supervisor.supervisorctl', '-c', filename, 'tail', '-f', 'hello']
        supervisorctl = pexpect.spawn(sys.executable, args, encoding='utf-8')
        self.addCleanup(supervisorctl.kill, signal.SIGINT)
        for i in range(1, 4):
            line = '%d - hash=57d94b…381088' % i
            supervisorctl.expect_exact(line, timeout=30)

    def
test_issue_1231b(self): """When 'supervisorctl tail -f name' is run and the log contains unicode, it should not fail.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1231b.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('success: hello entered RUNNING state') args = ['-m', 'supervisor.supervisorctl', '-c', filename, 'tail', '-f', 'hello'] env = os.environ.copy() env['LANG'] = 'oops' supervisorctl = pexpect.spawn(sys.executable, args, encoding='utf-8', env=env) self.addCleanup(supervisorctl.kill, signal.SIGINT) # For Python 3 < 3.7, LANG=oops leads to warnings because of the # stdout encoding. For 3.7 (and presumably later), the encoding is # utf-8 when LANG=oops. if sys.version_info[:2] < (3, 7): supervisorctl.expect('Warning: sys.stdout.encoding is set to ', timeout=30) supervisorctl.expect('Unicode output may fail.', timeout=30) for i in range(1, 4): line = '%d - hash=57d94b…381088' % i try: supervisorctl.expect_exact(line, timeout=30) except pexpect.exceptions.EOF: self.assertIn('Unable to write Unicode to stdout because it ' 'has encoding ', supervisorctl.before) break def test_issue_1231c(self): """When 'supervisorctl tail -f name' is run and the log contains unicode, it should not fail.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1231c.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('success: hello entered RUNNING state') args = ['-m', 'supervisor.supervisorctl', '-c', filename, 'tail', 'hello'] env = os.environ.copy() env['LANG'] = 'oops' supervisorctl = pexpect.spawn(sys.executable, args, encoding='utf-8', env=env) self.addCleanup(supervisorctl.kill, signal.SIGINT) # For Python 3 < 3.7, LANG=oops 
leads to warnings because of the # stdout encoding. For 3.7 (and presumably later), the encoding is # utf-8 when LANG=oops. if sys.version_info[:2] < (3, 7): supervisorctl.expect('Warning: sys.stdout.encoding is set to ', timeout=30) supervisorctl.expect('Unicode output may fail.', timeout=30) def test_issue_1251(self): """When -? is given to supervisord or supervisorctl, help should be displayed like -h does.""" args = ['-m', 'supervisor.supervisord', '-?'] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact("supervisord -- run a set of applications") supervisord.expect_exact("-l/--logfile FILENAME -- use FILENAME as") supervisord.expect(pexpect.EOF) args = ['-m', 'supervisor.supervisorctl', '-?'] supervisorctl = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisorctl.kill, signal.SIGINT) supervisorctl.expect_exact("supervisorctl -- control applications") supervisorctl.expect_exact("-i/--interactive -- start an interactive") supervisorctl.expect(pexpect.EOF) def test_issue_1298(self): """When the output of 'supervisorctl tail -f worker' is piped such as 'supervisor tail -f worker | grep something', 'supervisorctl' should not crash.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1298.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('success: spew entered RUNNING state') cmd = "'%s' -m supervisor.supervisorctl -c '%s' tail -f spew | /bin/cat -u" % ( sys.executable, filename ) bash = pexpect.spawn('/bin/sh', ['-c', cmd], encoding='utf-8') self.addCleanup(bash.kill, signal.SIGINT) bash.expect('spewage 2', timeout=30) def test_issue_1418_pidproxy_cmd_with_no_args(self): """When pidproxy is given a command to run that has no arguments, it runs that command.""" args = ['-m', 
'supervisor.pidproxy', 'nonexistent-pidfile', "/bin/echo"] pidproxy = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(pidproxy.kill, signal.SIGINT) pidproxy.expect(pexpect.EOF) self.assertEqual(pidproxy.before.strip(), "") def test_issue_1418_pidproxy_cmd_with_args(self): """When pidproxy is given a command to run that has arguments, it runs that command.""" args = ['-m', 'supervisor.pidproxy', 'nonexistent-pidfile', "/bin/echo", "1", "2"] pidproxy = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(pidproxy.kill, signal.SIGINT) pidproxy.expect(pexpect.EOF) self.assertEqual(pidproxy.before.strip(), "1 2") def test_issue_1483a_identifier_default(self): """When no identifier is supplied on the command line or in the config file, the default is used.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1483a.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('supervisord started with pid') from supervisor.compat import xmlrpclib from supervisor.xmlrpc import SupervisorTransport transport = SupervisorTransport('', '', 'unix:///tmp/issue-1483a.sock') try: server = xmlrpclib.ServerProxy('http://transport.ignores.host/RPC2', transport) ident = server.supervisor.getIdentification() finally: transport.close() self.assertEqual(ident, "supervisor") def test_issue_1483b_identifier_from_config_file(self): """When the identifier is supplied in the config file only, that identifier is used instead of the default.""" filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1483b.conf') args = ['-m', 'supervisor.supervisord', '-c', filename] supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8') self.addCleanup(supervisord.kill, signal.SIGINT) supervisord.expect_exact('supervisord started with pid') from supervisor.compat import xmlrpclib from 
supervisor.xmlrpc import SupervisorTransport
        transport = SupervisorTransport('', '', 'unix:///tmp/issue-1483b.sock')
        try:
            server = xmlrpclib.ServerProxy('http://transport.ignores.host/RPC2', transport)
            ident = server.supervisor.getIdentification()
        finally:
            transport.close()
        self.assertEqual(ident, "from_config_file")

    def test_issue_1483c_identifier_from_command_line(self):
        """When an identifier is supplied in both the config file and on the
        command line, the one from the command line is used."""
        filename = pkg_resources.resource_filename(__name__, 'fixtures/issue-1483c.conf')
        args = ['-m', 'supervisor.supervisord', '-c', filename, '-i', 'from_command_line']
        supervisord = pexpect.spawn(sys.executable, args, encoding='utf-8')
        self.addCleanup(supervisord.kill, signal.SIGINT)
        supervisord.expect_exact('supervisord started with pid')
        from supervisor.compat import xmlrpclib
        from supervisor.xmlrpc import SupervisorTransport
        transport = SupervisorTransport('', '', 'unix:///tmp/issue-1483c.sock')
        try:
            server = xmlrpclib.ServerProxy('http://transport.ignores.host/RPC2', transport)
            ident = server.supervisor.getIdentification()
        finally:
            transport.close()
        self.assertEqual(ident, "from_command_line")

def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

supervisor-4.2.5/supervisor/tests/test_events.py

import sys
import unittest

from supervisor.tests.base import DummyOptions
from supervisor.tests.base import DummyPConfig
from supervisor.tests.base import DummyProcess
from supervisor.tests.base import DummyEvent

class EventSubscriptionNotificationTests(unittest.TestCase):
    def setUp(self):
        from supervisor import events
        events.callbacks[:] = []

    def tearDown(self):
        from supervisor import events
        events.callbacks[:] = []

    def
test_subscribe(self): from supervisor import events events.subscribe(None, None) self.assertEqual(events.callbacks, [(None, None)]) def test_unsubscribe(self): from supervisor import events events.callbacks[:] = [(1, 1), (2, 2), (3, 3)] events.unsubscribe(2, 2) self.assertEqual(events.callbacks, [(1, 1), (3, 3)]) def test_clear(self): from supervisor import events events.callbacks[:] = [(None, None)] events.clear() self.assertEqual(events.callbacks, []) def test_notify_true(self): from supervisor import events L = [] def callback(event): L.append(1) events.callbacks[:] = [(DummyEvent, callback)] events.notify(DummyEvent()) self.assertEqual(L, [1]) def test_notify_false(self): from supervisor import events L = [] def callback(event): L.append(1) class AnotherEvent: pass events.callbacks[:] = [(AnotherEvent, callback)] events.notify(DummyEvent()) self.assertEqual(L, []) def test_notify_via_subclass(self): from supervisor import events L = [] def callback(event): L.append(1) class ASubclassEvent(DummyEvent): pass events.callbacks[:] = [(DummyEvent, callback)] events.notify(ASubclassEvent()) self.assertEqual(L, [1]) class TestEventTypes(unittest.TestCase): def test_ProcessLogEvent_attributes(self): from supervisor.events import ProcessLogEvent inst = ProcessLogEvent(1, 2, 3) self.assertEqual(inst.process, 1) self.assertEqual(inst.pid, 2) self.assertEqual(inst.data, 3) def test_ProcessLogEvent_inheritance(self): from supervisor.events import ProcessLogEvent from supervisor.events import Event self.assertTrue( issubclass(ProcessLogEvent, Event) ) def test_ProcessLogStdoutEvent_attributes(self): from supervisor.events import ProcessLogStdoutEvent inst = ProcessLogStdoutEvent(1, 2, 3) self.assertEqual(inst.process, 1) self.assertEqual(inst.pid, 2) self.assertEqual(inst.data, 3) self.assertEqual(inst.channel, 'stdout') def test_ProcessLogStdoutEvent_inheritance(self): from supervisor.events import ProcessLogStdoutEvent from supervisor.events import ProcessLogEvent 
self.assertTrue( issubclass(ProcessLogStdoutEvent, ProcessLogEvent) ) def test_ProcessLogStderrEvent_attributes(self): from supervisor.events import ProcessLogStderrEvent inst = ProcessLogStderrEvent(1, 2, 3) self.assertEqual(inst.process, 1) self.assertEqual(inst.pid, 2) self.assertEqual(inst.data, 3) self.assertEqual(inst.channel, 'stderr') def test_ProcessLogStderrEvent_inheritance(self): from supervisor.events import ProcessLogStderrEvent from supervisor.events import ProcessLogEvent self.assertTrue( issubclass(ProcessLogStderrEvent, ProcessLogEvent) ) def test_ProcessCommunicationEvent_attributes(self): from supervisor.events import ProcessCommunicationEvent inst = ProcessCommunicationEvent(1, 2, 3) self.assertEqual(inst.process, 1) self.assertEqual(inst.pid, 2) self.assertEqual(inst.data, 3) def test_ProcessCommunicationEvent_inheritance(self): from supervisor.events import ProcessCommunicationEvent from supervisor.events import Event self.assertTrue( issubclass(ProcessCommunicationEvent, Event) ) def test_ProcessCommunicationStdoutEvent_attributes(self): from supervisor.events import ProcessCommunicationStdoutEvent inst = ProcessCommunicationStdoutEvent(1, 2, 3) self.assertEqual(inst.process, 1) self.assertEqual(inst.pid, 2) self.assertEqual(inst.data, 3) self.assertEqual(inst.channel, 'stdout') def test_ProcessCommunicationStdoutEvent_inheritance(self): from supervisor.events import ProcessCommunicationStdoutEvent from supervisor.events import ProcessCommunicationEvent self.assertTrue( issubclass(ProcessCommunicationStdoutEvent, ProcessCommunicationEvent) ) def test_ProcessCommunicationStderrEvent_attributes(self): from supervisor.events import ProcessCommunicationStderrEvent inst = ProcessCommunicationStderrEvent(1, 2, 3) self.assertEqual(inst.process, 1) self.assertEqual(inst.pid, 2) self.assertEqual(inst.data, 3) self.assertEqual(inst.channel, 'stderr') def test_ProcessCommunicationStderrEvent_inheritance(self): from supervisor.events import 
ProcessCommunicationStderrEvent from supervisor.events import ProcessCommunicationEvent self.assertTrue( issubclass(ProcessCommunicationStderrEvent, ProcessCommunicationEvent) ) def test_RemoteCommunicationEvent_attributes(self): from supervisor.events import RemoteCommunicationEvent inst = RemoteCommunicationEvent(1, 2) self.assertEqual(inst.type, 1) self.assertEqual(inst.data, 2) def test_RemoteCommunicationEvent_inheritance(self): from supervisor.events import RemoteCommunicationEvent from supervisor.events import Event self.assertTrue( issubclass(RemoteCommunicationEvent, Event) ) def test_EventRejectedEvent_attributes(self): from supervisor.events import EventRejectedEvent options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'process1','/bin/process1') process = DummyProcess(pconfig1) rejected_event = DummyEvent() event = EventRejectedEvent(process, rejected_event) self.assertEqual(event.process, process) self.assertEqual(event.event, rejected_event) def test_EventRejectedEvent_does_not_inherit_from_event(self): from supervisor.events import EventRejectedEvent from supervisor.events import Event self.assertFalse( issubclass(EventRejectedEvent, Event) ) def test_all_SupervisorStateChangeEvents(self): from supervisor import events for klass in ( events.SupervisorStateChangeEvent, events.SupervisorRunningEvent, events.SupervisorStoppingEvent ): self._test_one_SupervisorStateChangeEvent(klass) def _test_one_SupervisorStateChangeEvent(self, klass): from supervisor.events import SupervisorStateChangeEvent self.assertTrue(issubclass(klass, SupervisorStateChangeEvent)) def test_all_ProcessStateEvents(self): from supervisor import events for klass in ( events.ProcessStateEvent, events.ProcessStateStoppedEvent, events.ProcessStateExitedEvent, events.ProcessStateFatalEvent, events.ProcessStateBackoffEvent, events.ProcessStateRunningEvent, events.ProcessStateUnknownEvent, events.ProcessStateStoppingEvent, events.ProcessStateStartingEvent, ): 
self._test_one_ProcessStateEvent(klass) def _test_one_ProcessStateEvent(self, klass): from supervisor.states import ProcessStates from supervisor.events import ProcessStateEvent self.assertTrue(issubclass(klass, ProcessStateEvent)) options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'process1','/bin/process1') process = DummyProcess(pconfig1) inst = klass(process, ProcessStates.STARTING) self.assertEqual(inst.process, process) self.assertEqual(inst.from_state, ProcessStates.STARTING) self.assertEqual(inst.expected, True) def test_all_TickEvents(self): from supervisor import events for klass in ( events.TickEvent, events.Tick5Event, events.Tick60Event, events.Tick3600Event ): self._test_one_TickEvent(klass) def _test_one_TickEvent(self, klass): from supervisor.events import TickEvent self.assertTrue(issubclass(klass, TickEvent)) inst = klass(1, 2) self.assertEqual(inst.when, 1) self.assertEqual(inst.supervisord, 2) def test_ProcessGroupAddedEvent_attributes(self): from supervisor.events import ProcessGroupAddedEvent inst = ProcessGroupAddedEvent('myprocess') self.assertEqual(inst.group, 'myprocess') def test_ProcessGroupRemovedEvent_attributes(self): from supervisor.events import ProcessGroupRemovedEvent inst = ProcessGroupRemovedEvent('myprocess') self.assertEqual(inst.group, 'myprocess') class TestSerializations(unittest.TestCase): def _deserialize(self, serialization): data = serialization.split('\n') headerdata = data[0] payload = '' headers = {} if len(data) > 1: payload = data[1] if headerdata: try: headers = dict( [ x.split(':',1) for x in headerdata.split()] ) except ValueError: raise AssertionError('headerdata %r could not be deserialized' % headerdata) return headers, payload def test_plog_stdout_event(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'process1','/bin/process1') process1 = DummyProcess(pconfig1) from supervisor.events import ProcessLogStdoutEvent class DummyGroup: config = pconfig1 
process1.group = DummyGroup event = ProcessLogStdoutEvent(process1, 1, 'yo') headers, payload = self._deserialize(event.payload()) self.assertEqual(headers['processname'], 'process1', headers) self.assertEqual(headers['groupname'], 'process1', headers) self.assertEqual(headers['pid'], '1', headers) self.assertEqual(payload, 'yo') def test_plog_stderr_event(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'process1','/bin/process1') process1 = DummyProcess(pconfig1) from supervisor.events import ProcessLogStderrEvent class DummyGroup: config = pconfig1 process1.group = DummyGroup event = ProcessLogStderrEvent(process1, 1, 'yo') headers, payload = self._deserialize(event.payload()) self.assertEqual(headers['processname'], 'process1', headers) self.assertEqual(headers['groupname'], 'process1', headers) self.assertEqual(headers['pid'], '1', headers) self.assertEqual(payload, 'yo') def test_pcomm_stdout_event(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'process1','/bin/process1') process1 = DummyProcess(pconfig1) from supervisor.events import ProcessCommunicationStdoutEvent class DummyGroup: config = pconfig1 process1.group = DummyGroup event = ProcessCommunicationStdoutEvent(process1, 1, 'yo') headers, payload = self._deserialize(event.payload()) self.assertEqual(headers['processname'], 'process1', headers) self.assertEqual(headers['groupname'], 'process1', headers) self.assertEqual(headers['pid'], '1', headers) self.assertEqual(payload, 'yo') def test_pcomm_stderr_event(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'process1','/bin/process1') process1 = DummyProcess(pconfig1) class DummyGroup: config = pconfig1 process1.group = DummyGroup from supervisor.events import ProcessCommunicationStderrEvent event = ProcessCommunicationStderrEvent(process1, 1, 'yo') headers, payload = self._deserialize(event.payload()) self.assertEqual(headers['processname'], 'process1', headers) 
        self.assertEqual(headers['groupname'], 'process1', headers)
        self.assertEqual(headers['pid'], '1', headers)
        self.assertEqual(payload, 'yo')

    def test_remote_comm_event(self):
        from supervisor.events import RemoteCommunicationEvent
        event = RemoteCommunicationEvent('foo', 'bar')
        headers, payload = self._deserialize(event.payload())
        self.assertEqual(headers['type'], 'foo', headers)
        self.assertEqual(payload, 'bar')

    def test_process_group_added_event(self):
        from supervisor.events import ProcessGroupAddedEvent
        event = ProcessGroupAddedEvent('foo')
        headers, payload = self._deserialize(event.payload())
        self.assertEqual(headers['groupname'], 'foo')
        self.assertEqual(payload, '')

    def test_process_group_removed_event(self):
        from supervisor.events import ProcessGroupRemovedEvent
        event = ProcessGroupRemovedEvent('foo')
        headers, payload = self._deserialize(event.payload())
        self.assertEqual(headers['groupname'], 'foo')
        self.assertEqual(payload, '')

    def test_process_state_events_without_extra_values(self):
        from supervisor.states import ProcessStates
        from supervisor import events
        for klass in (
            events.ProcessStateFatalEvent,
            events.ProcessStateUnknownEvent,
        ):
            options = DummyOptions()
            pconfig1 = DummyPConfig(options, 'process1', 'process1',
                                    '/bin/process1')
            class DummyGroup:
                config = pconfig1
            process1 = DummyProcess(pconfig1)
            process1.group = DummyGroup
            event = klass(process1, ProcessStates.STARTING)
            headers, payload = self._deserialize(event.payload())
            self.assertEqual(len(headers), 3)
            self.assertEqual(headers['processname'], 'process1')
            self.assertEqual(headers['groupname'], 'process1')
            self.assertEqual(headers['from_state'], 'STARTING')
            self.assertEqual(payload, '')

    def test_process_state_events_with_pid(self):
        from supervisor.states import ProcessStates
        from supervisor import events
        for klass in (
            events.ProcessStateRunningEvent,
            events.ProcessStateStoppedEvent,
            events.ProcessStateStoppingEvent,
        ):
            options = DummyOptions()
            pconfig1 = DummyPConfig(options, 'process1', 'process1',
                                    '/bin/process1')
            class DummyGroup:
                config = pconfig1
            process1 = DummyProcess(pconfig1)
            process1.group = DummyGroup
            process1.pid = 1
            event = klass(process1, ProcessStates.STARTING)
            headers, payload = self._deserialize(event.payload())
            self.assertEqual(len(headers), 4)
            self.assertEqual(headers['processname'], 'process1')
            self.assertEqual(headers['groupname'], 'process1')
            self.assertEqual(headers['from_state'], 'STARTING')
            self.assertEqual(headers['pid'], '1')
            self.assertEqual(payload, '')

    def test_process_state_events_starting_and_backoff(self):
        from supervisor.states import ProcessStates
        from supervisor import events
        for klass in (
            events.ProcessStateStartingEvent,
            events.ProcessStateBackoffEvent,
        ):
            options = DummyOptions()
            pconfig1 = DummyPConfig(options, 'process1', 'process1',
                                    '/bin/process1')
            class DummyGroup:
                config = pconfig1
            process1 = DummyProcess(pconfig1)
            process1.group = DummyGroup
            event = klass(process1, ProcessStates.STARTING)
            headers, payload = self._deserialize(event.payload())
            self.assertEqual(len(headers), 4)
            self.assertEqual(headers['processname'], 'process1')
            self.assertEqual(headers['groupname'], 'process1')
            self.assertEqual(headers['from_state'], 'STARTING')
            self.assertEqual(headers['tries'], '0')
            self.assertEqual(payload, '')
            process1.backoff = 1
            event = klass(process1, ProcessStates.STARTING)
            headers, payload = self._deserialize(event.payload())
            self.assertEqual(headers['tries'], '1')
            process1.backoff = 2
            event = klass(process1, ProcessStates.STARTING)
            headers, payload = self._deserialize(event.payload())
            self.assertEqual(headers['tries'], '2')

    def test_process_state_exited_event_expected(self):
        from supervisor import events
        from supervisor.states import ProcessStates
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', 'process1',
                                '/bin/process1')
        process1 = DummyProcess(pconfig1)
        class DummyGroup:
            config = pconfig1
        process1.group = DummyGroup
        process1.pid = 1
        event = events.ProcessStateExitedEvent(process1,
                                               ProcessStates.STARTING,
                                               expected=True)
        headers, payload = self._deserialize(event.payload())
        self.assertEqual(len(headers), 5)
        self.assertEqual(headers['processname'], 'process1')
        self.assertEqual(headers['groupname'], 'process1')
        self.assertEqual(headers['pid'], '1')
        self.assertEqual(headers['from_state'], 'STARTING')
        self.assertEqual(headers['expected'], '1')
        self.assertEqual(payload, '')

    def test_process_state_exited_event_unexpected(self):
        from supervisor import events
        from supervisor.states import ProcessStates
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', 'process1',
                                '/bin/process1')
        process1 = DummyProcess(pconfig1)
        class DummyGroup:
            config = pconfig1
        process1.group = DummyGroup
        process1.pid = 1
        event = events.ProcessStateExitedEvent(process1,
                                               ProcessStates.STARTING,
                                               expected=False)
        headers, payload = self._deserialize(event.payload())
        self.assertEqual(len(headers), 5)
        self.assertEqual(headers['processname'], 'process1')
        self.assertEqual(headers['groupname'], 'process1')
        self.assertEqual(headers['pid'], '1')
        self.assertEqual(headers['from_state'], 'STARTING')
        self.assertEqual(headers['expected'], '0')
        self.assertEqual(payload, '')

    def test_supervisor_sc_event(self):
        from supervisor import events
        event = events.SupervisorRunningEvent()
        headers, payload = self._deserialize(event.payload())
        self.assertEqual(headers, {})
        self.assertEqual(payload, '')

    def test_tick_events(self):
        from supervisor import events
        for klass in (
            events.Tick5Event,
            events.Tick60Event,
            events.Tick3600Event,
        ):
            event = klass(1, 2)
            headers, payload = self._deserialize(event.payload())
            self.assertEqual(headers, {'when':'1'})
            self.assertEqual(payload, '')

class TestUtilityFunctions(unittest.TestCase):
    def test_getEventNameByType(self):
        from supervisor import events
        for name, value in events.EventTypes.__dict__.items():
            self.assertEqual(events.getEventNameByType(value), name)

    def test_register(self):
        from supervisor import events
        self.assertFalse(hasattr(events.EventTypes, 'FOO'))
        class FooEvent(events.Event):
            pass
        try:
            events.register('FOO', FooEvent)
            self.assertTrue(events.EventTypes.FOO is FooEvent)
        finally:
            del events.EventTypes.FOO

def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

# supervisor-4.2.5/supervisor/tests/test_http.py
import base64
import os
import stat
import sys
import socket
import tempfile
import unittest

from supervisor.compat import as_bytes
from supervisor.compat import as_string
from supervisor.compat import sha1
from supervisor.tests.base import DummySupervisor
from supervisor.tests.base import PopulatedDummySupervisor
from supervisor.tests.base import DummyRPCInterfaceFactory
from supervisor.tests.base import DummyPConfig
from supervisor.tests.base import DummyOptions
from supervisor.tests.base import DummyRequest
from supervisor.tests.base import DummyLogger
from supervisor.http import NOT_DONE_YET

class HandlerTests:
    def _makeOne(self, supervisord):
        return self._getTargetClass()(supervisord)

    def test_match(self):
        class FakeRequest:
            def __init__(self, uri):
                self.uri = uri
        supervisor = DummySupervisor()
        handler = self._makeOne(supervisor)
        self.assertEqual(handler.match(FakeRequest(handler.path)), True)

class LogtailHandlerTests(HandlerTests, unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.http import logtail_handler
        return logtail_handler

    def test_handle_request_stdout_logfile_none(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process1', '/bin/process1',
                               priority=1,
                               stdout_logfile='/tmp/process1.log')
        supervisord = PopulatedDummySupervisor(options, 'process1', pconfig)
        handler = self._makeOne(supervisord)
        request = DummyRequest('/logtail/process1', None, None, None)
handler.handle_request(request) self.assertEqual(request._error, 404) def test_handle_request_stdout_logfile_missing(self): options = DummyOptions() pconfig = DummyPConfig(options, 'foo', 'foo', 'it/is/missing') supervisord = PopulatedDummySupervisor(options, 'foo', pconfig) handler = self._makeOne(supervisord) request = DummyRequest('/logtail/foo', None, None, None) handler.handle_request(request) self.assertEqual(request._error, 404) def test_handle_request(self): with tempfile.NamedTemporaryFile() as f: t = f.name options = DummyOptions() pconfig = DummyPConfig(options, 'foo', 'foo', stdout_logfile=t) supervisord = PopulatedDummySupervisor(options, 'foo', pconfig) handler = self._makeOne(supervisord) request = DummyRequest('/logtail/foo', None, None, None) handler.handle_request(request) self.assertEqual(request._error, None) from supervisor.medusa import http_date self.assertEqual(request.headers['Last-Modified'], http_date.build_http_date(os.stat(t)[stat.ST_MTIME])) self.assertEqual(request.headers['Content-Type'], 'text/plain;charset=utf-8') self.assertEqual(request.headers['X-Accel-Buffering'], 'no') self.assertEqual(len(request.producers), 1) self.assertEqual(request._done, True) class MainLogTailHandlerTests(HandlerTests, unittest.TestCase): def _getTargetClass(self): from supervisor.http import mainlogtail_handler return mainlogtail_handler def test_handle_request_stdout_logfile_none(self): supervisor = DummySupervisor() handler = self._makeOne(supervisor) request = DummyRequest('/mainlogtail', None, None, None) handler.handle_request(request) self.assertEqual(request._error, 404) def test_handle_request_stdout_logfile_missing(self): supervisor = DummySupervisor() supervisor.options.logfile = '/not/there' request = DummyRequest('/mainlogtail', None, None, None) handler = self._makeOne(supervisor) handler.handle_request(request) self.assertEqual(request._error, 404) def test_handle_request(self): supervisor = DummySupervisor() with 
tempfile.NamedTemporaryFile() as f: t = f.name supervisor.options.logfile = t handler = self._makeOne(supervisor) request = DummyRequest('/mainlogtail', None, None, None) handler.handle_request(request) self.assertEqual(request._error, None) from supervisor.medusa import http_date self.assertEqual(request.headers['Last-Modified'], http_date.build_http_date(os.stat(t)[stat.ST_MTIME])) self.assertEqual(request.headers['Content-Type'], 'text/plain;charset=utf-8') self.assertEqual(len(request.producers), 1) self.assertEqual(request._done, True) class TailFProducerTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http import tail_f_producer return tail_f_producer def _makeOne(self, request, filename, head): return self._getTargetClass()(request, filename, head) def test_handle_more(self): request = DummyRequest('/logtail/foo', None, None, None) from supervisor import http f = tempfile.NamedTemporaryFile() f.write(b'a' * 80) f.flush() producer = self._makeOne(request, f.name, 80) result = producer.more() self.assertEqual(result, b'a' * 80) f.write(as_bytes(b'w' * 100)) f.flush() result = producer.more() self.assertEqual(result, b'w' * 100) result = producer.more() self.assertEqual(result, http.NOT_DONE_YET) f.truncate(0) f.flush() result = producer.more() self.assertEqual(result, '==> File truncated <==\n') def test_handle_more_fd_closed(self): request = DummyRequest('/logtail/foo', None, None, None) with tempfile.NamedTemporaryFile() as f: f.write(as_bytes('a' * 80)) f.flush() producer = self._makeOne(request, f.name, 80) producer.file.close() result = producer.more() self.assertEqual(result, producer.more()) def test_handle_more_follow_file_recreated(self): request = DummyRequest('/logtail/foo', None, None, None) f = tempfile.NamedTemporaryFile() f.write(as_bytes('a' * 80)) f.flush() producer = self._makeOne(request, f.name, 80) result = producer.more() self.assertEqual(result, b'a' * 80) f.close() f2 = open(f.name, 'wb') try: f2.write(as_bytes(b'b' 
* 80)) f2.close() result = producer.more() finally: os.unlink(f2.name) self.assertEqual(result, b'b' * 80) def test_handle_more_follow_file_gone(self): request = DummyRequest('/logtail/foo', None, None, None) with tempfile.NamedTemporaryFile(delete=False) as f: filename = f.name f.write(b'a' * 80) try: producer = self._makeOne(request, f.name, 80) finally: os.unlink(f.name) result = producer.more() self.assertEqual(result, b'a' * 80) with open(filename, 'wb') as f: f.write(as_bytes(b'b' * 80)) try: result = producer.more() # should open in new file self.assertEqual(result, b'b' * 80) finally: os.unlink(f.name) class DeferringChunkedProducerTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http import deferring_chunked_producer return deferring_chunked_producer def _makeOne(self, producer, footers=None): return self._getTargetClass()(producer, footers) def test_more_not_done_yet(self): wrapped = DummyProducer(NOT_DONE_YET) producer = self._makeOne(wrapped) self.assertEqual(producer.more(), NOT_DONE_YET) def test_more_string(self): wrapped = DummyProducer(b'hello') producer = self._makeOne(wrapped) self.assertEqual(producer.more(), b'5\r\nhello\r\n') def test_more_nodata(self): wrapped = DummyProducer() producer = self._makeOne(wrapped, footers=[b'a', b'b']) self.assertEqual(producer.more(), b'0\r\na\r\nb\r\n\r\n') def test_more_nodata_footers(self): wrapped = DummyProducer(b'') producer = self._makeOne(wrapped, footers=[b'a', b'b']) self.assertEqual(producer.more(), b'0\r\na\r\nb\r\n\r\n') def test_more_nodata_nofooters(self): wrapped = DummyProducer(b'') producer = self._makeOne(wrapped) self.assertEqual(producer.more(), b'0\r\n\r\n') def test_more_noproducer(self): producer = self._makeOne(None) self.assertEqual(producer.more(), b'') class DeferringCompositeProducerTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http import deferring_composite_producer return deferring_composite_producer def _makeOne(self, producers): return 
self._getTargetClass()(producers) def test_more_not_done_yet(self): wrapped = DummyProducer(NOT_DONE_YET) producer = self._makeOne([wrapped]) self.assertEqual(producer.more(), NOT_DONE_YET) def test_more_string(self): wrapped1 = DummyProducer('hello') wrapped2 = DummyProducer('goodbye') producer = self._makeOne([wrapped1, wrapped2]) self.assertEqual(producer.more(), 'hello') self.assertEqual(producer.more(), 'goodbye') self.assertEqual(producer.more(), b'') def test_more_nodata(self): wrapped = DummyProducer() producer = self._makeOne([wrapped]) self.assertEqual(producer.more(), b'') class DeferringGlobbingProducerTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http import deferring_globbing_producer return deferring_globbing_producer def _makeOne(self, producer, buffer_size=1<<16): return self._getTargetClass()(producer, buffer_size) def test_more_not_done_yet(self): wrapped = DummyProducer(NOT_DONE_YET) producer = self._makeOne(wrapped) self.assertEqual(producer.more(), NOT_DONE_YET) def test_more_string(self): wrapped = DummyProducer('hello', 'there', 'guy') producer = self._makeOne(wrapped, buffer_size=1) self.assertEqual(producer.more(), b'hello') wrapped = DummyProducer('hello', 'there', 'guy') producer = self._makeOne(wrapped, buffer_size=50) self.assertEqual(producer.more(), b'hellothereguy') def test_more_nodata(self): wrapped = DummyProducer() producer = self._makeOne(wrapped) self.assertEqual(producer.more(), b'') class DeferringHookedProducerTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http import deferring_hooked_producer return deferring_hooked_producer def _makeOne(self, producer, function): return self._getTargetClass()(producer, function) def test_more_not_done_yet(self): wrapped = DummyProducer(NOT_DONE_YET) producer = self._makeOne(wrapped, None) self.assertEqual(producer.more(), NOT_DONE_YET) def test_more_string(self): wrapped = DummyProducer('hello') L = [] def callback(bytes): L.append(bytes) 
producer = self._makeOne(wrapped, callback) self.assertEqual(producer.more(), 'hello') self.assertEqual(L, []) producer.more() self.assertEqual(L, [5]) def test_more_nodata(self): wrapped = DummyProducer() L = [] def callback(bytes): L.append(bytes) producer = self._makeOne(wrapped, callback) self.assertEqual(producer.more(), b'') self.assertEqual(L, [0]) def test_more_noproducer(self): producer = self._makeOne(None, None) self.assertEqual(producer.more(), b'') class DeferringHttpRequestTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http import deferring_http_request return deferring_http_request def _makeOne( self, channel=None, req='GET / HTTP/1.0', command='GET', uri='/', version='1.0', header=(), ): return self._getTargetClass()( channel, req, command, uri, version, header ) def _makeChannel(self): class Channel: closed = False def close_when_done(self): self.closed = True def push_with_producer(self, producer): self.producer = producer return Channel() def test_done_http_10_nokeepalive(self): channel = self._makeChannel() inst = self._makeOne(channel=channel, version='1.0') inst.done() self.assertTrue(channel.closed) def test_done_http_10_keepalive_no_content_length(self): channel = self._makeChannel() inst = self._makeOne( channel=channel, version='1.0', header=['Connection: Keep-Alive'], ) inst.done() self.assertTrue(channel.closed) def test_done_http_10_keepalive_and_content_length(self): channel = self._makeChannel() inst = self._makeOne( channel=channel, version='1.0', header=['Connection: Keep-Alive'], ) inst.reply_headers['Content-Length'] = 1 inst.done() self.assertEqual(inst['Connection'], 'Keep-Alive') self.assertFalse(channel.closed) def test_done_http_11_connection_close(self): channel = self._makeChannel() inst = self._makeOne( channel=channel, version='1.1', header=['Connection: close'] ) inst.done() self.assertTrue(channel.closed) def test_done_http_11_unknown_transfer_encoding(self): channel = self._makeChannel() inst = 
self._makeOne( channel=channel, version='1.1', ) inst.reply_headers['Transfer-Encoding'] = 'notchunked' inst.done() self.assertTrue(channel.closed) def test_done_http_11_chunked_transfer_encoding(self): channel = self._makeChannel() inst = self._makeOne( channel=channel, version='1.1', ) inst.reply_headers['Transfer-Encoding'] = 'chunked' inst.done() self.assertFalse(channel.closed) def test_done_http_11_use_chunked(self): channel = self._makeChannel() inst = self._makeOne( channel=channel, version='1.1', ) inst.use_chunked = True inst.done() self.assertTrue('Transfer-Encoding' in inst) self.assertFalse(channel.closed) def test_done_http_11_wo_content_length_no_te_no_use_chunked_close(self): channel = self._makeChannel() inst = self._makeOne( channel=channel, version='1.1', ) inst.use_chunked = False inst.done() self.assertTrue(channel.closed) def test_done_http_09(self): channel = self._makeChannel() inst = self._makeOne( channel=channel, version=None, ) inst.done() self.assertTrue(channel.closed) class DeferringHttpChannelTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http import deferring_http_channel return deferring_http_channel def _makeOne(self): return self._getTargetClass()( server=None, conn=None, addr=None ) def test_defaults_delay_and_last_writable_check_time(self): channel = self._makeOne() self.assertEqual(channel.delay, 0) self.assertEqual(channel.last_writable_check, 0) def test_writable_with_delay_is_False_if_elapsed_lt_delay(self): channel = self._makeOne() channel.delay = 2 channel.last_writable_check = _NOW later = _NOW + 1 self.assertFalse(channel.writable(now=later)) self.assertEqual(channel.last_writable_check, _NOW) def test_writable_with_delay_is_False_if_elapsed_eq_delay(self): channel = self._makeOne() channel.delay = 2 channel.last_writable_check = _NOW later = _NOW + channel.delay self.assertFalse(channel.writable(now=later)) self.assertEqual(channel.last_writable_check, _NOW) def 
test_writable_with_delay_is_True_if_elapsed_gt_delay(self): channel = self._makeOne() channel.delay = 2 channel.last_writable_check = _NOW later = _NOW + channel.delay + 0.1 self.assertTrue(channel.writable(now=later)) self.assertEqual(channel.last_writable_check, later) def test_writable_with_delay_is_True_if_system_time_goes_backwards(self): channel = self._makeOne() channel.delay = 2 channel.last_writable_check = _NOW later = _NOW - 3600 # last check was in the future self.assertTrue(channel.writable(now=later)) self.assertEqual(channel.last_writable_check, later) _NOW = 1470085990 class EncryptedDictionaryAuthorizedTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http import encrypted_dictionary_authorizer return encrypted_dictionary_authorizer def _makeOne(self, dict): return self._getTargetClass()(dict) def test_authorize_baduser(self): authorizer = self._makeOne({}) self.assertFalse(authorizer.authorize(('foo', 'bar'))) def test_authorize_gooduser_badpassword(self): authorizer = self._makeOne({'foo':'password'}) self.assertFalse(authorizer.authorize(('foo', 'bar'))) def test_authorize_gooduser_goodpassword(self): authorizer = self._makeOne({'foo':'password'}) self.assertTrue(authorizer.authorize(('foo', 'password'))) def test_authorize_gooduser_goodpassword_with_colon(self): authorizer = self._makeOne({'foo':'pass:word'}) self.assertTrue(authorizer.authorize(('foo', 'pass:word'))) def test_authorize_gooduser_badpassword_sha(self): password = '{SHA}' + sha1(as_bytes('password')).hexdigest() authorizer = self._makeOne({'foo':password}) self.assertFalse(authorizer.authorize(('foo', 'bar'))) def test_authorize_gooduser_goodpassword_sha(self): password = '{SHA}' + sha1(as_bytes('password')).hexdigest() authorizer = self._makeOne({'foo':password}) self.assertTrue(authorizer.authorize(('foo', 'password'))) class SupervisorAuthHandlerTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http import supervisor_auth_handler return 
supervisor_auth_handler def _makeOne(self, dict, handler): return self._getTargetClass()(dict, handler) def test_ctor(self): handler = self._makeOne({'a':1}, None) from supervisor.http import encrypted_dictionary_authorizer self.assertEqual(handler.authorizer.__class__, encrypted_dictionary_authorizer) def test_handle_request_authorizes_good_credentials(self): request = DummyRequest('/logtail/process1', None, None, None) encoded = base64.b64encode(as_bytes("user:password")) request.header = ["Authorization: Basic %s" % as_string(encoded)] handler = DummyHandler() auth_handler = self._makeOne({'user':'password'}, handler) auth_handler.handle_request(request) self.assertTrue(handler.handled_request) def test_handle_request_authorizes_good_password_with_colon(self): request = DummyRequest('/logtail/process1', None, None, None) # password contains colon encoded = base64.b64encode(as_bytes("user:pass:word")) request.header = ["Authorization: Basic %s" % as_string(encoded)] handler = DummyHandler() auth_handler = self._makeOne({'user':'pass:word'}, handler) auth_handler.handle_request(request) self.assertTrue(handler.handled_request) def test_handle_request_does_not_authorize_bad_credentials(self): request = DummyRequest('/logtail/process1', None, None, None) encoded = base64.b64encode(as_bytes("wrong:wrong")) request.header = ["Authorization: Basic %s" % as_string(encoded)] handler = DummyHandler() auth_handler = self._makeOne({'user':'password'}, handler) auth_handler.handle_request(request) self.assertFalse(handler.handled_request) class LogWrapperTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http import LogWrapper return LogWrapper def _makeOne(self, logger): return self._getTargetClass()(logger) def test_strips_trailing_newlines_from_msgs(self): logger = DummyLogger() log_wrapper = self._makeOne(logger) log_wrapper.log("foo\n") logdata = logger.data self.assertEqual(len(logdata), 1) self.assertEqual(logdata[0], "foo") def 
test_logs_msgs_with_error_at_error_level(self): logger = DummyLogger() log_wrapper = self._makeOne(logger) errors = [] logger.error = errors.append log_wrapper.log("Server Error") self.assertEqual(len(errors), 1) self.assertEqual(errors[0], "Server Error") def test_logs_other_messages_at_trace_level(self): logger = DummyLogger() log_wrapper = self._makeOne(logger) traces = [] logger.trace = traces.append log_wrapper.log("GET /") self.assertEqual(len(traces), 1) self.assertEqual(traces[0], "GET /") class TopLevelFunctionTests(unittest.TestCase): def _make_http_servers(self, sconfigs): options = DummyOptions() options.server_configs = sconfigs options.rpcinterface_factories = [('dummy',DummyRPCInterfaceFactory,{})] supervisord = DummySupervisor() from supervisor.http import make_http_servers servers = make_http_servers(options, supervisord) try: for config, s in servers: s.close() socketfile = config.get('file') if socketfile is not None: os.unlink(socketfile) finally: from supervisor.medusa.asyncore_25 import socket_map socket_map.clear() return servers def test_make_http_servers_socket_type_error(self): config = {'family':999, 'host':'localhost', 'port':17735, 'username':None, 'password':None, 'section':'inet_http_server'} try: self._make_http_servers([config]) self.fail('nothing raised') except ValueError as exc: self.assertEqual(exc.args[0], 'Cannot determine socket type 999') def test_make_http_servers_noauth(self): with tempfile.NamedTemporaryFile(delete=True) as f: socketfile = f.name self.assertFalse(os.path.exists(socketfile)) inet = {'family':socket.AF_INET, 'host':'localhost', 'port':17735, 'username':None, 'password':None, 'section':'inet_http_server'} unix = {'family':socket.AF_UNIX, 'file':socketfile, 'chmod':0o700, 'chown':(-1, -1), 'username':None, 'password':None, 'section':'unix_http_server'} servers = self._make_http_servers([inet, unix]) self.assertEqual(len(servers), 2) inetdata = servers[0] self.assertEqual(inetdata[0], inet) server = 
inetdata[1]
        idents = ['Supervisor XML-RPC Handler',
                  'Logtail HTTP Request Handler',
                  'Main Logtail HTTP Request Handler',
                  'Supervisor Web UI HTTP Request Handler',
                  'Default HTTP Request Handler']
        self.assertEqual([x.IDENT for x in server.handlers], idents)
        unixdata = servers[1]
        self.assertEqual(unixdata[0], unix)
        server = unixdata[1]
        self.assertEqual([x.IDENT for x in server.handlers], idents)

    def test_make_http_servers_withauth(self):
        with tempfile.NamedTemporaryFile(delete=True) as f:
            socketfile = f.name
        self.assertFalse(os.path.exists(socketfile))
        inet = {'family':socket.AF_INET, 'host':'localhost', 'port':17736,
                'username':'username', 'password':'password',
                'section':'inet_http_server'}
        unix = {'family':socket.AF_UNIX, 'file':socketfile, 'chmod':0o700,
                'chown':(-1, -1), 'username':'username',
                'password':'password', 'section':'unix_http_server'}
        servers = self._make_http_servers([inet, unix])
        self.assertEqual(len(servers), 2)
        from supervisor.http import supervisor_auth_handler
        for config, server in servers:
            for handler in server.handlers:
                self.assertTrue(isinstance(handler, supervisor_auth_handler),
                                handler)

class DummyHandler:
    def __init__(self):
        self.handled_request = False

    def handle_request(self, request):
        self.handled_request = True

class DummyProducer:
    def __init__(self, *data):
        self.data = list(data)

    def more(self):
        if self.data:
            return self.data.pop(0)
        else:
            return b''

def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

# supervisor-4.2.5/supervisor/tests/test_http_client.py
import socket
import sys
import unittest

from supervisor.compat import as_bytes
from supervisor.compat import StringIO

class ListenerTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.http_client import Listener
        return
Listener def _makeOne(self): return self._getTargetClass()() def test_status(self): inst = self._makeOne() self.assertEqual(inst.status(None, None), None) def test_error(self): inst = self._makeOne() try: old_stderr = sys.stderr stderr = StringIO() sys.stderr = stderr self.assertEqual(inst.error('url', 'error'), None) self.assertEqual(stderr.getvalue(), 'url error\n') finally: sys.stderr = old_stderr def test_response_header(self): inst = self._makeOne() self.assertEqual(inst.response_header(None, None, None), None) def test_done(self): inst = self._makeOne() self.assertEqual(inst.done(None), None) def test_feed(self): inst = self._makeOne() try: old_stdout = sys.stdout stdout = StringIO() sys.stdout = stdout inst.feed('url', 'data') self.assertEqual(stdout.getvalue(), 'data') finally: sys.stdout = old_stdout def test_close(self): inst = self._makeOne() self.assertEqual(inst.close(None), None) class HTTPHandlerTests(unittest.TestCase): def _getTargetClass(self): from supervisor.http_client import HTTPHandler return HTTPHandler def _makeOne(self, listener=None, username='', password=None): if listener is None: listener = self._makeListener() socket_map = {} return self._getTargetClass()( listener, username, password, map=socket_map, ) def _makeListener(self): listener = DummyListener() return listener def test_get_url_not_None(self): inst = self._makeOne() inst.url = 'abc' self.assertRaises(AssertionError, inst.get, 'abc') def test_get_bad_scheme(self): inst = self._makeOne() self.assertRaises( NotImplementedError, inst.get, 'nothttp://localhost', '/abc' ) def test_get_implied_port_80(self): inst = self._makeOne() sockets = [] connects = [] inst.create_socket = lambda *arg: sockets.append(arg) inst.connect = lambda tup: connects.append(tup) inst.get('http://localhost', '/abc/def') self.assertEqual(inst.port, 80) self.assertEqual(sockets, [(socket.AF_INET, socket.SOCK_STREAM)]) self.assertEqual(connects, [('localhost', 80)]) def test_get_explicit_port(self): inst = 
self._makeOne() sockets = [] connects = [] inst.create_socket = lambda *arg: sockets.append(arg) inst.connect = lambda tup: connects.append(tup) inst.get('http://localhost:8080', '/abc/def') self.assertEqual(inst.port, 8080) self.assertEqual(sockets, [(socket.AF_INET, socket.SOCK_STREAM)]) self.assertEqual(connects, [('localhost', 8080)]) def test_get_explicit_unix_domain_socket(self): inst = self._makeOne() sockets = [] connects = [] inst.create_socket = lambda *arg: sockets.append(arg) inst.connect = lambda tup: connects.append(tup) inst.get('unix:///a/b/c', '') self.assertEqual(sockets, [(socket.AF_UNIX, socket.SOCK_STREAM)]) self.assertEqual(connects, ['/a/b/c']) def test_close(self): inst = self._makeOne() dels = [] inst.del_channel = lambda: dels.append(True) inst.socket = DummySocket() inst.close() self.assertEqual(inst.listener.closed, None) self.assertEqual(inst.connected, 0) self.assertEqual(dels, [True]) self.assertTrue(inst.socket.closed) self.assertEqual(inst.url, 'CLOSED') def test_header(self): from supervisor.http_client import CRLF inst = self._makeOne() pushes = [] inst.push = lambda val: pushes.append(val) inst.header('name', 'val') self.assertEqual(pushes, ['name: val', CRLF]) def test_handle_error_already_handled(self): inst = self._makeOne() inst.error_handled = True self.assertEqual(inst.handle_error(), None) def test_handle_error(self): inst = self._makeOne() closed = [] inst.close = lambda: closed.append(True) inst.url = 'foo' self.assertEqual(inst.handle_error(), None) self.assertEqual(inst.listener.error_url, 'foo') self.assertEqual( inst.listener.error_msg, 'Cannot connect, error: None (None)', ) self.assertEqual(closed, [True]) self.assertTrue(inst.error_handled) def test_handle_connect_no_password(self): inst = self._makeOne() pushed = [] inst.push = lambda val: pushed.append(as_bytes(val)) inst.path = '/' inst.host = 'localhost' inst.handle_connect() self.assertTrue(inst.connected) self.assertEqual( pushed, [b'GET / HTTP/1.1', 
b'\r\n', b'Host: localhost', b'\r\n', b'Accept-Encoding: chunked', b'\r\n', b'Accept: */*', b'\r\n', b'User-agent: Supervisor HTTP Client', b'\r\n', b'\r\n', b'\r\n'] ) def test_handle_connect_with_password(self): inst = self._makeOne() pushed = [] inst.push = lambda val: pushed.append(as_bytes(val)) inst.path = '/' inst.host = 'localhost' inst.password = 'password' inst.username = 'username' inst.handle_connect() self.assertTrue(inst.connected) self.assertEqual( pushed, [b'GET / HTTP/1.1', b'\r\n', b'Host: localhost', b'\r\n', b'Accept-Encoding: chunked', b'\r\n', b'Accept: */*', b'\r\n', b'User-agent: Supervisor HTTP Client', b'\r\n', b'Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=', b'\r\n', b'\r\n', b'\r\n'], ) def test_feed(self): inst = self._makeOne() inst.feed('data') self.assertEqual(inst.listener.fed_data, ['data']) def test_collect_incoming_data_part_is_body(self): inst = self._makeOne() inst.part = inst.body inst.buffer = 'abc' inst.collect_incoming_data('foo') self.assertEqual(inst.listener.fed_data, ['abcfoo']) self.assertEqual(inst.buffer, b'') def test_collect_incoming_data_part_is_not_body(self): inst = self._makeOne() inst.part = None inst.buffer = 'abc' inst.collect_incoming_data('foo') self.assertEqual(inst.listener.fed_data, []) self.assertEqual(inst.buffer, 'abcfoo') def test_found_terminator(self): inst = self._makeOne() parted = [] inst.part = lambda: parted.append(True) inst.buffer = None inst.found_terminator() self.assertEqual(parted, [True]) self.assertEqual(inst.buffer, b'') def test_ignore(self): inst = self._makeOne() inst.buffer = None inst.ignore() self.assertEqual(inst.buffer, b'') def test_status_line_not_startswith_http(self): inst = self._makeOne() inst.buffer = b'NOTHTTP/1.0 200 OK' self.assertRaises(ValueError, inst.status_line) def test_status_line_200(self): inst = self._makeOne() inst.buffer = b'HTTP/1.0 200 OK' version, status, reason = inst.status_line() self.assertEqual(version, b'HTTP/1.0') self.assertEqual(status, 200) 
self.assertEqual(reason, b'OK') self.assertEqual(inst.part, inst.headers) def test_status_line_not_200(self): inst = self._makeOne() inst.buffer = b'HTTP/1.0 201 OK' closed = [] inst.close = lambda: closed.append(True) version, status, reason = inst.status_line() self.assertEqual(version, b'HTTP/1.0') self.assertEqual(status, 201) self.assertEqual(reason, b'OK') self.assertEqual(inst.part, inst.ignore) self.assertEqual( inst.listener.error_msg, 'Cannot read, status code 201' ) self.assertEqual(closed, [True]) def test_headers_empty_line_nonchunked(self): inst = self._makeOne() inst.buffer = b'' inst.encoding = b'not chunked' inst.length = 3 terms = [] inst.set_terminator = lambda L: terms.append(L) inst.headers() self.assertEqual(inst.part, inst.body) self.assertEqual(terms, [3]) def test_headers_empty_line_chunked(self): inst = self._makeOne() inst.buffer = b'' inst.encoding = b'chunked' inst.headers() self.assertEqual(inst.part, inst.chunked_size) def test_headers_nonempty_line_no_name_no_value(self): inst = self._makeOne() inst.buffer = b':' self.assertEqual(inst.headers(), None) def test_headers_nonempty_line_transfer_encoding(self): inst = self._makeOne() inst.buffer = b'Transfer-Encoding: chunked' responses = [] inst.response_header = lambda n, v: responses.append((n, v)) inst.headers() self.assertEqual(inst.encoding, b'chunked') self.assertEqual(responses, [(b'transfer-encoding', b'chunked')]) def test_headers_nonempty_line_content_length(self): inst = self._makeOne() inst.buffer = b'Content-Length: 3' responses = [] inst.response_header = lambda n, v: responses.append((n, v)) inst.headers() self.assertEqual(inst.length, 3) self.assertEqual(responses, [(b'content-length', b'3')]) def test_headers_nonempty_line_arbitrary(self): inst = self._makeOne() inst.buffer = b'X-Test: abc' responses = [] inst.response_header = lambda n, v: responses.append((n, v)) inst.headers() self.assertEqual(responses, [(b'x-test', b'abc')]) def test_response_header(self): inst = 
self._makeOne() inst.response_header(b'a', b'b') self.assertEqual(inst.listener.response_header_name, b'a') self.assertEqual(inst.listener.response_header_value, b'b') def test_body(self): inst = self._makeOne() closed = [] inst.close = lambda: closed.append(True) inst.body() self.assertEqual(closed, [True]) self.assertTrue(inst.listener.done) def test_done(self): inst = self._makeOne() inst.done() self.assertTrue(inst.listener.done) def test_chunked_size_empty_line(self): inst = self._makeOne() inst.buffer = b'' inst.length = 1 self.assertEqual(inst.chunked_size(), None) self.assertEqual(inst.length, 1) def test_chunked_size_zero_size(self): inst = self._makeOne() inst.buffer = b'0' inst.length = 1 self.assertEqual(inst.chunked_size(), None) self.assertEqual(inst.length, 1) self.assertEqual(inst.part, inst.trailer) def test_chunked_size_nonzero_size(self): inst = self._makeOne() inst.buffer = b'10' inst.length = 1 terms = [] inst.set_terminator = lambda sz: terms.append(sz) self.assertEqual(inst.chunked_size(), None) self.assertEqual(inst.part, inst.chunked_body) self.assertEqual(inst.length, 17) self.assertEqual(terms, [16]) def test_chunked_body(self): from supervisor.http_client import CRLF inst = self._makeOne() inst.buffer = b'buffer' terms = [] lines = [] inst.set_terminator = lambda v: terms.append(v) inst.feed = lambda v: lines.append(v) inst.chunked_body() self.assertEqual(terms, [CRLF]) self.assertEqual(lines, [b'buffer']) self.assertEqual(inst.part, inst.chunked_size) def test_trailer_line_not_crlf(self): inst = self._makeOne() inst.buffer = b'' self.assertEqual(inst.trailer(), None) def test_trailer_line_crlf(self): from supervisor.http_client import CRLF inst = self._makeOne() inst.buffer = CRLF dones = [] closes = [] inst.done = lambda: dones.append(True) inst.close = lambda: closes.append(True) self.assertEqual(inst.trailer(), None) self.assertEqual(dones, [True]) self.assertEqual(closes, [True]) class DummyListener(object): closed = None error_url 
= None
    error_msg = None
    done = False

    def __init__(self):
        self.fed_data = []

    def close(self, url):
        self.closed = url

    def error(self, url, msg):
        self.error_url = url
        self.error_msg = msg

    def feed(self, url, data):
        self.fed_data.append(data)

    def status(self, url, int):
        self.status_url = url
        self.status_int = int

    def response_header(self, url, name, value):
        self.response_header_name = name
        self.response_header_value = value

    def done(self, url):
        self.done = True

class DummySocket(object):
    closed = False

    def close(self):
        self.closed = True

# supervisor-4.2.5/supervisor/tests/test_loggers.py
# -*- coding: utf-8 -*-
import errno
import sys
import unittest
import tempfile
import shutil
import os
import syslog

from supervisor.compat import PY2
from supervisor.compat import as_string
from supervisor.compat import StringIO
from supervisor.compat import unicode
from supervisor.tests.base import mock
from supervisor.tests.base import DummyStream

class LevelTests(unittest.TestCase):
    def test_LOG_LEVELS_BY_NUM_doesnt_include_builtins(self):
        from supervisor import loggers
        for level_name in loggers.LOG_LEVELS_BY_NUM.values():
            self.assertFalse(level_name.startswith('_'))

class HandlerTests:
    def setUp(self):
        self.basedir = tempfile.mkdtemp()
        self.filename = os.path.join(self.basedir, 'thelog')

    def tearDown(self):
        try:
            shutil.rmtree(self.basedir)
        except OSError:
            pass

    def _makeOne(self, *arg, **kw):
        klass = self._getTargetClass()
        return klass(*arg, **kw)

    def _makeLogRecord(self, msg):
        from supervisor import loggers
        record = loggers.LogRecord(
            level=loggers.LevelsByName.INFO,
            msg=msg,
            exc_info=None
        )
        return record

class BareHandlerTests(HandlerTests, unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.loggers import Handler
        return Handler

    def test_flush_stream_flush_raises_IOError_EPIPE(self):
        stream =
DummyStream(error=IOError(errno.EPIPE)) inst = self._makeOne(stream=stream) self.assertEqual(inst.flush(), None) # does not raise def test_flush_stream_flush_raises_IOError_not_EPIPE(self): stream = DummyStream(error=IOError(errno.EALREADY)) inst = self._makeOne(stream=stream) self.assertRaises(IOError, inst.flush) # non-EPIPE IOError raises def test_close_already_closed(self): stream = DummyStream() inst = self._makeOne(stream=stream) inst.closed = True self.assertEqual(inst.close(), None) def test_close_stream_fileno_above_3(self): stream = DummyStream(fileno=50) inst = self._makeOne(stream=stream) self.assertEqual(inst.close(), None) self.assertTrue(inst.closed) self.assertTrue(inst.stream.closed) def test_close_stream_fileno_below_3(self): stream = DummyStream(fileno=0) inst = self._makeOne(stream=stream) self.assertEqual(inst.close(), None) self.assertFalse(inst.closed) self.assertFalse(inst.stream.closed) def test_close_stream_handles_fileno_unsupported_operation(self): # on python 2, StringIO does not have fileno() # on python 3, StringIO has fileno() but calling it raises stream = StringIO() inst = self._makeOne(stream=stream) inst.close() # shouldn't raise self.assertTrue(inst.closed) def test_close_stream_handles_fileno_ioerror(self): stream = DummyStream() def raise_ioerror(): raise IOError() stream.fileno = raise_ioerror inst = self._makeOne(stream=stream) inst.close() # shouldn't raise self.assertTrue(inst.closed) def test_emit_gardenpath(self): stream = DummyStream() inst = self._makeOne(stream=stream) record = self._makeLogRecord(b'foo') inst.emit(record) self.assertEqual(stream.flushed, True) self.assertEqual(stream.written, b'foo') def test_emit_unicode_error(self): stream = DummyStream(error=UnicodeError) inst = self._makeOne(stream=stream) record = self._makeLogRecord(b'foo') inst.emit(record) self.assertEqual(stream.flushed, True) self.assertEqual(stream.written, b'foo') def test_emit_other_error(self): stream = DummyStream(error=ValueError) 
inst = self._makeOne(stream=stream) handled = [] inst.handleError = lambda: handled.append(True) record = self._makeLogRecord(b'foo') inst.emit(record) self.assertEqual(stream.flushed, False) self.assertEqual(stream.written, b'') class FileHandlerTests(HandlerTests, unittest.TestCase): def _getTargetClass(self): from supervisor.loggers import FileHandler return FileHandler def test_ctor(self): handler = self._makeOne(self.filename) self.assertTrue(os.path.exists(self.filename), self.filename) self.assertEqual(handler.mode, 'ab') self.assertEqual(handler.baseFilename, self.filename) self.assertEqual(handler.stream.name, self.filename) handler.close() def test_close(self): handler = self._makeOne(self.filename) handler.stream.close() handler.stream = DummyStream() handler.close() self.assertEqual(handler.stream.closed, True) def test_close_raises(self): handler = self._makeOne(self.filename) handler.stream.close() handler.stream = DummyStream(OSError) self.assertRaises(OSError, handler.close) self.assertEqual(handler.stream.closed, False) def test_reopen(self): handler = self._makeOne(self.filename) handler.stream.close() stream = DummyStream() handler.stream = stream handler.reopen() self.assertEqual(stream.closed, True) self.assertEqual(handler.stream.name, self.filename) handler.close() def test_reopen_raises(self): handler = self._makeOne(self.filename) handler.stream.close() stream = DummyStream() handler.stream = stream handler.baseFilename = os.path.join(self.basedir, 'notthere', 'a.log') self.assertRaises(IOError, handler.reopen) self.assertEqual(stream.closed, True) def test_remove_exists(self): handler = self._makeOne(self.filename) self.assertTrue(os.path.exists(self.filename), self.filename) handler.remove() self.assertFalse(os.path.exists(self.filename), self.filename) def test_remove_doesntexist(self): handler = self._makeOne(self.filename) os.remove(self.filename) self.assertFalse(os.path.exists(self.filename), self.filename) handler.remove() # should 
not raise self.assertFalse(os.path.exists(self.filename), self.filename) def test_remove_raises(self): handler = self._makeOne(self.filename) os.remove(self.filename) os.mkdir(self.filename) self.assertTrue(os.path.exists(self.filename), self.filename) self.assertRaises(OSError, handler.remove) def test_emit_ascii_noerror(self): handler = self._makeOne(self.filename) record = self._makeLogRecord(b'hello!') handler.emit(record) handler.close() with open(self.filename, 'rb') as f: self.assertEqual(f.read(), b'hello!') def test_emit_unicode_noerror(self): handler = self._makeOne(self.filename) record = self._makeLogRecord(b'fi\xc3\xad') handler.emit(record) handler.close() with open(self.filename, 'rb') as f: self.assertEqual(f.read(), b'fi\xc3\xad') def test_emit_error(self): handler = self._makeOne(self.filename) handler.stream.close() handler.stream = DummyStream(error=OSError) record = self._makeLogRecord(b'hello!') try: old_stderr = sys.stderr dummy_stderr = DummyStream() sys.stderr = dummy_stderr handler.emit(record) finally: sys.stderr = old_stderr self.assertTrue(dummy_stderr.written.endswith(b'OSError\n'), dummy_stderr.written) if os.path.exists('/dev/stdout'): StdoutTestsBase = FileHandlerTests else: # Skip the stdout tests on platforms that don't have /dev/stdout. StdoutTestsBase = object class StdoutTests(StdoutTestsBase): def test_ctor_with_dev_stdout(self): handler = self._makeOne('/dev/stdout') # Modes 'w' and 'a' have the same semantics when applied to # character device files and fifos. 
self.assertTrue(handler.mode in ['wb', 'ab'], handler.mode) self.assertEqual(handler.baseFilename, '/dev/stdout') self.assertEqual(handler.stream.name, '/dev/stdout') handler.close() class RotatingFileHandlerTests(FileHandlerTests): def _getTargetClass(self): from supervisor.loggers import RotatingFileHandler return RotatingFileHandler def test_ctor(self): handler = self._makeOne(self.filename) self.assertEqual(handler.mode, 'ab') self.assertEqual(handler.maxBytes, 512*1024*1024) self.assertEqual(handler.backupCount, 10) handler.close() def test_emit_does_rollover(self): handler = self._makeOne(self.filename, maxBytes=10, backupCount=2) record = self._makeLogRecord(b'a' * 4) handler.emit(record) # 4 bytes self.assertFalse(os.path.exists(self.filename + '.1')) self.assertFalse(os.path.exists(self.filename + '.2')) handler.emit(record) # 8 bytes self.assertFalse(os.path.exists(self.filename + '.1')) self.assertFalse(os.path.exists(self.filename + '.2')) handler.emit(record) # 12 bytes, do rollover self.assertTrue(os.path.exists(self.filename + '.1')) self.assertFalse(os.path.exists(self.filename + '.2')) handler.emit(record) # 16 bytes self.assertTrue(os.path.exists(self.filename + '.1')) self.assertFalse(os.path.exists(self.filename + '.2')) handler.emit(record) # 20 bytes self.assertTrue(os.path.exists(self.filename + '.1')) self.assertFalse(os.path.exists(self.filename + '.2')) handler.emit(record) # 24 bytes, do rollover self.assertTrue(os.path.exists(self.filename + '.1')) self.assertTrue(os.path.exists(self.filename + '.2')) handler.emit(record) # 28 bytes handler.close() self.assertTrue(os.path.exists(self.filename + '.1')) self.assertTrue(os.path.exists(self.filename + '.2')) with open(self.filename, 'rb') as f: self.assertEqual(f.read(), b'a' * 4) with open(self.filename+'.1', 'rb') as f: self.assertEqual(f.read(), b'a' * 12) with open(self.filename+'.2', 'rb') as f: self.assertEqual(f.read(), b'a' * 12) def test_current_logfile_removed(self): handler = 
self._makeOne(self.filename, maxBytes=6, backupCount=1) record = self._makeLogRecord(b'a' * 4) handler.emit(record) # 4 bytes self.assertTrue(os.path.exists(self.filename)) self.assertFalse(os.path.exists(self.filename + '.1')) # Someone removes the active log file! :-( os.unlink(self.filename) self.assertFalse(os.path.exists(self.filename)) handler.emit(record) # 8 bytes, do rollover handler.close() self.assertTrue(os.path.exists(self.filename)) self.assertFalse(os.path.exists(self.filename + '.1')) def test_removeAndRename_destination_does_not_exist(self): inst = self._makeOne(self.filename) renames = [] removes = [] inst._remove = lambda v: removes.append(v) inst._exists = lambda v: False inst._rename = lambda s, t: renames.append((s, t)) inst.removeAndRename('foo', 'bar') self.assertEqual(renames, [('foo', 'bar')]) self.assertEqual(removes, []) inst.close() def test_removeAndRename_destination_exists(self): inst = self._makeOne(self.filename) renames = [] removes = [] inst._remove = lambda v: removes.append(v) inst._exists = lambda v: True inst._rename = lambda s, t: renames.append((s, t)) inst.removeAndRename('foo', 'bar') self.assertEqual(renames, [('foo', 'bar')]) self.assertEqual(removes, ['bar']) inst.close() def test_removeAndRename_remove_raises_ENOENT(self): def remove(fn): raise OSError(errno.ENOENT) inst = self._makeOne(self.filename) renames = [] inst._remove = remove inst._exists = lambda v: True inst._rename = lambda s, t: renames.append((s, t)) inst.removeAndRename('foo', 'bar') self.assertEqual(renames, [('foo', 'bar')]) inst.close() def test_removeAndRename_remove_raises_other_than_ENOENT(self): def remove(fn): raise OSError(errno.EAGAIN) inst = self._makeOne(self.filename) inst._remove = remove inst._exists = lambda v: True self.assertRaises(OSError, inst.removeAndRename, 'foo', 'bar') inst.close() def test_removeAndRename_rename_raises_ENOENT(self): def rename(s, d): raise OSError(errno.ENOENT) inst = self._makeOne(self.filename) inst._rename 
= rename inst._exists = lambda v: False self.assertEqual(inst.removeAndRename('foo', 'bar'), None) inst.close() def test_removeAndRename_rename_raises_other_than_ENOENT(self): def rename(s, d): raise OSError(errno.EAGAIN) inst = self._makeOne(self.filename) inst._rename = rename inst._exists = lambda v: False self.assertRaises(OSError, inst.removeAndRename, 'foo', 'bar') inst.close() def test_doRollover_maxbytes_lte_zero(self): inst = self._makeOne(self.filename) inst.maxBytes = 0 self.assertEqual(inst.doRollover(), None) inst.close() class BoundIOTests(unittest.TestCase): def _getTargetClass(self): from supervisor.loggers import BoundIO return BoundIO def _makeOne(self, maxbytes, buf=''): klass = self._getTargetClass() return klass(maxbytes, buf) def test_write_overflow(self): io = self._makeOne(1, b'a') io.write(b'b') self.assertEqual(io.buf, b'b') def test_getvalue(self): io = self._makeOne(1, b'a') self.assertEqual(io.getvalue(), b'a') def test_clear(self): io = self._makeOne(1, b'a') io.clear() self.assertEqual(io.buf, b'') def test_close(self): io = self._makeOne(1, b'a') io.close() self.assertEqual(io.buf, b'') class LoggerTests(unittest.TestCase): def _getTargetClass(self): from supervisor.loggers import Logger return Logger def _makeOne(self, level=None, handlers=None): klass = self._getTargetClass() return klass(level, handlers) def test_blather(self): from supervisor.loggers import LevelsByName handler = DummyHandler(LevelsByName.BLAT) logger = self._makeOne(LevelsByName.BLAT, (handler,)) logger.blather('hello') self.assertEqual(len(handler.records), 1) logger.level = LevelsByName.TRAC logger.blather('hello') self.assertEqual(len(handler.records), 1) def test_trace(self): from supervisor.loggers import LevelsByName handler = DummyHandler(LevelsByName.TRAC) logger = self._makeOne(LevelsByName.TRAC, (handler,)) logger.trace('hello') self.assertEqual(len(handler.records), 1) logger.level = LevelsByName.DEBG logger.trace('hello') 
self.assertEqual(len(handler.records), 1) def test_debug(self): from supervisor.loggers import LevelsByName handler = DummyHandler(LevelsByName.DEBG) logger = self._makeOne(LevelsByName.DEBG, (handler,)) logger.debug('hello') self.assertEqual(len(handler.records), 1) logger.level = LevelsByName.INFO logger.debug('hello') self.assertEqual(len(handler.records), 1) def test_info(self): from supervisor.loggers import LevelsByName handler = DummyHandler(LevelsByName.INFO) logger = self._makeOne(LevelsByName.INFO, (handler,)) logger.info('hello') self.assertEqual(len(handler.records), 1) logger.level = LevelsByName.WARN logger.info('hello') self.assertEqual(len(handler.records), 1) def test_warn(self): from supervisor.loggers import LevelsByName handler = DummyHandler(LevelsByName.WARN) logger = self._makeOne(LevelsByName.WARN, (handler,)) logger.warn('hello') self.assertEqual(len(handler.records), 1) logger.level = LevelsByName.ERRO logger.warn('hello') self.assertEqual(len(handler.records), 1) def test_error(self): from supervisor.loggers import LevelsByName handler = DummyHandler(LevelsByName.ERRO) logger = self._makeOne(LevelsByName.ERRO, (handler,)) logger.error('hello') self.assertEqual(len(handler.records), 1) logger.level = LevelsByName.CRIT logger.error('hello') self.assertEqual(len(handler.records), 1) def test_critical(self): from supervisor.loggers import LevelsByName handler = DummyHandler(LevelsByName.CRIT) logger = self._makeOne(LevelsByName.CRIT, (handler,)) logger.critical('hello') self.assertEqual(len(handler.records), 1) def test_close(self): from supervisor.loggers import LevelsByName handler = DummyHandler(LevelsByName.CRIT) logger = self._makeOne(LevelsByName.CRIT, (handler,)) logger.close() self.assertEqual(handler.closed, True) def test_getvalue(self): from supervisor.loggers import LevelsByName handler = DummyHandler(LevelsByName.CRIT) logger = self._makeOne(LevelsByName.CRIT, (handler,)) self.assertRaises(NotImplementedError, logger.getvalue) 
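# The LoggerTests above all exercise the same gate: a handler receives a
# record only when the record's level meets or exceeds the logger's current
# level. A minimal self-contained sketch of that gating logic follows; the
# MiniLogger/MiniHandler classes and the numeric level values are hypothetical
# stand-ins for illustration, not supervisor's real loggers API.

```python
# Illustrative level constants in ascending severity; supervisor's real
# LevelsByName values differ numerically but keep the same ordering.
BLAT, TRAC, DEBG, INFO, WARN, ERRO, CRIT = range(7)

class MiniHandler(object):
    """Collects every record it is asked to emit (like DummyHandler above)."""
    def __init__(self, level):
        self.level = level
        self.records = []

    def emit(self, record):
        self.records.append(record)

class MiniLogger(object):
    """Dispatches a record to its handlers only if it passes the level gate."""
    def __init__(self, level, handlers):
        self.level = level
        self.handlers = handlers

    def log(self, level, msg):
        # The gate: records below the logger's threshold are dropped.
        if level >= self.level:
            for handler in self.handlers:
                handler.emit(msg)

    def info(self, msg):
        self.log(INFO, msg)

handler = MiniHandler(INFO)
logger = MiniLogger(INFO, (handler,))
logger.info('hello')   # INFO >= INFO: emitted, 1 record
logger.level = WARN
logger.info('hello')   # INFO < WARN: dropped, still 1 record
print(len(handler.records))  # -> 1
```

# This mirrors the two-step shape of each test above: log once at the
# logger's own level (record count goes to 1), raise the level, log again,
# and assert the count is unchanged.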
class MockSysLog(mock.Mock):
    def __call__(self, *args, **kwargs):
        message = args[-1]
        if sys.version_info < (3, 0) and isinstance(message, unicode):
            # Python 2.x raises a UnicodeEncodeError when attempting to
            # transmit unicode characters that don't encode in the
            # default encoding.
            message.encode()
        super(MockSysLog, self).__call__(*args, **kwargs)

class SyslogHandlerTests(HandlerTests, unittest.TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def _getTargetClass(self):
        return __import__('supervisor.loggers').loggers.SyslogHandler

    def _makeOne(self):
        return self._getTargetClass()()

    def test_emit_record_asdict_raises(self):
        class Record(object):
            def asdict(self):
                raise TypeError
        record = Record()
        handler = self._makeOne()
        handled = []
        handler.handleError = lambda: handled.append(True)
        handler.emit(record)
        self.assertEqual(handled, [True])

    @mock.patch('syslog.syslog', MockSysLog())
    def test_emit_ascii_noerror(self):
        handler = self._makeOne()
        record = self._makeLogRecord(b'hello!')
        handler.emit(record)
        syslog.syslog.assert_called_with('hello!')
        record = self._makeLogRecord('hi!')
        handler.emit(record)
        syslog.syslog.assert_called_with('hi!')

    @mock.patch('syslog.syslog', MockSysLog())
    def test_close(self):
        handler = self._makeOne()
        handler.close()  # no-op for syslog

    @mock.patch('syslog.syslog', MockSysLog())
    def test_reopen(self):
        handler = self._makeOne()
        handler.reopen()  # no-op for syslog

    if PY2:
        @mock.patch('syslog.syslog', MockSysLog())
        def test_emit_unicode_noerror(self):
            handler = self._makeOne()
            inp = as_string('fií')
            record = self._makeLogRecord(inp)
            handler.emit(record)
            syslog.syslog.assert_called_with('fi\xc3\xad')

        def test_emit_unicode_witherror(self):
            handler = self._makeOne()
            called = []
            def fake_syslog(msg):
                if not called:
                    called.append(msg)
                    raise UnicodeError
            handler._syslog = fake_syslog
            record = self._makeLogRecord(as_string('fií'))
            handler.emit(record)
            self.assertEqual(called, [as_string('fi\xc3\xad')])
    else:
        @mock.patch('syslog.syslog',
MockSysLog())
        def test_emit_unicode_noerror(self):
            handler = self._makeOne()
            record = self._makeLogRecord('fií')
            handler.emit(record)
            syslog.syslog.assert_called_with('fií')

        def test_emit_unicode_witherror(self):
            handler = self._makeOne()
            called = []
            def fake_syslog(msg):
                if not called:
                    called.append(msg)
                    raise UnicodeError
            handler._syslog = fake_syslog
            record = self._makeLogRecord('fií')
            handler.emit(record)
            self.assertEqual(called, ['fií'])

class DummyHandler:
    # was "close = False", which the close() method below shadowed; the
    # attribute the tests actually read is "closed"
    closed = False

    def __init__(self, level):
        self.level = level
        self.records = []

    def emit(self, record):
        self.records.append(record)

    def close(self):
        self.closed = True

def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

# supervisor-4.2.5/supervisor/tests/test_options.py
"""Test suite for supervisor.options"""
import os
import sys
import tempfile
import socket
import unittest
import signal
import shutil
import errno
import platform

from supervisor.compat import StringIO
from supervisor.compat import as_bytes
from supervisor.tests.base import Mock, sentinel, patch
from supervisor.loggers import LevelsByName
from supervisor.tests.base import DummySupervisor
from supervisor.tests.base import DummyLogger
from supervisor.tests.base import DummyOptions
from supervisor.tests.base import DummyPoller
from supervisor.tests.base import DummyPConfig
from supervisor.tests.base import DummyProcess
from supervisor.tests.base import DummySocketConfig
from supervisor.tests.base import lstrip

class OptionTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.options import Options
        return Options

    def _makeOptions(self, read_error=False):
        Options = self._getTargetClass()
        from supervisor.datatypes import integer

        class MyOptions(Options):
            master = {
                'other': 41
            }
            def
__init__(self, read_error=read_error): self.read_error = read_error Options.__init__(self) class Foo(object): pass self.configroot = Foo() def read_config(self, fp): if self.read_error: raise ValueError(self.read_error) # Pretend we read it from file: self.configroot.__dict__.update(self.default_map) self.configroot.__dict__.update(self.master) options = MyOptions() options.configfile = StringIO() options.add(name='anoption', confname='anoption', short='o', long='option', default='default') options.add(name='other', confname='other', env='OTHER', short='p:', long='other=', handler=integer) return options def test_add_flag_not_None_handler_not_None(self): cls = self._getTargetClass() inst = cls() self.assertRaises(ValueError, inst.add, flag=True, handler=True) def test_add_flag_not_None_long_false_short_false(self): cls = self._getTargetClass() inst = cls() self.assertRaises( ValueError, inst.add, flag=True, long=False, short=False, ) def test_add_flag_not_None_short_endswith_colon(self): cls = self._getTargetClass() inst = cls() self.assertRaises( ValueError, inst.add, flag=True, long=False, short=":", ) def test_add_flag_not_None_long_endswith_equal(self): cls = self._getTargetClass() inst = cls() self.assertRaises( ValueError, inst.add, flag=True, long='=', short=False, ) def test_add_inconsistent_short_long_options(self): cls = self._getTargetClass() inst = cls() self.assertRaises( ValueError, inst.add, long='=', short='abc', ) def test_add_short_option_startswith_dash(self): cls = self._getTargetClass() inst = cls() self.assertRaises( ValueError, inst.add, long=False, short='-abc', ) def test_add_short_option_too_long(self): cls = self._getTargetClass() inst = cls() self.assertRaises( ValueError, inst.add, long=False, short='abc', ) def test_add_duplicate_short_option_key(self): cls = self._getTargetClass() inst = cls() inst.options_map = {'-a':True} self.assertRaises( ValueError, inst.add, long=False, short='a', ) def 
test_add_long_option_startswith_dash(self): cls = self._getTargetClass() inst = cls() self.assertRaises( ValueError, inst.add, long='-abc', short=False, ) def test_add_duplicate_long_option_key(self): cls = self._getTargetClass() inst = cls() inst.options_map = {'--abc':True} self.assertRaises( ValueError, inst.add, long='abc', short=False, ) def test_searchpaths(self): options = self._makeOptions() self.assertEqual(len(options.searchpaths), 6) self.assertEqual(options.searchpaths[-4:], [ 'supervisord.conf', 'etc/supervisord.conf', '/etc/supervisord.conf', '/etc/supervisor/supervisord.conf', ]) def test_options_and_args_order(self): # Only config file exists options = self._makeOptions() options.realize([]) self.assertEqual(options.anoption, 'default') self.assertEqual(options.other, 41) # Env should trump config options = self._makeOptions() os.environ['OTHER'] = '42' options.realize([]) self.assertEqual(options.other, 42) # Opt should trump both env (still set) and config options = self._makeOptions() options.realize(['-p', '43']) self.assertEqual(options.other, 43) del os.environ['OTHER'] def test_config_reload(self): options = self._makeOptions() options.realize([]) self.assertEqual(options.other, 41) options.master['other'] = 42 options.process_config() self.assertEqual(options.other, 42) def test_config_reload_do_usage_false(self): options = self._makeOptions(read_error='error') self.assertRaises(ValueError, options.process_config, False) def test_config_reload_do_usage_true(self): options = self._makeOptions(read_error='error') exitcodes = [] options.exit = lambda x: exitcodes.append(x) options.stderr = options.stdout = StringIO() options.configroot.anoption = 1 options.configroot.other = 1 options.process_config(do_usage=True) self.assertEqual(exitcodes, [2]) def test__set(self): from supervisor.options import Options options = Options() options._set('foo', 'bar', 0) self.assertEqual(options.foo, 'bar') self.assertEqual(options.attr_priorities['foo'], 0) 
options._set('foo', 'baz', 1) self.assertEqual(options.foo, 'baz') self.assertEqual(options.attr_priorities['foo'], 1) options._set('foo', 'gazonk', 0) self.assertEqual(options.foo, 'baz') self.assertEqual(options.attr_priorities['foo'], 1) options._set('foo', 'gazonk', 1) self.assertEqual(options.foo, 'gazonk') def test_missing_default_config(self): options = self._makeOptions() options.searchpaths = [] exitcodes = [] options.exit = lambda x: exitcodes.append(x) options.stderr = StringIO() options.default_configfile() self.assertEqual(exitcodes, [2]) msg = "Error: No config file found at default paths" self.assertTrue(options.stderr.getvalue().startswith(msg)) def test_default_config(self): options = self._makeOptions() with tempfile.NamedTemporaryFile() as f: options.searchpaths = [f.name] config = options.default_configfile() self.assertEqual(config, f.name) def test_help(self): options = self._makeOptions() exitcodes = [] options.exit = lambda x: exitcodes.append(x) options.stdout = StringIO() options.progname = 'test_help' options.doc = 'A sample docstring for %s' options.help('') self.assertEqual(exitcodes, [0]) msg = 'A sample docstring for test_help\n' self.assertEqual(options.stdout.getvalue(), msg) class ClientOptionsTests(unittest.TestCase): def _getTargetClass(self): from supervisor.options import ClientOptions return ClientOptions def _makeOne(self): return self._getTargetClass()() def test_no_config_file(self): """Making sure config file is not required.""" instance = self._makeOne() instance.searchpaths = [] exitcodes = [] instance.exit = lambda x: exitcodes.append(x) instance.realize(args=['-s', 'http://localhost:9001', '-u', 'chris', '-p', '123']) self.assertEqual(exitcodes, []) self.assertEqual(instance.interactive, 1) self.assertEqual(instance.serverurl, 'http://localhost:9001') self.assertEqual(instance.username, 'chris') self.assertEqual(instance.password, '123') def test_options(self): tempdir = tempfile.gettempdir() s = 
lstrip("""[supervisorctl] serverurl=http://localhost:9001 username=chris password=123 prompt=mysupervisor history_file=%s/sc_history """ % tempdir) fp = StringIO(s) instance = self._makeOne() instance.configfile = fp instance.realize(args=[]) self.assertEqual(instance.interactive, True) history_file = os.path.join(tempdir, 'sc_history') self.assertEqual(instance.history_file, history_file) options = instance.configroot.supervisorctl self.assertEqual(options.prompt, 'mysupervisor') self.assertEqual(options.serverurl, 'http://localhost:9001') self.assertEqual(options.username, 'chris') self.assertEqual(options.password, '123') self.assertEqual(options.history_file, history_file) def test_options_ignores_space_prefixed_inline_comments(self): text = lstrip(""" [supervisorctl] serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket """) instance = self._makeOne() instance.configfile = StringIO(text) instance.realize(args=[]) options = instance.configroot.supervisorctl self.assertEqual(options.serverurl, 'http://127.0.0.1:9001') def test_options_ignores_tab_prefixed_inline_comments(self): text = lstrip(""" [supervisorctl] serverurl=http://127.0.0.1:9001\t;use an http:// url to specify an inet socket """) instance = self._makeOne() instance.configfile = StringIO(text) instance.realize(args=[]) options = instance.configroot.supervisorctl self.assertEqual(options.serverurl, 'http://127.0.0.1:9001') def test_options_parses_as_nonstrict_for_py2_py3_compat(self): text = lstrip(""" [supervisorctl] serverurl=http://localhost:9001 ;duplicate [supervisorctl] serverurl=http://localhost:9001 ;duplicate """) instance = self._makeOne() instance.configfile = StringIO(text) instance.realize(args=[]) # should not raise configparser.DuplicateSectionError on py3 def test_options_with_environment_expansions(self): s = lstrip(""" [supervisorctl] serverurl=http://localhost:%(ENV_SERVER_PORT)s username=%(ENV_CLIENT_USER)s password=%(ENV_CLIENT_PASS)s 
prompt=%(ENV_CLIENT_PROMPT)s history_file=/path/to/histdir/.supervisorctl%(ENV_CLIENT_HIST_EXT)s """) fp = StringIO(s) instance = self._makeOne() instance.environ_expansions = {'ENV_HOME': tempfile.gettempdir(), 'ENV_USER': 'johndoe', 'ENV_SERVER_PORT': '9210', 'ENV_CLIENT_USER': 'someuser', 'ENV_CLIENT_PASS': 'passwordhere', 'ENV_CLIENT_PROMPT': 'xsupervisor', 'ENV_CLIENT_HIST_EXT': '.hist', } instance.configfile = fp instance.realize(args=[]) self.assertEqual(instance.interactive, True) options = instance.configroot.supervisorctl self.assertEqual(options.prompt, 'xsupervisor') self.assertEqual(options.serverurl, 'http://localhost:9210') self.assertEqual(options.username, 'someuser') self.assertEqual(options.password, 'passwordhere') self.assertEqual(options.history_file, '/path/to/histdir/.supervisorctl.hist') def test_options_supervisorctl_section_expands_here(self): instance = self._makeOne() text = lstrip(''' [supervisorctl] history_file=%(here)s/sc_history serverurl=unix://%(here)s/supervisord.sock ''') here = tempfile.mkdtemp() supervisord_conf = os.path.join(here, 'supervisord.conf') with open(supervisord_conf, 'w') as f: f.write(text) try: instance.configfile = supervisord_conf instance.realize(args=[]) finally: shutil.rmtree(here, ignore_errors=True) options = instance.configroot.supervisorctl self.assertEqual(options.history_file, os.path.join(here, 'sc_history')) self.assertEqual(options.serverurl, 'unix://' + os.path.join(here, 'supervisord.sock')) def test_read_config_not_found(self): nonexistent = os.path.join(os.path.dirname(__file__), 'nonexistent') instance = self._makeOne() try: instance.read_config(nonexistent) self.fail("nothing raised") except ValueError as exc: self.assertTrue("could not find config file" in exc.args[0]) def test_read_config_unreadable(self): instance = self._makeOne() def dummy_open(fn, mode): raise IOError(errno.EACCES, 'Permission denied: %s' % fn) instance.open = dummy_open try: instance.read_config(__file__) 
self.fail("expected exception") except ValueError as exc: self.assertTrue("could not read config file" in exc.args[0]) def test_read_config_no_supervisord_section_raises_valueerror(self): instance = self._makeOne() try: instance.read_config(StringIO()) self.fail("nothing raised") except ValueError as exc: self.assertEqual(exc.args[0], ".ini file does not include supervisorctl section") def test_options_unixsocket_cli(self): fp = StringIO('[supervisorctl]') instance = self._makeOne() instance.configfile = fp instance.realize(args=['--serverurl', 'unix:///dev/null']) self.assertEqual(instance.serverurl, 'unix:///dev/null') def test_options_unixsocket_configfile(self): s = lstrip("""[supervisorctl] serverurl=unix:///dev/null """) fp = StringIO(s) instance = self._makeOne() instance.configfile = fp instance.realize(args=[]) self.assertEqual(instance.serverurl, 'unix:///dev/null') class ServerOptionsTests(unittest.TestCase): def _getTargetClass(self): from supervisor.options import ServerOptions return ServerOptions def _makeOne(self): return self._getTargetClass()() def test_version(self): from supervisor.options import VERSION options = self._makeOne() options.stdout = StringIO() self.assertRaises(SystemExit, options.version, None) self.assertEqual(options.stdout.getvalue(), VERSION + '\n') def test_options(self): s = lstrip(""" [supervisord] directory=%(tempdir)s backofflimit=10 user=root umask=022 logfile=supervisord.log logfile_maxbytes=1000MB logfile_backups=5 loglevel=error pidfile=supervisord.pid nodaemon=true silent=true identifier=fleeb childlogdir=%(tempdir)s nocleanup=true minfds=2048 minprocs=300 environment=FAKE_ENV_VAR=/some/path [inet_http_server] port=127.0.0.1:8999 username=chrism password=foo [program:cat1] command=/bin/cat priority=1 autostart=true user=root stdout_logfile=/tmp/cat.log stopsignal=KILL stopwaitsecs=5 startsecs=5 startretries=10 directory=/tmp umask=002 [program:cat2] priority=2 command=/bin/cat autostart=true autorestart=false 
stdout_logfile_maxbytes = 1024 stdout_logfile_backups = 2 stdout_logfile = /tmp/cat2.log [program:cat3] priority=3 process_name = replaced command=/bin/cat autorestart=true exitcodes=0,1,127 stopasgroup=true killasgroup=true [program:cat4] priority=4 process_name = fleeb_%%(process_num)s numprocs = 2 command = /bin/cat autorestart=unexpected [program:cat5] priority=5 process_name = foo_%%(process_num)02d numprocs = 2 numprocs_start = 1 command = /bin/cat directory = /some/path/foo_%%(process_num)02d """ % {'tempdir':tempfile.gettempdir()}) from supervisor import datatypes fp = StringIO(s) instance = self._makeOne() instance.configfile = fp instance.realize(args=[]) options = instance.configroot.supervisord self.assertEqual(options.directory, tempfile.gettempdir()) self.assertEqual(options.umask, 0o22) self.assertEqual(options.logfile, 'supervisord.log') self.assertEqual(options.logfile_maxbytes, 1000 * 1024 * 1024) self.assertEqual(options.logfile_backups, 5) self.assertEqual(options.loglevel, 40) self.assertEqual(options.pidfile, 'supervisord.pid') self.assertEqual(options.nodaemon, True) self.assertEqual(options.silent, True) self.assertEqual(options.identifier, 'fleeb') self.assertEqual(options.childlogdir, tempfile.gettempdir()) self.assertEqual(len(options.server_configs), 1) self.assertEqual(options.server_configs[0]['family'], socket.AF_INET) self.assertEqual(options.server_configs[0]['host'], '127.0.0.1') self.assertEqual(options.server_configs[0]['port'], 8999) self.assertEqual(options.server_configs[0]['username'], 'chrism') self.assertEqual(options.server_configs[0]['password'], 'foo') self.assertEqual(options.nocleanup, True) self.assertEqual(options.minfds, 2048) self.assertEqual(options.minprocs, 300) self.assertEqual(options.nocleanup, True) self.assertEqual(len(options.process_group_configs), 5) self.assertEqual(options.environment, dict(FAKE_ENV_VAR='/some/path')) cat1 = options.process_group_configs[0] self.assertEqual(cat1.name, 'cat1') 
        self.assertEqual(cat1.priority, 1)
        self.assertEqual(len(cat1.process_configs), 1)

        proc1 = cat1.process_configs[0]
        self.assertEqual(proc1.name, 'cat1')
        self.assertEqual(proc1.command, '/bin/cat')
        self.assertEqual(proc1.priority, 1)
        self.assertEqual(proc1.autostart, True)
        self.assertEqual(proc1.autorestart, datatypes.RestartWhenExitUnexpected)
        self.assertEqual(proc1.startsecs, 5)
        self.assertEqual(proc1.startretries, 10)
        self.assertEqual(proc1.uid, 0)
        self.assertEqual(proc1.stdout_logfile, '/tmp/cat.log')
        self.assertEqual(proc1.stopsignal, signal.SIGKILL)
        self.assertEqual(proc1.stopwaitsecs, 5)
        self.assertEqual(proc1.stopasgroup, False)
        self.assertEqual(proc1.killasgroup, False)
        self.assertEqual(proc1.stdout_logfile_maxbytes,
                         datatypes.byte_size('50MB'))
        self.assertEqual(proc1.stdout_logfile_backups, 10)
        self.assertEqual(proc1.exitcodes, [0])
        self.assertEqual(proc1.directory, '/tmp')
        self.assertEqual(proc1.umask, 2)
        self.assertEqual(proc1.environment, dict(FAKE_ENV_VAR='/some/path'))

        cat2 = options.process_group_configs[1]
        self.assertEqual(cat2.name, 'cat2')
        self.assertEqual(cat2.priority, 2)
        self.assertEqual(len(cat2.process_configs), 1)

        proc2 = cat2.process_configs[0]
        self.assertEqual(proc2.name, 'cat2')
        self.assertEqual(proc2.command, '/bin/cat')
        self.assertEqual(proc2.priority, 2)
        self.assertEqual(proc2.autostart, True)
        self.assertEqual(proc2.autorestart, False)
        self.assertEqual(proc2.uid, None)
        self.assertEqual(proc2.stdout_logfile, '/tmp/cat2.log')
        self.assertEqual(proc2.stopsignal, signal.SIGTERM)
        self.assertEqual(proc2.stopasgroup, False)
        self.assertEqual(proc2.killasgroup, False)
        self.assertEqual(proc2.stdout_logfile_maxbytes, 1024)
        self.assertEqual(proc2.stdout_logfile_backups, 2)
        self.assertEqual(proc2.exitcodes, [0])
        self.assertEqual(proc2.directory, None)

        cat3 = options.process_group_configs[2]
        self.assertEqual(cat3.name, 'cat3')
        self.assertEqual(cat3.priority, 3)
        self.assertEqual(len(cat3.process_configs), 1)

        proc3 = cat3.process_configs[0]
        self.assertEqual(proc3.name, 'replaced')
        self.assertEqual(proc3.command, '/bin/cat')
        self.assertEqual(proc3.priority, 3)
        self.assertEqual(proc3.autostart, True)
        self.assertEqual(proc3.autorestart, datatypes.RestartUnconditionally)
        self.assertEqual(proc3.uid, None)
        self.assertEqual(proc3.stdout_logfile, datatypes.Automatic)
        self.assertEqual(proc3.stdout_logfile_maxbytes,
                         datatypes.byte_size('50MB'))
        self.assertEqual(proc3.stdout_logfile_backups, 10)
        self.assertEqual(proc3.exitcodes, [0,1,127])
        self.assertEqual(proc3.stopsignal, signal.SIGTERM)
        self.assertEqual(proc3.stopasgroup, True)
        self.assertEqual(proc3.killasgroup, True)

        cat4 = options.process_group_configs[3]
        self.assertEqual(cat4.name, 'cat4')
        self.assertEqual(cat4.priority, 4)
        self.assertEqual(len(cat4.process_configs), 2)

        proc4_a = cat4.process_configs[0]
        self.assertEqual(proc4_a.name, 'fleeb_0')
        self.assertEqual(proc4_a.command, '/bin/cat')
        self.assertEqual(proc4_a.priority, 4)
        self.assertEqual(proc4_a.autostart, True)
        self.assertEqual(proc4_a.autorestart,
                         datatypes.RestartWhenExitUnexpected)
        self.assertEqual(proc4_a.uid, None)
        self.assertEqual(proc4_a.stdout_logfile, datatypes.Automatic)
        self.assertEqual(proc4_a.stdout_logfile_maxbytes,
                         datatypes.byte_size('50MB'))
        self.assertEqual(proc4_a.stdout_logfile_backups, 10)
        self.assertEqual(proc4_a.exitcodes, [0])
        self.assertEqual(proc4_a.stopsignal, signal.SIGTERM)
        self.assertEqual(proc4_a.stopasgroup, False)
        self.assertEqual(proc4_a.killasgroup, False)
        self.assertEqual(proc4_a.directory, None)

        proc4_b = cat4.process_configs[1]
        self.assertEqual(proc4_b.name, 'fleeb_1')
        self.assertEqual(proc4_b.command, '/bin/cat')
        self.assertEqual(proc4_b.priority, 4)
        self.assertEqual(proc4_b.autostart, True)
        self.assertEqual(proc4_b.autorestart,
                         datatypes.RestartWhenExitUnexpected)
        self.assertEqual(proc4_b.uid, None)
        self.assertEqual(proc4_b.stdout_logfile, datatypes.Automatic)
        self.assertEqual(proc4_b.stdout_logfile_maxbytes,
                         datatypes.byte_size('50MB'))
        self.assertEqual(proc4_b.stdout_logfile_backups, 10)
        self.assertEqual(proc4_b.exitcodes, [0])
        self.assertEqual(proc4_b.stopsignal, signal.SIGTERM)
        self.assertEqual(proc4_b.stopasgroup, False)
        self.assertEqual(proc4_b.killasgroup, False)
        self.assertEqual(proc4_b.directory, None)

        cat5 = options.process_group_configs[4]
        self.assertEqual(cat5.name, 'cat5')
        self.assertEqual(cat5.priority, 5)
        self.assertEqual(len(cat5.process_configs), 2)

        proc5_a = cat5.process_configs[0]
        self.assertEqual(proc5_a.name, 'foo_01')
        self.assertEqual(proc5_a.directory, '/some/path/foo_01')

        proc5_b = cat5.process_configs[1]
        self.assertEqual(proc5_b.name, 'foo_02')
        self.assertEqual(proc5_b.directory, '/some/path/foo_02')

        here = os.path.abspath(os.getcwd())
        self.assertEqual(instance.uid, 0)
        self.assertEqual(instance.gid, 0)
        self.assertEqual(instance.directory, tempfile.gettempdir())
        self.assertEqual(instance.umask, 0o22)
        self.assertEqual(instance.logfile, os.path.join(here,'supervisord.log'))
        self.assertEqual(instance.logfile_maxbytes, 1000 * 1024 * 1024)
        self.assertEqual(instance.logfile_backups, 5)
        self.assertEqual(instance.loglevel, 40)
        self.assertEqual(instance.pidfile, os.path.join(here,'supervisord.pid'))
        self.assertEqual(instance.nodaemon, True)
        self.assertEqual(instance.silent, True)
        self.assertEqual(instance.passwdfile, None)
        self.assertEqual(instance.identifier, 'fleeb')
        self.assertEqual(instance.childlogdir, tempfile.gettempdir())
        self.assertEqual(len(instance.server_configs), 1)
        self.assertEqual(instance.server_configs[0]['family'], socket.AF_INET)
        self.assertEqual(instance.server_configs[0]['host'], '127.0.0.1')
        self.assertEqual(instance.server_configs[0]['port'], 8999)
        self.assertEqual(instance.server_configs[0]['username'], 'chrism')
        self.assertEqual(instance.server_configs[0]['password'], 'foo')
        self.assertEqual(instance.nocleanup, True)
        self.assertEqual(instance.minfds, 2048)
        self.assertEqual(instance.minprocs, 300)

    def test_options_ignores_space_prefixed_inline_comments(self):
        text = lstrip("""
        [supervisord]
        logfile=/tmp/supervisord.log ;(main log file;default $CWD/supervisord.log)
        minfds=123 ; (min. avail startup file descriptors;default 1024)
        """)
        instance = self._makeOne()
        instance.configfile = StringIO(text)
        instance.realize(args=[])
        options = instance.configroot.supervisord
        self.assertEqual(options.logfile, "/tmp/supervisord.log")
        self.assertEqual(options.minfds, 123)

    def test_options_ignores_tab_prefixed_inline_comments(self):
        text = lstrip("""
        [supervisord]
        logfile=/tmp/supervisord.log\t;(main log file;default $CWD/supervisord.log)
        minfds=123\t; (min. avail startup file descriptors;default 1024)
        """)
        instance = self._makeOne()
        instance.configfile = StringIO(text)
        instance.realize(args=[])
        options = instance.configroot.supervisord
        self.assertEqual(options.minfds, 123)

    def test_options_parses_as_nonstrict_for_py2_py3_compat(self):
        text = lstrip("""
        [supervisord]

        [program:duplicate]
        command=/bin/cat

        [program:duplicate]
        command=/bin/cat
        """)
        instance = self._makeOne()
        instance.configfile = StringIO(text)
        instance.realize(args=[])
        # should not raise configparser.DuplicateSectionError on py3

    def test_reload(self):
        text = lstrip("""\
        [supervisord]
        user=root

        [program:one]
        command = /bin/cat

        [program:two]
        command = /bin/dog

        [program:four]
        command = /bin/sheep

        [group:thegroup]
        programs = one,two
        """)
        instance = self._makeOne()
        instance.configfile = StringIO(text)
        instance.realize(args=[])

        section = instance.configroot.supervisord
        self.assertEqual(len(section.process_group_configs), 2)

        cat = section.process_group_configs[0]
        self.assertEqual(len(cat.process_configs), 1)

        cat = section.process_group_configs[1]
        self.assertEqual(len(cat.process_configs), 2)
        self.assertTrue(section.process_group_configs is
                        instance.process_group_configs)

        text = lstrip("""\
        [supervisord]
        user=root

        [program:one]
        command = /bin/cat

        [program:three]
        command = /bin/pig

        [group:thegroup]
        programs = three
        """)
        instance.configfile = StringIO(text)
        instance.process_config(do_usage=False)

        section = instance.configroot.supervisord
        self.assertEqual(len(section.process_group_configs), 2)

        cat = section.process_group_configs[0]
        self.assertEqual(len(cat.process_configs), 1)
        proc = cat.process_configs[0]
        self.assertEqual(proc.name, 'one')
        self.assertEqual(proc.command, '/bin/cat')
        self.assertTrue(section.process_group_configs is
                        instance.process_group_configs)

        cat = section.process_group_configs[1]
        self.assertEqual(len(cat.process_configs), 1)
        proc = cat.process_configs[0]
        self.assertEqual(proc.name, 'three')
        self.assertEqual(proc.command, '/bin/pig')

    def test_reload_clears_parse_messages(self):
        instance = self._makeOne()
        old_msg = "Message from a prior config read"
        instance.parse_criticals = [old_msg]
        instance.parse_warnings = [old_msg]
        instance.parse_infos = [old_msg]

        text = lstrip("""\
        [supervisord]
        user=root

        [program:cat]
        command = /bin/cat
        """)
        instance.configfile = StringIO(text)
        instance.realize(args=[])
        self.assertFalse(old_msg in instance.parse_criticals)
        self.assertFalse(old_msg in instance.parse_warnings)
        self.assertFalse(old_msg in instance.parse_infos)

    def test_reload_clears_parse_infos(self):
        instance = self._makeOne()
        old_info = "Info from a prior config read"
        instance.infos = [old_info]

        text = lstrip("""\
        [supervisord]
        user=root

        [program:cat]
        command = /bin/cat
        """)
        instance.configfile = StringIO(text)
        instance.realize(args=[])
        self.assertFalse(old_info in instance.parse_infos)

    def test_read_config_not_found(self):
        nonexistent = os.path.join(os.path.dirname(__file__), 'nonexistent')
        instance = self._makeOne()
        try:
            instance.read_config(nonexistent)
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertTrue("could not find config file" in exc.args[0])

    def test_read_config_unreadable(self):
        instance = self._makeOne()
        def dummy_open(fn, mode):
            raise IOError(errno.EACCES, 'Permission denied: %s' % fn)
        instance.open = dummy_open
        try:
            instance.read_config(__file__)
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertTrue("could not read config file" in exc.args[0])

    def test_read_config_malformed_config_file_raises_valueerror(self):
        instance = self._makeOne()
        with tempfile.NamedTemporaryFile(mode="w+") as f:
            try:
                f.write("[supervisord]\njunk")
                f.flush()
                instance.read_config(f.name)
                self.fail("nothing raised")
            except ValueError as exc:
                self.assertTrue('contains parsing errors:' in exc.args[0])
                self.assertTrue(f.name in exc.args[0])

    def test_read_config_logfile_with_nonexistent_dirpath(self):
        instance = self._makeOne()
        logfile_with_nonexistent_dir = os.path.join(
            os.path.dirname(__file__), "nonexistent", "supervisord.log"
        )
        text = lstrip("""\
        [supervisord]
        logfile=%s
        """ % logfile_with_nonexistent_dir)
        try:
            instance.read_config(StringIO(text))
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                "The directory named as part of the path %s does not exist" %
                logfile_with_nonexistent_dir
            )

    def test_read_config_no_supervisord_section_raises_valueerror(self):
        instance = self._makeOne()
        try:
            instance.read_config(StringIO())
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                ".ini file does not include supervisord section")

    def test_read_config_include_with_no_files_raises_valueerror(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [include]
        ;no files=
        """)
        try:
            instance.read_config(StringIO(text))
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                ".ini file has [include] section, but no files setting")

    def test_read_config_include_with_no_matching_files_logs_warning(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [include]
        files=nonexistent/*
        """)
        instance.read_config(StringIO(text))
        self.assertEqual(instance.parse_warnings,
                         ['No file matches via include "./nonexistent/*"'])

    def test_read_config_include_reads_extra_files(self):
        dirname = tempfile.mkdtemp()
        conf_d = os.path.join(dirname, "conf.d")
        os.mkdir(conf_d)
        supervisord_conf = os.path.join(dirname, "supervisord.conf")
        text = lstrip("""\
        [supervisord]

        [include]
        files=%s/conf.d/*.conf %s/conf.d/*.ini
        """ % (dirname, dirname))
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        conf_file = os.path.join(conf_d, "a.conf")
        with open(conf_file, 'w') as f:
            f.write("[inet_http_server]\nport=8000\n")
        ini_file = os.path.join(conf_d, "a.ini")
        with open(ini_file, 'w') as f:
            f.write("[unix_http_server]\nfile=/tmp/file\n")
        instance = self._makeOne()
        try:
            instance.read_config(supervisord_conf)
        finally:
            shutil.rmtree(dirname, ignore_errors=True)
        options = instance.configroot.supervisord
        self.assertEqual(len(options.server_configs), 2)
        msg = 'Included extra file "%s" during parsing' % conf_file
        self.assertTrue(msg in instance.parse_infos)
        msg = 'Included extra file "%s" during parsing' % ini_file
        self.assertTrue(msg in instance.parse_infos)

    def test_read_config_include_reads_files_in_sorted_order(self):
        dirname = tempfile.mkdtemp()
        conf_d = os.path.join(dirname, "conf.d")
        os.mkdir(conf_d)
        supervisord_conf = os.path.join(dirname, "supervisord.conf")
        text = lstrip("""\
        [supervisord]

        [include]
        files=%s/conf.d/*.conf
        """ % dirname)
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        from supervisor.compat import letters
        a_z = letters[:26]
        for letter in reversed(a_z):
            filename = os.path.join(conf_d, "%s.conf" % letter)
            with open(filename, "w") as f:
                f.write("[program:%s]\n"
                        "command=/bin/%s\n" % (letter, letter))
        instance = self._makeOne()
        try:
            instance.read_config(supervisord_conf)
        finally:
            shutil.rmtree(dirname, ignore_errors=True)
        expected_msgs = []
        for letter in sorted(a_z):
            filename = os.path.join(conf_d, "%s.conf" % letter)
            expected_msgs.append(
                'Included extra file "%s" during parsing' % filename)
        self.assertEqual(instance.parse_infos, expected_msgs)

    def test_read_config_include_extra_file_malformed(self):
        dirname = tempfile.mkdtemp()
        conf_d = os.path.join(dirname, "conf.d")
        os.mkdir(conf_d)
        supervisord_conf = os.path.join(dirname, "supervisord.conf")
        text = lstrip("""\
        [supervisord]

        [include]
        files=%s/conf.d/*.conf
        """ % dirname)
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        malformed_file = os.path.join(conf_d, "a.conf")
        with open(malformed_file, 'w') as f:
            f.write("[inet_http_server]\njunk\n")
        instance = self._makeOne()
        try:
            instance.read_config(supervisord_conf)
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertTrue('contains parsing errors:' in exc.args[0])
            self.assertTrue(malformed_file in exc.args[0])
            msg = 'Included extra file "%s" during parsing' % malformed_file
            self.assertTrue(msg in instance.parse_infos)
        finally:
            shutil.rmtree(dirname, ignore_errors=True)

    def test_read_config_include_expands_host_node_name(self):
        dirname = tempfile.mkdtemp()
        conf_d = os.path.join(dirname, "conf.d")
        os.mkdir(conf_d)
        supervisord_conf = os.path.join(dirname, "supervisord.conf")
        text = lstrip("""\
        [supervisord]

        [include]
        files=%s/conf.d/%s.conf
        """ % (dirname, "%(host_node_name)s"))
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        conf_file = os.path.join(conf_d, "%s.conf" % platform.node())
        with open(conf_file, 'w') as f:
            f.write("[inet_http_server]\nport=8000\n")
        instance = self._makeOne()
        try:
            instance.read_config(supervisord_conf)
        finally:
            shutil.rmtree(dirname, ignore_errors=True)
        options = instance.configroot.supervisord
        self.assertEqual(len(options.server_configs), 1)
        msg = 'Included extra file "%s" during parsing' % conf_file
        self.assertTrue(msg in instance.parse_infos)

    def test_read_config_include_expands_here(self):
        conf = os.path.join(
            os.path.abspath(os.path.dirname(__file__)),
            'fixtures', 'include.conf')
        root_here = os.path.dirname(conf)
        include_here = os.path.join(root_here, 'example')
        parser = self._makeOne()
        parser.configfile = conf
        parser.process_config_file(True)
        section = parser.configroot.supervisord
        self.assertEqual(section.logfile, root_here)
        self.assertEqual(section.childlogdir, include_here)

    def test_readFile_failed(self):
        from supervisor.options import readFile
        try:
            readFile('/notthere', 0, 10)
        except ValueError as inst:
            self.assertEqual(inst.args[0], 'FAILED')
        else:
            raise AssertionError("Didn't raise")

    def test_get_pid(self):
        instance = self._makeOne()
        self.assertEqual(os.getpid(), instance.get_pid())

    def test_get_signal_delegates_to_signal_receiver(self):
        instance = self._makeOne()
        instance.signal_receiver.receive(signal.SIGTERM, None)
        instance.signal_receiver.receive(signal.SIGCHLD, None)
        self.assertEqual(instance.get_signal(), signal.SIGTERM)
        self.assertEqual(instance.get_signal(), signal.SIGCHLD)
        self.assertEqual(instance.get_signal(), None)

    def test_check_execv_args_cant_find_command(self):
        instance = self._makeOne()
        from supervisor.options import NotFound
        self.assertRaises(NotFound, instance.check_execv_args,
                          '/not/there', None, None)

    def test_check_execv_args_notexecutable(self):
        instance = self._makeOne()
        from supervisor.options import NotExecutable
        self.assertRaises(NotExecutable,
                          instance.check_execv_args, '/etc/passwd',
                          ['etc/passwd'], os.stat('/etc/passwd'))

    def test_check_execv_args_isdir(self):
        instance = self._makeOne()
        from supervisor.options import NotExecutable
        self.assertRaises(NotExecutable,
                          instance.check_execv_args, '/',
                          ['/'], os.stat('/'))

    def test_realize_positional_args_not_supported(self):
        instance = self._makeOne()
        recorder = []
        def record_usage(message):
            recorder.append(message)
        instance.usage = record_usage
        instance.configfile = StringIO('[supervisord]')
        args = ['foo', 'bar']
        instance.realize(args=args)
        self.assertEqual(len(recorder), 1)
        self.assertEqual(recorder[0],
                         'positional arguments are not supported: %s' % args)

    def test_realize_getopt_error(self):
        instance = self._makeOne()
        recorder = []
        def record_usage(message):
            recorder.append(message)
        instance.usage = record_usage
        instance.configfile = StringIO('[supervisord]')
        instance.realize(args=["--bad=1"])
        self.assertEqual(len(recorder), 1)
        self.assertEqual(recorder[0], "option --bad not recognized")

    def test_realize_prefers_identifier_from_args(self):
        text = lstrip("""
        [supervisord]
        identifier=from_config_file
        """)
        instance = self._makeOne()
        instance.configfile = StringIO(text)
        instance.realize(args=['-i', 'from_args'])
        self.assertEqual(instance.identifier, "from_args")

    def test_options_afunix(self):
        instance = self._makeOne()
        text = lstrip("""\
        [unix_http_server]
        file=/tmp/supvtest.sock
        username=johndoe
        password=passwordhere

        [supervisord]
        ; ...
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance.configfile = StringIO(text)
        instance.read_config(StringIO(text))
        instance.realize(args=[])

        # unix_http_server
        options = instance.configroot.supervisord
        self.assertEqual(options.server_configs[0]['family'], socket.AF_UNIX)
        self.assertEqual(options.server_configs[0]['file'], '/tmp/supvtest.sock')
        self.assertEqual(options.server_configs[0]['chmod'], 448) # defaults
        self.assertEqual(options.server_configs[0]['chown'], (-1,-1)) # defaults

    def test_options_afunix_chxxx_values_valid(self):
        instance = self._makeOne()
        text = lstrip("""\
        [unix_http_server]
        file=/tmp/supvtest.sock
        username=johndoe
        password=passwordhere
        chmod=0755

        [supervisord]
        ; ...
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance.configfile = StringIO(text)
        instance.read_config(StringIO(text))
        instance.realize(args=[])

        # unix_http_server
        options = instance.configroot.supervisord
        self.assertEqual(options.server_configs[0]['family'], socket.AF_UNIX)
        self.assertEqual(options.server_configs[0]['file'], '/tmp/supvtest.sock')
        self.assertEqual(options.server_configs[0]['chmod'], 493)

    def test_options_afunix_chmod_bad(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [unix_http_server]
        file=/tmp/file
        chmod=NaN
        """)
        instance.configfile = StringIO(text)
        try:
            instance.read_config(StringIO(text))
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0], "Invalid chmod value NaN")

    def test_options_afunix_chown_bad(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [unix_http_server]
        file=/tmp/file
        chown=thisisnotavaliduser
        """)
        instance.configfile = StringIO(text)
        try:
            instance.read_config(StringIO(text))
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                "Invalid sockchown value thisisnotavaliduser")

    def test_options_afunix_no_file(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [unix_http_server]
        ;no file=
        """)
        instance.configfile = StringIO(text)
        try:
            instance.read_config(StringIO(text))
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                "section [unix_http_server] has no file value")

    def test_options_afunix_username_without_password(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [unix_http_server]
        file=/tmp/supvtest.sock
        username=usernamehere
        ;no password=
        chmod=0755
        """)
        instance.configfile = StringIO(text)
        try:
            instance.read_config(StringIO(text))
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                'Section [unix_http_server] contains incomplete '
                'authentication: If a username or a password is '
                'specified, both the username and password must '
                'be specified')

    def test_options_afunix_password_without_username(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [unix_http_server]
        file=/tmp/supvtest.sock
        ;no username=
        password=passwordhere
        chmod=0755
        """)
        instance.configfile = StringIO(text)
        try:
            instance.read_config(StringIO(text))
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                'Section [unix_http_server] contains incomplete '
                'authentication: If a username or a password is '
                'specified, both the username and password must '
                'be specified')

    def test_options_afunix_file_expands_here(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [unix_http_server]
        file=%(here)s/supervisord.sock
        """)
        here = tempfile.mkdtemp()
        supervisord_conf = os.path.join(here, 'supervisord.conf')
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        try:
            instance.configfile = supervisord_conf
            instance.realize(args=[])
        finally:
            shutil.rmtree(here, ignore_errors=True)
        options = instance.configroot.supervisord
        # unix_http_server
        serverconf = options.server_configs[0]
        self.assertEqual(serverconf['family'], socket.AF_UNIX)
        self.assertEqual(serverconf['file'],
                         os.path.join(here, 'supervisord.sock'))

    def test_options_afinet_username_without_password(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [inet_http_server]
        file=/tmp/supvtest.sock
        username=usernamehere
        ;no password=
        chmod=0755
        """)
        instance.configfile = StringIO(text)
        try:
            instance.read_config(StringIO(text))
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                'Section [inet_http_server] contains incomplete '
                'authentication: If a username or a password is '
                'specified, both the username and password must '
                'be specified')

    def test_options_afinet_password_without_username(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [inet_http_server]
        password=passwordhere
        ;no username=
        """)
        instance.configfile = StringIO(text)
        try:
            instance.read_config(StringIO(text))
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                'Section [inet_http_server] contains incomplete '
                'authentication: If a username or a password is '
                'specified, both the username and password must '
                'be specified')

    def test_options_afinet_no_port(self):
        instance = self._makeOne()
        text = lstrip("""\
        [supervisord]

        [inet_http_server]
        ;no port=
        """)
        instance.configfile = StringIO(text)
        try:
            instance.read_config(StringIO(text))
            self.fail("nothing raised")
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                "section [inet_http_server] has no port value")

    def test_cleanup_afunix_unlink(self):
        with tempfile.NamedTemporaryFile(delete=False) as f:
            fn = f.name
            f.write(b'foo')
        instance = self._makeOne()
        instance.unlink_socketfiles = True
        class Server:
            pass
        instance.httpservers = [({'family':socket.AF_UNIX, 'file':fn},
                                 Server())]
        instance.pidfile = ''
        instance.cleanup()
        self.assertFalse(os.path.exists(fn))

    def test_cleanup_afunix_nounlink(self):
        with tempfile.NamedTemporaryFile(delete=False) as f:
            fn = f.name
            f.write(b'foo')
        try:
            instance = self._makeOne()
            class Server:
                pass
            instance.httpservers = [({'family':socket.AF_UNIX, 'file':fn},
                                     Server())]
            instance.pidfile = ''
            instance.unlink_socketfiles = False
            instance.cleanup()
            self.assertTrue(os.path.exists(fn))
        finally:
            try:
                os.unlink(fn)
            except OSError:
                pass

    def test_cleanup_afunix_ignores_oserror_enoent(self):
        notfound = os.path.join(os.path.dirname(__file__), 'notfound')
        with tempfile.NamedTemporaryFile(delete=False) as f:
            socketname = f.name
            f.write(b'foo')
        try:
            instance = self._makeOne()
            instance.unlink_socketfiles = True
            class Server:
                pass
            instance.httpservers = [
                ({'family': socket.AF_UNIX, 'file': notfound}, Server()),
                ({'family': socket.AF_UNIX, 'file': socketname}, Server()),
            ]
            instance.pidfile = ''
            instance.cleanup()
            self.assertFalse(os.path.exists(socketname))
        finally:
            try:
                os.unlink(socketname)
            except OSError:
                pass

    def test_cleanup_removes_pidfile(self):
        with tempfile.NamedTemporaryFile(delete=False) as f:
            pidfile = f.name
            f.write(b'2')
        try:
            instance = self._makeOne()
            instance.pidfile = pidfile
            instance.logger = DummyLogger()
            instance.write_pidfile()
            self.assertTrue(instance.unlink_pidfile)
            instance.cleanup()
            self.assertFalse(os.path.exists(pidfile))
        finally:
            try:
                os.unlink(pidfile)
            except OSError:
                pass

    def test_cleanup_pidfile_ignores_oserror_enoent(self):
        notfound = os.path.join(os.path.dirname(__file__), 'notfound')
        instance = self._makeOne()
        instance.pidfile = notfound
        instance.cleanup()  # shouldn't raise

    def test_cleanup_does_not_remove_pidfile_from_another_supervisord(self):
        with tempfile.NamedTemporaryFile(delete=False) as f:
            pidfile = f.name
            f.write(b'1234')
        try:
            instance = self._makeOne()
            # pidfile exists but unlink_pidfile indicates we did not write it.
            # pidfile must be from another instance of supervisord and
            # shouldn't be removed.
            instance.pidfile = pidfile
            self.assertFalse(instance.unlink_pidfile)
            instance.cleanup()
            self.assertTrue(os.path.exists(pidfile))
        finally:
            try:
                os.unlink(pidfile)
            except OSError:
                pass

    def test_cleanup_closes_poller(self):
        with tempfile.NamedTemporaryFile(delete=False) as f:
            pidfile = f.name
            f.write(b'2')
        try:
            instance = self._makeOne()
            instance.pidfile = pidfile
            poller = DummyPoller({})
            instance.poller = poller
            self.assertFalse(poller.closed)
            instance.cleanup()
            self.assertTrue(poller.closed)
        finally:
            try:
                os.unlink(pidfile)
            except OSError:
                pass

    @patch('os.closerange', Mock())
    def test_cleanup_fds_closes_5_upto_minfds(self):
        instance = self._makeOne()
        instance.minfds = 10
        def f():
            instance.cleanup_fds()
        f()
        os.closerange.assert_called_with(5, 10)

    def test_close_httpservers(self):
        instance = self._makeOne()
        class Server:
            closed = False
            def close(self):
                self.closed = True
        server = Server()
        instance.httpservers = [({}, server)]
        instance.close_httpservers()
        self.assertEqual(server.closed, True)

    def test_close_logger(self):
        instance = self._makeOne()
        logger = DummyLogger()
        instance.logger = logger
        instance.close_logger()
        self.assertEqual(logger.closed, True)

    def test_close_parent_pipes(self):
        instance = self._makeOne()
        closed = []
        def close_fd(fd):
            closed.append(fd)
        instance.close_fd = close_fd
        pipes = {'stdin': 0, 'stdout': 1, 'stderr': 2,
                 'child_stdin': 3, 'child_stdout': 4, 'child_stderr': 5}
        instance.close_parent_pipes(pipes)
        self.assertEqual(sorted(closed), [0, 1, 2])

    def test_close_parent_pipes_ignores_fd_of_none(self):
        instance = self._makeOne()
        closed = []
        def close_fd(fd):
            closed.append(fd)
        instance.close_fd = close_fd
        pipes = {'stdin': None}
        instance.close_parent_pipes(pipes)
        self.assertEqual(closed, [])

    def test_close_child_pipes(self):
        instance = self._makeOne()
        closed = []
        def close_fd(fd):
            closed.append(fd)
        instance.close_fd = close_fd
        pipes = {'stdin': 0, 'stdout': 1, 'stderr': 2,
                 'child_stdin': 3, 'child_stdout': 4, 'child_stderr': 5}
        instance.close_child_pipes(pipes)
        self.assertEqual(sorted(closed), [3, 4, 5])

    def test_close_child_pipes_ignores_fd_of_none(self):
        instance = self._makeOne()
        closed = []
        def close_fd(fd):
            closed.append(fd)
        instance.close_fd = close_fd
        pipes = {'child_stdin': None}
        instance.close_parent_pipes(pipes)
        self.assertEqual(sorted(closed), [])

    def test_reopenlogs(self):
        instance = self._makeOne()
        logger = DummyLogger()
        logger.handlers = [DummyLogger()]
        instance.logger = logger
        instance.reopenlogs()
        self.assertEqual(logger.handlers[0].reopened, True)
        self.assertEqual(logger.data[0], 'supervisord logreopen')

    def test_write_pidfile_ok(self):
        with tempfile.NamedTemporaryFile(delete=True) as f:
            fn = f.name
        self.assertFalse(os.path.exists(fn))
        try:
            instance = self._makeOne()
            instance.logger = DummyLogger()
            instance.pidfile = fn
            instance.write_pidfile()
            self.assertTrue(os.path.exists(fn))
            with open(fn, 'r') as f:
                pid = int(f.read().strip())
            self.assertEqual(pid, os.getpid())
            msg = instance.logger.data[0]
            self.assertTrue(msg.startswith('supervisord started with pid'))
            self.assertTrue(instance.unlink_pidfile)
        finally:
            try:
                os.unlink(fn)
            except OSError:
                pass

    def test_write_pidfile_fail(self):
        fn = '/cannot/possibly/exist'
        instance = self._makeOne()
        instance.logger = DummyLogger()
        instance.pidfile = fn
        instance.write_pidfile()
        msg = instance.logger.data[0]
        self.assertTrue(msg.startswith('could not write pidfile'))
        self.assertFalse(instance.unlink_pidfile)

    def test_close_fd(self):
        instance = self._makeOne()
        innie, outie = os.pipe()
        os.read(innie, 0)  # we can read it while its open
        os.write(outie, as_bytes('foo'))  # we can write to it while its open
        instance.close_fd(innie)
        self.assertRaises(OSError, os.read, innie, 0)
        instance.close_fd(outie)
        self.assertRaises(OSError, os.write, outie, as_bytes('foo'))

    @patch('os.close', Mock(side_effect=OSError))
    def test_close_fd_ignores_oserror(self):
        instance = self._makeOne()
        instance.close_fd(0)  # shouldn't raise

    def test_processes_from_section(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/cat
        priority = 1
        autostart = false
        autorestart = false
        startsecs = 100
        startretries = 100
        user = root
        stdout_logfile = NONE
        stdout_logfile_backups = 1
        stdout_logfile_maxbytes = 100MB
        stdout_events_enabled = true
        stopsignal = KILL
        stopwaitsecs = 100
        killasgroup = true
        exitcodes = 1,4
        redirect_stderr = false
        environment = KEY1=val1,KEY2=val2,KEY3=%(process_num)s
        numprocs = 2
        process_name = %(group_name)s_%(program_name)s_%(process_num)02d
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        self.assertEqual(len(pconfigs), 2)
        pconfig = pconfigs[0]
        self.assertEqual(pconfig.name, 'bar_foo_00')
        self.assertEqual(pconfig.command, '/bin/cat')
        self.assertEqual(pconfig.autostart, False)
        self.assertEqual(pconfig.autorestart, False)
        self.assertEqual(pconfig.startsecs, 100)
        self.assertEqual(pconfig.startretries, 100)
        self.assertEqual(pconfig.uid, 0)
        self.assertEqual(pconfig.stdout_logfile, None)
        self.assertEqual(pconfig.stdout_capture_maxbytes, 0)
        self.assertEqual(pconfig.stdout_logfile_maxbytes, 104857600)
        self.assertEqual(pconfig.stdout_events_enabled, True)
        self.assertEqual(pconfig.stopsignal, signal.SIGKILL)
        self.assertEqual(pconfig.stopasgroup, False)
        self.assertEqual(pconfig.killasgroup, True)
        self.assertEqual(pconfig.stopwaitsecs, 100)
        self.assertEqual(pconfig.exitcodes, [1,4])
        self.assertEqual(pconfig.redirect_stderr, False)
        self.assertEqual(pconfig.environment,
                         {'KEY1':'val1', 'KEY2':'val2', 'KEY3':'0'})

    def test_processes_from_section_host_node_name_expansion(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/foo --host=%(host_node_name)s
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        expected = "/bin/foo --host=" + platform.node()
        self.assertEqual(pconfigs[0].command, expected)

    def test_processes_from_section_process_num_expansion(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        process_name = foo_%(process_num)d
        command = /bin/foo --num=%(process_num)d
        directory = /tmp/foo_%(process_num)d
        stderr_logfile = /tmp/foo_%(process_num)d_stderr
        stdout_logfile = /tmp/foo_%(process_num)d_stdout
        environment = NUM=%(process_num)d
        numprocs = 2
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        self.assertEqual(len(pconfigs), 2)
        for num in (0, 1):
            self.assertEqual(pconfigs[num].name, 'foo_%d' % num)
            self.assertEqual(pconfigs[num].command,
                             "/bin/foo --num=%d" % num)
            self.assertEqual(pconfigs[num].directory, '/tmp/foo_%d' % num)
            self.assertEqual(pconfigs[num].stderr_logfile,
                             '/tmp/foo_%d_stderr' % num)
            self.assertEqual(pconfigs[num].stdout_logfile,
                             '/tmp/foo_%d_stdout' % num)
            self.assertEqual(pconfigs[num].environment, {'NUM': '%d' % num})

    def test_processes_from_section_numprocs_expansion(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        process_name = foo_%(process_num)d
        command = /bin/foo --numprocs=%(numprocs)d
        numprocs = 2
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        self.assertEqual(len(pconfigs), 2)
        for num in (0, 1):
            self.assertEqual(pconfigs[num].name, 'foo_%d' % num)
            self.assertEqual(pconfigs[num].command,
                             "/bin/foo --numprocs=%d" % 2)

    def test_processes_from_section_expands_directory(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/cat
        directory = /tmp/%(ENV_FOO)s
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.expansions = {'ENV_FOO': 'bar'}
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        self.assertEqual(pconfigs[0].directory, '/tmp/bar')

    def test_processes_from_section_environment_variables_expansion(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/foo --path='%(ENV_PATH)s'
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        expected = "/bin/foo --path='%s'" % os.environ['PATH']
        self.assertEqual(pconfigs[0].command, expected)

    def test_processes_from_section_expands_env_in_environment(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/foo
        environment = PATH='/foo/bar:%(ENV_PATH)s'
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        expected = "/foo/bar:%s" % os.environ['PATH']
        self.assertEqual(pconfigs[0].environment['PATH'], expected)

    def test_processes_from_section_redirect_stderr_with_filename(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/foo
        redirect_stderr = true
        stderr_logfile = /tmp/logfile
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        self.assertEqual(instance.parse_warnings[0],
            'For [program:foo], redirect_stderr=true but stderr_logfile has '
            'also been set to a filename, the filename has been ignored')
        self.assertEqual(pconfigs[0].stderr_logfile, None)

    def test_processes_from_section_rewrites_stdout_logfile_of_syslog(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/foo
        stdout_logfile = syslog
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        self.assertEqual(instance.parse_warnings[0],
            'For [program:foo], stdout_logfile=syslog but this is deprecated '
            'and will be removed. Use stdout_syslog=true to enable syslog '
            'instead.')
        self.assertEqual(pconfigs[0].stdout_logfile, None)
        self.assertEqual(pconfigs[0].stdout_syslog, True)

    def test_processes_from_section_rewrites_stderr_logfile_of_syslog(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/foo
        stderr_logfile = syslog
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        self.assertEqual(instance.parse_warnings[0],
            'For [program:foo], stderr_logfile=syslog but this is deprecated '
            'and will be removed. Use stderr_syslog=true to enable syslog '
            'instead.')
        self.assertEqual(pconfigs[0].stderr_logfile, None)
        self.assertEqual(pconfigs[0].stderr_syslog, True)

    def test_processes_from_section_redirect_stderr_with_auto(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/foo
        redirect_stderr = true
        stderr_logfile = auto
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        self.assertEqual(instance.parse_warnings, [])
        self.assertEqual(pconfigs[0].stderr_logfile, None)

    def test_processes_from_section_accepts_number_for_stopsignal(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/foo
        stopsignal = %d
        """ % signal.SIGQUIT)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        self.assertEqual(instance.parse_warnings, [])
        self.assertEqual(pconfigs[0].stopsignal, signal.SIGQUIT)

    def test_options_with_environment_expansions(self):
        text = lstrip("""\
        [supervisord]
        logfile = %(ENV_HOME)s/supervisord.log
        logfile_maxbytes = %(ENV_SUPD_LOGFILE_MAXBYTES)s
        logfile_backups = %(ENV_SUPD_LOGFILE_BACKUPS)s
        loglevel = %(ENV_SUPD_LOGLEVEL)s
        nodaemon = %(ENV_SUPD_NODAEMON)s
        minfds = %(ENV_SUPD_MINFDS)s
        minprocs = %(ENV_SUPD_MINPROCS)s
        umask = %(ENV_SUPD_UMASK)s
        identifier = supervisor_%(ENV_USER)s
        nocleanup = %(ENV_SUPD_NOCLEANUP)s
        childlogdir = %(ENV_HOME)s
        strip_ansi = %(ENV_SUPD_STRIP_ANSI)s
        environment = FAKE_ENV_VAR=/some/path

        [inet_http_server]
        port=*:%(ENV_HTSRV_PORT)s
        username=%(ENV_HTSRV_USER)s
        password=%(ENV_HTSRV_PASS)s

        [program:cat1]
        command=%(ENV_CAT1_COMMAND)s --logdir=%(ENV_CAT1_COMMAND_LOGDIR)s
        priority=%(ENV_CAT1_PRIORITY)s
        autostart=%(ENV_CAT1_AUTOSTART)s
        user=%(ENV_CAT1_USER)s
        stdout_logfile=%(ENV_CAT1_STDOUT_LOGFILE)s
        stdout_logfile_maxbytes = %(ENV_CAT1_STDOUT_LOGFILE_MAXBYTES)s
        stdout_logfile_backups = %(ENV_CAT1_STDOUT_LOGFILE_BACKUPS)s
        stopsignal=%(ENV_CAT1_STOPSIGNAL)s
        stopwaitsecs=%(ENV_CAT1_STOPWAIT)s
        startsecs=%(ENV_CAT1_STARTWAIT)s
        startretries=%(ENV_CAT1_STARTRETRIES)s
        directory=%(ENV_CAT1_DIR)s
        umask=%(ENV_CAT1_UMASK)s
        """)
        from supervisor import datatypes
        from supervisor.options import UnhosedConfigParser
        instance = self._makeOne()
        instance.environ_expansions = {
            'ENV_HOME': tempfile.gettempdir(),
            'ENV_USER': 'johndoe',
            'ENV_HTSRV_PORT': '9210',
            'ENV_HTSRV_USER': 'someuser',
            'ENV_HTSRV_PASS': 'passwordhere',
            'ENV_SUPD_LOGFILE_MAXBYTES': '51MB',
            'ENV_SUPD_LOGFILE_BACKUPS': '10',
            'ENV_SUPD_LOGLEVEL': 'info',
            'ENV_SUPD_NODAEMON': 'false',
            'ENV_SUPD_SILENT': 'false',
            'ENV_SUPD_MINFDS': '1024',
            'ENV_SUPD_MINPROCS': '200',
            'ENV_SUPD_UMASK': '002',
            'ENV_SUPD_NOCLEANUP': 'true',
            'ENV_SUPD_STRIP_ANSI': 'false',
            'ENV_CAT1_COMMAND': '/bin/customcat',
            'ENV_CAT1_COMMAND_LOGDIR': '/path/to/logs',
            'ENV_CAT1_PRIORITY': '3',
            'ENV_CAT1_AUTOSTART': 'true',
            'ENV_CAT1_USER': 'root',  # resolved to uid
            'ENV_CAT1_STDOUT_LOGFILE': '/tmp/cat.log',
            'ENV_CAT1_STDOUT_LOGFILE_MAXBYTES': '78KB',
            'ENV_CAT1_STDOUT_LOGFILE_BACKUPS': '2',
            'ENV_CAT1_STOPSIGNAL': 'KILL',
            'ENV_CAT1_STOPWAIT': '5',
            'ENV_CAT1_STARTWAIT': '5',
            'ENV_CAT1_STARTRETRIES': '10',
            'ENV_CAT1_DIR': '/tmp',
            'ENV_CAT1_UMASK': '002',
        }
        config = UnhosedConfigParser()
        config.expansions = instance.environ_expansions
        config.read_string(text)
        instance.configfile = StringIO(text)
        instance.read_config(StringIO(text))
        instance.realize(args=[])
        # supervisord
        self.assertEqual(instance.logfile,
                         '%(ENV_HOME)s/supervisord.log' % config.expansions)
        self.assertEqual(instance.identifier,
                         'supervisor_%(ENV_USER)s' % config.expansions)
        self.assertEqual(instance.logfile_maxbytes, 53477376)
        self.assertEqual(instance.logfile_backups, 10)
        self.assertEqual(instance.loglevel, LevelsByName.INFO)
        self.assertEqual(instance.nodaemon, False)
        self.assertEqual(instance.silent, False)
        self.assertEqual(instance.minfds, 1024)
        self.assertEqual(instance.minprocs, 200)
        self.assertEqual(instance.nocleanup, True)
        self.assertEqual(instance.childlogdir, config.expansions['ENV_HOME'])
        self.assertEqual(instance.strip_ansi, False)
        # inet_http_server
        options = instance.configroot.supervisord
        self.assertEqual(options.server_configs[0]['family'], socket.AF_INET)
        self.assertEqual(options.server_configs[0]['host'], '')
        self.assertEqual(options.server_configs[0]['port'], 9210)
        self.assertEqual(options.server_configs[0]['username'], 'someuser')
        self.assertEqual(options.server_configs[0]['password'], 'passwordhere')
        # cat1
        cat1 = options.process_group_configs[0]
        self.assertEqual(cat1.name, 'cat1')
        self.assertEqual(cat1.priority, 3)
        self.assertEqual(len(cat1.process_configs), 1)
        proc1 = cat1.process_configs[0]
        self.assertEqual(proc1.name, 'cat1')
        self.assertEqual(proc1.command,
                         '/bin/customcat --logdir=/path/to/logs')
        self.assertEqual(proc1.priority, 3)
        self.assertEqual(proc1.autostart, True)
        self.assertEqual(proc1.autorestart, datatypes.RestartWhenExitUnexpected)
        self.assertEqual(proc1.startsecs, 5)
        self.assertEqual(proc1.startretries, 10)
        self.assertEqual(proc1.uid, 0)
        self.assertEqual(proc1.stdout_logfile, '/tmp/cat.log')
        self.assertEqual(proc1.stopsignal, signal.SIGKILL)
        self.assertEqual(proc1.stopwaitsecs, 5)
        self.assertEqual(proc1.stopasgroup, False)
        self.assertEqual(proc1.killasgroup, False)
        self.assertEqual(proc1.stdout_logfile_maxbytes,
                         datatypes.byte_size('78KB'))
        self.assertEqual(proc1.stdout_logfile_backups, 2)
        self.assertEqual(proc1.exitcodes, [0])
        self.assertEqual(proc1.directory, '/tmp')
        self.assertEqual(proc1.umask, 2)
        self.assertEqual(proc1.environment, dict(FAKE_ENV_VAR='/some/path'))

    def test_options_supervisord_section_expands_here(self):
        instance = self._makeOne()
        text = lstrip('''\
        [supervisord]
        childlogdir=%(here)s
        directory=%(here)s
        logfile=%(here)s/supervisord.log
        pidfile=%(here)s/supervisord.pid
        ''')
        here = tempfile.mkdtemp()
        supervisord_conf = os.path.join(here, 'supervisord.conf')
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        try:
            instance.configfile = supervisord_conf
            instance.realize(args=[])
        finally:
            shutil.rmtree(here, ignore_errors=True)
        self.assertEqual(instance.childlogdir, os.path.join(here))
        self.assertEqual(instance.directory, os.path.join(here))
        self.assertEqual(instance.logfile,
                         os.path.join(here, 'supervisord.log'))
        self.assertEqual(instance.pidfile,
                         os.path.join(here, 'supervisord.pid'))

    def test_options_program_section_expands_env_from_supervisord_sect(self):
        instance = self._makeOne()
        text = lstrip('''
        [supervisord]
        environment=CMD=/bin/from/supervisord/section

        [program:cmd]
        command=%(ENV_CMD)s
        ''')
        here = tempfile.mkdtemp()
        supervisord_conf = os.path.join(here, 'supervisord.conf')
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        try:
            instance.configfile = supervisord_conf
            instance.realize(args=[])
        finally:
            shutil.rmtree(here, ignore_errors=True)
        options = instance.configroot.supervisord
        group = options.process_group_configs[0]
        self.assertEqual(group.name, 'cmd')
        proc = group.process_configs[0]
        self.assertEqual(proc.command,
                         os.path.join(here, '/bin/from/supervisord/section'))

    def test_options_program_section_expands_env_from_program_sect(self):
        instance = self._makeOne()
        text = lstrip('''
        [supervisord]
        environment=CMD=/bin/from/supervisord/section

        [program:cmd]
        command=%(ENV_CMD)s
        environment=CMD=/bin/from/program/section
        ''')
        here = tempfile.mkdtemp()
        supervisord_conf = os.path.join(here, 'supervisord.conf')
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        try:
            instance.configfile = supervisord_conf
            instance.realize(args=[])
        finally:
            shutil.rmtree(here, ignore_errors=True)
        options = instance.configroot.supervisord
        group = options.process_group_configs[0]
        self.assertEqual(group.name, 'cmd')
        proc = group.process_configs[0]
        self.assertEqual(proc.command,
                         os.path.join(here, '/bin/from/program/section'))

    def test_options_program_section_expands_here(self):
        instance = self._makeOne()
        text = lstrip('''
        [supervisord]

        [program:cat]
        command=%(here)s/bin/cat
        directory=%(here)s/thedirectory
        environment=FOO=%(here)s/foo
        serverurl=unix://%(here)s/supervisord.sock
        stdout_logfile=%(here)s/stdout.log
        stderr_logfile=%(here)s/stderr.log
        ''')
        here = tempfile.mkdtemp()
        supervisord_conf = os.path.join(here, 'supervisord.conf')
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        try:
            instance.configfile = supervisord_conf
            instance.realize(args=[])
        finally:
            shutil.rmtree(here, ignore_errors=True)
        options = instance.configroot.supervisord
        group = options.process_group_configs[0]
        self.assertEqual(group.name, 'cat')
        proc = group.process_configs[0]
        self.assertEqual(proc.directory, os.path.join(here, 'thedirectory'))
        self.assertEqual(proc.command, os.path.join(here, 'bin/cat'))
        self.assertEqual(proc.environment,
                         {'FOO': os.path.join(here, 'foo')})
        self.assertEqual(proc.serverurl,
                         'unix://' + os.path.join(here, 'supervisord.sock'))
        self.assertEqual(proc.stdout_logfile,
                         os.path.join(here, 'stdout.log'))
        self.assertEqual(proc.stderr_logfile,
                         os.path.join(here, 'stderr.log'))

    def test_options_eventlistener_section_expands_here(self):
        instance = self._makeOne()
        text = lstrip('''
        [supervisord]

        [eventlistener:memmon]
        events=TICK_60
        command=%(here)s/bin/memmon
        directory=%(here)s/thedirectory
        environment=FOO=%(here)s/foo
        serverurl=unix://%(here)s/supervisord.sock
        stdout_logfile=%(here)s/stdout.log
        stderr_logfile=%(here)s/stderr.log
        ''')
        here = tempfile.mkdtemp()
        supervisord_conf = os.path.join(here, 'supervisord.conf')
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        try:
            instance.configfile = supervisord_conf
            instance.realize(args=[])
        finally:
            shutil.rmtree(here, ignore_errors=True)
        options = instance.configroot.supervisord
        group = options.process_group_configs[0]
        self.assertEqual(group.name, 'memmon')
        proc = group.process_configs[0]
        self.assertEqual(proc.directory, os.path.join(here, 'thedirectory'))
        self.assertEqual(proc.command, os.path.join(here, 'bin/memmon'))
        self.assertEqual(proc.environment,
                         {'FOO': os.path.join(here, 'foo')})
        self.assertEqual(proc.serverurl,
                         'unix://' + os.path.join(here, 'supervisord.sock'))
        self.assertEqual(proc.stdout_logfile,
                         os.path.join(here, 'stdout.log'))
        self.assertEqual(proc.stderr_logfile,
                         os.path.join(here, 'stderr.log'))

    def test_options_expands_combined_expansions(self):
        instance = self._makeOne()
        text = lstrip('''
        [supervisord]
        logfile = %(here)s/%(ENV_LOGNAME)s.log

        [program:cat]
        ''')
        text += ('command = %(here)s/bin/cat --foo=%(ENV_FOO)s '
                 '--num=%(process_num)d --node=%(host_node_name)s')
        here = tempfile.mkdtemp()
        supervisord_conf = os.path.join(here, 'supervisord.conf')
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        try:
            instance.environ_expansions = {
                'ENV_LOGNAME': 'mainlog',
                'ENV_FOO': 'bar'
            }
            instance.configfile = supervisord_conf
            instance.realize(args=[])
        finally:
            shutil.rmtree(here, ignore_errors=True)
        section = instance.configroot.supervisord
        self.assertEqual(section.logfile, os.path.join(here, 'mainlog.log'))
        cat_group = section.process_group_configs[0]
        cat_0 = cat_group.process_configs[0]
        expected = '%s --foo=bar --num=0 --node=%s' % (
            os.path.join(here, 'bin/cat'),
            platform.node()
        )
        self.assertEqual(cat_0.command, expected)

    def test_options_error_handler_shows_main_filename(self):
        dirname = tempfile.mkdtemp()
        supervisord_conf = os.path.join(dirname, 'supervisord.conf')
        text = lstrip('''
        [supervisord]

        [program:cat]
        command = /bin/cat
        stopsignal = NOTASIGNAL
        ''')
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        instance = self._makeOne()
        try:
            instance.configfile = supervisord_conf
            try:
                instance.process_config(do_usage=False)
                self.fail('nothing raised')
            except ValueError as e:
                self.assertEqual(str(e.args[0]),
                    "value 'NOTASIGNAL' is not a valid signal name "
                    "in section 'program:cat' (file: %r)" % supervisord_conf)
        finally:
            shutil.rmtree(dirname, ignore_errors=True)

    def test_options_error_handler_shows_included_filename(self):
        dirname = tempfile.mkdtemp()
        supervisord_conf = os.path.join(dirname, "supervisord.conf")
        text = lstrip("""\
        [supervisord]

        [include]
        files=%s/conf.d/*.conf
        """ % dirname)
        with open(supervisord_conf, 'w') as f:
            f.write(text)
        conf_d = os.path.join(dirname, "conf.d")
        os.mkdir(conf_d)
        included_conf = os.path.join(conf_d, "included.conf")
        text = lstrip('''\
        [program:cat]
        command = /bin/cat
        stopsignal = NOTASIGNAL
        ''')
        with open(included_conf, 'w') as f:
            f.write(text)
        instance = self._makeOne()
        try:
            instance.configfile = supervisord_conf
            try:
                instance.process_config(do_usage=False)
                self.fail('nothing raised')
            except ValueError as e:
                self.assertEqual(str(e.args[0]),
                    "value 'NOTASIGNAL' is not a valid signal name "
                    "in section 'program:cat' (file: %r)" % included_conf)
        finally:
            shutil.rmtree(dirname, ignore_errors=True)

    def test_processes_from_section_bad_program_name_spaces(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:spaces are bad]
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        self.assertRaises(ValueError, instance.processes_from_section,
                          config, 'program:spaces are bad', None)

    def test_processes_from_section_bad_program_name_colons(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:colons:are:bad]
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        self.assertRaises(ValueError, instance.processes_from_section,
                          config, 'program:colons:are:bad', None)

    def test_processes_from_section_no_procnum_in_processname(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/cat
        numprocs = 2
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        self.assertRaises(ValueError, instance.processes_from_section,
                          config, 'program:foo', None)

    def test_processes_from_section_no_command(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        try:
            instance.processes_from_section(config, 'program:foo', None)
            self.fail('nothing raised')
        except ValueError as exc:
            self.assertTrue(exc.args[0].startswith(
                'program section program:foo does not specify a command'))

    def test_processes_from_section_missing_replacement_in_process_name(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/cat
        process_name = %(not_there)s
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        self.assertRaises(ValueError, instance.processes_from_section,
                          config, 'program:foo', None)

    def test_processes_from_section_bad_expression_in_process_name(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/cat
        process_name = %(program_name)
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        self.assertRaises(ValueError, instance.processes_from_section,
                          config, 'program:foo', None)

    def test_processes_from_section_bad_chars_in_process_name(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/cat
        process_name = colons:are:bad
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        self.assertRaises(ValueError, instance.processes_from_section,
                          config, 'program:foo', None)

    def test_processes_from_section_stopasgroup_implies_killasgroup(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/cat
        process_name = %(program_name)s
        stopasgroup = true
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        pconfigs = instance.processes_from_section(config, 'program:foo', 'bar')
        self.assertEqual(len(pconfigs), 1)
        pconfig = pconfigs[0]
        self.assertEqual(pconfig.stopasgroup, True)
        self.assertEqual(pconfig.killasgroup, True)

    def test_processes_from_section_killasgroup_mismatch_w_stopasgroup(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/cat
        process_name = %(program_name)s
        stopasgroup = true
        killasgroup = false
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        self.assertRaises(ValueError, instance.processes_from_section,
                          config, 'program:foo', None)

    def test_processes_from_section_unexpected_end_of_key_value_pairs(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/cat
        environment = KEY1=val1,KEY2=val2,KEY3
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        try:
            instance.processes_from_section(config, 'program:foo', None)
        except ValueError as e:
            self.assertTrue(
                "Unexpected end of key/value pairs in value "
                "'KEY1=val1,KEY2=val2,KEY3' in section 'program:foo'"
                in str(e))
        else:
            self.fail('instance.processes_from_section should '
                      'raise a ValueError')

    def test_processes_from_section_shows_conf_filename_on_valueerror(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        ;no command
        """)
        with tempfile.NamedTemporaryFile(mode="w+") as f:
            try:
                f.write(text)
                f.flush()
                from supervisor.options import UnhosedConfigParser
                config = UnhosedConfigParser()
                config.read(f.name)
                instance.processes_from_section(config, 'program:foo', None)
            except ValueError as e:
                self.assertEqual(e.args[0],
                    "program section program:foo does not specify a command "
                    "in section 'program:foo' (file: %r)" % f.name)
            else:
                self.fail('nothing raised')

    def test_processes_from_section_autolog_without_rollover(self):
        instance = self._makeOne()
        text = lstrip("""\
        [program:foo]
        command = /bin/foo
        stdout_logfile = AUTO
        stdout_logfile_maxbytes = 0
        stderr_logfile = AUTO
        stderr_logfile_maxbytes = 0
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        instance.logger = DummyLogger()
        config.read_string(text)
        instance.processes_from_section(config, 'program:foo', None)
        self.assertEqual(instance.parse_warnings[0],
            'For [program:foo], AUTO logging used for stdout_logfile '
            'without rollover, set maxbytes > 0 to avoid filling up '
            'filesystem unintentionally')
        self.assertEqual(instance.parse_warnings[1],
            'For [program:foo], AUTO logging used for stderr_logfile '
            'without rollover, set maxbytes > 0 to avoid filling up '
            'filesystem unintentionally')

    def test_homogeneous_process_groups_from_parser(self):
        text = lstrip("""\
        [program:many]
        process_name = %(program_name)s_%(process_num)s
        command = /bin/cat
        numprocs = 2
        priority = 1
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        gconfigs = instance.process_groups_from_parser(config)
        self.assertEqual(len(gconfigs), 1)
        gconfig = gconfigs[0]
        self.assertEqual(gconfig.name, 'many')
        self.assertEqual(gconfig.priority, 1)
        self.assertEqual(len(gconfig.process_configs), 2)

    def test_event_listener_pools_from_parser(self):
        text = lstrip("""\
        [eventlistener:dog]
        events=PROCESS_COMMUNICATION
        process_name = %(program_name)s_%(process_num)s
        command = /bin/dog
        numprocs = 2
        priority = 1

        [eventlistener:cat]
        events=PROCESS_COMMUNICATION
        process_name = %(program_name)s_%(process_num)s
        command = /bin/cat
        numprocs = 3

        [eventlistener:biz]
        events=PROCESS_COMMUNICATION
        process_name = %(program_name)s_%(process_num)s
        command = /bin/biz
        numprocs = 2
        """)
        from supervisor.options import UnhosedConfigParser
        from supervisor.dispatchers import default_handler
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        gconfigs = instance.process_groups_from_parser(config)
        self.assertEqual(len(gconfigs), 3)
        gconfig1 = gconfigs[0]
        self.assertEqual(gconfig1.name, 'biz')
        self.assertEqual(gconfig1.result_handler, default_handler)
        self.assertEqual(len(gconfig1.process_configs), 2)
        gconfig1 = gconfigs[1]
        self.assertEqual(gconfig1.name, 'cat')
        self.assertEqual(gconfig1.priority, -1)
        self.assertEqual(gconfig1.result_handler, default_handler)
        self.assertEqual(len(gconfig1.process_configs), 3)
        gconfig1 = gconfigs[2]
        self.assertEqual(gconfig1.name, 'dog')
        self.assertEqual(gconfig1.priority, 1)
        self.assertEqual(gconfig1.result_handler, default_handler)
        self.assertEqual(len(gconfig1.process_configs), 2)

    def test_event_listener_pools_from_parser_with_environment_expansions(self):
        text = lstrip("""\
        [eventlistener:dog]
        events=PROCESS_COMMUNICATION
        process_name = %(ENV_EL1_PROCNAME)s_%(program_name)s_%(process_num)s
        command = %(ENV_EL1_COMMAND)s
        numprocs = %(ENV_EL1_NUMPROCS)s
        priority = %(ENV_EL1_PRIORITY)s

        [eventlistener:cat]
        events=PROCESS_COMMUNICATION
        process_name = %(program_name)s_%(process_num)s
        command = /bin/cat
        numprocs = 3
        """)
        from supervisor.options import UnhosedConfigParser
        from supervisor.dispatchers import default_handler
        instance = self._makeOne()
        instance.environ_expansions = {
            'ENV_HOME': tempfile.gettempdir(),
            'ENV_USER': 'johndoe',
            'ENV_EL1_PROCNAME': 'myeventlistener',
            'ENV_EL1_COMMAND': '/bin/dog',
            'ENV_EL1_NUMPROCS': '2',
            'ENV_EL1_PRIORITY': '1',
        }
        config = UnhosedConfigParser()
        config.expansions = instance.environ_expansions
        config.read_string(text)
        gconfigs = instance.process_groups_from_parser(config)
        self.assertEqual(len(gconfigs), 2)
        gconfig0 = gconfigs[0]
        self.assertEqual(gconfig0.name, 'cat')
        self.assertEqual(gconfig0.priority, -1)
        self.assertEqual(gconfig0.result_handler, default_handler)
        self.assertEqual(len(gconfig0.process_configs), 3)
        gconfig1 = gconfigs[1]
        self.assertEqual(gconfig1.name, 'dog')
        self.assertEqual(gconfig1.priority, 1)
        self.assertEqual(gconfig1.result_handler, default_handler)
        self.assertEqual(len(gconfig1.process_configs), 2)
        dog0 = gconfig1.process_configs[0]
        self.assertEqual(dog0.name, 'myeventlistener_dog_0')
        self.assertEqual(dog0.command, '/bin/dog')
        self.assertEqual(dog0.priority, 1)
        dog1 = gconfig1.process_configs[1]
        self.assertEqual(dog1.name, 'myeventlistener_dog_1')
        self.assertEqual(dog1.command, '/bin/dog')
        self.assertEqual(dog1.priority, 1)

    def test_event_listener_pool_disallows_buffer_size_zero(self):
        text = lstrip("""\
        [eventlistener:dog]
        events=EVENT
        command = /bin/dog
        buffer_size = 0
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        try:
            instance.process_groups_from_parser(config)
            self.fail('nothing raised')
        except ValueError as exc:
            self.assertEqual(exc.args[0], '[eventlistener:dog] section sets '
                'invalid buffer_size (0)')

    def test_event_listener_pool_disallows_redirect_stderr(self):
        text = lstrip("""\
        [eventlistener:dog]
        events=PROCESS_COMMUNICATION
        command = /bin/dog
        redirect_stderr = True
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        try:
            instance.process_groups_from_parser(config)
            self.fail('nothing raised')
        except ValueError as exc:
            self.assertEqual(exc.args[0], '[eventlistener:dog] section sets '
                'redirect_stderr=true but this is not allowed because it '
                'will interfere with the eventlistener protocol')

    def test_event_listener_pool_with_event_result_handler(self):
        text = lstrip("""\
        [eventlistener:dog]
        events=PROCESS_COMMUNICATION
        command = /bin/dog
        result_handler = supervisor.tests.base:dummy_handler
        """)
        from supervisor.options import UnhosedConfigParser
        from supervisor.tests.base import dummy_handler
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        gconfigs = instance.process_groups_from_parser(config)
        self.assertEqual(len(gconfigs), 1)
        gconfig1 = gconfigs[0]
        self.assertEqual(gconfig1.result_handler, dummy_handler)

    def test_event_listener_pool_result_handler_unimportable(self):
        text = lstrip("""\
        [eventlistener:cat]
        events=PROCESS_COMMUNICATION
        command = /bin/cat
        result_handler = supervisor.tests.base:nonexistent
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        try:
            instance.process_groups_from_parser(config)
            self.fail('nothing raised')
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                'supervisor.tests.base:nonexistent cannot be '
                'resolved within [eventlistener:cat]')

    def test_event_listener_pool_noeventsline(self):
        text = lstrip("""\
        [eventlistener:dog]
        process_name = %(program_name)s_%(process_num)s
        command = /bin/dog
        numprocs = 2
        priority = 1
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_event_listener_pool_unknown_eventtype(self):
        text = lstrip("""\
        [eventlistener:dog]
        events=PROCESS_COMMUNICATION,THIS_EVENT_TYPE_DOESNT_EXIST
        process_name = %(program_name)s_%(process_num)s
        command = /bin/dog
        numprocs = 2
        priority = 1
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_fcgi_programs_from_parser(self):
        from supervisor.options import FastCGIGroupConfig
        from supervisor.options import FastCGIProcessConfig
        text = lstrip("""\
        [fcgi-program:foo]
        socket = unix:///tmp/%(program_name)s.sock
        socket_owner = testuser:testgroup
        socket_mode = 0666
        socket_backlog = 32676
        process_name = %(program_name)s_%(process_num)s
        command = /bin/foo
        numprocs = 2
        priority = 1

        [fcgi-program:bar]
        socket = unix:///tmp/%(program_name)s.sock
        process_name = %(program_name)s_%(process_num)s
        command = /bin/bar
        user = testuser
        numprocs = 3

        [fcgi-program:flub]
        socket = unix:///tmp/%(program_name)s.sock
        command = /bin/flub

        [fcgi-program:cub]
        socket = tcp://localhost:6000
        command = /bin/cub
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()

        # Patch pwd and grp module functions to give us sentinel
        # uid/gid values so that the test does not depend on
        # any specific system users
        pwd_mock = Mock()
        pwd_mock.return_value = (None, None, sentinel.uid, sentinel.gid)
        grp_mock = Mock()
        grp_mock.return_value = (None, None, sentinel.gid)
        @patch('pwd.getpwuid', pwd_mock)
        @patch('pwd.getpwnam', pwd_mock)
        @patch('grp.getgrnam', grp_mock)
        def get_process_groups(instance, config):
            return instance.process_groups_from_parser(config)

        gconfigs = get_process_groups(instance, config)

        exp_owner = (sentinel.uid, sentinel.gid)

        self.assertEqual(len(gconfigs), 4)

        gconf_foo = gconfigs[0]
        self.assertEqual(gconf_foo.__class__, FastCGIGroupConfig)
        self.assertEqual(gconf_foo.name, 'foo')
        self.assertEqual(gconf_foo.priority, 1)
        self.assertEqual(gconf_foo.socket_config.url, 'unix:///tmp/foo.sock')
        self.assertEqual(exp_owner, gconf_foo.socket_config.get_owner())
        self.assertEqual(0o666, gconf_foo.socket_config.get_mode())
        self.assertEqual(32676, gconf_foo.socket_config.get_backlog())
        self.assertEqual(len(gconf_foo.process_configs), 2)
        pconfig_foo = gconf_foo.process_configs[0]
        self.assertEqual(pconfig_foo.__class__, FastCGIProcessConfig)

        gconf_bar = gconfigs[1]
        self.assertEqual(gconf_bar.name, 'bar')
        self.assertEqual(gconf_bar.priority, 999)
        self.assertEqual(gconf_bar.socket_config.url, 'unix:///tmp/bar.sock')
        self.assertEqual(exp_owner, gconf_bar.socket_config.get_owner())
        self.assertEqual(0o700, gconf_bar.socket_config.get_mode())
        self.assertEqual(len(gconf_bar.process_configs), 3)

        gconf_cub = gconfigs[2]
        self.assertEqual(gconf_cub.name, 'cub')
        self.assertEqual(gconf_cub.socket_config.url, 'tcp://localhost:6000')
        self.assertEqual(len(gconf_cub.process_configs), 1)

        gconf_flub = gconfigs[3]
        self.assertEqual(gconf_flub.name, 'flub')
        self.assertEqual(gconf_flub.socket_config.url, 'unix:///tmp/flub.sock')
        self.assertEqual(None, gconf_flub.socket_config.get_owner())
        self.assertEqual(0o700, gconf_flub.socket_config.get_mode())
        self.assertEqual(len(gconf_flub.process_configs), 1)

    def test_fcgi_programs_from_parser_with_environment_expansions(self):
        from supervisor.options import FastCGIGroupConfig
        from supervisor.options import FastCGIProcessConfig
        text = lstrip("""\
        [fcgi-program:foo]
        socket = unix:///tmp/%(program_name)s%(ENV_FOO_SOCKET_EXT)s
        socket_owner = %(ENV_FOO_SOCKET_USER)s:testgroup
        socket_mode = %(ENV_FOO_SOCKET_MODE)s
        socket_backlog = %(ENV_FOO_SOCKET_BACKLOG)s
        process_name = %(ENV_FOO_PROCESS_PREFIX)s_%(program_name)s_%(process_num)s
        command = /bin/foo --arg1=%(ENV_FOO_COMMAND_ARG1)s
        numprocs = %(ENV_FOO_NUMPROCS)s
        priority = %(ENV_FOO_PRIORITY)s
        """)
        from supervisor.options import UnhosedConfigParser
        instance = self._makeOne()
        instance.environ_expansions = {
            'ENV_HOME': '/tmp',
            'ENV_SERVER_PORT': '9210',
            'ENV_FOO_SOCKET_EXT': '.usock',
            'ENV_FOO_SOCKET_USER': 'testuser',
            'ENV_FOO_SOCKET_MODE': '0666',
            'ENV_FOO_SOCKET_BACKLOG': '32676',
            'ENV_FOO_PROCESS_PREFIX': 'fcgi-',
            'ENV_FOO_COMMAND_ARG1': 'bar',
            'ENV_FOO_NUMPROCS': '2',
            'ENV_FOO_PRIORITY': '1',
        }
        config = UnhosedConfigParser()
        config.expansions = instance.environ_expansions
        config.read_string(text)

        # Patch pwd and grp module functions to give us sentinel
        # uid/gid values so that the test does not depend on
        # any specific system users
        pwd_mock = Mock()
        pwd_mock.return_value = (None, None, sentinel.uid, sentinel.gid)
        grp_mock = Mock()
        grp_mock.return_value = (None, None, sentinel.gid)
        @patch('pwd.getpwuid', pwd_mock)
        @patch('pwd.getpwnam', pwd_mock)
        @patch('grp.getgrnam', grp_mock)
        def get_process_groups(instance, config):
            return instance.process_groups_from_parser(config)

        gconfigs = get_process_groups(instance, config)

        exp_owner = (sentinel.uid, sentinel.gid)

        self.assertEqual(len(gconfigs), 1)

        gconf_foo = gconfigs[0]
        self.assertEqual(gconf_foo.__class__, FastCGIGroupConfig)
        self.assertEqual(gconf_foo.name, 'foo')
        self.assertEqual(gconf_foo.priority, 1)
        self.assertEqual(gconf_foo.socket_config.url,
                         'unix:///tmp/foo.usock')
        self.assertEqual(exp_owner, gconf_foo.socket_config.get_owner())
        self.assertEqual(0o666, gconf_foo.socket_config.get_mode())
        self.assertEqual(32676, gconf_foo.socket_config.get_backlog())
        self.assertEqual(len(gconf_foo.process_configs), 2)
        pconfig_foo = gconf_foo.process_configs[0]
        self.assertEqual(pconfig_foo.__class__, FastCGIProcessConfig)
        self.assertEqual(pconfig_foo.command, '/bin/foo --arg1=bar')

    def test_fcgi_program_no_socket(self):
        text = lstrip("""\
        [fcgi-program:foo]
        process_name = %(program_name)s_%(process_num)s
        command = /bin/foo
        numprocs = 2
        priority = 1
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_fcgi_program_unknown_socket_protocol(self):
        text = lstrip("""\
        [fcgi-program:foo]
        socket=junk://blah
        process_name = %(program_name)s_%(process_num)s
        command = /bin/foo
        numprocs = 2
        priority = 1
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_fcgi_program_rel_unix_sock_path(self):
        text = lstrip("""\
        [fcgi-program:foo]
        socket=unix://relative/path
        process_name = %(program_name)s_%(process_num)s
        command = /bin/foo
        numprocs = 2
        priority = 1
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_fcgi_program_bad_tcp_sock_format(self):
        text = lstrip("""\
        [fcgi-program:foo]
        socket=tcp://missingport
        process_name = %(program_name)s_%(process_num)s
        command = /bin/foo
        numprocs = 2
        priority = 1
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_fcgi_program_bad_expansion_proc_num(self):
        text = lstrip("""\
        [fcgi-program:foo]
        socket=unix:///tmp/%(process_num)s.sock
        process_name = %(program_name)s_%(process_num)s
        command = /bin/foo
        numprocs = 2
        priority = 1
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_fcgi_program_socket_owner_set_for_tcp(self):
        text = lstrip("""\
        [fcgi-program:foo]
        socket=tcp://localhost:8000
        socket_owner=nobody:nobody
        command = /bin/foo
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_fcgi_program_socket_mode_set_for_tcp(self):
        text = lstrip("""\
        [fcgi-program:foo]
        socket = tcp://localhost:8000
        socket_mode = 0777
        command = /bin/foo
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_fcgi_program_bad_socket_owner(self):
        text = lstrip("""\
        [fcgi-program:foo]
        socket = unix:///tmp/foo.sock
        socket_owner = sometotaljunkuserthatshouldnobethere
        command = /bin/foo
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_fcgi_program_bad_socket_mode(self):
        text = lstrip("""\
        [fcgi-program:foo]
        socket = unix:///tmp/foo.sock
        socket_mode = junk
        command = /bin/foo
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_fcgi_program_bad_socket_backlog(self):
        text = lstrip("""\
        [fcgi-program:foo]
        socket = unix:///tmp/foo.sock
        socket_backlog = -1
        command = /bin/foo
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_heterogeneous_process_groups_from_parser(self):
        text = lstrip("""\
        [program:one]
        command = /bin/cat

        [program:two]
        command = /bin/cat

        [group:thegroup]
        programs = one,two
        priority = 5
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        gconfigs = instance.process_groups_from_parser(config)
        self.assertEqual(len(gconfigs), 1)
        gconfig = gconfigs[0]
        self.assertEqual(gconfig.name, 'thegroup')
        self.assertEqual(gconfig.priority, 5)
        self.assertEqual(len(gconfig.process_configs), 2)

    def test_mixed_process_groups_from_parser1(self):
        text = lstrip("""\
        [program:one]
        command = /bin/cat

        [program:two]
        command = /bin/cat

        [program:many]
        process_name = %(program_name)s_%(process_num)s
        command = /bin/cat
        numprocs = 2
        priority = 1

        [group:thegroup]
        programs = one,two
        priority = 5
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        gconfigs = instance.process_groups_from_parser(config)
        self.assertEqual(len(gconfigs), 2)

        manyconfig = gconfigs[0]
        self.assertEqual(manyconfig.name, 'many')
        self.assertEqual(manyconfig.priority, 1)
        self.assertEqual(len(manyconfig.process_configs), 2)

        gconfig = gconfigs[1]
        self.assertEqual(gconfig.name, 'thegroup')
        self.assertEqual(gconfig.priority, 5)
        self.assertEqual(len(gconfig.process_configs), 2)

    def test_mixed_process_groups_from_parser2(self):
        text = lstrip("""\
        [program:one]
        command = /bin/cat

        [program:two]
        command = /bin/cat

        [program:many]
        process_name = %(program_name)s_%(process_num)s
        command = /bin/cat
        numprocs = 2
        priority = 1

        [group:thegroup]
        programs = one,two, many
        priority = 5
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        gconfigs = instance.process_groups_from_parser(config)
        self.assertEqual(len(gconfigs), 1)
        gconfig = gconfigs[0]
        self.assertEqual(gconfig.name, 'thegroup')
        self.assertEqual(gconfig.priority, 5)
        self.assertEqual(len(gconfig.process_configs), 4)

    def test_mixed_process_groups_from_parser3(self):
        text = lstrip("""\
        [program:one]
        command = /bin/cat

        [fcgi-program:two]
        command = /bin/cat

        [program:many]
        process_name = %(program_name)s_%(process_num)s
        command = /bin/cat
        numprocs = 2
        priority = 1

        [fcgi-program:more]
        process_name = %(program_name)s_%(process_num)s
        command = /bin/cat
        numprocs = 2
        priority = 1

        [group:thegroup]
        programs = one,two,many,more
        priority = 5
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        gconfigs = instance.process_groups_from_parser(config)
        self.assertEqual(len(gconfigs), 1)
        gconfig = gconfigs[0]
        self.assertEqual(gconfig.name, 'thegroup')
        self.assertEqual(gconfig.priority, 5)
        self.assertEqual(len(gconfig.process_configs), 6)

    def test_ambiguous_process_in_heterogeneous_group(self):
        text = lstrip("""\
        [program:one]
        command = /bin/cat

        [fcgi-program:one]
        command = /bin/cat

        [group:thegroup]
        programs = one""")
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_unknown_program_in_heterogeneous_group(self):
        text = lstrip("""\
        [program:one]
        command = /bin/cat

        [group:foo]
        programs = notthere
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        self.assertRaises(ValueError,
                          instance.process_groups_from_parser, config)

    def test_rpcinterfaces_from_parser(self):
        text = lstrip("""\
        [rpcinterface:dummy]
        supervisor.rpcinterface_factory = %s
        foo = bar
        baz = qux
        """ % __name__)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        factories = instance.get_plugins(config,
                                         'supervisor.rpcinterface_factory',
                                         'rpcinterface:')
        self.assertEqual(len(factories), 1)
        factory = factories[0]
        self.assertEqual(factory[0], 'dummy')
        self.assertEqual(factory[1], sys.modules[__name__])
        self.assertEqual(factory[2], {'foo': 'bar', 'baz': 'qux'})

    def test_rpcinterfaces_from_parser_factory_expansions(self):
        text = lstrip("""\
        [rpcinterface:dummy]
        supervisor.rpcinterface_factory = %(factory)s
        foo = %(pet)s
        """)
        from supervisor.options import UnhosedConfigParser
        instance = self._makeOne()
        config = UnhosedConfigParser()
        config.expansions = {'factory': __name__, 'pet': 'cat'}
        config.read_string(text)
        factories = instance.get_plugins(config,
                                         'supervisor.rpcinterface_factory',
                                         'rpcinterface:')
        self.assertEqual(len(factories), 1)
        factory = factories[0]
        self.assertEqual(factory[0], 'dummy')
        self.assertEqual(factory[1], sys.modules[__name__])
        self.assertEqual(factory[2], {'foo': 'cat'})

    def test_rpcinterfaces_from_parser_factory_missing(self):
        text = lstrip("""\
        [rpcinterface:dummy]
        # note: no supervisor.rpcinterface_factory here
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        try:
            instance.get_plugins(config,
                                 'supervisor.rpcinterface_factory',
                                 'rpcinterface:')
            self.fail('nothing raised')
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                             'section [rpcinterface:dummy] '
                             'does not specify a supervisor.rpcinterface_factory')

    def test_rpcinterfaces_from_parser_factory_not_importable(self):
        text = lstrip("""\
        [rpcinterface:dummy]
        supervisor.rpcinterface_factory = nonexistent
        """)
        from supervisor.options import UnhosedConfigParser
        config = UnhosedConfigParser()
        config.read_string(text)
        instance = self._makeOne()
        try:
            instance.get_plugins(config,
                                 'supervisor.rpcinterface_factory',
                                 'rpcinterface:')
            self.fail('nothing raised')
        except ValueError as exc:
            self.assertEqual(exc.args[0],
                             'nonexistent cannot be resolved '
                             'within [rpcinterface:dummy]')

    def test_clear_autochildlogdir(self):
        dn = tempfile.mkdtemp()
        try:
            instance = self._makeOne()
            instance.childlogdir = dn
            sid = 'supervisor'
            instance.identifier = sid
            logfn = instance.get_autochildlog_name('foo', sid, 'stdout')
            first = logfn + '.1'
            second = logfn + '.2'
            f1 = open(first, 'w')
            f2 = open(second, 'w')
            instance.clear_autochildlogdir()
            self.assertFalse(os.path.exists(logfn))
            self.assertFalse(os.path.exists(first))
            self.assertFalse(os.path.exists(second))
            f1.close()
            f2.close()
        finally:
            shutil.rmtree(dn, ignore_errors=True)

    def test_clear_autochildlogdir_listdir_oserror(self):
        instance = self._makeOne()
        instance.childlogdir = '/tmp/this/cant/possibly/existjjjj'
        instance.logger = DummyLogger()
        instance.clear_autochildlogdir()
        self.assertEqual(instance.logger.data,
                         ['Could not clear childlog dir'])

    def test_clear_autochildlogdir_unlink_oserror(self):
        dirname = tempfile.mkdtemp()
        instance = self._makeOne()
        instance.childlogdir = dirname
        ident = instance.identifier
        filename = os.path.join(dirname, 'cat-stdout---%s-ayWAp9.log' % ident)
        with open(filename, 'w') as f:
            f.write("log")
        def raise_oserror(*args):
            raise OSError(errno.ENOENT)
        instance.remove = raise_oserror
        instance.logger = DummyLogger()
        instance.clear_autochildlogdir()
        self.assertEqual(instance.logger.data,
                         ["Failed to clean up '%s'" % filename])

    def test_openhttpservers_reports_friendly_usage_when_eaddrinuse(self):
        supervisord = DummySupervisor()
        instance = self._makeOne()

        def raise_eaddrinuse(supervisord):
            raise socket.error(errno.EADDRINUSE)
        instance.make_http_servers = raise_eaddrinuse

        recorder = []
        def record_usage(message):
            recorder.append(message)
        instance.usage = record_usage

        instance.openhttpservers(supervisord)
        self.assertEqual(len(recorder), 1)
        expected = 'Another program is already listening'
        self.assertTrue(recorder[0].startswith(expected))

    def test_openhttpservers_reports_socket_error_with_errno(self):
        supervisord = DummySupervisor()
        instance = self._makeOne()

        def make_http_servers(supervisord):
            raise socket.error(errno.EPERM)
        instance.make_http_servers = make_http_servers

        recorder = []
        def record_usage(message):
            recorder.append(message)
        instance.usage = record_usage

        instance.openhttpservers(supervisord)
        self.assertEqual(len(recorder), 1)
        expected = ('Cannot open an HTTP server: socket.error '
                    'reported errno.EPERM (%d)' % errno.EPERM)
        self.assertEqual(recorder[0], expected)

    def test_openhttpservers_reports_other_socket_errors(self):
        supervisord = DummySupervisor()
        instance = self._makeOne()

        def make_http_servers(supervisord):
            raise socket.error('uh oh')
        instance.make_http_servers = make_http_servers

        recorder = []
        def record_usage(message):
            recorder.append(message)
        instance.usage = record_usage

        instance.openhttpservers(supervisord)
        self.assertEqual(len(recorder), 1)
        expected = ('Cannot open an HTTP server: socket.error '
                    'reported uh oh')
        self.assertEqual(recorder[0], expected)

    def test_openhttpservers_reports_value_errors(self):
        supervisord = DummySupervisor()
        instance = self._makeOne()

        def make_http_servers(supervisord):
            raise ValueError('not prefixed with help')
        instance.make_http_servers = make_http_servers

        recorder = []
        def record_usage(message):
            recorder.append(message)
        instance.usage = record_usage

        instance.openhttpservers(supervisord)
        self.assertEqual(len(recorder), 1)
        expected = 'not prefixed with help'
        self.assertEqual(recorder[0], expected)

    def test_openhttpservers_does_not_catch_other_exception_types(self):
        supervisord = DummySupervisor()
        instance = self._makeOne()

        def make_http_servers(supervisord):
            raise OverflowError
        instance.make_http_servers = make_http_servers

        # this scenario probably means a bug in supervisor.  we dump
        # all the gory details on the poor user for troubleshooting
        self.assertRaises(OverflowError,
                          instance.openhttpservers, supervisord)

    def test_drop_privileges_user_none(self):
        instance = self._makeOne()
        msg = instance.drop_privileges(None)
        self.assertEqual(msg, "No user specified to setuid to!")

    @patch('pwd.getpwuid', Mock(return_value=["foo", None, 12, 34]))
    @patch('os.getuid', Mock(return_value=12))
    def test_drop_privileges_nonroot_same_user(self):
        instance = self._makeOne()
        msg = instance.drop_privileges(os.getuid())
        self.assertEqual(msg, None)  # no error if same user

    @patch('pwd.getpwuid', Mock(return_value=["foo", None, 55, 34]))
    @patch('os.getuid', Mock(return_value=12))
    def test_drop_privileges_nonroot_different_user(self):
        instance = self._makeOne()
        msg = instance.drop_privileges(42)
        self.assertEqual(msg, "Can't drop privilege as nonroot user")

    def test_daemonize_notifies_poller_before_and_after_fork(self):
        instance = self._makeOne()
        instance._daemonize = lambda: None
        instance.poller = Mock()
        instance.daemonize()
        instance.poller.before_daemonize.assert_called_once_with()
        instance.poller.after_daemonize.assert_called_once_with()


class ProcessConfigTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.options import ProcessConfig
        return ProcessConfig

    def _makeOne(self, *arg, **kw):
        defaults = {}
        for name in ('name', 'command', 'directory', 'umask', 'priority',
                     'autostart', 'autorestart', 'startsecs', 'startretries',
                     'uid', 'stdout_logfile', 'stdout_capture_maxbytes',
                     'stdout_events_enabled', 'stdout_syslog',
                     'stderr_logfile', 'stderr_capture_maxbytes',
                     'stderr_events_enabled', 'stderr_syslog',
                     'stopsignal', 'stopwaitsecs', 'stopasgroup',
                     'killasgroup', 'exitcodes', 'redirect_stderr',
                     'environment'):
            defaults[name] = name
        for name in ('stdout_logfile_backups', 'stdout_logfile_maxbytes',
                     'stderr_logfile_backups', 'stderr_logfile_maxbytes'):
            defaults[name] = 10
        defaults.update(kw)
        return self._getTargetClass()(*arg, **defaults)

    def test_get_path_env_is_None_delegates_to_options(self):
        options = DummyOptions()
        instance = self._makeOne(options, environment=None)
        self.assertEqual(instance.get_path(), options.get_path())

    def test_get_path_env_dict_with_no_PATH_delegates_to_options(self):
        options = DummyOptions()
        instance = self._makeOne(options, environment={'FOO': '1'})
        self.assertEqual(instance.get_path(), options.get_path())

    def test_get_path_env_dict_with_PATH_uses_it(self):
        options = DummyOptions()
        instance = self._makeOne(options, environment={'PATH': '/a:/b:/c'})
        self.assertNotEqual(instance.get_path(), options.get_path())
        self.assertEqual(instance.get_path(), ['/a', '/b', '/c'])

    def test_create_autochildlogs(self):
        options = DummyOptions()
        instance = self._makeOne(options)
        from supervisor.datatypes import Automatic
        instance.stdout_logfile = Automatic
        instance.stderr_logfile = Automatic
        instance.create_autochildlogs()
        self.assertEqual(instance.stdout_logfile, options.tempfile_name)
        self.assertEqual(instance.stderr_logfile, options.tempfile_name)

    def test_make_process(self):
        options = DummyOptions()
        instance = self._makeOne(options)
        process = instance.make_process()
        from supervisor.process import Subprocess
        self.assertEqual(process.__class__, Subprocess)
        self.assertEqual(process.group, None)

    def test_make_process_with_group(self):
        options = DummyOptions()
        instance = self._makeOne(options)
        process = instance.make_process('abc')
        from supervisor.process import Subprocess
        self.assertEqual(process.__class__, Subprocess)
        self.assertEqual(process.group, 'abc')

    def test_make_dispatchers_stderr_not_redirected(self):
        options = DummyOptions()
        instance = self._makeOne(options)
        with tempfile.NamedTemporaryFile() as stdout_logfile:
            with tempfile.NamedTemporaryFile() as stderr_logfile:
                instance.stdout_logfile = stdout_logfile.name
                instance.stderr_logfile = stderr_logfile.name
                instance.redirect_stderr = False
                process1 = DummyProcess(instance)
                dispatchers, pipes = instance.make_dispatchers(process1)
                self.assertEqual(dispatchers[5].channel, 'stdout')
                from supervisor.events import ProcessCommunicationStdoutEvent
                self.assertEqual(dispatchers[5].event_type,
                                 ProcessCommunicationStdoutEvent)
                self.assertEqual(pipes['stdout'], 5)
                self.assertEqual(dispatchers[7].channel, 'stderr')
                from supervisor.events import ProcessCommunicationStderrEvent
                self.assertEqual(dispatchers[7].event_type,
                                 ProcessCommunicationStderrEvent)
                self.assertEqual(pipes['stderr'], 7)

    def test_make_dispatchers_stderr_redirected(self):
        options = DummyOptions()
        instance = self._makeOne(options)
        with tempfile.NamedTemporaryFile() as stdout_logfile:
            instance.stdout_logfile = stdout_logfile.name
            process1 = DummyProcess(instance)
            dispatchers, pipes = instance.make_dispatchers(process1)
            self.assertEqual(dispatchers[5].channel, 'stdout')
            self.assertEqual(pipes['stdout'], 5)
            self.assertEqual(pipes['stderr'], None)


class EventListenerConfigTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.options import EventListenerConfig
        return EventListenerConfig

    def _makeOne(self, *arg, **kw):
        defaults = {}
        for name in ('name', 'command', 'directory', 'umask', 'priority',
                     'autostart', 'autorestart', 'startsecs', 'startretries',
                     'uid', 'stdout_logfile', 'stdout_capture_maxbytes',
                     'stdout_events_enabled', 'stdout_syslog',
                     'stderr_logfile', 'stderr_capture_maxbytes',
                     'stderr_events_enabled', 'stderr_syslog',
                     'stopsignal', 'stopwaitsecs', 'stopasgroup',
                     'killasgroup', 'exitcodes', 'redirect_stderr',
                     'environment'):
            defaults[name] = name
        for name in ('stdout_logfile_backups', 'stdout_logfile_maxbytes',
                     'stderr_logfile_backups', 'stderr_logfile_maxbytes'):
            defaults[name] = 10
        defaults.update(kw)
        return self._getTargetClass()(*arg, **defaults)

    def test_make_dispatchers(self):
        options = DummyOptions()
        instance = self._makeOne(options)
        with tempfile.NamedTemporaryFile() as stdout_logfile:
            with tempfile.NamedTemporaryFile() as stderr_logfile:
                instance.stdout_logfile = stdout_logfile.name
                instance.stderr_logfile = stderr_logfile.name
                instance.redirect_stderr = False
                process1 = DummyProcess(instance)
                dispatchers, pipes = instance.make_dispatchers(process1)
                self.assertEqual(dispatchers[4].channel, 'stdin')
                self.assertEqual(dispatchers[4].closed, False)
                self.assertEqual(dispatchers[5].channel, 'stdout')
                from supervisor.states import EventListenerStates
                self.assertEqual(dispatchers[5].process.listener_state,
                                 EventListenerStates.ACKNOWLEDGED)
                self.assertEqual(pipes['stdout'], 5)
                self.assertEqual(dispatchers[7].channel, 'stderr')
                from supervisor.events import ProcessCommunicationStderrEvent
                self.assertEqual(dispatchers[7].event_type,
                                 ProcessCommunicationStderrEvent)
                self.assertEqual(pipes['stderr'], 7)


class FastCGIProcessConfigTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.options import FastCGIProcessConfig
        return FastCGIProcessConfig

    def _makeOne(self, *arg, **kw):
        defaults = {}
        for name in ('name', 'command', 'directory', 'umask', 'priority',
                     'autostart', 'autorestart', 'startsecs', 'startretries',
                     'uid', 'stdout_logfile', 'stdout_capture_maxbytes',
                     'stdout_events_enabled', 'stdout_syslog',
                     'stderr_logfile', 'stderr_capture_maxbytes',
                     'stderr_events_enabled', 'stderr_syslog',
                     'stopsignal', 'stopwaitsecs', 'stopasgroup',
                     'killasgroup', 'exitcodes', 'redirect_stderr',
                     'environment'):
            defaults[name] = name
        for name in ('stdout_logfile_backups', 'stdout_logfile_maxbytes',
                     'stderr_logfile_backups', 'stderr_logfile_maxbytes'):
            defaults[name] = 10
        defaults.update(kw)
        return self._getTargetClass()(*arg, **defaults)

    def test_make_process(self):
        options = DummyOptions()
        instance = self._makeOne(options)
        self.assertRaises(NotImplementedError, instance.make_process)

    def test_make_process_with_group(self):
        options = DummyOptions()
        instance = self._makeOne(options)
        process = instance.make_process('abc')
        from supervisor.process import FastCGISubprocess
        self.assertEqual(process.__class__, FastCGISubprocess)
        self.assertEqual(process.group, 'abc')

    def test_make_dispatchers(self):
        options = DummyOptions()
        instance = self._makeOne(options)
        with tempfile.NamedTemporaryFile() as stdout_logfile:
            with tempfile.NamedTemporaryFile() as stderr_logfile:
                instance.stdout_logfile = stdout_logfile.name
                instance.stderr_logfile = stderr_logfile.name
                instance.redirect_stderr = False
                process1 = DummyProcess(instance)
                dispatchers, pipes = instance.make_dispatchers(process1)
                self.assertEqual(dispatchers[4].channel, 'stdin')
                self.assertEqual(dispatchers[4].closed, True)
                self.assertEqual(dispatchers[5].channel, 'stdout')
                from supervisor.events import ProcessCommunicationStdoutEvent
                self.assertEqual(dispatchers[5].event_type,
                                 ProcessCommunicationStdoutEvent)
                self.assertEqual(pipes['stdout'], 5)
                self.assertEqual(dispatchers[7].channel, 'stderr')
                from supervisor.events import ProcessCommunicationStderrEvent
                self.assertEqual(dispatchers[7].event_type,
                                 ProcessCommunicationStderrEvent)
                self.assertEqual(pipes['stderr'], 7)


class ProcessGroupConfigTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.options import ProcessGroupConfig
        return ProcessGroupConfig

    def _makeOne(self, options, name, priority, pconfigs):
        return self._getTargetClass()(options, name, priority, pconfigs)

    def test_ctor(self):
        options = DummyOptions()
        instance = self._makeOne(options, 'whatever', 999, [])
        self.assertEqual(instance.options, options)
        self.assertEqual(instance.name, 'whatever')
        self.assertEqual(instance.priority, 999)
        self.assertEqual(instance.process_configs, [])

    def test_after_setuid(self):
        options = DummyOptions()
        pconfigs = [DummyPConfig(options, 'process1', '/bin/process1')]
        instance = self._makeOne(options, 'whatever', 999, pconfigs)
        instance.after_setuid()
        self.assertEqual(pconfigs[0].autochildlogs_created, True)

    def test_make_group(self):
        options = DummyOptions()
        pconfigs = [DummyPConfig(options, 'process1', '/bin/process1')]
        instance = self._makeOne(options, 'whatever', 999, pconfigs)
        group = instance.make_group()
        from supervisor.process import ProcessGroup
        self.assertEqual(group.__class__, ProcessGroup)


class EventListenerPoolConfigTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.options import EventListenerPoolConfig
        return EventListenerPoolConfig

    def _makeOne(self, options, name, priority, process_configs,
                 buffer_size, pool_events, result_handler):
        return self._getTargetClass()(options, name, priority,
                                      process_configs, buffer_size,
                                      pool_events, result_handler)

    def test_after_setuid(self):
        options = DummyOptions()
        pconfigs = [DummyPConfig(options, 'process1', '/bin/process1')]
        instance = self._makeOne(options, 'name', 999, pconfigs, 1, [], None)
        instance.after_setuid()
        self.assertEqual(pconfigs[0].autochildlogs_created, True)

    def test_make_group(self):
        options = DummyOptions()
        pconfigs = [DummyPConfig(options, 'process1', '/bin/process1')]
        instance = self._makeOne(options, 'name', 999, pconfigs, 1, [], None)
        group = instance.make_group()
        from supervisor.process import EventListenerPool
        self.assertEqual(group.__class__, EventListenerPool)


class FastCGIGroupConfigTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.options import FastCGIGroupConfig
        return FastCGIGroupConfig

    def _makeOne(self, *args, **kw):
        return self._getTargetClass()(*args, **kw)

    def test_ctor(self):
        options = DummyOptions()
        sock_config = DummySocketConfig(6)
        instance = self._makeOne(options, 'whatever', 999, [], sock_config)
        self.assertEqual(instance.options, options)
        self.assertEqual(instance.name, 'whatever')
        self.assertEqual(instance.priority, 999)
        self.assertEqual(instance.process_configs, [])
        self.assertEqual(instance.socket_config, sock_config)

    def test_same_sockets_are_equal(self):
        options = DummyOptions()
        sock_config1 = DummySocketConfig(6)
        instance1 = self._makeOne(options, 'whatever', 999, [], sock_config1)
        sock_config2 = DummySocketConfig(6)
        instance2 = self._makeOne(options, 'whatever', 999, [], sock_config2)
        self.assertTrue(instance1 == instance2)
        self.assertFalse(instance1 != instance2)

    def test_diff_sockets_are_not_equal(self):
        options = DummyOptions()
        sock_config1 = DummySocketConfig(6)
        instance1 = self._makeOne(options, 'whatever', 999, [], sock_config1)
        sock_config2 = DummySocketConfig(7)
        instance2 = self._makeOne(options, 'whatever', 999, [], sock_config2)
        self.assertTrue(instance1 != instance2)
        self.assertFalse(instance1 == instance2)

    def test_make_group(self):
        options = DummyOptions()
        sock_config = DummySocketConfig(6)
        instance = self._makeOne(options, 'name', 999, [], sock_config)
        group = instance.make_group()
        from supervisor.process import FastCGIProcessGroup
        self.assertEqual(group.__class__, FastCGIProcessGroup)


class SignalReceiverTests(unittest.TestCase):
    def test_returns_None_initially(self):
        from supervisor.options import SignalReceiver
        sr = SignalReceiver()
        self.assertEqual(sr.get_signal(), None)

    def test_returns_signals_in_order_received(self):
        from supervisor.options import SignalReceiver
        sr = SignalReceiver()
        sr.receive(signal.SIGTERM, 'frame')
        sr.receive(signal.SIGCHLD, 'frame')
        self.assertEqual(sr.get_signal(), signal.SIGTERM)
        self.assertEqual(sr.get_signal(), signal.SIGCHLD)
        self.assertEqual(sr.get_signal(), None)

    def test_does_not_queue_duplicate_signals(self):
        from supervisor.options import SignalReceiver
        sr = SignalReceiver()
        sr.receive(signal.SIGTERM, 'frame')
        sr.receive(signal.SIGTERM, 'frame')
        self.assertEqual(sr.get_signal(), signal.SIGTERM)
        self.assertEqual(sr.get_signal(), None)

    def test_queues_again_after_being_emptied(self):
        from supervisor.options import SignalReceiver
        sr = SignalReceiver()
        sr.receive(signal.SIGTERM, 'frame')
        self.assertEqual(sr.get_signal(), signal.SIGTERM)
        self.assertEqual(sr.get_signal(), None)
        sr.receive(signal.SIGCHLD, 'frame')
        self.assertEqual(sr.get_signal(), signal.SIGCHLD)
        self.assertEqual(sr.get_signal(), None)


class UnhosedConfigParserTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.options import UnhosedConfigParser
        return UnhosedConfigParser

    def _makeOne(self, *args, **kw):
        return self._getTargetClass()(*args, **kw)

    def test_saneget_no_default(self):
        parser = self._makeOne()
        parser.read_string("[supervisord]\n")
        from supervisor.compat import ConfigParser
        self.assertRaises(ConfigParser.NoOptionError,
                          parser.saneget, "supervisord", "missing")

    def test_saneget_with_default(self):
        parser = self._makeOne()
        parser.read_string("[supervisord]\n")
        result = parser.saneget("supervisord", "missing", default="abc")
        self.assertEqual(result, "abc")

    def test_saneget_with_default_and_expand(self):
        parser = self._makeOne()
        parser.expansions = {'pet': 'dog'}
        parser.read_string("[supervisord]\n")
        result = parser.saneget("supervisord", "foo", default="%(pet)s")
        self.assertEqual(result, "dog")

    def test_saneget_with_default_no_expand(self):
        parser = self._makeOne()
        parser.expansions = {'pet': 'dog'}
        parser.read_string("[supervisord]\n")
        result = parser.saneget("supervisord", "foo", default="%(pet)s",
                                do_expand=False)
        self.assertEqual(result, "%(pet)s")

    def test_saneget_no_default_no_expand(self):
        parser = self._makeOne()
        parser.read_string("[supervisord]\nfoo=%(pet)s\n")
        result = parser.saneget("supervisord", "foo", do_expand=False)
        self.assertEqual(result, "%(pet)s")

    def test_saneget_expands_instance_expansions(self):
        parser = self._makeOne()
        parser.expansions = {'pet': 'dog'}
        parser.read_string("[supervisord]\nfoo=%(pet)s\n")
        result = parser.saneget("supervisord", "foo")
        self.assertEqual(result, "dog")

    def test_saneget_expands_arg_expansions(self):
        parser = self._makeOne()
        parser.expansions = {'pet': 'dog'}
        parser.read_string("[supervisord]\nfoo=%(pet)s\n")
        result = parser.saneget("supervisord", "foo",
                                expansions={'pet': 'cat'})
        self.assertEqual(result, "cat")

    def test_getdefault_does_saneget_with_mysection(self):
        parser = self._makeOne()
        parser.read_string("[%s]\nfoo=bar\n" % parser.mysection)
        self.assertEqual(parser.getdefault("foo"), "bar")

    def test_read_filenames_as_string(self):
        parser = self._makeOne()
        with tempfile.NamedTemporaryFile(mode="w+") as f:
            f.write("[foo]\n")
            f.flush()
            ok_filenames = parser.read(f.name)
        self.assertEqual(ok_filenames, [f.name])

    def test_read_filenames_as_list(self):
        parser = self._makeOne()
        with tempfile.NamedTemporaryFile(mode="w+") as f:
            f.write("[foo]\n")
            f.flush()
            ok_filenames = parser.read([f.name])
        self.assertEqual(ok_filenames, [f.name])

    def test_read_returns_ok_filenames_like_rawconfigparser(self):
        nonexistent = os.path.join(os.path.dirname(__file__), "nonexistent")
        parser = self._makeOne()
        with tempfile.NamedTemporaryFile(mode="w+") as f:
            f.write("[foo]\n")
            f.flush()
            ok_filenames = parser.read([nonexistent, f.name])
        self.assertEqual(ok_filenames, [f.name])

    def test_read_section_to_file_initially_empty(self):
        parser = self._makeOne()
        self.assertEqual(parser.section_to_file, {})

    def test_read_section_to_file_read_one_file(self):
        parser = self._makeOne()
        with tempfile.NamedTemporaryFile(mode="w+") as f:
            f.write("[foo]\n")
            f.flush()
            parser.read([f.name])
        self.assertEqual(parser.section_to_file['foo'], f.name)

    def test_read_section_to_file_read_multiple_files(self):
        parser = self._makeOne()
        with tempfile.NamedTemporaryFile(mode="w+") as f1:
            with tempfile.NamedTemporaryFile(mode="w+") as f2:
                f1.write("[foo]\n")
                f1.flush()
                f2.write("[bar]\n")
                f2.flush()
                parser.read([f1.name, f2.name])
        self.assertEqual(parser.section_to_file['foo'], f1.name)
        self.assertEqual(parser.section_to_file['bar'], f2.name)


class UtilFunctionsTests(unittest.TestCase):
    def test_make_namespec(self):
        from supervisor.options import make_namespec
        self.assertEqual(make_namespec('group', 'process'), 'group:process')
        self.assertEqual(make_namespec('process', 'process'), 'process')

    def test_split_namespec(self):
        from supervisor.options import split_namespec
        s = split_namespec
        self.assertEqual(s('process:group'), ('process', 'group'))
        self.assertEqual(s('process'), ('process', 'process'))
        self.assertEqual(s('group:'), ('group', None))
        self.assertEqual(s('group:*'), ('group', None))


def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')


# supervisor-4.2.5/supervisor/tests/test_pidproxy.py

import os
import unittest


class PidProxyTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.pidproxy import PidProxy
        return PidProxy

    def _makeOne(self, *arg, **kw):
        return self._getTargetClass()(*arg, **kw)

    def test_ctor_parses_args(self):
        args = ["pidproxy.py", "/path/to/pidfile", "./cmd", "-arg1", "-arg2"]
        pp = self._makeOne(args)
        self.assertEqual(pp.pidfile, "/path/to/pidfile")
        self.assertEqual(pp.abscmd, os.path.abspath("./cmd"))
        self.assertEqual(pp.cmdargs, ["./cmd", "-arg1", "-arg2"])


# supervisor-4.2.5/supervisor/tests/test_poller.py

import sys
import unittest
import errno
import select

from supervisor.tests.base import Mock
from supervisor.poller import SelectPoller, PollPoller, KQueuePoller
from supervisor.poller import implements_poll, implements_kqueue
from supervisor.tests.base import DummyOptions

# this base class is used instead of unittest.TestCase to hide
# a TestCase subclass from test runner when the implementation is
# not available
SkipTestCase = object


class BasePollerTests(unittest.TestCase):
    def _makeOne(self, options):
        from supervisor.poller import BasePoller
        return BasePoller(options)

    def test_register_readable(self):
        inst = self._makeOne(None)
        self.assertRaises(NotImplementedError, inst.register_readable, None)

    def test_register_writable(self):
        inst = self._makeOne(None)
        self.assertRaises(NotImplementedError, inst.register_writable, None)

    def test_unregister_readable(self):
        inst = self._makeOne(None)
        self.assertRaises(NotImplementedError, inst.unregister_readable, None)

    def test_unregister_writable(self):
        inst = self._makeOne(None)
        self.assertRaises(NotImplementedError, inst.unregister_writable, None)

    def test_poll(self):
        inst = self._makeOne(None)
        self.assertRaises(NotImplementedError, inst.poll, None)

    def test_before_daemonize(self):
        inst = self._makeOne(None)
        self.assertEqual(inst.before_daemonize(), None)

    def test_after_daemonize(self):
        inst = self._makeOne(None)
        self.assertEqual(inst.after_daemonize(), None)

class SelectPollerTests(unittest.TestCase):
    def _makeOne(self, options):
        return SelectPoller(options)

    def test_register_readable(self):
        poller = self._makeOne(DummyOptions())
        poller.register_readable(6)
        poller.register_readable(7)
        self.assertEqual(sorted(poller.readables), [6,7])

    def test_register_writable(self):
        poller = self._makeOne(DummyOptions())
        poller.register_writable(6)
        poller.register_writable(7)
        self.assertEqual(sorted(poller.writables), [6,7])

    def test_unregister_readable(self):
        poller = self._makeOne(DummyOptions())
        poller.register_readable(6)
        poller.register_readable(7)
        poller.register_writable(8)
        poller.register_writable(9)
        poller.unregister_readable(6)
        poller.unregister_readable(9)
        poller.unregister_readable(100) # not registered, ignore error
        self.assertEqual(list(poller.readables), [7])
        self.assertEqual(list(poller.writables), [8, 9])

    def test_unregister_writable(self):
        poller = self._makeOne(DummyOptions())
        poller.register_readable(6)
        poller.register_readable(7)
        poller.register_writable(8)
        poller.register_writable(6)
        poller.unregister_writable(7)
        poller.unregister_writable(6)
        poller.unregister_writable(100) # not registered, ignore error
        self.assertEqual(list(poller.readables), [6, 7])
        self.assertEqual(list(poller.writables), [8])

    def test_poll_returns_readables_and_writables(self):
        _select = DummySelect(result={'readables': [6],
                                      'writables': [8]})
        poller = self._makeOne(DummyOptions())
        poller._select = _select
        poller.register_readable(6)
        poller.register_readable(7)
        poller.register_writable(8)
        readables, writables = poller.poll(1)
        self.assertEqual(readables, [6])
        self.assertEqual(writables, [8])

    def test_poll_ignores_eintr(self):
        _select = DummySelect(error=errno.EINTR)
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._select = _select
        poller.register_readable(6)
        poller.poll(1)
        self.assertEqual(options.logger.data[0], 'EINTR encountered in poll')

    def test_poll_ignores_ebadf(self):
        _select = DummySelect(error=errno.EBADF)
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._select = _select
        poller.register_readable(6)
        poller.poll(1)
        self.assertEqual(options.logger.data[0], 'EBADF encountered in poll')
        self.assertEqual(list(poller.readables), [])
        self.assertEqual(list(poller.writables), [])

    def test_poll_uncaught_exception(self):
        _select = DummySelect(error=errno.EPERM)
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._select = _select
        poller.register_readable(6)
        self.assertRaises(select.error, poller.poll, 1)

if implements_kqueue():
    KQueuePollerTestsBase = unittest.TestCase
else:
    KQueuePollerTestsBase = SkipTestCase

class KQueuePollerTests(KQueuePollerTestsBase):
    def _makeOne(self, options):
        return KQueuePoller(options)

    def test_register_readable(self):
        kqueue = DummyKQueue()
        poller = self._makeOne(DummyOptions())
        poller._kqueue = kqueue
        poller.register_readable(6)
        self.assertEqual(list(poller.readables), [6])
        self.assertEqual(len(kqueue.registered_kevents), 1)
        self.assertReadEventAdded(kqueue.registered_kevents[0], 6)

    def test_register_writable(self):
        kqueue = DummyKQueue()
        poller = self._makeOne(DummyOptions())
        poller._kqueue = kqueue
        poller.register_writable(7)
        self.assertEqual(list(poller.writables), [7])
        self.assertEqual(len(kqueue.registered_kevents), 1)
        self.assertWriteEventAdded(kqueue.registered_kevents[0], 7)

    def test_unregister_readable(self):
        kqueue = DummyKQueue()
        poller = self._makeOne(DummyOptions())
        poller._kqueue = kqueue
        poller.register_writable(7)
        poller.register_readable(8)
        poller.unregister_readable(7)
        poller.unregister_readable(8)
        poller.unregister_readable(100) # not registered, ignore error
        self.assertEqual(list(poller.writables), [7])
        self.assertEqual(list(poller.readables), [])
        self.assertWriteEventAdded(kqueue.registered_kevents[0], 7)
        self.assertReadEventAdded(kqueue.registered_kevents[1], 8)
        self.assertReadEventDeleted(kqueue.registered_kevents[2], 7)
        self.assertReadEventDeleted(kqueue.registered_kevents[3], 8)

    def test_unregister_writable(self):
        kqueue = DummyKQueue()
        poller = self._makeOne(DummyOptions())
        poller._kqueue = kqueue
        poller.register_writable(7)
        poller.register_readable(8)
        poller.unregister_writable(7)
        poller.unregister_writable(8)
        poller.unregister_writable(100) # not registered, ignore error
        self.assertEqual(list(poller.writables), [])
        self.assertEqual(list(poller.readables), [8])
        self.assertWriteEventAdded(kqueue.registered_kevents[0], 7)
        self.assertReadEventAdded(kqueue.registered_kevents[1], 8)
        self.assertWriteEventDeleted(kqueue.registered_kevents[2], 7)
        self.assertWriteEventDeleted(kqueue.registered_kevents[3], 8)

    def test_poll_returns_readables_and_writables(self):
        kqueue = DummyKQueue(result=[(6, select.KQ_FILTER_READ),
                                     (7, select.KQ_FILTER_READ),
                                     (8, select.KQ_FILTER_WRITE)])
        poller = self._makeOne(DummyOptions())
        poller._kqueue = kqueue
        poller.register_readable(6)
        poller.register_readable(7)
        poller.register_writable(8)
        readables, writables = poller.poll(1000)
        self.assertEqual(readables, [6,7])
        self.assertEqual(writables, [8])

    def test_poll_ignores_eintr(self):
        kqueue = DummyKQueue(raise_errno_poll=errno.EINTR)
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._kqueue = kqueue
        poller.register_readable(6)
        poller.poll(1000)
        self.assertEqual(options.logger.data[0], 'EINTR encountered in poll')

    def test_register_readable_and_writable_ignores_ebadf(self):
        _kqueue = DummyKQueue(raise_errno_register=errno.EBADF)
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._kqueue = _kqueue
        poller.register_readable(6)
        poller.register_writable(7)
        self.assertEqual(options.logger.data[0],
                         'EBADF encountered in kqueue. Invalid file descriptor 6')
        self.assertEqual(options.logger.data[1],
                         'EBADF encountered in kqueue. Invalid file descriptor 7')

    def test_register_uncaught_exception(self):
        _kqueue = DummyKQueue(raise_errno_register=errno.ENOMEM)
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._kqueue = _kqueue
        self.assertRaises(OSError, poller.register_readable, 5)

    def test_poll_uncaught_exception(self):
        kqueue = DummyKQueue(raise_errno_poll=errno.EINVAL)
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._kqueue = kqueue
        poller.register_readable(6)
        self.assertRaises(OSError, poller.poll, 1000)

    def test_before_daemonize_closes_kqueue(self):
        mock_kqueue = Mock()
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._kqueue = mock_kqueue
        poller.before_daemonize()
        mock_kqueue.close.assert_called_once_with()
        self.assertEqual(poller._kqueue, None)

    def test_after_daemonize_restores_kqueue(self):
        options = DummyOptions()
        poller = self._makeOne(options)
        poller.readables = [1]
        poller.writables = [3]
        poller.register_readable = Mock()
        poller.register_writable = Mock()
        poller.after_daemonize()
        self.assertTrue(isinstance(poller._kqueue, select.kqueue))
        poller.register_readable.assert_called_with(1)
        poller.register_writable.assert_called_with(3)

    def test_close_closes_kqueue(self):
        mock_kqueue = Mock()
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._kqueue = mock_kqueue
        poller.close()
        mock_kqueue.close.assert_called_once_with()
        self.assertEqual(poller._kqueue, None)

    def assertReadEventAdded(self, kevent, fd):
        self.assertKevent(kevent, fd, select.KQ_FILTER_READ, select.KQ_EV_ADD)

    def assertWriteEventAdded(self, kevent, fd):
        self.assertKevent(kevent, fd, select.KQ_FILTER_WRITE, select.KQ_EV_ADD)

    def assertReadEventDeleted(self, kevent, fd):
        self.assertKevent(kevent, fd, select.KQ_FILTER_READ, select.KQ_EV_DELETE)

    def assertWriteEventDeleted(self, kevent, fd):
        self.assertKevent(kevent, fd, select.KQ_FILTER_WRITE, select.KQ_EV_DELETE)

    def assertKevent(self, kevent, ident, filter, flags):
        self.assertEqual(kevent.ident, ident)
        self.assertEqual(kevent.filter, filter)
        self.assertEqual(kevent.flags, flags)

if implements_poll():
    PollerPollTestsBase = unittest.TestCase
else:
    PollerPollTestsBase = SkipTestCase

class PollerPollTests(PollerPollTestsBase):
    def _makeOne(self, options):
        return PollPoller(options)

    def test_register_readable(self):
        select_poll = DummySelectPoll()
        poller = self._makeOne(DummyOptions())
        poller._poller = select_poll
        poller.register_readable(6)
        poller.register_readable(7)
        self.assertEqual(select_poll.registered_as_readable, [6,7])

    def test_register_writable(self):
        select_poll = DummySelectPoll()
        poller = self._makeOne(DummyOptions())
        poller._poller = select_poll
        poller.register_writable(6)
        self.assertEqual(select_poll.registered_as_writable, [6])

    def test_poll_returns_readables_and_writables(self):
        select_poll = DummySelectPoll(result=[(6, select.POLLIN),
                                              (7, select.POLLPRI),
                                              (8, select.POLLOUT),
                                              (9, select.POLLHUP)])
        poller = self._makeOne(DummyOptions())
        poller._poller = select_poll
        poller.register_readable(6)
        poller.register_readable(7)
        poller.register_writable(8)
        poller.register_readable(9)
        readables, writables = poller.poll(1000)
        self.assertEqual(readables, [6,7,9])
        self.assertEqual(writables, [8])

    def test_poll_ignores_eintr(self):
        select_poll = DummySelectPoll(error=errno.EINTR)
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._poller = select_poll
        poller.register_readable(9)
        poller.poll(1000)
        self.assertEqual(options.logger.data[0], 'EINTR encountered in poll')

    def test_poll_uncaught_exception(self):
        select_poll = DummySelectPoll(error=errno.EBADF)
        options = DummyOptions()
        poller = self._makeOne(options)
        poller._poller = select_poll
        poller.register_readable(9)
        self.assertRaises(select.error, poller.poll, 1000)

    def test_poll_ignores_and_unregisters_closed_fd(self):
        select_poll = DummySelectPoll(result=[(6, select.POLLNVAL),
                                              (7, select.POLLPRI)])
        poller = self._makeOne(DummyOptions())
        poller._poller = select_poll
        poller.register_readable(6)
        poller.register_readable(7)
        readables, writables = poller.poll(1000)
        self.assertEqual(readables, [7])
        self.assertEqual(select_poll.unregistered, [6])

class DummySelect(object):
    ''' Fake implementation of select.select() '''
    def __init__(self, result=None, error=None):
        result = result or {}
        self.readables = result.get('readables', [])
        self.writables = result.get('writables', [])
        self.error = error

    def select(self, r, w, x, timeout):
        if self.error:
            raise select.error(self.error)
        return self.readables, self.writables, []

class DummySelectPoll(object):
    ''' Fake implementation of select.poll() '''
    def __init__(self, result=None, error=None):
        self.result = result or []
        self.error = error
        self.registered_as_readable = []
        self.registered_as_writable = []
        self.unregistered = []

    def register(self, fd, eventmask):
        if eventmask == select.POLLIN | select.POLLPRI | select.POLLHUP:
            self.registered_as_readable.append(fd)
        elif eventmask == select.POLLOUT:
            self.registered_as_writable.append(fd)
        else:
            raise ValueError("Registered a fd on unknown eventmask: '{0}'".format(eventmask))

    def unregister(self, fd):
        self.unregistered.append(fd)

    def poll(self, timeout):
        if self.error:
            raise select.error(self.error)
        return self.result

class DummyKQueue(object):
    ''' Fake implementation of select.kqueue() '''
    def __init__(self, result=None, raise_errno_poll=None,
                 raise_errno_register=None):
        self.result = result or []
        self.errno_poll = raise_errno_poll
        self.errno_register = raise_errno_register
        self.registered_kevents = []
        self.registered_flags = []

    def control(self, kevents, max_events, timeout=None):
        if kevents is None: # being called on poll()
            self.assert_max_events_on_poll(max_events)
            self.raise_error(self.errno_poll)
            return self.build_result()
        self.assert_max_events_on_register(max_events)
        self.raise_error(self.errno_register)
        self.registered_kevents.extend(kevents)

    def raise_error(self, err):
        if not err:
            return
        ex = OSError()
        ex.errno = err
        raise ex

    def build_result(self):
        return [FakeKEvent(ident, filter) for ident, filter in self.result]

    def assert_max_events_on_poll(self, max_events):
        assert max_events == KQueuePoller.max_events, (
            "`max_events` parameter of `kqueue.control()` should be %d"
            % KQueuePoller.max_events)

    def assert_max_events_on_register(self, max_events):
        assert max_events == 0, (
            "`max_events` parameter of `kqueue.control()` should be 0 on register")

class FakeKEvent(object):
    def __init__(self, ident, filter):
        self.ident = ident
        self.filter = filter

def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

# ==== supervisor-4.2.5/supervisor/tests/test_process.py ====

import errno
import os
import signal
import tempfile
import time
import unittest

from supervisor.compat import as_bytes
from supervisor.compat import maxint

from supervisor.tests.base import Mock, patch, sentinel
from supervisor.tests.base import DummyOptions
from supervisor.tests.base import DummyPConfig
from supervisor.tests.base import DummyProcess
from supervisor.tests.base import DummyPGroupConfig
from supervisor.tests.base import DummyDispatcher
from supervisor.tests.base import DummyEvent
from supervisor.tests.base import DummyFCGIGroupConfig
from supervisor.tests.base import DummySocketConfig
from supervisor.tests.base import DummyProcessGroup
from supervisor.tests.base import DummyFCGIProcessGroup

from supervisor.process import Subprocess
from supervisor.options import BadCommand

class SubprocessTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.process import Subprocess
        return Subprocess

    def _makeOne(self, *arg, **kw):
        return self._getTargetClass()(*arg, **kw)

    def tearDown(self):
        from supervisor.events import clear
        clear()

    def test_getProcessStateDescription(self):
        from supervisor.states import ProcessStates
        from supervisor.process import getProcessStateDescription
        for statename, code in ProcessStates.__dict__.items():
            if isinstance(code, int):
                self.assertEqual(getProcessStateDescription(code), statename)

    def test_ctor(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'cat', 'bin/cat',
                              stdout_logfile='/tmp/temp123.log',
                              stderr_logfile='/tmp/temp456.log')
        instance = self._makeOne(config)
        self.assertEqual(instance.config, config)
        self.assertEqual(instance.config.options, options)
        self.assertEqual(instance.laststart, 0)
        self.assertEqual(instance.pid, 0)
        self.assertEqual(instance.laststart, 0)
        self.assertEqual(instance.laststop, 0)
        self.assertEqual(instance.delay, 0)
        self.assertFalse(instance.administrative_stop)
        self.assertFalse(instance.killing)
        self.assertEqual(instance.backoff, 0)
        self.assertEqual(instance.pipes, {})
        self.assertEqual(instance.dispatchers, {})
        self.assertEqual(instance.spawnerr, None)

    def test_repr(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'cat', 'bin/cat')
        instance = self._makeOne(config)
        s = repr(instance)
        self.assertTrue(s.startswith('<Subprocess at'))

    def test_reopenlogs(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.dispatchers = {0:DummyDispatcher(readable=True),
                                1:DummyDispatcher(writable=True)}
        instance.reopenlogs()
        self.assertEqual(instance.dispatchers[0].logs_reopened, True)
        self.assertEqual(instance.dispatchers[1].logs_reopened, False)

    def test_removelogs(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.dispatchers = {0:DummyDispatcher(readable=True),
                                1:DummyDispatcher(writable=True)}
        instance.removelogs()
        self.assertEqual(instance.dispatchers[0].logs_removed, True)
        self.assertEqual(instance.dispatchers[1].logs_removed, False)

    def test_drain(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test',
                              stdout_logfile='/tmp/foo',
                              stderr_logfile='/tmp/bar')
        instance = self._makeOne(config)
        instance.dispatchers = {0:DummyDispatcher(readable=True),
                                1:DummyDispatcher(writable=True)}
        instance.drain()
        self.assertTrue(instance.dispatchers[0].read_event_handled)
        self.assertTrue(instance.dispatchers[1].write_event_handled)

    def test_get_execv_args_bad_command_extraquote(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'extraquote', 'extraquote"')
        instance = self._makeOne(config)
        self.assertRaises(BadCommand, instance.get_execv_args)

    def test_get_execv_args_bad_command_empty(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'empty', '')
        instance = self._makeOne(config)
        self.assertRaises(BadCommand, instance.get_execv_args)

    def test_get_execv_args_bad_command_whitespaceonly(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'whitespaceonly', ' \t ')
        instance = self._makeOne(config)
        self.assertRaises(BadCommand, instance.get_execv_args)

    def test_get_execv_args_abs_missing(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', '/notthere')
        instance = self._makeOne(config)
        args = instance.get_execv_args()
        self.assertEqual(args, ('/notthere', ['/notthere']))

    def test_get_execv_args_abs_withquotes_missing(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', '/notthere "an argument"')
        instance = self._makeOne(config)
        args = instance.get_execv_args()
        self.assertEqual(args, ('/notthere', ['/notthere', 'an argument']))

    def test_get_execv_args_rel_missing(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', 'notthere')
        instance = self._makeOne(config)
        args = instance.get_execv_args()
        self.assertEqual(args, ('notthere', ['notthere']))

    def test_get_execv_args_rel_withquotes_missing(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', 'notthere "an argument"')
        instance = self._makeOne(config)
        args = instance.get_execv_args()
        self.assertEqual(args, ('notthere', ['notthere', 'an argument']))

    def test_get_execv_args_abs(self):
        executable = '/bin/sh foo'
        options = DummyOptions()
        config = DummyPConfig(options, 'sh', executable)
        instance = self._makeOne(config)
        args = instance.get_execv_args()
        self.assertEqual(len(args), 2)
        self.assertEqual(args[0], '/bin/sh')
        self.assertEqual(args[1], ['/bin/sh', 'foo'])

    def test_get_execv_args_rel(self):
        executable = 'sh foo'
        options = DummyOptions()
        config = DummyPConfig(options, 'sh', executable)
        instance = self._makeOne(config)
        args = instance.get_execv_args()
        self.assertEqual(len(args), 2)
        self.assertEqual(args[0], '/bin/sh')
        self.assertEqual(args[1], ['sh', 'foo'])

    def test_get_execv_args_rel_searches_using_pconfig_path(self):
        with tempfile.NamedTemporaryFile() as f:
            dirname, basename = os.path.split(f.name)
            executable = '%s foo' % basename
            options = DummyOptions()
            config = DummyPConfig(options, 'sh', executable)
            config.get_path = lambda: [ dirname ]
            instance = self._makeOne(config)
            args = instance.get_execv_args()
            self.assertEqual(args[0], f.name)
            self.assertEqual(args[1], [basename, 'foo'])

    def test_record_spawnerr(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.record_spawnerr('foo')
        self.assertEqual(instance.spawnerr, 'foo')
        self.assertEqual(options.logger.data[0], 'spawnerr: foo')

    def test_spawn_already_running(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'sh', '/bin/sh')
        instance = self._makeOne(config)
        instance.pid = True
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.RUNNING
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.logger.data[0], "process 'sh' already running")
        self.assertEqual(instance.state, ProcessStates.RUNNING)

    def test_spawn_fail_check_execv_args(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'bad', '/bad/filename')
        instance = self._makeOne(config)
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.BACKOFF
        from supervisor import events
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(instance.spawnerr, 'bad filename')
        self.assertEqual(options.logger.data[0], "spawnerr: bad filename")
        self.assertTrue(instance.delay)
        self.assertTrue(instance.backoff)
        from supervisor.states import ProcessStates
        self.assertEqual(instance.state, ProcessStates.BACKOFF)
        self.assertEqual(len(L), 2)
        event1 = L[0]
        event2 = L[1]
        self.assertEqual(event1.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event2.__class__, events.ProcessStateBackoffEvent)

    def test_spawn_fail_make_pipes_emfile(self):
        options = DummyOptions()
        options.make_pipes_exception = OSError(errno.EMFILE,
                                               os.strerror(errno.EMFILE))
        config = DummyPConfig(options, 'good', '/good/filename')
        instance = self._makeOne(config)
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.BACKOFF
        from supervisor import events
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(instance.spawnerr,
                         "too many open files to spawn 'good'")
        self.assertEqual(options.logger.data[0],
                         "spawnerr: too many open files to spawn 'good'")
        self.assertTrue(instance.delay)
        self.assertTrue(instance.backoff)
        from supervisor.states import ProcessStates
        self.assertEqual(instance.state, ProcessStates.BACKOFF)
        self.assertEqual(len(L), 2)
        event1, event2 = L
        self.assertEqual(event1.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event2.__class__, events.ProcessStateBackoffEvent)

    def test_spawn_fail_make_pipes_other(self):
        options = DummyOptions()
        options.make_pipes_exception = OSError(errno.EPERM,
                                               os.strerror(errno.EPERM))
        config = DummyPConfig(options, 'good', '/good/filename')
        instance = self._makeOne(config)
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.BACKOFF
        from supervisor import events
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        result = instance.spawn()
        self.assertEqual(result, None)
        msg = "unknown error making dispatchers for 'good': EPERM"
        self.assertEqual(instance.spawnerr, msg)
        self.assertEqual(options.logger.data[0], "spawnerr: %s" % msg)
        self.assertTrue(instance.delay)
        self.assertTrue(instance.backoff)
        from supervisor.states import ProcessStates
        self.assertEqual(instance.state, ProcessStates.BACKOFF)
        self.assertEqual(len(L), 2)
        event1, event2 = L
        self.assertEqual(event1.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event2.__class__, events.ProcessStateBackoffEvent)

    def test_spawn_fail_make_dispatchers_eisdir(self):
        options = DummyOptions()
        config = DummyPConfig(options, name='cat', command='/bin/cat',
                              stdout_logfile='/a/directory') # not a file
        def raise_eisdir(envelope):
            raise IOError(errno.EISDIR)
        config.make_dispatchers = raise_eisdir
        instance = self._makeOne(config)
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.BACKOFF
        from supervisor import events
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        result = instance.spawn()
        self.assertEqual(result, None)
        msg = "unknown error making dispatchers for 'cat': EISDIR"
        self.assertEqual(instance.spawnerr, msg)
        self.assertEqual(options.logger.data[0], "spawnerr: %s" % msg)
        self.assertTrue(instance.delay)
        self.assertTrue(instance.backoff)
        from supervisor.states import ProcessStates
        self.assertEqual(instance.state, ProcessStates.BACKOFF)
        self.assertEqual(len(L), 2)
        event1, event2 = L
        self.assertEqual(event1.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event2.__class__, events.ProcessStateBackoffEvent)

    def test_spawn_fork_fail_eagain(self):
        options = DummyOptions()
        options.fork_exception = OSError(errno.EAGAIN,
                                         os.strerror(errno.EAGAIN))
        config = DummyPConfig(options, 'good', '/good/filename')
        instance = self._makeOne(config)
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.BACKOFF
        from supervisor import events
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        result = instance.spawn()
        self.assertEqual(result, None)
        msg = "Too many processes in process table to spawn 'good'"
        self.assertEqual(instance.spawnerr, msg)
        self.assertEqual(options.logger.data[0], "spawnerr: %s" % msg)
        self.assertEqual(len(options.parent_pipes_closed), 6)
        self.assertEqual(len(options.child_pipes_closed), 6)
        self.assertTrue(instance.delay)
        self.assertTrue(instance.backoff)
        from supervisor.states import ProcessStates
        self.assertEqual(instance.state, ProcessStates.BACKOFF)
        self.assertEqual(len(L), 2)
        event1, event2 = L
        self.assertEqual(event1.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event2.__class__, events.ProcessStateBackoffEvent)

    def test_spawn_fork_fail_other(self):
        options = DummyOptions()
        options.fork_exception = OSError(errno.EPERM,
                                         os.strerror(errno.EPERM))
        config = DummyPConfig(options, 'good', '/good/filename')
        instance = self._makeOne(config)
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.BACKOFF
        from supervisor import events
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        result = instance.spawn()
        self.assertEqual(result, None)
        msg = "unknown error during fork for 'good': EPERM"
        self.assertEqual(instance.spawnerr, msg)
        self.assertEqual(options.logger.data[0], "spawnerr: %s" % msg)
        self.assertEqual(len(options.parent_pipes_closed), 6)
        self.assertEqual(len(options.child_pipes_closed), 6)
        self.assertTrue(instance.delay)
        self.assertTrue(instance.backoff)
        from supervisor.states import ProcessStates
        self.assertEqual(instance.state, ProcessStates.BACKOFF)
        self.assertEqual(len(L), 2)
        event1, event2 = L
        self.assertEqual(event1.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event2.__class__, events.ProcessStateBackoffEvent)

    def test_spawn_as_child_setuid_ok(self):
        options = DummyOptions()
        options.forkpid = 0
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.parent_pipes_closed, None)
        self.assertEqual(options.child_pipes_closed, None)
        self.assertEqual(options.pgrp_set, True)
        self.assertEqual(len(options.duped), 3)
        self.assertEqual(len(options.fds_closed), options.minfds - 3)
        self.assertEqual(options.privsdropped, 1)
        self.assertEqual(options.execv_args,
                         ('/good/filename', ['/good/filename']))
        self.assertEqual(options.execve_called, True)
        # if the real execve() succeeds, the code that writes the
        # "was not spawned" message won't be reached.  this assertion
        # is to test that no other errors were written.
        self.assertEqual(options.written,
            {2: "supervisor: child process was not spawned\n"})

    def test_spawn_as_child_setuid_fail(self):
        options = DummyOptions()
        options.forkpid = 0
        options.setuid_msg = 'failure reason'
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.parent_pipes_closed, None)
        self.assertEqual(options.child_pipes_closed, None)
        self.assertEqual(options.pgrp_set, True)
        self.assertEqual(len(options.duped), 3)
        self.assertEqual(len(options.fds_closed), options.minfds - 3)
        self.assertEqual(options.written,
            {2: "supervisor: couldn't setuid to 1: failure reason\n"
                "supervisor: child process was not spawned\n"})
        self.assertEqual(options.privsdropped, None)
        self.assertEqual(options.execve_called, False)
        self.assertEqual(options._exitcode, 127)

    def test_spawn_as_child_cwd_ok(self):
        options = DummyOptions()
        options.forkpid = 0
        config = DummyPConfig(options, 'good', '/good/filename',
                              directory='/tmp')
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.parent_pipes_closed, None)
        self.assertEqual(options.child_pipes_closed, None)
        self.assertEqual(options.pgrp_set, True)
        self.assertEqual(len(options.duped), 3)
        self.assertEqual(len(options.fds_closed), options.minfds - 3)
        self.assertEqual(options.execv_args,
                         ('/good/filename', ['/good/filename']))
        self.assertEqual(options.changed_directory, True)
        self.assertEqual(options.execve_called, True)
        # if the real execve() succeeds, the code that writes the
        # "was not spawned" message won't be reached.  this assertion
        # is to test that no other errors were written.
        self.assertEqual(options.written,
            {2: "supervisor: child process was not spawned\n"})

    def test_spawn_as_child_sets_umask(self):
        options = DummyOptions()
        options.forkpid = 0
        config = DummyPConfig(options, 'good', '/good/filename', umask=2)
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.execv_args,
                         ('/good/filename', ['/good/filename']))
        self.assertEqual(options.umaskset, 2)
        self.assertEqual(options.execve_called, True)
        # if the real execve() succeeds, the code that writes the
        # "was not spawned" message won't be reached.  this assertion
        # is to test that no other errors were written.
        self.assertEqual(options.written,
            {2: "supervisor: child process was not spawned\n"})

    def test_spawn_as_child_cwd_fail(self):
        options = DummyOptions()
        options.forkpid = 0
        options.chdir_exception = OSError(errno.ENOENT,
                                          os.strerror(errno.ENOENT))
        config = DummyPConfig(options, 'good', '/good/filename',
                              directory='/tmp')
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.parent_pipes_closed, None)
        self.assertEqual(options.child_pipes_closed, None)
        self.assertEqual(options.pgrp_set, True)
        self.assertEqual(len(options.duped), 3)
        self.assertEqual(len(options.fds_closed), options.minfds - 3)
        self.assertEqual(options.execv_args, None)
        out = {2: "supervisor: couldn't chdir to /tmp: ENOENT\n"
                  "supervisor: child process was not spawned\n"}
        self.assertEqual(options.written, out)
        self.assertEqual(options._exitcode, 127)
        self.assertEqual(options.changed_directory, False)
        self.assertEqual(options.execve_called, False)

    def test_spawn_as_child_execv_fail_oserror(self):
        options = DummyOptions()
        options.forkpid = 0
        options.execv_exception = OSError(errno.EPERM,
                                          os.strerror(errno.EPERM))
        config = DummyPConfig(options, 'good', '/good/filename')
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.parent_pipes_closed, None)
        self.assertEqual(options.child_pipes_closed, None)
        self.assertEqual(options.pgrp_set, True)
        self.assertEqual(len(options.duped), 3)
        self.assertEqual(len(options.fds_closed), options.minfds - 3)
        out = {2: "supervisor: couldn't exec /good/filename: EPERM\n"
                  "supervisor: child process was not spawned\n"}
        self.assertEqual(options.written, out)
        self.assertEqual(options.privsdropped, None)
        self.assertEqual(options._exitcode, 127)

    def test_spawn_as_child_execv_fail_runtime_error(self):
        options = DummyOptions()
        options.forkpid = 0
        options.execv_exception = RuntimeError(errno.ENOENT)
        config = DummyPConfig(options, 'good', '/good/filename')
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.parent_pipes_closed, None)
        self.assertEqual(options.child_pipes_closed, None)
        self.assertEqual(options.pgrp_set, True)
        self.assertEqual(len(options.duped), 3)
        self.assertEqual(len(options.fds_closed), options.minfds - 3)
        msg = options.written[2] # dict, 2 is fd
        head = "supervisor: couldn't exec /good/filename:"
        self.assertTrue(msg.startswith(head))
        self.assertTrue("RuntimeError" in msg)
        self.assertEqual(options.privsdropped, None)
        self.assertEqual(options._exitcode, 127)

    def test_spawn_as_child_uses_pconfig_environment(self):
        options = DummyOptions()
        options.forkpid = 0
        config = DummyPConfig(options, 'cat', '/bin/cat',
                              environment={'_TEST_':'1'})
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.execv_args, ('/bin/cat', ['/bin/cat']))
        self.assertEqual(options.execv_environment['_TEST_'], '1')

    def test_spawn_as_child_environment_supervisor_envvars(self):
        options = DummyOptions()
        options.forkpid = 0
        config = DummyPConfig(options, 'cat', '/bin/cat')
        instance = self._makeOne(config)
        class Dummy:
            name = 'dummy'
        instance.group = Dummy()
        instance.group.config = Dummy()
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.execv_args, ('/bin/cat', ['/bin/cat']))
        self.assertEqual(
            options.execv_environment['SUPERVISOR_ENABLED'], '1')
        self.assertEqual(
            options.execv_environment['SUPERVISOR_PROCESS_NAME'], 'cat')
        self.assertEqual(
            options.execv_environment['SUPERVISOR_GROUP_NAME'], 'dummy')
        self.assertEqual(
            options.execv_environment['SUPERVISOR_SERVER_URL'],
            'http://localhost:9001')

    def test_spawn_as_child_stderr_redirected(self):
        options = DummyOptions()
        options.forkpid = 0
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        config.redirect_stderr = True
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(options.parent_pipes_closed, None)
        self.assertEqual(options.child_pipes_closed, None)
        self.assertEqual(options.pgrp_set, True)
        self.assertEqual(len(options.duped), 2)
        self.assertEqual(len(options.fds_closed), options.minfds - 3)
        self.assertEqual(options.privsdropped, 1)
        self.assertEqual(options.execv_args,
                         ('/good/filename', ['/good/filename']))
        self.assertEqual(options.execve_called, True)
        # if the real execve() succeeds, the code that writes the
        # "was not spawned" message won't be reached.  this assertion
        # is to test that no other errors were written.
        self.assertEqual(options.written,
            {2: "supervisor: child process was not spawned\n"})

    def test_spawn_as_parent(self):
        options = DummyOptions()
        options.forkpid = 10
        config = DummyPConfig(options, 'good', '/good/filename')
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, 10)
        self.assertEqual(instance.dispatchers[4].__class__, DummyDispatcher)
        self.assertEqual(instance.dispatchers[5].__class__, DummyDispatcher)
        self.assertEqual(instance.dispatchers[7].__class__, DummyDispatcher)
        self.assertEqual(instance.pipes['stdin'], 4)
        self.assertEqual(instance.pipes['stdout'], 5)
        self.assertEqual(instance.pipes['stderr'], 7)
        self.assertEqual(options.parent_pipes_closed, None)
        self.assertEqual(len(options.child_pipes_closed), 6)
        self.assertEqual(options.logger.data[0], "spawned: 'good' with pid 10")
        self.assertEqual(instance.spawnerr, None)
        self.assertTrue(instance.delay)
        self.assertEqual(instance.config.options.pidhistory[10], instance)
        from supervisor.states import ProcessStates
        self.assertEqual(instance.state, ProcessStates.STARTING)

    def test_spawn_redirect_stderr(self):
        options = DummyOptions()
        options.forkpid = 10
        config = DummyPConfig(options, 'good', '/good/filename',
                              redirect_stderr=True)
        instance = self._makeOne(config)
        result = instance.spawn()
        self.assertEqual(result, 10)
        self.assertEqual(instance.dispatchers[4].__class__, DummyDispatcher)
        self.assertEqual(instance.dispatchers[5].__class__, DummyDispatcher)
        self.assertEqual(instance.pipes['stdin'], 4)
        self.assertEqual(instance.pipes['stdout'], 5)
        self.assertEqual(instance.pipes['stderr'], None)

    def test_write(self):
        executable = '/bin/cat'
        options = DummyOptions()
        config = DummyPConfig(options, 'output', executable)
        instance = self._makeOne(config)
        sent = 'a' * (1 << 13)
        self.assertRaises(OSError, instance.write, sent)
        options.forkpid = 1
        instance.spawn()
        instance.write(sent)
        stdin_fd = instance.pipes['stdin']
        self.assertEqual(sent, instance.dispatchers[stdin_fd].input_buffer)
        instance.killing = True
        self.assertRaises(OSError, instance.write, sent)

    def test_write_dispatcher_closed(self):
        executable = '/bin/cat'
        options = DummyOptions()
        config = DummyPConfig(options, 'output', executable)
        instance = self._makeOne(config)
        sent = 'a' * (1 << 13)
        self.assertRaises(OSError, instance.write, sent)
        options.forkpid = 1
        instance.spawn()
        stdin_fd = instance.pipes['stdin']
        instance.dispatchers[stdin_fd].close()
        self.assertRaises(OSError, instance.write, sent)

    def test_write_stdin_fd_none(self):
        executable = '/bin/cat'
        options = DummyOptions()
        config = DummyPConfig(options, 'output', executable)
        instance = self._makeOne(config)
        options.forkpid = 1
        instance.spawn()
        stdin_fd = instance.pipes['stdin']
        instance.dispatchers[stdin_fd].close()
        instance.pipes['stdin'] = None
        try:
            instance.write('foo')
            self.fail('nothing raised')
        except OSError as exc:
            self.assertEqual(exc.args[0], errno.EPIPE)
            self.assertEqual(exc.args[1], 'Process has no stdin channel')

    def test_write_dispatcher_flush_raises_epipe(self):
        executable = '/bin/cat'
        options = DummyOptions()
        config = DummyPConfig(options, 'output', executable)
        instance = self._makeOne(config)
        sent = 'a' * (1 << 13)
        self.assertRaises(OSError, instance.write, sent)
        options.forkpid = 1
        instance.spawn()
        stdin_fd = instance.pipes['stdin']
        instance.dispatchers[stdin_fd].flush_exception = OSError(
            errno.EPIPE, os.strerror(errno.EPIPE))
        self.assertRaises(OSError, instance.write, sent)

    def _dont_test_spawn_and_kill(self):
        # this is a functional test
        from supervisor.tests.base import makeSpew
        try:
            sigchlds = []
            def sighandler(*args):
                sigchlds.append(True)
            signal.signal(signal.SIGCHLD, sighandler)
            executable = makeSpew()
            options = DummyOptions()
            config = DummyPConfig(options, 'spew', executable)
            instance = self._makeOne(config)
            result = instance.spawn()
            msg = options.logger.data[0]
            self.assertTrue(msg.startswith("spawned: 'spew' with pid"))
            self.assertEqual(len(instance.pipes), 6)
            self.assertTrue(instance.pid)
            self.assertEqual(instance.pid, result)
            origpid = instance.pid
            while 1:
                try:
                    data = os.popen('ps').read()
                    break
                except IOError as why:
                    if why.args[0] != errno.EINTR:
                        raise
                    # try again ;-)
                    time.sleep(0.1)  # arbitrary, race condition possible
            self.assertTrue(data.find(as_bytes(repr(origpid))) != -1)
            msg = instance.kill(signal.SIGTERM)
            time.sleep(0.1)  # arbitrary, race condition possible
            self.assertEqual(msg, None)
            pid, sts = os.waitpid(-1, os.WNOHANG)
            data = os.popen('ps').read()
            self.assertEqual(data.find(as_bytes(repr(origpid))), -1)  # dubious
            self.assertNotEqual(sigchlds, [])
        finally:
            try:
                os.remove(executable)
            except:
                pass
            signal.signal(signal.SIGCHLD, signal.SIG_DFL)

    def test_stop(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.pid = 11
        dispatcher = DummyDispatcher(writable=True)
        instance.dispatchers = {'foo':dispatcher}
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.RUNNING
        instance.laststopreport = time.time()
        instance.stop()
        self.assertTrue(instance.administrative_stop)
        self.assertEqual(instance.laststopreport, 0)
        self.assertTrue(instance.delay)
        self.assertEqual(options.logger.data[0],
                         'killing test (pid 11) with signal SIGTERM')
        self.assertTrue(instance.killing)
        self.assertEqual(options.kills[11], signal.SIGTERM)

    def test_stop_not_in_stoppable_state_error(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.pid = 11
        dispatcher = DummyDispatcher(writable=True)
        instance.dispatchers = {'foo':dispatcher}
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.STOPPED
        try:
            instance.stop()
            self.fail('nothing raised')
        except AssertionError as exc:
            self.assertEqual(exc.args[0], 'Assertion failed for test: '
                             'STOPPED not in RUNNING STARTING STOPPING')

    def test_stop_report_logs_nothing_if_not_stopping_state(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test',
                              '/test')
        instance = self._makeOne(config)
        instance.pid = 11
        dispatcher = DummyDispatcher(writable=True)
        instance.dispatchers = {'foo':dispatcher}
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.STOPPED
        instance.stop_report()
        self.assertEqual(len(options.logger.data), 0)

    def test_stop_report_logs_throttled_by_laststopreport(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.pid = 11
        dispatcher = DummyDispatcher(writable=True)
        instance.dispatchers = {'foo':dispatcher}
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.STOPPING
        self.assertEqual(instance.laststopreport, 0)
        instance.stop_report()
        self.assertEqual(len(options.logger.data), 1)
        self.assertEqual(options.logger.data[0], 'waiting for test to stop')
        self.assertNotEqual(instance.laststopreport, 0)
        instance.stop_report()
        self.assertEqual(len(options.logger.data), 1)  # throttled

    def test_stop_report_laststopreport_in_future(self):
        future_time = time.time() + 3600  # 1 hour into the future
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.pid = 11
        dispatcher = DummyDispatcher(writable=True)
        instance.dispatchers = {'foo':dispatcher}
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.STOPPING
        instance.laststopreport = future_time

        # This iteration of stop_report() should reset instance.laststopreport
        # to the current time
        instance.stop_report()
        # No logging should have taken place
        self.assertEqual(len(options.logger.data), 0)
        # Ensure instance.laststopreport has rolled backward
        self.assertTrue(instance.laststopreport < future_time)

        # Sleep for 2 seconds
        time.sleep(2)

        # This iteration of stop_report() should actually trigger the report
        instance.stop_report()
        self.assertEqual(len(options.logger.data), 1)
        self.assertEqual(options.logger.data[0], 'waiting for test to stop')
        self.assertNotEqual(instance.laststopreport, 0)
        instance.stop_report()
        self.assertEqual(len(options.logger.data), 1)  # throttled

    def test_give_up(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.state = ProcessStates.BACKOFF
        instance.give_up()
        self.assertTrue(instance.system_stop)
        self.assertFalse(instance.delay)
        self.assertFalse(instance.backoff)
        self.assertEqual(instance.state, ProcessStates.FATAL)
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateFatalEvent)

    def test_kill_nopid(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.kill(signal.SIGTERM)
        self.assertEqual(options.logger.data[0],
            'attempted to kill test with sig SIGTERM but it wasn\'t running')
        self.assertFalse(instance.killing)

    def test_kill_from_starting(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.pid = 11
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.state = ProcessStates.STARTING
        instance.kill(signal.SIGTERM)
        self.assertEqual(options.logger.data[0],
                         'killing test (pid 11) with signal SIGTERM')
        self.assertTrue(instance.killing)
        self.assertEqual(options.kills[11], signal.SIGTERM)
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateStoppingEvent)

    def test_kill_from_running(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.pid = 11
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.state = ProcessStates.RUNNING
        instance.kill(signal.SIGTERM)
        self.assertEqual(options.logger.data[0],
                         'killing test (pid 11) with signal SIGTERM')
        self.assertTrue(instance.killing)
        self.assertEqual(options.kills[11], signal.SIGTERM)
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateStoppingEvent)

    def test_kill_from_running_error(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        options.kill_exception = OSError(errno.EPERM,
                                         os.strerror(errno.EPERM))
        instance = self._makeOne(config)
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.pid = 11
        instance.state = ProcessStates.RUNNING
        instance.kill(signal.SIGTERM)
        self.assertEqual(options.logger.data[0],
                         'killing test (pid 11) with signal SIGTERM')
        self.assertTrue(options.logger.data[1].startswith(
            'unknown problem killing test'))
        self.assertTrue('Traceback' in options.logger.data[1])
        self.assertFalse(instance.killing)
        self.assertEqual(instance.pid, 11)  # unchanged
        self.assertEqual(instance.state, ProcessStates.UNKNOWN)
        self.assertEqual(len(L), 2)
        event1 = L[0]
        event2 = L[1]
        self.assertEqual(event1.__class__, events.ProcessStateStoppingEvent)
        self.assertEqual(event2.__class__, events.ProcessStateUnknownEvent)

    def test_kill_from_running_error_ESRCH(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        options.kill_exception = OSError(errno.ESRCH,
                                         os.strerror(errno.ESRCH))
        instance = self._makeOne(config)
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.pid = 11
        instance.state = ProcessStates.RUNNING
        instance.kill(signal.SIGTERM)
        self.assertEqual(options.logger.data[0],
                         'killing test (pid 11) with signal SIGTERM')
        self.assertEqual(options.logger.data[1],
                         'unable to signal test (pid 11), it probably '
                         'just exited on its own: %s' %
                         str(options.kill_exception))
        self.assertTrue(instance.killing)
        self.assertEqual(instance.pid, 11)  # unchanged
        self.assertEqual(instance.state, ProcessStates.STOPPING)
        self.assertEqual(len(L), 1)
        event1 = L[0]
        self.assertEqual(event1.__class__, events.ProcessStateStoppingEvent)

    def test_kill_from_stopping(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.pid = 11
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.state = ProcessStates.STOPPING
        instance.kill(signal.SIGKILL)
        self.assertEqual(options.logger.data[0],
                         'killing test (pid 11) with signal SIGKILL')
        self.assertTrue(instance.killing)
        self.assertEqual(options.kills[11], signal.SIGKILL)
        self.assertEqual(L, [])  # no event because we didn't change state

    def test_kill_from_backoff(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.state = ProcessStates.BACKOFF
        instance.kill(signal.SIGKILL)
        self.assertEqual(options.logger.data[0],
                         'Attempted to kill test, which is in BACKOFF state.')
        self.assertFalse(instance.killing)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateStoppedEvent)

    def test_kill_from_stopping_w_killasgroup(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test', killasgroup=True)
        instance = self._makeOne(config)
        instance.pid = 11
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.state = ProcessStates.STOPPING
        instance.kill(signal.SIGKILL)
        self.assertEqual(options.logger.data[0],
                         'killing test (pid 11) process group with '
                         'signal SIGKILL')
        self.assertTrue(instance.killing)
        self.assertEqual(options.kills[-11], signal.SIGKILL)
        self.assertEqual(L, [])  # no event because we didn't change state

    def test_stopasgroup(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test', stopasgroup=True)
        instance = self._makeOne(config)
        instance.pid = 11
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.state = ProcessStates.RUNNING
        instance.kill(signal.SIGTERM)
        self.assertEqual(options.logger.data[0],
                         'killing test (pid 11) process group with '
                         'signal SIGTERM')
        self.assertTrue(instance.killing)
        self.assertEqual(options.kills[-11], signal.SIGTERM)
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateStoppingEvent)
        self.assertEqual(event.extra_values, [('pid', 11)])
        self.assertEqual(event.from_state, ProcessStates.RUNNING)

    def test_signal_from_stopped(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.STOPPED
        instance.signal(signal.SIGWINCH)
        self.assertEqual(options.logger.data[0],
                         "attempted to send test sig SIGWINCH "
                         "but it wasn't running")
        self.assertEqual(len(options.kills), 0)

    def test_signal_from_running(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.pid = 11
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.RUNNING
        instance.signal(signal.SIGWINCH)
        self.assertEqual(options.logger.data[0],
                         'sending test (pid 11) sig SIGWINCH')
        self.assertEqual(len(options.kills), 1)
        self.assertTrue(instance.pid in options.kills)
        self.assertEqual(options.kills[instance.pid], signal.SIGWINCH)

    def test_signal_from_running_error_ESRCH(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        options.kill_exception = OSError(errno.ESRCH,
                                         os.strerror(errno.ESRCH))
        instance = self._makeOne(config)
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.pid = 11
        instance.state = ProcessStates.RUNNING
        instance.signal(signal.SIGWINCH)
        self.assertEqual(options.logger.data[0],
                         'sending test (pid 11) sig SIGWINCH')
        self.assertEqual(options.logger.data[1],
                         'unable to signal test (pid 11), it probably '
                         'just now exited on its own: %s' %
                         str(options.kill_exception))
        self.assertFalse(instance.killing)
        self.assertEqual(instance.state, ProcessStates.RUNNING)  # unchanged
        self.assertEqual(instance.pid, 11)  # unchanged
        self.assertEqual(len(L), 0)

    def test_signal_from_running_error(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        options.kill_exception = OSError(errno.EPERM,
                                         os.strerror(errno.EPERM))
        instance = self._makeOne(config)
        L = []
        from supervisor.states import ProcessStates
        from supervisor import events
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.pid = 11
        instance.state = ProcessStates.RUNNING
        instance.signal(signal.SIGWINCH)
        self.assertEqual(options.logger.data[0],
                         'sending test (pid 11) sig SIGWINCH')
        self.assertTrue(options.logger.data[1].startswith(
            'unknown problem sending sig test (11)'))
        self.assertTrue('Traceback' in options.logger.data[1])
        self.assertFalse(instance.killing)
        self.assertEqual(instance.state, ProcessStates.UNKNOWN)
        self.assertEqual(instance.pid, 11)  # unchanged
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateUnknownEvent)

    def test_finish_stopping_state(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', '/notthere',
                              stdout_logfile='/tmp/foo')
        instance = self._makeOne(config)
        instance.waitstatus = (123, 1)  # pid, waitstatus
        instance.config.options.pidhistory[123] = instance
        instance.killing = True
        pipes = {'stdout':'','stderr':''}
        instance.pipes = pipes
        from supervisor.states import ProcessStates
        from supervisor import events
        instance.state = ProcessStates.STOPPING
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.pid = 123
        instance.finish(123, 1)
        self.assertFalse(instance.killing)
        self.assertEqual(instance.pid, 0)
        self.assertEqual(options.parent_pipes_closed, pipes)
        self.assertEqual(instance.pipes, {})
        self.assertEqual(instance.dispatchers, {})
        self.assertEqual(options.logger.data[0],
                         'stopped: notthere (terminated by SIGHUP)')
        self.assertEqual(instance.exitstatus, -1)
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateStoppedEvent)
        self.assertEqual(event.extra_values, [('pid', 123)])
        self.assertEqual(event.from_state, ProcessStates.STOPPING)

    def test_finish_running_state_exit_expected(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', '/notthere',
                              stdout_logfile='/tmp/foo')
        instance = self._makeOne(config)
        instance.config.options.pidhistory[123] = instance
        pipes = {'stdout':'','stderr':''}
        instance.pipes = pipes
        instance.config.exitcodes = [-1]
        from supervisor.states import ProcessStates
        from supervisor import events
        instance.state = ProcessStates.RUNNING
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.pid = 123
        instance.finish(123, 1)
        self.assertFalse(instance.killing)
        self.assertEqual(instance.pid, 0)
        self.assertEqual(options.parent_pipes_closed, pipes)
        self.assertEqual(instance.pipes, {})
        self.assertEqual(instance.dispatchers, {})
        self.assertEqual(options.logger.data[0],
                         'exited: notthere (terminated by SIGHUP; expected)')
        self.assertEqual(instance.exitstatus, -1)
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateExitedEvent)
        self.assertEqual(event.expected, True)
        self.assertEqual(event.extra_values, [('expected', True),
                                              ('pid', 123)])
        self.assertEqual(event.from_state,
                         ProcessStates.RUNNING)

    def test_finish_starting_state_laststart_in_future(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', '/notthere',
                              stdout_logfile='/tmp/foo')
        instance = self._makeOne(config)
        instance.config.options.pidhistory[123] = instance
        pipes = {'stdout':'','stderr':''}
        instance.pipes = pipes
        instance.config.exitcodes = [-1]
        instance.laststart = time.time() + 3600  # 1 hour into the future
        from supervisor.states import ProcessStates
        from supervisor import events
        instance.state = ProcessStates.STARTING
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.pid = 123
        instance.finish(123, 1)
        self.assertFalse(instance.killing)
        self.assertEqual(instance.pid, 0)
        self.assertEqual(options.parent_pipes_closed, pipes)
        self.assertEqual(instance.pipes, {})
        self.assertEqual(instance.dispatchers, {})
        self.assertEqual(options.logger.data[0],
                         "process 'notthere' (123) laststart time is in the "
                         "future, don't know how long process was running so "
                         "assuming it did not exit too quickly")
        self.assertEqual(options.logger.data[1],
                         'exited: notthere (terminated by SIGHUP; expected)')
        self.assertEqual(instance.exitstatus, -1)
        self.assertEqual(len(L), 2)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateRunningEvent)
        self.assertEqual(event.expected, True)
        self.assertEqual(event.extra_values, [('pid', 123)])
        self.assertEqual(event.from_state, ProcessStates.STARTING)
        event = L[1]
        self.assertEqual(event.__class__, events.ProcessStateExitedEvent)
        self.assertEqual(event.expected, True)
        self.assertEqual(event.extra_values, [('expected', True),
                                              ('pid', 123)])
        self.assertEqual(event.from_state, ProcessStates.RUNNING)

    def test_finish_starting_state_exited_too_quickly(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', '/notthere',
                              stdout_logfile='/tmp/foo',
                              startsecs=10)
        instance = self._makeOne(config)
        instance.config.options.pidhistory[123] = instance
        pipes = {'stdout':'','stderr':''}
        instance.pipes = pipes
        instance.config.exitcodes = [-1]
        instance.laststart = time.time()
        from supervisor.states import ProcessStates
        from supervisor import events
        instance.state = ProcessStates.STARTING
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.pid = 123
        instance.finish(123, 1)
        self.assertFalse(instance.killing)
        self.assertEqual(instance.pid, 0)
        self.assertEqual(options.parent_pipes_closed, pipes)
        self.assertEqual(instance.pipes, {})
        self.assertEqual(instance.dispatchers, {})
        self.assertEqual(options.logger.data[0],
            'exited: notthere (terminated by SIGHUP; not expected)')
        self.assertEqual(instance.exitstatus, None)
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateBackoffEvent)
        self.assertEqual(event.from_state, ProcessStates.STARTING)

    # This tests the case where the process has stayed alive longer than
    # startsecs (i.e., long enough to enter the RUNNING state), however the
    # system clock has since rolled backward such that the current time is
    # greater than laststart but less than startsecs.
    def test_finish_running_state_exited_too_quickly_due_to_clock_rollback(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', '/notthere',
                              stdout_logfile='/tmp/foo',
                              startsecs=10)
        instance = self._makeOne(config)
        instance.config.options.pidhistory[123] = instance
        pipes = {'stdout':'','stderr':''}
        instance.pipes = pipes
        instance.config.exitcodes = [-1]
        instance.laststart = time.time()
        from supervisor.states import ProcessStates
        from supervisor import events
        instance.state = ProcessStates.RUNNING
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.pid = 123
        instance.finish(123, 1)
        self.assertFalse(instance.killing)
        self.assertEqual(instance.pid, 0)
        self.assertEqual(options.parent_pipes_closed, pipes)
        self.assertEqual(instance.pipes, {})
        self.assertEqual(instance.dispatchers, {})
        self.assertEqual(options.logger.data[0],
                         'exited: notthere (terminated by SIGHUP; expected)')
        self.assertEqual(instance.exitstatus, -1)
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateExitedEvent)
        self.assertEqual(event.expected, True)
        self.assertEqual(event.extra_values, [('expected', True),
                                              ('pid', 123)])
        self.assertEqual(event.from_state, ProcessStates.RUNNING)

    def test_finish_running_state_laststart_in_future(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', '/notthere',
                              stdout_logfile='/tmp/foo')
        instance = self._makeOne(config)
        instance.config.options.pidhistory[123] = instance
        pipes = {'stdout':'','stderr':''}
        instance.pipes = pipes
        instance.config.exitcodes = [-1]
        instance.laststart = time.time() + 3600  # 1 hour into the future
        from supervisor.states import ProcessStates
        from supervisor import events
        instance.state = ProcessStates.RUNNING
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        instance.pid = 123
        instance.finish(123, 1)
        self.assertFalse(instance.killing)
        self.assertEqual(instance.pid, 0)
        self.assertEqual(options.parent_pipes_closed,
                         pipes)
        self.assertEqual(instance.pipes, {})
        self.assertEqual(instance.dispatchers, {})
        self.assertEqual(options.logger.data[0],
                         "process 'notthere' (123) laststart time is in the "
                         "future, don't know how long process was running so "
                         "assuming it did not exit too quickly")
        self.assertEqual(options.logger.data[1],
                         'exited: notthere (terminated by SIGHUP; expected)')
        self.assertEqual(instance.exitstatus, -1)
        self.assertEqual(len(L), 1)
        event = L[0]
        self.assertEqual(event.__class__, events.ProcessStateExitedEvent)
        self.assertEqual(event.expected, True)
        self.assertEqual(event.extra_values, [('expected', True),
                                              ('pid', 123)])
        self.assertEqual(event.from_state, ProcessStates.RUNNING)

    def test_finish_with_current_event_sends_rejected(self):
        from supervisor import events
        L = []
        events.subscribe(events.ProcessStateEvent, lambda x: L.append(x))
        events.subscribe(events.EventRejectedEvent, lambda x: L.append(x))
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', '/notthere',
                              stdout_logfile='/tmp/foo',
                              startsecs=10)
        instance = self._makeOne(config)
        from supervisor.states import ProcessStates
        instance.state = ProcessStates.RUNNING
        event = DummyEvent()
        instance.event = event
        instance.finish(123, 1)
        self.assertEqual(len(L), 2)
        event1, event2 = L
        self.assertEqual(event1.__class__, events.ProcessStateExitedEvent)
        self.assertEqual(event2.__class__, events.EventRejectedEvent)
        self.assertEqual(event2.process, instance)
        self.assertEqual(event2.event, event)
        self.assertEqual(instance.event, None)

    def test_set_uid_no_uid(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.set_uid()
        self.assertEqual(options.privsdropped, None)

    def test_set_uid(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test', uid=1)
        instance = self._makeOne(config)
        msg = instance.set_uid()
        self.assertEqual(options.privsdropped, 1)
        self.assertEqual(msg, None)

    def test_cmp_bypriority(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'notthere', '/notthere',
                              stdout_logfile='/tmp/foo',
                              priority=1)
        instance = self._makeOne(config)

        config = DummyPConfig(options, 'notthere1', '/notthere',
                              stdout_logfile='/tmp/foo',
                              priority=2)
        instance1 = self._makeOne(config)

        config = DummyPConfig(options, 'notthere2', '/notthere',
                              stdout_logfile='/tmp/foo',
                              priority=3)
        instance2 = self._makeOne(config)

        L = [instance2, instance, instance1]
        L.sort()
        self.assertEqual(L, [instance, instance1, instance2])

    def test_transition_stopped_to_starting_supervisor_stopping(self):
        from supervisor import events
        emitted_events = []
        events.subscribe(events.ProcessStateEvent, emitted_events.append)
        from supervisor.states import ProcessStates, SupervisorStates
        options = DummyOptions()
        options.mood = SupervisorStates.SHUTDOWN

        # this should not be spawned, as supervisor is shutting down
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        process = self._makeOne(pconfig)
        process.laststart = 0
        process.state = ProcessStates.STOPPED
        process.transition()
        self.assertEqual(process.state, ProcessStates.STOPPED)
        self.assertEqual(emitted_events, [])

    def test_transition_stopped_to_starting_supervisor_running(self):
        from supervisor import events
        emitted_events_with_states = []
        def subscriber(e):
            emitted_events_with_states.append((e, e.process.state))
        events.subscribe(events.ProcessStateEvent, subscriber)
        from supervisor.states import ProcessStates, SupervisorStates
        options = DummyOptions()
        options.mood = SupervisorStates.RUNNING
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        process = self._makeOne(pconfig)
        process.laststart = 0
        process.state = ProcessStates.STOPPED
        process.transition()
        self.assertEqual(process.state, ProcessStates.STARTING)
        self.assertEqual(len(emitted_events_with_states), 1)
        event, state_when_event_emitted = emitted_events_with_states[0]
        self.assertEqual(event.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event.from_state,
                         ProcessStates.STOPPED)
        self.assertEqual(state_when_event_emitted, ProcessStates.STARTING)

    def test_transition_exited_to_starting_supervisor_stopping(self):
        from supervisor import events
        emitted_events = []
        events.subscribe(events.ProcessStateEvent, emitted_events.append)
        from supervisor.states import ProcessStates, SupervisorStates
        options = DummyOptions()
        options.mood = SupervisorStates.SHUTDOWN

        # this should not be spawned, as supervisor is shutting down
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        from supervisor.datatypes import RestartUnconditionally
        pconfig.autorestart = RestartUnconditionally
        process = self._makeOne(pconfig)
        process.laststart = 1
        process.system_stop = True
        process.state = ProcessStates.EXITED
        process.transition()
        self.assertEqual(process.state, ProcessStates.EXITED)
        self.assertTrue(process.system_stop)
        self.assertEqual(emitted_events, [])

    def test_transition_exited_to_starting_uncond_supervisor_running(self):
        from supervisor import events
        emitted_events_with_states = []
        def subscriber(e):
            emitted_events_with_states.append((e, e.process.state))
        events.subscribe(events.ProcessStateEvent, subscriber)
        from supervisor.states import ProcessStates
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        from supervisor.datatypes import RestartUnconditionally
        pconfig.autorestart = RestartUnconditionally
        process = self._makeOne(pconfig)
        process.laststart = 1
        process.state = ProcessStates.EXITED
        process.transition()
        self.assertEqual(process.state, ProcessStates.STARTING)
        self.assertEqual(len(emitted_events_with_states), 1)
        event, state_when_event_emitted = emitted_events_with_states[0]
        self.assertEqual(event.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event.from_state, ProcessStates.EXITED)
        self.assertEqual(state_when_event_emitted, ProcessStates.STARTING)

    def test_transition_exited_to_starting_condit_supervisor_running(self):
        from supervisor import events
        emitted_events_with_states = []
        def subscriber(e):
            emitted_events_with_states.append((e, e.process.state))
        events.subscribe(events.ProcessStateEvent, subscriber)
        from supervisor.states import ProcessStates
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        from supervisor.datatypes import RestartWhenExitUnexpected
        pconfig.autorestart = RestartWhenExitUnexpected
        process = self._makeOne(pconfig)
        process.laststart = 1
        process.state = ProcessStates.EXITED
        process.exitstatus = 'bogus'
        process.transition()
        self.assertEqual(process.state, ProcessStates.STARTING)
        self.assertEqual(len(emitted_events_with_states), 1)
        event, state_when_event_emitted = emitted_events_with_states[0]
        self.assertEqual(event.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event.from_state, ProcessStates.EXITED)
        self.assertEqual(state_when_event_emitted, ProcessStates.STARTING)

    def test_transition_exited_to_starting_condit_fls_supervisor_running(self):
        from supervisor import events
        emitted_events = []
        events.subscribe(events.ProcessStateEvent, emitted_events.append)
        from supervisor.states import ProcessStates
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        from supervisor.datatypes import RestartWhenExitUnexpected
        pconfig.autorestart = RestartWhenExitUnexpected
        process = self._makeOne(pconfig)
        process.laststart = 1
        process.state = ProcessStates.EXITED
        process.exitstatus = 0
        process.transition()
        self.assertEqual(process.state, ProcessStates.EXITED)
        self.assertEqual(emitted_events, [])

    def test_transition_backoff_to_starting_supervisor_stopping(self):
        from supervisor import events
        emitted_events = []
        events.subscribe(events.ProcessStateEvent,
                         lambda x: emitted_events.append(x))
        from supervisor.states import ProcessStates, SupervisorStates
        options = DummyOptions()
        options.mood = SupervisorStates.SHUTDOWN
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        process = self._makeOne(pconfig)
        process.laststart = 1
        process.delay = 0
        process.backoff = 0
        process.state = ProcessStates.BACKOFF
        process.transition()
        self.assertEqual(process.state, ProcessStates.BACKOFF)
        self.assertEqual(emitted_events, [])

    def test_transition_backoff_to_starting_supervisor_running(self):
        from supervisor import events
        emitted_events_with_states = []
        def subscriber(e):
            emitted_events_with_states.append((e, e.process.state))
        events.subscribe(events.ProcessStateEvent, subscriber)
        from supervisor.states import ProcessStates, SupervisorStates
        options = DummyOptions()
        options.mood = SupervisorStates.RUNNING
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        process = self._makeOne(pconfig)
        process.laststart = 1
        process.delay = 0
        process.backoff = 0
        process.state = ProcessStates.BACKOFF
        process.transition()
        self.assertEqual(process.state, ProcessStates.STARTING)
        self.assertEqual(len(emitted_events_with_states), 1)
        event, state_when_event_emitted = emitted_events_with_states[0]
        self.assertEqual(event.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event.from_state, ProcessStates.BACKOFF)
        self.assertEqual(state_when_event_emitted, ProcessStates.STARTING)

    def test_transition_backoff_to_starting_supervisor_running_notyet(self):
        from supervisor import events
        emitted_events = []
        events.subscribe(events.ProcessStateEvent,
                         lambda x: emitted_events.append(x))
        from supervisor.states import ProcessStates, SupervisorStates
        options = DummyOptions()
        options.mood = SupervisorStates.RUNNING
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        process = self._makeOne(pconfig)
        process.laststart = 1
        process.delay = maxint
        process.backoff = 0
        process.state = ProcessStates.BACKOFF
        process.transition()
        self.assertEqual(process.state, ProcessStates.BACKOFF)
        self.assertEqual(emitted_events, [])

    def test_transition_starting_to_running(self):
        from supervisor import events
        emitted_events_with_states = []
        def subscriber(e):
            emitted_events_with_states.append((e, e.process.state))
        events.subscribe(events.ProcessStateEvent, subscriber)
        from supervisor.states import ProcessStates
        options = DummyOptions()
        # this should go from STARTING to RUNNING via transition()
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        process = self._makeOne(pconfig)
        process.backoff = 1
        process.delay = 1
        process.system_stop = False
        process.laststart = 1
        process.pid = 1
        process.stdout_buffer = 'abc'
        process.stderr_buffer = 'def'
        process.state = ProcessStates.STARTING
        process.transition()
        # this implies RUNNING
        self.assertEqual(process.backoff, 0)
        self.assertEqual(process.delay, 0)
        self.assertFalse(process.system_stop)
        self.assertEqual(options.logger.data[0],
                         'success: process entered RUNNING state, process has '
                         'stayed up for > than 10 seconds (startsecs)')
        self.assertEqual(len(emitted_events_with_states), 1)
        event, state_when_event_emitted = emitted_events_with_states[0]
        self.assertEqual(event.__class__, events.ProcessStateRunningEvent)
        self.assertEqual(event.from_state, ProcessStates.STARTING)
        self.assertEqual(state_when_event_emitted, ProcessStates.RUNNING)

    def test_transition_starting_to_running_laststart_in_future(self):
        from supervisor import events
        emitted_events_with_states = []
        def subscriber(e):
            emitted_events_with_states.append((e, e.process.state))
        events.subscribe(events.ProcessStateEvent, subscriber)
        from supervisor.states import ProcessStates
        future_time = time.time() + 3600  # 1 hour into the future
        options = DummyOptions()
        test_startsecs = 2
        # this should go from STARTING to RUNNING via transition()
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process',
                               startsecs=test_startsecs)
        process = self._makeOne(pconfig)
        process.backoff = 1
        process.delay = 1
        process.system_stop = False
        process.laststart = future_time
        process.pid = 1
        process.stdout_buffer = 'abc'
        process.stderr_buffer = 'def'
        process.state = ProcessStates.STARTING
        # This iteration of transition() should reset process.laststart
        # to the current time
        process.transition()
        # Process state should still be STARTING
        self.assertEqual(process.state, ProcessStates.STARTING)
        # Ensure process.laststart has rolled backward
        self.assertTrue(process.laststart < future_time)
        # Sleep for (startsecs + 1)
        time.sleep(test_startsecs + 1)
        # This iteration of transition() should actually trigger the state
        # transition to RUNNING
        process.transition()
        # this implies RUNNING
        self.assertEqual(process.backoff, 0)
        self.assertEqual(process.delay, 0)
        self.assertFalse(process.system_stop)
        self.assertEqual(process.state, ProcessStates.RUNNING)
        self.assertEqual(options.logger.data[0],
                         'success: process entered RUNNING state, process has '
                         'stayed up for > than {} seconds (startsecs)'.format(
                             test_startsecs))
        self.assertEqual(len(emitted_events_with_states), 1)
        event, state_when_event_emitted = emitted_events_with_states[0]
        self.assertEqual(event.__class__, events.ProcessStateRunningEvent)
        self.assertEqual(event.from_state, ProcessStates.STARTING)
        self.assertEqual(state_when_event_emitted, ProcessStates.RUNNING)

    def test_transition_backoff_to_starting_delay_in_future(self):
        from supervisor import events
        emitted_events_with_states = []
        def subscriber(e):
            emitted_events_with_states.append((e, e.process.state))
        events.subscribe(events.ProcessStateEvent, subscriber)
        from supervisor.states import ProcessStates
        future_time = time.time() + 3600  # 1 hour into the future
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        process = self._makeOne(pconfig)
        process.laststart = 1
        process.delay = future_time
        process.backoff = 0
        process.state = ProcessStates.BACKOFF
        # This iteration of transition() should reset process.delay
        # to the current time
        process.transition()
        # Process state should still be BACKOFF
        self.assertEqual(process.state, ProcessStates.BACKOFF)
        # Ensure process.delay has rolled backward
        self.assertTrue(process.delay < future_time)
        # This iteration of transition() should actually trigger the state
        # transition to STARTING
        process.transition()
        self.assertEqual(process.state, ProcessStates.STARTING)
        self.assertEqual(len(emitted_events_with_states), 1)
        event, state_when_event_emitted = emitted_events_with_states[0]
        self.assertEqual(event.__class__, events.ProcessStateStartingEvent)
        self.assertEqual(event.from_state, ProcessStates.BACKOFF)
        self.assertEqual(state_when_event_emitted, ProcessStates.STARTING)

    def test_transition_backoff_to_fatal(self):
        from supervisor import events
        emitted_events_with_states = []
        def subscriber(e):
            emitted_events_with_states.append((e, e.process.state))
        events.subscribe(events.ProcessStateEvent, subscriber)
        from supervisor.states import ProcessStates
        options = DummyOptions()
        # this should go from BACKOFF to FATAL via transition()
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        process = self._makeOne(pconfig)
        process.laststart = 1
        process.backoff = 10000
        process.delay = 1
        process.system_stop = False
        process.stdout_buffer = 'abc'
        process.stderr_buffer = 'def'
        process.state = ProcessStates.BACKOFF
        process.transition()
        # this implies FATAL
        self.assertEqual(process.backoff, 0)
        self.assertEqual(process.delay, 0)
        self.assertTrue(process.system_stop)
        self.assertEqual(options.logger.data[0],
                         'gave up: process entered FATAL state, too many start'
                         ' retries too quickly')
        self.assertEqual(len(emitted_events_with_states), 1)
        event, state_when_event_emitted = emitted_events_with_states[0]
        self.assertEqual(event.__class__, events.ProcessStateFatalEvent)
        self.assertEqual(event.from_state, ProcessStates.BACKOFF)
        self.assertEqual(state_when_event_emitted, ProcessStates.FATAL)

    def test_transition_stops_unkillable_notyet(self):
        from supervisor import events
        emitted_events = []
        events.subscribe(events.ProcessStateEvent, emitted_events.append)
        from supervisor.states import ProcessStates
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        process = self._makeOne(pconfig)
        process.delay = maxint
        process.state = ProcessStates.STOPPING
        process.transition()
        self.assertEqual(process.state, ProcessStates.STOPPING)
        self.assertEqual(emitted_events, [])

    def test_transition_stops_unkillable(self):
        from supervisor import events
        emitted_events = []
        events.subscribe(events.ProcessStateEvent, emitted_events.append)
        from supervisor.states import ProcessStates
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process', 'process', '/bin/process')
        process = self._makeOne(pconfig)
        process.delay = 0
        process.pid = 1
        process.killing = False
        process.state = ProcessStates.STOPPING
        process.transition()
        self.assertTrue(process.killing)
        self.assertNotEqual(process.delay, 0)
        self.assertEqual(process.state, ProcessStates.STOPPING)
        self.assertEqual(options.logger.data[0],
                         "killing 'process' (1) with SIGKILL")
        self.assertEqual(options.kills[1], signal.SIGKILL)
        self.assertEqual(emitted_events, [])

    def test_change_state_doesnt_notify_if_no_state_change(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.state = 10
        self.assertEqual(instance.change_state(10), False)

    def test_change_state_sets_backoff_and_delay(self):
        from supervisor.states import ProcessStates
        options = DummyOptions()
        config = DummyPConfig(options, 'test', '/test')
        instance = self._makeOne(config)
        instance.state = 10
        instance.change_state(ProcessStates.BACKOFF)
        self.assertEqual(instance.backoff, 1)
        self.assertTrue(instance.delay > 0)


class FastCGISubprocessTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.process import FastCGISubprocess
        return FastCGISubprocess

    def _makeOne(self, *arg, **kw):
        return self._getTargetClass()(*arg, **kw)

    def tearDown(self):
        from supervisor.events import clear
        clear()

    def test_no_group(self):
        options = DummyOptions()
        options.forkpid = 0
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        instance = self._makeOne(config)
        self.assertRaises(NotImplementedError, instance.spawn)

    def test_no_socket_manager(self):
        options = DummyOptions()
        options.forkpid = 0
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        instance = self._makeOne(config)
        instance.group = DummyProcessGroup(DummyPGroupConfig(options))
        self.assertRaises(NotImplementedError, instance.spawn)

    def test_prepare_child_fds(self):
        options = DummyOptions()
        options.forkpid = 0
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        instance = self._makeOne(config)
        sock_config = DummySocketConfig(7)
        gconfig = DummyFCGIGroupConfig(options, 'whatever', 999, None,
                                       sock_config)
        instance.group = DummyFCGIProcessGroup(gconfig)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(len(options.duped), 3)
        self.assertEqual(options.duped[7], 0)
        self.assertEqual(options.duped[instance.pipes['child_stdout']], 1)
        self.assertEqual(options.duped[instance.pipes['child_stderr']], 2)
        self.assertEqual(len(options.fds_closed), options.minfds - 3)

    def test_prepare_child_fds_stderr_redirected(self):
        options = DummyOptions()
        options.forkpid = 0
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        config.redirect_stderr = True
        instance = self._makeOne(config)
        sock_config = DummySocketConfig(13)
        gconfig = DummyFCGIGroupConfig(options, 'whatever', 999, None,
                                       sock_config)
        instance.group = DummyFCGIProcessGroup(gconfig)
        result = instance.spawn()
        self.assertEqual(result, None)
        self.assertEqual(len(options.duped), 2)
        self.assertEqual(options.duped[13], 0)
        self.assertEqual(len(options.fds_closed), options.minfds - 3)

    def test_before_spawn_gets_socket_ref(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        instance = self._makeOne(config)
        sock_config = DummySocketConfig(7)
        gconfig = DummyFCGIGroupConfig(options, 'whatever', 999, None,
                                       sock_config)
        instance.group = DummyFCGIProcessGroup(gconfig)
        self.assertTrue(instance.fcgi_sock is None)
        instance.before_spawn()
        self.assertFalse(instance.fcgi_sock is None)

    def test_after_finish_removes_socket_ref(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        instance = self._makeOne(config)
        instance.fcgi_sock = 'hello'
        instance.after_finish()
        self.assertTrue(instance.fcgi_sock is None)

    # Patch Subprocess.finish() method for this test to verify override
    @patch.object(Subprocess, 'finish',
                  Mock(return_value=sentinel.finish_result))
    def test_finish_override(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        instance = self._makeOne(config)
        instance.after_finish = Mock()
        result = instance.finish(sentinel.pid, sentinel.sts)
        self.assertEqual(sentinel.finish_result, result,
                         'FastCGISubprocess.finish() did not pass thru result')
        self.assertEqual(1, instance.after_finish.call_count,
                         'FastCGISubprocess.after_finish() not called once')
        finish_mock = Subprocess.finish
        self.assertEqual(1, finish_mock.call_count,
                         'Subprocess.finish() not called once')
        pid_arg = finish_mock.call_args[0][1]
        sts_arg = finish_mock.call_args[0][2]
        self.assertEqual(sentinel.pid, pid_arg,
                         'Subprocess.finish() pid arg was not passed')
        self.assertEqual(sentinel.sts, sts_arg,
                         'Subprocess.finish() sts arg was not passed')

    # Patch Subprocess.spawn() method for this test to verify override
    @patch.object(Subprocess, 'spawn', Mock(return_value=sentinel.ppid))
    def test_spawn_override_success(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        instance = self._makeOne(config)
        instance.before_spawn = Mock()
        result = instance.spawn()
        self.assertEqual(sentinel.ppid, result,
                         'FastCGISubprocess.spawn() did not pass thru result')
        self.assertEqual(1, instance.before_spawn.call_count,
                         'FastCGISubprocess.before_spawn() not called once')
        spawn_mock = Subprocess.spawn
        self.assertEqual(1, spawn_mock.call_count,
                         'Subprocess.spawn() not called once')

    # Patch Subprocess.spawn() method for this test to verify error handling
    @patch.object(Subprocess, 'spawn', Mock(return_value=None))
    def test_spawn_error(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'good', '/good/filename', uid=1)
        instance = self._makeOne(config)
        instance.before_spawn = Mock()
        instance.fcgi_sock = 'nuke me on error'
        result = instance.spawn()
        self.assertEqual(None, result,
                         'FastCGISubprocess.spawn() did return None on error')
        self.assertEqual(1, instance.before_spawn.call_count,
                         'FastCGISubprocess.before_spawn() not called once')
        self.assertEqual(None, instance.fcgi_sock,
                         'FastCGISubprocess.spawn() did not remove sock ref on error')


class ProcessGroupBaseTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.process import ProcessGroupBase
        return ProcessGroupBase

    def _makeOne(self, *args, **kw):
        return self._getTargetClass()(*args, **kw)

    def test_get_unstopped_processes(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STOPPING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        group = self._makeOne(gconfig)
        group.processes = {'process1': process1}
        unstopped = group.get_unstopped_processes()
        self.assertEqual(unstopped, [process1])

    def test_before_remove(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STOPPING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        group = self._makeOne(gconfig)
        group.processes = {'process1': process1}
        group.before_remove()  # shouldn't raise

    def test_stop_all(self):
        from supervisor.states import ProcessStates
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STOPPED)
        pconfig2 = DummyPConfig(options, 'process2', 'process2', '/bin/process2')
        process2 = DummyProcess(pconfig2, state=ProcessStates.RUNNING)
        pconfig3 = DummyPConfig(options, 'process3', 'process3', '/bin/process3')
        process3 = DummyProcess(pconfig3, state=ProcessStates.STARTING)
        pconfig4 = DummyPConfig(options, 'process4', 'process4', '/bin/process4')
        process4 = DummyProcess(pconfig4, state=ProcessStates.BACKOFF)
        process4.delay = 1000
        process4.backoff = 10
        gconfig = DummyPGroupConfig(
            options,
            pconfigs=[pconfig1, pconfig2, pconfig3, pconfig4])
        group = self._makeOne(gconfig)
        group.processes = {'process1': process1, 'process2': process2,
                           'process3': process3, 'process4': process4}
        group.stop_all()
        self.assertEqual(process1.stop_called, False)
        self.assertEqual(process2.stop_called, True)
        self.assertEqual(process3.stop_called, True)
        self.assertEqual(process4.stop_called, False)
        self.assertEqual(process4.state, ProcessStates.FATAL)

    def test_get_dispatchers(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STOPPING)
        process1.dispatchers = {4: None}
        pconfig2 = DummyPConfig(options, 'process2', 'process2', '/bin/process2')
        process2 = DummyProcess(pconfig2, state=ProcessStates.STOPPING)
        process2.dispatchers = {5: None}
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1, pconfig2])
        group = self._makeOne(gconfig)
        group.processes = {'process1': process1, 'process2': process2}
        result = group.get_dispatchers()
        self.assertEqual(result, {4: None, 5: None})

    def test_reopenlogs(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STOPPING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        group = self._makeOne(gconfig)
        group.processes = {'process1': process1}
        group.reopenlogs()
        self.assertEqual(process1.logs_reopened, True)

    def test_removelogs(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STOPPING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        group = self._makeOne(gconfig)
        group.processes = {'process1': process1}
        group.removelogs()
        self.assertEqual(process1.logsremoved, True)

    def test_ordering_and_comparison(self):
        options = DummyOptions()
        gconfig1 = DummyPGroupConfig(options)
        group1 = self._makeOne(gconfig1)
        gconfig2 = DummyPGroupConfig(options)
        group2 = self._makeOne(gconfig2)
        config3 = DummyPGroupConfig(options)
        group3 = self._makeOne(config3)
        group1.config.priority = 5
        group2.config.priority = 1
        group3.config.priority = 5
        L = [group1, group2]
        L.sort()
        self.assertEqual(L, [group2, group1])
        self.assertNotEqual(group1, group2)
        self.assertEqual(group1, group3)


class ProcessGroupTests(ProcessGroupBaseTests):
    def _getTargetClass(self):
        from supervisor.process import ProcessGroup
        return ProcessGroup

    def test_repr(self):
        options = DummyOptions()
        gconfig = DummyPGroupConfig(options)
        group = self._makeOne(gconfig)
        s = repr(group)
        self.assertTrue('supervisor.process.ProcessGroup' in s)
        self.assertTrue(s.endswith('named whatever>'), s)

    def test_transition(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STOPPING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        group = self._makeOne(gconfig)
        group.processes = {'process1': process1}
        group.transition()
        self.assertEqual(process1.transitioned, True)


class FastCGIProcessGroupTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.process import FastCGIProcessGroup
        return FastCGIProcessGroup

    def _makeOne(self, config, **kwargs):
        cls = self._getTargetClass()
        return cls(config, **kwargs)

    def test___init__without_socket_error(self):
        options = DummyOptions()
        gconfig = DummyPGroupConfig(options)
        gconfig.socket_config = None
        class DummySocketManager(object):
            def __init__(self, config, logger):
                pass
            def get_socket(self):
                pass
        self._makeOne(gconfig, socketManager=DummySocketManager)
        # doesn't fail with exception

    def test___init__with_socket_error(self):
        options = DummyOptions()
        gconfig = DummyPGroupConfig(options)
        gconfig.socket_config = None
        class DummySocketManager(object):
            def __init__(self, config, logger):
                pass
            def get_socket(self):
                raise KeyError(5)
            def config(self):
                return 'config'
        self.assertRaises(
            ValueError,
            self._makeOne, gconfig, socketManager=DummySocketManager
        )


class EventListenerPoolTests(ProcessGroupBaseTests):
    def setUp(self):
        from supervisor.events import clear
        clear()

    def tearDown(self):
        from supervisor.events import clear
        clear()

    def _getTargetClass(self):
        from supervisor.process import EventListenerPool
        return EventListenerPool

    def test_ctor(self):
        options = DummyOptions()
        gconfig = DummyPGroupConfig(options)
        class EventType:
            pass
        gconfig.pool_events = (EventType,)
        pool = self._makeOne(gconfig)
        from supervisor import events
        self.assertEqual(len(events.callbacks), 2)
        self.assertEqual(events.callbacks[0],
                         (EventType, pool._acceptEvent))
        self.assertEqual(events.callbacks[1],
                         (events.EventRejectedEvent, pool.handle_rejected))
        self.assertEqual(pool.serial, -1)

    def test_before_remove_unsubscribes_from_events(self):
        options = DummyOptions()
        gconfig = DummyPGroupConfig(options)
        class EventType:
            pass
        gconfig.pool_events = (EventType,)
        pool = self._makeOne(gconfig)
        from supervisor import events
        self.assertEqual(len(events.callbacks), 2)
        pool.before_remove()
        self.assertEqual(len(events.callbacks), 0)

    def test__eventEnvelope(self):
        options = DummyOptions()
        options.identifier = 'thesupervisorname'
        gconfig = DummyPGroupConfig(options)
        gconfig.name = 'thepoolname'
        pool = self._makeOne(gconfig)
        from supervisor import events
        result = pool._eventEnvelope(
            events.EventTypes.PROCESS_COMMUNICATION_STDOUT, 80, 20,
            'payload\n')
        header, payload = result.split('\n', 1)
        headers = header.split()
        self.assertEqual(headers[0], 'ver:3.0')
        self.assertEqual(headers[1], 'server:thesupervisorname')
        self.assertEqual(headers[2], 'serial:80')
        self.assertEqual(headers[3], 'pool:thepoolname')
        self.assertEqual(headers[4], 'poolserial:20')
        self.assertEqual(headers[5], 'eventname:PROCESS_COMMUNICATION_STDOUT')
        self.assertEqual(headers[6], 'len:8')
        self.assertEqual(payload, 'payload\n')

    def test_handle_rejected_no_overflow(self):
        options = DummyOptions()
        gconfig = DummyPGroupConfig(options)
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        pool.processes = {'process1': process1}
        pool.event_buffer = [None, None]
        class DummyEvent1:
            serial = 'abc'
        class DummyEvent2:
            process = process1
            event = DummyEvent1()
        dummyevent = DummyEvent2()
        dummyevent.serial = 1
        pool.handle_rejected(dummyevent)
        self.assertEqual(pool.event_buffer, [dummyevent.event, None, None])

    def test_handle_rejected_event_buffer_overflowed(self):
        options = DummyOptions()
        gconfig = DummyPGroupConfig(options)
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        gconfig.buffer_size = 3
        pool = self._makeOne(gconfig)
        pool.processes = {'process1': process1}
        class DummyEvent:
            def __init__(self, serial):
                self.serial = serial
        class DummyRejectedEvent:
            def __init__(self, serial):
                self.process = process1
                self.event = DummyEvent(serial)
        event_a = DummyEvent('a')
        event_b = DummyEvent('b')
        event_c = DummyEvent('c')
        rej_event = DummyRejectedEvent('rejected')
        pool.event_buffer = [event_a, event_b, event_c]
        pool.handle_rejected(rej_event)
        serials = [x.serial for x in pool.event_buffer]
        # we popped a, and we inserted the rejected event into the 1st pos
        self.assertEqual(serials, ['rejected', 'b', 'c'])
        self.assertEqual(pool.config.options.logger.data[0],
                         'pool whatever event buffer overflowed, discarding event a')

    def test_dispatch_pipe_error(self):
        options = DummyOptions()
        gconfig = DummyPGroupConfig(options)
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        from supervisor.states import EventListenerStates
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        process1 = pool.processes['process1']
        process1.write_exception = OSError(errno.EPIPE,
                                           os.strerror(errno.EPIPE))
        process1.listener_state = EventListenerStates.READY
        event = DummyEvent()
        pool._acceptEvent(event)
        pool.dispatch()
        self.assertEqual(process1.listener_state, EventListenerStates.READY)
        self.assertEqual(pool.event_buffer, [event])
        self.assertEqual(options.logger.data[0],
                         'epipe occurred while sending event abc to listener '
                         'process1, listener state unchanged')
        self.assertEqual(options.logger.data[1],
                         'rebuffering event abc for pool whatever (buf size=0, max=10)')

    def test__acceptEvent_attaches_pool_serial_and_serial(self):
        from supervisor.process import GlobalSerial
        options = DummyOptions()
        gconfig = DummyPGroupConfig(options)
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        process1 = pool.processes['process1']
        from supervisor.states import EventListenerStates
        process1.listener_state = EventListenerStates.READY
        event = DummyEvent(None)
        pool._acceptEvent(event)
        self.assertEqual(event.serial, GlobalSerial.serial)
        self.assertEqual(event.pool_serials['whatever'], pool.serial)

    def test_repr(self):
        options = DummyOptions()
        gconfig = DummyPGroupConfig(options)
        pool = self._makeOne(gconfig)
        s = repr(pool)
        self.assertTrue('supervisor.process.EventListenerPool' in s)
        self.assertTrue(s.endswith('named whatever>'))

    def test_transition_nobody_ready(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STARTING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        pool.processes = {'process1': process1}
        event = DummyEvent()
        event.serial = 'a'
        from supervisor.states import EventListenerStates
        process1.listener_state = EventListenerStates.BUSY
        pool._acceptEvent(event)
        pool.transition()
        self.assertEqual(process1.transitioned, True)
        self.assertEqual(pool.event_buffer, [event])

    def test_transition_event_proc_not_running(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STARTING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        pool.processes = {'process1': process1}
        event = DummyEvent()
        from supervisor.states import EventListenerStates
        event.serial = 1
        process1.listener_state = EventListenerStates.READY
        pool._acceptEvent(event)
        pool.transition()
        self.assertEqual(process1.transitioned, True)
        self.assertEqual(pool.event_buffer, [event])
        self.assertEqual(process1.stdin_buffer, b'')
        self.assertEqual(process1.listener_state, EventListenerStates.READY)

    def test_transition_event_proc_running(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.RUNNING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        pool.processes = {'process1': process1}
        event = DummyEvent()
        from supervisor.states import EventListenerStates
        process1.listener_state = EventListenerStates.READY
        class DummyGroup:
            config = gconfig
        process1.group = DummyGroup
        pool._acceptEvent(event)
        pool.transition()
        self.assertEqual(process1.transitioned, True)
        self.assertEqual(pool.event_buffer, [])
        header, payload = process1.stdin_buffer.split(b'\n', 1)
        self.assertEqual(payload, b'dummy event', payload)
        self.assertEqual(process1.listener_state, EventListenerStates.BUSY)
        self.assertEqual(process1.event, event)

    def test_transition_event_proc_running_with_dispatch_throttle_notyet(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.RUNNING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        pool.dispatch_throttle = 5
        pool.last_dispatch = time.time()
        pool.processes = {'process1': process1}
        event = DummyEvent()
        from supervisor.states import EventListenerStates
        process1.listener_state = EventListenerStates.READY
        class DummyGroup:
            config = gconfig
        process1.group = DummyGroup
        pool._acceptEvent(event)
        pool.transition()
        self.assertEqual(process1.transitioned, True)
        self.assertEqual(pool.event_buffer, [event])  # not popped

    def test_transition_event_proc_running_with_dispatch_throttle_ready(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.RUNNING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        pool.dispatch_throttle = 5
        pool.last_dispatch = time.time() - 1000
        pool.processes = {'process1': process1}
        event = DummyEvent()
        from supervisor.states import EventListenerStates
        process1.listener_state = EventListenerStates.READY
        class DummyGroup:
            config = gconfig
        process1.group = DummyGroup
        pool._acceptEvent(event)
        pool.transition()
        self.assertEqual(process1.transitioned, True)
        self.assertEqual(pool.event_buffer, [])
        header, payload = process1.stdin_buffer.split(b'\n', 1)
        self.assertEqual(payload, b'dummy event', payload)
        self.assertEqual(process1.listener_state, EventListenerStates.BUSY)
        self.assertEqual(process1.event, event)

    def test_transition_event_proc_running_with_dispatch_throttle_last_dispatch_in_future(self):
        future_time = time.time() + 3600  # 1 hour into the future
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.RUNNING)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        pool.dispatch_throttle = 5
        pool.last_dispatch = future_time
        pool.processes = {'process1': process1}
        event = DummyEvent()
        from supervisor.states import EventListenerStates
        process1.listener_state = EventListenerStates.READY
        class DummyGroup:
            config = gconfig
        process1.group = DummyGroup
        pool._acceptEvent(event)
        pool.transition()
        self.assertEqual(process1.transitioned, True)
        self.assertEqual(pool.event_buffer, [event])  # not popped
        # Ensure pool.last_dispatch has been rolled backward
        self.assertTrue(pool.last_dispatch < future_time)

    def test__dispatchEvent_notready(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STOPPED)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        pool.processes = {'process1': process1}
        event = DummyEvent()
        pool._acceptEvent(event)
        self.assertEqual(pool._dispatchEvent(event), False)

    def test__dispatchEvent_proc_write_raises_non_EPIPE_OSError(self):
        options = DummyOptions()
        from supervisor.states import ProcessStates
        pconfig1 = DummyPConfig(options, 'process1', 'process1', '/bin/process1')
        process1 = DummyProcess(pconfig1, state=ProcessStates.RUNNING)
        def raise_epipe(envelope):
            raise OSError(errno.EAGAIN)
        process1.write = raise_epipe
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig1])
        pool = self._makeOne(gconfig)
        pool.processes = {'process1': process1}
        event = DummyEvent()
        from supervisor.states import EventListenerStates
        process1.listener_state = EventListenerStates.READY
        class DummyGroup:
            config = gconfig
        process1.group = DummyGroup
        pool._acceptEvent(event)
        self.assertRaises(OSError, pool._dispatchEvent, event)


class test_new_serial(unittest.TestCase):
    def _callFUT(self, inst):
        from supervisor.process import new_serial
        return new_serial(inst)

    def test_inst_serial_is_maxint(self):
        from supervisor.compat import maxint
        class Inst(object):
            def __init__(self):
                self.serial = maxint
        inst = Inst()
        result = self._callFUT(inst)
        self.assertEqual(inst.serial, 0)
        self.assertEqual(result, 0)

    def test_inst_serial_is_not_maxint(self):
        class Inst(object):
            def __init__(self):
                self.serial = 1
        inst = Inst()
        result = self._callFUT(inst)
        self.assertEqual(inst.serial, 2)
        self.assertEqual(result, 2)


# supervisor-4.2.5/supervisor/tests/test_rpcinterfaces.py

# -*- coding: utf-8 -*-
import unittest
import sys
import operator
import os
import time
import errno

from supervisor.tests.base import DummyOptions
from supervisor.tests.base import DummySupervisor
from supervisor.tests.base import DummyProcess
from supervisor.tests.base import DummyPConfig
from supervisor.tests.base import DummyPGroupConfig
from supervisor.tests.base import DummyProcessGroup
from supervisor.tests.base import PopulatedDummySupervisor
from supervisor.tests.base import _NOW
from supervisor.tests.base import _TIMEFORMAT
from supervisor.compat import as_string, PY2
from supervisor.datatypes import Automatic


class TestBase(unittest.TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def _assertRPCError(self, code, callable, *args, **kw):
        from supervisor import xmlrpc
        try:
            callable(*args, **kw)
        except xmlrpc.RPCError as inst:
            self.assertEqual(inst.code, code)
        else:
            raise AssertionError("Didn't raise")


class MainXMLRPCInterfaceTests(TestBase):
    def _getTargetClass(self):
        from supervisor import xmlrpc
        return xmlrpc.RootRPCInterface

    def _makeOne(self, *args, **kw):
        return self._getTargetClass()(*args, **kw)

    def test_ctor(self):
        interface = self._makeOne([('supervisor', None)])
        self.assertEqual(interface.supervisor, None)

    def test_traverse(self):
        dummy = DummyRPCInterface()
        interface = self._makeOne([('dummy', dummy)])
        from supervisor import xmlrpc
        self._assertRPCError(xmlrpc.Faults.UNKNOWN_METHOD,
                             xmlrpc.traverse, interface, 'notthere.hello', [])
        self._assertRPCError(xmlrpc.Faults.UNKNOWN_METHOD,
                             xmlrpc.traverse, interface,
                             'supervisor._readFile', [])
        self._assertRPCError(xmlrpc.Faults.INCORRECT_PARAMETERS,
                             xmlrpc.traverse, interface, 'dummy.hello', [1])
        self.assertEqual(xmlrpc.traverse(interface, 'dummy.hello', []),
                         'Hello!')


class SupervisorNamespaceXMLRPCInterfaceTests(TestBase):
    def _getTargetClass(self):
        from supervisor import rpcinterface
        return rpcinterface.SupervisorNamespaceRPCInterface

    def _makeOne(self, *args, **kw):
        return self._getTargetClass()(*args, **kw)

    def test_update(self):
        from supervisor import xmlrpc
        from supervisor.supervisord import SupervisorStates
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        interface._update('foo')
        self.assertEqual(interface.update_text, 'foo')
        supervisord.options.mood = SupervisorStates.SHUTDOWN
        self._assertRPCError(xmlrpc.Faults.SHUTDOWN_STATE,
                             interface._update, 'foo')

    def test_getAPIVersion(self):
        from supervisor import rpcinterface
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        version = interface.getAPIVersion()
        self.assertEqual(version, rpcinterface.API_VERSION)
        self.assertEqual(interface.update_text, 'getAPIVersion')

    def test_getAPIVersion_aliased_to_deprecated_getVersion(self):
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        self.assertEqual(interface.getAPIVersion, interface.getVersion)

    def test_getSupervisorVersion(self):
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        version = interface.getSupervisorVersion()
        from supervisor import options
        self.assertEqual(version, options.VERSION)
        self.assertEqual(interface.update_text, 'getSupervisorVersion')

    def test_getIdentification(self):
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        identifier = interface.getIdentification()
        self.assertEqual(identifier, supervisord.options.identifier)
        self.assertEqual(interface.update_text, 'getIdentification')

    def test_getState(self):
        from supervisor.states import getSupervisorStateDescription
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        stateinfo = interface.getState()
        statecode = supervisord.options.mood
        statename = getSupervisorStateDescription(statecode)
        self.assertEqual(stateinfo['statecode'], statecode)
        self.assertEqual(stateinfo['statename'], statename)
        self.assertEqual(interface.update_text, 'getState')

    def test_getPID(self):
        options = DummyOptions()
        supervisord = DummySupervisor(options)
        interface = self._makeOne(supervisord)
        self.assertEqual(interface.getPID(), options.get_pid())
        self.assertEqual(interface.update_text, 'getPID')

    def test_readLog_aliased_to_deprecated_readMainLog(self):
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        self.assertEqual(interface.readMainLog, interface.readLog)

    def test_readLog_unreadable(self):
        from supervisor import xmlrpc
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.NO_FILE, interface.readLog,
                             offset=0, length=1)

    def test_readLog_badargs(self):
        from supervisor import xmlrpc
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        try:
            logfile = supervisord.options.logfile
            with open(logfile, 'w+') as f:
                f.write('x' * 2048)
            self._assertRPCError(xmlrpc.Faults.BAD_ARGUMENTS,
                                 interface.readLog, offset=-1, length=1)
            self._assertRPCError(xmlrpc.Faults.BAD_ARGUMENTS,
                                 interface.readLog, offset=-1, length=-1)
        finally:
            os.remove(logfile)

    def test_readLog(self):
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        logfile = supervisord.options.logfile
        try:
            with open(logfile, 'w+') as f:
                f.write('x' * 2048)
                f.write('y' * 2048)
            data = interface.readLog(offset=0, length=0)
            self.assertEqual(interface.update_text, 'readLog')
            self.assertEqual(data, ('x' * 2048) + ('y' * 2048))
            data = interface.readLog(offset=2048, length=0)
            self.assertEqual(data, 'y' * 2048)
            data = interface.readLog(offset=0, length=2048)
            self.assertEqual(data, 'x' * 2048)
            data = interface.readLog(offset=-4, length=0)
            self.assertEqual(data, 'y' * 4)
        finally:
            os.remove(logfile)

    def test_clearLog_unreadable(self):
        from supervisor import xmlrpc
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.NO_FILE, interface.clearLog)

    def test_clearLog_unremoveable(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        options.existing = [options.logfile]
        options.remove_exception = OSError(errno.EPERM,
                                           os.strerror(errno.EPERM))
        supervisord = DummySupervisor(options)
        interface = self._makeOne(supervisord)
        self.assertRaises(xmlrpc.RPCError, interface.clearLog)
        self.assertEqual(interface.update_text, 'clearLog')

    def test_clearLog(self):
        options = DummyOptions()
        options.existing = [options.logfile]
        supervisord = DummySupervisor(options)
        interface = self._makeOne(supervisord)
        result = interface.clearLog()
        self.assertEqual(interface.update_text, 'clearLog')
        self.assertEqual(result, True)
        self.assertEqual(options.removed[0], options.logfile)
        for handler in supervisord.options.logger.handlers:
            self.assertEqual(handler.reopened, True)

    def test_shutdown(self):
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        value = interface.shutdown()
        self.assertEqual(value, True)
        self.assertEqual(supervisord.options.mood, -1)

    def test_restart(self):
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        value = interface.restart()
        self.assertEqual(value, True)
        self.assertEqual(supervisord.options.mood, 0)

    def test_reloadConfig(self):
        options = DummyOptions()
        supervisord = DummySupervisor(options)
        interface = self._makeOne(supervisord)
        changes = [
            [DummyPGroupConfig(options, 'added')],
            [DummyPGroupConfig(options, 'changed')],
            [DummyPGroupConfig(options, 'dropped')]
        ]
        supervisord.diff_to_active = lambda : changes
        value = interface.reloadConfig()
        self.assertEqual(value, [[['added'], ['changed'], ['dropped']]])

    def test_reloadConfig_process_config_raises_ValueError(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        def raise_exc(*arg, **kw):
            raise ValueError('foo')
        options.process_config = raise_exc
        supervisord = DummySupervisor(options)
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.CANT_REREAD, interface.reloadConfig)

    def test_addProcessGroup(self):
        from supervisor.supervisord import Supervisor
        from supervisor import xmlrpc
        options = DummyOptions()
        supervisord = Supervisor(options)
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        gconfig = DummyPGroupConfig(options, 'group1', pconfigs=[pconfig])
        supervisord.options.process_group_configs = [gconfig]
        interface = self._makeOne(supervisord)
        result = interface.addProcessGroup('group1')
        self.assertTrue(result)
        self.assertEqual(list(supervisord.process_groups.keys()), ['group1'])
        self._assertRPCError(xmlrpc.Faults.ALREADY_ADDED,
                             interface.addProcessGroup, 'group1')
        self.assertEqual(list(supervisord.process_groups.keys()), ['group1'])
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.addProcessGroup, 'asdf')
        self.assertEqual(list(supervisord.process_groups.keys()), ['group1'])

    def test_removeProcessGroup(self):
        from supervisor.supervisord import Supervisor
        options = DummyOptions()
        supervisord = Supervisor(options)
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        gconfig = DummyPGroupConfig(options, 'group1', pconfigs=[pconfig])
        supervisord.options.process_group_configs = [gconfig]
        interface = self._makeOne(supervisord)
        interface.addProcessGroup('group1')
        result = interface.removeProcessGroup('group1')
        self.assertTrue(result)
        self.assertEqual(list(supervisord.process_groups.keys()), [])

    def test_removeProcessGroup_bad_name(self):
        from supervisor.supervisord import Supervisor
        from supervisor import xmlrpc
        options = DummyOptions()
        supervisord = Supervisor(options)
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        gconfig = DummyPGroupConfig(options, 'group1', pconfigs=[pconfig])
        supervisord.options.process_group_configs = [gconfig]
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.removeProcessGroup, 'asdf')

    def test_removeProcessGroup_still_running(self):
        from supervisor.supervisord import Supervisor
        from supervisor import xmlrpc
        options = DummyOptions()
        supervisord = Supervisor(options)
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        gconfig = DummyPGroupConfig(options, 'group1', pconfigs=[pconfig])
        supervisord.options.process_group_configs = [gconfig]
        process = DummyProcessGroup(gconfig)
        process.unstopped_processes = [123]
        supervisord.process_groups = {'group1':process}
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.STILL_RUNNING,
                             interface.removeProcessGroup, 'group1')

    def test_startProcess_already_started(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'pid', 10)
        interface = self._makeOne(supervisord)
        self._assertRPCError(
            xmlrpc.Faults.ALREADY_STARTED,
            interface.startProcess, 'foo'
        )

    def test_startProcess_unknown_state(self):
        from supervisor import xmlrpc
        from supervisor.states import ProcessStates
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'pid', 10)
        supervisord.set_procattr('foo', 'state', ProcessStates.UNKNOWN)
        interface = self._makeOne(supervisord)
        self._assertRPCError(
            xmlrpc.Faults.FAILED,
            interface.startProcess, 'foo'
        )
        process = supervisord.process_groups['foo'].processes['foo']
        self.assertEqual(process.spawned, False)

    def test_startProcess_bad_group_name(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        supervisord = PopulatedDummySupervisor(options, 'group1', pconfig)
        interface = self._makeOne(supervisord)
        from supervisor import xmlrpc
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.startProcess, 'group2:foo')

    def test_startProcess_bad_process_name(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        supervisord = PopulatedDummySupervisor(options, 'group1', pconfig)
        interface = self._makeOne(supervisord)
        from supervisor import xmlrpc
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.startProcess, 'group1:bar')

    def test_startProcess_file_not_found(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/foo/bar', autostart=False)
        from supervisor.options import NotFound
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        process = supervisord.process_groups['foo'].processes['foo']
        process.execv_arg_exception = NotFound
        interface = self._makeOne(supervisord)
        from supervisor import xmlrpc
        self._assertRPCError(xmlrpc.Faults.NO_FILE,
                             interface.startProcess, 'foo')

    def test_startProcess_bad_command(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/foo/bar', autostart=False)
        from supervisor.options import BadCommand
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        process = supervisord.process_groups['foo'].processes['foo']
        process.execv_arg_exception = BadCommand
        interface = self._makeOne(supervisord)
        from supervisor import xmlrpc
        self._assertRPCError(xmlrpc.Faults.NOT_EXECUTABLE,
                             interface.startProcess, 'foo')

    def test_startProcess_file_not_executable(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/foo/bar', autostart=False)
        from supervisor.options import NotExecutable
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        process = supervisord.process_groups['foo'].processes['foo']
        process.execv_arg_exception = NotExecutable
        interface = self._makeOne(supervisord)
        from supervisor import xmlrpc
        self._assertRPCError(xmlrpc.Faults.NOT_EXECUTABLE,
                             interface.startProcess, 'foo')

    def test_startProcess_spawnerr(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.STOPPED)
        process = supervisord.process_groups['foo'].processes['foo']
        process.spawnerr = 'abc'
        interface = self._makeOne(supervisord)
        self._assertRPCError(
            xmlrpc.Faults.SPAWN_ERROR,
            interface.startProcess, 'foo'
        )

    def test_startProcess(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False,
                               startsecs=.01)
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        result = interface.startProcess('foo')
        process = supervisord.process_groups['foo'].processes['foo']
        self.assertEqual(process.spawned, True)
        self.assertEqual(interface.update_text, 'startProcess')
        process.state = ProcessStates.RUNNING
        self.assertEqual(result, True)

    def test_startProcess_spawnerr_in_onwait(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.STOPPED)
        process = supervisord.process_groups['foo'].processes['foo']
        def spawn():
            process.spawned = True
            process.state = ProcessStates.STARTING
        def transition():
            process.spawnerr = 'abc'
        process.spawn = spawn
        process.transition = transition
        interface = self._makeOne(supervisord)
        callback = interface.startProcess('foo')
        self._assertRPCError(xmlrpc.Faults.SPAWN_ERROR, callback)

    def test_startProcess_success_in_onwait(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.STOPPED)
        process = supervisord.process_groups['foo'].processes['foo']
        def spawn():
            process.spawned = True
            process.state = ProcessStates.STARTING
        process.spawn = spawn
        interface = self._makeOne(supervisord)
        callback = interface.startProcess('foo')
        process.state = ProcessStates.RUNNING
        self.assertEqual(callback(), True)

    def test_startProcess_nowait(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        result = interface.startProcess('foo', wait=False)
        self.assertEqual(result, True)
        process = supervisord.process_groups['foo'].processes['foo']
        self.assertEqual(process.spawned, True)
        self.assertEqual(interface.update_text, 'startProcess')

    def test_startProcess_abnormal_term_process_not_running(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__, autostart=False)
        from supervisor.process import ProcessStates
        from supervisor import http
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['foo'].processes['foo']
        def spawn():
            process.spawned = True
            process.state = ProcessStates.STARTING
        process.spawn = spawn
        callback = interface.startProcess('foo', 100) # milliseconds
        result = callback()
        self.assertEqual(result, http.NOT_DONE_YET)
        self.assertEqual(process.spawned, True)
        self.assertEqual(interface.update_text, 'startProcess')
        process.state = ProcessStates.BACKOFF
        from supervisor import xmlrpc
        self._assertRPCError(xmlrpc.Faults.ABNORMAL_TERMINATION, callback)

    def test_startProcess_splat_calls_startProcessGroup(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', __file__,
                                autostart=False, startsecs=.01)
        pconfig2 = DummyPConfig(options, 'process2', __file__,
                                priority=2, startsecs=.01)
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        from supervisor.process import ProcessStates
        supervisord.set_procattr('process1', 'state', ProcessStates.STOPPED)
        supervisord.set_procattr('process2', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        interface.startProcess('foo:*')
        self.assertEqual(interface.update_text, 'startProcessGroup')

    def test_startProcessGroup(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', __file__,
                                priority=1, startsecs=.01)
        pconfig2 = DummyPConfig(options, 'process2', __file__,
                                priority=2, startsecs=.01)
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        from supervisor.process import ProcessStates
        from supervisor.xmlrpc import Faults
        supervisord.set_procattr('process1', 'state', ProcessStates.STOPPED)
        supervisord.set_procattr('process2', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        callback = interface.startProcessGroup('foo')
        self.assertEqual(
            callback(),
            [{'group': 'foo',
              'status': Faults.SUCCESS,
              'description': 'OK',
              'name': 'process1'},
             {'group': 'foo',
              'status': Faults.SUCCESS,
              'description': 'OK',
              'name': 'process2'}
            ]
        )
        self.assertEqual(interface.update_text, 'startProcess')
        process1 = supervisord.process_groups['foo'].processes['process1']
        self.assertEqual(process1.spawned, True)
        process2 = supervisord.process_groups['foo'].processes['process2']
        self.assertEqual(process2.spawned, True)

    def test_startProcessGroup_nowait(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', __file__,
                                priority=1, startsecs=.01)
        pconfig2 = DummyPConfig(options, 'process2', __file__,
                                priority=2, startsecs=.01)
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        from supervisor.process import ProcessStates
        supervisord.set_procattr('process1', 'state', ProcessStates.STOPPED)
        supervisord.set_procattr('process2', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        callback = interface.startProcessGroup('foo', wait=False)
        from supervisor.xmlrpc import Faults
        self.assertEqual(
            callback(),
            [{'group': 'foo',
              'status': Faults.SUCCESS,
              'description': 'OK',
              'name': 'process1'},
             {'group': 'foo',
              'status': Faults.SUCCESS,
              'description': 'OK',
              'name': 'process2'}
            ]
        )

    def test_startProcessGroup_badname(self):
        from supervisor import xmlrpc
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.startProcessGroup, 'foo')

    def test_startAllProcesses(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', __file__,
                                priority=1, startsecs=.01)
        pconfig2 = DummyPConfig(options, 'process2', __file__,
                                priority=2, startsecs=.01)
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        from supervisor.process import ProcessStates
        supervisord.set_procattr('process1', 'state', ProcessStates.STOPPED)
        supervisord.set_procattr('process2', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        callback = interface.startAllProcesses()
        from supervisor.xmlrpc import Faults
        self.assertEqual(
            callback(),
            [{'group': 'foo',
              'status': Faults.SUCCESS,
              'description': 'OK',
              'name': 'process1'},
             {'group': 'foo',
              'status': Faults.SUCCESS,
              'description': 'OK',
              'name': 'process2'}
            ]
        )
        self.assertEqual(interface.update_text, 'startProcess')
        process1 = supervisord.process_groups['foo'].processes['process1']
        self.assertEqual(process1.spawned, True)
        process2 = supervisord.process_groups['foo'].processes['process2']
        self.assertEqual(process2.spawned, True)

    def test_startAllProcesses_nowait(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', __file__,
                                priority=1, startsecs=.01)
        pconfig2 = DummyPConfig(options, 'process2', __file__,
                                priority=2, startsecs=.01)
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        from supervisor.process import ProcessStates
        supervisord.set_procattr('process1', 'state', ProcessStates.STOPPED)
        supervisord.set_procattr('process2', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        callback = interface.startAllProcesses(wait=False)
        from supervisor.xmlrpc import Faults
        self.assertEqual(
            callback(),
            [{'group': 'foo',
              'status': Faults.SUCCESS,
              'description': 'OK',
              'name': 'process1'},
             {'group': 'foo',
              'status': Faults.SUCCESS,
              'description': 'OK',
              'name': 'process2'}
            ]
        )

    def test_stopProcess(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.RUNNING)
        interface = self._makeOne(supervisord)
        result = interface.stopProcess('foo')
        self.assertTrue(result)
        self.assertEqual(interface.update_text, 'stopProcess')
        process = supervisord.process_groups['foo'].processes['foo']
        self.assertEqual(process.backoff, 0)
        self.assertEqual(process.delay, 0)
        self.assertFalse(process.killing)
        self.assertEqual(process.state, ProcessStates.STOPPED)
        self.assertTrue(process.stop_report_called)
        self.assertEqual(len(supervisord.process_groups['foo'].processes), 1)
        self.assertEqual(interface.update_text, 'stopProcess')

    def test_stopProcess_nowait(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', __file__)
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.RUNNING)
        interface = self._makeOne(supervisord)
        result = interface.stopProcess('foo', wait=False)
        self.assertEqual(result, True)
        process = supervisord.process_groups['foo'].processes['foo']
        self.assertEqual(process.stop_called, True)
        self.assertTrue(process.stop_report_called)
        self.assertEqual(interface.update_text, 'stopProcess')

    def test_stopProcess_success_in_onwait(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        process = supervisord.process_groups['foo'].processes['foo']
        L = [
            ProcessStates.RUNNING,
            ProcessStates.STOPPING,
            ProcessStates.STOPPED
        ]
        def get_state():
            return L.pop(0)
        process.get_state = get_state
        interface = self._makeOne(supervisord)
        callback = interface.stopProcess('foo')
        self.assertEqual(interface.update_text, 'stopProcess')
        self.assertTrue(callback())

    def test_stopProcess_NDY_in_onwait(self):
        from supervisor.http import NOT_DONE_YET
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        process = supervisord.process_groups['foo'].processes['foo']
        L = [
            ProcessStates.RUNNING,
            ProcessStates.STOPPING,
            ProcessStates.STOPPING
        ]
        def get_state():
            return L.pop(0)
        process.get_state = get_state
        interface = self._makeOne(supervisord)
        callback = interface.stopProcess('foo')
        self.assertEqual(callback(), NOT_DONE_YET)
        self.assertEqual(interface.update_text, 'stopProcess')

    def test_stopProcess_bad_name(self):
        from supervisor.xmlrpc import Faults
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        self._assertRPCError(Faults.BAD_NAME, interface.stopProcess, 'foo')

    def test_stopProcess_not_running(self):
        from supervisor.states import ProcessStates
        from supervisor.xmlrpc import Faults
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.EXITED)
        interface = self._makeOne(supervisord)
        self._assertRPCError(Faults.NOT_RUNNING, interface.stopProcess, 'foo')

    def test_stopProcess_failed(self):
        from supervisor.xmlrpc import Faults
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'stop', lambda: 'unstoppable')
        interface = self._makeOne(supervisord)
        self._assertRPCError(Faults.FAILED, interface.stopProcess, 'foo')

    def test_stopProcessGroup(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', '/bin/foo', priority=1)
        pconfig2 = DummyPConfig(options, 'process2', '/bin/foo2', priority=2)
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING)
        supervisord.set_procattr('process2', 'state', ProcessStates.RUNNING)
        interface = self._makeOne(supervisord)
        callback = interface.stopProcessGroup('foo')
        self.assertEqual(interface.update_text, 'stopProcessGroup')
        from supervisor import http
        value = http.NOT_DONE_YET
        while 1:
            value = callback()
            if value is not http.NOT_DONE_YET:
                break
        from supervisor.xmlrpc import Faults
        self.assertEqual(value, [
            {'status': Faults.SUCCESS,
             'group': 'foo',
             'name': 'process1',
             'description': 'OK'},
            {'status': Faults.SUCCESS,
             'group': 'foo',
             'name': 'process2',
             'description': 'OK'},
            ]
        )
        process1 = supervisord.process_groups['foo'].processes['process1']
        self.assertEqual(process1.stop_called, True)
        process2 = supervisord.process_groups['foo'].processes['process2']
        self.assertEqual(process2.stop_called, True)

    def test_stopProcessGroup_nowait(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', __file__, priority=1)
        pconfig2 = DummyPConfig(options, 'process2', __file__, priority=2)
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        from supervisor.process import ProcessStates
        supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING)
        supervisord.set_procattr('process2', 'state', ProcessStates.RUNNING)
        interface = self._makeOne(supervisord)
        callback = interface.stopProcessGroup('foo', wait=False)
        from supervisor.xmlrpc import Faults
        self.assertEqual(
            callback(),
            [
                {'name': 'process1',
                 'description': 'OK',
                 'group': 'foo',
                 'status': Faults.SUCCESS},
                {'name': 'process2',
                 'description': 'OK',
                 'group': 'foo',
                 'status': Faults.SUCCESS}
            ]
        )

    def test_stopProcessGroup_badname(self):
        from supervisor import xmlrpc
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.stopProcessGroup, 'foo')

    def test_stopProcess_splat_calls_stopProcessGroup(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', __file__,
                                autostart=False, startsecs=.01)
        pconfig2 = DummyPConfig(options, 'process2', __file__,
                                priority=2, startsecs=.01)
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        from supervisor.process import ProcessStates
        supervisord.set_procattr('process1', 'state', ProcessStates.STOPPED)
        supervisord.set_procattr('process2', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        interface.stopProcess('foo:*')
        self.assertEqual(interface.update_text, 'stopProcessGroup')

    def test_stopAllProcesses(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', '/bin/foo', priority=1)
        pconfig2 = DummyPConfig(options, 'process2', '/bin/foo2', priority=2)
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING)
        supervisord.set_procattr('process2', 'state', ProcessStates.RUNNING)
        interface = self._makeOne(supervisord)
        callback = interface.stopAllProcesses()
        self.assertEqual(interface.update_text, 'stopAllProcesses')
        from supervisor import http
        value = http.NOT_DONE_YET
        while 1:
            value = callback()
            if value is not http.NOT_DONE_YET:
                break
        from supervisor.xmlrpc import Faults
        self.assertEqual(value, [
            {'status': Faults.SUCCESS,
             'group': 'foo',
             'name': 'process1',
             'description': 'OK'},
            {'status': Faults.SUCCESS,
             'group': 'foo',
             'name': 'process2',
             'description': 'OK'},
            ]
        )
        process1 = supervisord.process_groups['foo'].processes['process1']
        self.assertEqual(process1.stop_called, True)
        process2 = supervisord.process_groups['foo'].processes['process2']
        self.assertEqual(process2.stop_called, True)

    def test_stopAllProcesses_nowait(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', __file__, priority=1)
        pconfig2 = DummyPConfig(options, 'process2', __file__, priority=2)
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        from supervisor.process import ProcessStates
        supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING)
        supervisord.set_procattr('process2', 'state', ProcessStates.RUNNING)
        interface = self._makeOne(supervisord)
        callback = interface.stopAllProcesses(wait=False)
        from supervisor.xmlrpc import Faults
        self.assertEqual(
            callback(),
            [{'group': 'foo',
              'status': Faults.SUCCESS,
              'description': 'OK',
              'name': 'process1'},
             {'group': 'foo',
              'status': Faults.SUCCESS,
              'description': 'OK',
              'name': 'process2'}
            ]
        )

    def test_signalProcess_with_signal_number(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.RUNNING)
        interface = self._makeOne(supervisord)
        result = interface.signalProcess('foo', 10)
        self.assertEqual(interface.update_text, 'signalProcess')
        self.assertEqual(result, True)
        p = supervisord.process_groups[supervisord.group_name].processes['foo']
        self.assertEqual(p.sent_signal, 10)

    def test_signalProcess_with_signal_name(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.RUNNING)
        from supervisor.datatypes import signal_number
        signame = 'quit'
        signum = signal_number(signame)
        interface = self._makeOne(supervisord)
        result = interface.signalProcess('foo', signame)
        self.assertEqual(interface.update_text, 'signalProcess')
        self.assertEqual(result, True)
        p = supervisord.process_groups[supervisord.group_name].processes['foo']
        self.assertEqual(p.sent_signal, signum)

    def test_signalProcess_stopping(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.STOPPING)
        interface = self._makeOne(supervisord)
        result = interface.signalProcess('foo', 10)
        self.assertEqual(interface.update_text, 'signalProcess')
        self.assertEqual(result, True)
        p = supervisord.process_groups[supervisord.group_name].processes['foo']
        self.assertEqual(p.sent_signal, 10)

    def test_signalProcess_badsignal(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        from supervisor.process import ProcessStates
        from supervisor import xmlrpc
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.RUNNING)
        interface = self._makeOne(supervisord)
        self._assertRPCError(
            xmlrpc.Faults.BAD_SIGNAL,
            interface.signalProcess, 'foo', 155
        )

    def test_signalProcess_notrunning(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        from supervisor.process import ProcessStates
        from supervisor import xmlrpc
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        self._assertRPCError(
            xmlrpc.Faults.NOT_RUNNING,
            interface.signalProcess, 'foo', 10
        )

    def test_signalProcess_signal_returns_message(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        from supervisor.process import ProcessStates
        from supervisor import xmlrpc
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.RUNNING)
        def signalreturn(sig):
            return 'msg'
        pgroup = supervisord.process_groups[supervisord.group_name]
        proc = pgroup.processes['foo']
        proc.signal = signalreturn
        interface = self._makeOne(supervisord)
        self._assertRPCError(
            xmlrpc.Faults.FAILED,
            interface.signalProcess, 'foo', 10
        )

    def test_signalProcess_withgroup(self):
        """ Test that sending foo:* works """
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', '/bin/foo')
        pconfig2 = DummyPConfig(options, 'process2', '/bin/foo2')
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING)
        supervisord.set_procattr('process2', 'state', ProcessStates.STOPPING)
        interface = self._makeOne(supervisord)
        result = interface.signalProcess('foo:*', 10)
        self.assertEqual(interface.update_text, 'signalProcessGroup')
        # Sort so we get deterministic results despite hash randomization
        result = sorted(result, key=operator.itemgetter('name'))
        from supervisor.xmlrpc import Faults
        self.assertEqual(result, [
            {'status': Faults.SUCCESS,
             'group': 'foo',
             'name': 'process1',
             'description': 'OK'},
            {'status': Faults.SUCCESS,
             'group': 'foo',
             'name': 'process2',
             'description': 'OK'},
            ]
        )
        process1 = supervisord.process_groups['foo'].processes['process1']
        self.assertEqual(process1.sent_signal, 10)
        process2 = supervisord.process_groups['foo'].processes['process2']
        self.assertEqual(process2.sent_signal, 10)

    def test_signalProcessGroup_with_signal_number(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', '/bin/foo')
        pconfig2 = DummyPConfig(options, 'process2', '/bin/foo2')
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING)
        supervisord.set_procattr('process2', 'state', ProcessStates.RUNNING)
        interface = self._makeOne(supervisord)
        result = interface.signalProcessGroup('foo', 10)
        self.assertEqual(interface.update_text, 'signalProcessGroup')
        # Sort so we get deterministic results despite hash randomization
        result = sorted(result, key=operator.itemgetter('name'))
        from supervisor.xmlrpc import Faults
        self.assertEqual(result, [
            {'status': Faults.SUCCESS,
             'group': 'foo',
             'name': 'process1',
             'description': 'OK'},
            {'status': Faults.SUCCESS,
             'group': 'foo',
             'name': 'process2',
             'description': 'OK'},
            ]
        )
        process1 = supervisord.process_groups['foo'].processes['process1']
        self.assertEqual(process1.sent_signal, 10)
        process2 = supervisord.process_groups['foo'].processes['process2']
        self.assertEqual(process2.sent_signal, 10)

    def test_signalProcessGroup_with_signal_name(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', '/bin/foo')
        from supervisor.process import ProcessStates
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig1)
        supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING)
        from supervisor.datatypes import signal_number
        signame = 'term'
        signum = signal_number(signame)
        interface =
self._makeOne(supervisord) result = interface.signalProcessGroup('foo', signame) self.assertEqual(interface.update_text, 'signalProcessGroup') from supervisor.xmlrpc import Faults self.assertEqual(result, [ {'status': Faults.SUCCESS, 'group': 'foo', 'name': 'process1', 'description': 'OK'}, ] ) process1 = supervisord.process_groups['foo'].processes['process1'] self.assertEqual(process1.sent_signal, signum) def test_signalProcessGroup_nosuchgroup(self): from supervisor import xmlrpc options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', '/bin/foo') pconfig2 = DummyPConfig(options, 'process2', '/bin/foo2') from supervisor.process import ProcessStates supervisord = PopulatedDummySupervisor(options, 'foo', pconfig1, pconfig2) supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING) supervisord.set_procattr('process2', 'state', ProcessStates.RUNNING) interface = self._makeOne(supervisord) self._assertRPCError(xmlrpc.Faults.BAD_NAME, interface.signalProcessGroup, 'bar', 10 ) def test_signalAllProcesses_with_signal_number(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', '/bin/foo') pconfig2 = DummyPConfig(options, 'process2', '/bin/foo2') from supervisor.process import ProcessStates supervisord = PopulatedDummySupervisor(options, 'foo', pconfig1, pconfig2) supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING) supervisord.set_procattr('process2', 'state', ProcessStates.RUNNING) interface = self._makeOne(supervisord) result = interface.signalAllProcesses(10) self.assertEqual(interface.update_text, 'signalAllProcesses') # Sort so we get deterministic results despite hash randomization result = sorted(result, key=operator.itemgetter('name')) from supervisor.xmlrpc import Faults self.assertEqual(result, [ {'status': Faults.SUCCESS, 'group': 'foo', 'name': 'process1', 'description': 'OK'}, {'status': Faults.SUCCESS, 'group': 'foo', 'name': 'process2', 'description': 'OK'}, ] ) process1 = 
supervisord.process_groups['foo'].processes['process1'] self.assertEqual(process1.sent_signal, 10) process2 = supervisord.process_groups['foo'].processes['process2'] self.assertEqual(process2.sent_signal, 10) def test_signalAllProcesses_with_signal_name(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', '/bin/foo') from supervisor.process import ProcessStates supervisord = PopulatedDummySupervisor(options, 'foo', pconfig1) supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING) from supervisor.datatypes import signal_number signame = 'hup' signum = signal_number(signame) interface = self._makeOne(supervisord) result = interface.signalAllProcesses(signame) self.assertEqual(interface.update_text, 'signalAllProcesses') from supervisor.xmlrpc import Faults self.assertEqual(result, [ {'status': Faults.SUCCESS, 'group': 'foo', 'name': 'process1', 'description': 'OK'}, ] ) process1 = supervisord.process_groups['foo'].processes['process1'] self.assertEqual(process1.sent_signal, signum) def test_getAllConfigInfo(self): options = DummyOptions() supervisord = DummySupervisor(options, 'foo') pconfig1 = DummyPConfig(options, 'process1', __file__, stdout_logfile=Automatic, stderr_logfile=Automatic, ) pconfig2 = DummyPConfig(options, 'process2', __file__, stdout_logfile=None, stderr_logfile=None, ) gconfig = DummyPGroupConfig(options, 'group1', pconfigs=[pconfig1, pconfig2]) supervisord.process_groups = {'group1': DummyProcessGroup(gconfig)} supervisord.options.process_group_configs = [gconfig] interface = self._makeOne(supervisord) configs = interface.getAllConfigInfo() self.assertEqual(configs[0]['autostart'], True) self.assertEqual(configs[0]['stopwaitsecs'], 10) self.assertEqual(configs[0]['stdout_events_enabled'], False) self.assertEqual(configs[0]['stderr_events_enabled'], False) self.assertEqual(configs[0]['group'], 'group1') self.assertEqual(configs[0]['stdout_capture_maxbytes'], 0) self.assertEqual(configs[0]['name'], 'process1') 
        self.assertEqual(configs[0]['stopsignal'], 15)
        self.assertEqual(configs[0]['stderr_syslog'], False)
        self.assertEqual(configs[0]['stdout_logfile_maxbytes'], 0)
        self.assertEqual(configs[0]['group_prio'], 999)
        self.assertEqual(configs[0]['killasgroup'], False)
        self.assertEqual(configs[0]['process_prio'], 999)
        self.assertEqual(configs[0]['stdout_syslog'], False)
        self.assertEqual(configs[0]['stderr_logfile_maxbytes'], 0)
        self.assertEqual(configs[0]['startsecs'], 10)
        self.assertEqual(configs[0]['redirect_stderr'], False)
        self.assertEqual(configs[0]['stdout_logfile'], 'auto')
        self.assertEqual(configs[0]['exitcodes'], (0,))
        self.assertEqual(configs[0]['stderr_capture_maxbytes'], 0)
        self.assertEqual(configs[0]['startretries'], 999)
        self.assertEqual(configs[0]['inuse'], True)
        self.assertEqual(configs[0]['stderr_logfile'], 'auto')
        self.assertEqual(configs[0]['stdout_logfile_backups'], 0)
        assert 'test_rpcinterfaces.py' in configs[0]['command']

        self.assertEqual(configs[1]['autostart'], True)
        self.assertEqual(configs[1]['stopwaitsecs'], 10)
        self.assertEqual(configs[1]['stdout_events_enabled'], False)
        self.assertEqual(configs[1]['stderr_events_enabled'], False)
        self.assertEqual(configs[1]['group'], 'group1')
        self.assertEqual(configs[1]['stdout_capture_maxbytes'], 0)
        self.assertEqual(configs[1]['name'], 'process2')
        self.assertEqual(configs[1]['stopsignal'], 15)
        self.assertEqual(configs[1]['stderr_syslog'], False)
        self.assertEqual(configs[1]['stdout_logfile_maxbytes'], 0)
        self.assertEqual(configs[1]['group_prio'], 999)
        self.assertEqual(configs[1]['killasgroup'], False)
        self.assertEqual(configs[1]['process_prio'], 999)
        self.assertEqual(configs[1]['stdout_syslog'], False)
        self.assertEqual(configs[1]['stderr_logfile_maxbytes'], 0)
        self.assertEqual(configs[1]['startsecs'], 10)
        self.assertEqual(configs[1]['redirect_stderr'], False)
        self.assertEqual(configs[1]['stdout_logfile'], 'none')
        self.assertEqual(configs[1]['exitcodes'], (0,))
        self.assertEqual(configs[1]['stderr_capture_maxbytes'], 0)
        self.assertEqual(configs[1]['startretries'], 999)
        self.assertEqual(configs[1]['inuse'], True)
        self.assertEqual(configs[1]['stderr_logfile'], 'none')
        self.assertEqual(configs[1]['stdout_logfile_backups'], 0)
        assert 'test_rpcinterfaces.py' in configs[1]['command']

    def test_getAllConfigInfo_filters_types_not_compatible_with_xmlrpc(self):
        options = DummyOptions()
        supervisord = DummySupervisor(options, 'foo')
        pconfig1 = DummyPConfig(options, 'process1', __file__)
        pconfig2 = DummyPConfig(options, 'process2', __file__)
        gconfig = DummyPGroupConfig(options, 'group1',
                                    pconfigs=[pconfig1, pconfig2])
        supervisord.process_groups = {'group1': DummyProcessGroup(gconfig)}
        supervisord.options.process_group_configs = [gconfig]
        interface = self._makeOne(supervisord)
        unmarshallables = [type(None)]
        try:
            from enum import Enum
            unmarshallables.append(Enum)
        except ImportError:  # python 2
            pass
        for typ in unmarshallables:
            for config in interface.getAllConfigInfo():
                for k, v in config.items():
                    self.assertFalse(isinstance(v, typ), k)

    def test__interpretProcessInfo(self):
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        start = _NOW - 100
        stop = _NOW - 1
        from supervisor.process import ProcessStates
        running = {'name': 'running',
                   'pid': 1,
                   'state': ProcessStates.RUNNING,
                   'start': start,
                   'stop': stop,
                   'now': _NOW}
        description = interface._interpretProcessInfo(running)
        self.assertEqual(description, 'pid 1, uptime 0:01:40')
        fatal = {'name': 'fatal',
                 'pid': 2,
                 'state': ProcessStates.FATAL,
                 'start': start,
                 'stop': stop,
                 'now': _NOW,
                 'spawnerr': 'Hosed'}
        description = interface._interpretProcessInfo(fatal)
        self.assertEqual(description, 'Hosed')
        fatal2 = {'name': 'fatal',
                  'pid': 2,
                  'state': ProcessStates.FATAL,
                  'start': start,
                  'stop': stop,
                  'now': _NOW,
                  'spawnerr': '',
                  }
        description = interface._interpretProcessInfo(fatal2)
        self.assertEqual(description, 'unknown error (try "tail fatal")')
        stopped = {'name': 'stopped',
                   'pid': 3,
                   'state': ProcessStates.STOPPED,
                   'start': start,
                   'stop': stop,
                   'now': _NOW,
                   'spawnerr': '',
                   }
        description = interface._interpretProcessInfo(stopped)
        from datetime import datetime
        stoptime = datetime(*time.localtime(stop)[:7])
        self.assertEqual(description, stoptime.strftime(_TIMEFORMAT))
        stopped2 = {'name': 'stopped',
                    'pid': 3,
                    'state': ProcessStates.STOPPED,
                    'start': 0,
                    'stop': stop,
                    'now': _NOW,
                    'spawnerr': '',
                    }
        description = interface._interpretProcessInfo(stopped2)
        self.assertEqual(description, 'Not started')

    def test__interpretProcessInfo_doesnt_report_negative_uptime(self):
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        from supervisor.process import ProcessStates
        running = {'name': 'running',
                   'pid': 42,
                   'state': ProcessStates.RUNNING,
                   'start': _NOW + 10,  # started in the future
                   'stop': None,
                   'now': _NOW}
        description = interface._interpretProcessInfo(running)
        self.assertEqual(description, 'pid 42, uptime 0:00:00')

    def test_getProcessInfo(self):
        from supervisor.process import ProcessStates
        options = DummyOptions()
        config = DummyPConfig(options, 'foo', '/bin/foo',
                              stdout_logfile='/tmp/fleeb.bar')
        process = DummyProcess(config)
        process.pid = 111
        process.laststart = 10
        process.laststop = 11
        pgroup_config = DummyPGroupConfig(options, name='foo')
        pgroup = DummyProcessGroup(pgroup_config)
        pgroup.processes = {'foo': process}
        supervisord = DummySupervisor(process_groups={'foo': pgroup})
        interface = self._makeOne(supervisord)
        data = interface.getProcessInfo('foo')
        self.assertEqual(interface.update_text, 'getProcessInfo')
        self.assertEqual(data['logfile'], '/tmp/fleeb.bar')
        self.assertEqual(data['stdout_logfile'], '/tmp/fleeb.bar')
        self.assertEqual(data['stderr_logfile'], '')
        self.assertEqual(data['name'], 'foo')
        self.assertEqual(data['pid'], 111)
        self.assertEqual(data['start'], 10)
        self.assertEqual(data['stop'], 11)
        self.assertEqual(data['state'], ProcessStates.RUNNING)
        self.assertEqual(data['statename'], 'RUNNING')
        self.assertEqual(data['exitstatus'], 0)
        self.assertEqual(data['spawnerr'], '')
        self.assertTrue(data['description'].startswith('pid 111'))

    def test_getProcessInfo_logfile_NONE(self):
        options = DummyOptions()
        config = DummyPConfig(options, 'foo', '/bin/foo',
                              stdout_logfile=None)
        process = DummyProcess(config)
        process.pid = 111
        process.laststart = 10
        process.laststop = 11
        pgroup_config = DummyPGroupConfig(options, name='foo')
        pgroup = DummyProcessGroup(pgroup_config)
        pgroup.processes = {'foo': process}
        supervisord = DummySupervisor(process_groups={'foo': pgroup})
        interface = self._makeOne(supervisord)
        data = interface.getProcessInfo('foo')
        self.assertEqual(data['logfile'], '')
        self.assertEqual(data['stdout_logfile'], '')

    def test_getProcessInfo_unknown_state(self):
        from supervisor.states import ProcessStates
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        supervisord.set_procattr('foo', 'state', ProcessStates.UNKNOWN)
        interface = self._makeOne(supervisord)
        data = interface.getProcessInfo('foo')
        self.assertEqual(data['statename'], 'UNKNOWN')
        self.assertEqual(data['description'], '')

    def test_getProcessInfo_bad_name_when_bad_process(self):
        from supervisor import xmlrpc
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.getProcessInfo, 'nonexistent')

    def test_getProcessInfo_bad_name_when_no_process(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.getProcessInfo, 'foo:')

    def test_getProcessInfo_caps_timestamps_exceeding_xmlrpc_maxint(self):
        from supervisor.compat import xmlrpclib
        options = DummyOptions()
        config = DummyPConfig(options, 'foo', '/bin/foo',
                              stdout_logfile=None)
        process = DummyProcess(config)
        process.laststart = float(xmlrpclib.MAXINT + 1)
        process.laststop = float(xmlrpclib.MAXINT + 1)
        pgroup_config = DummyPGroupConfig(options, name='foo')
        pgroup = DummyProcessGroup(pgroup_config)
        pgroup.processes = {'foo': process}
        supervisord = DummySupervisor(process_groups={'foo': pgroup})
        interface = self._makeOne(supervisord)
        interface._now = lambda: float(xmlrpclib.MAXINT + 1)
        data = interface.getProcessInfo('foo')
        self.assertEqual(data['start'], xmlrpclib.MAXINT)
        self.assertEqual(data['stop'], xmlrpclib.MAXINT)
        self.assertEqual(data['now'], xmlrpclib.MAXINT)

    def test_getAllProcessInfo(self):
        from supervisor.process import ProcessStates
        options = DummyOptions()
        p1config = DummyPConfig(options, 'process1', '/bin/process1',
                                priority=1,
                                stdout_logfile='/tmp/process1.log')
        p2config = DummyPConfig(options, 'process2', '/bin/process2',
                                priority=2,
                                stdout_logfile='/tmp/process2.log')
        supervisord = PopulatedDummySupervisor(options, 'gname',
                                               p1config, p2config)
        supervisord.set_procattr('process1', 'pid', 111)
        supervisord.set_procattr('process1', 'laststart', 10)
        supervisord.set_procattr('process1', 'laststop', 11)
        supervisord.set_procattr('process1', 'state', ProcessStates.RUNNING)
        supervisord.set_procattr('process2', 'pid', 0)
        supervisord.set_procattr('process2', 'laststart', 20)
        supervisord.set_procattr('process2', 'laststop', 11)
        supervisord.set_procattr('process2', 'state', ProcessStates.STOPPED)
        interface = self._makeOne(supervisord)
        info = interface.getAllProcessInfo()
        self.assertEqual(interface.update_text, 'getProcessInfo')
        self.assertEqual(len(info), 2)
        p1info = info[0]
        self.assertEqual(p1info['logfile'], '/tmp/process1.log')
        self.assertEqual(p1info['stdout_logfile'], '/tmp/process1.log')
        self.assertEqual(p1info['stderr_logfile'], '')
        self.assertEqual(p1info['name'], 'process1')
        self.assertEqual(p1info['pid'], 111)
        self.assertEqual(p1info['start'], 10)
        self.assertEqual(p1info['stop'], 11)
        self.assertEqual(p1info['state'], ProcessStates.RUNNING)
        self.assertEqual(p1info['statename'], 'RUNNING')
        self.assertEqual(p1info['exitstatus'], 0)
        self.assertEqual(p1info['spawnerr'], '')
        self.assertEqual(p1info['group'], 'gname')
        self.assertTrue(p1info['description'].startswith('pid 111'))

        p2info = info[1]
        process2 = supervisord.process_groups['gname'].processes['process2']
        self.assertEqual(p2info['logfile'], '/tmp/process2.log')
        self.assertEqual(p2info['stdout_logfile'], '/tmp/process2.log')
        self.assertEqual(p2info['stderr_logfile'], '')
        self.assertEqual(p2info['name'], 'process2')
        self.assertEqual(p2info['pid'], 0)
        self.assertEqual(p2info['start'], process2.laststart)
        self.assertEqual(p2info['stop'], 11)
        self.assertEqual(p2info['state'], ProcessStates.STOPPED)
        self.assertEqual(p2info['statename'], 'STOPPED')
        self.assertEqual(p2info['exitstatus'], 0)
        self.assertEqual(p2info['spawnerr'], '')
        self.assertEqual(p2info['group'], 'gname')
        from datetime import datetime
        starttime = datetime(*time.localtime(process2.laststart)[:7])
        self.assertEqual(p2info['description'],
                         starttime.strftime(_TIMEFORMAT))

    def test_readProcessStdoutLog_unreadable(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process1', '/bin/process1',
                               priority=1,
                               stdout_logfile='/tmp/process1.log')
        supervisord = PopulatedDummySupervisor(options, 'process1', pconfig)
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.NO_FILE,
                             interface.readProcessStdoutLog,
                             'process1', offset=0, length=1)

    def test_readProcessStdoutLog_badargs(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process1', '/bin/process1',
                               priority=1,
                               stdout_logfile='/tmp/process1.log')
        supervisord = PopulatedDummySupervisor(options, 'process1', pconfig)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['process1'].processes['process1']
        logfile = process.config.stdout_logfile
        try:
            with open(logfile, 'w+') as f:
                f.write('x' * 2048)
            self._assertRPCError(xmlrpc.Faults.BAD_ARGUMENTS,
                                 interface.readProcessStdoutLog,
                                 'process1', offset=-1, length=1)
            self._assertRPCError(xmlrpc.Faults.BAD_ARGUMENTS,
                                 interface.readProcessStdoutLog,
                                 'process1', offset=-1, length=-1)
        finally:
            os.remove(logfile)

    def test_readProcessStdoutLog_bad_name_no_process(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process1', '/bin/process1',
                               priority=1,
                               stdout_logfile='/tmp/process1.log')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.readProcessStdoutLog,
                             'foo:*', offset=0, length=1)

    def test_readProcessStdoutLog(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',
                               stdout_logfile='/tmp/fooooooo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['foo'].processes['foo']
        logfile = process.config.stdout_logfile
        try:
            with open(logfile, 'w+') as f:
                f.write('x' * 2048)
                f.write('y' * 2048)
            data = interface.readProcessStdoutLog('foo', offset=0, length=0)
            self.assertEqual(interface.update_text, 'readProcessStdoutLog')
            self.assertEqual(data, ('x' * 2048) + ('y' * 2048))
            data = interface.readProcessStdoutLog('foo', offset=2048,
                                                  length=0)
            self.assertEqual(data, 'y' * 2048)
            data = interface.readProcessStdoutLog('foo', offset=0,
                                                  length=2048)
            self.assertEqual(data, 'x' * 2048)
            data = interface.readProcessStdoutLog('foo', offset=-4, length=0)
            self.assertEqual(data, 'y' * 4)
        finally:
            os.remove(logfile)

    def test_readProcessLogAliasedTo_readProcessStdoutLog(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        self.assertEqual(interface.readProcessLog,
                         interface.readProcessStdoutLog)

    def test_readProcessStderrLog_unreadable(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process1', '/bin/process1',
                               priority=1,
                               stderr_logfile='/tmp/process1.log')
        supervisord = PopulatedDummySupervisor(options, 'process1', pconfig)
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.NO_FILE,
                             interface.readProcessStderrLog,
                             'process1', offset=0, length=1)

    def test_readProcessStderrLog_badargs(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process1', '/bin/process1',
                               priority=1,
                               stderr_logfile='/tmp/process1.log')
        supervisord = PopulatedDummySupervisor(options, 'process1', pconfig)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['process1'].processes['process1']
        logfile = process.config.stderr_logfile
        try:
            with open(logfile, 'w+') as f:
                f.write('x' * 2048)
            self._assertRPCError(xmlrpc.Faults.BAD_ARGUMENTS,
                                 interface.readProcessStderrLog,
                                 'process1', offset=-1, length=1)
            self._assertRPCError(xmlrpc.Faults.BAD_ARGUMENTS,
                                 interface.readProcessStderrLog,
                                 'process1', offset=-1, length=-1)
        finally:
            os.remove(logfile)

    def test_readProcessStderrLog_bad_name_no_process(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process1', '/bin/process1',
                               priority=1,
                               stdout_logfile='/tmp/process1.log')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.readProcessStderrLog,
                             'foo:*', offset=0, length=1)

    def test_readProcessStderrLog(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',
                               stderr_logfile='/tmp/fooooooo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['foo'].processes['foo']
        logfile = process.config.stderr_logfile
        try:
            with open(logfile, 'w+') as f:
                f.write('x' * 2048)
                f.write('y' * 2048)
            data = interface.readProcessStderrLog('foo', offset=0, length=0)
            self.assertEqual(interface.update_text, 'readProcessStderrLog')
            self.assertEqual(data, ('x' * 2048) + ('y' * 2048))
            data = interface.readProcessStderrLog('foo', offset=2048,
                                                  length=0)
            self.assertEqual(data, 'y' * 2048)
            data = interface.readProcessStderrLog('foo', offset=0,
                                                  length=2048)
            self.assertEqual(data, 'x' * 2048)
            data = interface.readProcessStderrLog('foo', offset=-4, length=0)
            self.assertEqual(data, 'y' * 4)
        finally:
            os.remove(logfile)

    def test_tailProcessStdoutLog_bad_name(self):
        from supervisor import xmlrpc
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.tailProcessStdoutLog,
                             'BAD_NAME', 0, 10)

    def test_tailProcessStdoutLog_bad_name_no_process(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process1', '/bin/process1',
                               priority=1,
                               stdout_logfile='/tmp/process1.log')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.tailProcessStdoutLog,
                             'foo:*', 0, 10)

    def test_tailProcessStdoutLog_all(self):
        # test entire log is returned when offset==0 and logsize < length
        from supervisor.compat import letters
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',
                               stdout_logfile='/tmp/fooooooo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['foo'].processes['foo']
        logfile = process.config.stdout_logfile
        try:
            with open(logfile, 'w+') as f:
                f.write(letters)
            data, offset, overflow = interface.tailProcessStdoutLog(
                'foo', offset=0, length=len(letters))
            self.assertEqual(interface.update_text, 'tailProcessStdoutLog')
            self.assertEqual(overflow, False)
            self.assertEqual(offset, len(letters))
            self.assertEqual(data, letters)
        finally:
            os.remove(logfile)

    def test_tailProcessStdoutLog_none(self):
        # test nothing is returned when offset <= logsize
        from supervisor.compat import letters
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',
                               stdout_logfile='/tmp/fooooooo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['foo'].processes['foo']
        logfile = process.config.stdout_logfile
        try:
            with open(logfile, 'w+') as f:
                f.write(letters)
            # offset==logsize
            data, offset, overflow = interface.tailProcessStdoutLog(
                'foo', offset=len(letters), length=100)
            self.assertEqual(interface.update_text, 'tailProcessStdoutLog')
            self.assertEqual(overflow, False)
            self.assertEqual(offset, len(letters))
            self.assertEqual(data, '')
            # offset > logsize
            data, offset, overflow = interface.tailProcessStdoutLog(
                'foo', offset=len(letters) + 5, length=100)
            self.assertEqual(interface.update_text, 'tailProcessStdoutLog')
            self.assertEqual(overflow, False)
            self.assertEqual(offset, len(letters))
            self.assertEqual(data, '')
        finally:
            os.remove(logfile)

    def test_tailProcessStdoutLog_overflow(self):
        # test buffer overflow occurs when logsize > offset+length
        from supervisor.compat import letters
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',
                               stdout_logfile='/tmp/fooooooo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['foo'].processes['foo']
        logfile = process.config.stdout_logfile
        try:
            with open(logfile, 'w+') as f:
                f.write(letters)
            data, offset, overflow = interface.tailProcessStdoutLog(
                'foo', offset=0, length=5)
            self.assertEqual(interface.update_text, 'tailProcessStdoutLog')
            self.assertEqual(overflow, True)
            self.assertEqual(offset, len(letters))
            self.assertEqual(data, letters[-5:])
        finally:
            os.remove(logfile)

    def test_tailProcessStdoutLog_unreadable(self):
        # test nothing is returned if the log doesn't exist yet
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',
                               stdout_logfile='/tmp/fooooooo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        data, offset, overflow = interface.tailProcessStdoutLog(
            'foo', offset=0, length=100)
        self.assertEqual(interface.update_text, 'tailProcessStdoutLog')
        self.assertEqual(overflow, False)
        self.assertEqual(offset, 0)
        self.assertEqual(data, '')

    def test_tailProcessLogAliasedTo_tailProcessStdoutLog(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        self.assertEqual(interface.tailProcessLog,
                         interface.tailProcessStdoutLog)

    def test_tailProcessStderrLog_bad_name(self):
        from supervisor import xmlrpc
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.tailProcessStderrLog,
                             'BAD_NAME', 0, 10)

    def test_tailProcessStderrLog_bad_name_no_process(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'process1', '/bin/process1',
                               priority=1,
                               stdout_logfile='/tmp/process1.log')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.tailProcessStderrLog,
                             'foo:*', 0, 10)

    def test_tailProcessStderrLog_all(self):
        # test entire log is returned when offset==0 and logsize < length
        from supervisor.compat import letters
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',
                               stderr_logfile='/tmp/fooooooo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['foo'].processes['foo']
        logfile = process.config.stderr_logfile
        try:
            with open(logfile, 'w+') as f:
                f.write(letters)
            data, offset, overflow = interface.tailProcessStderrLog(
                'foo', offset=0, length=len(letters))
            self.assertEqual(interface.update_text, 'tailProcessStderrLog')
            self.assertEqual(overflow, False)
            self.assertEqual(offset, len(letters))
            self.assertEqual(data, letters)
        finally:
            os.remove(logfile)

    def test_tailProcessStderrLog_none(self):
        # test nothing is returned when offset <= logsize
        from supervisor.compat import letters
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',
                               stderr_logfile='/tmp/fooooooo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['foo'].processes['foo']
        logfile = process.config.stderr_logfile
        try:
            with open(logfile, 'w+') as f:
                f.write(letters)
            # offset==logsize
            data, offset, overflow = interface.tailProcessStderrLog(
                'foo', offset=len(letters), length=100)
            self.assertEqual(interface.update_text, 'tailProcessStderrLog')
            self.assertEqual(overflow, False)
            self.assertEqual(offset, len(letters))
            self.assertEqual(data, '')
            # offset > logsize
            data, offset, overflow = interface.tailProcessStderrLog(
                'foo', offset=len(letters) + 5, length=100)
            self.assertEqual(interface.update_text, 'tailProcessStderrLog')
            self.assertEqual(overflow, False)
            self.assertEqual(offset, len(letters))
            self.assertEqual(data, '')
        finally:
            os.remove(logfile)

    def test_tailProcessStderrLog_overflow(self):
        # test buffer overflow occurs when logsize > offset+length
        from supervisor.compat import letters
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',
                               stderr_logfile='/tmp/fooooooo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        process = supervisord.process_groups['foo'].processes['foo']
        logfile = process.config.stderr_logfile
        try:
            with open(logfile, 'w+') as f:
                f.write(letters)
            data, offset, overflow = interface.tailProcessStderrLog(
                'foo', offset=0, length=5)
            self.assertEqual(interface.update_text, 'tailProcessStderrLog')
            self.assertEqual(overflow, True)
            self.assertEqual(offset, len(letters))
            self.assertEqual(data, letters[-5:])
        finally:
            os.remove(logfile)

    def test_tailProcessStderrLog_unreadable(self):
        # test nothing is returned if the log doesn't exist yet
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',
                               stderr_logfile='/tmp/fooooooo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        data, offset, overflow = interface.tailProcessStderrLog(
            'foo', offset=0, length=100)
        self.assertEqual(interface.update_text, 'tailProcessStderrLog')
        self.assertEqual(overflow, False)
        self.assertEqual(offset, 0)
        self.assertEqual(data, '')

    def test_clearProcessLogs_bad_name_no_group(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', 'foo')
        process = DummyProcess(pconfig)
        pgroup = DummyProcessGroup(None)
        pgroup.processes = {'foo': process}
        supervisord = DummySupervisor(process_groups={'foo': pgroup})
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.clearProcessLogs, 'badgroup')

    def test_clearProcessLogs_bad_name_no_process(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', 'foo')
        process = DummyProcess(pconfig)
        pgroup = DummyProcessGroup(None)
        pgroup.processes = {'foo': process}
        supervisord = DummySupervisor(process_groups={'foo': pgroup})
        interface = self._makeOne(supervisord)
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.clearProcessLogs, 'foo:*')

    def test_clearProcessLogs(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', 'foo')
        process = DummyProcess(pconfig)
        pgroup = DummyProcessGroup(None)
        pgroup.processes = {'foo': process}
        supervisord = DummySupervisor(process_groups={'foo': pgroup})
        interface = self._makeOne(supervisord)
        interface.clearProcessLogs('foo')
        self.assertEqual(process.logsremoved, True)

    def test_clearProcessLogs_failed(self):
        from supervisor import xmlrpc
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', 'foo')
        process = DummyProcess(pconfig)
        pgroup = DummyProcessGroup(None)
        pgroup.processes = {'foo': process}
        process.error_at_clear = True
        supervisord = DummySupervisor(process_groups={'foo': pgroup})
        interface = self._makeOne(supervisord)
        self.assertRaises(xmlrpc.RPCError,
                          interface.clearProcessLogs, 'foo')

    def test_clearProcessLogAliasedTo_clearProcessLogs(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig)
        interface = self._makeOne(supervisord)
        self.assertEqual(interface.clearProcessLog,
                         interface.clearProcessLogs)

    def test_clearAllProcessLogs(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', 'foo', priority=1)
        pconfig2 = DummyPConfig(options, 'process2', 'bar', priority=2)
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        interface = self._makeOne(supervisord)
        callback = interface.clearAllProcessLogs()
        callback()
        results = callback()
        from supervisor import xmlrpc
        self.assertEqual(results[0],
                         {'name': 'process1',
                          'group': 'foo',
                          'status': xmlrpc.Faults.SUCCESS,
                          'description': 'OK'})
        self.assertEqual(results[1],
                         {'name': 'process2',
                          'group': 'foo',
                          'status': xmlrpc.Faults.SUCCESS,
                          'description': 'OK'})
        process1 = supervisord.process_groups['foo'].processes['process1']
        self.assertEqual(process1.logsremoved, True)
        process2 = supervisord.process_groups['foo'].processes['process2']
        self.assertEqual(process2.logsremoved, True)

    def test_clearAllProcessLogs_onefails(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', 'foo', priority=1)
        pconfig2 = DummyPConfig(options, 'process2', 'bar', priority=2)
        supervisord = PopulatedDummySupervisor(options, 'foo',
                                               pconfig1, pconfig2)
        supervisord.set_procattr('process1', 'error_at_clear', True)
        interface = self._makeOne(supervisord)
        callback = interface.clearAllProcessLogs()
        callback()
        results = callback()
        process1 = supervisord.process_groups['foo'].processes['process1']
        self.assertEqual(process1.logsremoved, False)
        process2 = supervisord.process_groups['foo'].processes['process2']
        self.assertEqual(process2.logsremoved, True)
        self.assertEqual(len(results), 2)
        from supervisor import xmlrpc
        self.assertEqual(results[0],
                         {'name': 'process1',
                          'group': 'foo',
                          'status': xmlrpc.Faults.FAILED,
                          'description': 'FAILED: foo:process1'})
        self.assertEqual(results[1],
                         {'name': 'process2',
                          'group': 'foo',
                          'status': xmlrpc.Faults.SUCCESS,
                          'description': 'OK'})

    def test_clearAllProcessLogs_no_processes(self):
        supervisord = DummySupervisor()
        self.assertEqual(supervisord.process_groups, {})
        interface = self._makeOne(supervisord)
        callback = interface.clearAllProcessLogs()
        results = callback()
        self.assertEqual(results, [])

    def test_sendProcessStdin_raises_incorrect_params_when_not_chars(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', 'foo')
        supervisord = PopulatedDummySupervisor(options, 'foo', pconfig1)
        interface = self._makeOne(supervisord)
        thing_not_chars = 42
        from supervisor import xmlrpc
        self._assertRPCError(xmlrpc.Faults.INCORRECT_PARAMETERS,
                             interface.sendProcessStdin,
                             'process1', thing_not_chars)

    def test_sendProcessStdin_raises_bad_name_when_bad_process(self):
        supervisord = DummySupervisor()
        interface = self._makeOne(supervisord)
        from supervisor import xmlrpc
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.sendProcessStdin,
                             'nonexistent', 'chars for stdin')

    def test_sendProcessStdin_raises_bad_name_when_no_process(self):
        options = DummyOptions()
        supervisord = PopulatedDummySupervisor(options, 'foo')
        interface = self._makeOne(supervisord)
        from supervisor import xmlrpc
        self._assertRPCError(xmlrpc.Faults.BAD_NAME,
                             interface.sendProcessStdin,
                             'foo:*', 'chars for stdin')

    def test_sendProcessStdin_raises_not_running_when_not_process_pid(self):
        options = DummyOptions()
        pconfig1 = DummyPConfig(options, 'process1', 'foo')
        supervisord = PopulatedDummySupervisor(options,
'process1', pconfig1) supervisord.set_procattr('process1', 'pid', 0) interface = self._makeOne(supervisord) from supervisor import xmlrpc self._assertRPCError(xmlrpc.Faults.NOT_RUNNING, interface.sendProcessStdin, 'process1', 'chars for stdin') def test_sendProcessStdin_raises_not_running_when_killing(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'foo') supervisord = PopulatedDummySupervisor(options, 'process1', pconfig1) supervisord.set_procattr('process1', 'pid', 42) supervisord.set_procattr('process1', 'killing', True) interface = self._makeOne(supervisord) from supervisor import xmlrpc self._assertRPCError(xmlrpc.Faults.NOT_RUNNING, interface.sendProcessStdin, 'process1', 'chars for stdin') def test_sendProcessStdin_raises_no_file_when_write_raises_epipe(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'foo') supervisord = PopulatedDummySupervisor(options, 'process1', pconfig1) supervisord.set_procattr('process1', 'pid', 42) supervisord.set_procattr('process1', 'killing', False) supervisord.set_procattr('process1', 'write_exception', OSError(errno.EPIPE, os.strerror(errno.EPIPE))) interface = self._makeOne(supervisord) from supervisor import xmlrpc self._assertRPCError(xmlrpc.Faults.NO_FILE, interface.sendProcessStdin, 'process1', 'chars for stdin') def test_sendProcessStdin_reraises_other_oserrors(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'foo') supervisord = PopulatedDummySupervisor(options, 'process1', pconfig1) supervisord.set_procattr('process1', 'pid', 42) supervisord.set_procattr('process1', 'killing', False) supervisord.set_procattr('process1', 'write_exception', OSError(errno.EINTR, os.strerror(errno.EINTR))) interface = self._makeOne(supervisord) self.assertRaises(OSError, interface.sendProcessStdin, 'process1', 'chars for stdin') def test_sendProcessStdin_writes_chars_and_returns_true(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 
'process1', 'foo') supervisord = PopulatedDummySupervisor(options, 'process1', pconfig1) supervisord.set_procattr('process1', 'pid', 42) interface = self._makeOne(supervisord) chars = b'chars for stdin' self.assertTrue(interface.sendProcessStdin('process1', chars)) self.assertEqual('sendProcessStdin', interface.update_text) process1 = supervisord.process_groups['process1'].processes['process1'] self.assertEqual(process1.stdin_buffer, chars) def test_sendProcessStdin_unicode_encoded_to_utf8(self): options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'foo') supervisord = PopulatedDummySupervisor(options, 'process1', pconfig1) supervisord.set_procattr('process1', 'pid', 42) interface = self._makeOne(supervisord) interface.sendProcessStdin('process1', as_string(b'fi\xc3\xad')) process1 = supervisord.process_groups['process1'].processes['process1'] self.assertEqual(process1.stdin_buffer, b'fi\xc3\xad') def test_sendRemoteCommEvent_notifies_subscribers(self): options = DummyOptions() supervisord = DummySupervisor(options) interface = self._makeOne(supervisord) from supervisor import events L = [] def callback(event): L.append(event) try: events.callbacks[:] = [(events.RemoteCommunicationEvent, callback)] result = interface.sendRemoteCommEvent('foo', 'bar') finally: events.callbacks[:] = [] events.clear() self.assertTrue(result) self.assertEqual(len(L), 1) event = L[0] self.assertEqual(event.type, 'foo') self.assertEqual(event.data, 'bar') def test_sendRemoteCommEvent_unicode_encoded_to_utf8(self): options = DummyOptions() supervisord = DummySupervisor(options) interface = self._makeOne(supervisord) from supervisor import events L = [] def callback(event): L.append(event) try: events.callbacks[:] = [(events.RemoteCommunicationEvent, callback)] result = interface.sendRemoteCommEvent( as_string('fií once'), as_string('fií twice'), ) finally: events.callbacks[:] = [] events.clear() self.assertTrue(result) self.assertEqual(len(L), 1) event = L[0] if PY2: 
self.assertEqual(event.type, 'fi\xc3\xad once') self.assertEqual(event.data, 'fi\xc3\xad twice') else: self.assertEqual(event.type, 'fií once') self.assertEqual(event.data, 'fií twice') class SystemNamespaceXMLRPCInterfaceTests(TestBase): def _getTargetClass(self): from supervisor import xmlrpc return xmlrpc.SystemNamespaceRPCInterface def _makeOne(self): from supervisor import rpcinterface supervisord = DummySupervisor() supervisor = rpcinterface.SupervisorNamespaceRPCInterface(supervisord) return self._getTargetClass()( [('supervisor', supervisor), ] ) def test_ctor(self): interface = self._makeOne() self.assertTrue(interface.namespaces['supervisor']) self.assertTrue(interface.namespaces['system']) def test_listMethods(self): interface = self._makeOne() methods = interface.listMethods() methods.sort() keys = list(interface._listMethods().keys()) keys.sort() self.assertEqual(methods, keys) def test_methodSignature(self): from supervisor import xmlrpc interface = self._makeOne() self._assertRPCError(xmlrpc.Faults.SIGNATURE_UNSUPPORTED, interface.methodSignature, ['foo.bar']) result = interface.methodSignature('system.methodSignature') self.assertEqual(result, ['array', 'string']) def test_allMethodDocs(self): from supervisor import xmlrpc # belt-and-suspenders test for docstring-as-typing parsing correctness # and documentation validity vs. implementation _RPCTYPES = ['int', 'double', 'string', 'boolean', 'dateTime.iso8601', 'base64', 'binary', 'array', 'struct'] interface = self._makeOne() methods = interface._listMethods() for k in methods.keys(): # if a method doesn't have a @return value, an RPCError is raised. # Detect that here. 
try: interface.methodSignature(k) except xmlrpc.RPCError: raise AssertionError('methodSignature for %s raises ' 'RPCError (missing @return doc?)' % k) # we want to test that the number of arguments implemented in # the function is the same as the number of arguments implied by # the doc @params, and that they show up in the same order. ns_name, method_name = k.split('.', 1) namespace = interface.namespaces[ns_name] meth = getattr(namespace, method_name) try: code = meth.func_code except Exception: code = meth.__code__ argnames = code.co_varnames[1:code.co_argcount] parsed = xmlrpc.gettags(str(meth.__doc__)) plines = [] ptypes = [] pnames = [] ptexts = [] rlines = [] rtypes = [] rnames = [] rtexts = [] for thing in parsed: if thing[1] == 'param': # tag name plines.append(thing[0]) # doc line number ptypes.append(thing[2]) # data type pnames.append(thing[3]) # function name ptexts.append(thing[4]) # description elif thing[1] == 'return': # tag name rlines.append(thing[0]) # doc line number rtypes.append(thing[2]) # data type rnames.append(thing[3]) # function name rtexts.append(thing[4]) # description elif thing[1] is not None: raise AssertionError( 'unknown tag type %s for %s, parsed %s' % (thing[1], k, parsed)) # param tokens if len(argnames) != len(pnames): raise AssertionError('Incorrect documentation ' '(%s args, %s doc params) in %s' % (len(argnames), len(pnames), k)) for docline in plines: self.assertTrue(type(docline) == int, (docline, type(docline), k, parsed)) for doctype in ptypes: self.assertTrue(doctype in _RPCTYPES, doctype) for x in range(len(pnames)): if pnames[x] != argnames[x]: msg = 'Name wrong: (%s vs. 
%s in %s)\n%s' % (pnames[x], argnames[x], k, parsed) raise AssertionError(msg) for doctext in ptexts: self.assertTrue(type(doctext) == type(''), doctext) # result tokens if len(rlines) > 1: raise AssertionError( 'Duplicate @return values in docs for %s' % k) for docline in rlines: self.assertTrue(type(docline) == int, (docline, type(docline), k)) for doctype in rtypes: self.assertTrue(doctype in _RPCTYPES, doctype) for docname in rnames: self.assertTrue(type(docname) == type(''), (docname, type(docname), k)) for doctext in rtexts: self.assertTrue(type(doctext) == type(''), (doctext, type(doctext), k)) def test_multicall_simplevals(self): interface = self._makeOne() results = interface.multicall([ {'methodName':'system.methodHelp', 'params':['system.methodHelp']}, {'methodName':'system.listMethods', 'params':[]}, ]) self.assertEqual(results[0], interface.methodHelp('system.methodHelp')) self.assertEqual(results[1], interface.listMethods()) def test_multicall_recursion_guard(self): from supervisor import xmlrpc interface = self._makeOne() results = interface.multicall([ {'methodName': 'system.multicall', 'params': []}, ]) e = xmlrpc.RPCError(xmlrpc.Faults.INCORRECT_PARAMETERS, 'Recursive system.multicall forbidden') recursion_fault = {'faultCode': e.code, 'faultString': e.text} self.assertEqual(results, [recursion_fault]) def test_multicall_nested_callback(self): from supervisor import http interface = self._makeOne() callback = interface.multicall([ {'methodName':'supervisor.stopAllProcesses'}]) results = http.NOT_DONE_YET while results is http.NOT_DONE_YET: results = callback() self.assertEqual(results[0], []) def test_methodHelp(self): from supervisor import xmlrpc interface = self._makeOne() self._assertRPCError(xmlrpc.Faults.SIGNATURE_UNSUPPORTED, interface.methodHelp, ['foo.bar']) help = interface.methodHelp('system.methodHelp') self.assertEqual(help, interface.methodHelp.__doc__) class Test_make_allfunc(unittest.TestCase): def _callFUT(self, processes, 
predicate, func, **extra_kwargs): from supervisor.rpcinterface import make_allfunc return make_allfunc(processes, predicate, func, **extra_kwargs) def test_rpcerror_nocallbacks(self): from supervisor import xmlrpc def cb(name, **kw): raise xmlrpc.RPCError(xmlrpc.Faults.FAILED) options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'foo') proc = DummyProcess(pconfig1) group = DummyProcessGroup(pconfig1) def pred(proc): return True af = self._callFUT([(group, proc)], pred, cb) result = af() self.assertEqual(result, [{'description': 'FAILED', 'group': 'process1', 'name': 'process1', 'status': xmlrpc.Faults.FAILED}]) def test_func_callback_normal_return_val(self): def cb(name, **kw): return lambda: 1 options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'foo') proc = DummyProcess(pconfig1) group = DummyProcessGroup(pconfig1) def pred(proc): return True af = self._callFUT([(group, proc)], pred, cb) result = af() self.assertEqual( result, [{'group': 'process1', 'description': 'OK', 'status': 80, 'name': 'process1'}] ) def test_func_callback_raises_RPCError(self): from supervisor import xmlrpc def cb(name, **kw): def inner(): raise xmlrpc.RPCError(xmlrpc.Faults.FAILED) return inner options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'foo') proc = DummyProcess(pconfig1) group = DummyProcessGroup(pconfig1) def pred(proc): return True af = self._callFUT([(group, proc)], pred, cb) result = af() self.assertEqual( result, [{'description': 'FAILED', 'group': 'process1', 'status': 30, 'name': 'process1'}] ) def test_func_callback_returns_NOT_DONE_YET(self): from supervisor.http import NOT_DONE_YET def cb(name, **kw): def inner(): return NOT_DONE_YET return inner options = DummyOptions() pconfig1 = DummyPConfig(options, 'process1', 'foo') proc = DummyProcess(pconfig1) group = DummyProcessGroup(pconfig1) def pred(proc): return True af = self._callFUT([(group, proc)], pred, cb) result = af() self.assertEqual( result, NOT_DONE_YET, ) 
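The multicall and make_allfunc tests above poll a deferred callback in a loop until it stops returning ``NOT_DONE_YET``. A minimal sketch of that convention follows; the sentinel and the ``make_deferred`` helper are illustrative stand-ins, not supervisor's implementation:

```python
# Illustrative stand-in for supervisor.http.NOT_DONE_YET: a unique
# sentinel that deferred XML-RPC callbacks return until a real result
# is available; callers poll the callback repeatedly.
NOT_DONE_YET = object()

def make_deferred(polls_needed):
    # Hypothetical helper: returns a callback that is "not done" until
    # it has been polled polls_needed times, then yields its result.
    state = {'polls': 0}
    def callback():
        state['polls'] += 1
        if state['polls'] < polls_needed:
            return NOT_DONE_YET
        return 'done after %d polls' % state['polls']
    return callback

callback = make_deferred(3)
result = NOT_DONE_YET
while result is NOT_DONE_YET:  # same loop shape as test_multicall_nested_callback
    result = callback()
```

This is why the tests call ``callback()`` more than once: the first invocations may only advance internal state.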
class Test_make_main_rpcinterface(unittest.TestCase):
    def _callFUT(self, supervisord):
        from supervisor.rpcinterface import make_main_rpcinterface
        return make_main_rpcinterface(supervisord)

    def test_it(self):
        inst = self._callFUT(None)
        self.assertEqual(
            inst.__class__.__name__, 'SupervisorNamespaceRPCInterface'
            )

class DummyRPCInterface:
    def hello(self):
        return 'Hello!'

def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

# supervisor-4.2.5/supervisor/tests/test_socket_manager.py

"""Test suite for supervisor.socket_manager"""

import gc
import sys
import os
import unittest
import socket
import tempfile

try:
    import __pypy__
except ImportError:
    __pypy__ = None

from supervisor.tests.base import DummySocketConfig
from supervisor.tests.base import DummyLogger
from supervisor.datatypes import UnixStreamSocketConfig
from supervisor.datatypes import InetStreamSocketConfig

class Subject:
    def __init__(self):
        self.value = 5

    def getValue(self):
        return self.value

    def setValue(self, val):
        self.value = val

class ProxyTest(unittest.TestCase):
    def setUp(self):
        self.on_deleteCalled = False

    def _getTargetClass(self):
        from supervisor.socket_manager import Proxy
        return Proxy

    def _makeOne(self, *args, **kw):
        return self._getTargetClass()(*args, **kw)

    def setOnDeleteCalled(self):
        self.on_deleteCalled = True

    def test_proxy_getattr(self):
        proxy = self._makeOne(Subject())
        self.assertEqual(5, proxy.getValue())

    def test_on_delete(self):
        proxy = self._makeOne(Subject(), on_delete=self.setOnDeleteCalled)
        self.assertEqual(5, proxy.getValue())
        proxy = None
        gc_collect()
        self.assertTrue(self.on_deleteCalled)

class ReferenceCounterTest(unittest.TestCase):
    def setUp(self):
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running =
False def _getTargetClass(self): from supervisor.socket_manager import ReferenceCounter return ReferenceCounter def _makeOne(self, *args, **kw): return self._getTargetClass()(*args, **kw) def test_incr_and_decr(self): ctr = self._makeOne(on_zero=self.stop,on_non_zero=self.start) self.assertFalse(self.running) ctr.increment() self.assertTrue(self.running) self.assertEqual(1, ctr.get_count()) ctr.increment() self.assertTrue(self.running) self.assertEqual(2, ctr.get_count()) ctr.decrement() self.assertTrue(self.running) self.assertEqual(1, ctr.get_count()) ctr.decrement() self.assertFalse(self.running) self.assertEqual(0, ctr.get_count()) def test_decr_at_zero_raises_error(self): ctr = self._makeOne(on_zero=self.stop,on_non_zero=self.start) self.assertRaises(Exception, ctr.decrement) class SocketManagerTest(unittest.TestCase): def tearDown(self): gc_collect() def _getTargetClass(self): from supervisor.socket_manager import SocketManager return SocketManager def _makeOne(self, *args, **kw): return self._getTargetClass()(*args, **kw) def test_repr(self): conf = DummySocketConfig(2) sock_manager = self._makeOne(conf) expected = "<%s at %s for %s>" % ( sock_manager.__class__, id(sock_manager), conf.url) self.assertEqual(repr(sock_manager), expected) def test_get_config(self): conf = DummySocketConfig(2) sock_manager = self._makeOne(conf) self.assertEqual(conf, sock_manager.config()) def test_tcp_w_hostname(self): conf = InetStreamSocketConfig('localhost', 51041) sock_manager = self._makeOne(conf) self.assertEqual(sock_manager.socket_config, conf) sock = sock_manager.get_socket() self.assertEqual(sock.getsockname(), ('127.0.0.1', 51041)) def test_tcp_w_ip(self): conf = InetStreamSocketConfig('127.0.0.1', 51041) sock_manager = self._makeOne(conf) self.assertEqual(sock_manager.socket_config, conf) sock = sock_manager.get_socket() self.assertEqual(sock.getsockname(), ('127.0.0.1', 51041)) def test_unix(self): (tf_fd, tf_name) = tempfile.mkstemp() conf = 
UnixStreamSocketConfig(tf_name) sock_manager = self._makeOne(conf) self.assertEqual(sock_manager.socket_config, conf) sock = sock_manager.get_socket() self.assertEqual(sock.getsockname(), tf_name) sock = None os.close(tf_fd) def test_socket_lifecycle(self): conf = DummySocketConfig(2) sock_manager = self._makeOne(conf) # Assert that sockets are created on demand self.assertFalse(sock_manager.is_prepared()) # Get two socket references sock = sock_manager.get_socket() self.assertTrue(sock_manager.is_prepared()) #socket created on demand sock_id = id(sock._get()) sock2 = sock_manager.get_socket() sock2_id = id(sock2._get()) # Assert that they are not the same proxy object self.assertNotEqual(sock, sock2) # Assert that they are the same underlying socket self.assertEqual(sock_id, sock2_id) # Socket not actually closed yet b/c ref ct is 2 self.assertEqual(2, sock_manager.get_socket_ref_count()) self.assertTrue(sock_manager.is_prepared()) self.assertFalse(sock_manager.socket.close_called) sock = None gc_collect() # Socket not actually closed yet b/c ref ct is 1 self.assertTrue(sock_manager.is_prepared()) self.assertFalse(sock_manager.socket.close_called) sock2 = None gc_collect() # Socket closed self.assertFalse(sock_manager.is_prepared()) self.assertTrue(sock_manager.socket.close_called) # Get a new socket reference sock3 = sock_manager.get_socket() self.assertTrue(sock_manager.is_prepared()) sock3_id = id(sock3._get()) # Assert that it is not the same socket self.assertNotEqual(sock_id, sock3_id) # Drop ref ct to zero del sock3 gc_collect() # Now assert that socket is closed self.assertFalse(sock_manager.is_prepared()) self.assertTrue(sock_manager.socket.close_called) def test_logging(self): conf = DummySocketConfig(1) logger = DummyLogger() sock_manager = self._makeOne(conf, logger=logger) # socket open sock = sock_manager.get_socket() self.assertEqual(len(logger.data), 1) self.assertEqual('Creating socket %s' % repr(conf), logger.data[0]) # socket close del sock 
        gc_collect()
        self.assertEqual(len(logger.data), 2)
        self.assertEqual('Closing socket %s' % repr(conf), logger.data[1])

    def test_prepare_socket(self):
        conf = DummySocketConfig(1)
        sock_manager = self._makeOne(conf)
        sock = sock_manager.get_socket()
        self.assertTrue(sock_manager.is_prepared())
        self.assertFalse(sock.bind_called)
        self.assertTrue(sock.listen_called)
        self.assertFalse(sock.close_called)

    def test_prepare_socket_uses_configured_backlog(self):
        conf = DummySocketConfig(1, backlog=42)
        sock_manager = self._makeOne(conf)
        sock = sock_manager.get_socket()
        self.assertTrue(sock_manager.is_prepared())
        self.assertEqual(sock.listen_backlog, conf.get_backlog())

    def test_prepare_socket_uses_somaxconn_if_no_backlog_configured(self):
        conf = DummySocketConfig(1, backlog=None)
        sock_manager = self._makeOne(conf)
        sock = sock_manager.get_socket()
        self.assertTrue(sock_manager.is_prepared())
        self.assertEqual(sock.listen_backlog, socket.SOMAXCONN)

    def test_tcp_socket_already_taken(self):
        conf = InetStreamSocketConfig('127.0.0.1', 51041)
        sock_manager = self._makeOne(conf)
        sock = sock_manager.get_socket()
        sock_manager2 = self._makeOne(conf)
        self.assertRaises(socket.error, sock_manager2.get_socket)
        del sock

    def test_unix_bad_sock(self):
        conf = UnixStreamSocketConfig('/notthere/foo.sock')
        sock_manager = self._makeOne(conf)
        self.assertRaises(socket.error, sock_manager.get_socket)

    def test_close_requires_prepared_socket(self):
        conf = InetStreamSocketConfig('127.0.0.1', 51041)
        sock_manager = self._makeOne(conf)
        self.assertFalse(sock_manager.is_prepared())
        try:
            sock_manager._close()
            self.fail()
        except Exception as e:
            self.assertEqual(e.args[0], 'Socket has not been prepared')

def gc_collect():
    if __pypy__ is not None:
        gc.collect()
        gc.collect()
        gc.collect()

def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')
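The socket manager tests above all exercise one pattern: many proxy references share a single lazily created socket, and the socket is closed only when the last reference goes away. A compact sketch of that reference-counting idea, with hypothetical names (this is not supervisor's ``SocketManager``, which ties the decrement to proxy garbage collection via ``Proxy``/``on_delete`` rather than an explicit ``release()``):

```python
class RefCountedResource:
    # Hypothetical illustration of the pattern the tests exercise:
    # the resource is created on first acquire and closed when the
    # reference count returns to zero.
    def __init__(self, factory):
        self._factory = factory    # called to create the resource on demand
        self._resource = None
        self._count = 0

    def is_prepared(self):
        return self._resource is not None

    def acquire(self):
        if self._count == 0:
            self._resource = self._factory()  # created lazily
        self._count += 1
        return self._resource

    def release(self):
        if self._count == 0:
            raise Exception('release() without matching acquire()')
        self._count -= 1
        if self._count == 0:
            self._resource.close()            # closed at refcount zero
            self._resource = None

class FakeSocket:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

mgr = RefCountedResource(FakeSocket)
first = mgr.acquire()
second = mgr.acquire()   # same underlying resource, refcount now 2
mgr.release()            # still open: one reference remains
still_open = mgr.is_prepared()
mgr.release()            # last reference dropped: resource closed
```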
# supervisor-4.2.5/supervisor/tests/test_states.py

"""Test suite for supervisor.states"""

import sys
import unittest

from supervisor import states

class TopLevelProcessStateTests(unittest.TestCase):
    def test_module_has_process_states(self):
        self.assertTrue(hasattr(states, 'ProcessStates'))

    def test_stopped_states_do_not_overlap_with_running_states(self):
        for state in states.STOPPED_STATES:
            self.assertFalse(state in states.RUNNING_STATES)

    def test_running_states_do_not_overlap_with_stopped_states(self):
        for state in states.RUNNING_STATES:
            self.assertFalse(state in states.STOPPED_STATES)

    def test_getProcessStateDescription_returns_string_when_found(self):
        state = states.ProcessStates.STARTING
        self.assertEqual(states.getProcessStateDescription(state),
                         'STARTING')

    def test_getProcessStateDescription_returns_None_when_not_found(self):
        self.assertEqual(states.getProcessStateDescription(3.14159), None)

class TopLevelSupervisorStateTests(unittest.TestCase):
    def test_module_has_supervisor_states(self):
        self.assertTrue(hasattr(states, 'SupervisorStates'))

    def test_getSupervisorStateDescription_returns_string_when_found(self):
        state = states.SupervisorStates.RUNNING
        self.assertEqual(states.getSupervisorStateDescription(state),
                         'RUNNING')

    def test_getSupervisorStateDescription_returns_None_when_not_found(self):
        self.assertEqual(states.getSupervisorStateDescription(3.14159), None)

class TopLevelEventListenerStateTests(unittest.TestCase):
    def test_module_has_eventlistener_states(self):
        self.assertTrue(hasattr(states, 'EventListenerStates'))

    def test_getEventListenerStateDescription_returns_string_when_found(self):
        state = states.EventListenerStates.ACKNOWLEDGED
        self.assertEqual(states.getEventListenerStateDescription(state),
                         'ACKNOWLEDGED')

    def test_getEventListenerStateDescription_returns_None_when_not_found(self):
        self.assertEqual(states.getEventListenerStateDescription(3.14159), None)

def test_suite():
    return
unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

# supervisor-4.2.5/supervisor/tests/test_supervisorctl.py

import sys
import unittest

from supervisor import xmlrpc
from supervisor.compat import StringIO
from supervisor.compat import xmlrpclib
from supervisor.supervisorctl import LSBInitExitStatuses, LSBStatusExitStatuses
from supervisor.tests.base import DummyRPCServer

class fgthread_Tests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.supervisorctl import fgthread
        return fgthread

    def _makeOne(self, program, ctl):
        return self._getTargetClass()(program, ctl)

    def test_ctor(self):
        options = DummyClientOptions()
        ctl = DummyController(options)
        inst = self._makeOne(None, ctl)
        self.assertEqual(inst.killed, False)

    def test_globaltrace_call(self):
        options = DummyClientOptions()
        ctl = DummyController(options)
        inst = self._makeOne(None, ctl)
        result = inst.globaltrace(None, 'call', None)
        self.assertEqual(result, inst.localtrace)

    def test_globaltrace_noncall(self):
        options = DummyClientOptions()
        ctl = DummyController(options)
        inst = self._makeOne(None, ctl)
        result = inst.globaltrace(None, None, None)
        self.assertEqual(result, None)

    def test_localtrace_killed_whyline(self):
        options = DummyClientOptions()
        ctl = DummyController(options)
        inst = self._makeOne(None, ctl)
        inst.killed = True
        try:
            inst.localtrace(None, 'line', None)
        except SystemExit as e:
            self.assertEqual(e.code, None)
        else:
            self.fail("No exception thrown. Expected SystemExit")

    def test_localtrace_killed_not_whyline(self):
        options = DummyClientOptions()
        ctl = DummyController(options)
        inst = self._makeOne(None, ctl)
        inst.killed = True
        result = inst.localtrace(None, None, None)
        self.assertEqual(result, inst.localtrace)

    def test_kill(self):
        options = DummyClientOptions()
        ctl = DummyController(options)
        inst = self._makeOne(None, ctl)
        inst.killed = True
        class DummyCloseable(object):
            def close(self):
                self.closed = True
        inst.output_handler = DummyCloseable()
        inst.error_handler = DummyCloseable()
        inst.kill()
        self.assertTrue(inst.killed)
        self.assertTrue(inst.output_handler.closed)
        self.assertTrue(inst.error_handler.closed)

class ControllerTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.supervisorctl import Controller
        return Controller

    def _makeOne(self, options):
        return self._getTargetClass()(options)

    def test_ctor(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        self.assertEqual(controller.prompt, options.prompt + '> ')

    def test__upcheck(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        result = controller.upcheck()
        self.assertEqual(result, True)

    def test__upcheck_wrong_server_version(self):
        options = DummyClientOptions()
        options._server.supervisor.getVersion = lambda *x: '1.0'
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        result = controller.upcheck()
        self.assertEqual(result, False)
        self.assertEqual(controller.stdout.getvalue(),
                         'Sorry, this version of supervisorctl expects'
                         ' to talk to a server with API version 3.0, but'
                         ' the remote version is 1.0.\n')

    def test__upcheck_unknown_method(self):
        options = DummyClientOptions()
        from supervisor.xmlrpc import Faults
        def getVersion():
            raise xmlrpclib.Fault(Faults.UNKNOWN_METHOD, 'duh')
        options._server.supervisor.getVersion = getVersion
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        result = controller.upcheck()
        self.assertEqual(result, False)
        self.assertEqual(controller.stdout.getvalue(),
            'Sorry, supervisord responded but did not recognize'
            ' the supervisor namespace commands that'
            ' supervisorctl uses to control it. Please check'
            ' that the [rpcinterface:supervisor] section is'
            ' enabled in the configuration file'
            ' (see sample.conf).\n')

    def test__upcheck_reraises_other_xmlrpc_faults(self):
        options = DummyClientOptions()
        from supervisor.xmlrpc import Faults
        def f(*arg, **kw):
            raise xmlrpclib.Fault(Faults.FAILED, '')
        options._server.supervisor.getVersion = f
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        self.assertRaises(xmlrpclib.Fault, controller.upcheck)
        self.assertEqual(controller.exitstatus, LSBInitExitStatuses.GENERIC)

    def test__upcheck_catches_socket_error_ECONNREFUSED(self):
        options = DummyClientOptions()
        import socket
        import errno
        def raise_fault(*arg, **kw):
            raise socket.error(errno.ECONNREFUSED, 'nobody home')
        options._server.supervisor.getVersion = raise_fault
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        result = controller.upcheck()
        self.assertEqual(result, False)
        output = controller.stdout.getvalue()
        self.assertTrue('refused connection' in output)
        self.assertEqual(controller.exitstatus,
                         LSBInitExitStatuses.INSUFFICIENT_PRIVILEGES)

    def test__upcheck_catches_socket_error_ENOENT(self):
        options = DummyClientOptions()
        import socket
        import errno
        def raise_fault(*arg, **kw):
            raise socket.error(errno.ENOENT, 'nobody home')
        options._server.supervisor.getVersion = raise_fault
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        result = controller.upcheck()
        self.assertEqual(result, False)
        output = controller.stdout.getvalue()
        self.assertTrue('no such file' in output)
        self.assertEqual(controller.exitstatus, LSBInitExitStatuses.NOT_RUNNING)

    def test__upcheck_reraises_other_socket_faults(self):
        options = DummyClientOptions()
        import socket
        import errno
        def f(*arg, **kw):
            raise socket.error(errno.EBADF, '')
        options._server.supervisor.getVersion = f
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        self.assertRaises(socket.error, controller.upcheck)

    def test_onecmd(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        plugin = DummyPlugin()
        controller.options.plugins = (plugin,)
        result = controller.onecmd('help')
        self.assertEqual(result, None)
        self.assertEqual(plugin.helped, True)

    def test_onecmd_empty_does_not_repeat_previous_cmd(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        plugin = DummyPlugin()
        controller.options.plugins = (plugin,)
        plugin.helped = False
        controller.onecmd('help')
        self.assertTrue(plugin.helped)
        plugin.helped = False
        controller.onecmd('')
        self.assertFalse(plugin.helped)

    def test_onecmd_clears_completion_cache(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller._complete_info = {}
        controller.onecmd('help')
        self.assertEqual(controller._complete_info, None)

    def test_onecmd_bad_command_error(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.onecmd("badcmd")
        self.assertEqual(controller.stdout.getvalue(),
                         "*** Unknown syntax: badcmd\n")
        self.assertEqual(controller.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_complete_action_empty(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help']
        result = controller.complete('', 0, line='')
        self.assertEqual(result, 'help ')
        result = controller.complete('', 1, line='')
        self.assertEqual(result, None)

    def test_complete_action_partial(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help']
        result = controller.complete('h', 0, line='h')
        self.assertEqual(result, 'help ')
        result = controller.complete('h', 1, line='h')
        self.assertEqual(result, None)

    def test_complete_action_whole(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help']
        result = controller.complete('help', 0, line='help')
        self.assertEqual(result, 'help ')

    def test_complete_unknown_action_uncompletable(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        result = controller.complete('bad', 0, line='bad')
        self.assertEqual(result, None)

    def test_complete_unknown_action_arg_uncompletable(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'add']
        result = controller.complete('', 1, line='bad ')
        self.assertEqual(result, None)

    def test_complete_help_empty(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'start']
        result = controller.complete('', 0, line='help ')
        self.assertEqual(result, 'help ')
        result = controller.complete('', 1, line='help ')
        self.assertEqual(result, 'start ')
        result = controller.complete('', 2, line='help ')
        self.assertEqual(result, None)

    def test_complete_help_action(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'start']
        result = controller.complete('he', 0, line='help he')
        self.assertEqual(result, 'help ')
        result = controller.complete('he', 1, line='help he')
        self.assertEqual(result, None)

    def test_complete_start_empty(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'start']
        result = controller.complete('', 0, line='start ')
        self.assertEqual(result, 'foo ')
        result = controller.complete('', 1, line='start ')
        self.assertEqual(result, 'bar ')
        result = controller.complete('', 2, line='start ')
        self.assertEqual(result, 'baz:baz_01 ')
        result = controller.complete('', 3, line='start ')
        self.assertEqual(result, 'baz:* ')
        result = controller.complete('', 4, line='start ')
        self.assertEqual(result, None)

    def test_complete_start_no_colon(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'start']
        result = controller.complete('f', 0, line='start f')
        self.assertEqual(result, 'foo ')
        result = controller.complete('f', 1, line='start f')
        self.assertEqual(result, None)

    def test_complete_start_with_colon(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'start']
        result = controller.complete('foo:', 0, line='start foo:')
        self.assertEqual(result, 'foo:foo ')
        result = controller.complete('foo:', 1, line='start foo:')
        self.assertEqual(result, 'foo:* ')
        result = controller.complete('foo:', 2, line='start foo:')
        self.assertEqual(result, None)

    def test_complete_start_uncompletable(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'start']
        result = controller.complete('bad', 0, line='start bad')
        self.assertEqual(result, None)

    def test_complete_caches_process_info(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'start']
        result = controller.complete('', 0, line='start ')
        self.assertNotEqual(result, None)
        def f(*arg, **kw):
            raise Exception("should not have called getAllProcessInfo")
        controller.options._server.supervisor.getAllProcessInfo = f
        controller.complete('', 1, line='start ')

    def test_complete_add_empty(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'add']
        result = controller.complete('', 0, line='add ')
        self.assertEqual(result, 'foo ')
        result = controller.complete('', 1, line='add ')
        self.assertEqual(result, 'bar ')
        result = controller.complete('', 2, line='add ')
        self.assertEqual(result, 'baz ')
        result = controller.complete('', 3, line='add ')
        self.assertEqual(result, None)

    def test_complete_add_uncompletable(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'add']
        result = controller.complete('bad', 0, line='add bad')
        self.assertEqual(result, None)

    def test_complete_add_group(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'add']
        result = controller.complete('f', 0, line='add f')
        self.assertEqual(result, 'foo ')
        result = controller.complete('f', 1, line='add f')
        self.assertEqual(result, None)

    def test_complete_reload_arg_uncompletable(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        controller.vocab = ['help', 'reload']
        result = controller.complete('', 1, line='reload ')
        self.assertEqual(result, None)

    def test_nohelp(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        self.assertEqual(controller.nohelp, '*** No help on %s')

    def test_do_help(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        results = controller.do_help('')
        helpval = controller.stdout.getvalue()
        self.assertEqual(results, None)
        self.assertEqual(helpval, 'foo helped')

    def test_do_help_for_help(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        controller.stdout = StringIO()
        results = controller.do_help("help")
        self.assertEqual(results, None)
        helpval = controller.stdout.getvalue()
        self.assertTrue("help\t\tPrint a list" in helpval)

    def test_get_supervisor_returns_serverproxy_supervisor_namespace(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        proxy = controller.get_supervisor()
        expected = options.getServerProxy().supervisor
        self.assertEqual(proxy, expected)

    def test_get_server_proxy_with_no_args_returns_serverproxy(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        proxy = controller.get_server_proxy()
        expected = options.getServerProxy()
        self.assertEqual(proxy, expected)

    def test_get_server_proxy_with_namespace_returns_that_namespace(self):
        options = DummyClientOptions()
        controller = self._makeOne(options)
        proxy = controller.get_server_proxy('system')
        expected = options.getServerProxy().system
        self.assertEqual(proxy, expected)

    def test_real_controller_initialization(self):
        from supervisor.options import ClientOptions
        args = []  # simulating starting without parameters
        options = ClientOptions()
        # No default config file search in case they would exist
        self.assertTrue(len(options.searchpaths) > 0)
        options.searchpaths = []
        options.realize(args, doc=__doc__)
        self._makeOne(options)  # should not raise


class TestControllerPluginBase(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.supervisorctl import ControllerPluginBase
        return ControllerPluginBase

    def _makeOne(self, *arg, **kw):
        klass = self._getTargetClass()
        options = DummyClientOptions()
        ctl = DummyController(options)
        plugin = klass(ctl, *arg, **kw)
        return plugin

    def test_do_help_noarg(self):
        plugin = self._makeOne()
        result = plugin.do_help(None)
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), '\n')
        self.assertEqual(len(plugin.ctl.topics_printed), 1)
        topics = plugin.ctl.topics_printed[0]
        self.assertEqual(topics[0], 'unnamed commands (type help <topic>):')
        self.assertEqual(topics[1], [])
        self.assertEqual(topics[2], 15)
        self.assertEqual(topics[3], 80)

    def test_do_help_witharg(self):
        plugin = self._makeOne()
        result = plugin.do_help('foo')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'no help on foo\n')
        self.assertEqual(len(plugin.ctl.topics_printed), 0)


class TestDefaultControllerPlugin(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.supervisorctl import DefaultControllerPlugin
        return DefaultControllerPlugin

    def _makeOne(self, *arg, **kw):
        klass = self._getTargetClass()
        options = DummyClientOptions()
        ctl = DummyController(options)
        plugin = klass(ctl, *arg, **kw)
        return plugin

    def test_tail_toofewargs(self):
        plugin = self._makeOne()
        result = plugin.do_tail('')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(lines[0], 'Error: too few arguments')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_tail_toomanyargs(self):
        plugin = self._makeOne()
        result = plugin.do_tail('one two three four')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(lines[0], 'Error: too many arguments')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_tail_f_noprocname(self):
        plugin = self._makeOne()
        result = plugin.do_tail('-f')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(lines[0], 'Error: tail requires process name')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_tail_bad_modifier(self):
        plugin = self._makeOne()
        result = plugin.do_tail('-z foo')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(lines[0], 'Error: bad argument -z')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_tail_defaults(self):
        plugin = self._makeOne()
        result = plugin.do_tail('foo')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(len(lines), 12)
        self.assertEqual(lines[0], 'stdout line')

    def test_tail_no_file(self):
        plugin = self._makeOne()
        result = plugin.do_tail('NO_FILE')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(len(lines), 2)
        self.assertEqual(lines[0], 'NO_FILE: ERROR (no log file)')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_tail_failed(self):
        plugin = self._makeOne()
        result = plugin.do_tail('FAILED')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(len(lines), 2)
        self.assertEqual(lines[0], 'FAILED: ERROR (unknown error reading log)')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_tail_bad_name(self):
        plugin = self._makeOne()
        result = plugin.do_tail('BAD_NAME')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(len(lines), 2)
        self.assertEqual(lines[0], 'BAD_NAME: ERROR (no such process name)')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_tail_bytesmodifier(self):
        plugin = self._makeOne()
        result = plugin.do_tail('-10 foo')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(len(lines), 3)
        self.assertEqual(lines[0], 'dout line')

    def test_tail_explicit_channel_stdout_nomodifier(self):
        plugin = self._makeOne()
        result = plugin.do_tail('foo stdout')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(len(lines), 12)
        self.assertEqual(lines[0], 'stdout line')

    def test_tail_explicit_channel_stderr_nomodifier(self):
        plugin = self._makeOne()
        result = plugin.do_tail('foo stderr')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(len(lines), 12)
        self.assertEqual(lines[0], 'stderr line')

    def test_tail_explicit_channel_unrecognized(self):
        plugin = self._makeOne()
        result = plugin.do_tail('foo fudge')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().strip()
        self.assertEqual(value, "Error: bad channel 'fudge'")
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_tail_upcheck_failed(self):
        plugin = self._makeOne()
        plugin.ctl.upcheck = lambda: False
        called = []
        def f(*arg, **kw):
            called.append(True)
        plugin.ctl.options._server.supervisor.readProcessStdoutLog = f
        plugin.do_tail('foo')
        self.assertEqual(called, [])

    def test_status_help(self):
        plugin = self._makeOne()
        plugin.help_status()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("status <name>" in out)

    def test_status_upcheck_failed(self):
        plugin = self._makeOne()
        plugin.ctl.upcheck = lambda: False
        called = []
        def f(*arg, **kw):
            called.append(True)
        plugin.ctl.options._server.supervisor.getAllProcessInfo = f
        plugin.do_status('')
        self.assertEqual(called, [])

    def test_status_table_process_column_min_width(self):
        plugin = self._makeOne()
        result = plugin.do_status('')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split("\n")
        self.assertEqual(lines[0].index("RUNNING"), 33)

    def test_status_table_process_column_expands(self):
        plugin = self._makeOne()
        options = plugin.ctl.options
        def f(*arg, **kw):
            from supervisor.states import ProcessStates
            return [{'name': 'foo' * 50,  # long name
                     'group': 'foo',
                     'pid': 11,
                     'state': ProcessStates.RUNNING,
                     'statename': 'RUNNING',
                     'start': 0,
                     'stop': 0,
                     'spawnerr': '',
                     'now': 0,
                     'description': 'foo description'},
                    {'name': 'bar',  # short name
                     'group': 'bar',
                     'pid': 12,
                     'state': ProcessStates.FATAL,
                     'statename': 'RUNNING',
                     'start': 0,
                     'stop': 0,
                     'spawnerr': '',
                     'now': 0,
                     'description': 'bar description',
                     }]
        options._server.supervisor.getAllProcessInfo = f
        self.assertEqual(plugin.do_status(''), None)
        lines = plugin.ctl.stdout.getvalue().split("\n")
        self.assertEqual(lines[0].index("RUNNING"), 157)
        self.assertEqual(lines[1].index("RUNNING"), 157)

    def test_status_all_processes_no_arg(self):
        plugin = self._makeOne()
        result = plugin.do_status('')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(value[0].split(None, 2),
                         ['foo', 'RUNNING', 'foo description'])
        self.assertEqual(value[1].split(None, 2),
                         ['bar', 'FATAL', 'bar description'])
        self.assertEqual(value[2].split(None, 2),
                         ['baz:baz_01', 'STOPPED', 'baz description'])
        self.assertEqual(plugin.ctl.exitstatus,
                         LSBStatusExitStatuses.NOT_RUNNING)

    def test_status_success(self):
        plugin = self._makeOne()
        result = plugin.do_status('foo')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)
        value = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(value[0].split(None, 2),
                         ['foo', 'RUNNING', 'foo description'])

    def test_status_unknown_process(self):
        plugin = self._makeOne()
        result = plugin.do_status('unknownprogram')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue()
        self.assertEqual("unknownprogram: ERROR (no such process)\n", value)
        self.assertEqual(plugin.ctl.exitstatus, LSBStatusExitStatuses.UNKNOWN)

    def test_status_all_processes_all_arg(self):
        plugin = self._makeOne()
        result = plugin.do_status('all')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(value[0].split(None, 2),
                         ['foo', 'RUNNING', 'foo description'])
        self.assertEqual(value[1].split(None, 2),
                         ['bar', 'FATAL', 'bar description'])
        self.assertEqual(value[2].split(None, 2),
                         ['baz:baz_01', 'STOPPED', 'baz description'])
        self.assertEqual(plugin.ctl.exitstatus,
                         LSBStatusExitStatuses.NOT_RUNNING)

    def test_status_process_name(self):
        plugin = self._makeOne()
        result = plugin.do_status('foo')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().strip()
        self.assertEqual(value.split(None, 2),
                         ['foo', 'RUNNING', 'foo description'])
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_status_group_name(self):
        plugin = self._makeOne()
        result = plugin.do_status('baz:*')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(value[0].split(None, 2),
                         ['baz:baz_01', 'STOPPED', 'baz description'])
        self.assertEqual(plugin.ctl.exitstatus,
                         LSBStatusExitStatuses.NOT_RUNNING)

    def test_status_mixed_names(self):
        plugin = self._makeOne()
        result = plugin.do_status('foo baz:*')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(value[0].split(None, 2),
                         ['foo', 'RUNNING', 'foo description'])
        self.assertEqual(value[1].split(None, 2),
                         ['baz:baz_01', 'STOPPED', 'baz description'])
        self.assertEqual(plugin.ctl.exitstatus,
                         LSBStatusExitStatuses.NOT_RUNNING)

    def test_status_bad_group_name(self):
        plugin = self._makeOne()
        result = plugin.do_status('badgroup:*')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(value[0], "badgroup: ERROR (no such group)")
        self.assertEqual(plugin.ctl.exitstatus, LSBStatusExitStatuses.UNKNOWN)

    def test_status_bad_process_name(self):
        plugin = self._makeOne()
        result = plugin.do_status('badprocess')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(value[0], "badprocess: ERROR (no such process)")
        self.assertEqual(plugin.ctl.exitstatus, LSBStatusExitStatuses.UNKNOWN)

    def test_status_bad_process_name_with_group(self):
        plugin = self._makeOne()
        result = plugin.do_status('badgroup:badprocess')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(value[0], "badgroup:badprocess: "
                                   "ERROR (no such process)")
        self.assertEqual(plugin.ctl.exitstatus, LSBStatusExitStatuses.UNKNOWN)

    def test_start_help(self):
        plugin = self._makeOne()
        plugin.help_start()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("start <name>" in out)

    def test_start_fail(self):
        plugin = self._makeOne()
        result = plugin.do_start('')
        self.assertEqual(result, None)
        expected = "Error: start requires a process name"
        self.assertEqual(plugin.ctl.stdout.getvalue().split('\n')[0],
                         expected)
        self.assertEqual(plugin.ctl.exitstatus,
                         LSBInitExitStatuses.INVALID_ARGS)

    def test_start_badname(self):
        plugin = self._makeOne()
        result = plugin.do_start('BAD_NAME')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'BAD_NAME: ERROR (no such process)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_start_no_file(self):
        plugin = self._makeOne()
        result = plugin.do_start('NO_FILE')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'NO_FILE: ERROR (no such file)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_start_not_executable(self):
        plugin = self._makeOne()
        result = plugin.do_start('NOT_EXECUTABLE')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'NOT_EXECUTABLE: ERROR (file is not executable)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_start_alreadystarted(self):
        plugin = self._makeOne()
        result = plugin.do_start('ALREADY_STARTED')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'ALREADY_STARTED: ERROR (already started)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_start_spawnerror(self):
        plugin = self._makeOne()
        result = plugin.do_start('SPAWN_ERROR')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'SPAWN_ERROR: ERROR (spawn error)\n')
        self.assertEqual(plugin.ctl.exitstatus,
                         LSBInitExitStatuses.NOT_RUNNING)

    def test_start_abnormaltermination(self):
        plugin = self._makeOne()
        result = plugin.do_start('ABNORMAL_TERMINATION')
        self.assertEqual(result, None)
        expected = 'ABNORMAL_TERMINATION: ERROR (abnormal termination)\n'
        self.assertEqual(plugin.ctl.stdout.getvalue(), expected)
        self.assertEqual(plugin.ctl.exitstatus,
                         LSBInitExitStatuses.NOT_RUNNING)

    def test_start_one_success(self):
        plugin = self._makeOne()
        result = plugin.do_start('foo')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: started\n')

    def test_start_one_with_group_name_success(self):
        plugin = self._makeOne()
        result = plugin.do_start('foo:foo')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: started\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)
    def test_start_many(self):
        plugin = self._makeOne()
        result = plugin.do_start('foo bar')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'foo: started\nbar: started\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_start_group(self):
        plugin = self._makeOne()
        result = plugin.do_start('foo:')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'foo:foo_00: started\n'
                         'foo:foo_01: started\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_start_group_bad_name(self):
        plugin = self._makeOne()
        result = plugin.do_start('BAD_NAME:')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'BAD_NAME: ERROR (no such group)\n')
        self.assertEqual(plugin.ctl.exitstatus,
                         LSBInitExitStatuses.INVALID_ARGS)

    def test_start_all(self):
        plugin = self._makeOne()
        result = plugin.do_start('all')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'foo: started\n'
                         'foo2: started\n'
                         'failed_group:failed: ERROR (spawn error)\n')
        self.assertEqual(plugin.ctl.exitstatus,
                         LSBInitExitStatuses.NOT_RUNNING)

    def test_start_upcheck_failed(self):
        plugin = self._makeOne()
        plugin.ctl.upcheck = lambda: False
        called = []
        def f(*arg, **kw):
            called.append(True)
        supervisor = plugin.ctl.options._server.supervisor
        supervisor.startAllProcesses = f
        supervisor.startProcessGroup = f
        plugin.do_start('foo')
        self.assertEqual(called, [])

    def test_stop_help(self):
        plugin = self._makeOne()
        plugin.help_stop()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("stop <name>" in out)

    def test_stop_fail(self):
        plugin = self._makeOne()
        result = plugin.do_stop('')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue().split('\n')[0],
                         "Error: stop requires a process name")
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_stop_badname(self):
        plugin = self._makeOne()
        result = plugin.do_stop('BAD_NAME')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'BAD_NAME: ERROR (no such process)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_stop_notrunning(self):
        plugin = self._makeOne()
        result = plugin.do_stop('NOT_RUNNING')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'NOT_RUNNING: ERROR (not running)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_stop_failed(self):
        plugin = self._makeOne()
        result = plugin.do_stop('FAILED')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'FAILED\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_stop_one_success(self):
        plugin = self._makeOne()
        result = plugin.do_stop('foo')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: stopped\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_stop_one_with_group_name_success(self):
        plugin = self._makeOne()
        result = plugin.do_stop('foo:foo')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: stopped\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_stop_many(self):
        plugin = self._makeOne()
        result = plugin.do_stop('foo bar')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'foo: stopped\n'
                         'bar: stopped\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_stop_group(self):
        plugin = self._makeOne()
        result = plugin.do_stop('foo:')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'foo:foo_00: stopped\n'
                         'foo:foo_01: stopped\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_stop_group_bad_name(self):
        plugin = self._makeOne()
        result = plugin.do_stop('BAD_NAME:')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'BAD_NAME: ERROR (no such group)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_stop_all(self):
        plugin = self._makeOne()
        result = plugin.do_stop('all')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'foo: stopped\n'
                         'foo2: stopped\n'
                         'failed_group:failed: ERROR (no such process)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_stop_upcheck_failed(self):
        plugin = self._makeOne()
        plugin.ctl.upcheck = lambda: False
        called = []
        def f(*arg, **kw):
            called.append(True)
        supervisor = plugin.ctl.options._server.supervisor
        supervisor.stopAllProcesses = f
        supervisor.stopProcessGroup = f
        plugin.do_stop('foo')
        self.assertEqual(called, [])

    def test_signal_help(self):
        plugin = self._makeOne()
        plugin.help_signal()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("signal <signal name>" in out)

    def test_signal_fail_no_arg(self):
        plugin = self._makeOne()
        result = plugin.do_signal('')
        self.assertEqual(result, None)
        msg = 'Error: signal requires a signal name and a process name'
        self.assertEqual(plugin.ctl.stdout.getvalue().split('\n')[0], msg)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_signal_fail_one_arg(self):
        plugin = self._makeOne()
        result = plugin.do_signal('hup')
        self.assertEqual(result, None)
        msg = 'Error: signal requires a signal name and a process name'
        self.assertEqual(plugin.ctl.stdout.getvalue().split('\n')[0], msg)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_signal_bad_signal(self):
        plugin = self._makeOne()
        result = plugin.do_signal('BAD_SIGNAL foo')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'foo: ERROR (bad signal name)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_signal_bad_name(self):
        plugin = self._makeOne()
        result = plugin.do_signal('HUP BAD_NAME')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'BAD_NAME: ERROR (no such process)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_signal_bad_group(self):
        plugin = self._makeOne()
        result = plugin.do_signal('HUP BAD_NAME:')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'BAD_NAME: ERROR (no such group)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_signal_not_running(self):
        plugin = self._makeOne()
        result = plugin.do_signal('HUP NOT_RUNNING')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'NOT_RUNNING: ERROR (not running)\n')
        self.assertEqual(plugin.ctl.exitstatus,
                         LSBInitExitStatuses.NOT_RUNNING)

    def test_signal_failed(self):
        plugin = self._makeOne()
        result = plugin.do_signal('HUP FAILED')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'FAILED\n')
        self.assertEqual(plugin.ctl.exitstatus, 1)

    def test_signal_one_success(self):
        plugin = self._makeOne()
        result = plugin.do_signal('HUP foo')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: signalled\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_signal_many(self):
        plugin = self._makeOne()
        result = plugin.do_signal('HUP foo bar')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'foo: signalled\n'
                         'bar: signalled\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_signal_group(self):
        plugin = self._makeOne()
        result = plugin.do_signal('HUP foo:')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'foo:foo_00: signalled\n'
                         'foo:foo_01: signalled\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_signal_all(self):
        plugin = self._makeOne()
        result = plugin.do_signal('HUP all')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'foo: signalled\n'
                         'foo2: signalled\n'
                         'failed_group:failed: ERROR (no such process)\n')
self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC) def test_signal_upcheck_failed(self): plugin = self._makeOne() plugin.ctl.upcheck = lambda: False called = [] def f(*arg, **kw): called.append(True) supervisor = plugin.ctl.options._server.supervisor supervisor.signalAllProcesses = f supervisor.signalProcessGroup = f plugin.do_signal('term foo') self.assertEqual(called, []) def test_restart_help(self): plugin = self._makeOne() plugin.help_restart() out = plugin.ctl.stdout.getvalue() self.assertTrue("restart " in out) def test_restart_fail(self): plugin = self._makeOne() result = plugin.do_restart('') self.assertEqual(result, None) self.assertEqual(plugin.ctl.stdout.getvalue().split('\n')[0], 'Error: restart requires a process name') self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC) def test_restart_one(self): plugin = self._makeOne() result = plugin.do_restart('foo') self.assertEqual(result, None) self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: stopped\nfoo: started\n') self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS) def test_restart_all(self): plugin = self._makeOne() result = plugin.do_restart('all') self.assertEqual(result, None) self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: stopped\nfoo2: stopped\n' 'failed_group:failed: ERROR (no such process)\n' 'foo: started\nfoo2: started\n' 'failed_group:failed: ERROR (spawn error)\n') self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.NOT_RUNNING) def test_restart_upcheck_failed(self): plugin = self._makeOne() plugin.ctl.upcheck = lambda: False called = [] def f(*arg, **kw): called.append(True) supervisor = plugin.ctl.options._server.supervisor supervisor.stopAllProcesses = f supervisor.stopProcessGroup = f plugin.do_restart('foo') self.assertEqual(called, []) def test_clear_help(self): plugin = self._makeOne() plugin.help_clear() out = plugin.ctl.stdout.getvalue() self.assertTrue("clear " in out) def test_clear_fail(self): plugin = 
self._makeOne() result = plugin.do_clear('') self.assertEqual(result, None) self.assertEqual(plugin.ctl.stdout.getvalue().split('\n')[0], "Error: clear requires a process name") self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC) def test_clear_badname(self): plugin = self._makeOne() result = plugin.do_clear('BAD_NAME') self.assertEqual(result, None) self.assertEqual(plugin.ctl.stdout.getvalue(), 'BAD_NAME: ERROR (no such process)\n') self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC) def test_clear_one_success(self): plugin = self._makeOne() result = plugin.do_clear('foo') self.assertEqual(result, None) self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: cleared\n') self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS) def test_clear_one_with_group_success(self): plugin = self._makeOne() result = plugin.do_clear('foo:foo') self.assertEqual(result, None) self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: cleared\n') self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS) def test_clear_many(self): plugin = self._makeOne() result = plugin.do_clear('foo bar') self.assertEqual(result, None) self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: cleared\nbar: cleared\n') self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS) def test_clear_all(self): plugin = self._makeOne() result = plugin.do_clear('all') self.assertEqual(result, None) self.assertEqual(plugin.ctl.stdout.getvalue(), 'foo: cleared\n' 'foo2: cleared\n' 'failed_group:failed: ERROR (failed)\n') self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC) def test_clear_upcheck_failed(self): plugin = self._makeOne() plugin.ctl.upcheck = lambda: False called = [] def f(*arg, **kw): called.append(True) supervisor = plugin.ctl.options._server.supervisor supervisor.clearAllProcessLogs = f supervisor.clearProcessLogs = f plugin.do_clear('foo') self.assertEqual(called, []) def test_open_help(self): plugin = self._makeOne() 
plugin.help_open() out = plugin.ctl.stdout.getvalue() self.assertTrue("open " in out) def test_open_fail(self): plugin = self._makeOne() result = plugin.do_open('badname') self.assertEqual(result, None) self.assertEqual(plugin.ctl.stdout.getvalue(), 'ERROR: url must be http:// or unix://\n') self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC) def test_open_succeed(self): plugin = self._makeOne() result = plugin.do_open('http://localhost:9002') self.assertEqual(result, None) value = plugin.ctl.stdout.getvalue().split('\n') self.assertEqual(value[0].split(None, 2), ['foo', 'RUNNING', 'foo description']) self.assertEqual(value[1].split(None, 2), ['bar', 'FATAL', 'bar description']) self.assertEqual(value[2].split(None, 2), ['baz:baz_01', 'STOPPED', 'baz description']) self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS) def test_version_help(self): plugin = self._makeOne() plugin.help_version() out = plugin.ctl.stdout.getvalue() self.assertTrue("Show the version of the remote supervisord" in out) def test_version(self): plugin = self._makeOne() plugin.do_version(None) self.assertEqual(plugin.ctl.stdout.getvalue(), '3000\n') def test_version_arg(self): plugin = self._makeOne() result = plugin.do_version('bad') self.assertEqual(result, None) val = plugin.ctl.stdout.getvalue() self.assertTrue(val.startswith('Error: version accepts no arguments'), val) self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC) def test_version_upcheck_failed(self): plugin = self._makeOne() plugin.ctl.upcheck = lambda: False called = [] def f(*arg, **kw): called.append(True) plugin.ctl.options._server.supervisor.getSupervisorVersion = f plugin.do_version('') self.assertEqual(called, []) def test_reload_help(self): plugin = self._makeOne() plugin.help_reload() out = plugin.ctl.stdout.getvalue() self.assertTrue("Restart the remote supervisord" in out) def test_reload_fail(self): plugin = self._makeOne() options = plugin.ctl.options 
        options._server.supervisor._restartable = False
        result = plugin.do_reload('')
        self.assertEqual(result, None)
        self.assertEqual(options._server.supervisor._restarted, False)

    def test_reload(self):
        plugin = self._makeOne()
        options = plugin.ctl.options
        result = plugin.do_reload('')
        self.assertEqual(result, None)
        self.assertEqual(options._server.supervisor._restarted, True)

    def test_reload_arg(self):
        plugin = self._makeOne()
        result = plugin.do_reload('bad')
        self.assertEqual(result, None)
        val = plugin.ctl.stdout.getvalue()
        self.assertTrue(val.startswith('Error: reload accepts no arguments'), val)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_shutdown_help(self):
        plugin = self._makeOne()
        plugin.help_shutdown()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("Shut the remote supervisord down" in out)

    def test_shutdown_with_arg_shows_error(self):
        plugin = self._makeOne()
        options = plugin.ctl.options
        result = plugin.do_shutdown('bad')
        self.assertEqual(result, None)
        self.assertEqual(options._server.supervisor._shutdown, False)
        val = plugin.ctl.stdout.getvalue()
        self.assertTrue(val.startswith('Error: shutdown accepts no arguments'), val)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_shutdown(self):
        plugin = self._makeOne()
        options = plugin.ctl.options
        result = plugin.do_shutdown('')
        self.assertEqual(result, None)
        self.assertEqual(options._server.supervisor._shutdown, True)

    def test_shutdown_catches_xmlrpc_fault_shutdown_state(self):
        plugin = self._makeOne()
        from supervisor import xmlrpc
        def raise_fault(*arg, **kw):
            raise xmlrpclib.Fault(xmlrpc.Faults.SHUTDOWN_STATE, 'bye')
        plugin.ctl.options._server.supervisor.shutdown = raise_fault
        result = plugin.do_shutdown('')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'ERROR: already shutting down\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_shutdown_reraises_other_xmlrpc_faults(self):
        plugin = self._makeOne()
        from supervisor import xmlrpc
        def raise_fault(*arg, **kw):
            raise xmlrpclib.Fault(xmlrpc.Faults.CANT_REREAD, 'ouch')
        plugin.ctl.options._server.supervisor.shutdown = raise_fault
        self.assertRaises(xmlrpclib.Fault, plugin.do_shutdown, '')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_shutdown_catches_socket_error_ECONNREFUSED(self):
        plugin = self._makeOne()
        import socket
        import errno
        def raise_fault(*arg, **kw):
            raise socket.error(errno.ECONNREFUSED, 'nobody home')
        plugin.ctl.options._server.supervisor.shutdown = raise_fault
        result = plugin.do_shutdown('')
        self.assertEqual(result, None)
        output = plugin.ctl.stdout.getvalue()
        self.assertTrue('refused connection (already shut down?)' in output)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_shutdown_catches_socket_error_ENOENT(self):
        plugin = self._makeOne()
        import socket
        import errno
        def raise_fault(*arg, **kw):
            raise socket.error(errno.ENOENT, 'no file')
        plugin.ctl.options._server.supervisor.shutdown = raise_fault
        result = plugin.do_shutdown('')
        self.assertEqual(result, None)
        output = plugin.ctl.stdout.getvalue()
        self.assertTrue('no such file (already shut down?)' in output)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_shutdown_reraises_other_socket_errors(self):
        plugin = self._makeOne()
        import socket
        import errno
        def raise_fault(*arg, **kw):
            raise socket.error(errno.EPERM, 'denied')
        plugin.ctl.options._server.supervisor.shutdown = raise_fault
        self.assertRaises(socket.error, plugin.do_shutdown, '')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test__formatChanges(self):
        plugin = self._makeOne()
        # Don't explode, plz
        plugin._formatChanges([['added'], ['changed'], ['removed']])
        plugin._formatChanges([[], [], []])

    def test_reread_help(self):
        plugin = self._makeOne()
        plugin.help_reread()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("Reload the daemon's configuration files" in out)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_reread(self):
        plugin = self._makeOne()
        calls = []
        plugin._formatChanges = lambda x: calls.append(x)
        result = plugin.do_reread(None)
        self.assertEqual(result, None)
        self.assertEqual(calls[0], [['added'], ['changed'], ['removed']])
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_reread_arg(self):
        plugin = self._makeOne()
        result = plugin.do_reread('bad')
        self.assertEqual(result, None)
        val = plugin.ctl.stdout.getvalue()
        self.assertTrue(val.startswith('Error: reread accepts no arguments'), val)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_reread_cant_reread(self):
        plugin = self._makeOne()
        from supervisor import xmlrpc
        def reloadConfig(*arg, **kw):
            raise xmlrpclib.Fault(xmlrpc.Faults.CANT_REREAD, 'cant')
        plugin.ctl.options._server.supervisor.reloadConfig = reloadConfig
        plugin.do_reread(None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'ERROR: cant\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_reread_shutdown_state(self):
        plugin = self._makeOne()
        from supervisor import xmlrpc
        def reloadConfig(*arg, **kw):
            raise xmlrpclib.Fault(xmlrpc.Faults.SHUTDOWN_STATE, '')
        plugin.ctl.options._server.supervisor.reloadConfig = reloadConfig
        plugin.do_reread(None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'ERROR: supervisor shutting down\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_reread_reraises_other_faults(self):
        plugin = self._makeOne()
        from supervisor import xmlrpc
        def reloadConfig(*arg, **kw):
            raise xmlrpclib.Fault(xmlrpc.Faults.FAILED, '')
        plugin.ctl.options._server.supervisor.reloadConfig = reloadConfig
        self.assertRaises(xmlrpclib.Fault, plugin.do_reread, '')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test__formatConfigInfo(self):
        info = {
            'group': 'group1',
            'name': 'process1',
            'inuse': True,
            'autostart': True,
            'process_prio': 999,
            'group_prio': 999
        }
        plugin = self._makeOne()
        result = plugin._formatConfigInfo(info)
        self.assertTrue('in use' in result)
        info = {
            'group': 'group1',
            'name': 'process1',
            'inuse': False,
            'autostart': False,
            'process_prio': 999,
            'group_prio': 999
        }
        result = plugin._formatConfigInfo(info)
        self.assertTrue('avail' in result)

    def test_avail_help(self):
        plugin = self._makeOne()
        plugin.help_avail()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("Display all configured" in out)

    def test_avail(self):
        calls = []
        plugin = self._makeOne()

        class FakeSupervisor(object):
            def getAllConfigInfo(self):
                return [{'group': 'group1', 'name': 'process1',
                         'inuse': False, 'autostart': False,
                         'process_prio': 999, 'group_prio': 999}]

        plugin.ctl.get_supervisor = lambda: FakeSupervisor()
        plugin.ctl.output = calls.append
        result = plugin.do_avail('')
        self.assertEqual(result, None)

    def test_avail_arg(self):
        plugin = self._makeOne()
        result = plugin.do_avail('bad')
        self.assertEqual(result, None)
        val = plugin.ctl.stdout.getvalue()
        self.assertTrue(val.startswith('Error: avail accepts no arguments'), val)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_avail_shutdown_state(self):
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        def getAllConfigInfo():
            from supervisor import xmlrpc
            raise xmlrpclib.Fault(xmlrpc.Faults.SHUTDOWN_STATE, '')
        supervisor.getAllConfigInfo = getAllConfigInfo
        result = plugin.do_avail('')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'ERROR: supervisor shutting down\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_avail_reraises_other_faults(self):
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        def getAllConfigInfo():
            from supervisor import xmlrpc
            raise xmlrpclib.Fault(xmlrpc.Faults.FAILED, '')
        supervisor.getAllConfigInfo = getAllConfigInfo
        self.assertRaises(xmlrpclib.Fault, plugin.do_avail, '')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_add_help(self):
        plugin = self._makeOne()
        plugin.help_add()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("add " in out)

    def test_add(self):
        plugin = self._makeOne()
        result = plugin.do_add('foo')
        self.assertEqual(result, None)
        supervisor = plugin.ctl.options._server.supervisor
        self.assertEqual(supervisor.processes, ['foo'])
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_add_already_added(self):
        plugin = self._makeOne()
        result = plugin.do_add('ALREADY_ADDED')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'ERROR: process group already active\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_add_bad_name(self):
        plugin = self._makeOne()
        result = plugin.do_add('BAD_NAME')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'ERROR: no such process/group: BAD_NAME\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_add_shutdown_state(self):
        plugin = self._makeOne()
        result = plugin.do_add('SHUTDOWN_STATE')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'ERROR: shutting down\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_add_reraises_other_faults(self):
        plugin = self._makeOne()
        self.assertRaises(xmlrpclib.Fault, plugin.do_add, 'FAILED')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_remove_help(self):
        plugin = self._makeOne()
        plugin.help_remove()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("remove " in out)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_remove(self):
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        supervisor.processes = ['foo']
        result = plugin.do_remove('foo')
        self.assertEqual(result, None)
        self.assertEqual(supervisor.processes, [])

    def test_remove_bad_name(self):
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        supervisor.processes = ['foo']
        result = plugin.do_remove('BAD_NAME')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'ERROR: no such process/group: BAD_NAME\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_remove_still_running(self):
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        supervisor.processes = ['foo']
        result = plugin.do_remove('STILL_RUNNING')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'ERROR: process/group still running: STILL_RUNNING\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_remove_reraises_other_faults(self):
        plugin = self._makeOne()
        self.assertRaises(xmlrpclib.Fault, plugin.do_remove, 'FAILED')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_update_help(self):
        plugin = self._makeOne()
        plugin.help_update()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("Reload config and add/remove" in out)

    def test_update_not_on_shutdown(self):
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        def reloadConfig():
            from supervisor import xmlrpc
            raise xmlrpclib.Fault(xmlrpc.Faults.SHUTDOWN_STATE, 'blah')
        supervisor.reloadConfig = reloadConfig
        supervisor.processes = ['removed']
        plugin.do_update('')
        self.assertEqual(supervisor.processes, ['removed'])

    def test_update_added_procs(self):
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        def reloadConfig():
            return [[['new_proc'], [], []]]
        supervisor.reloadConfig = reloadConfig
        result = plugin.do_update('')
        self.assertEqual(result, None)
        self.assertEqual(supervisor.processes, ['new_proc'])

    def test_update_with_gname(self):
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        def reloadConfig():
            return [[['added1', 'added2'], ['changed'], ['removed']]]
        supervisor.reloadConfig = reloadConfig
        supervisor.processes = ['changed', 'removed']
        plugin.do_update('changed')
        self.assertEqual(sorted(supervisor.processes),
                         sorted(['changed', 'removed']))
        plugin.do_update('added1 added2')
        self.assertEqual(sorted(supervisor.processes),
                         sorted(['changed', 'removed', 'added1', 'added2']))
        plugin.do_update('removed')
        self.assertEqual(sorted(supervisor.processes),
                         sorted(['changed', 'added1', 'added2']))
        supervisor.processes = ['changed', 'removed']
        plugin.do_update('removed added1')
        self.assertEqual(sorted(supervisor.processes),
                         sorted(['changed', 'added1']))
        supervisor.processes = ['changed', 'removed']
        plugin.do_update('all')
        self.assertEqual(sorted(supervisor.processes),
                         sorted(['changed', 'added1', 'added2']))

    def test_update_changed_procs(self):
        from supervisor import xmlrpc
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        calls = []
        def reloadConfig():
            return [[[], ['changed_group'], []]]
        supervisor.reloadConfig = reloadConfig
        supervisor.startProcess = lambda x: calls.append(('start', x))

        supervisor.addProcessGroup('changed_group')  # fake existence
        results = [{'name': 'changed_process',
                    'group': 'changed_group',
                    'status': xmlrpc.Faults.SUCCESS,
                    'description': 'blah'}]
        def stopProcessGroup(name):
            calls.append(('stop', name))
            return results
        supervisor.stopProcessGroup = stopProcessGroup

        plugin.do_update('')
        self.assertEqual(calls, [('stop', 'changed_group')])

        supervisor.addProcessGroup('changed_group')  # fake existence
        calls[:] = []
        results[:] = [{'name': 'changed_process1',
                       'group': 'changed_group',
                       'status': xmlrpc.Faults.NOT_RUNNING,
                       'description': 'blah'},
                      {'name': 'changed_process2',
                       'group': 'changed_group',
                       'status': xmlrpc.Faults.FAILED,
                       'description': 'blah'}]
        plugin.do_update('')
        self.assertEqual(calls, [('stop', 'changed_group')])

        supervisor.addProcessGroup('changed_group')  # fake existence
        calls[:] = []
        results[:] = [{'name': 'changed_process1',
                       'group': 'changed_group',
                       'status': xmlrpc.Faults.FAILED,
                       'description': 'blah'},
                      {'name': 'changed_process2',
                       'group': 'changed_group',
                       'status': xmlrpc.Faults.SUCCESS,
                       'description': 'blah'}]
        plugin.do_update('')
        self.assertEqual(calls, [('stop', 'changed_group')])

    def test_update_removed_procs(self):
        from supervisor import xmlrpc
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        def reloadConfig():
            return [[[], [], ['removed_group']]]
        supervisor.reloadConfig = reloadConfig
        results = [{'name': 'removed_process',
                    'group': 'removed_group',
                    'status': xmlrpc.Faults.SUCCESS,
                    'description': 'blah'}]
        supervisor.processes = ['removed_group']
        def stopProcessGroup(name):
            return results
        supervisor.stopProcessGroup = stopProcessGroup

        plugin.do_update('')
        self.assertEqual(supervisor.processes, [])

        results[:] = [{'name': 'removed_process',
                       'group': 'removed_group',
                       'status': xmlrpc.Faults.NOT_RUNNING,
                       'description': 'blah'}]
        supervisor.processes = ['removed_group']
        plugin.do_update('')
        self.assertEqual(supervisor.processes, [])

        results[:] = [{'name': 'removed_process',
                       'group': 'removed_group',
                       'status': xmlrpc.Faults.FAILED,
                       'description': 'blah'}]
        supervisor.processes = ['removed_group']
        plugin.do_update('')
        self.assertEqual(supervisor.processes, ['removed_group'])

    def test_update_reraises_other_faults(self):
        plugin = self._makeOne()
        supervisor = plugin.ctl.options._server.supervisor
        def reloadConfig():
            from supervisor import xmlrpc
            raise xmlrpclib.Fault(xmlrpc.Faults.FAILED, 'FAILED')
        supervisor.reloadConfig = reloadConfig
        self.assertRaises(xmlrpclib.Fault, plugin.do_update, '')
        self.assertEqual(plugin.ctl.exitstatus, 1)

    def test_pid_help(self):
        plugin = self._makeOne()
        plugin.help_pid()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("pid " in out)

    def test_pid_supervisord(self):
        plugin = self._makeOne()
        result = plugin.do_pid('')
        self.assertEqual(result, None)
        options = plugin.ctl.options
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(len(lines), 2)
        self.assertEqual(lines[0],
                         str(options._server.supervisor.getPID()))

    def test_pid_allprocesses(self):
        plugin = self._makeOne()
        result = plugin.do_pid('all')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().strip()
        self.assertEqual(value.split(), ['11', '12', '13'])
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_pid_badname(self):
        plugin = self._makeOne()
        result = plugin.do_pid('BAD_NAME')
        self.assertEqual(result, None)
        value = plugin.ctl.stdout.getvalue().strip()
        self.assertEqual(value, 'No such process BAD_NAME')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_pid_oneprocess(self):
        plugin = self._makeOne()
        result = plugin.do_pid('foo')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue().strip(), '11')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.SUCCESS)

    def test_pid_oneprocess_not_running(self):
        plugin = self._makeOne()
        options = plugin.ctl.options
        def f(*arg, **kw):
            from supervisor.states import ProcessStates
            return {'name': 'foo',
                    'group': 'foo',
                    'pid': 0,
                    'state': ProcessStates.STOPPED,
                    'statename': 'STOPPED',
                    'start': 0,
                    'stop': 0,
                    'spawnerr': '',
                    'now': 0,
                    'description': 'foo description'}
        options._server.supervisor.getProcessInfo = f
        result = plugin.do_pid('foo')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue().strip(), '0')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.NOT_RUNNING)

    def test_pid_upcheck_failed(self):
        plugin = self._makeOne()
        plugin.ctl.upcheck = lambda: False
        called = []
        def f(*arg, **kw):
            called.append(True)
        plugin.ctl.options._server.supervisor.getPID = f
        plugin.do_pid('')
        self.assertEqual(called, [])

    def test_maintail_help(self):
        plugin = self._makeOne()
        plugin.help_maintail()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("tail of supervisor main log file" in out)

    def test_maintail_toomanyargs(self):
        plugin = self._makeOne()
        result = plugin.do_maintail('foo bar')
        self.assertEqual(result, None)
        val = plugin.ctl.stdout.getvalue()
        self.assertTrue(val.startswith('Error: too many'), val)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_maintail_minus_string_fails(self):
        plugin = self._makeOne()
        result = plugin.do_maintail('-wrong')
        self.assertEqual(result, None)
        val = plugin.ctl.stdout.getvalue()
        self.assertTrue(val.startswith('Error: bad argument -wrong'), val)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_maintail_wrong(self):
        plugin = self._makeOne()
        result = plugin.do_maintail('wrong')
        self.assertEqual(result, None)
        val = plugin.ctl.stdout.getvalue()
        self.assertTrue(val.startswith('Error: bad argument wrong'), val)
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def _dont_test_maintail_dashf(self):
        # https://github.com/Supervisor/supervisor/issues/285
        # TODO: Refactor so we can test more of maintail -f than just a
        # connect error, and fix this test so it passes on FreeBSD.
        plugin = self._makeOne()
        plugin.listener = DummyListener()
        result = plugin.do_maintail('-f')
        self.assertEqual(result, None)
        errors = plugin.listener.errors
        self.assertEqual(len(errors), 1)
        error = errors[0]
        self.assertEqual(plugin.listener.closed,
                         'http://localhost:65532/mainlogtail')
        self.assertEqual(error[0],
                         'http://localhost:65532/mainlogtail')
        self.assertTrue('Cannot connect' in error[1])

    def test_maintail_bad_modifier(self):
        plugin = self._makeOne()
        result = plugin.do_maintail('-z')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(lines[0], 'Error: bad argument -z')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_maintail_nobytes(self):
        plugin = self._makeOne()
        result = plugin.do_maintail('')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'mainlogdata\n')

    def test_maintail_dashbytes(self):
        plugin = self._makeOne()
        result = plugin.do_maintail('-100')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(), 'mainlogdata\n')

    def test_maintail_readlog_error_nofile(self):
        plugin = self._makeOne()
        supervisor_rpc = plugin.ctl.get_supervisor()
        from supervisor import xmlrpc
        supervisor_rpc._readlog_error = xmlrpc.Faults.NO_FILE
        result = plugin.do_maintail('-100')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'supervisord: ERROR (no log file)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_maintail_readlog_error_failed(self):
        plugin = self._makeOne()
        supervisor_rpc = plugin.ctl.get_supervisor()
        from supervisor import xmlrpc
        supervisor_rpc._readlog_error = xmlrpc.Faults.FAILED
        result = plugin.do_maintail('-100')
        self.assertEqual(result, None)
        self.assertEqual(plugin.ctl.stdout.getvalue(),
                         'supervisord: ERROR (unknown error reading log)\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_maintail_upcheck_failed(self):
        plugin = self._makeOne()
        plugin.ctl.upcheck = lambda: False
        called = []
        def f(*arg, **kw):
            called.append(True)
        plugin.ctl.options._server.supervisor.readLog = f
        plugin.do_maintail('')
        self.assertEqual(called, [])

    def test_fg_help(self):
        plugin = self._makeOne()
        plugin.help_fg()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("fg " in out)

    def test_fg_too_few_args(self):
        plugin = self._makeOne()
        result = plugin.do_fg('')
        self.assertEqual(result, None)
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(lines[0], 'ERROR: no process name supplied')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_fg_too_many_args(self):
        plugin = self._makeOne()
        result = plugin.do_fg('foo bar')
        self.assertEqual(result, None)
        line = plugin.ctl.stdout.getvalue()
        self.assertEqual(line, 'ERROR: too many process names supplied\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_fg_badprocname(self):
        plugin = self._makeOne()
        result = plugin.do_fg('BAD_NAME')
        self.assertEqual(result, None)
        line = plugin.ctl.stdout.getvalue()
        self.assertEqual(line, 'ERROR: bad process name supplied\n')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_fg_procnotrunning(self):
        plugin = self._makeOne()
        result = plugin.do_fg('bar')
        self.assertEqual(result, None)
        line = plugin.ctl.stdout.getvalue()
        self.assertEqual(line, 'ERROR: process not running\n')
        result = plugin.do_fg('baz_01')
        lines = plugin.ctl.stdout.getvalue().split('\n')
        self.assertEqual(result, None)
        self.assertEqual(lines[-2], 'ERROR: process not running')
        self.assertEqual(plugin.ctl.exitstatus, LSBInitExitStatuses.GENERIC)

    def test_fg_upcheck_failed(self):
        plugin = self._makeOne()
        plugin.ctl.upcheck = lambda: False
        called = []
        def f(*arg, **kw):
            called.append(True)
        plugin.ctl.options._server.supervisor.getProcessInfo = f
        plugin.do_fg('foo')
        self.assertEqual(called, [])

    def test_exit_help(self):
        plugin = self._makeOne()
        plugin.help_exit()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("Exit the supervisor shell" in out)

    def test_quit_help(self):
        plugin = self._makeOne()
        plugin.help_quit()
        out = plugin.ctl.stdout.getvalue()
        self.assertTrue("Exit the supervisor shell" in out)


class DummyListener:
    def __init__(self):
        self.errors = []

    def error(self, url, msg):
        self.errors.append((url, msg))

    def close(self, url):
        self.closed = url


class DummyPluginFactory:
    def __init__(self, ctl, **kw):
        self.ctl = ctl

    def do_help(self, arg):
        self.ctl.stdout.write('foo helped')


class DummyClientOptions:
    def __init__(self):
        self.prompt = 'supervisor'
        self.serverurl = 'http://localhost:65532'
        self.username = 'chrism'
        self.password = '123'
        self.history_file = None
        self.plugins = ()
        self._server = DummyRPCServer()
        self.interactive = False
        self.plugin_factories = [('dummy', DummyPluginFactory, {})]

    def getServerProxy(self):
        return self._server


class DummyController:
    nohelp = 'no help on %s'

    def __init__(self, options):
        self.options = options
        self.topics_printed = []
        self.stdout = StringIO()
        self.exitstatus = LSBInitExitStatuses.SUCCESS

    def upcheck(self):
        return True

    def get_supervisor(self):
        return self.get_server_proxy('supervisor')

    def get_server_proxy(self, namespace=None):
        proxy = self.options.getServerProxy()
        if namespace is None:
            return proxy
        else:
            return getattr(proxy, namespace)

    def output(self, data):
        self.stdout.write(data + '\n')

    def print_topics(self, doc_headers, cmds_doc, rows, cols):
        self.topics_printed.append((doc_headers, cmds_doc, rows, cols))

    def set_exitstatus_from_xmlrpc_fault(self, faultcode, ignored_faultcode=None):
        from supervisor.supervisorctl import DEAD_PROGRAM_FAULTS
        if faultcode in (ignored_faultcode, xmlrpc.Faults.SUCCESS):
            pass
        elif faultcode in DEAD_PROGRAM_FAULTS:
            self.exitstatus = LSBInitExitStatuses.NOT_RUNNING
        else:
            self.exitstatus = LSBInitExitStatuses.GENERIC


class DummyPlugin:
    def __init__(self, controller=None):
        self.ctl = controller

    def do_help(self, arg):
        self.helped = True


def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

supervisor-4.2.5/supervisor/tests/test_supervisord.py

import unittest
import time
import signal
import sys
import os
import tempfile
import shutil

from supervisor.states import ProcessStates
from supervisor.states import SupervisorStates
from supervisor.tests.base import DummyOptions
from supervisor.tests.base import DummyPConfig
from supervisor.tests.base import DummyPGroupConfig
from supervisor.tests.base import DummyProcess
from supervisor.tests.base import DummyProcessGroup
from supervisor.tests.base import DummyDispatcher
from supervisor.compat import StringIO

try:
    import pstats
except ImportError: # pragma: no cover
    # Debian-packaged pythons may not have the pstats module
    # unless the "python-profiler" package is installed.
    pstats = None


class EntryPointTests(unittest.TestCase):
    def test_main_noprofile(self):
        from supervisor.supervisord import main
        conf = os.path.join(
            os.path.abspath(os.path.dirname(__file__)),
            'fixtures', 'donothing.conf')
        new_stdout = StringIO()
        new_stdout.fileno = lambda: 1
        old_stdout = sys.stdout
        try:
            tempdir = tempfile.mkdtemp()
            log = os.path.join(tempdir, 'log')
            pid = os.path.join(tempdir, 'pid')
            sys.stdout = new_stdout
            main(args=['-c', conf, '-l', log, '-j', pid, '-n'], test=True)
        finally:
            sys.stdout = old_stdout
            shutil.rmtree(tempdir)
        output = new_stdout.getvalue()
        self.assertTrue('supervisord started' in output, output)

    if pstats:
        def test_main_profile(self):
            from supervisor.supervisord import main
            conf = os.path.join(
                os.path.abspath(os.path.dirname(__file__)),
                'fixtures', 'donothing.conf')
            new_stdout = StringIO()
            new_stdout.fileno = lambda: 1
            old_stdout = sys.stdout
            try:
                tempdir = tempfile.mkdtemp()
                log = os.path.join(tempdir, 'log')
                pid = os.path.join(tempdir, 'pid')
                sys.stdout = new_stdout
                main(args=['-c', conf, '-l', log, '-j', pid, '-n',
                           '--profile_options=cumulative,calls'], test=True)
            finally:
                sys.stdout = old_stdout
                shutil.rmtree(tempdir)
            output = new_stdout.getvalue()
            self.assertTrue('cumulative time, call count' in output, output)

    def test_silent_off(self):
        from supervisor.supervisord import main
        conf = os.path.join(
            os.path.abspath(os.path.dirname(__file__)),
            'fixtures', 'donothing.conf')
        new_stdout = StringIO()
        new_stdout.fileno = lambda: 1
        old_stdout = sys.stdout
        try:
            tempdir = tempfile.mkdtemp()
            log = os.path.join(tempdir, 'log')
            pid = os.path.join(tempdir, 'pid')
            sys.stdout = new_stdout
            main(args=['-c', conf, '-l', log, '-j', pid, '-n'], test=True)
        finally:
            sys.stdout = old_stdout
            shutil.rmtree(tempdir)
        output = new_stdout.getvalue()
        self.assertGreater(len(output), 0)

    def test_silent_on(self):
        from supervisor.supervisord import main
        conf = os.path.join(
            os.path.abspath(os.path.dirname(__file__)),
            'fixtures', 'donothing.conf')
        new_stdout = StringIO()
        new_stdout.fileno = lambda: 1
        old_stdout = sys.stdout
        try:
            tempdir = tempfile.mkdtemp()
            log = os.path.join(tempdir, 'log')
            pid = os.path.join(tempdir, 'pid')
            sys.stdout = new_stdout
            main(args=['-c', conf, '-l', log, '-j', pid, '-n', '-s'], test=True)
        finally:
            sys.stdout = old_stdout
            shutil.rmtree(tempdir)
        output = new_stdout.getvalue()
        self.assertEqual(len(output), 0)


class SupervisordTests(unittest.TestCase):
    def tearDown(self):
        from supervisor.events import clear
        clear()

    def _getTargetClass(self):
        from supervisor.supervisord import Supervisor
        return Supervisor

    def _makeOne(self, options):
        return self._getTargetClass()(options)

    def test_main_first(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo', '/tmp')
        gconfigs = [DummyPGroupConfig(options, 'foo', pconfigs=[pconfig])]
        options.process_group_configs = gconfigs
        options.test = True
        options.first = True
        supervisord = self._makeOne(options)
        supervisord.main()
        self.assertEqual(options.fds_cleaned_up, False)
        self.assertEqual(options.rlimits_set, True)
        self.assertEqual(options.parse_criticals, ['setuid_called'])
        self.assertEqual(options.parse_warnings, [])
        self.assertEqual(options.parse_infos, ['rlimits_set'])
        self.assertEqual(options.autochildlogdir_cleared, True)
        self.assertEqual(len(supervisord.process_groups), 1)
        self.assertEqual(supervisord.process_groups['foo'].config.options,
                         options)
        self.assertEqual(options.httpservers_opened, True)
        self.assertEqual(options.signals_set, True)
        self.assertEqual(options.daemonized, True)
        self.assertEqual(options.pidfile_written, True)
        self.assertEqual(options.cleaned_up, True)

    def test_main_notfirst(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo', '/tmp')
        gconfigs = [DummyPGroupConfig(options, 'foo', pconfigs=[pconfig])]
        options.process_group_configs = gconfigs
        options.test = True
        options.first = False
        supervisord = self._makeOne(options)
        supervisord.main()
        self.assertEqual(options.fds_cleaned_up, True)
        self.assertFalse(hasattr(options, 'rlimits_set'))
        self.assertEqual(options.parse_criticals, ['setuid_called'])
        self.assertEqual(options.parse_warnings, [])
        self.assertEqual(options.parse_infos, [])
        self.assertEqual(options.autochildlogdir_cleared, True)
        self.assertEqual(len(supervisord.process_groups), 1)
        self.assertEqual(supervisord.process_groups['foo'].config.options,
                         options)
        self.assertEqual(options.httpservers_opened, True)
        self.assertEqual(options.signals_set, True)
        self.assertEqual(options.daemonized, False)
        self.assertEqual(options.pidfile_written, True)
        self.assertEqual(options.cleaned_up, True)

    def test_reap(self):
        options = DummyOptions()
        options.waitpid_return = 1, 1
        pconfig = DummyPConfig(options, 'process', '/bin/foo', '/tmp')
        process = DummyProcess(pconfig)
        process.drained = False
        process.killing = True
        process.laststop = None
        process.waitstatus = None, None
        options.pidhistory = {1: process}
        supervisord = self._makeOne(options)
        supervisord.reap(once=True)
        self.assertEqual(process.finished, (1, 1))

    def test_reap_recursionguard(self):
        options = DummyOptions()
        supervisord = self._makeOne(options)
        result = supervisord.reap(once=True, recursionguard=100)
        self.assertEqual(result, None)

    def test_reap_more_than_once(self):
        options = DummyOptions()
        options.waitpid_return = 1, 1
        pconfig = DummyPConfig(options, 'process', '/bin/foo', '/tmp')
        process = DummyProcess(pconfig)
        process.drained = False
        process.killing = True
        process.laststop = None
        process.waitstatus = None, None
        options.pidhistory = {1: process}
        supervisord = self._makeOne(options)
        supervisord.reap(recursionguard=99)
        self.assertEqual(process.finished, (1, 1))

    def test_reap_unknown_pid(self):
        options = DummyOptions()
        options.waitpid_return = 2, 0  # pid, status
        pconfig = DummyPConfig(options, 'process', '/bin/foo', '/tmp')
        process = DummyProcess(pconfig)
        process.drained = False
        process.killing = True
        process.laststop = None
        process.waitstatus = None, None
        options.pidhistory = {1: process}
        supervisord = self._makeOne(options)
        supervisord.reap(once=True)
        self.assertEqual(process.finished, None)
        self.assertEqual(options.logger.data[0],
                         'reaped unknown pid 2 (exit status 0)')

    def test_handle_sigterm(self):
        options = DummyOptions()
        options._signal = signal.SIGTERM
        supervisord = self._makeOne(options)
        supervisord.handle_signal()
        self.assertEqual(supervisord.options.mood, SupervisorStates.SHUTDOWN)
        self.assertEqual(options.logger.data[0],
                         'received SIGTERM indicating exit request')

    def test_handle_sigint(self):
        options = DummyOptions()
        options._signal = signal.SIGINT
        supervisord = self._makeOne(options)
        supervisord.handle_signal()
        self.assertEqual(supervisord.options.mood, SupervisorStates.SHUTDOWN)
        self.assertEqual(options.logger.data[0],
                         'received SIGINT indicating exit request')

    def test_handle_sigquit(self):
        options = DummyOptions()
        options._signal = signal.SIGQUIT
        supervisord = self._makeOne(options)
        supervisord.handle_signal()
        self.assertEqual(supervisord.options.mood, SupervisorStates.SHUTDOWN)
        self.assertEqual(options.logger.data[0],
                         'received SIGQUIT indicating exit request')

    def test_handle_sighup_in_running_state(self):
        options = DummyOptions()
        options._signal = signal.SIGHUP
        supervisord = self._makeOne(options)
        self.assertEqual(supervisord.options.mood, SupervisorStates.RUNNING)
        supervisord.handle_signal()
        self.assertEqual(supervisord.options.mood,
                         SupervisorStates.RESTARTING)
        self.assertEqual(options.logger.data[0],
                         'received SIGHUP indicating restart request')

    def test_handle_sighup_in_shutdown_state(self):
        options = DummyOptions()
        options._signal = signal.SIGHUP
        supervisord = self._makeOne(options)
        supervisord.options.mood = SupervisorStates.SHUTDOWN
        self.assertEqual(supervisord.options.mood, SupervisorStates.SHUTDOWN)
        supervisord.handle_signal()
        self.assertEqual(supervisord.options.mood,
                         SupervisorStates.SHUTDOWN)  # unchanged
        self.assertEqual(options.logger.data[0],
                         'ignored SIGHUP indicating restart request '
                         '(shutdown in progress)')

    def test_handle_sigchld(self):
        options = DummyOptions()
        options._signal = signal.SIGCHLD
        supervisord = self._makeOne(options)
        supervisord.handle_signal()
        self.assertEqual(supervisord.options.mood, SupervisorStates.RUNNING)
        # supervisor.options.signame(signal.SIGCHLD) may return "SIGCLD"
        # on linux or other systems where SIGCHLD = SIGCLD.
        msgs = ('received SIGCHLD indicating a child quit',
                'received SIGCLD indicating a child quit')
        self.assertTrue(options.logger.data[0] in msgs)

    def test_handle_sigusr2(self):
        options = DummyOptions()
        options._signal = signal.SIGUSR2
        pconfig1 = DummyPConfig(options, 'process1', '/bin/foo', '/tmp')
        process1 = DummyProcess(pconfig1, state=ProcessStates.STOPPING)
        process1.delay = time.time() - 1
        supervisord = self._makeOne(options)
        pconfigs = [DummyPConfig(options, 'foo', '/bin/foo', '/tmp')]
        options.process_group_configs = DummyPGroupConfig(
            options, 'foo', pconfigs=pconfigs)
        dummypgroup = DummyProcessGroup(options)
        supervisord.process_groups = {None:dummypgroup}
        supervisord.handle_signal()
        self.assertEqual(supervisord.options.mood, SupervisorStates.RUNNING)
        self.assertEqual(options.logs_reopened, True)
        self.assertEqual(options.logger.data[0],
                         'received SIGUSR2 indicating log reopen request')
        self.assertEqual(dummypgroup.logs_reopened, True)

    def test_handle_unknown_signal(self):
        options = DummyOptions()
        options._signal = signal.SIGUSR1
        supervisord = self._makeOne(options)
        supervisord.handle_signal()
        self.assertEqual(supervisord.options.mood, SupervisorStates.RUNNING)
        self.assertEqual(options.logger.data[0],
                         'received SIGUSR1 indicating nothing')

    def test_get_state(self):
        options = DummyOptions()
        supervisord = self._makeOne(options)
        self.assertEqual(supervisord.get_state(), SupervisorStates.RUNNING)

    def test_diff_to_active_finds_groups_added(self):
        options = DummyOptions()
        supervisord = self._makeOne(options)
        pconfig = DummyPConfig(options, 'process1', '/bin/foo', '/tmp')
        group1 = DummyPGroupConfig(options, 'group1', pconfigs=[pconfig])
        # the active configuration has no groups
        # diffing should find that group1 has been added
        supervisord.options.process_group_configs = [group1]
        added, changed, removed = supervisord.diff_to_active()
        self.assertEqual(added, [group1])
        self.assertEqual(changed, [])
        self.assertEqual(removed, [])

    def test_diff_to_active_finds_groups_removed(self):
        options = DummyOptions()
        supervisord = self._makeOne(options)
        pconfig = DummyPConfig(options, 'process1', '/bin/process1', '/tmp')
        group1 = DummyPGroupConfig(options, 'group1', pconfigs=[pconfig])
        pconfig = DummyPConfig(options, 'process2', '/bin/process2', '/tmp')
        group2 = DummyPGroupConfig(options, 'group2', pconfigs=[pconfig])
        # set up supervisord with an active configuration of group1 and group2
        supervisord.options.process_group_configs = [group1, group2]
        supervisord.add_process_group(group1)
        supervisord.add_process_group(group2)
        # diffing should find that group2 has been removed
        supervisord.options.process_group_configs = [group1]
        added, changed, removed = supervisord.diff_to_active()
        self.assertEqual(added, [])
        self.assertEqual(changed, [])
        self.assertEqual(removed, [group2])

    def test_diff_to_active_changed(self):
        from supervisor.options import ProcessConfig, ProcessGroupConfig
        options = DummyOptions()
        supervisord = self._makeOne(options)

        def make_pconfig(name, command, **params):
            result = {
                'name': name, 'command': command,
                'directory': None, 'umask': None, 'priority': 999,
                'autostart': True, 'autorestart': True,
                'startsecs': 10, 'startretries': 999,
                'uid': None,
                'stdout_logfile': None, 'stdout_capture_maxbytes': 0,
                'stdout_events_enabled': False,
                'stdout_logfile_backups': 0, 'stdout_logfile_maxbytes': 0,
                'stdout_syslog': False,
                'stderr_logfile': None, 'stderr_capture_maxbytes': 0,
                'stderr_events_enabled': False,
                'stderr_logfile_backups': 0, 'stderr_logfile_maxbytes': 0,
                'stderr_syslog': False,
                'redirect_stderr': False,
                'stopsignal': None, 'stopwaitsecs': 10,
                'stopasgroup': False, 'killasgroup': False,
                'exitcodes': (0,), 'environment': None, 'serverurl': None,
            }
            result.update(params)
            return ProcessConfig(options, **result)

        def make_gconfig(name, pconfigs):
            return ProcessGroupConfig(options, name, 25, pconfigs)

        pconfig = make_pconfig('process1', 'process1', uid='new')
        group1 = make_gconfig('group1', [pconfig])
        pconfig = make_pconfig('process2', 'process2')
        group2 = make_gconfig('group2', [pconfig])
        new = [group1, group2]

        pconfig = make_pconfig('process1', 'process1', uid='old')
        group3 = make_gconfig('group1', [pconfig])
        pconfig = make_pconfig('process2', 'process2')
        group4 = make_gconfig('group2', [pconfig])
        supervisord.add_process_group(group3)
        supervisord.add_process_group(group4)

        supervisord.options.process_group_configs = new
        added, changed, removed = supervisord.diff_to_active()
        self.assertEqual(added, [])
        self.assertEqual(removed, [])
        self.assertEqual(changed, [group1])

        options = DummyOptions()
        supervisord = self._makeOne(options)
        pconfig1 = make_pconfig('process1', 'process1')
        pconfig2 = make_pconfig('process2', 'process2')
        group1 = make_gconfig('group1', [pconfig1, pconfig2])
        new = [group1]
        supervisord.add_process_group(make_gconfig('group1', [pconfig1]))
        supervisord.options.process_group_configs = new
        added, changed, removed = supervisord.diff_to_active()
        self.assertEqual(added, [])
        self.assertEqual(removed, [])
        self.assertEqual(changed, [group1])

    def test_diff_to_active_changed_eventlistener(self):
        from supervisor.events import EventTypes
        from supervisor.options import EventListenerConfig, EventListenerPoolConfig
        options = DummyOptions()
        supervisord = self._makeOne(options)

        def make_pconfig(name, command, **params):
            result = {
                'name': name, 'command': command,
                'directory': None, 'umask': None, 'priority': 999,
                'autostart': True, 'autorestart': True,
                'startsecs': 10, 'startretries': 999,
                'uid': None,
                'stdout_logfile': None, 'stdout_capture_maxbytes': 0,
                'stdout_events_enabled': False,
                'stdout_logfile_backups': 0, 'stdout_logfile_maxbytes': 0,
                'stdout_syslog': False,
                'stderr_logfile': None, 'stderr_capture_maxbytes': 0,
                'stderr_events_enabled': False,
                'stderr_logfile_backups': 0, 'stderr_logfile_maxbytes': 0,
                'stderr_syslog': False,
                'redirect_stderr': False,
                'stopsignal': None, 'stopwaitsecs': 10,
                'stopasgroup': False, 'killasgroup': False,
                'exitcodes': (0,), 'environment': None, 'serverurl': None,
            }
            result.update(params)
            return EventListenerConfig(options, **result)

        def make_econfig(*pool_event_names):
            result = []
            for pool_event_name in pool_event_names:
                result.append(getattr(EventTypes, pool_event_name, None))
            return result

        def make_gconfig(name, pconfigs, pool_events,
                         result_handler='supervisor.dispatchers:default_handler'):
            return EventListenerPoolConfig(options, name, 25, pconfigs, 10,
                                           pool_events, result_handler)

        # Test that changing an eventlistener's command is detected by
        # diff_to_active
        pconfig = make_pconfig('process1', 'process1-new')
        econfig = make_econfig("TICK_60")
        group1 = make_gconfig('group1', [pconfig], econfig)
        pconfig = make_pconfig('process2', 'process2')
        econfig = make_econfig("TICK_3600")
        group2 = make_gconfig('group2', [pconfig], econfig)
        new = [group1, group2]

        pconfig = make_pconfig('process1', 'process1-old')
        econfig = make_econfig("TICK_60")
        group3 = make_gconfig('group1', [pconfig], econfig)
        pconfig = make_pconfig('process2', 'process2')
        econfig = make_econfig("TICK_3600")
        group4 = make_gconfig('group2', [pconfig], econfig)
        supervisord.add_process_group(group3)
        supervisord.add_process_group(group4)

        supervisord.options.process_group_configs = new
        added, changed, removed = supervisord.diff_to_active()
        self.assertEqual(added, [])
        self.assertEqual(removed, [])
        self.assertEqual(changed, [group1])

        # Test that changing an eventlistener's event is detected by
        # diff_to_active
        options = DummyOptions()
        supervisord = self._makeOne(options)
        pconfig = make_pconfig('process1', 'process1')
        econfig = make_econfig("TICK_60")
        group1 = make_gconfig('group1', [pconfig], econfig)
        pconfig = make_pconfig('process2', 'process2')
        econfig = make_econfig("TICK_3600")
        group2 = make_gconfig('group2', [pconfig], econfig)
        new = [group1, group2]

        pconfig = make_pconfig('process1', 'process1')
        econfig = make_econfig("TICK_5")
        group3 = make_gconfig('group1', [pconfig], econfig)
        pconfig = make_pconfig('process2', 'process2')
        econfig = make_econfig("TICK_3600")
        group4 = make_gconfig('group2', [pconfig], econfig)
        supervisord.add_process_group(group3)
        supervisord.add_process_group(group4)

        supervisord.options.process_group_configs = new
        added, changed, removed = supervisord.diff_to_active()
        self.assertEqual(added, [])
        self.assertEqual(removed, [])
        self.assertEqual(changed, [group1])

        # Test that changing an eventlistener's result_handler is detected
        # by diff_to_active
        options = DummyOptions()
        supervisord = self._makeOne(options)
        pconfig = make_pconfig('process1', 'process1')
        econfig = make_econfig("TICK_60")
        group1 = make_gconfig('group1', [pconfig], econfig,
                              'new-result-handler')
        pconfig = make_pconfig('process2', 'process2')
        econfig = make_econfig("TICK_3600")
        group2 = make_gconfig('group2', [pconfig], econfig)
        new = [group1, group2]

        pconfig = make_pconfig('process1', 'process1')
        econfig = make_econfig("TICK_60")
        group3 = make_gconfig('group1', [pconfig], econfig,
                              'old-result-handler')
        pconfig = make_pconfig('process2', 'process2')
        econfig = make_econfig("TICK_3600")
        group4 = make_gconfig('group2', [pconfig], econfig)
        supervisord.add_process_group(group3)
        supervisord.add_process_group(group4)

        supervisord.options.process_group_configs = new
        added, changed, removed = supervisord.diff_to_active()
        self.assertEqual(added, [])
        self.assertEqual(removed, [])
        self.assertEqual(changed, [group1])

    def test_add_process_group(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo', '/tmp')
        gconfig = DummyPGroupConfig(options, 'foo', pconfigs=[pconfig])
        options.process_group_configs = [gconfig]
        supervisord = self._makeOne(options)

        self.assertEqual(supervisord.process_groups, {})

        result = supervisord.add_process_group(gconfig)
        self.assertEqual(list(supervisord.process_groups.keys()), ['foo'])
        self.assertTrue(result)

        group = supervisord.process_groups['foo']
        result = supervisord.add_process_group(gconfig)
        self.assertEqual(group, supervisord.process_groups['foo'])
        self.assertTrue(not result)

    def test_add_process_group_emits_event(self):
        from supervisor import events
        L = []
        def callback(event):
            L.append(1)
        events.subscribe(events.ProcessGroupAddedEvent, callback)
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo', '/tmp')
        gconfig = DummyPGroupConfig(options, 'foo', pconfigs=[pconfig])
        options.process_group_configs = [gconfig]
        supervisord = self._makeOne(options)
        supervisord.add_process_group(gconfig)
        options.test = True
        supervisord.runforever()
        self.assertEqual(L, [1])

    def test_remove_process_group(self):
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo', '/tmp')
        gconfig = DummyPGroupConfig(options, 'foo', pconfigs=[pconfig])
        supervisord = self._makeOne(options)

        self.assertRaises(KeyError, supervisord.remove_process_group, 'asdf')

        supervisord.add_process_group(gconfig)
        group = supervisord.process_groups['foo']
        result = supervisord.remove_process_group('foo')
        self.assertTrue(group.before_remove_called)
        self.assertEqual(supervisord.process_groups, {})
        self.assertTrue(result)

        supervisord.add_process_group(gconfig)
        supervisord.process_groups['foo'].unstopped_processes = [
            DummyProcess(None)]
        result = supervisord.remove_process_group('foo')
        self.assertEqual(list(supervisord.process_groups.keys()), ['foo'])
        self.assertTrue(not result)

    def test_remove_process_group_event(self):
        from supervisor import events
        L = []
        def callback(event):
            L.append(1)
        events.subscribe(events.ProcessGroupRemovedEvent, callback)
        options = DummyOptions()
        pconfig = DummyPConfig(options, 'foo', '/bin/foo', '/tmp')
        gconfig = DummyPGroupConfig(options, 'foo', pconfigs=[pconfig])
        options.process_group_configs = [gconfig]
        supervisord = self._makeOne(options)
        supervisord.add_process_group(gconfig)
        supervisord.process_groups['foo'].stopped_processes = [
            DummyProcess(None)]
        supervisord.remove_process_group('foo')
        options.test = True
        supervisord.runforever()
        self.assertEqual(L, [1])

    def test_runforever_emits_generic_startup_event(self):
        from supervisor import events
        L = []
        def callback(event):
            L.append(1)
        events.subscribe(events.SupervisorStateChangeEvent, callback)
        options = DummyOptions()
        supervisord = self._makeOne(options)
        options.test = True
        supervisord.runforever()
        self.assertEqual(L, [1])

    def test_runforever_emits_generic_specific_event(self):
        from supervisor import events
        L = []
        def callback(event):
            L.append(2)
        events.subscribe(events.SupervisorRunningEvent, callback)
        options = DummyOptions()
        options.test = True
        supervisord = self._makeOne(options)
        supervisord.runforever()
        self.assertEqual(L, [2])

    def test_runforever_calls_tick(self):
        options = DummyOptions()
        options.test = True
        supervisord = self._makeOne(options)
        self.assertEqual(len(supervisord.ticks), 0)
        supervisord.runforever()
        self.assertEqual(len(supervisord.ticks), 3)

    def test_runforever_poll_dispatchers(self):
        options = DummyOptions()
        options.poller.result = [6], [7, 8]
        supervisord = self._makeOne(options)
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig])
        pgroup = DummyProcessGroup(gconfig)
        readable = DummyDispatcher(readable=True)
        writable = DummyDispatcher(writable=True)
        error = DummyDispatcher(writable=True, error=OSError)
        pgroup.dispatchers = {6:readable, 7:writable, 8:error}
        supervisord.process_groups = {'foo': pgroup}
        options.test = True
        supervisord.runforever()
        self.assertEqual(pgroup.transitioned, True)
        self.assertEqual(readable.read_event_handled, True)
        self.assertEqual(writable.write_event_handled, True)
        self.assertEqual(error.error_handled, True)

    def test_runforever_select_dispatcher_exitnow_via_read(self):
        options = DummyOptions()
        options.poller.result = [6], []
        supervisord = self._makeOne(options)
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig])
        pgroup = DummyProcessGroup(gconfig)
        from supervisor.medusa import asyncore_25 as asyncore
        exitnow = DummyDispatcher(readable=True, error=asyncore.ExitNow)
        pgroup.dispatchers = {6:exitnow}
        supervisord.process_groups = {'foo': pgroup}
        options.test = True
        self.assertRaises(asyncore.ExitNow, supervisord.runforever)

    def test_runforever_select_dispatcher_exitnow_via_write(self):
        options = DummyOptions()
        options.poller.result = [], [6]
        supervisord = self._makeOne(options)
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig])
        pgroup = DummyProcessGroup(gconfig)
        from supervisor.medusa import asyncore_25 as asyncore
        exitnow = DummyDispatcher(readable=True, error=asyncore.ExitNow)
        pgroup.dispatchers = {6:exitnow}
        supervisord.process_groups = {'foo': pgroup}
        options.test = True
        self.assertRaises(asyncore.ExitNow, supervisord.runforever)

    def test_runforever_select_dispatcher_handle_error_via_read(self):
        options = DummyOptions()
        options.poller.result = [6], []
        supervisord = self._makeOne(options)
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig])
        pgroup = DummyProcessGroup(gconfig)
        notimpl = DummyDispatcher(readable=True, error=NotImplementedError)
        pgroup.dispatchers = {6:notimpl}
        supervisord.process_groups = {'foo': pgroup}
        options.test = True
        supervisord.runforever()
        self.assertEqual(notimpl.error_handled, True)

    def test_runforever_select_dispatcher_handle_error_via_write(self):
        options = DummyOptions()
        options.poller.result = [], [6]
        supervisord = self._makeOne(options)
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig])
        pgroup = DummyProcessGroup(gconfig)
        notimpl = DummyDispatcher(readable=True, error=NotImplementedError)
        pgroup.dispatchers = {6:notimpl}
        supervisord.process_groups = {'foo': pgroup}
        options.test = True
        supervisord.runforever()
        self.assertEqual(notimpl.error_handled, True)

    def test_runforever_stopping_emits_events(self):
        options = DummyOptions()
        supervisord = self._makeOne(options)
        gconfig = DummyPGroupConfig(options)
        pgroup = DummyProcessGroup(gconfig)
        supervisord.process_groups = {'foo': pgroup}
        supervisord.options.mood = SupervisorStates.SHUTDOWN
        L = []
        def callback(event):
            L.append(event)
        from supervisor import events
        events.subscribe(events.SupervisorStateChangeEvent, callback)
        from supervisor.medusa import asyncore_25 as asyncore
        options.test = True
        self.assertRaises(asyncore.ExitNow, supervisord.runforever)
        self.assertTrue(pgroup.all_stopped)
        self.assertTrue(isinstance(L[0], events.SupervisorRunningEvent))
        self.assertTrue(isinstance(L[0], events.SupervisorStateChangeEvent))
        self.assertTrue(isinstance(L[1], events.SupervisorStoppingEvent))
        self.assertTrue(isinstance(L[1], events.SupervisorStateChangeEvent))

    def test_exit(self):
        options = DummyOptions()
        supervisord = self._makeOne(options)
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig])
        pgroup = DummyProcessGroup(gconfig)
        L = []
        def callback():
            L.append(1)
        supervisord.process_groups = {'foo': pgroup}
        supervisord.options.mood = SupervisorStates.RESTARTING
        supervisord.options.test = True
        from supervisor.medusa import asyncore_25 as asyncore
        self.assertRaises(asyncore.ExitNow, supervisord.runforever)
        self.assertEqual(pgroup.all_stopped, True)

    def test_exit_delayed(self):
        options = DummyOptions()
        supervisord = self._makeOne(options)
        pconfig = DummyPConfig(options, 'foo', '/bin/foo',)
        process = DummyProcess(pconfig)
        gconfig = DummyPGroupConfig(options, pconfigs=[pconfig])
        pgroup = DummyProcessGroup(gconfig)
        pgroup.unstopped_processes = [process]
        L = []
        def callback():
            L.append(1)
        supervisord.process_groups = {'foo': pgroup}
        supervisord.options.mood = SupervisorStates.RESTARTING
        supervisord.options.test = True
        supervisord.runforever()
        self.assertNotEqual(supervisord.lastshutdownreport, 0)

    def test_getSupervisorStateDescription(self):
        from supervisor.states import getSupervisorStateDescription
        result = getSupervisorStateDescription(SupervisorStates.RUNNING)
        self.assertEqual(result, 'RUNNING')

    def test_tick(self):
        from supervisor import events
        L = []
        def callback(event):
            L.append(event)
        events.subscribe(events.TickEvent, callback)
        options = DummyOptions()
        supervisord = self._makeOne(options)

        supervisord.tick(now=0)
        self.assertEqual(supervisord.ticks[5], 0)
        self.assertEqual(supervisord.ticks[60], 0)
        self.assertEqual(supervisord.ticks[3600], 0)
        self.assertEqual(len(L), 0)

        supervisord.tick(now=6)
        self.assertEqual(supervisord.ticks[5], 5)
        self.assertEqual(supervisord.ticks[60], 0)
        self.assertEqual(supervisord.ticks[3600], 0)
        self.assertEqual(len(L), 1)
        self.assertEqual(L[-1].__class__, events.Tick5Event)

        supervisord.tick(now=61)
        self.assertEqual(supervisord.ticks[5], 60)
        self.assertEqual(supervisord.ticks[60], 60)
        self.assertEqual(supervisord.ticks[3600], 0)
        self.assertEqual(len(L), 3)
        self.assertEqual(L[-1].__class__, events.Tick60Event)

        supervisord.tick(now=3601)
        self.assertEqual(supervisord.ticks[5], 3600)
        self.assertEqual(supervisord.ticks[60], 3600)
        self.assertEqual(supervisord.ticks[3600], 3600)
        self.assertEqual(len(L), 6)
        self.assertEqual(L[-1].__class__, events.Tick3600Event)


def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

# ---------------------------------------------------------------------------
# supervisor-4.2.5/supervisor/tests/test_templating.py
# ---------------------------------------------------------------------------

# This file was originally based on the meld3 package version 2.0.0
#
# (https://pypi.org/project/meld3/2.0.0/).  The meld3 package is not
# called out separately in Supervisor's license or copyright files
# because meld3 had the same authors, copyright, and license as
# Supervisor at the time this file was bundled with Supervisor.

import unittest
import re
import sys

_SIMPLE_XML = r"""
Name Description
"""

_SIMPLE_XHTML = r"""
Hello!
"""

_EMPTYTAGS_HTML = """

    """ _BOOLEANATTRS_XHTML= """ """ _ENTITIES_XHTML= r"""

     

    """ _COMPLEX_XHTML = r""" This will be escaped in html output: &
    Name Description
    """ _NVU_HTML = """ test doc Oh yeah...

    Yup More Stuff Oh Yeah
    1 2 3 4

    And an image...

    dumb """ _FILLMELDFORM_HTML = """\ Emergency Contacts
    Emergency Contacts
    Title
    First Name
    Middle Name
    Last Name
    Suffix
    Address 1
    Address 2
    City
    State
    ZIP
    Home Phone
    Cell/Mobile Phone
    Email Address
    Over 18? (Checkbox Boolean)
    Mail OK? (Checkbox Ternary)
    Favorite Color (Radio) Red Green Blue

    Return to list

    """ class MeldAPITests(unittest.TestCase): def _makeElement(self, string): from supervisor.templating import parse_xmlstring return parse_xmlstring(string) def _makeElementFromHTML(self, string): from supervisor.templating import parse_htmlstring return parse_htmlstring(string) def test_findmeld(self): root = self._makeElement(_SIMPLE_XML) item = root.findmeld('item') self.assertEqual(item.tag, 'item') name = root.findmeld('name') self.assertEqual(name.text, 'Name') def test_findmeld_default(self): root = self._makeElement(_SIMPLE_XML) item = root.findmeld('item') self.assertEqual(item.tag, 'item') unknown = root.findmeld('unknown', 'foo') self.assertEqual(unknown, 'foo') self.assertEqual(root.findmeld('unknown'), None) def test_repeat_nochild(self): root = self._makeElement(_SIMPLE_XML) item = root.findmeld('item') self.assertEqual(item.tag, 'item') data = [{'name':'Jeff Buckley', 'description':'ethereal'}, {'name':'Slipknot', 'description':'heavy'}] for element, d in item.repeat(data): element.findmeld('name').text = d['name'] element.findmeld('description').text = d['description'] self.assertEqual(item[0].text, 'Jeff Buckley') self.assertEqual(item[1].text, 'ethereal') def test_repeat_child(self): root = self._makeElement(_SIMPLE_XML) list = root.findmeld('list') self.assertEqual(list.tag, 'list') data = [{'name':'Jeff Buckley', 'description':'ethereal'}, {'name':'Slipknot', 'description':'heavy'}] for element, d in list.repeat(data, 'item'): element.findmeld('name').text = d['name'] element.findmeld('description').text = d['description'] self.assertEqual(list[0][0].text, 'Jeff Buckley') self.assertEqual(list[0][1].text, 'ethereal') self.assertEqual(list[1][0].text, 'Slipknot') self.assertEqual(list[1][1].text, 'heavy') def test_mod(self): root = self._makeElement(_SIMPLE_XML) root % {'description':'foo', 'name':'bar'} name = root.findmeld('name') self.assertEqual(name.text, 'bar') desc = root.findmeld('description') self.assertEqual(desc.text, 'foo') def 
test_fillmelds(self): root = self._makeElement(_SIMPLE_XML) unfilled = root.fillmelds(**{'description':'foo', 'jammyjam':'a'}) desc = root.findmeld('description') self.assertEqual(desc.text, 'foo') self.assertEqual(unfilled, ['jammyjam']) def test_fillmeldhtmlform(self): data = [ {'honorific':'Mr.', 'firstname':'Chris', 'middlename':'Phillips', 'lastname':'McDonough', 'address1':'802 Caroline St.', 'address2':'Apt. 2B', 'city':'Fredericksburg', 'state': 'VA', 'zip':'22401', 'homephone':'555-1212', 'cellphone':'555-1313', 'email':'user@example.com', 'suffix':'Sr.', 'over18':True, 'mailok:inputgroup':'true', 'favorite_color:inputgroup':'Green'}, {'honorific':'Mr.', 'firstname':'Fred', 'middlename':'', 'lastname':'Rogers', 'address1':'1 Imaginary Lane', 'address2':'Apt. 3A', 'city':'Never Never Land', 'state': 'LA', 'zip':'00001', 'homephone':'555-1111', 'cellphone':'555-4444', 'email':'fred@neighborhood.com', 'suffix':'Jr.', 'over18':False, 'mailok:inputgroup':'false','favorite_color:inputgroup':'Yellow',}, {'firstname':'Fred', 'middlename':'', 'lastname':'Rogers', 'address1':'1 Imaginary Lane', 'address2':'Apt. 
3A', 'city':'Never Never Land', 'state': 'LA', 'zip':'00001', 'homephone':'555-1111', 'cellphone':'555-4444', 'email':'fred@neighborhood.com', 'suffix':'IV', 'over18':False, 'mailok:inputgroup':'false', 'favorite_color:inputgroup':'Blue', 'notthere':1,}, ] root = self._makeElementFromHTML(_FILLMELDFORM_HTML) clone = root.clone() unfilled = clone.fillmeldhtmlform(**data[0]) self.assertEqual(unfilled, []) self.assertEqual(clone.findmeld('honorific').attrib['value'], 'Mr.') self.assertEqual(clone.findmeld('firstname').attrib['value'], 'Chris') middlename = clone.findmeld('middlename') self.assertEqual(middlename.attrib['value'], 'Phillips') suffix = clone.findmeld('suffix') self.assertEqual(suffix[1].attrib['selected'], 'selected') self.assertEqual(clone.findmeld('over18').attrib['checked'], 'checked') mailok = clone.findmeld('mailok:inputgroup') self.assertEqual(mailok[1].attrib['checked'], 'checked') favoritecolor = clone.findmeld('favorite_color:inputgroup') self.assertEqual(favoritecolor[1].attrib['checked'], 'checked') clone = root.clone() unfilled = clone.fillmeldhtmlform(**data[1]) self.assertEqual(unfilled, ['favorite_color:inputgroup']) self.assertEqual(clone.findmeld('over18').attrib.get('checked'), None) mailok = clone.findmeld('mailok:inputgroup') self.assertEqual(mailok[2].attrib['checked'], 'checked') self.assertEqual(mailok[1].attrib.get('checked'), None) clone = root.clone() unfilled = clone.fillmeldhtmlform(**data[2]) self.assertEqual(sorted(unfilled), ['notthere', 'suffix']) self.assertEqual(clone.findmeld('honorific').text, None) favoritecolor = clone.findmeld('favorite_color:inputgroup') self.assertEqual(favoritecolor[2].attrib['checked'], 'checked') self.assertEqual(favoritecolor[1].attrib.get('checked'), None) def test_replace_removes_all_elements(self): from supervisor.templating import Replace root = self._makeElement(_SIMPLE_XML) L = root.findmeld('list') L.replace('this is a textual replacement') R = root[0] self.assertEqual(R.tag, Replace) 
self.assertEqual(len(root.getchildren()), 1) def test_replace_replaces_the_right_element(self): from supervisor.templating import Replace root = self._makeElement(_SIMPLE_XML) D = root.findmeld('description') D.replace('this is a textual replacement') self.assertEqual(len(root.getchildren()), 1) L = root[0] self.assertEqual(L.tag, 'list') self.assertEqual(len(L.getchildren()), 1) I = L[0] self.assertEqual(I.tag, 'item') self.assertEqual(len(I.getchildren()), 2) N = I[0] self.assertEqual(N.tag, 'name') self.assertEqual(len(N.getchildren()), 0) D = I[1] self.assertEqual(D.tag, Replace) self.assertEqual(D.text, 'this is a textual replacement') self.assertEqual(len(D.getchildren()), 0) self.assertEqual(D.structure, False) def test_content(self): from supervisor.templating import Replace root = self._makeElement(_SIMPLE_XML) D = root.findmeld('description') D.content('this is a textual replacement') self.assertEqual(len(root.getchildren()), 1) L = root[0] self.assertEqual(L.tag, 'list') self.assertEqual(len(L.getchildren()), 1) I = L[0] self.assertEqual(I.tag, 'item') self.assertEqual(len(I.getchildren()), 2) N = I[0] self.assertEqual(N.tag, 'name') self.assertEqual(len(N.getchildren()), 0) D = I[1] self.assertEqual(D.tag, 'description') self.assertEqual(D.text, None) self.assertEqual(len(D.getchildren()), 1) T = D[0] self.assertEqual(T.tag, Replace) self.assertEqual(T.text, 'this is a textual replacement') self.assertEqual(T.structure, False) def test_attributes(self): from supervisor.templating import _MELD_ID root = self._makeElement(_COMPLEX_XHTML) D = root.findmeld('form1') D.attributes(foo='bar', baz='1', g='2', action='#') self.assertEqual(D.attrib, { 'foo':'bar', 'baz':'1', 'g':'2', 'method':'POST', 'action':'#', _MELD_ID: 'form1'}) def test_attributes_unicode(self): from supervisor.templating import _MELD_ID from supervisor.compat import as_string root = self._makeElement(_COMPLEX_XHTML) D = root.findmeld('form1') D.attributes(foo=as_string('bar', 
encoding='latin1'), action=as_string('#', encoding='latin1')) self.assertEqual(D.attrib, { 'foo':as_string('bar', encoding='latin1'), 'method':'POST', 'action': as_string('#', encoding='latin1'), _MELD_ID: 'form1'}) def test_attributes_nonstringtype_raises(self): root = self._makeElement('') self.assertRaises(ValueError, root.attributes, foo=1) class MeldElementInterfaceTests(unittest.TestCase): def _getTargetClass(self): from supervisor.templating import _MeldElementInterface return _MeldElementInterface def _makeOne(self, *arg, **kw): klass = self._getTargetClass() return klass(*arg, **kw) def test_repeat(self): root = self._makeOne('root', {}) from supervisor.templating import _MELD_ID item = self._makeOne('item', {_MELD_ID:'item'}) record = self._makeOne('record', {_MELD_ID:'record'}) name = self._makeOne('name', {_MELD_ID:'name'}) description = self._makeOne('description', {_MELD_ID:'description'}) record.append(name) record.append(description) item.append(record) root.append(item) data = [{'name':'Jeff Buckley', 'description':'ethereal'}, {'name':'Slipknot', 'description':'heavy'}] for element, d in item.repeat(data): element.findmeld('name').text = d['name'] element.findmeld('description').text = d['description'] self.assertEqual(len(root), 2) item1 = root[0] self.assertEqual(len(item1), 1) record1 = item1[0] self.assertEqual(len(record1), 2) name1 = record1[0] desc1 = record1[1] self.assertEqual(name1.text, 'Jeff Buckley') self.assertEqual(desc1.text, 'ethereal') item2 = root[1] self.assertEqual(len(item2), 1) record2 = item2[0] self.assertEqual(len(record2), 2) name2 = record2[0] desc2 = record2[1] self.assertEqual(name2.text, 'Slipknot') self.assertEqual(desc2.text, 'heavy') def test_content_simple_nostructure(self): el = self._makeOne('div', {'id':'thediv'}) el.content('hello') self.assertEqual(len(el._children), 1) replacenode = el._children[0] self.assertEqual(replacenode.parent, el) self.assertEqual(replacenode.text, 'hello') 
self.assertEqual(replacenode.structure, False) from supervisor.templating import Replace self.assertEqual(replacenode.tag, Replace) def test_content_simple_structure(self): el = self._makeOne('div', {'id':'thediv'}) el.content('hello', structure=True) self.assertEqual(len(el._children), 1) replacenode = el._children[0] self.assertEqual(replacenode.parent, el) self.assertEqual(replacenode.text, 'hello') self.assertEqual(replacenode.structure, True) from supervisor.templating import Replace self.assertEqual(replacenode.tag, Replace) def test_findmeld_simple(self): from supervisor.templating import _MELD_ID el = self._makeOne('div', {_MELD_ID:'thediv'}) self.assertEqual(el.findmeld('thediv'), el) def test_findmeld_simple_oneleveldown(self): from supervisor.templating import _MELD_ID el = self._makeOne('div', {_MELD_ID:'thediv'}) span = self._makeOne('span', {_MELD_ID:'thespan'}) el.append(span) self.assertEqual(el.findmeld('thespan'), span) def test_findmeld_simple_twolevelsdown(self): from supervisor.templating import _MELD_ID el = self._makeOne('div', {_MELD_ID:'thediv'}) span = self._makeOne('span', {_MELD_ID:'thespan'}) a = self._makeOne('a', {_MELD_ID:'thea'}) span.append(a) el.append(span) self.assertEqual(el.findmeld('thea'), a) def test_ctor(self): iface = self._makeOne('div', {'id':'thediv'}) self.assertEqual(iface.parent, None) self.assertEqual(iface.tag, 'div') self.assertEqual(iface.attrib, {'id':'thediv'}) def test_getiterator_simple(self): div = self._makeOne('div', {'id':'thediv'}) iterator = div.getiterator() self.assertEqual(len(iterator), 1) self.assertEqual(iterator[0], div) def test_getiterator(self): div = self._makeOne('div', {'id':'thediv'}) span = self._makeOne('span', {}) span2 = self._makeOne('span', {'id':'2'}) span3 = self._makeOne('span3', {'id':'3'}) span3.text = 'abc' span3.tail = ' ' div.append(span) span.append(span2) span2.append(span3) it = div.getiterator() self.assertEqual(len(it), 4) self.assertEqual(it[0], div) 
self.assertEqual(it[1], span) self.assertEqual(it[2], span2) self.assertEqual(it[3], span3) def test_getiterator_tag_ignored(self): div = self._makeOne('div', {'id':'thediv'}) span = self._makeOne('span', {}) span2 = self._makeOne('span', {'id':'2'}) span3 = self._makeOne('span3', {'id':'3'}) span3.text = 'abc' span3.tail = ' ' div.append(span) span.append(span2) span2.append(span3) it = div.getiterator(tag='div') self.assertEqual(len(it), 4) self.assertEqual(it[0], div) self.assertEqual(it[1], span) self.assertEqual(it[2], span2) self.assertEqual(it[3], span3) def test_append(self): div = self._makeOne('div', {'id':'thediv'}) span = self._makeOne('span', {}) div.append(span) self.assertEqual(div[0].tag, 'span') self.assertEqual(span.parent, div) def test__setitem__(self): div = self._makeOne('div', {'id':'thediv'}) span = self._makeOne('span', {}) span2 = self._makeOne('span', {'id':'2'}) div.append(span) div[0] = span2 self.assertEqual(div[0].tag, 'span') self.assertEqual(div[0].attrib, {'id':'2'}) self.assertEqual(div[0].parent, div) def test_insert(self): div = self._makeOne('div', {'id':'thediv'}) span = self._makeOne('span', {}) span2 = self._makeOne('span', {'id':'2'}) div.append(span) div.insert(0, span2) self.assertEqual(div[0].tag, 'span') self.assertEqual(div[0].attrib, {'id':'2'}) self.assertEqual(div[0].parent, div) self.assertEqual(div[1].tag, 'span') self.assertEqual(div[1].attrib, {}) self.assertEqual(div[1].parent, div) def test_clone_simple(self): div = self._makeOne('div', {'id':'thediv'}) div.text = 'abc' div.tail = ' ' span = self._makeOne('span', {}) div.append(span) div.clone() def test_clone(self): div = self._makeOne('div', {'id':'thediv'}) span = self._makeOne('span', {}) span2 = self._makeOne('span', {'id':'2'}) span3 = self._makeOne('span3', {'id':'3'}) span3.text = 'abc' span3.tail = ' ' div.append(span) span.append(span2) span2.append(span3) div2 = div.clone() self.assertEqual(div.tag, div2.tag) self.assertEqual(div.attrib, 
div2.attrib) self.assertEqual(div[0].tag, div2[0].tag) self.assertEqual(div[0].attrib, div2[0].attrib) self.assertEqual(div[0][0].tag, div2[0][0].tag) self.assertEqual(div[0][0].attrib, div2[0][0].attrib) self.assertEqual(div[0][0][0].tag, div2[0][0][0].tag) self.assertEqual(div[0][0][0].attrib, div2[0][0][0].attrib) self.assertEqual(div[0][0][0].text, div2[0][0][0].text) self.assertEqual(div[0][0][0].tail, div2[0][0][0].tail) self.assertNotEqual(id(div), id(div2)) self.assertNotEqual(id(div[0]), id(div2[0])) self.assertNotEqual(id(div[0][0]), id(div2[0][0])) self.assertNotEqual(id(div[0][0][0]), id(div2[0][0][0])) def test_deparent_noparent(self): div = self._makeOne('div', {}) self.assertEqual(div.parent, None) div.deparent() self.assertEqual(div.parent, None) def test_deparent_withparent(self): parent = self._makeOne('parent', {}) self.assertEqual(parent.parent, None) child = self._makeOne('child', {}) parent.append(child) self.assertEqual(parent.parent, None) self.assertEqual(child.parent, parent) self.assertEqual(parent[0], child) child.deparent() self.assertEqual(child.parent, None) self.assertRaises(IndexError, parent.__getitem__, 0) def test_setslice(self): parent = self._makeOne('parent', {}) child1 = self._makeOne('child1', {}) child2 = self._makeOne('child2', {}) child3 = self._makeOne('child3', {}) children = (child1, child2, child3) parent[0:2] = children self.assertEqual(child1.parent, parent) self.assertEqual(child2.parent, parent) self.assertEqual(child3.parent, parent) self.assertEqual(parent._children, list(children)) def test_delslice(self): parent = self._makeOne('parent', {}) child1 = self._makeOne('child1', {}) child2 = self._makeOne('child2', {}) child3 = self._makeOne('child3', {}) children = (child1, child2, child3) parent[0:2] = children del parent[0:2] self.assertEqual(child1.parent, None) self.assertEqual(child2.parent, None) self.assertEqual(child3.parent, parent) self.assertEqual(len(parent._children), 1) def test_remove(self): parent 
= self._makeOne('parent', {}) child1 = self._makeOne('child1', {}) parent.append(child1) parent.remove(child1) self.assertEqual(child1.parent, None) self.assertEqual(len(parent._children), 0) def test_lineage(self): from supervisor.templating import _MELD_ID div1 = self._makeOne('div', {_MELD_ID:'div1'}) span1 = self._makeOne('span', {_MELD_ID:'span1'}) span2 = self._makeOne('span', {_MELD_ID:'span2'}) span3 = self._makeOne('span', {_MELD_ID:'span3'}) span4 = self._makeOne('span', {_MELD_ID:'span4'}) span5 = self._makeOne('span', {_MELD_ID:'span5'}) span6 = self._makeOne('span', {_MELD_ID:'span6'}) unknown = self._makeOne('span', {}) div2 = self._makeOne('div2', {_MELD_ID:'div2'}) div1.append(span1) span1.append(span2) span2.append(span3) span3.append(unknown) unknown.append(span4) span4.append(span5) span5.append(span6) div1.append(div2) def ids(L): return [ x.meldid() for x in L ] self.assertEqual(ids(div1.lineage()), ['div1']) self.assertEqual(ids(span1.lineage()), ['span1', 'div1']) self.assertEqual(ids(span2.lineage()), ['span2', 'span1', 'div1']) self.assertEqual(ids(span3.lineage()), ['span3', 'span2', 'span1', 'div1']) self.assertEqual(ids(unknown.lineage()), [None, 'span3', 'span2', 'span1', 'div1']) self.assertEqual(ids(span4.lineage()), ['span4', None, 'span3', 'span2', 'span1','div1']) self.assertEqual(ids(span5.lineage()), ['span5', 'span4', None, 'span3', 'span2', 'span1','div1']) self.assertEqual(ids(span6.lineage()), ['span6', 'span5', 'span4', None,'span3', 'span2', 'span1','div1']) self.assertEqual(ids(div2.lineage()), ['div2', 'div1']) def test_shortrepr(self): from supervisor.compat import as_bytes div = self._makeOne('div', {'id':'div1'}) span = self._makeOne('span', {}) span2 = self._makeOne('span', {'id':'2'}) div2 = self._makeOne('div2', {'id':'div2'}) div.append(span) span.append(span2) div.append(div2) r = div.shortrepr() self.assertEqual(r, as_bytes('
    ' '
    ', encoding='latin1')) def test_shortrepr2(self): from supervisor.templating import parse_xmlstring from supervisor.compat import as_bytes root = parse_xmlstring(_COMPLEX_XHTML) r = root.shortrepr() self.assertEqual(r, as_bytes('\n' ' \n' ' \n' ' [...]\n\n' ' \n' ' [...]\n' '', encoding='latin1')) def test_diffmeld1(self): from supervisor.templating import parse_xmlstring from supervisor.templating import _MELD_ID root = parse_xmlstring(_COMPLEX_XHTML) clone = root.clone() div = self._makeOne('div', {_MELD_ID:'newdiv'}) clone.append(div) tr = clone.findmeld('tr') tr.deparent() title = clone.findmeld('title') title.deparent() clone.append(title) # unreduced diff = root.diffmeld(clone) changes = diff['unreduced'] addedtags = [ x.attrib[_MELD_ID] for x in changes['added'] ] removedtags = [x.attrib[_MELD_ID] for x in changes['removed'] ] movedtags = [ x.attrib[_MELD_ID] for x in changes['moved'] ] addedtags.sort() removedtags.sort() movedtags.sort() self.assertEqual(addedtags,['newdiv']) self.assertEqual(removedtags,['td1', 'td2', 'tr']) self.assertEqual(movedtags, ['title']) # reduced changes = diff['reduced'] addedtags = [ x.attrib[_MELD_ID] for x in changes['added'] ] removedtags = [x.attrib[_MELD_ID] for x in changes['removed'] ] movedtags = [ x.attrib[_MELD_ID] for x in changes['moved'] ] addedtags.sort() removedtags.sort() movedtags.sort() self.assertEqual(addedtags,['newdiv']) self.assertEqual(removedtags,['tr']) self.assertEqual(movedtags, ['title']) def test_diffmeld2(self): source = """ """ target = """ """ from supervisor.templating import parse_htmlstring source_root = parse_htmlstring(source) target_root = parse_htmlstring(target) changes = source_root.diffmeld(target_root) # unreduced actual = [x.meldid() for x in changes['unreduced']['moved']] expected = ['b'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['unreduced']['added']] expected = [] self.assertEqual(expected, actual) actual = [x.meldid() for x in 
changes['unreduced']['removed']] expected = [] self.assertEqual(expected, actual) # reduced actual = [x.meldid() for x in changes['reduced']['moved']] expected = ['b'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['reduced']['added']] expected = [] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['reduced']['removed']] expected = [] self.assertEqual(expected, actual) def test_diffmeld3(self): source = """ """ target = """ """ from supervisor.templating import parse_htmlstring source_root = parse_htmlstring(source) target_root = parse_htmlstring(target) changes = source_root.diffmeld(target_root) # unreduced actual = [x.meldid() for x in changes['unreduced']['moved']] expected = ['b', 'c'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['unreduced']['added']] expected = ['d', 'e'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['unreduced']['removed']] expected = ['z', 'y'] self.assertEqual(expected, actual) # reduced actual = [x.meldid() for x in changes['reduced']['moved']] expected = ['b'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['reduced']['added']] expected = ['d'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['reduced']['removed']] expected = ['z'] self.assertEqual(expected, actual) def test_diffmeld4(self): source = """ """ target = """

    """ from supervisor.templating import parse_htmlstring source_root = parse_htmlstring(source) target_root = parse_htmlstring(target) changes = source_root.diffmeld(target_root) # unreduced actual = [x.meldid() for x in changes['unreduced']['moved']] expected = ['a', 'b'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['unreduced']['added']] expected = ['m', 'n'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['unreduced']['removed']] expected = ['c', 'd', 'z', 'y'] self.assertEqual(expected, actual) # reduced actual = [x.meldid() for x in changes['reduced']['moved']] expected = ['a'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['reduced']['added']] expected = ['m'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['reduced']['removed']] expected = ['c', 'z'] self.assertEqual(expected, actual) def test_diffmeld5(self): source = """ """ target = """

    """ from supervisor.templating import parse_htmlstring source_root = parse_htmlstring(source) target_root = parse_htmlstring(target) changes = source_root.diffmeld(target_root) # unreduced actual = [x.meldid() for x in changes['unreduced']['moved']] expected = ['a', 'b', 'c', 'd'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['unreduced']['added']] expected = [] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['unreduced']['removed']] expected = [] self.assertEqual(expected, actual) # reduced actual = [x.meldid() for x in changes['reduced']['moved']] expected = ['a', 'c'] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['reduced']['added']] expected = [] self.assertEqual(expected, actual) actual = [x.meldid() for x in changes['reduced']['removed']] expected = [] self.assertEqual(expected, actual) class ParserTests(unittest.TestCase): def _parse(self, *args): from supervisor.templating import parse_xmlstring root = parse_xmlstring(*args) return root def _parse_html(self, *args): from supervisor.templating import parse_htmlstring root = parse_htmlstring(*args) return root def test_parse_simple_xml(self): from supervisor.templating import _MELD_ID root = self._parse(_SIMPLE_XML) self.assertEqual(root.tag, 'root') self.assertEqual(root.parent, None) l1st = root[0] self.assertEqual(l1st.tag, 'list') self.assertEqual(l1st.parent, root) self.assertEqual(l1st.attrib[_MELD_ID], 'list') item = l1st[0] self.assertEqual(item.tag, 'item') self.assertEqual(item.parent, l1st) self.assertEqual(item.attrib[_MELD_ID], 'item') name = item[0] description = item[1] self.assertEqual(name.tag, 'name') self.assertEqual(name.parent, item) self.assertEqual(name.attrib[_MELD_ID], 'name') self.assertEqual(description.tag, 'description') self.assertEqual(description.parent, item) self.assertEqual(description.attrib[_MELD_ID], 'description') def test_parse_simple_xhtml(self): xhtml_ns = 
'{http://www.w3.org/1999/xhtml}%s' from supervisor.templating import _MELD_ID root = self._parse(_SIMPLE_XHTML) self.assertEqual(root.tag, xhtml_ns % 'html') self.assertEqual(root.attrib, {}) self.assertEqual(root.parent, None) body = root[0] self.assertEqual(body.tag, xhtml_ns % 'body') self.assertEqual(body.attrib[_MELD_ID], 'body') self.assertEqual(body.parent, root) def test_parse_complex_xhtml(self): xhtml_ns = '{http://www.w3.org/1999/xhtml}%s' from supervisor.templating import _MELD_ID root = self._parse(_COMPLEX_XHTML) self.assertEqual(root.tag, xhtml_ns % 'html') self.assertEqual(root.attrib, {}) self.assertEqual(root.parent, None) head = root[0] self.assertEqual(head.tag, xhtml_ns % 'head') self.assertEqual(head.attrib, {}) self.assertEqual(head.parent, root) meta = head[0] self.assertEqual(meta.tag, xhtml_ns % 'meta') self.assertEqual(meta.attrib['content'], 'text/html; charset=ISO-8859-1') self.assertEqual(meta.parent, head) title = head[1] self.assertEqual(title.tag, xhtml_ns % 'title') self.assertEqual(title.attrib[_MELD_ID], 'title') self.assertEqual(title.parent, head) body = root[2] self.assertEqual(body.tag, xhtml_ns % 'body') self.assertEqual(body.attrib, {}) self.assertEqual(body.parent, root) div1 = body[0] self.assertEqual(div1.tag, xhtml_ns % 'div') self.assertEqual(div1.attrib, {'{http://foo/bar}baz': 'slab'}) self.assertEqual(div1.parent, body) div2 = body[1] self.assertEqual(div2.tag, xhtml_ns % 'div') self.assertEqual(div2.attrib[_MELD_ID], 'content_well') self.assertEqual(div2.parent, body) form = div2[0] self.assertEqual(form.tag, xhtml_ns % 'form') self.assertEqual(form.attrib[_MELD_ID], 'form1') self.assertEqual(form.attrib['action'], '.') self.assertEqual(form.attrib['method'], 'POST') self.assertEqual(form.parent, div2) img = form[0] self.assertEqual(img.tag, xhtml_ns % 'img') self.assertEqual(img.parent, form) table = form[1] self.assertEqual(table.tag, xhtml_ns % 'table') self.assertEqual(table.attrib[_MELD_ID], 'table1') 
self.assertEqual(table.attrib['border'], '0') self.assertEqual(table.parent, form) tbody = table[0] self.assertEqual(tbody.tag, xhtml_ns % 'tbody') self.assertEqual(tbody.attrib[_MELD_ID], 'tbody') self.assertEqual(tbody.parent, table) tr = tbody[0] self.assertEqual(tr.tag, xhtml_ns % 'tr') self.assertEqual(tr.attrib[_MELD_ID], 'tr') self.assertEqual(tr.attrib['class'], 'foo') self.assertEqual(tr.parent, tbody) td1 = tr[0] self.assertEqual(td1.tag, xhtml_ns % 'td') self.assertEqual(td1.attrib[_MELD_ID], 'td1') self.assertEqual(td1.parent, tr) td2 = tr[1] self.assertEqual(td2.tag, xhtml_ns % 'td') self.assertEqual(td2.attrib[_MELD_ID], 'td2') self.assertEqual(td2.parent, tr) def test_nvu_html(self): from supervisor.templating import _MELD_ID from supervisor.templating import Comment root = self._parse_html(_NVU_HTML) self.assertEqual(root.tag, 'html') self.assertEqual(root.attrib, {}) self.assertEqual(root.parent, None) head = root[0] self.assertEqual(head.tag, 'head') self.assertEqual(head.attrib, {}) self.assertEqual(head.parent, root) meta = head[0] self.assertEqual(meta.tag, 'meta') self.assertEqual(meta.attrib['content'], 'text/html; charset=ISO-8859-1') title = head[1] self.assertEqual(title.tag, 'title') self.assertEqual(title.attrib[_MELD_ID], 'title') self.assertEqual(title.parent, head) body = root[1] self.assertEqual(body.tag, 'body') self.assertEqual(body.attrib, {}) self.assertEqual(body.parent, root) comment = body[0] self.assertEqual(comment.tag, Comment) table = body[3] self.assertEqual(table.tag, 'table') self.assertEqual(table.attrib, {'style': 'text-align: left; width: 100px;', 'border':'1', 'cellpadding':'2', 'cellspacing':'2'}) self.assertEqual(table.parent, body) href = body[5] self.assertEqual(href.tag, 'a') img = body[8] self.assertEqual(img.tag, 'img') def test_dupe_meldids_fails_parse_xml(self): meld_ns = "https://github.com/Supervisor/supervisor" repeated = ('' '' % meld_ns) self.assertRaises(ValueError, self._parse, repeated) def 
test_dupe_meldids_fails_parse_html(self): meld_ns = "https://github.com/Supervisor/supervisor" repeated = ('' '' % meld_ns) self.assertRaises(ValueError, self._parse_html, repeated) class UtilTests(unittest.TestCase): def test_insert_xhtml_doctype(self): from supervisor.templating import insert_doctype orig = '' actual = insert_doctype(orig) expected = '' self.assertEqual(actual, expected) def test_insert_doctype_after_xmldecl(self): from supervisor.templating import insert_doctype orig = '' actual = insert_doctype(orig) expected = '' self.assertEqual(actual, expected) def test_insert_meld_ns_decl(self): from supervisor.templating import insert_meld_ns_decl orig = '' actual = insert_meld_ns_decl(orig) expected = '' self.assertEqual(actual, expected) def test_prefeed_preserves_existing_meld_ns(self): from supervisor.templating import prefeed orig = '' actual = prefeed(orig) expected = '' self.assertEqual(actual, expected) def test_prefeed_preserves_existing_doctype(self): from supervisor.templating import prefeed orig = '' actual = prefeed(orig) self.assertEqual(actual, orig) class WriterTests(unittest.TestCase): def _parse(self, xml): from supervisor.templating import parse_xmlstring root = parse_xmlstring(xml) return root def _parse_html(self, xml): from supervisor.templating import parse_htmlstring root = parse_htmlstring(xml) return root def _write(self, fn, **kw): try: from io import BytesIO except: # python 2.5 from StringIO import StringIO as BytesIO out = BytesIO() fn(out, **kw) out.seek(0) actual = out.read() return actual def _write_xml(self, node, **kw): return self._write(node.write_xml, **kw) def _write_html(self, node, **kw): return self._write(node.write_html, **kw) def _write_xhtml(self, node, **kw): return self._write(node.write_xhtml, **kw) def assertNormalizedXMLEqual(self, a, b): from supervisor.compat import as_string a = normalize_xml(as_string(a, encoding='latin1')) b = normalize_xml(as_string(b, encoding='latin1')) self.assertEqual(a, b) def 
assertNormalizedHTMLEqual(self, a, b): from supervisor.compat import as_string a = normalize_xml(as_string(a, encoding='latin1')) b = normalize_xml(as_string(b, encoding='latin1')) self.assertEqual(a, b) def test_write_simple_xml(self): root = self._parse(_SIMPLE_XML) actual = self._write_xml(root) expected = """ Name Description """ self.assertNormalizedXMLEqual(actual, expected) for el, data in root.findmeld('item').repeat(((1,2),)): el.findmeld('name').text = str(data[0]) el.findmeld('description').text = str(data[1]) actual = self._write_xml(root) expected = """ 1 2 """ self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xhtml(self): root = self._parse(_SIMPLE_XHTML) actual = self._write_xhtml(root) expected = """Hello!""" self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xhtml_as_html(self): root = self._parse(_SIMPLE_XHTML) actual = self._write_html(root) expected = """ Hello! """ self.assertNormalizedHTMLEqual(actual, expected) def test_write_complex_xhtml_as_html(self): root = self._parse(_COMPLEX_XHTML) actual = self._write_html(root) expected = """ This will be escaped in html output: &
    Name Description
    """ self.assertNormalizedHTMLEqual(actual, expected) def test_write_complex_xhtml_as_xhtml(self): # I'm not entirely sure if the cdata "script" quoting in this # test is entirely correct for XHTML. Ryan Tomayko suggests # that escaped entities are handled properly in script tags by # XML-aware browsers at # http://sourceforge.net/mailarchive/message.php?msg_id=10835582 # but I haven't tested it at all. ZPT does not seem to do # this; it outputs unescaped data. root = self._parse(_COMPLEX_XHTML) actual = self._write_xhtml(root) expected = """ This will be escaped in html output: &
    Name Description
    """ self.assertNormalizedXMLEqual(actual, expected) def test_write_emptytags_html(self): from supervisor.compat import as_string root = self._parse(_EMPTYTAGS_HTML) actual = self._write_html(root) expected = """

    """ self.assertEqual(as_string(actual, encoding='latin1'), expected) def test_write_booleanattrs_xhtml_as_html(self): root = self._parse(_BOOLEANATTRS_XHTML) actual = self._write_html(root) expected = """ """ self.assertNormalizedHTMLEqual(actual, expected) def test_write_simple_xhtml_pipeline(self): root = self._parse(_SIMPLE_XHTML) actual = self._write_xhtml(root, pipeline=True) expected = """Hello!""" self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xml_pipeline(self): root = self._parse(_SIMPLE_XML) actual = self._write_xml(root, pipeline=True) expected = """ Name Description """ self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xml_override_encoding(self): root = self._parse(_SIMPLE_XML) actual = self._write_xml(root, encoding="latin-1") expected = """ Name Description """ self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xml_as_fragment(self): root = self._parse(_SIMPLE_XML) actual = self._write_xml(root, fragment=True) expected = """ Name Description """ self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xml_with_doctype(self): root = self._parse(_SIMPLE_XML) from supervisor.templating import doctype actual = self._write_xml(root, doctype=doctype.xhtml) expected = """ Name Description """ self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xml_doctype_nodeclaration(self): root = self._parse(_SIMPLE_XML) from supervisor.templating import doctype actual = self._write_xml(root, declaration=False, doctype=doctype.xhtml) expected = """ Name Description """ self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xml_fragment_kills_doctype_and_declaration(self): root = self._parse(_SIMPLE_XML) from supervisor.templating import doctype actual = self._write_xml(root, declaration=True, doctype=doctype.xhtml, fragment=True) expected = """ Name Description """ self.assertNormalizedXMLEqual(actual, expected) def 
test_write_simple_xhtml_override_encoding(self): root = self._parse(_SIMPLE_XHTML) actual = self._write_xhtml(root, encoding="latin-1", declaration=True) expected = """Hello!""" self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xhtml_as_fragment(self): root = self._parse(_SIMPLE_XHTML) actual = self._write_xhtml(root, fragment=True) expected = """Hello!""" self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xhtml_with_doctype(self): root = self._parse(_SIMPLE_XHTML) from supervisor.templating import doctype actual = self._write_xhtml(root, doctype=doctype.xhtml) expected = """Hello!""" self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xhtml_doctype_nodeclaration(self): root = self._parse(_SIMPLE_XHTML) from supervisor.templating import doctype actual = self._write_xhtml(root, declaration=False, doctype=doctype.xhtml) expected = """Hello!""" self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xhtml_fragment_kills_doctype_and_declaration(self): root = self._parse(_SIMPLE_XHTML) from supervisor.templating import doctype actual = self._write_xhtml(root, declaration=True, doctype=doctype.xhtml, fragment=True) expected = """Hello!""" self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xhtml_as_html_fragment(self): root = self._parse(_SIMPLE_XHTML) actual = self._write_html(root, fragment=True) expected = """Hello!""" self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xhtml_with_doctype_as_html(self): root = self._parse(_SIMPLE_XHTML) actual = self._write_html(root) expected = """ Hello!""" self.assertNormalizedXMLEqual(actual, expected) def test_write_simple_xhtml_as_html_new_doctype(self): root = self._parse(_SIMPLE_XHTML) from supervisor.templating import doctype actual = self._write_html(root, doctype=doctype.html_strict) expected = """ Hello!""" self.assertNormalizedXMLEqual(actual, expected) def test_unknown_entity(self): # exception thrown may vary by 
python or expat version from xml.parsers import expat self.assertRaises((expat.error, SyntaxError), self._parse, '&fleeb;') def test_content_nostructure(self): root = self._parse(_SIMPLE_XML) D = root.findmeld('description') D.content('description &&', structure=False) actual = self._write_xml(root) expected = """ Name description &<foo>&<bar> """ self.assertNormalizedXMLEqual(actual, expected) def test_content_structure(self): root = self._parse(_SIMPLE_XML) D = root.findmeld('description') D.content('description & ', structure=True) actual = self._write_xml(root) expected = """ Name description & """ self.assertNormalizedXMLEqual(actual, expected) def test_replace_nostructure(self): root = self._parse(_SIMPLE_XML) D = root.findmeld('description') D.replace('description &&', structure=False) actual = self._write_xml(root) expected = """ Name description &<foo>&<bar> """ self.assertNormalizedXMLEqual(actual, expected) def test_replace_structure(self): root = self._parse(_SIMPLE_XML) D = root.findmeld('description') D.replace('description & ', structure=True) actual = self._write_xml(root) expected = """ Name description & """ self.assertNormalizedXMLEqual(actual, expected) def test_escape_cdata(self): from supervisor.compat import as_bytes from supervisor.templating import _escape_cdata a = ('< > <& &' && &foo "" ' 'http://www.example.com?foo=bar&bang=baz {') self.assertEqual( as_bytes('< > <& &' && &foo "" ' 'http://www.example.com?foo=bar&bang=baz {', encoding='latin1'), _escape_cdata(a)) def test_escape_cdata_unicodeerror(self): from supervisor.templating import _escape_cdata from supervisor.compat import as_bytes from supervisor.compat import as_string a = as_string(as_bytes('\x80', encoding='latin1'), encoding='latin1') self.assertEqual(as_bytes('€', encoding='latin1'), _escape_cdata(a, 'ascii')) def test_escape_attrib(self): from supervisor.templating import _escape_attrib from supervisor.compat import as_bytes a = ('< > <& &' && &foo "" ' 
             'http://www.example.com?foo=bar&bang=baz {')
        self.assertEqual(
            as_bytes('&lt; &gt; &lt;&amp; &amp;\' '
                     '&amp;&amp; &amp;foo &quot;&quot; '
                     'http://www.example.com?foo=bar&amp;bang=baz {',
                     encoding='latin1'),
            _escape_attrib(a, None))

    def test_escape_attrib_unicodeerror(self):
        from supervisor.templating import _escape_attrib
        from supervisor.compat import as_bytes
        from supervisor.compat import as_string
        a = as_string(as_bytes('\x80', encoding='latin1'), encoding='latin1')
        self.assertEqual(as_bytes('&#128;', encoding='latin1'),
                         _escape_attrib(a, 'ascii'))

def normalize_html(s):
    s = re.sub(r"[ \t]+", " ", s)
    s = re.sub(r"/>", ">", s)
    return s

def normalize_xml(s):
    s = re.sub(r"\s+", " ", s)
    s = re.sub(r"(?s)\s+<", "<", s)
    s = re.sub(r"(?s)>\s+", ">", s)
    return s

def test_suite():
    return unittest.findTestCases(sys.modules[__name__])

def main():
    unittest.main(defaultTest='test_suite')

if __name__ == '__main__':
    main()

# supervisor-4.2.5/supervisor/tests/test_web.py

import sys
import unittest

from supervisor.tests.base import DummySupervisor
from supervisor.tests.base import DummyRequest

class DeferredWebProducerTests(unittest.TestCase):
    def _getTargetClass(self):
        from supervisor.web import DeferredWebProducer
        return DeferredWebProducer

    def _makeOne(self, request, callback):
        producer = self._getTargetClass()(request, callback)
        return producer

    def test_ctor(self):
        request = DummyRequest('/index.html', [], '', '')
        callback = lambda *x: None
        callback.delay = 1
        producer = self._makeOne(request, callback)
        self.assertEqual(producer.callback, callback)
        self.assertEqual(producer.request, request)
        self.assertEqual(producer.finished, False)
        self.assertEqual(producer.delay, 1)

    def test_more_not_done_yet(self):
        request = DummyRequest('/index.html', [], '', '')
        from supervisor.http import NOT_DONE_YET
        callback = lambda *x: NOT_DONE_YET
        callback.delay = 1
        producer =
self._makeOne(request, callback) self.assertEqual(producer.more(), NOT_DONE_YET) def test_more_finished(self): request = DummyRequest('/index.html', [], '', '') callback = lambda *x: 'done' callback.delay = 1 producer = self._makeOne(request, callback) self.assertEqual(producer.more(), None) self.assertTrue(producer.finished) self.assertEqual(producer.more(), '') def test_more_exception_caught(self): request = DummyRequest('/index.html', [], '', '') def callback(*arg): raise ValueError('foo') callback.delay = 1 producer = self._makeOne(request, callback) self.assertEqual(producer.more(), None) logdata = request.channel.server.logger.logged self.assertEqual(len(logdata), 1) logged = logdata[0] self.assertEqual(logged[0], 'Web interface error') self.assertTrue(logged[1].startswith('Traceback'), logged[1]) self.assertEqual(producer.finished, True) self.assertEqual(request._error, 500) def test_sendresponse_redirect(self): request = DummyRequest('/index.html', [], '', '') callback = lambda *arg: None callback.delay = 1 producer = self._makeOne(request, callback) response = {'headers': {'Location':'abc'}} result = producer.sendresponse(response) self.assertEqual(result, None) self.assertEqual(request._error, 301) self.assertEqual(request.headers['Content-Type'], 'text/plain') self.assertEqual(request.headers['Content-Length'], 0) def test_sendresponse_withbody_and_content_type(self): request = DummyRequest('/index.html', [], '', '') callback = lambda *arg: None callback.delay = 1 producer = self._makeOne(request, callback) response = {'body': 'abc', 'headers':{'Content-Type':'text/html'}} result = producer.sendresponse(response) self.assertEqual(result, None) self.assertEqual(request.headers['Content-Type'], 'text/html') self.assertEqual(request.headers['Content-Length'], 3) self.assertEqual(request.producers[0], 'abc') class UIHandlerTests(unittest.TestCase): def _getTargetClass(self): from supervisor.web import supervisor_ui_handler return supervisor_ui_handler def 
_makeOne(self): supervisord = DummySupervisor() handler = self._getTargetClass()(supervisord) return handler def test_handle_request_no_view_method(self): request = DummyRequest('/foo.css', [], '', '', {'PATH_INFO':'/foo.css'}) handler = self._makeOne() data = handler.handle_request(request) self.assertEqual(data, None) def test_handle_request_default(self): request = DummyRequest('/index.html', [], '', '', {'PATH_INFO':'/index.html'}) handler = self._makeOne() data = handler.handle_request(request) self.assertEqual(data, None) self.assertEqual(request.channel.producer.request, request) from supervisor.web import StatusView self.assertEqual(request.channel.producer.callback.__class__,StatusView) def test_handle_request_index_html(self): request = DummyRequest('/index.html', [], '', '', {'PATH_INFO':'/index.html'}) handler = self._makeOne() handler.handle_request(request) from supervisor.web import StatusView view = request.channel.producer.callback self.assertEqual(view.__class__, StatusView) self.assertEqual(view.context.template, 'ui/status.html') def test_handle_request_tail_html(self): request = DummyRequest('/tail.html', [], '', '', {'PATH_INFO':'/tail.html'}) handler = self._makeOne() handler.handle_request(request) from supervisor.web import TailView view = request.channel.producer.callback self.assertEqual(view.__class__, TailView) self.assertEqual(view.context.template, 'ui/tail.html') def test_handle_request_ok_html(self): request = DummyRequest('/tail.html', [], '', '', {'PATH_INFO':'/ok.html'}) handler = self._makeOne() handler.handle_request(request) from supervisor.web import OKView view = request.channel.producer.callback self.assertEqual(view.__class__, OKView) self.assertEqual(view.context.template, None) class StatusViewTests(unittest.TestCase): def _getTargetClass(self): from supervisor.web import StatusView return StatusView def _makeOne(self, context): klass = self._getTargetClass() return klass(context) def test_make_callback_noaction(self): 
context = DummyContext() context.supervisord = DummySupervisor() context.template = 'ui/status.html' context.form = {} view = self._makeOne(context) self.assertRaises(ValueError, view.make_callback, 'process', None) def test_render_noaction(self): context = DummyContext() context.supervisord = DummySupervisor() context.template = 'ui/status.html' context.request = DummyRequest('/foo', [], '', '') context.form = {} context.response = {} view = self._makeOne(context) data = view.render() self.assertTrue(data.startswith('' \ '' \ 'supervisor.getAPIVersion' \ '' request = DummyRequest('/what/ever', None, None, None) handler.continue_request(data, request) logdata = supervisor.options.logger.data self.assertEqual(len(logdata), 2) self.assertEqual(logdata[-2], 'XML-RPC method called: supervisor.getAPIVersion()') self.assertEqual(logdata[-1], 'XML-RPC method supervisor.getAPIVersion() returned successfully') self.assertEqual(len(request.producers), 1) xml_response = request.producers[0] response = xmlrpclib.loads(xml_response) from supervisor.rpcinterface import API_VERSION self.assertEqual(response[0][0], API_VERSION) self.assertEqual(request._done, True) self.assertEqual(request.headers['Content-Type'], 'text/xml') self.assertEqual(request.headers['Content-Length'], len(xml_response)) def test_continue_request_400_if_method_name_is_empty(self): supervisor = DummySupervisor() subinterfaces = [('supervisor', DummySupervisorRPCNamespace())] handler = self._makeOne(supervisor, subinterfaces) data = '' \ '' request = DummyRequest('/what/ever', None, None, None) handler.continue_request(data, request) logdata = supervisor.options.logger.data self.assertEqual(len(logdata), 1) self.assertTrue(logdata[0].startswith('XML-RPC request data')) self.assertTrue(repr(data) in logdata[0]) self.assertTrue(logdata[0].endswith('is invalid: no method name')) self.assertEqual(request._error, 400) def test_continue_request_400_if_loads_raises_not_xml(self): supervisor = DummySupervisor() 
subinterfaces = [('supervisor', DummySupervisorRPCNamespace())] handler = self._makeOne(supervisor, subinterfaces) data = 'this is not an xml-rpc request body' request = DummyRequest('/what/ever', None, None, None) handler.continue_request(data, request) logdata = supervisor.options.logger.data self.assertEqual(len(logdata), 1) self.assertTrue(logdata[0].startswith('XML-RPC request data')) self.assertTrue(repr(data) in logdata[0]) self.assertTrue(logdata[0].endswith('is invalid: unmarshallable')) self.assertEqual(request._error, 400) def test_continue_request_400_if_loads_raises_weird_xml(self): supervisor = DummySupervisor() subinterfaces = [('supervisor', DummySupervisorRPCNamespace())] handler = self._makeOne(supervisor, subinterfaces) data = '' request = DummyRequest('/what/ever', None, None, None) handler.continue_request(data, request) logdata = supervisor.options.logger.data self.assertEqual(len(logdata), 1) self.assertTrue(logdata[0].startswith('XML-RPC request data')) self.assertTrue(repr(data) in logdata[0]) self.assertTrue(logdata[0].endswith('is invalid: unmarshallable')) self.assertEqual(request._error, 400) def test_continue_request_500_if_rpcinterface_method_call_raises(self): supervisor = DummySupervisor() subinterfaces = [('supervisor', DummySupervisorRPCNamespace())] handler = self._makeOne(supervisor, subinterfaces) data = xmlrpclib.dumps((), 'supervisor.raiseError') request = DummyRequest('/what/ever', None, None, None) handler.continue_request(data, request) logdata = supervisor.options.logger.data self.assertEqual(len(logdata), 2) self.assertEqual(logdata[0], 'XML-RPC method called: supervisor.raiseError()') self.assertTrue("unexpected exception" in logdata[1]) self.assertTrue(repr(data) in logdata[1]) self.assertTrue("Traceback" in logdata[1]) self.assertTrue("ValueError: error" in logdata[1]) self.assertEqual(request._error, 500) def test_continue_request_500_if_xmlrpc_dumps_raises(self): supervisor = DummySupervisor() subinterfaces = 
[('supervisor', DummySupervisorRPCNamespace())] handler = self._makeOne(supervisor, subinterfaces) data = xmlrpclib.dumps((), 'supervisor.getXmlRpcUnmarshallable') request = DummyRequest('/what/ever', None, None, None) handler.continue_request(data, request) logdata = supervisor.options.logger.data self.assertEqual(len(logdata), 3) self.assertEqual(logdata[0], 'XML-RPC method called: supervisor.getXmlRpcUnmarshallable()') self.assertEqual(logdata[1], 'XML-RPC method supervisor.getXmlRpcUnmarshallable() ' 'returned successfully') self.assertTrue("unexpected exception" in logdata[2]) self.assertTrue(repr(data) in logdata[2]) self.assertTrue("Traceback" in logdata[2]) self.assertTrue("TypeError: cannot marshal" in logdata[2]) self.assertEqual(request._error, 500) def test_continue_request_value_is_function(self): class DummyRPCNamespace(object): def foo(self): def inner(self): return 1 inner.delay = .05 return inner supervisor = DummySupervisor() subinterfaces = [('supervisor', DummySupervisorRPCNamespace()), ('ns1', DummyRPCNamespace())] handler = self._makeOne(supervisor, subinterfaces) data = xmlrpclib.dumps((), 'ns1.foo') request = DummyRequest('/what/ever', None, None, None) handler.continue_request(data, request) logdata = supervisor.options.logger.data self.assertEqual(len(logdata), 2) self.assertEqual(logdata[-2], 'XML-RPC method called: ns1.foo()') self.assertEqual(logdata[-1], 'XML-RPC method ns1.foo() returned successfully') self.assertEqual(len(request.producers), 0) self.assertEqual(request._done, False) def test_iterparse_loads_methodcall(self): s = """ examples.getStateName 41 foo bar 1 -12.214 19980717T14:08:55 eW91IGNhbid0IHJlYWQgdGhpcyE= j5 kabc 12 abc def 34 k 1 """ supervisor = DummySupervisor() subinterfaces = [('supervisor', DummySupervisorRPCNamespace())] handler = self._makeOne(supervisor, subinterfaces) result = handler.loads(s) params, method = result import datetime self.assertEqual(method, 'examples.getStateName') 
self.assertEqual(params[0], 41) self.assertEqual(params[1], 'foo') self.assertEqual(params[2], '') self.assertEqual(params[3], 'bar') self.assertEqual(params[4], '') self.assertEqual(params[5], True) self.assertEqual(params[6], -12.214) self.assertEqual(params[7], datetime.datetime(1998, 7, 17, 14, 8, 55)) self.assertEqual(params[8], "you can't read this!") self.assertEqual(params[9], {'j': 5, 'k': 'abc'}) self.assertEqual(params[10], [12, 'abc', 'def', 34]) self.assertEqual(params[11], {'k': [1, {}]}) class TraverseTests(unittest.TestCase): def test_security_disallows_underscore_methods(self): from supervisor import xmlrpc class Root: pass class A: def _danger(self): return True root = Root() root.a = A() self.assertRaises(xmlrpc.RPCError, xmlrpc.traverse, root, 'a._danger', []) def test_security_disallows_object_traversal(self): from supervisor import xmlrpc class Root: pass class A: pass class B: def danger(self): return True root = Root() root.a = A() root.a.b = B() self.assertRaises(xmlrpc.RPCError, xmlrpc.traverse, root, 'a.b.danger', []) def test_namespace_name_not_found(self): from supervisor import xmlrpc class Root: pass root = Root() self.assertRaises(xmlrpc.RPCError, xmlrpc.traverse, root, 'notfound.hello', None) def test_method_name_not_found(self): from supervisor import xmlrpc class Root: pass class A: pass root = Root() root.a = A() self.assertRaises(xmlrpc.RPCError, xmlrpc.traverse, root, 'a.notfound', []) def test_method_name_exists_but_is_not_a_method(self): from supervisor import xmlrpc class Root: pass class A: pass class B: pass root = Root() root.a = A() root.a.b = B() self.assertRaises(xmlrpc.RPCError, xmlrpc.traverse, root, 'a.b', []) # b is not a method def test_bad_params(self): from supervisor import xmlrpc class Root: pass class A: def hello(self, name): return "Hello %s" % name root = Root() root.a = A() self.assertRaises(xmlrpc.RPCError, xmlrpc.traverse, root, 'a.hello', ["there", "extra"]) # too many params def test_success(self): 
from supervisor import xmlrpc class Root: pass class A: def hello(self, name): return "Hello %s" % name root = Root() root.a = A() result = xmlrpc.traverse(root, 'a.hello', ["there"]) self.assertEqual(result, "Hello there") class SupervisorTransportTests(unittest.TestCase): def _getTargetClass(self): from supervisor.xmlrpc import SupervisorTransport return SupervisorTransport def _makeOne(self, *arg, **kw): return self._getTargetClass()(*arg, **kw) def test_ctor_unix(self): from supervisor import xmlrpc transport = self._makeOne('user', 'pass', 'unix:///foo/bar') conn = transport._get_connection() self.assertTrue(isinstance(conn, xmlrpc.UnixStreamHTTPConnection)) self.assertEqual(conn.host, 'localhost') self.assertEqual(conn.socketfile, '/foo/bar') def test_ctor_unknown(self): self.assertRaises(ValueError, self._makeOne, 'user', 'pass', 'unknown:///foo/bar' ) def test__get_connection_http_9001(self): transport = self._makeOne('user', 'pass', 'http://127.0.0.1:9001/') conn = transport._get_connection() self.assertTrue(isinstance(conn, httplib.HTTPConnection)) self.assertEqual(conn.host, '127.0.0.1') self.assertEqual(conn.port, 9001) def test__get_connection_http_80(self): transport = self._makeOne('user', 'pass', 'http://127.0.0.1/') conn = transport._get_connection() self.assertTrue(isinstance(conn, httplib.HTTPConnection)) self.assertEqual(conn.host, '127.0.0.1') self.assertEqual(conn.port, 80) def test_request_non_200_response(self): transport = self._makeOne('user', 'pass', 'http://127.0.0.1/') dummy_conn = DummyConnection(400, '') def getconn(): return dummy_conn transport._get_connection = getconn self.assertRaises(xmlrpclib.ProtocolError, transport.request, 'localhost', '/', '') self.assertEqual(transport.connection, None) self.assertEqual(dummy_conn.closed, True) def test_request_400_response(self): transport = self._makeOne('user', 'pass', 'http://127.0.0.1/') dummy_conn = DummyConnection(400, '') def getconn(): return dummy_conn transport._get_connection = 
getconn self.assertRaises(xmlrpclib.ProtocolError, transport.request, 'localhost', '/', '') self.assertEqual(transport.connection, None) self.assertEqual(dummy_conn.closed, True) self.assertEqual(dummy_conn.requestargs[0], 'POST') self.assertEqual(dummy_conn.requestargs[1], '/') self.assertEqual(dummy_conn.requestargs[2], b'') self.assertEqual(dummy_conn.requestargs[3]['Content-Length'], '0') self.assertEqual(dummy_conn.requestargs[3]['Content-Type'], 'text/xml') self.assertEqual(dummy_conn.requestargs[3]['Authorization'], 'Basic dXNlcjpwYXNz') self.assertEqual(dummy_conn.requestargs[3]['Accept'], 'text/xml') def test_request_200_response(self): transport = self._makeOne('user', 'pass', 'http://127.0.0.1/') response = """ South Dakota """ dummy_conn = DummyConnection(200, response) def getconn(): return dummy_conn transport._get_connection = getconn result = transport.request('localhost', '/', '') self.assertEqual(transport.connection, dummy_conn) self.assertEqual(dummy_conn.closed, False) self.assertEqual(dummy_conn.requestargs[0], 'POST') self.assertEqual(dummy_conn.requestargs[1], '/') self.assertEqual(dummy_conn.requestargs[2], b'') self.assertEqual(dummy_conn.requestargs[3]['Content-Length'], '0') self.assertEqual(dummy_conn.requestargs[3]['Content-Type'], 'text/xml') self.assertEqual(dummy_conn.requestargs[3]['Authorization'], 'Basic dXNlcjpwYXNz') self.assertEqual(dummy_conn.requestargs[3]['Accept'], 'text/xml') self.assertEqual(result, ('South Dakota',)) def test_close(self): transport = self._makeOne('user', 'pass', 'http://127.0.0.1/') dummy_conn = DummyConnection(200, ''' ''') def getconn(): return dummy_conn transport._get_connection = getconn transport.request('localhost', '/', '') transport.close() self.assertTrue(dummy_conn.closed) class TestDeferredXMLRPCResponse(unittest.TestCase): def _getTargetClass(self): from supervisor.xmlrpc import DeferredXMLRPCResponse return DeferredXMLRPCResponse def _makeOne(self, request=None, callback=None): if request 
is None: request = DummyRequest(None, None, None, None, None) if callback is None: callback = Dummy() callback.delay = 1 return self._getTargetClass()(request, callback) def test_ctor(self): callback = Dummy() callback.delay = 1 inst = self._makeOne(request='request', callback=callback) self.assertEqual(inst.callback, callback) self.assertEqual(inst.delay, 1.0) self.assertEqual(inst.request, 'request') self.assertEqual(inst.finished, False) def test_more_finished(self): inst = self._makeOne() inst.finished = True result = inst.more() self.assertEqual(result, '') def test_more_callback_returns_not_done_yet(self): from supervisor.http import NOT_DONE_YET def callback(): return NOT_DONE_YET callback.delay = 1 inst = self._makeOne(callback=callback) self.assertEqual(inst.more(), NOT_DONE_YET) def test_more_callback_raises_RPCError(self): from supervisor.xmlrpc import RPCError, Faults def callback(): raise RPCError(Faults.UNKNOWN_METHOD) callback.delay = 1 inst = self._makeOne(callback=callback) self.assertEqual(inst.more(), None) self.assertEqual(len(inst.request.producers), 1) self.assertTrue('UNKNOWN_METHOD' in inst.request.producers[0]) self.assertTrue(inst.finished) def test_more_callback_returns_value(self): def callback(): return 'abc' callback.delay = 1 inst = self._makeOne(callback=callback) self.assertEqual(inst.more(), None) self.assertEqual(len(inst.request.producers), 1) self.assertTrue('abc' in inst.request.producers[0]) self.assertTrue(inst.finished) def test_more_callback_raises_unexpected_exception(self): def callback(): raise ValueError('foo') callback.delay = 1 inst = self._makeOne(callback=callback) self.assertEqual(inst.more(), None) self.assertEqual(inst.request._error, 500) self.assertTrue(inst.finished) logged = inst.request.channel.server.logger.logged self.assertEqual(len(logged), 1) src, msg = logged[0] self.assertEqual(src, 'XML-RPC response callback error') self.assertTrue("Traceback" in msg) def 
test_getresponse_http_10_with_keepalive(self): inst = self._makeOne() inst.request.version = '1.0' inst.request.header.append('Connection: keep-alive') inst.getresponse('abc') self.assertEqual(len(inst.request.producers), 1) self.assertEqual(inst.request.headers['Connection'], 'Keep-Alive') def test_getresponse_http_10_no_keepalive(self): inst = self._makeOne() inst.request.version = '1.0' inst.getresponse('abc') self.assertEqual(len(inst.request.producers), 1) self.assertEqual(inst.request.headers['Connection'], 'close') def test_getresponse_http_11_without_close(self): inst = self._makeOne() inst.request.version = '1.1' inst.getresponse('abc') self.assertEqual(len(inst.request.producers), 1) self.assertTrue('Connection' not in inst.request.headers) def test_getresponse_http_11_with_close(self): inst = self._makeOne() inst.request.header.append('Connection: close') inst.request.version = '1.1' inst.getresponse('abc') self.assertEqual(len(inst.request.producers), 1) self.assertEqual(inst.request.headers['Connection'], 'close') def test_getresponse_http_unknown(self): inst = self._makeOne() inst.request.version = None inst.getresponse('abc') self.assertEqual(len(inst.request.producers), 1) self.assertEqual(inst.request.headers['Connection'], 'close') class TestSystemNamespaceRPCInterface(unittest.TestCase): def _makeOne(self, namespaces=()): from supervisor.xmlrpc import SystemNamespaceRPCInterface return SystemNamespaceRPCInterface(namespaces) def test_listMethods_gardenpath(self): inst = self._makeOne() result = inst.listMethods() self.assertEqual( result, ['system.listMethods', 'system.methodHelp', 'system.methodSignature', 'system.multicall', ] ) def test_listMethods_omits_underscore_attrs(self): class DummyNamespace(object): def foo(self): pass def _bar(self): pass ns1 = DummyNamespace() inst = self._makeOne([('ns1', ns1)]) result = inst.listMethods() self.assertEqual( result, ['ns1.foo', 'system.listMethods', 'system.methodHelp', 'system.methodSignature', 
'system.multicall' ] ) def test_methodHelp_known_method(self): inst = self._makeOne() result = inst.methodHelp('system.listMethods') self.assertTrue('array' in result) def test_methodHelp_unknown_method(self): from supervisor.xmlrpc import RPCError inst = self._makeOne() self.assertRaises(RPCError, inst.methodHelp, 'wont.be.found') def test_methodSignature_known_method(self): inst = self._makeOne() result = inst.methodSignature('system.methodSignature') self.assertEqual(result, ['array', 'string']) def test_methodSignature_unknown_method(self): from supervisor.xmlrpc import RPCError inst = self._makeOne() self.assertRaises(RPCError, inst.methodSignature, 'wont.be.found') def test_methodSignature_with_bad_sig(self): from supervisor.xmlrpc import RPCError class DummyNamespace(object): def foo(self): """ @param string name The thing""" ns1 = DummyNamespace() inst = self._makeOne([('ns1', ns1)]) self.assertRaises(RPCError, inst.methodSignature, 'ns1.foo') def test_multicall_faults_for_recursion(self): from supervisor.xmlrpc import Faults inst = self._makeOne() calls = [{'methodName':'system.multicall'}] results = inst.multicall(calls) self.assertEqual( results, [{'faultCode': Faults.INCORRECT_PARAMETERS, 'faultString': ('INCORRECT_PARAMETERS: Recursive ' 'system.multicall forbidden')}] ) def test_multicall_faults_for_missing_methodName(self): from supervisor.xmlrpc import Faults inst = self._makeOne() calls = [{}] results = inst.multicall(calls) self.assertEqual( results, [{'faultCode': Faults.INCORRECT_PARAMETERS, 'faultString': 'INCORRECT_PARAMETERS: No methodName'}] ) def test_multicall_faults_for_methodName_bad_namespace(self): from supervisor.xmlrpc import Faults inst = self._makeOne() calls = [{'methodName': 'bad.stopProcess'}] results = inst.multicall(calls) self.assertEqual( results, [{'faultCode': Faults.UNKNOWN_METHOD, 'faultString': 'UNKNOWN_METHOD'}] ) def test_multicall_faults_for_methodName_good_ns_bad_method(self): from supervisor.xmlrpc import Faults 
class DummyNamespace(object): pass ns1 = DummyNamespace() inst = self._makeOne([('ns1', ns1)]) calls = [{'methodName': 'ns1.bad'}] results = inst.multicall(calls) self.assertEqual( results, [{'faultCode': Faults.UNKNOWN_METHOD, 'faultString': 'UNKNOWN_METHOD'}] ) def test_multicall_returns_empty_results_for_empty_calls(self): inst = self._makeOne() calls = [] results = inst.multicall(calls) self.assertEqual(results, []) def test_multicall_performs_noncallback_functions_serially(self): class DummyNamespace(object): def say(self, name): """ @param string name Process name""" return name ns1 = DummyNamespace() inst = self._makeOne([('ns1', ns1)]) calls = [ {'methodName': 'ns1.say', 'params': ['Alvin']}, {'methodName': 'ns1.say', 'params': ['Simon']}, {'methodName': 'ns1.say', 'params': ['Theodore']} ] results = inst.multicall(calls) self.assertEqual(results, ['Alvin', 'Simon', 'Theodore']) def test_multicall_catches_noncallback_exceptions(self): import errno from supervisor.xmlrpc import RPCError, Faults class DummyNamespace(object): def bad_name(self): raise RPCError(Faults.BAD_NAME, 'foo') def os_error(self): raise OSError(errno.ENOENT) ns1 = DummyNamespace() inst = self._makeOne([('ns1', ns1)]) calls = [{'methodName': 'ns1.bad_name'}, {'methodName': 'ns1.os_error'}] results = inst.multicall(calls) bad_name = {'faultCode': Faults.BAD_NAME, 'faultString': 'BAD_NAME: foo'} os_error = {'faultCode': Faults.FAILED, 'faultString': "FAILED: %s:2" % OSError} self.assertEqual(results, [bad_name, os_error]) def test_multicall_catches_callback_exceptions(self): import errno from supervisor.xmlrpc import RPCError, Faults from supervisor.http import NOT_DONE_YET class DummyNamespace(object): def bad_name(self): def inner(): raise RPCError(Faults.BAD_NAME, 'foo') return inner def os_error(self): def inner(): raise OSError(errno.ENOENT) return inner ns1 = DummyNamespace() inst = self._makeOne([('ns1', ns1)]) calls = [{'methodName': 'ns1.bad_name'}, {'methodName': 'ns1.os_error'}] 
callback = inst.multicall(calls) results = NOT_DONE_YET while results is NOT_DONE_YET: results = callback() bad_name = {'faultCode': Faults.BAD_NAME, 'faultString': 'BAD_NAME: foo'} os_error = {'faultCode': Faults.FAILED, 'faultString': "FAILED: %s:2" % OSError} self.assertEqual(results, [bad_name, os_error]) def test_multicall_performs_callback_functions_serially(self): from supervisor.http import NOT_DONE_YET class DummyNamespace(object): def __init__(self): self.stop_results = [NOT_DONE_YET, NOT_DONE_YET, NOT_DONE_YET, 'stop result'] self.start_results = ['start result'] def stopProcess(self, name): def inner(): result = self.stop_results.pop(0) if result is not NOT_DONE_YET: self.stopped = True return result return inner def startProcess(self, name): def inner(): if not self.stopped: raise Exception("This should not raise") return self.start_results.pop(0) return inner ns1 = DummyNamespace() inst = self._makeOne([('ns1', ns1)]) calls = [{'methodName': 'ns1.stopProcess', 'params': {'name': 'foo'}}, {'methodName': 'ns1.startProcess', 'params': {'name': 'foo'}}] callback = inst.multicall(calls) results = NOT_DONE_YET while results is NOT_DONE_YET: results = callback() self.assertEqual(results, ['stop result', 'start result']) class Test_gettags(unittest.TestCase): def _callFUT(self, comment): from supervisor.xmlrpc import gettags return gettags(comment) def test_one_atpart(self): lines = '@foo' result = self._callFUT(lines) self.assertEqual( result, [(0, None, None, None, ''), (0, 'foo', '', '', '')] ) def test_two_atparts(self): lines = '@foo array' result = self._callFUT(lines) self.assertEqual( result, [(0, None, None, None, ''), (0, 'foo', 'array', '', '')] ) def test_three_atparts(self): lines = '@foo array name' result = self._callFUT(lines) self.assertEqual( result, [(0, None, None, None, ''), (0, 'foo', 'array', 'name', '')] ) def test_four_atparts(self): lines = '@foo array name text' result = self._callFUT(lines) self.assertEqual( result, [(0, None, 
None, None, ''), (0, 'foo', 'array', 'name', 'text')]
        )

class Test_capped_int(unittest.TestCase):
    def _callFUT(self, value):
        from supervisor.xmlrpc import capped_int
        return capped_int(value)

    def test_converts_value_to_integer(self):
        self.assertEqual(self._callFUT('42'), 42)

    def test_caps_value_below_minint(self):
        from supervisor.compat import xmlrpclib
        self.assertEqual(self._callFUT(xmlrpclib.MININT - 1),
                         xmlrpclib.MININT)

    def test_caps_value_above_maxint(self):
        from supervisor.compat import xmlrpclib
        self.assertEqual(self._callFUT(xmlrpclib.MAXINT + 1),
                         xmlrpclib.MAXINT)

class DummyResponse:
    def __init__(self, status=200, body='', reason='reason'):
        self.status = status
        self.body = body
        self.reason = reason

    def read(self):
        return self.body

class Dummy(object):
    pass

class DummyConnection:
    closed = False

    def __init__(self, status=200, body='', reason='reason'):
        self.response = DummyResponse(status, body, reason)

    def getresponse(self):
        return self.response

    def request(self, *arg, **kw):
        self.requestargs = arg
        self.requestkw = kw

    def close(self):
        self.closed = True

# (binary asset omitted: supervisor/ui/images/icon.png)
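The `Test_capped_int` cases above pin down a clamping behavior: values are coerced to `int` and capped to the XML-RPC 32-bit range. A minimal self-contained sketch of that behavior follows; the bounds are the standard XML-RPC `<int>` limits that `xmlrpclib` exposes as `MININT`/`MAXINT`, and the function here is an illustration, not supervisor's actual implementation:

```python
# Illustrative re-implementation of the clamping the tests above describe.
# XML-RPC <int> is a signed 32-bit value; xmlrpclib defines these bounds.
MININT = -2147483648
MAXINT = 2147483647

def capped_int(value):
    """Coerce value to int, clamped into the XML-RPC integer range."""
    i = int(value)
    if i < MININT:
        i = MININT
    elif i > MAXINT:
        i = MAXINT
    return i
```

Clamping (rather than raising) lets counters such as uptimes or byte totals that overflow 32 bits still be marshalled in an XML-RPC response instead of failing the whole call.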
[binary assets omitted: supervisor/ui/images/rule.gif, state0.gif, state1.gif, state2.gif, state3.gif; markup-stripped template supervisor/ui/status.html omitted]
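The `TraverseTests` earlier in this test module pin down how `supervisor.xmlrpc.traverse` dispatches dotted XML-RPC method names: exactly one namespace level is allowed, underscore-prefixed names are private, and bad argument counts become faults. A self-contained sketch of that dispatch pattern (the `TraversalError` class and function body here are illustrative, not supervisor's actual code, which raises `RPCError` with specific fault codes):

```python
# Sketch of dotted-name XML-RPC dispatch with the safety rules the
# TraverseTests exercise. Assumption: a single "namespace.method" level.
class TraversalError(Exception):
    pass

def traverse(ob, method, params):
    dotted = method.split('.')
    # deep traversal like "a.b.danger" is rejected outright
    if len(dotted) != 2:
        raise TraversalError('only namespace.method names are allowed')
    namespace, name = dotted
    # underscore-prefixed attributes are considered private
    if name.startswith('_'):
        raise TraversalError('underscore methods are private')
    ns = getattr(ob, namespace, None)
    if ns is None:
        raise TraversalError('unknown namespace %r' % namespace)
    func = getattr(ns, name, None)
    if not callable(func):
        raise TraversalError('unknown method %r' % method)
    try:
        return func(*params)
    except TypeError:
        # wrong arity surfaces as a fault, not a server traceback
        # (note: a TypeError raised inside the method is masked too;
        # a production implementation would distinguish the two)
        raise TraversalError('incorrect parameters for %r' % method)
```

Keeping the traversal to one dotted level and banning underscores is what prevents an XML-RPC client from walking arbitrary object graphs on the server.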
    ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1671843145.3970256 supervisor-4.2.5/supervisor/ui/stylesheets/0000755000076500000240000000000014351446511020707 5ustar00mnaberezstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/ui/stylesheets/supervisor.css0000644000076500000240000000654614340177153023656 0ustar00mnaberezstaff/* =ORDER 1. display 2. float and position 3. width and height 4. Specific element properties 5. margin 6. border 7. padding 8. background 9. color 10. font related properties ----------------------------------------------- */ /* =MAIN ----------------------------------------------- */ body, td, input, select, textarea, a { font: 12px/1.5em arial, helvetica, verdana, sans-serif; color: #333; } html, body, form, fieldset, h1, h2, h3, h4, h5, h6, p, pre, blockquote, ul, ol, dl, address { margin: 0; padding: 0; } form label { cursor: pointer; } fieldset { border: none; } img, table { border-width: 0; } /* =COLORS ----------------------------------------------- */ body { background-color: #FFFFF3; color: #333; } a:link, a:visited { color: #333; } a:hover { color: #000; } /* =FLOATS ----------------------------------------------- */ .left { float: left; } .right { text-align: right; float: right; } /* clear float */ .clr:after { content: "."; display: block; height: 0; clear: both; visibility: hidden; } .clr {display: inline-block;} /* Hides from IE-mac \*/ * html .clr {height: 1%;} .clr {display: block;} /* End hide from IE-mac */ /* =LAYOUT ----------------------------------------------- */ html, body { height: 100%; } #wrapper { min-height: 100%; height: auto !important; height: 100%; width: 850px; margin: 0 auto -31px; } #footer, .push { height: 30px; } .hidden { display: none; } /* =STATUS ----------------------------------------------- */ #header { margin-bottom: 13px; padding: 10px 0 13px 0; background: url("../images/rule.gif") 
left bottom repeat-x; } .status_msg { padding: 5px 10px; border: 1px solid #919191; background-color: #FBFBFB; color: #000000; } #buttons { margin: 13px 0; } #buttons li { float: left; display: block; margin: 0 7px 0 0; } #buttons a { float: left; display: block; padding: 1px 0 0 0; } #buttons a, #buttons a:link { text-decoration: none; } .action-button { border: 1px solid #919191; text-transform: uppercase; padding: 0 5px; border-radius: 4px; color: #50504d; font-size: 12px; background: #fbfbfb; font-weight: 600; } .action-button:hover { border: 1px solid #88b0f2; background: #ffffff; } table { width: 100%; border: 1px solid #919191; } th { background-color: #919191; color: #fff; text-align: left; } th.state { text-align: center; width: 44px; } th.desc { width: 200px; } th.name { width: 200px; } th.action { } td, th { padding: 4px 8px; border-bottom: 1px solid #fff; } tr td { background-color: #FBFBFB; } tr.shade td { background-color: #F0F0F0; } .action ul { list-style: none; display: inline; } .action li { margin-right: 10px; display: inline; } /* status message */ .status span { display: block; width: 60px; height: 16px; border: 1px solid #fff; text-align: center; font-size: 95%; line-height: 1.4em; } .statusnominal { background-image: url("../images/state0.gif"); } .statusrunning { background-image: url("../images/state2.gif"); } .statuserror { background-image: url("../images/state3.gif"); } #footer { width: 760px; margin: 0 auto; padding: 0 10px; line-height: 30px; border: 1px solid #C8C8C2; border-bottom-width: 0; background-color: #FBFBFB; overflow: hidden; opacity: 0.7; color: #000; font-size: 95%; } #footer a { font-size: inherit; } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1669398123.0 supervisor-4.2.5/supervisor/ui/tail.html0000644000076500000240000000116614340177153020157 0ustar00mnaberezstaff Supervisor Status ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671841573.0 
supervisor-4.2.5/supervisor/version.txt:
4.2.5

supervisor-4.2.5/supervisor/web.py:

import os
import re
import time
import traceback
import datetime

from supervisor import templating

from supervisor.compat import urllib
from supervisor.compat import urlparse
from supervisor.compat import as_bytes
from supervisor.compat import as_string
from supervisor.compat import PY2
from supervisor.compat import unicode

from supervisor.medusa import producers
from supervisor.medusa.http_server import http_date
from supervisor.medusa.http_server import get_header
from supervisor.medusa.xmlrpc_handler import collector

from supervisor.process import ProcessStates
from supervisor.http import NOT_DONE_YET

from supervisor.options import VERSION
from supervisor.options import make_namespec
from supervisor.options import split_namespec

from supervisor.xmlrpc import SystemNamespaceRPCInterface
from supervisor.xmlrpc import RootRPCInterface
from supervisor.xmlrpc import Faults
from supervisor.xmlrpc import RPCError

from supervisor.rpcinterface import SupervisorNamespaceRPCInterface

class DeferredWebProducer:
    """ A medusa producer that implements a deferred callback; requires
    a subclass of asynchat.async_chat that handles NOT_DONE_YET sentinel """
    CONNECTION = re.compile('Connection: (.*)', re.IGNORECASE)

    def __init__(self, request, callback):
        self.callback = callback
        self.request = request
        self.finished = False
        self.delay = float(callback.delay)

    def more(self):
        if self.finished:
            return ''
        try:
            response = self.callback()
            if response is NOT_DONE_YET:
                return NOT_DONE_YET

            self.finished = True
            return self.sendresponse(response)

        except:
            tb = traceback.format_exc()
            # this should go to the main supervisor log file
            self.request.channel.server.logger.log('Web interface error', tb)
            self.finished = True
            self.request.error(500)

    def sendresponse(self, response):

        headers = response.get('headers', {})
        for header in headers:
            self.request[header] = headers[header]

        if 'Content-Type' not in self.request:
            self.request['Content-Type'] = 'text/plain'

        if headers.get('Location'):
            self.request['Content-Length'] = 0
            self.request.error(301)
            return

        body = response.get('body', '')
        self.request['Content-Length'] = len(body)

        self.request.push(body)

        connection = get_header(self.CONNECTION, self.request.header)

        close_it = 0
        wrap_in_chunking = 0

        if self.request.version == '1.0':
            if connection == 'keep-alive':
                if not self.request.has_key('Content-Length'):
                    close_it = 1
                else:
                    self.request['Connection'] = 'Keep-Alive'
            else:
                close_it = 1

        elif self.request.version == '1.1':
            if connection == 'close':
                close_it = 1
            elif 'Content-Length' not in self.request:
                if 'Transfer-Encoding' in self.request:
                    if not self.request['Transfer-Encoding'] == 'chunked':
                        close_it = 1
                elif self.request.use_chunked:
                    self.request['Transfer-Encoding'] = 'chunked'
                    wrap_in_chunking = 1
                else:
                    close_it = 1

        elif self.request.version is None:
            close_it = 1

        outgoing_header = producers.simple_producer(
            self.request.build_reply_header())

        if close_it:
            self.request['Connection'] = 'close'

        if wrap_in_chunking:
            outgoing_producer = producers.chunked_producer(
                producers.composite_producer(self.request.outgoing)
            )
            # prepend the header
            outgoing_producer = producers.composite_producer(
                [outgoing_header, outgoing_producer]
            )
        else:
            # fix AttributeError: 'unicode' object has no attribute 'more'
            if PY2 and (len(self.request.outgoing) > 0):
                body = self.request.outgoing[0]
                if isinstance(body, unicode):
                    self.request.outgoing[0] = producers.simple_producer(body)

            # prepend the header
            self.request.outgoing.insert(0, outgoing_header)
            outgoing_producer = producers.composite_producer(
                self.request.outgoing)

        # apply a few final transformations to the output
        self.request.channel.push_with_producer(
            # globbing gives us large packets
            producers.globbing_producer(
                # hooking lets us log the number of bytes sent
                producers.hooked_producer(
                    outgoing_producer,
                    self.request.log
                )
            )
        )

        self.request.channel.current_request = None

        if close_it:
            self.request.channel.close_when_done()

class ViewContext:
    def __init__(self, **kw):
        self.__dict__.update(kw)

class MeldView:

    content_type = 'text/html;charset=utf-8'
    delay = .5

    def __init__(self, context):
        self.context = context
        template = self.context.template
        if not os.path.isabs(template):
            here = os.path.abspath(os.path.dirname(__file__))
            template = os.path.join(here, template)
        self.root = templating.parse_xml(template)
        self.callback = None

    def __call__(self):
        body = self.render()
        if body is NOT_DONE_YET:
            return NOT_DONE_YET

        response = self.context.response
        headers = response['headers']
        headers['Content-Type'] = self.content_type
        headers['Pragma'] = 'no-cache'
        headers['Cache-Control'] = 'no-cache'
        headers['Expires'] = http_date.build_http_date(0)
        response['body'] = as_bytes(body)
        return response

    def render(self):
        pass

    def clone(self):
        return self.root.clone()

class TailView(MeldView):
    def render(self):
        supervisord = self.context.supervisord
        form = self.context.form

        if not 'processname' in form:
            tail = 'No process name found'
            processname = None
        else:
            processname = form['processname']
            offset = 0
            limit = form.get('limit', '1024')
            limit = min(-1024, int(limit)*-1 if limit.isdigit() else -1024)
            if not processname:
                tail = 'No process name found'
            else:
                rpcinterface = SupervisorNamespaceRPCInterface(supervisord)
                try:
                    tail = rpcinterface.readProcessStdoutLog(processname,
                                                             limit, offset)
                except RPCError as e:
                    if e.code == Faults.NO_FILE:
                        tail = 'No file for %s' % processname
                    else:
                        tail = 'ERROR: unexpected rpc fault [%d] %s' % (
                            e.code, e.text)

        root = self.clone()

        title = root.findmeld('title')
        title.content('Supervisor tail of process %s' % processname)
        tailbody = root.findmeld('tailbody')
        tailbody.content(tail)

        refresh_anchor = root.findmeld('refresh_anchor')
        if processname is not None:
            refresh_anchor.attributes(
                href='tail.html?processname=%s&limit=%s' % (
                    urllib.quote(processname), urllib.quote(str(abs(limit)))
                )
            )
        else:
            refresh_anchor.deparent()

        return as_string(root.write_xhtmlstring())

class StatusView(MeldView):
    def actions_for_process(self, process):
        state = process.get_state()
        processname = urllib.quote(make_namespec(process.group.config.name,
                                                 process.config.name))
        start = {
            'name': 'Start',
            'href': 'index.html?processname=%s&action=start' % processname,
            'target': None,
        }
        restart = {
            'name': 'Restart',
            'href': 'index.html?processname=%s&action=restart' % processname,
            'target': None,
        }
        stop = {
            'name': 'Stop',
            'href': 'index.html?processname=%s&action=stop' % processname,
            'target': None,
        }
        clearlog = {
            'name': 'Clear Log',
            'href': 'index.html?processname=%s&action=clearlog' % processname,
            'target': None,
        }
        tailf_stdout = {
            'name': 'Tail -f Stdout',
            'href': 'logtail/%s' % processname,
            'target': '_blank'
        }
        tailf_stderr = {
            'name': 'Tail -f Stderr',
            'href': 'logtail/%s/stderr' % processname,
            'target': '_blank'
        }
        if state == ProcessStates.RUNNING:
            actions = [restart, stop, clearlog, tailf_stdout, tailf_stderr]
        elif state in (ProcessStates.STOPPED, ProcessStates.EXITED,
                       ProcessStates.FATAL):
            actions = [start, None, clearlog, tailf_stdout, tailf_stderr]
        else:
            actions = [None, None, clearlog, tailf_stdout, tailf_stderr]
        return actions

    def css_class_for_state(self, state):
        if state == ProcessStates.RUNNING:
            return 'statusrunning'
        elif state in (ProcessStates.FATAL, ProcessStates.BACKOFF):
            return 'statuserror'
        else:
            return 'statusnominal'

    def make_callback(self, namespec, action):
        supervisord = self.context.supervisord

        # the rpc interface code is already written to deal properly in a
        # deferred world, so just use it
        main = ('supervisor', SupervisorNamespaceRPCInterface(supervisord))
        system = ('system', SystemNamespaceRPCInterface([main]))

        rpcinterface = RootRPCInterface([main, system])

        if action:

            if action == 'refresh':
                def donothing():
                    message = 'Page refreshed at %s' % time.ctime()
                    return message
                donothing.delay = 0.05
                return donothing

            elif action == 'stopall':
                callback = rpcinterface.supervisor.stopAllProcesses()
                def stopall():
                    if callback() is NOT_DONE_YET:
                        return NOT_DONE_YET
                    else:
                        return 'All stopped at %s' % time.ctime()
                stopall.delay = 0.05
                return stopall

            elif action == 'restartall':
                callback = rpcinterface.system.multicall(
                    [ {'methodName':'supervisor.stopAllProcesses'},
                      {'methodName':'supervisor.startAllProcesses'} ] )
                def restartall():
                    result = callback()
                    if result is NOT_DONE_YET:
                        return NOT_DONE_YET
                    return 'All restarted at %s' % time.ctime()
                restartall.delay = 0.05
                return restartall

            elif namespec:
                def wrong():
                    return 'No such process named %s' % namespec
                wrong.delay = 0.05
                group_name, process_name = split_namespec(namespec)
                group = supervisord.process_groups.get(group_name)
                if group is None:
                    return wrong
                process = group.processes.get(process_name)
                if process is None:
                    return wrong

                if action == 'start':
                    try:
                        bool_or_callback = (
                            rpcinterface.supervisor.startProcess(namespec)
                        )
                    except RPCError as e:
                        if e.code == Faults.NO_FILE:
                            msg = 'no such file'
                        elif e.code == Faults.NOT_EXECUTABLE:
                            msg = 'file not executable'
                        elif e.code == Faults.ALREADY_STARTED:
                            msg = 'already started'
                        elif e.code == Faults.SPAWN_ERROR:
                            msg = 'spawn error'
                        elif e.code == Faults.ABNORMAL_TERMINATION:
                            msg = 'abnormal termination'
                        else:
                            msg = 'unexpected rpc fault [%d] %s' % (
                                e.code, e.text)
                        def starterr():
                            return 'ERROR: Process %s: %s' % (namespec, msg)
                        starterr.delay = 0.05
                        return starterr

                    if callable(bool_or_callback):
                        def startprocess():
                            try:
                                result = bool_or_callback()
                            except RPCError as e:
                                if e.code == Faults.SPAWN_ERROR:
                                    msg = 'spawn error'
                                elif e.code == Faults.ABNORMAL_TERMINATION:
                                    msg = 'abnormal termination'
                                else:
                                    msg = 'unexpected rpc fault [%d] %s' % (
                                        e.code, e.text)
                                return 'ERROR: Process %s: %s' % (namespec, msg)

                            if result is NOT_DONE_YET:
                                return NOT_DONE_YET
                            return 'Process %s started' % namespec
                        startprocess.delay = 0.05
                        return startprocess
                    else:
                        def startdone():
                            return 'Process %s started' % namespec
                        startdone.delay = 0.05
                        return startdone

                elif action == 'stop':
                    try:
                        bool_or_callback = (
                            rpcinterface.supervisor.stopProcess(namespec)
                        )
                    except RPCError as e:
                        msg = 'unexpected rpc fault [%d] %s' % (e.code, e.text)
                        def stoperr():
                            return msg
                        stoperr.delay = 0.05
                        return stoperr

                    if callable(bool_or_callback):
                        def stopprocess():
                            try:
                                result = bool_or_callback()
                            except RPCError as e:
                                return 'unexpected rpc fault [%d] %s' % (
                                    e.code, e.text)
                            if result is NOT_DONE_YET:
                                return NOT_DONE_YET
                            return 'Process %s stopped' % namespec
                        stopprocess.delay = 0.05
                        return stopprocess
                    else:
                        def stopdone():
                            return 'Process %s stopped' % namespec
                        stopdone.delay = 0.05
                        return stopdone

                elif action == 'restart':
                    results_or_callback = rpcinterface.system.multicall(
                        [ {'methodName':'supervisor.stopProcess',
                           'params': [namespec]},
                          {'methodName':'supervisor.startProcess',
                           'params': [namespec]},
                        ]
                    )
                    if callable(results_or_callback):
                        callback = results_or_callback
                        def restartprocess():
                            results = callback()
                            if results is NOT_DONE_YET:
                                return NOT_DONE_YET
                            return 'Process %s restarted' % namespec
                        restartprocess.delay = 0.05
                        return restartprocess
                    else:
                        def restartdone():
                            return 'Process %s restarted' % namespec
                        restartdone.delay = 0.05
                        return restartdone

                elif action == 'clearlog':
                    try:
                        callback = rpcinterface.supervisor.clearProcessLogs(
                            namespec)
                    except RPCError as e:
                        msg = 'unexpected rpc fault [%d] %s' % (e.code, e.text)
                        def clearerr():
                            return msg
                        clearerr.delay = 0.05
                        return clearerr

                    def clearlog():
                        return 'Log for %s cleared' % namespec
                    clearlog.delay = 0.05
                    return clearlog

        raise ValueError(action)

    def render(self):
        form = self.context.form
        response = self.context.response
        processname = form.get('processname')
        action = form.get('action')
        message = form.get('message')

        if action:
            if not self.callback:
                self.callback = self.make_callback(processname, action)
                return NOT_DONE_YET
            else:
                message = self.callback()
                if message is NOT_DONE_YET:
                    return NOT_DONE_YET
                if message is not None:
                    server_url = form['SERVER_URL']
                    location = server_url + "/" + '?message=%s' % urllib.quote(
                        message)
                    response['headers']['Location'] = location

        supervisord = self.context.supervisord
        rpcinterface = RootRPCInterface(
            [('supervisor', SupervisorNamespaceRPCInterface(supervisord))]
        )

        processnames = []
        for group in supervisord.process_groups.values():
            for gprocname in group.processes.keys():
                processnames.append((group.config.name, gprocname))

        processnames.sort()

        data = []
        for groupname, processname in processnames:
            actions = self.actions_for_process(
                supervisord.process_groups[groupname].processes[processname])
            sent_name = make_namespec(groupname, processname)
            info = rpcinterface.supervisor.getProcessInfo(sent_name)
            data.append({
                'status': info['statename'],
                'name': processname,
                'group': groupname,
                'actions': actions,
                'state': info['state'],
                'description': info['description'],
            })

        root = self.clone()

        if message is not None:
            statusarea = root.findmeld('statusmessage')
            statusarea.attrib['class'] = 'status_msg'
            statusarea.content(message)

        if data:
            iterator = root.findmeld('tr').repeat(data)
            shaded_tr = False

            for tr_element, item in iterator:
                status_text = tr_element.findmeld('status_text')
                status_text.content(item['status'].lower())
                status_text.attrib['class'] = self.css_class_for_state(
                    item['state'])

                info_text = tr_element.findmeld('info_text')
                info_text.content(item['description'])

                anchor = tr_element.findmeld('name_anchor')
                processname = make_namespec(item['group'], item['name'])
                anchor.attributes(href='tail.html?processname=%s' %
                                  urllib.quote(processname))
                anchor.content(processname)

                actions = item['actions']
                actionitem_td = tr_element.findmeld('actionitem_td')

                for li_element, actionitem in actionitem_td.repeat(actions):
                    anchor = li_element.findmeld('actionitem_anchor')
                    if actionitem is None:
                        anchor.attrib['class'] = 'hidden'
                    else:
                        anchor.attributes(href=actionitem['href'],
                                          name=actionitem['name'])
                        anchor.content(actionitem['name'])
                        if actionitem['target']:
                            anchor.attributes(target=actionitem['target'])

                if shaded_tr:
                    tr_element.attrib['class'] = 'shade'
                shaded_tr = not shaded_tr
        else:
            table = root.findmeld('statustable')
            table.replace('No programs to manage')

        root.findmeld('supervisor_version').content(VERSION)
        copyright_year = str(datetime.date.today().year)
        root.findmeld('copyright_date').content(copyright_year)

        return as_string(root.write_xhtmlstring())

class OKView:
    delay = 0
    def __init__(self, context):
        self.context = context

    def __call__(self):
        return {'body': 'OK'}

VIEWS = {
    'index.html': {
        'template': 'ui/status.html',
        'view': StatusView
    },
    'tail.html': {
        'template': 'ui/tail.html',
        'view': TailView,
    },
    'ok.html': {
        'template': None,
        'view': OKView,
    },
}

class supervisor_ui_handler:
    IDENT = 'Supervisor Web UI HTTP Request Handler'

    def __init__(self, supervisord):
        self.supervisord = supervisord

    def match(self, request):
        if request.command not in ('POST', 'GET'):
            return False

        path, params, query, fragment = request.split_uri()

        while path.startswith('/'):
            path = path[1:]

        if not path:
            path = 'index.html'

        for viewname in VIEWS.keys():
            if viewname == path:
                return True

    def handle_request(self, request):
        if request.command == 'POST':
            request.collector = collector(self, request)
        else:
            self.continue_request('', request)

    def continue_request(self, data, request):
        form = {}
        cgi_env = request.cgi_environment()
        form.update(cgi_env)
        if 'QUERY_STRING' not in form:
            form['QUERY_STRING'] = ''

        query = form['QUERY_STRING']

        # we only handle x-www-form-urlencoded values from POSTs
        form_urlencoded = urlparse.parse_qsl(data)
        query_data = urlparse.parse_qs(query)

        for k, v in query_data.items():
            # ignore dupes
            form[k] = v[0]

        for k, v in form_urlencoded:
            # ignore dupes
            form[k] = v

        form['SERVER_URL'] = request.get_server_url()

        path = form['PATH_INFO']
        # strip off all leading slashes
        while path and path[0] == '/':
            path = path[1:]
        if not path:
            path = 'index.html'

        viewinfo = VIEWS.get(path)
        if viewinfo is None:
            # this should never happen if our match method works
            return

        response = {'headers': {}}

        viewclass = viewinfo['view']
        viewtemplate = viewinfo['template']
        context = ViewContext(template=viewtemplate,
                              request=request,
                              form=form,
                              response=response,
                              supervisord=self.supervisord)
        view = viewclass(context)
        pushproducer = request.channel.push_with_producer
        pushproducer(DeferredWebProducer(request, view))

supervisor-4.2.5/supervisor/xmlrpc.py:

import datetime
import re
import socket
import sys
import time
import traceback
import types

from xml.etree.ElementTree import iterparse

from supervisor.compat import xmlrpclib
from supervisor.compat import StringIO
from supervisor.compat import urlparse
from supervisor.compat import as_bytes
from supervisor.compat import as_string
from supervisor.compat import encodestring
from supervisor.compat import decodestring
from supervisor.compat import httplib
from supervisor.compat import PY2

from supervisor.medusa.http_server import get_header
from supervisor.medusa.xmlrpc_handler import xmlrpc_handler
from supervisor.medusa import producers

from supervisor.http import NOT_DONE_YET

class Faults:
    UNKNOWN_METHOD = 1
    INCORRECT_PARAMETERS = 2
    BAD_ARGUMENTS = 3
    SIGNATURE_UNSUPPORTED = 4
    SHUTDOWN_STATE = 6
    BAD_NAME = 10
    BAD_SIGNAL = 11
    NO_FILE = 20
    NOT_EXECUTABLE = 21
    FAILED = 30
    ABNORMAL_TERMINATION = 40
    SPAWN_ERROR = 50
    ALREADY_STARTED = 60
    NOT_RUNNING = 70
    SUCCESS = 80
    ALREADY_ADDED = 90
    STILL_RUNNING = 91
    CANT_REREAD = 92

def getFaultDescription(code):
    for faultname in Faults.__dict__:
        if getattr(Faults, faultname) == code:
            return faultname
    return 'UNKNOWN'

class RPCError(Exception):
    def __init__(self, code, extra=None):
        self.code = code
        self.text = getFaultDescription(code)
        if extra is not None:
            self.text = '%s: %s' % (self.text, extra)

    def __str__(self):
        return 'code=%r, text=%r' % (self.code, self.text)

class DeferredXMLRPCResponse:
    """ A medusa producer that implements a deferred callback; requires
    a subclass of asynchat.async_chat that handles NOT_DONE_YET sentinel """
    CONNECTION = re.compile('Connection: (.*)', re.IGNORECASE)

    def __init__(self, request, callback):
        self.callback = callback
        self.request = request
        self.finished = False
        self.delay = float(callback.delay)

    def more(self):
        if self.finished:
            return ''
        try:
            try:
                value = self.callback()
                if value is NOT_DONE_YET:
                    return NOT_DONE_YET
            except RPCError as err:
                value = xmlrpclib.Fault(err.code, err.text)

            body = xmlrpc_marshal(value)

            self.finished = True

            return self.getresponse(body)

        except:
            tb = traceback.format_exc()
            self.request.channel.server.logger.log(
                "XML-RPC response callback error", tb
                )
            self.finished = True
            self.request.error(500)

    def getresponse(self, body):
        self.request['Content-Type'] = 'text/xml'
        self.request['Content-Length'] = len(body)
        self.request.push(body)

        connection = get_header(self.CONNECTION, self.request.header)

        close_it = 0

        if self.request.version == '1.0':
            if connection == 'keep-alive':
                self.request['Connection'] = 'Keep-Alive'
            else:
                close_it = 1
        elif self.request.version == '1.1':
            if connection == 'close':
                close_it = 1
        elif self.request.version is None:
            close_it = 1

        outgoing_header = producers.simple_producer(
            self.request.build_reply_header())

        if close_it:
            self.request['Connection'] = 'close'

        # prepend the header
        self.request.outgoing.insert(0, outgoing_header)
        outgoing_producer = producers.composite_producer(self.request.outgoing)

        # apply a few final transformations to the output
        self.request.channel.push_with_producer(
            # globbing gives us large packets
            producers.globbing_producer(
                # hooking lets us log the number of bytes sent
                producers.hooked_producer(
                    outgoing_producer,
                    self.request.log
                )
            )
        )

        self.request.channel.current_request = None

        if close_it:
            self.request.channel.close_when_done()

def xmlrpc_marshal(value):
    ismethodresponse = not isinstance(value, xmlrpclib.Fault)
    if ismethodresponse:
        if not isinstance(value, tuple):
            value = (value,)
        body = xmlrpclib.dumps(value, methodresponse=ismethodresponse)
    else:
        body = xmlrpclib.dumps(value)
    return body

class SystemNamespaceRPCInterface:
    def __init__(self, namespaces):
        self.namespaces = {}
        for name, inst in namespaces:
            self.namespaces[name] = inst
        self.namespaces['system'] = self

    def _listMethods(self):
        methods = {}
        for ns_name in self.namespaces:
            namespace = self.namespaces[ns_name]
            for method_name in namespace.__class__.__dict__:
                # introspect; any methods that don't start with underscore
                # are published
                func = getattr(namespace, method_name)
                if callable(func):
                    if not method_name.startswith('_'):
                        sig = '%s.%s' % (ns_name, method_name)
                        methods[sig] = str(func.__doc__)
        return methods

    def listMethods(self):
        """ Return an array listing the available method names

        @return array result  An array of method names available (strings).
        """
        methods = self._listMethods()
        keys = list(methods.keys())
        keys.sort()
        return keys

    def methodHelp(self, name):
        """ Return a string showing the method's documentation

        @param string name  The name of the method.
        @return string result  The documentation for the method name.
        """
        methods = self._listMethods()
        for methodname in methods.keys():
            if methodname == name:
                return methods[methodname]
        raise RPCError(Faults.SIGNATURE_UNSUPPORTED)

    def methodSignature(self, name):
        """ Return an array describing the method signature in the
        form [rtype, ptype, ptype...] where rtype is the return data type
        of the method, and ptypes are the parameter data types that the
        method accepts in method argument order.

        @param string name  The name of the method.
        @return array result  The result.
        """
        methods = self._listMethods()
        for method in methods:
            if method == name:
                rtype = None
                ptypes = []
                parsed = gettags(methods[method])
                for thing in parsed:
                    if thing[1] == 'return':  # tag name
                        rtype = thing[2]  # datatype
                    elif thing[1] == 'param':  # tag name
                        ptypes.append(thing[2])  # datatype
                if rtype is None:
                    raise RPCError(Faults.SIGNATURE_UNSUPPORTED)
                return [rtype] + ptypes
        raise RPCError(Faults.SIGNATURE_UNSUPPORTED)

    def multicall(self, calls):
        """Process an array of calls, and return an array of
        results.  Calls should be structs of the form {'methodName':
        string, 'params': array}.  Each result will either be a
        single-item array containing the result value, or a struct of
        the form {'faultCode': int, 'faultString': string}.  This is
        useful when you need to make lots of small calls without lots
        of round trips.

        @param array calls  An array of call requests
        @return array result  An array of results
        """
        remaining_calls = calls[:]  # [{'methodName':x, 'params':x}, ...]
        callbacks = []  # always empty or 1 callback function only
        results = []  # results of completed calls

        # args are only to fool scoping and are never passed by caller
        def multi(remaining_calls=remaining_calls,
                  callbacks=callbacks,
                  results=results):

            # if waiting on a callback, call it, then remove it if it's done
            if callbacks:
                try:
                    value = callbacks[0]()
                except RPCError as exc:
                    value = {'faultCode': exc.code,
                             'faultString': exc.text}
                except:
                    info = sys.exc_info()
                    errmsg = "%s:%s" % (info[0], info[1])
                    value = {'faultCode': Faults.FAILED,
                             'faultString': 'FAILED: ' + errmsg}
                if value is not NOT_DONE_YET:
                    callbacks.pop(0)
                    results.append(value)

            # if we don't have a callback now, pop calls and call them in
            # order until one returns a callback.
            while (not callbacks) and remaining_calls:
                call = remaining_calls.pop(0)
                name = call.get('methodName', None)
                params = call.get('params', [])
                try:
                    if name is None:
                        raise RPCError(Faults.INCORRECT_PARAMETERS,
                                       'No methodName')
                    if name == 'system.multicall':
                        raise RPCError(Faults.INCORRECT_PARAMETERS,
                                       'Recursive system.multicall forbidden')
                    # make the call, may return a callback or not
                    root = AttrDict(self.namespaces)
                    value = traverse(root, name, params)
                except RPCError as exc:
                    value = {'faultCode': exc.code,
                             'faultString': exc.text}
                except:
                    info = sys.exc_info()
                    errmsg = "%s:%s" % (info[0], info[1])
                    value = {'faultCode': Faults.FAILED,
                             'faultString': 'FAILED: ' + errmsg}

                if isinstance(value, types.FunctionType):
                    callbacks.append(value)
                else:
                    results.append(value)

            # we are done when there's no callback and no more calls queued
            if callbacks or remaining_calls:
                return NOT_DONE_YET
            else:
                return results

        multi.delay = 0.05

        # optimization: multi() is called here instead of just returning
        # multi in case all calls complete and we can return with no delay.
        value = multi()
        if value is NOT_DONE_YET:
            return multi
        else:
            return value

class AttrDict(dict):
    # hack to make a dict's getattr equivalent to its getitem
    def __getattr__(self, name):
        return self.get(name)

class RootRPCInterface:
    def __init__(self, subinterfaces):
        for name, rpcinterface in subinterfaces:
            setattr(self, name, rpcinterface)

def capped_int(value):
    i = int(value)
    if i < xmlrpclib.MININT:
        i = xmlrpclib.MININT
    elif i > xmlrpclib.MAXINT:
        i = xmlrpclib.MAXINT
    return i

def make_datetime(text):
    return datetime.datetime(
        *time.strptime(text, "%Y%m%dT%H:%M:%S")[:6]
    )

class supervisor_xmlrpc_handler(xmlrpc_handler):
    path = '/RPC2'
    IDENT = 'Supervisor XML-RPC Handler'

    unmarshallers = {
        "int": lambda x: int(x.text),
        "i4": lambda x: int(x.text),
        "boolean": lambda x: x.text == "1",
        "string": lambda x: x.text or "",
        "double": lambda x: float(x.text),
        "dateTime.iso8601": lambda x: make_datetime(x.text),
        "array": lambda x: x[0].text,
        "data": lambda x: [v.text for v in x],
        "struct": lambda x: dict([(k.text or "", v.text) for k, v in x]),
        "base64": lambda x: as_string(decodestring(as_bytes(x.text or ""))),
        "param": lambda x: x[0].text,
    }

    def __init__(self, supervisord, subinterfaces):
        self.rpcinterface = RootRPCInterface(subinterfaces)
        self.supervisord = supervisord

    def loads(self, data):
        params = method = None
        for action, elem in iterparse(StringIO(data)):
            unmarshall = self.unmarshallers.get(elem.tag)
            if unmarshall:
                data = unmarshall(elem)
                elem.clear()
                elem.text = data
            elif elem.tag == "value":
                try:
                    data = elem[0].text
                except IndexError:
                    data = elem.text or ""
                elem.clear()
                elem.text = data
            elif elem.tag == "methodName":
                method = elem.text
            elif elem.tag == "params":
                params = tuple([v.text for v in elem])
        return params, method

    def match(self, request):
        return request.uri.startswith(self.path)

    def continue_request(self, data, request):
        logger = self.supervisord.options.logger

        try:
            try:
                # on 2.x, the Expat parser doesn't like Unicode which actually
                # contains non-ASCII characters.  It's a bit of a kludge to
                # do it conditionally here, but it's down to how underlying
                # libs behave
                if PY2:
                    data = data.encode('ascii', 'xmlcharrefreplace')
                params, method = self.loads(data)
            except:
                logger.error(
                    'XML-RPC request data %r is invalid: unmarshallable' %
                    (data,)
                )
                request.error(400)
                return

            # no <methodName> in the request or name is an empty string
            if not method:
                logger.error(
                    'XML-RPC request data %r is invalid: no method name' %
                    (data,)
                )
                request.error(400)
                return

            # we allow xml-rpc clients that do not send empty <params>
            # when there are no parameters for the method call
            if params is None:
                params = ()

            try:
                logger.trace('XML-RPC method called: %s()' % method)
                value = self.call(method, params)
                logger.trace('XML-RPC method %s() returned successfully' %
                             method)
            except RPCError as err:
                # turn RPCError reported by method into a Fault instance
                value = xmlrpclib.Fault(err.code, err.text)
                logger.trace('XML-RPC method %s() returned fault: [%d] %s' % (
                    method, err.code, err.text))

            if isinstance(value, types.FunctionType):
                # returning a function from an RPC method implies that
                # this needs to be a deferred response (it needs to block).
                pushproducer = request.channel.push_with_producer
                pushproducer(DeferredXMLRPCResponse(request, value))
            else:
                # if we get anything but a function, it implies that this
                # response doesn't need to be deferred, we can service it
                # right away.
                body = as_bytes(xmlrpc_marshal(value))
                request['Content-Type'] = 'text/xml'
                request['Content-Length'] = len(body)
                request.push(body)
                request.done()
        except:
            tb = traceback.format_exc()
            logger.critical(
                "Handling XML-RPC request with data %r raised an unexpected "
                "exception: %s" % (data, tb)
            )
            # internal error, report as HTTP server error
            request.error(500)

    def call(self, method, params):
        return traverse(self.rpcinterface, method, params)

def traverse(ob, method, params):
    dotted_parts = method.split('.')
    # security (CVE-2017-11610, don't allow object traversal)
    if len(dotted_parts) != 2:
        raise RPCError(Faults.UNKNOWN_METHOD)
    namespace, method = dotted_parts

    # security (don't allow methods that start with an underscore to
    # be called remotely)
    if method.startswith('_'):
        raise RPCError(Faults.UNKNOWN_METHOD)

    rpcinterface = getattr(ob, namespace, None)
    if rpcinterface is None:
        raise RPCError(Faults.UNKNOWN_METHOD)

    func = getattr(rpcinterface, method, None)
    if not isinstance(func, types.MethodType):
        raise RPCError(Faults.UNKNOWN_METHOD)

    try:
        return func(*params)
    except TypeError:
        raise RPCError(Faults.INCORRECT_PARAMETERS)

class SupervisorTransport(xmlrpclib.Transport):
    """
    Provides a Transport for xmlrpclib that uses
    httplib.HTTPConnection in order to support persistent connections.
    Also support basic auth and UNIX domain socket servers.
    """
    connection = None

    def __init__(self, username=None, password=None, serverurl=None):
        xmlrpclib.Transport.__init__(self)
        self.username = username
        self.password = password
        self.verbose = False
        self.serverurl = serverurl
        if serverurl.startswith('http://'):
            parsed = urlparse.urlparse(serverurl)
            host, port = parsed.hostname, parsed.port
            if port is None:
                port = 80
            def get_connection(host=host, port=port):
                return httplib.HTTPConnection(host, port)
            self._get_connection = get_connection
        elif serverurl.startswith('unix://'):
            def get_connection(serverurl=serverurl):
                # we use 'localhost' here because domain names must be
                # < 64 chars (or we'd use the serverurl filename)
                conn = UnixStreamHTTPConnection('localhost')
                conn.socketfile = serverurl[7:]
                return conn
            self._get_connection = get_connection
        else:
            raise ValueError('Unknown protocol for serverurl %s' % serverurl)

    def close(self):
        if self.connection:
            self.connection.close()
            self.connection = None

    def request(self, host, handler, request_body, verbose=0):
        request_body = as_bytes(request_body)
        if not self.connection:
            self.connection = self._get_connection()
            self.headers = {
                "User-Agent": self.user_agent,
                "Content-Type": "text/xml",
                "Accept": "text/xml"
            }

            # basic auth
            if self.username is not None and self.password is not None:
                unencoded = "%s:%s" % (self.username, self.password)
                encoded = as_string(encodestring(as_bytes(unencoded)))
                encoded = encoded.replace('\n', '')
                encoded = encoded.replace('\012', '')
                self.headers["Authorization"] = "Basic %s" % encoded

        self.headers["Content-Length"] = str(len(request_body))

        self.connection.request('POST', handler, request_body, self.headers)

        r = self.connection.getresponse()

        if r.status != 200:
            self.connection.close()
            self.connection = None
            raise xmlrpclib.ProtocolError(host + handler,
                                          r.status,
                                          r.reason,
                                          '')
        data = r.read()
        data = as_string(data)
        # on 2.x, the Expat parser doesn't like Unicode which actually
        # contains non-ASCII characters
        data = data.encode('ascii', 'xmlcharrefreplace')
        p, u = self.getparser()
        p.feed(data)
        p.close()
        return u.close()

class UnixStreamHTTPConnection(httplib.HTTPConnection):
    def connect(self):  # pragma: no cover
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        # we abuse the host parameter as the socketname
        self.sock.connect(self.socketfile)

def gettags(comment):
    """ Parse documentation strings into JavaDoc-like tokens """
    tags = []

    tag = None
    datatype = None
    name = None
    tag_lineno = lineno = 0
    tag_text = []

    for line in comment.split('\n'):
        line = line.strip()
        if line.startswith("@"):
            tags.append((tag_lineno, tag, datatype, name,
                         '\n'.join(tag_text)))
            parts = line.split(None, 3)
            if len(parts) == 1:
                datatype = ''
                name = ''
                tag_text = []
            elif len(parts) == 2:
                datatype = parts[1]
                name = ''
                tag_text = []
            elif len(parts) == 3:
                datatype = parts[1]
                name = parts[2]
                tag_text = []
            elif len(parts) == 4:
                datatype = parts[1]
                name = parts[2]
                tag_text = [parts[3].lstrip()]
            tag = parts[0][1:]
            tag_lineno = lineno
        else:
            if line:
                tag_text.append(line)
        lineno += 1

    tags.append((tag_lineno, tag, datatype, name, '\n'.join(tag_text)))

    return tags

supervisor-4.2.5/supervisor.egg-info/PKG-INFO:

Metadata-Version: 2.1
Name: supervisor
Version: 4.2.5
Summary: A system for controlling process state under UNIX
Home-page: http://supervisord.org/
Author: Chris McDonough
Author-email: chrism@plope.com
License: BSD-derived (http://www.repoze.org/LICENSE.txt)
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: No Input/Output (Daemon)
Classifier: Intended Audience :: System Administrators
Classifier: Natural Language :: English
Classifier: Operating System :: POSIX
Classifier: Topic :: System :: Boot
Classifier: Topic :: System :: Monitoring
Classifier: Topic :: System :: Systems Administration
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Provides-Extra: testing
License-File: LICENSES.txt

Supervisor
==========

Supervisor is a client/server system that allows its users to
control a number of processes on UNIX-like operating systems.

Supported Platforms
-------------------

Supervisor has been tested and is known to run on Linux (Ubuntu), Mac OS X
(10.4, 10.5, 10.6), Solaris (10 for Intel), and FreeBSD 6.1.  It will
likely work fine on most UNIX systems.

Supervisor will not run at all under any version of Windows.

Supervisor is intended to work on Python 3 version 3.4 or later
and on Python 2 version 2.7.

Documentation
-------------

You can view the current Supervisor documentation online `in HTML format
<http://supervisord.org/>`_.  This is where you should go for detailed
installation and configuration documentation.

Reporting Bugs and Viewing the Source Repository
------------------------------------------------

Please report bugs in the `GitHub issue tracker
<https://github.com/Supervisor/supervisor/issues>`_.

You can view the source repository for supervisor via
`https://github.com/Supervisor/supervisor
<https://github.com/Supervisor/supervisor>`_.

Contributing
------------

We'll review contributions from the community in
`pull requests <https://github.com/Supervisor/supervisor/pulls>`_
on GitHub.
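Since supervisor is a client/server system, its processes can also be controlled programmatically through the same XML-RPC interface that ``supervisorctl`` uses. A minimal sketch follows, assuming an ``inet_http_server`` listening on ``localhost:9001`` (the URL is illustrative, not part of any default configuration):

```python
from xmlrpc.client import ServerProxy

def make_proxy(url="http://localhost:9001/RPC2"):
    # Constructing the proxy does not open a network connection;
    # requests are only sent when a method is actually called.
    return ServerProxy(url)

proxy = make_proxy()

# With a live supervisord behind the URL, calls like these inspect and
# control processes (method names are from the documented supervisor API):
#   proxy.supervisor.getState()
#   proxy.supervisor.getAllProcessInfo()
#   proxy.supervisor.startProcess('myprogram')
#   proxy.supervisor.stopProcess('myprogram')
```

For ``unix://`` server URLs, ``supervisor.xmlrpc.SupervisorTransport`` can be passed to ``ServerProxy`` so requests travel over a UNIX domain socket instead of TCP.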
4.2.5 (2022-12-23) ------------------ - Fixed a bug where the XML-RPC method ``supervisor.startProcess()`` would return 500 Internal Server Error instead of an XML-RPC fault response if the command could not be parsed. Patch by Julien Le Cléach. - Fixed a bug on Python 2.7 where a ``UnicodeDecodeError`` may have occurred when using the web interface. Patch by Vinay Sajip. - Removed use of ``urllib.parse`` functions ``splithost``, ``splitport``, and ``splittype`` deprecated in Python 3.8. - Removed use of ``asynchat`` and ``asyncore`` deprecated in Python 3.10. - The return value of the XML-RPC method ``supervisor.getAllConfigInfo()`` now includes the ``directory``, ``uid``, and ``serverurl`` of the program. Patch by Yellmean. - If a subprocess exits with a unexpected exit code (one not listed in ``exitcodes=`` in a ``[program:x]`` section) then the exit will now be logged at the ``WARN`` level instead of ``INFO``. Patch by Precy Lee. - ``supervisorctl shutdown`` now shows an error message if an argument is given. - File descriptors are now closed using the faster ``os.closerange()`` instead of calling ``os.close()`` in a loop. Patch by tyong920. 4.2.4 (2021-12-30) ------------------ - Fixed a bug where the ``--identifier`` command line argument was ignored. It was broken since at least 3.0a7 (released in 2009) and probably earlier. Patch by Julien Le Cléach. 4.2.3 (2021-12-27) ------------------ - Fixed a race condition where an ``rpcinterface`` extension that subscribed to events would not see the correct process state if it accessed the the ``state`` attribute on a ``Subprocess`` instance immediately in the event callback. Patch by Chao Wang. - Added the ``setuptools`` package to the list of dependencies in ``setup.py`` because it is a runtime dependency. Patch by Louis Sautier. - The web interface will now return a 404 Not Found response if a log file is missing. Previously, it would return 410 Gone. 
  It was changed because 410 is intended to mean that the condition is
  likely to be permanent.  A missing log file is usually temporary, e.g. a
  process that was never started will not have a log file but will have
  one as soon as it is started.

4.2.2 (2021-02-26)
------------------

- Fixed a bug where ``supervisord`` could crash if a subprocess exited
  immediately before trying to kill it.

- Fixed a bug where the ``stdout_syslog`` and ``stderr_syslog`` options
  of a ``[program:x]`` section could not be used unless file logging for
  the same program had also been configured.  The file and syslog options
  can now be used independently.  Patch by Scott Stroupe.

- Fixed a bug where the ``logfile`` option in the ``[supervisord]``
  section would not log to syslog when the special filename of ``syslog``
  was supplied, as is supported by all other log filename options.
  Patch by Franck Cuny.

- Fixed a bug where environment variables defined in ``environment=`` in
  the ``[supervisord]`` section or a ``[program:x]`` section could not be
  used in ``%(ENV_x)s`` expansions.  Patch by MythRen.

- The ``supervisorctl signal`` command now allows a signal to be sent
  when a process is in the ``STOPPING`` state.  Patch by Mike Gould.

- ``supervisorctl`` and ``supervisord`` now print help when given ``-?``
  in addition to the existing ``-h``/``--help``.

4.2.1 (2020-08-20)
------------------

- Fixed a bug on Python 3 where a network error could cause
  ``supervisord`` to crash with the error ``:can't concat str to bytes``.
  Patch by Vinay Sajip.

- Fixed a bug where a test would fail on systems with glibc 2.31 because
  the default value of SOMAXCONN changed.

4.2.0 (2020-04-30)
------------------

- When ``supervisord`` is run in the foreground, a new ``--silent``
  option suppresses the main log from being echoed to ``stdout`` as it
  normally would.  Patch by Trevor Foster.

- Parsing ``command=`` now supports a new expansion, ``%(numprocs)d``,
  that expands to the value of ``numprocs=`` in the same section.
  Patch by Santjago Corkez.

- Web UI buttons no longer use background images.  Patch by Dmytro
  Karpovych.

- The Web UI now has a link to view ``tail -f stderr`` for a process in
  addition to the existing ``tail -f stdout`` link.  Based on a patch by
  OuroborosCoding.

- The HTTP server will now send an ``X-Accel-Buffering: no`` header in
  logtail responses to fix Nginx proxy buffering.  Patch by Weizhao Li.

- When ``supervisord`` reaps an unknown PID, it will now log a
  description of the ``waitpid`` status.  Patch by Andrey Zelenchuk.

- Fixed a bug introduced in 4.0.3 where ``supervisorctl tail -f foo |
  grep bar`` would fail with the error ``NoneType object has no attribute
  'lower'``.  This only occurred on Python 2.7 and only when piped.
  Patch by Slawa Pidgorny.

4.1.0 (2019-10-19)
------------------

- Fixed a bug on Python 3 only where logging to syslog did not work and
  would log the exception ``TypeError: a bytes-like object is required,
  not 'str'`` to the main ``supervisord`` log file.  Patch by Vinay Sajip
  and Josh Staley.

- Fixed a Python 3.8 compatibility issue caused by the removal of
  ``cgi.escape()``.  Patch by Mattia Procopio.

- The ``meld3`` package is no longer a dependency.  A version of
  ``meld3`` is now included within the ``supervisor`` package itself.

4.0.4 (2019-07-15)
------------------

- Fixed a bug where ``supervisorctl tail <name> stdout`` would actually
  tail ``stderr``.  Note that ``tail <name>`` without the explicit
  ``stdout`` correctly tailed ``stdout``.  The bug existed since 3.0a3
  (released in 2007).  Patch by Arseny Hofman.

- Improved the warning message added in 4.0.3 so it is now emitted for
  both ``tail`` and ``tail -f``.  Patch by Vinay Sajip.

- CVE-2019-12105.  Documentation addition only, no code changes.  This
  CVE states that ``inet_http_server`` does not use authentication by
  default (`details `_).  Note that ``inet_http_server`` is not enabled
  by default, and is also not enabled in the example configuration output
  by ``echo_supervisord_conf``.
  The behavior of the ``inet_http_server`` options has been correctly
  documented, and has not changed, since the feature was introduced in
  2006.  A new `warning message `_ was added to the documentation.

4.0.3 (2019-05-22)
------------------

- Fixed an issue on Python 2 where running ``supervisorctl tail -f
  <name>`` would fail with the message ``Cannot connect, error: <error>``
  where it may have worked on Supervisor 3.x.  The issue was introduced
  in Supervisor 4.0.0 due to new bytes/strings conversions necessary to
  add Python 3 support.  For ``supervisorctl`` to correctly display logs
  with Unicode characters, the terminal encoding specified by the
  environment must support it.  If not, a ``UnicodeEncodeError`` may
  still occur on either Python 2 or 3.  A new warning message is now
  printed if a problematic terminal encoding is detected.  Patch by
  Vinay Sajip.

4.0.2 (2019-04-17)
------------------

- Fixed a bug where inline comments in the config file were not parsed
  correctly such that the comments were included as part of the values.
  This only occurred on Python 2, and only where the environment had an
  extra ``configparser`` module installed.  The bug was introduced in
  Supervisor 4.0.0 because of Python 2/3 compatibility code that expected
  a Python 2 environment to only have a ``ConfigParser`` module.

4.0.1 (2019-04-10)
------------------

- Fixed an issue on Python 3 where an ``OSError: [Errno 29] Illegal
  seek`` would occur if ``logfile`` in the ``[supervisord]`` section was
  set to a special file like ``/dev/stdout`` that was not seekable, even
  if ``logfile_maxbytes = 0`` was set to disable rotation.  The issue
  only affected the main log and not child logs.  Patch by Martin
  Falatic.

4.0.0 (2019-04-05)
------------------

- Support for Python 3 has been added.  On Python 3, Supervisor requires
  Python 3.4 or later.
  Many thanks to Vinay Sajip, Scott Maxwell, Palm Kevin, Tres Seaver,
  Marc Abramowitz, Son Nguyen, Shane Hathaway, Evan Andrews, and Ethan
  Hann who all made major contributions to the Python 3 porting effort.
  Thanks also to all contributors who submitted issue reports and patches
  towards this effort.

- Support for Python 2.4, 2.5, and 2.6 has been dropped.  On Python 2,
  Supervisor now requires Python 2.7.

- The ``supervisor`` package is no longer a namespace package.

- The behavior of the config file expansion ``%(here)s`` has changed.  In
  previous versions, a bug caused ``%(here)s`` to always expand to the
  directory of the root config file.  Now, when ``%(here)s`` is used
  inside a file included via ``[include]``, it will expand to the
  directory of that file.  Thanks to Alex Eftimie and Zoltan Toth-Czifra
  for the patches.

- The default value for the config file setting ``exitcodes=``, the
  expected exit codes of a program, has changed.  In previous versions,
  it was ``0,2``.  This caused issues with Golang programs where
  ``panic()`` causes the exit code to be ``2``.  The default value for
  ``exitcodes`` is now ``0``.

- An undocumented feature where multiple ``supervisorctl`` commands could
  be combined on a single line separated by semicolons has been removed.

- ``supervisorctl`` will now set its exit code to a non-zero value when
  an error condition occurs.  Previous versions did not set the exit code
  for most error conditions so it was almost always 0.  Patch by Luke
  Weber.

- Added new ``stdout_syslog`` and ``stderr_syslog`` options to the config
  file.  These are boolean options that indicate whether process output
  will be sent to syslog.  Supervisor can now log to both files and
  syslog at the same time.  Specifying a log filename of ``syslog`` is
  still supported but deprecated.  Patch by Jason R. Coombs.

3.4.0 (2019-04-05)
------------------

- FastCGI programs (``[fcgi-program:x]`` sections) can now be used in
  groups (``[group:x]``).  Patch by Florian Apolloner.
- Added a new ``socket_backlog`` option to the ``[fcgi-program:x]``
  section to set the listen(2) socket backlog.  Patch by Nenad Merdanovic.

- Fixed a bug where ``SupervisorTransport`` (the XML-RPC transport used
  with Unix domain sockets) did not close the connection when ``close()``
  was called on it.  Patch by Jérome Perrin.

- Fixed a bug where ``supervisorctl start <name>`` could hang for a long
  time if the system clock rolled back.  Patch by Joe LeVeque.

3.3.5 (2018-12-22)
------------------

- Fixed a race condition where ``supervisord`` would cancel a shutdown
  already in progress if it received ``SIGHUP``.  Now, ``supervisord``
  will ignore ``SIGHUP`` if shutdown is already in progress.  Patch by
  Livanh.

- Fixed a bug where searching for a relative command ignored changes to
  ``PATH`` made in ``environment=``.  Based on a patch by dongweiming.

- ``childutils.ProcessCommunicationsProtocol`` now does an explicit
  ``flush()`` after writing to ``stdout``.

- A more descriptive error message is now emitted if a name in the config
  file contains a disallowed character.  Patch by Rick van Hattem.

3.3.4 (2018-02-15)
------------------

- Fixed a bug where rereading the configuration would not detect changes
  to eventlisteners.  Patch by Michael Ihde.

- Fixed a bug where the warning ``Supervisord is running as root and it
  is searching for its config file`` may have been incorrectly shown by
  ``supervisorctl`` if its executable name was changed.

- Fixed a bug where ``supervisord`` would continue starting up if the
  ``[supervisord]`` section of the config file specified ``user=`` but
  ``setuid()`` to that user failed.  It will now exit immediately if it
  cannot drop privileges.

- Fixed a bug in the web interface where redirect URLs did not have a
  slash between the host and query string, which caused issues when
  proxying with Nginx.  Patch by Luke Weber.

- When ``supervisord`` successfully drops privileges during startup, it
  is now logged at the ``INFO`` level instead of ``CRIT``.
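The relative-command fix in the 3.3.5 entry above (respecting a ``PATH`` set via ``environment=``) can be sketched with the stdlib; this is an illustration of the idea, not Supervisor's actual implementation, and ``find_command`` is a hypothetical helper name:

```python
import os
import shutil

def find_command(cmd, env):
    """Resolve a relative command name against the PATH from a program's
    environment mapping, falling back to the inherited PATH (sketch)."""
    search_path = env.get("PATH", os.environ.get("PATH", ""))
    # shutil.which accepts an explicit path= argument, so the lookup can
    # honor the configured environment rather than only os.environ
    return shutil.which(cmd, path=search_path)

# "sh" is resolved via the PATH we supply, not the inherited one
print(find_command("sh", {"PATH": "/bin:/usr/bin"}))
```

The key point is passing the configured ``PATH`` into the lookup instead of letting the resolver consult ``os.environ`` implicitly.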
- The HTTP server now returns a Content-Type header specifying UTF-8
  encoding.  This may fix display issues in some browsers.  Patch by
  Katenkka.

3.3.3 (2017-07-24)
------------------

- Fixed CVE-2017-11610.  A vulnerability was found where an authenticated
  client can send a malicious XML-RPC request to ``supervisord`` that
  will run arbitrary shell commands on the server.  The commands will be
  run as the same user as ``supervisord``.  Depending on how
  ``supervisord`` has been configured, this may be root.  See
  https://github.com/Supervisor/supervisor/issues/964 for details.

3.3.2 (2017-06-03)
------------------

- Fixed a bug introduced in 3.3.0 where the ``supervisorctl reload``
  command would crash ``supervisord`` with the error ``OSError: [Errno
  9] Bad file descriptor`` if the ``kqueue`` poller was used.  Patch by
  Jared Suttles.

- Fixed a bug introduced in 3.3.0 where ``supervisord`` could get stuck
  in a polling loop after the web interface was used, causing high CPU
  usage.  Patch by Jared Suttles.

- Fixed a bug where if ``supervisord`` attempted to start but aborted due
  to another running instance of ``supervisord`` with the same config,
  the pidfile of the running instance would be deleted.  Patch by
  coldnight.

- Fixed a bug where ``supervisorctl fg`` would swallow most XML-RPC
  faults.  ``fg`` now prints the fault and exits.

- Parsing the config file will now fail with an error message if a
  process or group name contains a forward slash character (``/``) since
  it would break the URLs used by the web interface.

- ``supervisorctl reload`` now shows an error message if an argument is
  given.  Patch by Joel Krauska.

- ``supervisorctl`` commands ``avail``, ``reread``, and ``version`` now
  show an error message if an argument is given.

3.3.1 (2016-08-02)
------------------

- Fixed an issue where ``supervisord`` could hang when responding to HTTP
  requests (including ``supervisorctl`` commands) if the system time was
  set back after ``supervisord`` was started.
- Zope ``trackrefs``, a debugging tool that was included in the ``tests``
  directory but hadn't been used for years, has been removed.

3.3.0 (2016-05-14)
------------------

- ``supervisord`` will now use ``kqueue``, ``poll``, or ``select`` to
  monitor its file descriptors, in that order, depending on what is
  available on the system.  Previous versions used ``select`` only and
  would crash with the error ``ValueError: filedescriptor out of range in
  select()`` when running a large number of subprocesses (whatever number
  resulted in enough file descriptors to exceed the fixed-size file
  descriptor table used by ``select``, which is typically 1024).  Patch
  by Igor Sobreira.

- ``/etc/supervisor/supervisord.conf`` has been added to the config file
  search paths.  Many versions of Supervisor packaged for Debian and
  Ubuntu have included a patch that added this path.  This difference was
  reported in a number of tickets as a source of confusion and upgrade
  difficulties, so the path has been added.  Patch by Kelvin Wong.

- Glob patterns in the ``[include]`` section now support the
  ``host_node_name`` expansion.  Patch by Paul Lockaby.

- Files included via the ``[include]`` section are now logged at the
  ``INFO`` level instead of ``WARN``.  Patch by Daniel Hahler.

3.2.4 (2017-07-24)
------------------

- Backported from Supervisor 3.3.3: Fixed CVE-2017-11610.  A
  vulnerability was found where an authenticated client can send a
  malicious XML-RPC request to ``supervisord`` that will run arbitrary
  shell commands on the server.  The commands will be run as the same
  user as ``supervisord``.  Depending on how ``supervisord`` has been
  configured, this may be root.  See
  https://github.com/Supervisor/supervisor/issues/964 for details.

3.2.3 (2016-03-19)
------------------

- 400 Bad Request is now returned if an XML-RPC request is received with
  invalid body data.  In previous versions, 500 Internal Server Error
  was returned.
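The kqueue/poll/select fallback described in the 3.3.0 entry above is the same idea the stdlib ``selectors`` module implements: ``DefaultSelector`` picks the best mechanism the platform offers, sidestepping ``select()``'s fixed-size fd table. A minimal sketch of that pattern (Supervisor uses its own hand-rolled pollers, not this module):

```python
import os
import selectors

# DefaultSelector resolves to kqueue/epoll/poll/select depending on the
# platform, so large fd numbers do not overflow select()'s table.
sel = selectors.DefaultSelector()

r, w = os.pipe()
sel.register(r, selectors.EVENT_READ, data="child stdout")

os.write(w, b"hello")
for key, _mask in sel.select(timeout=1.0):
    # key.fd is the raw descriptor; key.data is whatever we attached
    print(key.data, os.read(key.fd, 1024))

sel.close()
os.close(r)
os.close(w)
```

The selector abstraction lets the same readiness loop run unchanged whether the kernel provides ``kqueue``, ``epoll``, or only ``select``.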
3.2.2 (2016-03-04)
------------------

- Parsing the config file will now fail with an error message if an
  ``inet_http_server`` or ``unix_http_server`` section contains a
  ``username=`` but no ``password=``.  In previous versions,
  ``supervisord`` would start with this invalid configuration but the
  HTTP server would always return a 500 Internal Server Error.  Thanks to
  Chris Ergatides for reporting this issue.

3.2.1 (2016-02-06)
------------------

- Fixed a server exception ``OverflowError: int exceeds XML-RPC limits``
  that made ``supervisorctl status`` unusable if the system time was far
  into the future.  The XML-RPC API returns timestamps as XML-RPC
  integers, but timestamps will exceed the maximum value of an XML-RPC
  integer in January 2038 ("Year 2038 Problem").  For now, timestamps
  exceeding the maximum integer will be capped at the maximum to avoid
  the exception and retain compatibility with existing API clients.  In a
  future version of the API, the return type for timestamps will be
  changed.

3.2.0 (2015-11-30)
------------------

- Files included via the ``[include]`` section are read in sorted order.
  In past versions, the order was undefined.  Patch by Ionel Cristian
  Mărieș.

- ``supervisorctl start`` and ``supervisorctl stop`` now complete more
  quickly when handling many processes.  Thanks to Chris McDonough for
  this patch.  See: https://github.com/Supervisor/supervisor/issues/131

- Environment variables are now expanded for all config file options.
  Patch by Dexter Tad-y.

- Added ``signalProcess``, ``signalProcessGroup``, and
  ``signalAllProcesses`` XML-RPC methods to the supervisor RPC interface.
  Thanks to Casey Callendrello, Marc Abramowitz, and Moriyoshi Koizumi
  for the patches.

- Added a ``signal`` command to ``supervisorctl``.  Thanks to Moriyoshi
  Koizumi and Marc Abramowitz for the patches.

- Errors caused by bad values in a config file now show the config
  section to make debugging easier.  Patch by Marc Abramowitz.
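The timestamp capping described in the 3.2.1 entry above can be illustrated with the stdlib, which exposes the XML-RPC integer ceiling directly; this is a sketch of the workaround's idea, not Supervisor's exact code, and ``capped_timestamp`` is a hypothetical helper:

```python
import time
from xmlrpc.client import MAXINT  # 2**31 - 1, the XML-RPC <i4> ceiling

def capped_timestamp(ts=None):
    """Cap a Unix timestamp at the largest value an XML-RPC integer can
    carry, mirroring the Year-2038 workaround described above (sketch)."""
    ts = int(time.time() if ts is None else ts)
    return min(ts, MAXINT)

# a post-2038 timestamp is clamped rather than raising OverflowError
print(capped_timestamp(2**33))
```

Capping keeps existing clients working at the cost of precision; changing the wire type is the eventual fix the entry mentions.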
- Setting ``redirect_stderr=true`` in an ``[eventlistener:x]`` section is
  now disallowed because any messages written to ``stderr`` would
  interfere with the eventlistener protocol on ``stdout``.

- Fixed a bug where spawning a process could cause ``supervisord`` to
  crash if an ``IOError`` occurred while setting up logging.  One way
  this could happen is if a log filename was accidentally set to a
  directory instead of a file.  Thanks to Grzegorz Nosek for reporting
  this issue.

- Fixed a bug introduced in 3.1.0 where ``supervisord`` could crash when
  attempting to display a resource limit error.

- Fixed a bug where ``supervisord`` could crash with the message
  ``Assertion failed for processname: RUNNING not in STARTING`` if a time
  change caused the last start time of the process to be in the future.
  Thanks to Róbert Nagy, Sergey Leschenko, and samhair for the patches.

- A warning is now logged if an eventlistener enters the UNKNOWN state,
  which usually indicates a bug in the eventlistener.  Thanks to Steve
  Winton and detailyang for reporting issues that led to this change.

- Errors from the web interface are now logged at the ``ERROR`` level.
  Previously, they were logged at the ``TRACE`` level and easily missed.
  Thanks to Thomas Güttler for reporting this issue.

- Fixed ``DeprecationWarning: Parameters to load are deprecated.  Call
  .resolve and .require separately.`` on setuptools >= 11.3.

- If ``redirect_stderr=true`` and ``stderr_logfile=auto``, no stderr log
  file will be created.  In previous versions, an empty stderr log file
  would be created.  Thanks to Łukasz Kożuchowski for the initial patch.

- Fixed an issue in Medusa that would cause ``supervisorctl tail -f`` to
  disconnect if many other ``supervisorctl`` commands were run in
  parallel.  Patch by Stefan Friesel.

3.1.4 (2017-07-24)
------------------

- Backported from Supervisor 3.3.3: Fixed CVE-2017-11610.
  A vulnerability was found where an authenticated client can send a
  malicious XML-RPC request to ``supervisord`` that will run arbitrary
  shell commands on the server.  The commands will be run as the same
  user as ``supervisord``.  Depending on how ``supervisord`` has been
  configured, this may be root.  See
  https://github.com/Supervisor/supervisor/issues/964 for details.

3.1.3 (2014-10-28)
------------------

- Fixed an XML-RPC bug where the ElementTree-based parser handled strings
  like ``hello`` but not strings like ``hello``, which are valid in the
  XML-RPC spec.  This fixes compatibility with the Apache XML-RPC client
  for Java and possibly other clients.

3.1.2 (2014-09-07)
------------------

- Fixed a bug where ``tail group:*`` in ``supervisorctl`` would show a
  500 Internal Server Error rather than a BAD_NAME fault.

- Fixed a bug where the web interface would show a 500 Internal Server
  Error instead of an error message for some process start faults.

- Removed medusa files not used by Supervisor.

3.1.1 (2014-08-11)
------------------

- Fixed a bug where ``supervisorctl tail -f name`` output would stop if
  log rotation occurred while tailing.

- Fixed a crash that occurred when starting many programs at once if more
  file descriptors were opened than the environment permitted.  Now a
  spawn error is logged instead.

- "Channel delay" is now computed properly, fixing symptoms where a
  ``supervisorctl start`` command would hang for a very long time when a
  process (or many processes) were writing heavily to their stdout or
  stderr.  See comments attached to
  https://github.com/Supervisor/supervisor/pull/263.

- Added ``docs/conf.py``, ``docs/Makefile``, and
  ``supervisor/scripts/*.py`` to the release package.

3.1.0 (2014-07-29)
------------------

- The output of the ``start``, ``stop``, ``restart``, and ``clear``
  commands in ``supervisorctl`` has been changed to be consistent with
  the ``status`` command.
  Previously, the ``status`` command would show a process like
  ``foo:foo_01`` but starting that process would show ``foo_01: started``
  (note the group prefix ``foo:`` was missing).  Now, starting the
  process will show ``foo:foo_01: started``.  Suggested by Chris Wood.

- The ``status`` command in ``supervisorctl`` now supports group name
  syntax: ``status group:*``.

- The process column in the table output by the ``status`` command in
  ``supervisorctl`` now expands to fit the widest name.

- The ``update`` command in ``supervisorctl`` now accepts optional group
  names.  When group names are specified, only those groups will be
  updated.  Patch by Gary M. Josack.

- Tab completion in ``supervisorctl`` has been improved and now works for
  more cases.  Thanks to Mathieu Longtin and Marc Abramowitz for the
  patches.

- Attempting to start or stop a process group in ``supervisorctl`` with
  the ``group:*`` syntax will now show the same error message as the
  ``process`` syntax if the name does not exist.  Previously, it would
  show a Python exception.  Patch by George Ang.

- Added new ``PROCESS_GROUP_ADDED`` and ``PROCESS_GROUP_REMOVED`` events.
  These events are fired when process groups are added or removed from
  Supervisor's runtime configuration when using the ``add`` and
  ``remove`` commands in ``supervisorctl``.  Patch by Brent Tubbs.

- Stopping a process in the backoff state now changes it to the stopped
  state.  Previously, an attempt to stop a process in backoff would be
  ignored.  Patch by Pascal Varet.

- The ``directory`` option is now expanded separately for each process in
  a homogeneous process group.  This allows each process to have its own
  working directory.  Patch by Perttu Ranta-aho.

- Removed ``setuptools`` from the ``requires`` list in ``setup.py``
  because it caused installation issues on some systems.

- Fixed a bug in Medusa where the HTTP Basic authorizer would cause an
  exception if the password contained a colon.  Thanks to Thomas Güttler
  for reporting this issue.
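The colon-in-password entry above is a classic Basic-auth parsing pitfall: the decoded credentials must be split on the first colon only, because the password itself may contain colons. A sketch of the correct parsing (not Medusa's actual code; ``parse_basic_auth`` is a hypothetical helper):

```python
import base64

def parse_basic_auth(header_value):
    """Split Basic-auth credentials on the FIRST colon only, so a
    password containing ':' survives intact (illustrative sketch)."""
    encoded = header_value.split(" ", 1)[1]           # strip "Basic "
    decoded = base64.b64decode(encoded).decode("utf-8")
    username, _, password = decoded.partition(":")    # not .split(":")
    return username, password

header = "Basic " + base64.b64encode(b"alice:se:cr:et").decode("ascii")
print(parse_basic_auth(header))  # -> ('alice', 'se:cr:et')
```

Using ``partition(":")`` instead of an unbounded ``split(":")`` is the whole fix: everything after the first colon belongs to the password.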
- Fixed an XML-RPC bug where calling ``supervisor.clearProcessLogs()``
  with a name like ``group:*`` would cause a 500 Internal Server Error
  rather than returning a BAD_NAME fault.

- Fixed a hang that could occur in ``supervisord`` if log rotation is
  used and an outside program deletes an active log file.  Patch by
  Magnus Lycka.

- A warning is now logged if a glob pattern in an ``[include]`` section
  does not match any files.  Patch by Daniel Hahler.

3.0.1 (2017-07-24)
------------------

- Backported from Supervisor 3.3.3: Fixed CVE-2017-11610.  A
  vulnerability was found where an authenticated client can send a
  malicious XML-RPC request to ``supervisord`` that will run arbitrary
  shell commands on the server.  The commands will be run as the same
  user as ``supervisord``.  Depending on how ``supervisord`` has been
  configured, this may be root.  See
  https://github.com/Supervisor/supervisor/issues/964 for details.

3.0 (2013-07-30)
----------------

- Parsing the config file will now fail with an error message if a
  process or group name contains characters that are not compatible with
  the eventlistener protocol.

- Fixed a bug where the ``tail -f`` command in ``supervisorctl`` would
  fail if the combined length of the username and password was over 56
  characters.

- Reading the config file now gives a separate error message when the
  config file exists but can't be read.  Previously, any error reading
  the file would be reported as "could not find config file".  Patch by
  Jens Rantil.

- Fixed an XML-RPC bug where array elements after the first would be
  ignored when using the ElementTree-based XML parser.  Patch by Zev
  Benjamin.

- Fixed the usage message output by ``supervisorctl`` to show the correct
  default config file path.  Patch by Alek Storm.

3.0b2 (2013-05-28)
------------------

- The behavior of the program option ``user`` has changed.
  In all previous versions, if ``supervisord`` failed to switch to the
  user, a warning would be sent to the stderr log but the child process
  would still be spawned.  This means that a mistake in the config file
  could result in a child process being unintentionally spawned as root.
  Now, ``supervisord`` will not spawn the child unless it was able to
  successfully switch to the user.  Thanks to Igor Partola for reporting
  this issue.

- If a user specified in the config file does not exist on the system,
  ``supervisord`` will now print an error and refuse to start.

- Reverted a change to logging introduced in 3.0b1 that was intended to
  allow multiple processes to log to the same file with the rotating log
  handler.  The implementation caused ``supervisord`` to crash during
  reload and to leak file handles.  Also, since log rotation options are
  given on a per-program basis, impossible configurations could be
  created (conflicting rotation options for the same file).  Given this,
  and the fact that ``supervisord`` now has syslog support, it was
  decided to remove this feature.  A warning was added to the
  documentation that two processes may not log to the same file.

- Fixed a bug where parsing ``command=`` could cause ``supervisord`` to
  crash if ``shlex.split()`` fails, such as on bad quoting.  Patch by
  Scott Wilson.

- It is now possible to use ``supervisorctl`` on a machine with no
  ``supervisord.conf`` file by supplying the connection information in
  command line options.  Patch by Jens Rantil.

- Fixed a bug where ``supervisord`` would crash if the syslog handler was
  used and ``supervisord`` received SIGUSR2 (log reopen request).

- Fixed an XML-RPC bug where calling ``supervisor.getProcessInfo()`` with
  a bad name would cause a 500 Internal Server Error rather than
  returning a BAD_NAME fault.

- Added a favicon to the web interface.  Patch by Caio Ariede.

- Fixed a test failure due to incorrect handling of daylight saving time
  in the childutils tests.  Patch by Ildar Hizbulin.
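The ``command=`` crash fixed above comes from ``shlex.split()`` raising ``ValueError`` on malformed quoting; a Supervisor-independent sketch of tokenizing defensively (``parse_command`` is a hypothetical helper, and Supervisor reports a config error rather than returning ``None``):

```python
import shlex

def parse_command(command):
    """Tokenize a command line the way POSIX shells do, returning None
    instead of crashing when the quoting is malformed (sketch only)."""
    try:
        return shlex.split(command)
    except ValueError as exc:  # e.g. "No closing quotation"
        print("bad command %r: %s" % (command, exc))
        return None

print(parse_command('/bin/echo "hello world"'))  # -> ['/bin/echo', 'hello world']
print(parse_command('/bin/echo "unterminated'))  # -> None
```

Catching ``ValueError`` at the parse site converts a daemon crash into an ordinary configuration error.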
- Fixed a number of pyflakes warnings for unused variables, imports, and
  dead code.  Patch by Philippe Ombredanne.

3.0b1 (2012-09-10)
------------------

- Fixed a bug where parsing ``environment=`` did not verify that
  key/value pairs were correctly separated.  Patch by Martijn Pieters.

- Fixed a bug in the HTTP server code that could cause unnecessary delays
  when sending large responses.  Patch by Philip Zeyliger.

- When ``supervisord`` starts up as root, if the ``-c`` flag was not
  provided, a warning is now emitted to the console.  Rationale:
  ``supervisord`` looks in the current working directory for a
  ``supervisord.conf`` file; someone might trick the root user into
  starting ``supervisord`` while cd'ed into a directory that has a rogue
  ``supervisord.conf``.

- A warning was added to the documentation about the security
  implications of starting ``supervisord`` without the ``-c`` flag.

- Added a boolean program option ``stopasgroup``, defaulting to false.
  When true, the flag causes supervisor to send the stop signal to the
  whole process group.  This is useful for programs, such as Flask in
  debug mode, that do not propagate stop signals to their children,
  leaving them orphaned.

- Python 2.3 is no longer supported.  The last version that supported
  Python 2.3 is Supervisor 3.0a12.

- Removed the unused "supervisor_rpc" entry point from ``setup.py``.

- Fixed a bug in the rotating log handler that would cause unexpected
  results when two processes were set to log to the same file.  Patch by
  Whit Morriss.

- Fixed a bug in config file reloading where each reload could leak
  memory because a list of warning messages would be appended but never
  cleared.  Patch by Philip Zeyliger.

- Added a new syslog log handler.  Thanks to Denis Bilenko, Nathan L.
  Smith, and Jason R. Coombs, who each contributed to the patch.

- Put all change history into a single file (CHANGES.txt).

3.0a12 (2011-12-06)
-------------------

- Released to replace a broken 3.0a11 package where non-Python files were
  not included in the package.
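The ``stopasgroup`` behavior described in the 3.0b1 entry above amounts to signalling the child's whole process group rather than just the child; a stdlib sketch of that mechanism on POSIX (an illustration, not Supervisor's actual implementation):

```python
import os
import signal
import subprocess

# Start a child in its own session so it leads a fresh process group;
# signalling the group then reaches any grandchildren too, instead of
# leaving them orphaned when only the direct child is signalled.
child = subprocess.Popen(["sleep", "60"], start_new_session=True)

os.killpg(child.pid, signal.SIGTERM)  # pid == pgid for a session leader
child.wait()

print(child.returncode)  # negative signal number: -15 for SIGTERM
```

This is exactly why ``stopasgroup`` helps with servers like Flask's debug reloader, which fork workers that would otherwise survive the parent's stop signal.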
3.0a11 (2011-12-06)
-------------------

- Added a new file, ``PLUGINS.rst``, with a listing of third-party
  plugins for Supervisor.  Contributed by Jens Rantil.

- The ``pid`` command in ``supervisorctl`` can now be used to retrieve
  the PIDs of child processes.  See ``help pid``.  Patch by Gregory
  Wisniewski.

- Added a new ``host_node_name`` expansion that will be expanded to the
  value returned by Python's ``platform.node`` (see
  http://docs.python.org/library/platform.html#platform.node).
  Patch by Joseph Kondel.

- Fixed a bug in the web interface where pages over 64K would be
  truncated.  Thanks to Drew Perttula and Timothy Jones for reporting
  this.

- Renamed ``README.txt`` to ``README.rst`` so GitHub renders the file as
  ReStructuredText.

- The XML-RPC server is now compatible with clients that do not send
  empty ``<params>`` when there are no parameters for the method call.
  Thanks to Johannes Becker for reporting this.

- Fixed ``supervisorctl --help`` output to show the correct program name.

- The behavior of the configuration options ``minfds`` and ``minprocs``
  has changed.  Previously, if a hard limit was less than ``minfds`` or
  ``minprocs``, ``supervisord`` would unconditionally abort with an
  error.  Now, ``supervisord`` will attempt to raise the hard limit.
  This may succeed if ``supervisord`` is run as root, otherwise the error
  is printed as before.  Patch by Benoit Sigoure.

- Added a boolean program option ``killasgroup``, defaulting to false.
  When true, if supervisor must resort to sending SIGKILL to
  stop/terminate the process, the signal is sent to the process's whole
  process group instead, to take care of any children as well and not
  leave them behind.  Patch by Samuele Pedroni.

- Environment variables may now be used in the configuration file for
  options that support string expansion.  Patch by Aleksey Sivokon.

- Fixed a race condition where ``supervisord`` might not act on a signal
  sent to it.  Thanks to Adar Dembo for reporting the issue and supplying
  the initial patch.
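The ``host_node_name`` expansion above maps to Python's ``platform.node()``; a quick illustration of the underlying call:

```python
import platform

# platform.node() returns the machine's network (host) name, or "" if it
# cannot be determined -- the value host_node_name expands to.
hostname = platform.node()
print(hostname)
```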
- Updated the output of ``echo_supervisord_conf`` to fix typos and
  improve comments.  Thanks to Jens Rantil for noticing these.

- Fixed a possible 500 Server Error from the web interface.  This was
  observed when using Supervisor on a domain socket behind Nginx, where
  Supervisor would raise an exception because REMOTE_ADDR was not set.
  Patch by David Bennett.

3.0a10 (2011-03-30)
-------------------

- Fixed the stylesheet of the web interface so the footer line won't
  overlap a long process list.  Thanks to Derek DeVries for the patch.

- Allow rpc interface plugins to register new event types.

- Bug fix for FCGI sockets not getting cleaned up when the ``reload``
  command is issued from ``supervisorctl``.  Also, the default behavior
  has changed for FCGI sockets.  They are now closed whenever the number
  of running processes in a group hits zero.  Previously, the sockets
  were kept open unless a group-level stop command was issued.

- Better error message when the HTTP server cannot reverse-resolve a
  hostname to an IP address.  Previous behavior: show a socket error.
  Current behavior: emit a suggestion to stdout.

- Environment variables set via the ``environment=`` value within the
  ``[supervisord]`` section had no effect.  Thanks to Wyatt Baldwin for
  a patch.

- Fixed a bug where stopping a process would cause process output that
  happened after the stop request was issued to be lost.  See
  https://github.com/Supervisor/supervisor/issues/11.

- Moved 2.X change log entries into ``HISTORY.txt``.

- Converted ``CHANGES.txt`` and ``README.txt`` into proper
  ReStructuredText and included them in the ``long_description`` in
  ``setup.py``.

- Added a ``tox.ini`` to the package (run via ``tox`` in the package
  dir).  Tests supervisor on multiple Python versions.

3.0a9 (2010-08-13)
------------------

- Use rich comparison methods rather than ``__cmp__`` to sort process
  configs and process group configs to better straddle Python versions.
  (Thanks to Jonathan Riboux for identifying the problem and supplying an
  initial patch.)
- Fixed the ``test_supervisorctl.test_maintail_dashf`` test for Python
  2.7.  (Thanks to Jonathan Riboux for identifying the problem and
  supplying an initial patch.)

- Fixed the way that ``supervisor.datatypes.url`` computes a "good" URL
  for compatibility with Python 2.7 and Python >= 2.6.5.  URLs with bogus
  "schemes://" will now be accepted as a version-straddling compromise
  (before, they were rejected and supervisor would not start).  (Thanks
  to Jonathan Riboux for identifying the problem and supplying an initial
  patch.)

- Added a ``-v`` / ``--version`` option to supervisord: print the
  supervisord version number to stdout and exit.  (Roger Hoover)

- Import iterparse from ``xml.etree`` when available (e.g. Python 2.6).
  Patch by Sidnei da Silva.

- Fixed the URL to the supervisor-users mailing list.  Patch by Sidnei
  da Silva.

- When parsing ``environment=`` in the config file, changes introduced in
  3.0a8 prevented Supervisor from parsing some characters commonly found
  in paths unless quoting was used, as in this example::

      environment=HOME='/home/auser'

  Supervisor once again allows the above line to be written as::

      environment=HOME=/home/auser

  Alphanumeric characters, "_", "/", ".", "+", "-", "(", ")", and ":"
  can all be used as a value without quoting.  If any other characters
  are needed in the value, please quote it as in the first example above.
  Thanks to Paul Heideman for reporting this issue.

- Supervisor will now look for its config file in locations relative to
  the executable path, allowing it to be used more easily in virtual
  environments.  If ``sys.argv[0]`` is
  ``/path/to/venv/bin/supervisorctl``, supervisor will now look for its
  config file in ``/path/to/venv/etc/supervisord.conf`` and
  ``/path/to/venv/supervisord.conf`` in addition to the other standard
  locations.  Patch by Chris Rossi.
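The executable-relative search described in the last entry above is simple path arithmetic; a sketch of deriving those two candidate paths (illustrative only — ``venv_relative_candidates`` is a hypothetical helper and Supervisor's full search order includes other standard locations):

```python
import os

def venv_relative_candidates(argv0):
    """Given sys.argv[0] such as /path/to/venv/bin/supervisorctl, derive
    config file candidates relative to the venv root (sketch)."""
    bindir = os.path.dirname(os.path.abspath(argv0))   # .../venv/bin
    root = os.path.dirname(bindir)                     # .../venv
    return [
        os.path.join(root, "etc", "supervisord.conf"),
        os.path.join(root, "supervisord.conf"),
    ]

print(venv_relative_candidates("/path/to/venv/bin/supervisorctl"))
# -> ['/path/to/venv/etc/supervisord.conf', '/path/to/venv/supervisord.conf']
```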
3.0a8 (2010-01-20)
------------------

- Don't clean up file descriptors on first supervisord invocation: this is a lame workaround for Snow Leopard systems that use libdispatch and are receiving "Illegal instruction" messages at supervisord startup time. Restarting supervisord via "supervisorctl restart" may still cause a crash on these systems.

- Got rid of Medusa hashbang headers in various files to ease RPM packaging.

- Allow umask to be 000 (patch contributed by Rowan Nairn).

- Fixed a bug introduced in 3.0a7 where supervisorctl wouldn't properly ask for a username/password combination from a password-protected supervisord if they weren't filled in within the ``[supervisorctl]`` section username/password values. It now properly asks for a username and password.

- Fixed a bug introduced in 3.0a7 where setup.py would not detect the Python version correctly. Patch by Daniele Paolella.

- Fixed a bug introduced in 3.0a7 where parsing a string of key/value pairs failed on Python 2.3 due to use of regular expression syntax introduced in Python 2.4.

- Removed the test suite for the ``memmon`` console script, which was moved to the Superlance package in 3.0a7.

- Added release dates to CHANGES.txt.

- Reloading the config for an fcgi process group did not close the fcgi socket. Now, the socket is closed whenever the group is stopped as a unit (including during config update). However, if you stop all the processes in a group individually, the socket will remain open to allow for graceful restarts of FCGI daemons. (Roger Hoover)

- Rereading the config did not pick up changes to the socket parameter in an fcgi-program section. (Roger Hoover)

- Made a more friendly exception message when an FCGI socket cannot be created. (Roger Hoover)

- Fixed a bug where the --serverurl option of supervisorctl would not accept a URL with a "unix" scheme. (Jason Kirtland)

- Running the tests now requires the "mock" package. This dependency has been added to "tests_require" in setup.py.
  (Roger Hoover)

- Added support for setting the ownership and permissions for an FCGI socket. This is done using new "socket_owner" and "socket_mode" options in an [fcgi-program:x] section. See the manual for details. (Roger Hoover)

- Fixed a bug where the FCGI socket reference count was not getting decremented on spawn error. (Roger Hoover)

- Fixed a Python 2.6 deprecation warning on use of the "sha" module.

- Updated ez_setup.py to one that knows about setuptools 0.6c11.

- Running "supervisorctl shutdown" no longer dumps a Python backtrace when it can't connect to supervisord on the expected socket. Thanks to Benjamin Smith for reporting this.

- Removed use of collections.deque in our bundled version of asynchat because it broke compatibility with Python 2.3.

- The sample configuration output by "echo_supervisord_conf" now correctly shows the default for "autorestart" as "unexpected". Thanks to William Dode for noticing it showed the wrong value.

3.0a7 (2009-05-24)
------------------

- We now bundle our own patched version of Medusa contributed by Jason Kirtland to allow Supervisor to run on Python 2.6. This was done because Python 2.6 introduced backwards incompatible changes to asyncore and asynchat in the stdlib.

- The console script ``memmon``, introduced in Supervisor 3.0a4, has been moved to Superlance (http://pypi.python.org/pypi/superlance). The Superlance package contains other useful monitoring tools designed to run under Supervisor.

- Supervisorctl now correctly interprets all of the error codes that can be returned when starting a process. Patch by Francesc Alted.

- New ``stdout_events_enabled`` and ``stderr_events_enabled`` config options have been added to the ``[program:x]``, ``[fcgi-program:x]``, and ``[eventlistener:x]`` sections. These enable the emitting of new PROCESS_LOG events for a program. If unspecified, the default is False.
  If enabled for a subprocess, and data is received from the stdout or stderr of the subprocess while not in the special capture mode used by PROCESS_COMMUNICATION, an event will be emitted. Event listeners can subscribe to either PROCESS_LOG_STDOUT or PROCESS_LOG_STDERR individually, or PROCESS_LOG for both.

- Values for subprocess environment variables specified with environment= in supervisord.conf can now be optionally quoted, allowing them to contain commas. Patch by Tim Godfrey.

- Added a new event type, REMOTE_COMMUNICATION, that is emitted by a new RPC method, supervisor.sendRemoteCommEvent().

- Patch for bug #268 (KeyError on ``here`` expansion for stdout/stderr_logfile) from David E. Kindred.

- Add ``reread``, ``update``, and ``avail`` commands based on Anders Quist's ``online_config_reload.diff`` patch. This patch extends the "add" and "drop" commands with automagical behavior.

  In supervisorctl::

    supervisor> status
    bar                          RUNNING   pid 14864, uptime 18:03:42
    baz                          RUNNING   pid 23260, uptime 0:10:16
    foo                          RUNNING   pid 14866, uptime 18:03:42
    gazonk                       RUNNING   pid 23261, uptime 0:10:16
    supervisor> avail
    bar                          in use    auto      999:999
    baz                          in use    auto      999:999
    foo                          in use    auto      999:999
    gazonk                       in use    auto      999:999
    quux                         avail     auto      999:999

  Now we add this to our conf::

    [group:zegroup]
    programs=baz,gazonk

  Then we reread conf::

    supervisor> reread
    baz: disappeared
    gazonk: disappeared
    quux: available
    zegroup: available
    supervisor> avail
    bar                          in use    auto      999:999
    foo                          in use    auto      999:999
    quux                         avail     auto      999:999
    zegroup:baz                  avail     auto      999:999
    zegroup:gazonk               avail     auto      999:999
    supervisor> status
    bar                          RUNNING   pid 14864, uptime 18:04:18
    baz                          RUNNING   pid 23260, uptime 0:10:52
    foo                          RUNNING   pid 14866, uptime 18:04:18
    gazonk                       RUNNING   pid 23261, uptime 0:10:52

  The magic make-it-so command::

    supervisor> update
    baz: stopped
    baz: removed process group
    gazonk: stopped
    gazonk: removed process group
    zegroup: added process group
    quux: added process group
    supervisor> status
    bar                          RUNNING   pid 14864, uptime 18:04:43
    foo                          RUNNING   pid 14866, uptime 18:04:43
    quux                         RUNNING   pid 23561, uptime 0:00:02
    zegroup:baz                  RUNNING   pid 23559, uptime 0:00:02
    zegroup:gazonk               RUNNING   pid 23560, uptime 0:00:02
    supervisor> avail
    bar                          in use    auto      999:999
    foo                          in use    auto      999:999
    quux                         in use    auto      999:999
    zegroup:baz                  in use    auto      999:999
    zegroup:gazonk               in use    auto      999:999

- Fix bug with symptom "KeyError: 'process_name'" when using a logfile name including the documented ``process_name`` Python string expansions.

- Tab completions in the supervisorctl shell, and a foreground mode for Supervisor, implemented as a part of GSoC. The supervisorctl program now has a ``fg`` command, which makes it possible to supply inputs to a process, and see its output/error stream in real time.

- Process config reloading implemented by Anders Quist. The supervisorctl program now has the commands "add" and "drop". "add <name>" adds the process group implied by <name> in the config file. "drop <name>" removes the process group <name> from the running configuration (it must already be stopped). This makes it possible to add processes to and remove processes from a running supervisord without restarting the supervisord process.

- Fixed a bug where opening the HTTP servers would fail silently for socket errors other than errno.EADDRINUSE.

- Thanks to Dave Peticolas, using "reload" against a supervisord that is running in the background no longer causes supervisord to crash.

- Configuration options for logfiles now accept mixed case reserved words (e.g. "AUTO" or "auto") for consistency with other options.

- childutils.eventdata was buggy; it could not deal with carriage returns in data. See http://www.plope.com/software/collector/257. Thanks to Ian Bicking.

- Per-process exitcodes= configuration now will not accept exit codes that are not 8-bit unsigned integers (supervisord will not start when one of the exit codes is outside the range of 0 - 255).

- Per-process ``directory`` value can now contain expandable values like ``%(here)s``.
  (See http://www.plope.com/software/collector/262).

- Accepted patch from Roger Hoover to allow for a new sort of process group: "fcgi-program". Adding one of these to your supervisord.conf allows you to control fastcgi programs. FastCGI programs cannot belong to heterogeneous groups. The configuration for FastCGI programs is the same as regular programs except for an additional "socket" parameter. Substitution happens on the socket parameter with the ``here`` and ``program_name`` variables::

    [fcgi-program:fcgi_test]
    ;socket=tcp://localhost:8002
    socket=unix:///path/to/fcgi/socket

- Supervisorctl now supports a plugin model for supervisorctl commands.

- Added the ability to retrieve supervisord's own pid through supervisor.getPID() on the XML-RPC interface or a new "pid" command on supervisorctl.

3.0a6 (2008-04-07)
------------------

- The RotatingFileLogger had a race condition in its doRollover method whereby a file might not actually exist despite a call to os.path.exists on the line above a place where we try to remove it. We catch the exception now and ignore the missing file.

3.0a5 (2008-03-13)
------------------

- Supervisorctl now supports persistent readline history. To enable, add ``history_file = <file_path>`` to the ``[supervisorctl]`` section in your supervisord.conf file.

- Multiple commands may now be issued on one supervisorctl command line, e.g. "restart prog; tail -f prog". Separate commands with a single semicolon; they will be executed in order as you would expect.

3.0a4 (2008-01-30)
------------------

- 3.0a3 broke Python 2.3 backwards compatibility.

- On Debian Sarge, one user reported that a call to options.mktempfile would fail with an "[Errno 9] Bad file descriptor" at supervisord startup time. I was unable to reproduce this, but we found a workaround that seemed to work for him and it's included in this release. See http://www.plope.com/software/collector/252 for more information. Thanks to William Dode.

- The fault ``ALREADY_TERMINATED`` has been removed.
  It was only raised by supervisor.sendProcessStdin(). That method now returns ``NOT_RUNNING`` for parity with the other methods. (Mike Naberezny)

- The fault TIMED_OUT has been removed. It was not used.

- Supervisor now depends on meld3 0.6.4, which does not compile its C extensions by default, so there is no more need to faff around with NO_MELD3_EXTENSION_MODULES during installation if you don't have a C compiler or the Python development libraries on your system.

- Instead of making a user root around for the sample.conf file, provide a convenience command "echo_supervisord_conf", which he can use to echo the sample.conf to his terminal (and redirect to a file appropriately). This is a new user convenience (especially one who has no Python experience).

- Added ``numprocs_start`` config option to ``[program:x]`` and ``[eventlistener:x]`` sections. This is an offset used to compute the first integer that ``numprocs`` will begin to start from. Contributed by Antonio Beamud Montero.

- Added capability for ``[include]`` config section to config format. This section must contain a single key "files", which must name a space-separated list of file globs that will be included in supervisor's configuration. Contributed by Ian Bicking.

- Invoking the ``reload`` supervisorctl command could trigger a bug in supervisord which caused it to crash. See http://www.plope.com/software/collector/253 . Thanks to William Dode for a bug report.

- The ``pidproxy`` script was made into a console script.

- The ``password`` value in both the ``[inet_http_server]`` and ``[unix_http_server]`` sections can now optionally be specified as a SHA hexdigest instead of as cleartext. Values prefixed with ``{SHA}`` will be considered SHA hex digests.
  To encrypt a password to a form suitable for pasting into the configuration file using Python, do, e.g.::

    >>> import sha
    >>> '{SHA}' + sha.new('thepassword').hexdigest()
    '{SHA}82ab876d1387bfafe46cc1c8a2ef074eae50cb1d'

- The subtypes of the events PROCESS_STATE_CHANGE (and PROCESS_STATE_CHANGE itself) have been removed, replaced with a simpler set of PROCESS_STATE subscribable event types.

  The new event types are::

    PROCESS_STATE_STOPPED
    PROCESS_STATE_EXITED
    PROCESS_STATE_STARTING
    PROCESS_STATE_STOPPING
    PROCESS_STATE_BACKOFF
    PROCESS_STATE_FATAL
    PROCESS_STATE_RUNNING
    PROCESS_STATE_UNKNOWN
    PROCESS_STATE # abstract

  PROCESS_STATE_STARTING replaces::

    PROCESS_STATE_CHANGE_STARTING_FROM_STOPPED
    PROCESS_STATE_CHANGE_STARTING_FROM_BACKOFF
    PROCESS_STATE_CHANGE_STARTING_FROM_EXITED
    PROCESS_STATE_CHANGE_STARTING_FROM_FATAL

  PROCESS_STATE_RUNNING replaces PROCESS_STATE_CHANGE_RUNNING_FROM_STARTED.

  PROCESS_STATE_BACKOFF replaces PROCESS_STATE_CHANGE_BACKOFF_FROM_STARTING.

  PROCESS_STATE_STOPPING replaces::

    PROCESS_STATE_CHANGE_STOPPING_FROM_RUNNING
    PROCESS_STATE_CHANGE_STOPPING_FROM_STARTING

  PROCESS_STATE_EXITED replaces PROCESS_STATE_CHANGE_EXITED_FROM_RUNNING.

  PROCESS_STATE_STOPPED replaces PROCESS_STATE_CHANGE_STOPPED_FROM_STOPPING.

  PROCESS_STATE_FATAL replaces PROCESS_STATE_CHANGE_FATAL_FROM_BACKOFF.

  PROCESS_STATE_UNKNOWN replaces PROCESS_STATE_CHANGE_TO_UNKNOWN.

  PROCESS_STATE replaces PROCESS_STATE_CHANGE.

  The PROCESS_STATE_CHANGE_EXITED_OR_STOPPED abstract event is gone.

  All process state changes have at least "processname", "groupname", and "from_state" (the name of the previous state) in their serializations.

  PROCESS_STATE_EXITED additionally has "expected" (1 or 0) and "pid" (the process id) in its serialization.

  PROCESS_STATE_RUNNING, PROCESS_STATE_STOPPING, and PROCESS_STATE_STOPPED additionally have "pid" in their serializations.

  PROCESS_STATE_STARTING and PROCESS_STATE_BACKOFF have "tries" in their serialization (initially "0", bumped +1 each time a start retry happens).
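As an aside not from the original changelog: the Python 2 ``sha`` module shown in the example above was removed in Python 3. The same ``{SHA}``-prefixed value can be produced with the standard ``hashlib`` module (the helper name below is hypothetical):

```python
import hashlib

def sha_password(cleartext):
    # Equivalent of the Python 2 ``sha`` example above; the old ``sha``
    # module computed SHA-1, which hashlib.sha1 also provides.
    return '{SHA}' + hashlib.sha1(cleartext.encode('utf-8')).hexdigest()

print(sha_password('thepassword'))
# '{SHA}82ab876d1387bfafe46cc1c8a2ef074eae50cb1d'
```

The output for ``'thepassword'`` matches the digest shown in the Python 2 example.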
- Remove documentation from README.txt, point people to http://supervisord.org/manual/ .

- The eventlistener request/response protocol has changed. OK/FAIL must now be wrapped in a RESULT envelope so we can use it for more specialized communications.

  Previously, to signify success, an event listener would write the string ``OK\n`` to its stdout. To signify that the event was seen but couldn't be handled by the listener and should be rebuffered, an event listener would write the string ``FAIL\n`` to its stdout.

  In the new protocol, the listener must write the string::

    RESULT {resultlen}\n{result}

  For example, to signify OK::

    RESULT 2\nOK

  To signify FAIL::

    RESULT 4\nFAIL

  See the scripts/sample_eventlistener.py script for an example.

- To provide a hook point for custom results returned from event handlers (see above), the [eventlistener:x] configuration sections now accept a "result_handler=" parameter, e.g. "result_handler=supervisor.dispatchers:default_handler" (the default) or "result_handler=mypackage:myhandler". The keys are pkgutil "entry point" specifications (importable Python function names).

  Result handlers must be callables which accept two arguments: one named "event" which represents the event, and the other named "result", which represents the listener's result. A result handler either executes successfully or raises an exception. If it raises a supervisor.dispatchers.RejectEvent exception, the event will be rebuffered, and the eventhandler will be placed back into the ACKNOWLEDGED state. If it raises any other exception, the event handler will be placed in the UNKNOWN state. If it does not raise any exception, the event is considered successfully processed. A result handler's return value is ignored.

  Writing a result handler is an "in case of emergency, break glass" sort of thing; it is not something to be used for arbitrary business code. In particular, handlers *must not block* for any appreciable amount of time.
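The RESULT envelope described above can be sketched in a minimal listener skeleton. This is an illustration, not from the changelog: the ``READY`` handshake and the ``len`` header key are assumptions based on how later supervisor releases document the protocol, and the helper names are hypothetical:

```python
import sys

def parse_headers(line):
    # The header is a sequence of space-separated key:value pairs,
    # e.g. "ver:3.0 server:supervisor serial:21 len:22".
    return dict(token.split(':', 1) for token in line.split())

def make_result(result):
    # Wrap OK/FAIL (or any custom payload) in the RESULT envelope.
    return 'RESULT %d\n%s' % (len(result), result)

def listen_forever(stdin=sys.stdin, stdout=sys.stdout):
    # Skeleton main loop; supervisord speaks to the listener over
    # stdin/stdout.  Not invoked here -- supervisord would run this
    # script as an [eventlistener:x] process.
    while True:
        stdout.write('READY\n')  # handshake: ready for the next event
        stdout.flush()
        headers = parse_headers(stdin.readline())
        stdin.read(int(headers['len']))   # consume (and here ignore) payload
        stdout.write(make_result('OK'))   # or make_result('FAIL') to rebuffer
        stdout.flush()
```

For example, ``make_result('OK')`` produces exactly the ``RESULT 2\nOK`` string shown above.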
  The standard eventlistener result handler (supervisor.dispatchers:default_handler) does nothing if it receives an "OK" and will raise a supervisor.dispatchers.RejectEvent exception if it receives any other value.

- Supervisord now emits TICK events, which happen every N seconds. Three types of TICK events are available: TICK_5 (every five seconds), TICK_60 (every minute), and TICK_3600 (every hour). Event listeners may subscribe to one of these types of events to perform every-so-often processing. TICK events are subtypes of the EVENT type.

- Get rid of the OSX platform-specific memory monitor and replace it with memmon.py, which works on both Linux and Mac OS. This script is now a console script named "memmon".

- Allow the "web handler" (the handler which receives http requests from browsers visiting the web UI of supervisor) to deal with POST requests.

- RPC interface methods stopProcess(), stopProcessGroup(), and stopAllProcesses() now take an optional "wait" argument that defaults to True for parity with the start methods.

3.0a3 (2007-10-02)
------------------

- Supervisorctl now reports a better error message when the main supervisor XML-RPC namespace is not registered. Thanks to Mike Orr for reporting this. (Mike Naberezny)

- Create a ``scripts`` directory within the supervisor package, move ``pidproxy.py`` there, and place sample event listener and comm event programs within the directory.

- When an event notification is buffered (either because a listener rejected it or because all listeners were busy when we attempted to send it originally), we now rebuffer it in a way that will result in it being retried earlier than it used to be.

- When a listener process exits (unexpectedly) before transitioning from the BUSY state, rebuffer the event that was being processed.

- The supervisorctl ``tail`` command now accepts a trailing specifier: ``stderr`` or ``stdout``, which, respectively, allow a user to tail the stderr or stdout of the named process.
  When this specifier is not provided, tail defaults to stdout.

- The supervisorctl ``clear`` command now clears both stderr and stdout logs for the given process.

- When a process encounters a spawn error as a result of a failed execve or when it cannot setuid to a given uid, it now puts this info into the process' stderr log rather than its stdout log.

- The event listener protocol header now contains the ``server`` identifier, the ``pool`` that the event emanated from, and the ``poolserial`` as well as the values it previously contained (version, event name, serial, and length). The server identifier is taken from the config file options value ``identifier``, the ``pool`` value is the name of the listener pool that this event emanates from, and the ``poolserial`` is a serial number assigned to the event local to the pool that is processing it.

- The event listener protocol header is now a sequence of key-value pairs rather than a list of positional values. Previously, a representative header looked like::

    SUPERVISOR3.0 PROCESS_COMMUNICATION_STDOUT 30 22\n

  Now it looks like::

    ver:3.0 server:supervisor serial:21 ...

- Specific event payload serializations have changed. All event types that deal with processes now include the pid of the process that the event is describing. In event serialization "header" values, we've removed the space between the header name and the value, and headers are now separated by a space instead of a line feed. The names of keys in all event types have had underscores removed.

- Abandon the use of the Python stdlib ``logging`` module for speed and cleanliness purposes. We've rolled our own.

- Fix crash on start if AUTO logging is used with a max_bytes of zero for a process.

- Improve process communication event performance.

- The process config parameters ``stdout_capturefile`` and ``stderr_capturefile`` are no longer valid.
  They have been replaced with the ``stdout_capture_maxbytes`` and ``stderr_capture_maxbytes`` parameters, which are meant to be suffix-multiplied integers. They both default to zero. When they are zero, process communication event capturing is not performed. When either is nonzero, the value represents the maximum number of bytes that will be captured between process event start and end tags. This change was to support the fact that we no longer keep capture data in a separate file; we just use a FIFO in RAM to maintain capture info. Users who don't care about process communication events, or who haven't changed the defaults for ``stdout_capturefile`` or ``stderr_capturefile``, needn't do anything to their configurations to deal with this change.

- Log message levels have been normalized. In particular, process stdin/stdout is now logged at ``debug`` level rather than at ``trace`` level (``trace`` level is now reserved for output useful typically for debugging supervisor itself). See "Supervisor Log Levels" in the documentation for more info.

- When an event is rebuffered (because all listeners are busy or a listener rejected the event), the rebuffered event is now inserted at the head of the listener event queue. This doesn't guarantee event emission in natural ordering, because if a listener rejects an event or dies while it's processing an event, it can take an arbitrary amount of time for the event to be rebuffered, and other events may be processed in the meantime. But if pool listeners never reject an event or die while processing an event, this guarantees that events will be emitted in the order that they were received, because if all listeners are busy, the rebuffered event will be tried again "first" on the next go-around.

- Removed the EVENT_BUFFER_OVERFLOW event type.

- The supervisorctl xmlrpc proxy can now communicate with supervisord using a persistent HTTP connection.

- A new module "supervisor.childutils" was added.
  This module provides utilities for Python scripts which act as children of supervisord. Most notably, it contains an API method "getRPCInterface" that allows you to obtain an xmlrpclib ServerProxy that is willing to communicate with the parent supervisor. It also contains utility functions that allow for parsing of supervisor event listener protocol headers. A pair of scripts (loop_eventgen.py and loop_listener.py) were added to the script directory that serve as examples of how to use the childutils module.

- A new envvar is added to child process environments: SUPERVISOR_SERVER_URL. This contains the server URL for the supervisord running the child.

- An ``OK`` URL was added at ``/ok.html`` which just returns the string ``OK`` (can be used for up checks or speed checks via plain-old-HTTP).

- An additional command-line option ``--profile_options`` is accepted by the supervisord script for developer use::

    supervisord -n -c sample.conf --profile_options=cumulative,calls

  The values are sort_stats options that can be passed to the standard Python profiler's PStats sort_stats method. When you exit supervisor, it will print Python profiling output to stdout.

- If cElementTree is installed in the Python used to invoke supervisor, an alternate (faster, by about 2X) XML parser will be used to parse XML-RPC request bodies. cElementTree was added as an "extras_require" option in setup.py.

- Added the ability to start, stop, and restart process groups to supervisorctl. To start a group, use ``start groupname:*``. To start multiple groups, use ``start groupname1:* groupname2:*``. Equivalent commands work for "stop" and "restart". You can mix and match short processnames, fully-specified group:process names, and groupsplats on the same line for any of these commands.

- Added a ``directory`` option to the process config. If you set this option, supervisor will chdir to this directory before executing the child program (and thus it will be the child's cwd).
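A minimal sketch (not from the changelog) of how a child script might pick up the SUPERVISOR_SERVER_URL envvar mentioned above. The ``get_server_url`` helper is hypothetical; note that connecting to a ``unix://`` URL requires the special XML-RPC transport that supervisor.childutils provides:

```python
import os

def get_server_url(environ=None):
    # supervisord places SUPERVISOR_SERVER_URL in each child's
    # environment (see the changelog entry above).  This helper is a
    # hypothetical convenience, not part of supervisor's API.
    environ = os.environ if environ is None else environ
    return environ['SUPERVISOR_SERVER_URL']

# In a real child process, one would typically hand os.environ to
# supervisor.childutils.getRPCInterface(), which also knows how to
# speak XML-RPC over unix:// socket URLs (requires a running
# supervisord, so it is only shown commented out here):
#
#   from supervisor import childutils
#   rpc = childutils.getRPCInterface(os.environ)
#   rpc.supervisor.getAPIVersion()
```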
- Added a ``umask`` option to the process config. If you set this option, supervisor will set the umask of the child program. (Thanks to Ian Bicking for the suggestion.)

- A pair of scripts ``osx_memmon_eventgen.py`` and ``osx_memmon_listener.py`` have been added to the scripts directory. If they are used together as described in their comments, processes which are consuming "too much" memory will be restarted. The ``eventgen`` script only works on OSX (my main development platform) but it should be trivially generalizable to other operating systems.

- The long form ``--configuration`` (-c) command line option for supervisord was broken. Reported by Mike Orr. (Mike Naberezny)

- New log level: BLAT (blather). We log all supervisor-internal-related debugging info here. Thanks to Mike Orr for the suggestion.

- We now allow supervisor to listen on both a UNIX domain socket and an inet socket instead of making them mutually exclusive. As a result, the options "http_port", "http_username", "http_password", "sockchmod" and "sockchown" are no longer part of the ``[supervisord]`` section configuration. These have been supplanted by two other sections: ``[unix_http_server]`` and ``[inet_http_server]``. You'll need to insert one or the other (depending on whether you want to listen on a UNIX domain socket or a TCP socket respectively) or both into your supervisord.conf file. These sections have their own options (where applicable) for port, username, password, chmod, and chown. See README.txt for more information about these sections.

- All supervisord command-line options related to "http_port", "http_username", "http_password", "sockchmod" and "sockchown" have been removed (see above point for rationale).

- The option that *used* to be ``sockchown`` within the ``[supervisord]`` section (and is now named ``chown`` within the ``[unix_http_server]`` section) used to accept a dot-separated user.group value. The separator now must be a colon ":", e.g. "user:group".
  Unices allow for dots in usernames, so this change is a bugfix. Thanks to Ian Bicking for the bug report.

- If a '-c' option is not specified on the command line, both supervisord and supervisorctl will search for one in the paths ``./supervisord.conf``, ``./etc/supervisord.conf`` (relative to the current working dir when supervisord or supervisorctl is invoked) or in ``/etc/supervisord.conf`` (the old default path). These paths are searched in order, and supervisord and supervisorctl will use the first one found. If none are found, supervisor will fail to start.

- The Python string expression ``%(here)s`` (referring to the directory in which the configuration file was found) can be used within the following sections/options within the config file::

    unix_http_server:file
    supervisor:directory
    supervisor:logfile
    supervisor:pidfile
    supervisor:childlogdir
    supervisor:environment
    program:environment
    program:stdout_logfile
    program:stderr_logfile
    program:process_name
    program:command

- The ``--environment`` aka ``-b`` option was removed from the list of available command-line switches to supervisord (use "A=1 B=2 bin/supervisord" instead).

- If the socket filename (the tail-end of the unix:// URL) was longer than 64 characters, supervisorctl would fail with an encoding error at startup.

- The ``identifier`` command-line argument was not functional.

- Fixed http://www.plope.com/software/collector/215 (bad error message in supervisorctl when the program command was not found on PATH).

- Some child processes may not have been shut down properly at supervisor shutdown time.

- Move to a ZPL-derived (but not ZPL) license available from http://www.repoze.org/LICENSE.txt; it's slightly less restrictive than the ZPL (no servicemark clause).

- Spurious errors related to unclosed files ("bad file descriptor", typically) were evident at supervisord "reload" time (when using the "reload" command from supervisorctl).

- We no longer bundle ez_setup to bootstrap setuptools installation.
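As a hypothetical illustration (program name and paths invented for the example, not from the changelog), ``%(here)s`` in the options listed above expands to the directory containing the config file:

```ini
; If this file lives at /path/to/venv/etc/supervisord.conf,
; %(here)s expands to /path/to/venv/etc
[program:myapp]
command=%(here)s/../bin/myapp
stdout_logfile=%(here)s/../log/myapp.log
```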
3.0a2 (2007-08-24)
------------------

- Fixed the README.txt example for defining the supervisor RPC interface in the configuration file. Thanks to Drew Perttula.

- Fixed a bug where process communication events would not have the proper payload if the payload data was very short.

- When supervisord attempted to kill a process with SIGKILL after the process was not killed within "stopwaitsecs" using a "normal" kill signal, supervisord would crash with an improper AssertionError. Thanks to Calvin Hendryx-Parker.

- On Linux, Supervisor would consume too much CPU in an effective "busywait" between the time a subprocess exited and the time at which supervisor was notified of its exit status. Thanks to Drew Perttula.

- RPC interface behavior change: if the RPC method "sendProcessStdin" is called against a process that has closed its stdin file descriptor (e.g. it has done the equivalent of "sys.stdin.close(); os.close(0)"), we return a NO_FILE fault instead of accepting the data.

- Changed the semantics of the process configuration ``autorestart`` parameter with respect to processes which move between the RUNNING and EXITED states. ``autorestart`` was previously a boolean. Now it's a trinary, accepting one of ``false``, ``unexpected``, or ``true``. If it's ``false``, a process will never be automatically restarted from the EXITED state. If it's ``unexpected``, a process that enters the EXITED state will be automatically restarted if it exited with an exit code that was not named in the process config's ``exitcodes`` list. If it's ``true``, a process that enters the EXITED state will be automatically restarted unconditionally. The default is now ``unexpected`` (it was previously ``true``). The re-addition of this feature is a reversion of the behavior change noted in the changelog notes for 3.0a1 that asserted we never cared about the process' exit status when determining whether to restart it or not.
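A hypothetical config fragment (program name and the exitcodes value are illustrative, not from the changelog) showing the trinary ``autorestart`` described above:

```ini
[program:worker]
command=/usr/bin/worker
; restart only when the exit code is NOT listed in exitcodes;
; the other accepted values are "true" and "false"
autorestart=unexpected
exitcodes=0
```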
- setup.py develop (and presumably setup.py install) would fail under Python 2.3.3, because setuptools attempted to import ``splituser`` from urllib2, and it didn't exist.

- It's now possible to use ``setup.py install`` and ``setup.py develop`` on systems which do not have a C compiler if you set the environment variable "NO_MELD3_EXTENSION_MODULES=1" in the shell in which you invoke these commands (versions of meld3 > 0.6.1 respect this envvar and do not try to compile optional C extensions when it's set).

- The test suite would fail on Python versions <= 2.3.3 because the "assertTrue" and "assertFalse" methods of unittest.TestCase didn't exist in those versions.

- The ``supervisorctl`` and ``supervisord`` wrapper scripts were disused in favor of using setuptools' ``console_scripts`` entry point settings.

- Documentation files and the sample configuration file are put into the generated supervisor egg's ``doc`` directory.

- Using the web interface would cause fairly dramatic memory leakage. We now require a version of meld3 that does not appear to leak memory from its C extensions (0.6.3).

3.0a1 (2007-08-16)
------------------

- The default config file comment documented 10 secs as the default for the ``startsecs`` value in the process config; in reality it was 1 sec. Thanks to Christoph Zwerschke.

- Make note of subprocess environment behavior in README.txt. Thanks to Christoph Zwerschke.

- New "strip_ansi" config file option attempts to strip ANSI escape sequences from logs for smaller/more readable logs (submitted by Mike Naberezny).

- The XML-RPC method supervisor.getVersion() has been renamed for clarity to supervisor.getAPIVersion(). The old name is aliased for compatibility but is deprecated and will be removed in a future version (Mike Naberezny).

- Improved web interface styling (Mike Naberezny, Derek DeVries).

- The XML-RPC method supervisor.startProcess() now checks that the file exists and is executable (Mike Naberezny).
- Two environment variables, "SUPERVISOR_PROCESS_NAME" and "SUPERVISOR_PROCESS_GROUP", are set in the environment of child processes, representing the name of the process and group in supervisor's configuration.

- Process state map change: a process may now move directly from the STARTING state to the STOPPING state (as a result of a stop request).

- Behavior change: if ``autorestart`` is true, even if a process exits with an "expected" exit code, it will still be restarted. In the immediately prior release of supervisor, this was true anyway, and no one complained, so we're going to consider that the "officially correct" behavior from now on.

- Supervisor now logs subprocess stdout and stderr independently. The old program config keys "logfile", "logfile_backups" and "logfile_maxbytes" are superseded by "stdout_logfile", "stdout_logfile_backups", and "stdout_logfile_maxbytes". Added keys include "stderr_logfile", "stderr_logfile_backups", and "stderr_logfile_maxbytes". An additional "redirect_stderr" key is used to cause program stderr output to be sent to its stdout channel. The keys "log_stderr" and "log_stdout" have been removed.

- ``[program:x]`` config file sections now represent "homogeneous process groups" instead of single processes. A "numprocs" key in the section represents the number of processes that are in the group. A "process_name" key in the section allows composition of each process' name within the homogeneous group.

- A new kind of config file section, ``[group:x]``, now exists, allowing users to group heterogeneous processes together into a process group that can be controlled as a unit from a client.

- Supervisord now emits "events" at certain points in its normal operation. These events include supervisor state change events, process state change events, and "process communication events".

- A new kind of config file section, ``[eventlistener:x]``, now exists.
  Each section represents an "event listener pool", which is a special
  kind of homogeneous process group.  Each process in the pool is meant to
  receive supervisor "events" via its stdin and perform some notification
  (e.g. send a mail, log, make an http request, etc.)

- Supervisord can now capture data between special tokens in subprocess
  stdout/stderr output and emit a "process communications event" as a
  result.

- Supervisor's XML-RPC interface may be extended arbitrarily by
  programmers.  Additional top-level namespace XML-RPC interfaces can be
  added using the ``[rpcinterface:foo]`` declaration in the configuration
  file.

- New ``supervisor``-namespace XML-RPC methods have been added:
  getAPIVersion (returns the XML-RPC API version, the older "getVersion"
  is now deprecated), "startProcessGroup" (starts all processes in a
  supervisor process group), "stopProcessGroup" (stops all processes in a
  supervisor process group), and "sendProcessStdin" (sends data to a
  process' stdin file descriptor).

- ``supervisor``-namespace XML-RPC methods which previously accepted only
  a process name as "name" (startProcess, stopProcess, getProcessInfo,
  readProcessLog, tailProcessLog, and clearProcessLog) now accept a "name"
  which may contain both the process name and the process group name in
  the form ``groupname:procname``.  For backwards compatibility purposes,
  "simple" names will also be accepted but will be expanded internally
  (e.g. if "foo" is sent as a name, it will be expanded to "foo:foo",
  representing the foo process within the foo process group).

- 2.X versions of supervisorctl will work against supervisor 3.0 servers
  in a degraded fashion, but 3.X versions of supervisorctl will not work
  at all against supervisor 2.X servers.

2.2b1 (2007-03-31)
------------------

- Individual program configuration sections can now specify an
  environment.

- Added a 'version' command to supervisorctl.  This returns the version of
  the supervisor2 package which the remote supervisord process is using.
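To illustrate the ``[eventlistener:x]`` pools described above, a listener's
stdio handshake can be sketched as below.  This is a non-authoritative
sketch assuming the protocol supervisor documents for event listeners (the
listener writes a ``READY`` token, supervisord replies with a
space-separated header line containing a ``len:`` field followed by a
payload of that many bytes, and the listener acknowledges with ``RESULT``);
the helper names ``parse_header`` and ``listen_forever`` are illustrative,
not part of supervisor's API.

```python
import sys

def parse_header(line):
    """Parse a supervisor event header line into a dict.

    e.g. 'ver:3.0 server:supervisor serial:21 len:54' becomes
    {'ver': '3.0', 'server': 'supervisor', 'serial': '21', 'len': '54'}.
    """
    return dict(token.split(':', 1) for token in line.split())

def listen_forever(stdin=sys.stdin, stdout=sys.stdout):
    """Event listener mainloop: handshake with supervisord over stdio."""
    while True:
        stdout.write('READY\n')                    # ask supervisord for an event
        stdout.flush()
        header = parse_header(stdin.readline())    # one header line per event
        payload = stdin.read(int(header['len']))   # then len: bytes of payload
        # ... perform some notification with `payload` here
        #     (e.g. send a mail, log, make an http request) ...
        stdout.write('RESULT 2\nOK')               # acknowledge successful handling
        stdout.flush()
```

A process running this loop would be configured under an
``[eventlistener:x]`` section so that supervisord feeds it events on stdin.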
2.1 (2007-03-17)
----------------

- When supervisord was invoked more than once, and its configuration was
  set up to use a UNIX domain socket as the HTTP server, the socket file
  would be erased in error.  The symptom of this was that a subsequent
  invocation of supervisorctl could not find the socket file, so the
  process could not be controlled (it and all of its subprocesses would
  need to be killed by hand).

- Close subprocess file descriptors properly when a subprocess exits or
  otherwise dies.  This should result in fewer "too many open files to
  spawn foo" messages when supervisor is left up for long periods of time.

- When a process was not killable with a "normal" signal at shutdown time,
  too many "INFO: waiting for x to die" messages would be sent to the log
  until we ended up killing the process with a SIGKILL.  Now a maximum of
  one every three seconds is sent up until SIGKILL time.  Thanks to Ian
  Bicking.

- Add an assertion: we never want to try to marshal None to XML-RPC
  callers.  Issue 223 in the collector from vgatto indicates that somehow
  a supervisor XML-RPC method is returning None (which should never
  happen), but I cannot identify how.  Maybe the assertion will give us
  more clues if it happens again.

- Supervisor would crash when run under Python 2.5 because the
  xmlrpclib.Transport class in Python 2.5 changed in a
  backward-incompatible way.  Thanks to Eric Westra for the bug report and
  a fix.

- Tests now pass under Python 2.5.

- Better supervisorctl reporting on stop requests that have a FAILED
  status.

- Removed duplicated code (readLog/readMainLog), thanks to Mike Naberezny.

- Added tailProcessLog command to the XML-RPC API.  It provides a more
  efficient way to tail logs than readProcessLog().  Use readProcessLog()
  to read chunks and tailProcessLog() to tail (thanks to Mike Naberezny).
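The tailProcessLog API mentioned above returns the tail of a process log as
a ``[bytes, offset, overflow]`` triple.  The wrapper below is a minimal
sketch, not supervisor's own code; the ``http://localhost:9001/RPC2`` URL
in the docstring is an assumption and should be replaced with your own
serverurl.

```python
try:
    from xmlrpc.client import ServerProxy  # Python 3
except ImportError:                        # Python 2
    from xmlrpclib import ServerProxy

def tail_log(supervisor_api, name, offset=0, length=1024):
    """Return (data, new_offset, overflow) for the log of process `name`.

    `supervisor_api` is the `supervisor` namespace of an XML-RPC proxy,
    e.g. ServerProxy('http://localhost:9001/RPC2').supervisor (URL is an
    assumption).  `overflow` is True when more than `length` bytes were
    appended since `offset`, i.e. some output was skipped.
    """
    data, new_offset, overflow = supervisor_api.tailProcessLog(
        name, offset, length)
    return data, new_offset, overflow
```

Polling with the returned ``new_offset`` gives the "tail" behavior, while
readProcessLog() remains the right call for reading fixed chunks.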
2.1b1 (2006-08-30)
------------------

- "supervisord -h" and "supervisorctl -h" did not work (a traceback was
  printed instead of the help view).  Thanks to Damjan from Macedonia for
  the bug report.

- Processes which started successfully after failing to start initially
  are no longer reported in BACKOFF state once they are started
  successfully (thanks to Damjan from Macedonia for the bug report).

- Add new 'maintail' command to supervisorctl shell, which allows you to
  tail the 'main' supervisor log.  This uses a new readMainLog xmlrpc API.

- Various process-state-transition related changes, all internal.
  README.txt updated with new state transition map.

- startProcess and startAllProcesses xmlrpc APIs changed: instead of
  accepting a timeout integer, these accept a wait boolean (timeout is
  implied by process' "startsecs" configuration).  If wait is False, do
  not wait for startsecs.

Known issues:

- Code does not match state transition map.  Processes which are
  configured as autorestarting which start "successfully" but subsequently
  die after 'startsecs' go through the transitions RUNNING -> BACKOFF ->
  STARTING instead of the correct transitions RUNNING -> EXITED ->
  STARTING.  This has no real negative effect, but should be fixed for
  correctness.

2.0 (2006-08-30)
----------------

- pidfile written in daemon mode had incorrect pid.

- supervisorctl: tail (non -f) did not pass through proper error messages
  when supplied by the server.

- Log signal name used to kill processes at debug level.

- supervisorctl "tail -f" didn't work with supervisorctl sections
  configured with an absolute unix:// URL.

- New "environment" config file option allows you to add environment
  variable values to supervisord environment from config file.

2.0b1 (2006-07-12)
------------------

- Fundamental rewrite based on 1.0.7: use distutils (only) for
  installation, use ConfigParser rather than ZConfig, use HTTP for wire
  protocol, web interface, less lies in supervisorctl.
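The known issue above can be made concrete with a small sketch.  The
adjacency map below is a hypothetical reconstruction from the transition
notes in this changelog, not supervisor's actual internal data structure;
state names follow the changelog text.

```python
# Hypothetical map of legal process state transitions, reconstructed
# from the changelog entries (not supervisor's real internals).
LEGAL_TRANSITIONS = {
    'STOPPED':  {'STARTING'},
    'STARTING': {'RUNNING', 'BACKOFF', 'STOPPING'},
    'RUNNING':  {'EXITED', 'STOPPING'},
    'BACKOFF':  {'STARTING', 'FATAL'},
    'EXITED':   {'STARTING'},
    'STOPPING': {'STOPPED'},
    'FATAL':    {'STARTING'},
}

def is_legal(old_state, new_state):
    """True when moving from old_state to new_state matches the map."""
    return new_state in LEGAL_TRANSITIONS.get(old_state, set())
```

Under this map the buggy path RUNNING -> BACKOFF is illegal, while the
correct path RUNNING -> EXITED -> STARTING is allowed, which is exactly
the mismatch the known issue describes.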
1.0.7 (2006-07-11)
------------------

- Don't log a waitpid error if the error value is "no children".

- Use select() against child file descriptor pipes and bump up select
  timeout appropriately.

1.0.6 (2005-11-20)
------------------

- Various tweaks to make supervisor run more effectively on Mac OS X
  (including fixing tests to run there, no more "error reading from fd
  XXX" in logtail output, reduced disk/CPU usage as a result of not
  writing to log file unnecessarily on Mac OS).

1.0.5 (2004-07-29)
------------------

- Short description: In previous releases, managed programs that created
  voluminous stdout/stderr output could run more slowly than usual when
  invoked under supervisor; now they do not.

  Long description: supervisord manages child output by polling pipes
  related to child process stderr/stdout.  Polling operations are
  performed in the mainloop, which also performs a 'select' on the
  filedescriptor(s) related to client/server operations.  In prior
  releases, the select timeout was set to 2 seconds.  This release changes
  the timeout to 1/10th of a second in order to keep up with child
  stdout/stderr output.

  Gory description: On Linux, at least, there is a pipe buffer size fixed
  by the kernel of somewhere between 512 - 4096 bytes; when a child
  process writes enough data to fill the pipe buffer, it will block on
  further stdout/stderr output until supervisord comes along and clears
  out the buffer by reading bytes from the pipe within the mainloop.  We
  now clear these buffers much more quickly than we did before due to the
  increased frequency of buffer reads in the mainloop; the timeout value
  of 1/10th of a second seems to be fast enough to clear out the buffers
  of child process pipes when managing programs on even a very fast
  system while still enabling the supervisord process to be in a sleeping
  state for most of the time.
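The polling pattern described in the 1.0.5 entry can be sketched as
follows.  ``drain_pipe`` is an illustrative helper, not supervisor's
actual code; the 0.1-second timeout mirrors the release's choice of
1/10th of a second.

```python
import os
import select

def drain_pipe(read_fd, timeout=0.1, bufsize=4096):
    """Read whatever is ready on read_fd, waiting at most `timeout` seconds.

    A short select() timeout keeps a child's pipe buffer from filling up,
    which would otherwise block the child on its next write.
    """
    ready, _, _ = select.select([read_fd], [], [], timeout)
    if ready:
        return os.read(read_fd, bufsize)  # drain up to one buffer's worth
    return b''  # nothing arrived before the timeout expired

# Example: a pipe standing in for a child's stdout.
r, w = os.pipe()
os.write(w, b'hello from child\n')
print(drain_pipe(r))  # b'hello from child\n'
os.close(r)
os.close(w)
```

A mainloop calling something like this every 0.1s instead of every 2s
clears the 512-4096 byte kernel pipe buffer before the child can fill it.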
1.0.4 or "Alpha 4" (2004-06-30)
-------------------------------

- Forgot to update version tag in configure.py, so the supervisor version
  in a3 is listed as "1.0.1", where it should be "1.0.3".  a4 will be
  listed as "1.0.4".

- Instead of preventing a process from starting if setuid() can't be
  called (if supervisord is run as nonroot, for example), just log the
  error and proceed.

1.0.3 or "Alpha 3" (2004-05-26)
-------------------------------

- The daemon could chew up a lot of CPU time trying to select() on real
  files (I didn't know select() failed to block when a file is at EOF).
  Fixed by polling instead of using select().

- Processes could "leak" and become zombies due to a bug in reaping dead
  children.

- supervisord now defaults to daemonizing itself.

- 'daemon' config file option and -d/--daemon command-line option removed
  from supervisord acceptable options.  In place of these options, we now
  have a 'nodaemon' config file option and a -n/--nodaemon command-line
  option.

- logtail now works.

- pidproxy changed slightly to reap children synchronously.

- In the alpha2 changelist, supervisord was reported to have a "noauth"
  command-line option.  This was not accurate.  The way to turn off auth
  on the server is to disinclude the "passwdfile" config file option from
  the server config file.  The client however does indeed still have a
  noauth option, which prevents it from ever attempting to send
  authentication credentials to servers.

- ZPL license added for ZConfig to LICENSE.txt.

1.0.2 or "Alpha 2" (Unreleased)
-------------------------------

- supervisorctl and supervisord no longer need to run on the same machine
  due to the addition of internet socket support.

- supervisorctl and supervisord no longer share a common configuration
  file format.

- supervisorctl now uses a persistent connection to supervisord (as
  opposed to creating a fresh connection for each command).

- SRP (Secure Remote Password) authentication is now a supported form of
  access control for supervisord.
  In supervisorctl interactive mode, by default, users will be asked for
  credentials when attempting to talk to a supervisord that requires SRP
  authentication.

- supervisord has a new command-line option and configuration file option
  for specifying "noauth" mode, which signifies that it should not require
  authentication from clients.

- supervisorctl has a new command-line option and configuration option for
  specifying "noauth" mode, which signifies that it should never attempt
  to send authentication info to servers.

- supervisorctl has new commands: open (opens a connection to a new
  supervisord) and close (closes the current connection).

- supervisorctl's "logtail" command now retrieves log data from
  supervisord's log file remotely (as opposed to reading it directly from
  a common filesystem).  It also no longer emulates "tail -f", it just
  returns lines of the server's log file.

- The supervisord/supervisorctl wire protocol now has protocol versioning
  and is documented in "protocol.txt".

- "configfile" command-line override -C changed to -c.

- Top-level section name for supervisor schema changed to 'supervisord'
  from 'supervisor'.

- Added 'pidproxy' shim program.

Known issues in alpha 2:

- If supervisorctl loses a connection to a supervisord or if the remote
  supervisord crashes or shuts down unexpectedly, it is possible that any
  supervisorctl talking to it will "hang" indefinitely waiting for data.
  Pressing Ctrl-C will allow you to restart supervisorctl.

- Only one supervisorctl process may talk to a given supervisord process
  at a time.  If two supervisorctl processes attempt to talk to the same
  supervisord process, one will "win" and the other will be disconnected.

- Sometimes if a pidproxy is used to start a program, the pidproxy program
  itself will "leak".

1.0.0 or "Alpha 1" (Unreleased)
-------------------------------

Initial release.
supervisor-4.2.5/supervisor.egg-info/SOURCES.txt:

CHANGES.rst
COPYRIGHT.txt
LICENSES.txt
MANIFEST.in
README.rst
setup.cfg
setup.py
tox.ini
docs/Makefile
docs/api.rst
docs/conf.py
docs/configuration.rst
docs/development.rst
docs/events.rst
docs/faq.rst
docs/glossary.rst
docs/index.rst
docs/installing.rst
docs/introduction.rst
docs/logging.rst
docs/plugins.rst
docs/running.rst
docs/subprocess-transitions.png
docs/subprocess.rst
docs/upgrading.rst
docs/xmlrpc.rst
docs/.static/logo_hi.gif
docs/.static/repoze.css
supervisor/__init__.py
supervisor/childutils.py
supervisor/compat.py
supervisor/confecho.py
supervisor/datatypes.py
supervisor/dispatchers.py
supervisor/events.py
supervisor/http.py
supervisor/http_client.py
supervisor/loggers.py
supervisor/options.py
supervisor/pidproxy.py
supervisor/poller.py
supervisor/process.py
supervisor/rpcinterface.py
supervisor/socket_manager.py
supervisor/states.py
supervisor/supervisorctl.py
supervisor/supervisord.py
supervisor/templating.py
supervisor/version.txt
supervisor/web.py
supervisor/xmlrpc.py
supervisor.egg-info/PKG-INFO
supervisor.egg-info/SOURCES.txt
supervisor.egg-info/dependency_links.txt
supervisor.egg-info/entry_points.txt
supervisor.egg-info/not-zip-safe
supervisor.egg-info/requires.txt
supervisor.egg-info/top_level.txt
supervisor/medusa/__init__.py
supervisor/medusa/asynchat_25.py
supervisor/medusa/asyncore_25.py
supervisor/medusa/auth_handler.py
supervisor/medusa/counter.py
supervisor/medusa/default_handler.py
supervisor/medusa/filesys.py
supervisor/medusa/http_date.py
supervisor/medusa/http_server.py
supervisor/medusa/logger.py
supervisor/medusa/producers.py
supervisor/medusa/util.py
supervisor/medusa/xmlrpc_handler.py
supervisor/scripts/loop_eventgen.py
supervisor/scripts/loop_listener.py
supervisor/scripts/sample_commevent.py
supervisor/scripts/sample_eventlistener.py
supervisor/scripts/sample_exiting_eventlistener.py
supervisor/skel/sample.conf
supervisor/tests/__init__.py
supervisor/tests/base.py
supervisor/tests/test_childutils.py
supervisor/tests/test_confecho.py
supervisor/tests/test_datatypes.py
supervisor/tests/test_dispatchers.py
supervisor/tests/test_end_to_end.py
supervisor/tests/test_events.py
supervisor/tests/test_http.py
supervisor/tests/test_http_client.py
supervisor/tests/test_loggers.py
supervisor/tests/test_options.py
supervisor/tests/test_pidproxy.py
supervisor/tests/test_poller.py
supervisor/tests/test_process.py
supervisor/tests/test_rpcinterfaces.py
supervisor/tests/test_socket_manager.py
supervisor/tests/test_states.py
supervisor/tests/test_supervisorctl.py
supervisor/tests/test_supervisord.py
supervisor/tests/test_templating.py
supervisor/tests/test_web.py
supervisor/tests/test_xmlrpc.py
supervisor/tests/fixtures/donothing.conf
supervisor/tests/fixtures/include.conf
supervisor/tests/fixtures/issue-1054.conf
supervisor/tests/fixtures/issue-1170a.conf
supervisor/tests/fixtures/issue-1170b.conf
supervisor/tests/fixtures/issue-1170c.conf
supervisor/tests/fixtures/issue-1224.conf
supervisor/tests/fixtures/issue-1231a.conf
supervisor/tests/fixtures/issue-1231b.conf
supervisor/tests/fixtures/issue-1231c.conf
supervisor/tests/fixtures/issue-1298.conf
supervisor/tests/fixtures/issue-1483a.conf
supervisor/tests/fixtures/issue-1483b.conf
supervisor/tests/fixtures/issue-1483c.conf
supervisor/tests/fixtures/issue-291a.conf
supervisor/tests/fixtures/issue-550.conf
supervisor/tests/fixtures/issue-565.conf
supervisor/tests/fixtures/issue-638.conf
supervisor/tests/fixtures/issue-663.conf
supervisor/tests/fixtures/issue-664.conf
supervisor/tests/fixtures/issue-733.conf
supervisor/tests/fixtures/issue-835.conf
supervisor/tests/fixtures/issue-836.conf
supervisor/tests/fixtures/issue-986.conf
supervisor/tests/fixtures/listener.py
supervisor/tests/fixtures/print_env.py
supervisor/tests/fixtures/spew.py
supervisor/tests/fixtures/test_1231.py
supervisor/tests/fixtures/unkillable_spew.py
supervisor/tests/fixtures/example/included.conf
supervisor/ui/status.html
supervisor/ui/tail.html
supervisor/ui/images/icon.png
supervisor/ui/images/rule.gif
supervisor/ui/images/state0.gif
supervisor/ui/images/state1.gif
supervisor/ui/images/state2.gif
supervisor/ui/images/state3.gif
supervisor/ui/images/supervisor.gif
supervisor/ui/stylesheets/supervisor.css

supervisor-4.2.5/supervisor.egg-info/dependency_links.txt: (empty)

supervisor-4.2.5/supervisor.egg-info/entry_points.txt:

[console_scripts]
echo_supervisord_conf = supervisor.confecho:main
pidproxy = supervisor.pidproxy:main
supervisorctl = supervisor.supervisorctl:main
supervisord = supervisor.supervisord:main

supervisor-4.2.5/supervisor.egg-info/not-zip-safe: (empty)

supervisor-4.2.5/supervisor.egg-info/requires.txt:

setuptools

[testing]
pytest
pytest-cov

supervisor-4.2.5/supervisor.egg-info/top_level.txt:

supervisor

supervisor-4.2.5/tox.ini:
[tox]
envlist = cover,cover3,docs,py27,py34,py35,py36,py37,py38,py39,py310

[testenv]
deps =
    attrs < 21.1.0 # see https://github.com/python-attrs/attrs/pull/608
    pytest
    pexpect == 4.7.0 # see https://github.com/Supervisor/supervisor/issues/1327
    mock >= 0.5.0
passenv =
    END_TO_END
commands =
    pytest {posargs}

[testenv:py27-configparser]
;see https://github.com/Supervisor/supervisor/issues/1230
basepython = python2.7
deps =
    {[testenv]deps}
    configparser
passenv = {[testenv]passenv}
commands = {[testenv]commands}

[testenv:cover]
basepython = python2.7
commands =
    pytest --cov=supervisor --cov-report=term-missing --cov-report=xml {posargs}
deps =
    {[testenv]deps}
    pytest-cov

[testenv:cover3]
basepython = python3.7
commands =
    pytest --cov=supervisor --cov-report=term-missing --cov-report=xml {posargs}
deps =
    {[testenv:cover]deps}

[testenv:docs]
deps =
    Sphinx
    readme
    setuptools >= 18.5
allowlist_externals = make
commands =
    make -C docs html BUILDDIR={envtmpdir} "SPHINXOPTS=-W -E"
    python setup.py check -m -r -s