pebble-5.1.1/
pebble-5.1.1/LICENSE

GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
pebble-5.1.1/MANIFEST.in

include LICENSE
pebble-5.1.1/PKG-INFO

Metadata-Version: 2.2
Name: Pebble
Version: 5.1.1
Summary: Threading and multiprocessing eye-candy.
Home-page: https://github.com/noxdafox/pebble
Author: Matteo Cafasso
Author-email: noxdafox@gmail.com
License: LGPL
Keywords: thread process pool decorator
Classifier: Programming Language :: Python :: 3
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)
Requires-Python: >=3.8
License-File: LICENSE
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: home-page
Dynamic: keywords
Dynamic: license
Dynamic: requires-python
Dynamic: summary
Pebble
======

Pebble provides a neat API to manage threads and processes within an application.

:Source: https://github.com/noxdafox/pebble
:Documentation: https://pebble.readthedocs.io
:Download: https://pypi.org/project/Pebble/

|build badge| |docs badge| |downloads badge|

.. |build badge| image:: https://github.com/noxdafox/pebble/actions/workflows/action.yml/badge.svg
   :target: https://github.com/noxdafox/pebble/actions/workflows/action.yml
   :alt: Build Status
.. |docs badge| image:: https://readthedocs.org/projects/pebble/badge/?version=latest
   :target: https://pebble.readthedocs.io
   :alt: Documentation Status
.. |downloads badge| image:: https://img.shields.io/pypi/dm/pebble
   :target: https://pypistats.org/packages/pebble
   :alt: PyPI - Downloads

Examples
--------

Run a job in a separate thread and wait for its results.

.. code:: python

    from pebble import concurrent

    @concurrent.thread
    def function(foo, bar=0):
        return foo + bar

    future = function(1, bar=2)
    result = future.result()  # blocks until results are ready

Same code with AsyncIO support.

.. code:: python

    import asyncio

    from pebble import asynchronous

    @asynchronous.thread
    def function(foo, bar=0):
        return foo + bar

    async def asynchronous_function():
        result = await function(1, bar=2)  # blocks until results are ready
        print(result)

    asyncio.run(asynchronous_function())

Run a function with a timeout of ten seconds and deal with errors.

.. code:: python

    from pebble import concurrent
    from concurrent.futures import TimeoutError

    @concurrent.process(timeout=10)
    def function(foo, bar=0):
        return foo + bar

    future = function(1, bar=2)

    try:
        result = future.result()  # blocks until results are ready
    except TimeoutError as error:
        print("Function took longer than %d seconds" % error.args[1])
    except Exception as error:
        print("Function raised %s" % error)
        print(error.traceback)  # traceback of the function

Pools support worker restarts, timeouts for long-running tasks, and more.

.. code:: python

    from pebble import ProcessPool
    from concurrent.futures import TimeoutError

    TIMEOUT_SECONDS = 3

    def function(foo, bar=0):
        return foo + bar

    def task_done(future):
        try:
            result = future.result()  # blocks until results are ready
        except TimeoutError as error:
            print("Function took longer than %d seconds" % error.args[1])
        except Exception as error:
            print("Function raised %s" % error)
            print(error.traceback)  # traceback of the function

    with ProcessPool(max_workers=5, max_tasks=10) as pool:
        for index in range(0, 10):
            future = pool.schedule(function, index, bar=1, timeout=TIMEOUT_SECONDS)
            future.add_done_callback(task_done)
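Pools also expose a ``map`` interface returning the ``ProcessMapFuture``
exported by the package. A minimal sketch, assuming the documented semantics
where ``result()`` returns an iterator and a per-item timeout surfaces as
``TimeoutError`` while iterating:

.. code:: python

    from concurrent.futures import TimeoutError

    from pebble import ProcessPool

    def double(value):
        return value * 2

    with ProcessPool() as pool:
        future = pool.map(double, range(5), timeout=3)

        iterator = future.result()

        while True:
            try:
                print(next(iterator))
            except StopIteration:
                break
            except TimeoutError as error:
                print("Item took longer than %d seconds" % error.args[1])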
pebble-5.1.1/Pebble.egg-info/PKG-INFO

(verbatim duplicate of pebble-5.1.1/PKG-INFO above)
pebble-5.1.1/Pebble.egg-info/SOURCES.txt

LICENSE
MANIFEST.in
README.rst
setup.py
Pebble.egg-info/PKG-INFO
Pebble.egg-info/SOURCES.txt
Pebble.egg-info/dependency_links.txt
Pebble.egg-info/top_level.txt
pebble/__init__.py
pebble/decorators.py
pebble/functions.py
pebble/py.typed
pebble/asynchronous/__init__.py
pebble/asynchronous/process.py
pebble/asynchronous/thread.py
pebble/common/__init__.py
pebble/common/process.py
pebble/common/shared.py
pebble/common/types.py
pebble/concurrent/__init__.py
pebble/concurrent/process.py
pebble/concurrent/thread.py
pebble/pool/__init__.py
pebble/pool/base_pool.py
pebble/pool/channel.py
pebble/pool/process.py
pebble/pool/thread.py
test/test_asynchronous_process_fork.py
test/test_asynchronous_process_forkserver.py
test/test_asynchronous_process_spawn.py
test/test_asynchronous_thread.py
test/test_concurrent_process_fork.py
test/test_concurrent_process_forkserver.py
test/test_concurrent_process_spawn.py
test/test_concurrent_thread.py
test/test_pebble.py
test/test_process_pool_fork.py
test/test_process_pool_forkserver.py
test/test_process_pool_spawn.py
test/test_thread_pool.py
pebble-5.1.1/Pebble.egg-info/dependency_links.txt

pebble-5.1.1/Pebble.egg-info/top_level.txt

pebble
pebble-5.1.1/README.rst

(content identical to the long description in pebble-5.1.1/PKG-INFO above)
pebble-5.1.1/pebble/__init__.py

__author__ = 'Matteo Cafasso'
__version__ = '5.1.1'
__license__ = 'LGPL'
__all__ = ['waitforthreads',
'waitforqueues',
'synchronized',
'sighandler',
'ProcessFuture',
'MapFuture',
'ProcessMapFuture',
'ProcessExpired',
'ProcessPool',
'ThreadPool']
from pebble import concurrent, asynchronous
from pebble.decorators import synchronized, sighandler
from pebble.functions import waitforqueues, waitforthreads
from pebble.common import ProcessExpired, ProcessFuture, CONSTS
from pebble.pool import ThreadPool, ProcessPool, MapFuture, ProcessMapFuture
pebble-5.1.1/pebble/asynchronous/__init__.py

__all__ = [
'process',
'thread'
]
from pebble.asynchronous.thread import thread
from pebble.asynchronous.process import process
pebble-5.1.1/pebble/asynchronous/process.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import os
import types
import asyncio
import multiprocessing
from itertools import count
from functools import wraps
from concurrent.futures import TimeoutError
from typing import Any, Callable, Optional, overload
from pebble import common
from pebble.pool.process import ProcessPool
@overload
def process(func: common.CallableType) -> common.AsyncIODecoratorReturnType:
...
@overload
def process(
name: Optional[str] = None,
daemon: bool = True,
timeout: Optional[float] = None,
mp_context: Optional[multiprocessing.context.BaseContext] = None,
pool: Optional[ProcessPool] = None
) -> common.AsyncIODecoratorParamsReturnType:
...
def process(*args, **kwargs):
"""Runs the decorated function in a concurrent process,
taking care of the result and error management.
Decorated functions will return an asyncio.Future object
once called.
The timeout parameter will set a maximum execution time
for the decorated function. If the execution exceeds the timeout,
the process will be stopped and the Future will raise TimeoutError.
The name parameter will set the process name.
The daemon parameter controls the underlying process daemon flag.
Default is True.
    The context parameter allows providing the multiprocessing.context
    object used for starting the process.
The pool parameter accepts a pebble.ProcessPool instance to be used
instead of running the function in a new process.
"""
return common.decorate_function(_process_wrapper, *args, **kwargs)
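# Illustrative usage sketch (not part of the original module; the function
# below is hypothetical). The decorated function returns an asyncio.Future
# and is awaited from a coroutine; a timeout surfaces as the TimeoutError
# imported above from concurrent.futures:
#
#     @process(timeout=10)
#     def heavy(x):
#         return sum(i * i for i in range(x))
#
#     async def main():
#         try:
#             print(await heavy(10_000_000))
#         except TimeoutError:
#             print('computation timed out')
#
#     asyncio.run(main())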
def _process_wrapper(
function: Callable,
name: str,
daemon: bool,
timeout: float,
mp_context: multiprocessing.context.BaseContext,
pool: ProcessPool
) -> Callable:
if isinstance(function, types.FunctionType):
common.register_function(function)
if hasattr(mp_context, 'get_start_method'):
start_method = mp_context.get_start_method()
else:
start_method = 'spawn' if os.name == 'nt' else 'fork'
if pool is not None:
if not isinstance(pool, ProcessPool):
raise TypeError('Pool expected to be ProcessPool')
start_method = 'pool'
@wraps(function)
def wrapper(*args, **kwargs) -> asyncio.Future:
loop = common.get_asyncio_loop()
target, args = common.maybe_install_trampoline(function, args, start_method)
if pool is not None:
future = loop.run_in_executor(pool, target, timeout, *args, **kwargs)
else:
future = loop.create_future()
reader, writer = mp_context.Pipe(duplex=False)
worker = common.launch_process(
name, common.function_handler, daemon, mp_context,
target, args, kwargs, (reader, writer))
writer.close()
loop.create_task(_worker_handler(future, worker, reader, timeout))
return future
return wrapper
async def _worker_handler(
future: asyncio.Future,
worker: multiprocessing.Process,
pipe: multiprocessing.Pipe,
timeout: float
):
"""Worker lifecycle manager.
Waits for the worker to be perform its task,
collects result, runs the callback and cleans up the process.
"""
result = await _get_result(future, pipe, timeout)
if worker.is_alive():
common.stop_process(worker)
if result.status == common.ResultStatus.SUCCESS:
future.set_result(result.value)
else:
if result.status == common.ResultStatus.ERROR:
result.value.exitcode = worker.exitcode
result.value.pid = worker.pid
if not isinstance(result.value, asyncio.CancelledError):
future.set_exception(result.value)
async def _get_result(
future: asyncio.Future,
pipe: multiprocessing.Pipe,
timeout: float
) -> Any:
"""Waits for result and handles communication errors."""
counter = count(step=common.CONSTS.sleep_unit)
try:
while not pipe.poll():
if timeout is not None and next(counter) >= timeout:
error = TimeoutError('Task Timeout', timeout)
return common.Result(common.ResultStatus.FAILURE, error)
if future.cancelled():
error = asyncio.CancelledError()
return common.Result(common.ResultStatus.FAILURE, error)
await asyncio.sleep(common.CONSTS.sleep_unit)
return pipe.recv()
except (EOFError, OSError):
error = common.ProcessExpired('Abnormal termination')
return common.Result(common.ResultStatus.ERROR, error)
except Exception as error:
return common.Result(common.ResultStatus.ERROR, error)
pebble-5.1.1/pebble/asynchronous/thread.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import asyncio
from functools import wraps
from typing import Callable, Optional, overload
from pebble import common
from pebble.pool.thread import ThreadPool
@overload
def thread(func: common.CallableType) -> common.AsyncIODecoratorReturnType:
...
@overload
def thread(
name: Optional[str] = None,
daemon: bool = True,
pool: Optional[ThreadPool] = None
) -> common.AsyncIODecoratorParamsReturnType:
...
def thread(*args, **kwargs):
"""Runs the decorated function within a concurrent thread,
taking care of the result and error management.
Decorated functions will return an asyncio.Future object
once called.
The name parameter will set the thread name.
The daemon parameter controls the underlying thread daemon flag.
Default is True.
The pool parameter accepts a pebble.ThreadPool instance to be used
    instead of running the function in a new thread.
"""
return common.decorate_function(_thread_wrapper, *args, **kwargs)
def _thread_wrapper(
function: Callable,
name: str,
daemon: bool,
_timeout: float,
_mp_context,
pool: ThreadPool
) -> Callable:
if pool is not None:
if not isinstance(pool, ThreadPool):
raise TypeError('Pool expected to be ThreadPool')
@wraps(function)
def wrapper(*args, **kwargs) -> asyncio.Future:
loop = common.get_asyncio_loop()
if pool is not None:
future = loop.run_in_executor(pool, function, *args, **kwargs)
else:
future = loop.create_future()
common.launch_thread(
name, _function_handler, daemon,
function, args, kwargs, future)
return future
return wrapper
def _function_handler(
function: Callable,
args: list,
kwargs: dict,
future: asyncio.Future
):
"""Runs the actual function in separate thread and returns its result."""
loop = future.get_loop()
result = common.execute(function, *args, **kwargs)
if result.status == common.ResultStatus.SUCCESS:
loop.call_soon_threadsafe(future.set_result, result.value)
else:
loop.call_soon_threadsafe(future.set_exception, result.value)
pebble-5.1.1/pebble/common/__init__.py

from pebble.common.shared import execute, launch_thread
from pebble.common.shared import decorate_function, get_asyncio_loop
from pebble.common.types import ProcessExpired, ProcessFuture, PebbleFuture
from pebble.common.types import Result, ResultStatus, RemoteException
from pebble.common.types import FutureStatus, CONSTS, CallableType
from pebble.common.types import AsyncIODecoratorReturnType
from pebble.common.types import AsyncIODecoratorParamsReturnType
from pebble.common.types import ThreadDecoratorReturnType
from pebble.common.types import ThreadDecoratorParamsReturnType
from pebble.common.types import ProcessDecoratorReturnType
from pebble.common.types import ProcessDecoratorParamsReturnType
from pebble.common.process import launch_process, stop_process
from pebble.common.process import register_function, maybe_install_trampoline
from pebble.common.process import process_execute, send_result, function_handler
pebble-5.1.1/pebble/common/process.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import types
import pickle
import signal
import multiprocessing
from traceback import format_exc
from typing import Any, Callable
from pebble.common.types import Result, ResultStatus, RemoteException, CONSTS
def launch_process(
name: str,
function: Callable,
    daemon: bool,
    mp_context: multiprocessing.context.BaseContext,
*args,
**kwargs
) -> multiprocessing.Process:
process = mp_context.Process(
target=function, name=name, args=args, kwargs=kwargs)
process.daemon = daemon
process.start()
return process
def stop_process(process: multiprocessing.Process):
"""Does its best to stop the process."""
process.terminate()
process.join(CONSTS.term_timeout)
if process.is_alive() and os.name != 'nt':
try:
os.kill(process.pid, signal.SIGKILL)
process.join()
except OSError:
return
if process.is_alive():
        raise RuntimeError("Unable to terminate PID %d" % process.pid)
def process_execute(function: Callable, *args, **kwargs) -> Result:
"""Runs the given function returning its results or exception."""
try:
return Result(ResultStatus.SUCCESS, function(*args, **kwargs))
except BaseException as error:
return Result(ResultStatus.FAILURE, RemoteException(error, format_exc()))
def send_result(pipe: multiprocessing.Pipe, data: Any):
"""Send result handling pickling and communication errors."""
try:
pipe.send(data)
except (pickle.PicklingError, TypeError) as error:
pipe.send(Result(ResultStatus.ERROR, RemoteException(error, format_exc())))
def function_handler(
function: Callable,
args: list,
kwargs: dict,
pipe: multiprocessing.Pipe
):
"""Runs the actual function in separate process and returns its result."""
signal.signal(signal.SIGINT, signal.SIG_IGN)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
reader, writer = pipe
reader.close()
result = process_execute(function, *args, **kwargs)
send_result(writer, result)
################################################################################
# Spawn process start method handling logic.                                  #
#                                                                              #
# Processes created via Spawn will load the modules anew. As a consequence,   #
# @concurrent/@asynchronous decorated functions will be decorated again,      #
# making the child process unable to execute them.                            #
################################################################################
_registered_functions = {}
def register_function(function: Callable) -> Callable:
"""Registers the function to be used within the trampoline."""
_registered_functions[function.__qualname__] = function
return function
def maybe_install_trampoline(
function: Callable,
args: list,
start_method: str
) -> tuple:
"""Install the trampoline on the right process start methods."""
if isinstance(function, types.FunctionType) and start_method != 'fork':
target = _trampoline
args = [function.__qualname__, function.__module__] + list(args)
else:
target = function
return target, args
def _trampoline(name: str, module: Any, *args, **kwargs) -> Any:
"""Trampoline function for decorators.
Lookups the function between the registered ones;
if not found, forces its registering and then executes it.
"""
function = _function_lookup(name, module)
return function(*args, **kwargs)
def _function_lookup(name: str, module: Any) -> Callable:
"""Searches the function between the registered ones.
If not found, it imports the module forcing its registration.
"""
try:
return _registered_functions[name]
except KeyError: # force function registering
__import__(module)
mod = sys.modules[module]
function = getattr(mod, name)
try:
return _registered_functions[name]
except KeyError: # decorator without @pie syntax
return register_function(function)
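# Illustrative trampoline flow (a sketch; `func` is hypothetical). Under the
# 'spawn' start method the child re-imports modules, so a decorated name would
# resolve to the wrapper again; shipping the qualified name and module lets
# the child look the original function up instead:
#
#     target, args = maybe_install_trampoline(func, [1, 2], 'spawn')
#     # target is _trampoline; args == [func.__qualname__, func.__module__, 1, 2]
#     # In the child process, _trampoline resolves the original function via
#     # _function_lookup and then calls it with the remaining arguments.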
pebble-5.1.1/pebble/common/shared.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import asyncio
import multiprocessing
from typing import Callable
from threading import Thread
from traceback import format_exc
from pebble.common.types import Result, ResultStatus
def launch_thread(name, function, daemon, *args, **kwargs):
thread = Thread(target=function, name=name, args=args, kwargs=kwargs)
thread.daemon = daemon
thread.start()
return thread
def execute(function, *args, **kwargs):
"""Runs the given function returning its results or exception."""
try:
return Result(ResultStatus.SUCCESS, function(*args, **kwargs))
except BaseException as error:
try:
error.traceback = format_exc()
except AttributeError: # Frozen exception
pass
return Result(ResultStatus.FAILURE, error)
def get_asyncio_loop() -> asyncio.BaseEventLoop:
"""Backwards compatible loop getter."""
try:
return asyncio.get_running_loop()
except AttributeError:
return asyncio.get_event_loop()
################################################################################
# @concurrent/@asynchronous decorators.                                        #
################################################################################
def decorate_function(wrapper: Callable, *args, **kwargs) -> Callable:
"""Decorate the function taking care of all the possible uses."""
name = kwargs.get('name')
pool = kwargs.get('pool')
daemon = kwargs.get('daemon', True)
timeout = kwargs.get('timeout')
mp_context = kwargs.get('context')
# decorator without parameters: @process/process(function)
if not kwargs and len(args) == 1 and callable(args[0]):
return wrapper(args[0], name, daemon, timeout, multiprocessing, pool)
# decorator with parameters
_validate_parameters(name, daemon, timeout)
mp_context = mp_context if mp_context is not None else multiprocessing
    ## without @pie syntax: process(function, timeout=12)
    if len(args) == 1 and callable(args[0]):
        return wrapper(args[0], name, daemon, timeout, mp_context, pool)
## with @pie syntax: @process(timeout=12)
def decorating_function(function: Callable) -> Callable:
return wrapper(function, name, daemon, timeout, mp_context, pool)
return decorating_function
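# The three call forms handled above, as a sketch (using a hypothetical
# wrapper named `process` built on decorate_function):
#
#     @process                               # bare decorator
#     def f(): ...
#
#     @process(timeout=12)                   # with parameters (@pie syntax)
#     def g(): ...
#
#     wrapped = process(h, timeout=12)       # plain call, without @pie syntax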
def _validate_parameters(name: str, daemon: bool, timeout: float):
if name is not None and not isinstance(name, str):
raise TypeError('Name expected to be None or string')
if daemon is not None and not isinstance(daemon, bool):
raise TypeError('Daemon expected to be None or bool')
if timeout is not None and not isinstance(timeout, (int, float)):
raise TypeError('Timeout expected to be None or integer or float')
pebble-5.1.1/pebble/common/types.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import asyncio
from enum import Enum, IntEnum
from dataclasses import dataclass
from concurrent.futures import Future
from typing import Any, TypeVar, Callable
P = TypeVar("P")
T = TypeVar("T")
try:
FutureType = Future[T]
except TypeError:
FutureType = Future
class ProcessExpired(OSError):
"""Raised when process dies unexpectedly."""
def __init__(self, msg, code=0, pid=None):
super(ProcessExpired, self).__init__(msg)
self.exitcode = code
self.pid = pid
class PebbleFuture(FutureType):
    # Same as the base class, with the log line removed.
def set_running_or_notify_cancel(self):
"""Mark the future as running or process any cancel notifications.
Should only be used by Executor implementations and unit tests.
If the future has been cancelled (cancel() was called and returned
True) then any threads waiting on the future completing (though calls
to as_completed() or wait()) are notified and False is returned.
If the future was not cancelled then it is put in the running state
(future calls to running() will return True) and True is returned.
This method should be called by Executor implementations before
executing the work associated with this future. If this method returns
False then the work should not be executed.
Returns:
False if the Future was cancelled, True otherwise.
Raises:
RuntimeError: if set_result() or set_exception() was called.
"""
with self._condition:
if self._state == FutureStatus.CANCELLED:
self._state = FutureStatus.CANCELLED_AND_NOTIFIED
for waiter in self._waiters:
waiter.add_cancelled(self)
return False
elif self._state == FutureStatus.PENDING:
self._state = FutureStatus.RUNNING
return True
else:
raise RuntimeError('Future in unexpected state')
try:
PebbleFutureType = PebbleFuture[T]
except TypeError:
PebbleFutureType = PebbleFuture
class ProcessFuture(PebbleFutureType):
def cancel(self):
"""Cancel the future.
Returns True if the future was cancelled, False otherwise. A future
cannot be cancelled if it has already completed.
"""
with self._condition:
if self._state == FutureStatus.FINISHED:
return False
if self._state in (FutureStatus.CANCELLED,
FutureStatus.CANCELLED_AND_NOTIFIED):
return True
self._state = FutureStatus.CANCELLED
self._condition.notify_all()
self._invoke_callbacks()
return True
class RemoteTraceback(Exception):
"""Traceback wrapper for exceptions in remote process.
Exception.__cause__ requires a BaseException subclass.
"""
def __init__(self, traceback):
self.traceback = traceback
def __str__(self):
return self.traceback
class RemoteException:
"""Pickling wrapper for exceptions in remote process."""
def __init__(self, exception, traceback):
self.exception = exception
self.traceback = traceback
def __reduce__(self):
return self.rebuild_exception, (self.exception, self.traceback)
@staticmethod
def rebuild_exception(exception, traceback):
try:
exception.traceback = traceback
exception.__cause__ = RemoteTraceback(traceback)
except AttributeError: # Frozen exception
pass
return exception
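# Round-trip sketch (illustrative): pickling a RemoteException re-materializes
# the wrapped exception on the receiving side, with the remote traceback
# attached both as `.traceback` and, via RemoteTraceback, as `.__cause__`:
#
#     import pickle
#
#     wrapped = RemoteException(ValueError('boom'), 'Traceback (most recent call last): ...')
#     error = pickle.loads(pickle.dumps(wrapped))
#     assert isinstance(error, ValueError)
#     assert isinstance(error.__cause__, RemoteTraceback)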
class ResultStatus(IntEnum):
"""Status of results of a function execution."""
SUCCESS = 0
FAILURE = 1
ERROR = 2
@dataclass
class Result:
"""Result of a function execution."""
status: ResultStatus
value: Any
class FutureStatus(str, Enum):
"""Borrowed from concurrent.futures."""
PENDING = 'PENDING'
RUNNING = 'RUNNING'
FINISHED = 'FINISHED'
CANCELLED = 'CANCELLED'
CANCELLED_AND_NOTIFIED = 'CANCELLED_AND_NOTIFIED'
@dataclass
class Consts:
"""Internal constants.
WARNING: changing these values will affect the behaviour
of Pools and decorators.
"""
sleep_unit: float = 0.1
"""Any cycle which needs to periodically assess the state."""
term_timeout: float = 3
"""On UNIX once a SIGTERM signal is issued to a process,
the amount of seconds to wait before issuing a SIGKILL signal."""
channel_lock_timeout: float = 60
"""The process pool relies on a pipe protected by a lock.
The timeout when attempting to acquire the lock."""
try:
CallableType = Callable[[P], T]
AsyncIODecoratorReturnType = Callable[[P], asyncio.Future[T]]
AsyncIODecoratorParamsReturnType = Callable[[Callable[[P], T]],
Callable[[P], asyncio.Future[T]]]
ThreadDecoratorReturnType = Callable[[P], Future[T]]
ThreadDecoratorParamsReturnType = Callable[[Callable[[P], T]],
Callable[[P], Future[T]]]
ProcessDecoratorReturnType = Callable[[P], ProcessFuture[T]]
ProcessDecoratorParamsReturnType = Callable[[Callable[[P], T]],
Callable[[P], ProcessFuture[T]]]
except TypeError:
ReturnType = Callable
AsyncIODecoratorReturnType = Callable
AsyncIODecoratorParamsReturnType = Callable
ThreadDecoratorReturnType = Callable
ThreadDecoratorParamsReturnType = Callable
ProcessDecoratorReturnType = Callable
ProcessDecoratorParamsReturnType = Callable
CONSTS = Consts()
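# Tuning sketch: CONSTS is a single shared instance, so changing a field
# affects every pool and decorator in the process (example values are
# arbitrary, not recommendations):
#
#     from pebble.common import CONSTS
#
#     CONSTS.sleep_unit = 0.01     # poll for results more frequently
#     CONSTS.term_timeout = 10     # wait longer before escalating to SIGKILL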
pebble-5.1.1/pebble/concurrent/__init__.py

__all__ = ['thread',
'process']
from pebble.concurrent.thread import thread
from pebble.concurrent.process import process
pebble-5.1.1/pebble/concurrent/process.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import os
import types
import multiprocessing
import multiprocessing.context
from itertools import count
from functools import wraps
from typing import Any, Callable, Optional, overload
from concurrent.futures import CancelledError, TimeoutError
from pebble import common
from pebble.pool.process import ProcessPool
@overload
def process(func: common.CallableType) -> common.ProcessDecoratorReturnType:
...
@overload
def process(
name: Optional[str] = None,
daemon: bool = True,
timeout: Optional[float] = None,
mp_context: Optional[multiprocessing.context.BaseContext] = None,
pool: Optional[ProcessPool] = None
) -> common.ProcessDecoratorParamsReturnType:
...
def process(*args, **kwargs):
"""Runs the decorated function in a concurrent process,
taking care of the result and error management.
Decorated functions will return a concurrent.futures.Future object
once called.
The timeout parameter will set a maximum execution time
for the decorated function. If the execution exceeds the timeout,
the process will be stopped and the Future will raise TimeoutError.
The name parameter will set the process name.
The daemon parameter controls the underlying process daemon flag.
Default is True.
    The context parameter allows providing the multiprocessing.context
    object used for starting the process.
The pool parameter accepts a pebble.ProcessPool instance to be used
instead of running the function in a new process.
"""
return common.decorate_function(_process_wrapper, *args, **kwargs)
def _process_wrapper(
function: Callable,
name: str,
daemon: bool,
timeout: float,
mp_context: multiprocessing.context.BaseContext,
pool: ProcessPool
) -> Callable:
if isinstance(function, types.FunctionType):
common.register_function(function)
if hasattr(mp_context, 'get_start_method'):
start_method = mp_context.get_start_method()
else:
start_method = 'spawn' if os.name == 'nt' else 'fork'
if pool is not None:
if not isinstance(pool, ProcessPool):
raise TypeError('Pool expected to be ProcessPool')
start_method = 'pool'
@wraps(function)
def wrapper(*args, **kwargs) -> common.ProcessFuture:
target, args = common.maybe_install_trampoline(function, args, start_method)
if pool is not None:
future = pool.schedule(target, args=args, kwargs=kwargs, timeout=timeout)
else:
future = common.ProcessFuture()
reader, writer = mp_context.Pipe(duplex=False)
worker = common.launch_process(
name, common.function_handler, daemon, mp_context,
target, args, kwargs, (reader, writer))
writer.close()
future.set_running_or_notify_cancel()
common.launch_thread(
name, _worker_handler, True, future, worker, reader, timeout)
return future
return wrapper
def _worker_handler(
future: common.ProcessFuture,
worker: multiprocessing.Process,
pipe: multiprocessing.Pipe,
timeout: float
):
"""Worker lifecycle manager.
    Waits for the worker to perform its task,
collects result, runs the callback and cleans up the process.
"""
result = _get_result(future, pipe, timeout)
if worker.is_alive():
common.stop_process(worker)
if result.status == common.ResultStatus.SUCCESS:
future.set_result(result.value)
else:
if result.status == common.ResultStatus.ERROR:
result.value.exitcode = worker.exitcode
result.value.pid = worker.pid
if not isinstance(result.value, CancelledError):
future.set_exception(result.value)
def _get_result(
future: common.ProcessFuture,
pipe: multiprocessing.Pipe,
timeout: float
) -> Any:
"""Waits for result and handles communication errors."""
counter = count(step=common.CONSTS.sleep_unit)
try:
while not pipe.poll(common.CONSTS.sleep_unit):
if timeout is not None and next(counter) >= timeout:
error = TimeoutError('Task Timeout', timeout)
return common.Result(common.ResultStatus.FAILURE, error)
if future.cancelled():
error = CancelledError()
return common.Result(common.ResultStatus.FAILURE, error)
return pipe.recv()
except (EOFError, OSError):
error = common.ProcessExpired('Abnormal termination')
return common.Result(common.ResultStatus.ERROR, error)
except Exception as error:
return common.Result(common.ResultStatus.ERROR, error)
pebble-5.1.1/pebble/concurrent/thread.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
from functools import wraps
from concurrent.futures import Future
from typing import Callable, Optional, overload
from pebble import common
from pebble.pool.thread import ThreadPool
@overload
def thread(func: common.CallableType) -> common.ThreadDecoratorReturnType:
...
@overload
def thread(
name: Optional[str] = None,
daemon: bool = True,
pool: Optional[ThreadPool] = None
) -> common.ThreadDecoratorParamsReturnType:
...
def thread(*args, **kwargs):
"""Runs the decorated function within a concurrent thread,
taking care of the result and error management.
Decorated functions will return a concurrent.futures.Future object
once called.
The name parameter will set the thread name.
The daemon parameter controls the underlying thread daemon flag.
Default is True.
The pool parameter accepts a pebble.ThreadPool instance to be used
    instead of running the function in a new thread.
"""
return common.decorate_function(_thread_wrapper, *args, **kwargs)
def _thread_wrapper(
function: Callable,
name: str,
daemon: bool,
_timeout: float,
_mp_context,
pool: ThreadPool
) -> Callable:
if pool is not None:
if not isinstance(pool, ThreadPool):
raise TypeError('Pool expected to be ThreadPool')
@wraps(function)
def wrapper(*args, **kwargs) -> Future:
if pool is not None:
future = pool.schedule(function, args=args, kwargs=kwargs)
else:
future = Future()
common.launch_thread(
name, _function_handler, daemon,
function, args, kwargs, future)
return future
return wrapper
def _function_handler(
function: Callable,
args: list,
kwargs: dict,
future: Future
):
"""Runs the actual function in separate thread and returns its result."""
future.set_running_or_notify_cancel()
result = common.execute(function, *args, **kwargs)
if result.status == common.ResultStatus.SUCCESS:
future.set_result(result.value)
else:
future.set_exception(result.value)
pebble-5.1.1/pebble/decorators.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import signal
import threading
from functools import wraps
from typing import Any, Callable
_synchronized_lock = threading.Lock()
def synchronized(*args) -> Callable:
"""A synchronized function prevents two or more callers to interleave
its execution preventing race conditions.
The synchronized decorator accepts as optional parameter a Lock, RLock or
Semaphore object which will be employed to ensure the function's atomicity.
If no synchronization object is given, a single threading.Lock will be used.
This implies that between different decorated function only one at a time
will be executed.
"""
if callable(args[0]):
return decorate_synchronized(args[0], _synchronized_lock)
else:
        def wrap(function) -> Callable:
return decorate_synchronized(function, args[0])
return wrap
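# Usage sketch (hypothetical functions). Both forms serialize callers; the
# second shares one explicit lock between two functions:
#
#     @synchronized
#     def increment_counter():
#         ...
#
#     lock = threading.Lock()
#
#     @synchronized(lock)
#     def write_log():
#         ...
#
#     @synchronized(lock)
#     def rotate_log():
#         ...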
def decorate_synchronized(function: Callable, lock: threading.Lock) -> Callable:
@wraps(function)
def wrapper(*args, **kwargs) -> Any:
with lock:
return function(*args, **kwargs)
return wrapper
def sighandler(signals: list) -> Callable:
"""Sets the decorated function as signal handler of given *signals*.
*signals* can be either a single signal or a list/tuple
of multiple ones.
"""
def wrap(function):
set_signal_handlers(signals, function)
@wraps(function)
def wrapper(*args, **kwargs):
return function(*args, **kwargs)
return wrapper
return wrap
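# Usage sketch (signals chosen as examples): the decorated function is
# installed as the handler for the given signals at decoration time:
#
#     @sighandler((signal.SIGINT, signal.SIGTERM))
#     def handle_shutdown(signum, frame):
#         print('terminating...')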
def set_signal_handlers(signals: list, function: Callable):
if isinstance(signals, (list, tuple)):
for signum in signals:
signal.signal(signum, function)
else:
signal.signal(signals, function)
pebble-5.1.1/pebble/functions.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import threading
from time import time
from types import MethodType
from typing import Callable, Optional
_waitforthreads_lock = threading.Lock()
def waitforqueues(queues: list, timeout: float = None) -> filter:
"""Waits for one or more *Queue* to be ready or until *timeout* expires.
*queues* is a list containing one or more *Queue.Queue* objects.
If *timeout* is not None the function will block
for the specified amount of seconds.
    The function returns an iterator over the ready *Queues*.
"""
lock = threading.Condition(threading.Lock())
prepare_queues(queues, lock)
try:
wait_queues(queues, lock, timeout)
finally:
reset_queues(queues)
return filter(lambda q: not q.empty(), queues)
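# Usage sketch (the producers are hypothetical): block until at least one
# queue receives an item, then drain whichever queues are ready:
#
#     import queue
#
#     queues = [queue.Queue() for _ in range(3)]
#     start_producer_threads(queues)  # hypothetical producers calling put()
#
#     for ready in waitforqueues(queues, timeout=5):
#         print(ready.get())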
def prepare_queues(queues: list, lock: threading.Condition):
"""Replaces queue._put() method in order to notify the waiting Condition."""
for queue in queues:
queue._pebble_lock = lock
with queue.mutex:
queue._pebble_old_method = queue._put
queue._put = MethodType(new_method, queue)
def wait_queues(queues: list,
lock: threading.Condition,
timeout: Optional[float]):
with lock:
if not any(map(lambda q: not q.empty(), queues)):
lock.wait(timeout)
def reset_queues(queues: list):
"""Resets original queue._put() method."""
for queue in queues:
with queue.mutex:
queue._put = queue._pebble_old_method
delattr(queue, '_pebble_old_method')
delattr(queue, '_pebble_lock')
def waitforthreads(threads: list, timeout: float = None) -> filter:
"""Waits for one or more *Thread* to exit or until *timeout* expires.
.. note::
Expired *Threads* are not joined by *waitforthreads*.
*threads* is a list containing one or more *threading.Thread* objects.
If *timeout* is not None the function will block
for the specified amount of seconds.
The function returns a list containing the ready *Threads*.
"""
old_function = None
lock = threading.Condition(threading.Lock())
def new_function(*args):
old_function(*args)
with lock:
lock.notify_all()
old_function = prepare_threads(new_function)
try:
wait_threads(threads, lock, timeout)
finally:
reset_threads(old_function)
return filter(lambda t: not t.is_alive(), threads)
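# Illustrative usage sketch, not part of Pebble's source: waiting for the
# first of several threads to exit. The `_example_waitforthreads` helper
# name is hypothetical.
def _example_waitforthreads():
    from time import sleep

    threads = [threading.Thread(target=sleep, args=(delay,))
               for delay in (0.1, 5)]
    for thread in threads:
        thread.start()

    # Returns once the quickest thread exits; the slower one keeps running
    # and, as noted above, is not joined.
    return list(waitforthreads(threads, timeout=10))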
def prepare_threads(new_function: Callable) -> Callable:
"""Replaces threading._get_ident() function in order to notify
the waiting Condition."""
with _waitforthreads_lock:
old_function = threading.get_ident
threading.get_ident = new_function
return old_function
def wait_threads(threads: list,
lock: threading.Condition,
timeout: Optional[float]):
timestamp = time()
with lock:
while not any(map(lambda t: not t.is_alive(), threads)):
if timeout is None:
lock.wait()
elif timeout - (time() - timestamp) > 0:
lock.wait(timeout - (time() - timestamp))
else:
return
def reset_threads(old_function: Callable):
"""Resets original threading.get_ident() function."""
with _waitforthreads_lock:
threading.get_ident = old_function
def new_method(self, *args):
self._pebble_old_method(*args)
with self._pebble_lock:
self._pebble_lock.notify_all()
pebble-5.1.1/pebble/pool/__init__.py

__all__ = ['ThreadPool',
'ProcessPool',
'MapFuture',
'ProcessMapFuture']
from pebble.pool.thread import ThreadPool
from pebble.pool.process import ProcessPool
from pebble.pool.base_pool import MapFuture, ProcessMapFuture
pebble-5.1.1/pebble/pool/base_pool.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import time
import logging
import itertools
from queue import Queue
from enum import IntEnum
from threading import RLock
from dataclasses import dataclass
from typing import Any, Callable, Optional
from concurrent.futures import Future, TimeoutError
from pebble.common import Result, ResultStatus
from pebble.common import PebbleFuture, ProcessFuture, CONSTS
class BasePool:
def __init__(self, max_workers: int,
max_tasks: int,
initializer: Optional[Callable],
initargs: list):
self._context = PoolContext(
max_workers, max_tasks, initializer, initargs)
self._loops = ()
self._task_counter = itertools.count()
def __enter__(self):
return self
def __exit__(self, *args):
self.close()
self.join()
@property
def active(self) -> bool:
self._update_pool_status()
return self._context.status in (PoolStatus.CLOSED, PoolStatus.RUNNING)
def close(self):
"""Closes the Pool preventing new tasks from being accepted.
Pending tasks will be completed.
"""
self._context.status = PoolStatus.CLOSED
def stop(self):
"""Stops the pool without performing any pending task."""
self._context.status = PoolStatus.STOPPED
def join(self, timeout: float = None):
"""Joins the pool waiting until all workers exited.
If *timeout* is set, it block until all workers are done
or raises TimeoutError.
"""
if self._context.status == PoolStatus.RUNNING:
raise RuntimeError('The Pool is still running')
if self._context.status == PoolStatus.CLOSED:
self._wait_queue_depletion(timeout)
self.stop()
self.join()
else:
self._context.task_queue.put(None) # Pool termination sentinel
self._stop_pool()
def _wait_queue_depletion(self, timeout: Optional[float]):
tick = time.time()
while self.active:
if timeout is not None and time.time() - tick > timeout:
raise TimeoutError("Tasks are still being executed")
elif self._context.task_queue.unfinished_tasks:
time.sleep(CONSTS.sleep_unit)
else:
return
def _check_pool_status(self):
self._update_pool_status()
if self._context.status == PoolStatus.ERROR:
raise RuntimeError('Unexpected error within the Pool')
elif self._context.status != PoolStatus.RUNNING:
raise RuntimeError('The Pool is not active')
def _update_pool_status(self):
if self._context.status == PoolStatus.CREATED:
self._start_pool()
for loop in self._loops:
if not loop.is_alive():
self._context.status = PoolStatus.ERROR
def _start_pool(self):
raise NotImplementedError("Not implemented")
def _stop_pool(self):
raise NotImplementedError("Not implemented")
class PoolContext:
def __init__(self, max_workers: int,
max_tasks: int,
initializer: Callable,
initargs: list):
self._status = PoolStatus.CREATED
self.status_mutex = RLock()
self.task_queue = Queue()
self.workers = max_workers
self.task_counter = itertools.count()
self.worker_parameters = Worker(max_tasks, initializer, initargs)
@property
def status(self) -> int:
return self._status
@status.setter
def status(self, status: int):
with self.status_mutex:
if self.alive:
self._status = status
@property
def alive(self) -> bool:
return self.status not in (PoolStatus.ERROR, PoolStatus.STOPPED)
class Task:
def __init__(self, identifier: int,
future: Future,
timeout: Optional[float],
payload: 'TaskPayload'):
self.id = identifier
self.future = future
self.timeout = timeout
self.payload = payload
self.timestamp = 0.0
self.worker_id = 0
@property
def started(self) -> bool:
return self.timestamp > 0
def set_running_or_notify_cancel(self):
if hasattr(self.future, 'map_future'):
if not self.future.map_future.done():
try:
self.future.map_future.set_running_or_notify_cancel()
except RuntimeError:
pass
try:
self.future.set_running_or_notify_cancel()
except RuntimeError:
pass
class MapFuture(PebbleFuture):
def __init__(self, futures: list):
super().__init__()
self._futures = futures
@property
def futures(self) -> list:
return self._futures
def cancel(self) -> bool:
"""Cancel the future.
Returns True if any of the elements of the iterables is cancelled.
False otherwise.
"""
super().cancel()
return any(tuple(f.cancel() for f in self._futures))
class ProcessMapFuture(ProcessFuture):
def __init__(self, futures: list):
super().__init__()
self._futures = futures
@property
def futures(self) -> list:
return self._futures
def cancel(self) -> bool:
"""Cancel the future.
Returns True if any of the elements of the iterables is cancelled.
False otherwise.
"""
super().cancel()
return any(tuple(f.cancel() for f in self._futures))
class MapResults:
def __init__(self, futures: list, timeout: float = None):
self._results = itertools.chain.from_iterable(
chunk_result(f, timeout) for f in futures)
def __iter__(self):
return self
def next(self):
result = next(self._results)
if isinstance(result, Result):
if result.status == ResultStatus.SUCCESS:
return result.value
result = result.value
raise result
__next__ = next
def map_results(map_future: MapFuture, timeout: Optional[float]) -> MapFuture:
futures = map_future.futures
if not futures:
map_future.set_result(MapResults(futures))
return map_future
def done_map(_):
if not map_future.done():
map_future.set_result(MapResults(futures, timeout=timeout))
for future in futures:
future.add_done_callback(done_map)
setattr(future, 'map_future', map_future)
return map_future
def iter_chunks(iterable: iter, chunksize: int) -> iter:
"""Iterates over zipped iterables in chunks."""
try:
yield from itertools.batched(iterable, chunksize)
except AttributeError: # < Python 3.12
while True:
chunk = tuple(itertools.islice(iterable, chunksize))
if not chunk:
return
yield chunk
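# Illustrative sketch, not part of Pebble's source: how iter_chunks() splits
# the zipped map() arguments into chunksize-long groups. The
# `_example_iter_chunks` helper name is hypothetical.
def _example_iter_chunks():
    pairs = zip('abcd', range(4))
    # Yields (('a', 0), ('b', 1)) and then (('c', 2), ('d', 3)).
    return list(iter_chunks(pairs, chunksize=2))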
def chunk_result(future: ProcessFuture, timeout: Optional[float]) -> Any:
"""Returns the results of a processed chunk."""
try:
return future.result(timeout=timeout)
except BaseException as error:
return (error, )
def run_initializer(initializer: Callable, initargs: list) -> bool:
"""Runs the Pool initializer dealing with errors."""
try:
initializer(*initargs)
return True
except BaseException as error:
logging.exception(error)
return False
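# Illustrative sketch, not part of Pebble's source: initializers such as the
# one guarded by run_initializer() above are supplied at pool construction;
# every new worker runs the initializer once before accepting tasks. The
# `_example_initializer` helper name is hypothetical.
def _example_initializer():
    from pebble import ThreadPool

    def setup(prefix):
        print(prefix, 'worker starting')

    with ThreadPool(max_workers=2, initializer=setup,
                    initargs=('[init]',)) as pool:
        return pool.schedule(sum, args=([1, 2, 3],)).result()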
class PoolStatus(IntEnum):
"""Current status of the Pool."""
CREATED = 0
RUNNING = 1
CLOSED = 2
STOPPED = 3
ERROR = 4
@dataclass
class Worker:
"""Worker configuration."""
max_tasks: int
initializer: Callable
initargs: list
@dataclass
class TaskPayload:
"""The work item wrapped within a Task."""
function: Callable
args: list
kwargs: dict
pebble-5.1.1/pebble/pool/channel.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import os
import select
import multiprocessing
from typing import Any, Callable
from contextlib import contextmanager
from pebble.common import CONSTS
class ChannelError(OSError):
"""Error occurring within the process channel."""
def channels(mp_context: multiprocessing.context.BaseContext) -> tuple:
read0, write0 = mp_context.Pipe(duplex=False)
read1, write1 = mp_context.Pipe(duplex=False)
return (Channel(read1, write0),
WorkerChannel(read0, write1, (read1, write0), mp_context))
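# Illustrative sketch, not part of Pebble's source: a channel pair as built
# by channels() above; the pool side and the worker side communicate over
# two shared pipes. The `_example_channels` helper name is hypothetical.
def _example_channels():
    pool_side, worker_side = channels(multiprocessing.get_context())
    pool_side.send({'task': 1})   # travels over write0 -> read0
    print(worker_side.recv())     # -> {'task': 1}
    worker_side.send('ack')       # travels over write1 -> read1
    print(pool_side.recv())       # -> 'ack'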
class Channel:
def __init__(self, reader: multiprocessing.connection.Connection,
writer: multiprocessing.connection.Connection):
self.reader = reader
self.writer = writer
self.poll = self._make_poll_method()
def _make_poll_method(self):
def unix_poll(timeout: float = None) -> bool:
readonly_mask = (select.POLLIN |
select.POLLPRI |
select.POLLHUP |
select.POLLERR)
poll = select.poll()
poll.register(self.reader, readonly_mask)
# Convert from Seconds to Milliseconds
if timeout is not None:
timeout *= MILLISECONDS
return bool(poll.poll(timeout))
def windows_poll(timeout: float = None) -> bool:
return self.reader.poll(timeout)
return unix_poll if os.name != 'nt' else windows_poll
def recv(self) -> Any:
return self.reader.recv()
def send(self, obj: Any):
return self.writer.send(obj)
def close(self):
self.reader.close()
self.writer.close()
class WorkerChannel(Channel):
def __init__(self, reader: multiprocessing.connection.Connection,
writer: multiprocessing.connection.Connection,
unused: tuple,
mp_context: multiprocessing.context.BaseContext):
super().__init__(reader, writer)
self.mutex = ChannelMutex(mp_context)
self.recv = self._make_recv_method()
self.send = self._make_send_method()
self.unused = unused
def __getstate__(self) -> tuple:
return self.reader, self.writer, self.mutex, self.unused
def __setstate__(self, state: tuple):
self.reader, self.writer, self.mutex, self.unused = state
self.poll = self._make_poll_method()
self.recv = self._make_recv_method()
self.send = self._make_send_method()
def _make_recv_method(self) -> Callable:
def recv():
with self.mutex.reader:
return self.reader.recv()
return recv
def _make_send_method(self) -> Callable:
def unix_send(obj: Any):
with self.mutex.writer:
return self.writer.send(obj)
def windows_send(obj: Any):
return self.writer.send(obj)
return unix_send if os.name != 'nt' else windows_send
@contextmanager
def lock(self, block: bool = True, timeout: int = None) -> bool:
"""Lock the channel, yields True if channel is locked."""
acquired = self.mutex.acquire(block=block, timeout=timeout)
try:
yield acquired
finally:
if acquired:
self.mutex.release()
def initialize(self):
"""Close unused connections."""
for connection in self.unused:
connection.close()
class ChannelMutex:
def __init__(self, mp_context: multiprocessing.context.BaseContext):
self.reader_mutex = mp_context.RLock()
self.writer_mutex = mp_context.RLock() if os.name != 'nt' else None
self.acquire = self._make_acquire_method()
self.release = self._make_release_method()
def __getstate__(self):
return self.reader_mutex, self.writer_mutex
def __setstate__(self, state: tuple):
self.reader_mutex, self.writer_mutex = state
self.acquire = self._make_acquire_method()
self.release = self._make_release_method()
def __enter__(self):
if self.acquire():
return self
raise ChannelError("Channel mutex time out")
def __exit__(self, *_):
self.release()
def _make_acquire_method(self) -> Callable:
def unix_acquire(
block: bool = True, timeout: int = CONSTS.channel_lock_timeout
) -> bool:
"""Acquire both locks. Returns True if both locks where acquired.
Otherwise, handle the locks state.
"""
if self.reader_mutex.acquire(block=block, timeout=timeout):
if self.writer_mutex.acquire(block=block, timeout=timeout):
return True
self.reader_mutex.release()
return False
def windows_acquire(
block: bool = True, timeout: int = CONSTS.channel_lock_timeout
) -> bool:
"""Acquire the reader lock (on NT OS, writes are atomic)."""
return self.reader_mutex.acquire(block=block, timeout=timeout)
return windows_acquire if os.name == 'nt' else unix_acquire
def _make_release_method(self) -> Callable:
def unix_release():
"""Release both the locks."""
self.reader_mutex.release()
self.writer_mutex.release()
def windows_release():
"""Release the reader lock (on NT OS, writes are atomic)."""
self.reader_mutex.release()
return windows_release if os.name == 'nt' else unix_release
@property
@contextmanager
def reader(self):
"""Reader lock context manager."""
if self.reader_mutex.acquire(timeout=CONSTS.channel_lock_timeout):
try:
yield self
finally:
self.reader_mutex.release()
else:
raise ChannelError("Channel mutex time out")
@property
@contextmanager
def writer(self):
"""Writer lock context manager."""
if self.writer_mutex.acquire(timeout=CONSTS.channel_lock_timeout):
try:
yield self
finally:
self.writer_mutex.release()
else:
raise ChannelError("Channel mutex time out")
MILLISECONDS = 1000
pebble-5.1.1/pebble/pool/process.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import os
import time
import atexit
import signal
import pickle
import multiprocessing
from itertools import count
from dataclasses import dataclass
from typing import Any, Callable, Optional
from concurrent.futures.process import BrokenProcessPool
from concurrent.futures import CancelledError, TimeoutError
from pebble.pool.base_pool import Worker, iter_chunks, run_initializer
from pebble.pool.base_pool import PoolContext, BasePool, Task, TaskPayload
from pebble.pool.base_pool import PoolStatus, ProcessMapFuture, map_results
from pebble.pool.channel import ChannelError, WorkerChannel, channels
from pebble.common import Result, ResultStatus, CONSTS
from pebble.common import launch_process, stop_process
from pebble.common import ProcessExpired, ProcessFuture
from pebble.common import process_execute, launch_thread
class ProcessPool(BasePool):
"""Allows to schedule jobs within a Pool of Processes.
max_workers is an integer representing the amount of desired process workers
managed by the pool.
If max_tasks is a number greater than zero,
each worker will be restarted after performing an equal amount of tasks.
initializer must be callable, if passed, it will be called
every time a worker is started, receiving initargs as arguments.
The context parameter can be used to specify the multiprocessing.context object
used for starting the worker processes.
"""
def __init__(self, max_workers: int = multiprocessing.cpu_count(),
max_tasks: int = 0,
initializer: Callable = None,
initargs: list = (),
context: multiprocessing.context.BaseContext = multiprocessing):
super().__init__(max_workers, max_tasks, initializer, initargs)
self._pool_manager = PoolManager(self._context, context)
self._task_scheduler_loop = None
self._pool_manager_loop = None
self._message_manager_loop = None
def _start_pool(self):
with self._context.status_mutex:
if self._context.status == PoolStatus.CREATED:
self._pool_manager.start()
self._task_scheduler_loop = launch_thread(
None, task_scheduler_loop, True, self._pool_manager)
self._pool_manager_loop = launch_thread(
None, pool_manager_loop, True, self._pool_manager)
self._message_manager_loop = launch_thread(
None, message_manager_loop, True, self._pool_manager)
self._context.status = PoolStatus.RUNNING
def _stop_pool(self):
if self._pool_manager_loop is not None:
self._pool_manager_loop.join()
self._pool_manager.stop()
if self._task_scheduler_loop is not None:
self._task_scheduler_loop.join()
if self._message_manager_loop is not None:
self._message_manager_loop.join()
def schedule(self, function: Callable,
args: list = (),
kwargs: dict = {},
timeout: float = None) -> ProcessFuture:
"""Schedules *function* to be run the Pool.
*args* and *kwargs* will be forwareded to the scheduled function
respectively as arguments and keyword arguments.
*timeout* is an integer, if expires the task will be terminated
and *Future.result()* will raise *TimeoutError*.
A *pebble.ProcessFuture* object is returned.
"""
self._check_pool_status()
future = ProcessFuture()
payload = TaskPayload(function, args, kwargs)
task = Task(next(self._task_counter), future, timeout, payload)
self._context.task_queue.put(task)
return future
def submit(self, function: Callable,
timeout: Optional[float],
/, *args, **kwargs) -> ProcessFuture:
"""This function is provided for compatibility with
`asyncio.loop.run_in_executor`.
For scheduling jobs within the pool use `schedule` instead.
"""
return self.schedule(
function, args=args, kwargs=kwargs, timeout=timeout)
def map(self, function: Callable,
*iterables, **kwargs) -> ProcessMapFuture:
"""Computes the *function* using arguments from
each of the iterables. Stops when the shortest iterable is exhausted.
*timeout* is an integer, if expires the task will be terminated
and the call to next will raise *TimeoutError*.
The *timeout* is applied to each chunk of the iterable.
*chunksize* controls the size of the chunks the iterable will
be broken into before being passed to the function.
A *pebble.ProcessFuture* object is returned.
"""
self._check_pool_status()
timeout = kwargs.get('timeout')
chunksize = kwargs.get('chunksize', 1)
if chunksize < 1:
raise ValueError("chunksize must be >= 1")
futures = [self.schedule(
process_chunk, args=(function, chunk), timeout=timeout)
for chunk in iter_chunks(zip(*iterables), chunksize)]
return map_results(ProcessMapFuture(futures), timeout)
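# Illustrative sketch, not part of Pebble's source: consuming a
# ProcessPool.map() future. Results are yielded per element while the
# chunking remains transparent; with spawn/forkserver start methods, run
# this from a __main__ guard. The `_example_process_map` helper name is
# hypothetical.
def _example_process_map():
    with ProcessPool(max_workers=2) as pool:
        future = pool.map(pow, [2, 3, 4], [10, 10, 10], chunksize=2)
        for value in future.result():
            print(value)  # 1024, 59049, 1048576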
def task_scheduler_loop(pool_manager: 'PoolManager'):
context = pool_manager.context
task_queue = context.task_queue
try:
while context.alive and not GLOBAL_SHUTDOWN:
task = task_queue.get()
if task is not None:
if task.future.cancelled():
task.set_running_or_notify_cancel()
task_queue.task_done()
else:
pool_manager.schedule(task)
else:
task_queue.task_done() # Termination sentinel received
except BrokenProcessPool:
context.status = PoolStatus.ERROR
def pool_manager_loop(pool_manager: 'PoolManager'):
context = pool_manager.context
try:
while context.alive and not GLOBAL_SHUTDOWN:
pool_manager.update_status()
time.sleep(CONSTS.sleep_unit)
except BrokenProcessPool:
context.status = PoolStatus.ERROR
def message_manager_loop(pool_manager: 'PoolManager'):
context = pool_manager.context
try:
while context.alive and not GLOBAL_SHUTDOWN:
pool_manager.process_next_message(CONSTS.sleep_unit)
except BrokenProcessPool:
context.status = PoolStatus.ERROR
class PoolManager:
"""Combines Task and Worker Managers providing a higher level one."""
def __init__(self, context: PoolContext,
mp_context: multiprocessing.context.BaseContext):
self.context = context
self.task_manager = TaskManager(context.task_queue.task_done)
self.worker_manager = WorkerManager(context.workers,
context.worker_parameters,
mp_context)
def start(self):
self.worker_manager.create_workers()
def stop(self):
self.worker_manager.close_channels()
self.worker_manager.force_stop_workers()
def schedule(self, task: Task):
"""Schedules a new Task in the PoolManager."""
self.task_manager.register(task)
try:
self.worker_manager.dispatch(task)
except PICKLING_ERRORS as error:
self.task_manager.task_problem(task.id, error)
def process_next_message(self, timeout: float):
"""Processes the next message coming from the workers."""
message = self.worker_manager.receive(timeout)
if isinstance(message, Acknowledgement):
self.task_manager.task_start(message.task, message.worker)
elif isinstance(message, TaskResult):
self.task_manager.task_done(message.task, message.result)
elif isinstance(message, TaskProblem):
self.task_manager.task_problem(message.task, message.error)
def update_status(self):
self.update_tasks()
self.update_workers()
def update_tasks(self):
"""Handles cancelled and timing out Tasks."""
for task in self.task_manager.timeout_tasks():
if self.worker_manager.maybe_stop_worker(task.worker_id):
self.task_manager.task_done(
task.id,
Result(ResultStatus.FAILURE,
TimeoutError("Task timeout", task.timeout)))
for task in self.task_manager.cancelled_tasks():
if self.worker_manager.maybe_stop_worker(task.worker_id):
self.task_manager.task_done(
task.id, Result(ResultStatus.FAILURE, CancelledError()))
def update_workers(self):
"""Handles unexpected processes termination."""
for expiration in self.worker_manager.inspect_workers():
self.handle_worker_expiration(expiration)
self.worker_manager.create_workers()
def handle_worker_expiration(self, expiration: tuple):
worker_id, exitcode = expiration
try:
task = self.find_expired_task(worker_id)
except LookupError:
return
else:
error = ProcessExpired('Abnormal termination', code=exitcode, pid=worker_id)
self.task_manager.task_done(
task.id, Result(ResultStatus.ERROR, error))
def find_expired_task(self, worker_id: int) -> Task:
tasks = dictionary_values(self.task_manager.tasks)
running_tasks = tuple(t for t in tasks if t.worker_id != 0)
if running_tasks:
return task_worker_lookup(running_tasks, worker_id)
raise BrokenProcessPool("All workers expired")
class TaskManager:
"""Manages the tasks flow within the Pool.
Tasks are registered, acknowledged and completed.
Timing out and cancelled tasks are handled as well.
"""
def __init__(self, task_done_callback: Callable):
self.tasks = {}
self.task_done_callback = task_done_callback
def register(self, task: Task):
self.tasks[task.id] = task
def task_start(self, task_id: int, worker_id: Optional[int]):
task = self.tasks[task_id]
task.worker_id = worker_id
task.timestamp = time.time()
task.set_running_or_notify_cancel()
def task_done(self, task_id: int, result: Result):
"""Set the tasks result and run the callback."""
try:
task = self.tasks.pop(task_id)
except KeyError:
return # result of previously timeout Task
else:
if task.future.cancelled():
task.set_running_or_notify_cancel()
elif result.status == ResultStatus.SUCCESS:
task.future.set_result(result.value)
else:
task.future.set_exception(result.value)
self.task_done_callback()
def task_problem(self, task_id: int, error: Exception):
"""Set the task with the error it caused within the Pool."""
self.task_start(task_id, None)
self.task_done(task_id, Result(ResultStatus.ERROR, error))
def timeout_tasks(self) -> tuple:
return tuple(t for t in dictionary_values(self.tasks)
if self.timeout(t))
def cancelled_tasks(self) -> tuple:
return tuple(t for t in dictionary_values(self.tasks)
if t.started and t.future.cancelled())
@staticmethod
def timeout(task: Task) -> bool:
if task.timeout and task.started:
return time.time() - task.timestamp > task.timeout
else:
return False
class WorkerManager:
"""Manages the workers related mechanics within the Pool.
Maintains the workers active and encapsulates their communication logic.
"""
def __init__(self, workers: int,
worker_parameters: Worker,
mp_context: multiprocessing.context.BaseContext):
self.workers = {}
self.workers_number = workers
self.worker_parameters = worker_parameters
self.pool_channel, self.workers_channel = channels(mp_context)
self.mp_context = mp_context
def dispatch(self, task: Task):
try:
self.pool_channel.send(WorkerTask(task.id, task.payload))
except PICKLING_ERRORS as error:
raise error
except OSError as error:
raise BrokenProcessPool from error
def receive(self, timeout: float):
try:
if self.pool_channel.poll(timeout):
return self.pool_channel.recv()
else:
return NoMessage()
except (OSError, TypeError) as error:
raise BrokenProcessPool from error
except EOFError: # Pool shutdown
return NoMessage()
def inspect_workers(self) -> tuple:
"""Updates the workers status.
Returns the workers which have unexpectedly ended.
"""
expired = tuple(w for w in dictionary_values(self.workers)
if not w.is_alive())
for worker in expired:
self.workers.pop(worker.pid)
return tuple((w.pid, w.exitcode) for w in expired if w.exitcode != 0)
def create_workers(self):
for _ in range(self.workers_number - len(self.workers)):
self.new_worker()
def close_channels(self):
self.pool_channel.close()
self.workers_channel.close()
def force_stop_workers(self):
for worker_id in tuple(self.workers.keys()):
stop_process(self.workers.pop(worker_id))
def new_worker(self):
try:
worker = launch_process(
WORKERS_NAME, worker_process, False, self.mp_context,
self.worker_parameters, self.workers_channel)
self.workers[worker.pid] = worker
except OSError as error:
raise BrokenProcessPool from error
def maybe_stop_worker(self, worker_id: int) -> bool:
"""Try to stop the assigned worker.
Returns True if the worker was stopped successfully
or had already expired on its own.
"""
with self.workers_channel.lock(block=False) as locked:
if locked:
worker = self.workers.pop(worker_id, None)
if worker is not None: # otherwise the worker already expired on its own
stop_process(worker)
return locked
def worker_process(params: Worker, channel: WorkerChannel):
"""The worker process routines."""
signal.signal(signal.SIGINT, signal.SIG_IGN)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
channel.initialize()
if params.initializer is not None:
if not run_initializer(params.initializer, params.initargs):
os._exit(1)
try:
for task in worker_get_next_task(channel, params.max_tasks):
payload = task.payload
result = process_execute(
payload.function, *payload.args, **payload.kwargs)
send_result(channel, TaskResult(task.id, result))
except (OSError, RuntimeError) as error:
errno = getattr(error, 'errno', 1)
os._exit(errno if isinstance(errno, int) else 1)
except EOFError: # Pool shutdown
os._exit(0)
def worker_get_next_task(channel: WorkerChannel, max_tasks: int):
counter = count()
while max_tasks == 0 or next(counter) < max_tasks:
yield fetch_task(channel)
def send_result(channel: WorkerChannel, result: Any):
"""Send result handling pickling and communication errors."""
try:
channel.send(result)
except (pickle.PicklingError, TypeError) as error:
channel.send(TaskProblem(result.task, error))
def fetch_task(channel: WorkerChannel) -> Task:
while channel.poll():
try:
return task_transaction(channel)
except RuntimeError:
continue # another worker got the task
def task_transaction(channel: WorkerChannel) -> Task:
"""Ensures a task is fetched and acknowledged atomically."""
with channel.lock():
if channel.poll(0):
task = channel.recv()
channel.send(Acknowledgement(os.getpid(), task.id))
else:
raise RuntimeError("Race condition between workers")
return task
def task_worker_lookup(running_tasks: tuple, worker_id: int) -> Task:
for task in running_tasks:
if task.worker_id == worker_id:
return task
raise LookupError("Not found")
def process_chunk(function: Callable, chunk: list) -> list:
"""Processes a chunk of the iterable passed to map dealing with errors."""
return [process_execute(function, *args) for args in chunk]
def interpreter_shutdown():
global GLOBAL_SHUTDOWN
GLOBAL_SHUTDOWN = True
workers = [p for p in multiprocessing.active_children()
if p.name == WORKERS_NAME]
for worker in workers:
stop_process(worker)
def dictionary_values(dictionary: dict) -> tuple:
"""Returns a snapshot of the dictionary values handling race conditions."""
while True:
try:
return tuple(dictionary.values())
except RuntimeError: # race condition
pass
atexit.register(interpreter_shutdown)
GLOBAL_SHUTDOWN = False
WORKERS_NAME = 'pebble_pool_worker'
PICKLING_ERRORS = AttributeError, pickle.PicklingError, TypeError
@dataclass
class NoMessage:
pass
@dataclass
class TaskResult:
"""The result of a Task."""
task: int
result: Any
@dataclass
class TaskProblem:
"""Issue occurred within a Task."""
task: int
error: BaseException
@dataclass
class WorkerTask:
"""A Task assigned to a worker."""
id: int
payload: TaskPayload
@dataclass
class Acknowledgement:
"""Ack from a worker of a received Task."""
worker: int
task: int
pebble-5.1.1/pebble/pool/thread.py

# This file is part of Pebble.
# Copyright (c) 2013-2025, Matteo Cafasso
# Pebble is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# Pebble is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with Pebble. If not, see <http://www.gnu.org/licenses/>.
import time
import multiprocessing
from itertools import count
from typing import Callable
from concurrent.futures import Future
from pebble.common import ResultStatus, execute, launch_thread, CONSTS
from pebble.pool.base_pool import iter_chunks, run_initializer
from pebble.pool.base_pool import PoolStatus, MapFuture, map_results
from pebble.pool.base_pool import PoolContext, BasePool, Task, TaskPayload
class ThreadPool(BasePool):
"""Allows to schedule jobs within a Pool of Threads.
max_workers is an integer representing the amount of desired process workers
managed by the pool.
If max_tasks is a number greater than zero,
each worker will be restarted after performing an equal amount of tasks.
initializer must be callable, if passed, it will be called
every time a worker is started, receiving initargs as arguments.
"""
def __init__(self, max_workers: int = multiprocessing.cpu_count(),
max_tasks: int = 0,
initializer: Callable = None,
initargs: list = ()):
super().__init__(max_workers, max_tasks, initializer, initargs)
self._pool_manager = PoolManager(self._context)
self._pool_manager_loop = None
def _start_pool(self):
with self._context.status_mutex:
if self._context.status == PoolStatus.CREATED:
self._pool_manager.start()
self._pool_manager_loop = launch_thread(
None, pool_manager_loop, True, self._pool_manager)
self._context.status = PoolStatus.RUNNING
def _stop_pool(self):
if self._pool_manager_loop is not None:
self._pool_manager_loop.join()
self._pool_manager.stop()
def schedule(self, function, args=(), kwargs={}) -> Future:
"""Schedules *function* to be run the Pool.
*args* and *kwargs* will be forwareded to the scheduled function
respectively as arguments and keyword arguments.
A *concurrent.futures.Future* object is returned.
"""
self._check_pool_status()
future = Future()
payload = TaskPayload(function, args, kwargs)
task = Task(next(self._task_counter), future, None, payload)
self._context.task_queue.put(task)
return future
def submit(self, function: Callable, *args, **kwargs) -> Future:
"""This function is provided for compatibility with
`asyncio.loop.run_in_executor`.
For scheduling jobs within the pool use `schedule` instead.
"""
return self.schedule(function, args=args, kwargs=kwargs)
def map(self, function: Callable, *iterables, **kwargs) -> MapFuture:
"""Returns an iterator equivalent to map(function, iterables).
*chunksize* controls the size of the chunks the iterable will
be broken into before being passed to the function. If None
the size will be controlled by the Pool.
"""
self._check_pool_status()
timeout = kwargs.get('timeout')
chunksize = kwargs.get('chunksize', 1)
if chunksize < 1:
raise ValueError("chunksize must be >= 1")
futures = [self.schedule(process_chunk, args=(function, chunk))
for chunk in iter_chunks(zip(*iterables), chunksize)]
return map_results(MapFuture(futures), timeout)
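# Illustrative sketch, not part of Pebble's source: ThreadPool.map() with a
# per-chunk timeout; iterating the result raises TimeoutError for chunks
# that did not complete in time. The `_example_thread_map` helper name is
# hypothetical.
def _example_thread_map():
    with ThreadPool(max_workers=4) as pool:
        future = pool.map(len, ['a', 'bb', 'ccc'], timeout=5)
        return list(future.result())  # [1, 2, 3]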
def pool_manager_loop(pool_manager: 'PoolManager'):
context = pool_manager.context
while context.alive:
pool_manager.update_status()
time.sleep(CONSTS.sleep_unit)
class PoolManager:
def __init__(self, context: PoolContext):
self.workers = []
self.context = context
def start(self):
self.create_workers()
def stop(self):
for worker in self.workers:
self.context.task_queue.put(None)
for worker in tuple(self.workers):
self.join_worker(worker)
def update_status(self):
expired = self.inspect_workers()
for worker in expired:
self.join_worker(worker)
self.create_workers()
def inspect_workers(self) -> tuple:
return tuple(w for w in self.workers if not w.is_alive())
def create_workers(self):
for _ in range(self.context.workers - len(self.workers)):
worker = launch_thread(None, worker_thread, True, self.context)
self.workers.append(worker)
def join_worker(self, worker):
worker.join()
self.workers.remove(worker)
def worker_thread(context: PoolContext):
"""The worker thread routines."""
queue = context.task_queue
parameters = context.worker_parameters
if parameters.initializer is not None:
if not run_initializer(parameters.initializer, parameters.initargs):
context.status = PoolStatus.ERROR
return
for task in get_next_task(context, parameters.max_tasks):
execute_next_task(task)
queue.task_done()
def get_next_task(context: PoolContext, max_tasks: int):
counter = count()
queue = context.task_queue
while context.alive and (max_tasks == 0 or next(counter) < max_tasks):
task = queue.get()
if task is not None:
if task.future.cancelled():
task.set_running_or_notify_cancel()
queue.task_done()
else:
yield task
def execute_next_task(task: Task):
payload = task.payload
task.timestamp = time.time()
task.set_running_or_notify_cancel()
result = execute(payload.function, *payload.args, **payload.kwargs)
if result.status == ResultStatus.SUCCESS:
task.future.set_result(result.value)
else:
task.future.set_exception(result.value)
def process_chunk(function: Callable, chunk: list) -> list:
"""Processes a chunk of the iterable passed to map dealing with errors."""
return [execute(function, *args) for args in chunk]
pebble-5.1.1/pebble/py.typed  (empty PEP 561 marker file)
pebble-5.1.1/setup.cfg

[egg_info]
tag_build =
tag_date = 0
pebble-5.1.1/setup.py

import os
import fileinput
from setuptools import setup, find_packages
CWD = os.path.dirname(__file__)
def package_version():
module_path = os.path.join(CWD, 'pebble', '__init__.py')
for line in fileinput.FileInput(module_path):
if line.startswith('__version__'):
return line.split('=')[-1].strip().replace('\'', '')
setup(
name="Pebble",
version=package_version(),
author="Matteo Cafasso",
author_email="noxdafox@gmail.com",
description=("Threading and multiprocessing eye-candy."),
license="LGPL",
keywords="thread process pool decorator",
url="https://github.com/noxdafox/pebble",
packages=find_packages(exclude=["test"]),
long_description=open(os.path.join(CWD, 'README.rst')).read(),
python_requires=">=3.8",
classifiers=[
"Programming Language :: Python :: 3",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: " +
"GNU Library or Lesser General Public License (LGPL)"
],
)
pebble-5.1.1/test/test_asynchronous_process_fork.py

import os
import time
import pickle
import signal
import asyncio
import unittest
import threading
import dataclasses
import multiprocessing
from concurrent.futures import CancelledError, TimeoutError
from pebble import asynchronous, ProcessExpired, ProcessPool
# set start method
supported = False
mp_context = None
methods = multiprocessing.get_all_start_methods()
if 'fork' in methods:
try:
mp_context = multiprocessing.get_context('fork')
if mp_context.get_start_method() == 'fork':
supported = True
else:
raise Exception(mp_context.get_start_method())
except RuntimeError: # child process
pass
else:
mp_context = multiprocessing.get_context()
def not_decorated(argument, keyword_argument=0):
return argument + keyword_argument
@asynchronous.process(context=mp_context)
def decorated(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
@asynchronous.process(context=mp_context)
def error_decorated():
raise RuntimeError("BOOM!")
@asynchronous.process(context=mp_context)
def error_returned():
return RuntimeError("BOOM!")
@asynchronous.process(context=mp_context)
def pickling_error_decorated():
event = threading.Event()
return event
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
@asynchronous.process(context=mp_context)
def frozen_error_decorated():
raise FrozenError()
@asynchronous.process(context=mp_context)
def critical_decorated():
os._exit(123)
@asynchronous.process(context=mp_context)
def decorated_cancel():
time.sleep(10)
@asynchronous.process(timeout=0.1, context=mp_context)
def long_decorated():
time.sleep(10)
@asynchronous.process(timeout=0.1, context=mp_context)
def sigterm_decorated():
signal.signal(signal.SIGTERM, signal.SIG_IGN)
time.sleep(10)
@asynchronous.process(context=mp_context)
def name_keyword_argument(name='function_kwarg'):
return name
@asynchronous.process(name='asynchronous_process_name', context=mp_context)
def name_keyword_decorated():
return multiprocessing.current_process().name
@asynchronous.process(name='decorator_kwarg', context=mp_context)
def name_keyword_decorated_and_argument(name='bar'):
return (multiprocessing.current_process().name, name)
@asynchronous.process(daemon=False, context=mp_context)
def daemon_keyword_decorated():
return multiprocessing.current_process().daemon
@asynchronous.process(pool=ProcessPool(1, context=mp_context))
def pool_decorated(_argument, _keyword_argument=0):
return multiprocessing.current_process().pid
class ProcessAsynchronousObj:
a = 0
def __init__(self):
self.b = 1
@classmethod
@asynchronous.process(context=mp_context)
def clsmethod(cls):
return cls.a
@asynchronous.process(context=mp_context)
def instmethod(self):
return self.b
@staticmethod
@asynchronous.process(context=mp_context)
def stcmethod():
return 2
class ProcessAsynchronousSub1(ProcessAsynchronousObj):
@classmethod
@asynchronous.process(context=mp_context)
def clsmethod(cls):
return cls.a + 1
@asynchronous.process(context=mp_context)
def instmethod(self):
return self.b + 1
@staticmethod
@asynchronous.process(context=mp_context)
def stcmethod():
return 2 + 1
class ProcessAsynchronousSub2(ProcessAsynchronousObj):
@classmethod
@asynchronous.process(context=mp_context)
def clsmethod(cls):
return cls.a + 2
@asynchronous.process(context=mp_context)
def instmethod(self):
return self.b + 2
@staticmethod
@asynchronous.process(context=mp_context)
def stcmethod():
return 2 + 2
class CallableClass:
def __call__(self, argument, keyword_argument=0):
return argument + keyword_argument
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessAsynchronous(unittest.TestCase):
def setUp(self):
self.results = 0
self.exception = None
self.event = None
self.asynchronousobj = ProcessAsynchronousObj()
self.asynchronousobj1 = ProcessAsynchronousSub1()
self.asynchronousobj2 = ProcessAsynchronousSub2()
def callback(self, future):
try:
self.results = future.result()
except (ProcessExpired, RuntimeError, TimeoutError) as error:
self.exception = error
finally:
self.event.set()
def test_docstring(self):
"""Process Fork docstring is preserved."""
self.assertEqual(decorated.__doc__, "A docstring.")
def test_wrong_timeout(self):
"""Process Fork TypeError is raised if timeout is not number."""
with self.assertRaises(TypeError):
@asynchronous.process(timeout='Foo', context=mp_context)
def function():
return
def test_class_method(self):
"""Process Fork decorated classmethods."""
async def test0():
return await ProcessAsynchronousObj.clsmethod()
self.assertEqual(asyncio.run(test0()), 0)
async def test1():
return await ProcessAsynchronousSub1.clsmethod()
self.assertEqual(asyncio.run(test1()), 1)
async def test2():
return await ProcessAsynchronousSub2.clsmethod()
self.assertEqual(asyncio.run(test2()), 2)
def test_instance_method(self):
"""Process Fork decorated instance methods."""
async def test0():
return await self.asynchronousobj.instmethod()
self.assertEqual(asyncio.run(test0()), 1)
async def test1():
return await self.asynchronousobj1.instmethod()
self.assertEqual(asyncio.run(test1()), 2)
async def test2():
return await self.asynchronousobj2.instmethod()
self.assertEqual(asyncio.run(test2()), 3)
def test_static_method(self):
"""Process Fork decorated static methods (Fork startmethod only)."""
async def test0():
return await self.asynchronousobj.stcmethod()
self.assertEqual(asyncio.run(test0()), 2)
async def test1():
return await self.asynchronousobj1.stcmethod()
self.assertEqual(asyncio.run(test1()), 3)
async def test2():
return await self.asynchronousobj2.stcmethod()
self.assertEqual(asyncio.run(test2()), 4)
def test_not_decorated_results(self):
"""Process Fork results are produced."""
non_decorated = asynchronous.process(not_decorated, context=mp_context)
async def test():
return await non_decorated(1, 1)
self.assertEqual(asyncio.run(test()), 2)
def test_decorated_results(self):
"""Process Fork results are produced."""
async def test():
return await decorated(1, 1)
self.assertEqual(asyncio.run(test()), 2)
def test_decorated_results_callback(self):
"""Process Fork results are forwarded to the callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = decorated(1, 1)
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertEqual(self.results, 2)
def test_error_decorated(self):
"""Process Fork errors are raised by future.result."""
async def test():
return await error_decorated()
with self.assertRaises(RuntimeError):
asyncio.run(test())
def test_error_returned(self):
"""Process Fork errors are returned by future.result."""
async def test():
return await error_returned()
self.assertIsInstance(asyncio.run(test()), RuntimeError)
def test_error_decorated_callback(self):
"""Process Fork errors are forwarded to callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = error_decorated()
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertTrue(isinstance(self.exception, RuntimeError),
msg=str(self.exception))
def test_pickling_error_decorated(self):
"""Process Fork pickling errors are raised by future.result."""
async def test():
return await pickling_error_decorated()
with self.assertRaises((pickle.PicklingError, TypeError)):
asyncio.run(test())
def test_frozen_error_decorated(self):
"""Process Fork frozen errors are raised by future.result."""
async def test():
return await frozen_error_decorated()
with self.assertRaises(FrozenError):
asyncio.run(test())
def test_timeout_decorated(self):
"""Process Fork raises TimeoutError if so."""
async def test():
return await long_decorated()
with self.assertRaises(TimeoutError):
asyncio.run(test())
def test_timeout_decorated_callback(self):
"""Process Fork TimeoutError is forwarded to callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = long_decorated()
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertTrue(isinstance(self.exception, TimeoutError),
msg=str(self.exception))
def test_decorated_dead_process(self):
"""Process Fork ProcessExpired is raised if process dies."""
async def test():
return await critical_decorated()
with self.assertRaises(ProcessExpired) as exc_ctx:
asyncio.run(test())
self.assertEqual(exc_ctx.exception.exitcode, 123)
self.assertIsInstance(exc_ctx.exception.pid, int)
def test_dead_process_decorated_callback(self):
"""Process Fork ProcessExpired is forwarded to callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = critical_decorated()
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertTrue(isinstance(self.exception, ProcessExpired),
msg=str(self.exception))
def test_cancel_decorated(self):
"""Process Fork raises CancelledError if future was cancelled."""
async def test():
future = decorated_cancel()
future.cancel()
return await future
with self.assertRaises(asyncio.CancelledError):
asyncio.run(test())
@unittest.skipIf(os.name == 'nt', "Test won't run on Windows.")
def test_decorated_ignoring_sigterm(self):
"""Process Fork Asynchronous ignored SIGTERM signal are handled on Unix."""
async def test():
return await sigterm_decorated()
with self.assertRaises(TimeoutError):
asyncio.run(test())
def test_name_keyword_argument(self):
"""name keyword can be passed to a decorated function process without name"""
async def test():
return await name_keyword_argument()
self.assertEqual(asyncio.run(test()), "function_kwarg")
def test_name_keyword_decorated(self):
"""
Check that a simple use case of the name keyword passed to the decorator works
"""
async def test():
return await name_keyword_decorated()
self.assertEqual(asyncio.run(test()), "asynchronous_process_name")
def test_name_keyword_decorated_result_collision(self):
"""name kwarg is handled without modifying the function kwargs"""
async def test():
return await name_keyword_decorated_and_argument(
name="function_kwarg")
dec_out, fn_out = asyncio.run(test())
self.assertEqual(dec_out, "decorator_kwarg")
self.assertEqual(fn_out, "function_kwarg")
def test_daemon_keyword_decorated(self):
"""Daemon keyword can be passed to a decorated function and spawns correctly."""
async def test():
return await daemon_keyword_decorated()
self.assertEqual(asyncio.run(test()), False)
def test_callable_objects(self):
"""Callable objects are correctly handled."""
callable_object = asynchronous.process(context=mp_context)(CallableClass())
async def test():
return await callable_object(1)
self.assertEqual(asyncio.run(test()), 1)
def test_pool_decorated(self):
"""Process Fork pool decorated function."""
async def test():
return await pool_decorated(1, 1)
self.assertEqual(asyncio.run(test()), asyncio.run(test()))
pebble-5.1.1/test/test_asynchronous_process_forkserver.py

import os
import time
import pickle
import signal
import asyncio
import unittest
import threading
import dataclasses
import multiprocessing
from concurrent.futures import CancelledError, TimeoutError
from pebble import asynchronous, ProcessExpired, ProcessPool
# set start method
supported = False
mp_context = None
methods = multiprocessing.get_all_start_methods()
if 'forkserver' in methods:
try:
mp_context = multiprocessing.get_context('forkserver')
if mp_context.get_start_method() == 'forkserver':
supported = True
else:
raise Exception(mp_context.get_start_method())
except RuntimeError: # child process
pass
else:
mp_context = multiprocessing.get_context()
def not_decorated(argument, keyword_argument=0):
return argument + keyword_argument
@asynchronous.process(context=mp_context)
def decorated(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
@asynchronous.process(context=mp_context)
def error_decorated():
raise RuntimeError("BOOM!")
@asynchronous.process(context=mp_context)
def error_returned():
return RuntimeError("BOOM!")
@asynchronous.process(context=mp_context)
def pickling_error_decorated():
event = threading.Event()
return event
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
@asynchronous.process(context=mp_context)
def frozen_error_decorated():
raise FrozenError()
@asynchronous.process(context=mp_context)
def critical_decorated():
os._exit(123)
@asynchronous.process(context=mp_context)
def decorated_cancel():
time.sleep(10)
@asynchronous.process(timeout=0.1, context=mp_context)
def long_decorated():
time.sleep(10)
@asynchronous.process(timeout=0.1, context=mp_context)
def sigterm_decorated():
signal.signal(signal.SIGTERM, signal.SIG_IGN)
time.sleep(10)
@asynchronous.process(context=mp_context)
def name_keyword_argument(name='function_kwarg'):
return name
@asynchronous.process(name='asynchronous_process_name', context=mp_context)
def name_keyword_decorated():
return multiprocessing.current_process().name
@asynchronous.process(name='decorator_kwarg', context=mp_context)
def name_keyword_decorated_and_argument(name='bar'):
return (multiprocessing.current_process().name, name)
@asynchronous.process(daemon=False, context=mp_context)
def daemon_keyword_decorated():
return multiprocessing.current_process().daemon
@asynchronous.process(pool=ProcessPool(1, context=mp_context))
def pool_decorated(_argument, _keyword_argument=0):
return multiprocessing.current_process().pid
class ProcessAsynchronousObj:
a = 0
def __init__(self):
self.b = 1
@classmethod
@asynchronous.process(context=mp_context)
def clsmethod(cls):
return cls.a
@asynchronous.process(context=mp_context)
def instmethod(self):
return self.b
@staticmethod
@asynchronous.process(context=mp_context)
def stcmethod():
return 2
class ProcessAsynchronousSub1(ProcessAsynchronousObj):
@classmethod
@asynchronous.process(context=mp_context)
def clsmethod(cls):
return cls.a + 1
@asynchronous.process(context=mp_context)
def instmethod(self):
return self.b + 1
@staticmethod
@asynchronous.process(context=mp_context)
def stcmethod():
return 2 + 1
class ProcessAsynchronousSub2(ProcessAsynchronousObj):
@classmethod
@asynchronous.process(context=mp_context)
def clsmethod(cls):
return cls.a + 2
@asynchronous.process(context=mp_context)
def instmethod(self):
return self.b + 2
@staticmethod
@asynchronous.process(context=mp_context)
def stcmethod():
return 2 + 2
class CallableClass:
def __call__(self, argument, keyword_argument=0):
return argument + keyword_argument
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessAsynchronous(unittest.TestCase):
def setUp(self):
self.results = 0
self.exception = None
self.event = None
self.asynchronousobj = ProcessAsynchronousObj()
self.asynchronousobj1 = ProcessAsynchronousSub1()
self.asynchronousobj2 = ProcessAsynchronousSub2()
def callback(self, future):
try:
self.results = future.result()
except (ProcessExpired, RuntimeError, TimeoutError) as error:
self.exception = error
finally:
self.event.set()
def test_docstring(self):
"""Process Forkserver docstring is preserved."""
self.assertEqual(decorated.__doc__, "A docstring.")
def test_wrong_timeout(self):
"""Process Forkserver TypeError is raised if timeout is not number."""
with self.assertRaises(TypeError):
@asynchronous.process(timeout='Foo', context=mp_context)
def function():
return
def test_class_method(self):
"""Process Forkserver decorated classmethods."""
async def test0():
return await ProcessAsynchronousObj.clsmethod()
self.assertEqual(asyncio.run(test0()), 0)
async def test1():
return await ProcessAsynchronousSub1.clsmethod()
self.assertEqual(asyncio.run(test1()), 1)
async def test2():
return await ProcessAsynchronousSub2.clsmethod()
self.assertEqual(asyncio.run(test2()), 2)
def test_instance_method(self):
"""Process Forkserver decorated instance methods."""
async def test0():
return await self.asynchronousobj.instmethod()
self.assertEqual(asyncio.run(test0()), 1)
async def test1():
return await self.asynchronousobj1.instmethod()
self.assertEqual(asyncio.run(test1()), 2)
async def test2():
return await self.asynchronousobj2.instmethod()
self.assertEqual(asyncio.run(test2()), 3)
def test_not_decorated_results(self):
"""Process Forkserver results are produced."""
non_decorated = asynchronous.process(not_decorated, context=mp_context)
async def test():
return await non_decorated(1, 1)
self.assertEqual(asyncio.run(test()), 2)
def test_decorated_results(self):
"""Process Forkserver results are produced."""
async def test():
return await decorated(1, 1)
self.assertEqual(asyncio.run(test()), 2)
def test_decorated_results_callback(self):
"""Process Forkserver results are forwarded to the callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = decorated(1, 1)
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertEqual(self.results, 2)
def test_error_decorated(self):
"""Process Forkserver errors are raised by future.result."""
async def test():
return await error_decorated()
with self.assertRaises(RuntimeError):
asyncio.run(test())
def test_error_returned(self):
"""Process Forkserver errors are returned by future.result."""
async def test():
return await error_returned()
self.assertIsInstance(asyncio.run(test()), RuntimeError)
def test_error_decorated_callback(self):
"""Process Forkserver errors are forwarded to callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = error_decorated()
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertTrue(isinstance(self.exception, RuntimeError),
msg=str(self.exception))
def test_pickling_error_decorated(self):
"""Process Forkserver pickling errors are raised by future.result."""
async def test():
return await pickling_error_decorated()
with self.assertRaises((pickle.PicklingError, TypeError)):
asyncio.run(test())
def test_frozen_error_decorated(self):
"""Process Fork frozen errors are raised by future.result."""
async def test():
return await frozen_error_decorated()
with self.assertRaises(FrozenError):
asyncio.run(test())
def test_timeout_decorated(self):
"""Process Forkserver raises TimeoutError if so."""
async def test():
return await long_decorated()
with self.assertRaises(TimeoutError):
asyncio.run(test())
def test_timeout_decorated_callback(self):
"""Process Forkserver TimeoutError is forwarded to callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = long_decorated()
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertTrue(isinstance(self.exception, TimeoutError),
msg=str(self.exception))
def test_decorated_dead_process(self):
"""Process Forkserver ProcessExpired is raised if process dies."""
async def test():
return await critical_decorated()
with self.assertRaises(ProcessExpired) as exc_ctx:
asyncio.run(test())
self.assertEqual(exc_ctx.exception.exitcode, 123)
self.assertIsInstance(exc_ctx.exception.pid, int)
    def test_decorated_dead_process_callback(self):
"""Process Forkserver ProcessExpired is forwarded to callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = critical_decorated()
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertTrue(isinstance(self.exception, ProcessExpired),
msg=str(self.exception))
def test_cancel_decorated(self):
"""Process Forkserver raises CancelledError if future was cancelled."""
async def test():
future = decorated_cancel()
future.cancel()
return await future
with self.assertRaises(asyncio.CancelledError):
asyncio.run(test())
@unittest.skipIf(os.name == 'nt', "Test won't run on Windows.")
def test_decorated_ignoring_sigterm(self):
"""Process Forkserver Asynchronous ignored SIGTERM signal are handled on Unix."""
async def test():
return await sigterm_decorated()
with self.assertRaises(TimeoutError):
asyncio.run(test())
def test_name_keyword_argument(self):
"""name keyword can be passed to a decorated function process without name"""
async def test():
return await name_keyword_argument()
self.assertEqual(asyncio.run(test()), "function_kwarg")
def test_name_keyword_decorated(self):
"""
Check that a simple use case of the name keyword passed to the decorator works
"""
async def test():
return await name_keyword_decorated()
self.assertEqual(asyncio.run(test()), "asynchronous_process_name")
def test_daemon_keyword_decorated(self):
"""Daemon keyword can be passed to a decorated function and spawns correctly."""
async def test():
return await daemon_keyword_decorated()
self.assertEqual(asyncio.run(test()), False)
def test_callable_objects(self):
"""Callable objects are correctly handled."""
callable_object = asynchronous.process(context=mp_context)(CallableClass())
async def test():
return await callable_object(1)
self.assertEqual(asyncio.run(test()), 1)
def test_pool_decorated(self):
"""Process Forkserver results are produced."""
async def test():
return await pool_decorated(1, 1)
self.assertEqual(asyncio.run(test()), asyncio.run(test()))
pebble-5.1.1/test/test_asynchronous_process_spawn.py
import os
import time
import pickle
import signal
import asyncio
import unittest
import threading
import dataclasses
import multiprocessing
from concurrent.futures import CancelledError, TimeoutError
from pebble import asynchronous, ProcessExpired, ProcessPool
# set start method
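# Probe for the 'spawn' start method and fall back to the default context,
# so the module still imports on platforms where 'spawn' is unsupported.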
supported = False
mp_context = None
methods = multiprocessing.get_all_start_methods()
if 'spawn' in methods:
try:
mp_context = multiprocessing.get_context('spawn')
if mp_context.get_start_method() == 'spawn':
supported = True
else:
raise Exception(mp_context.get_start_method())
except RuntimeError: # child process
pass
else:
mp_context = multiprocessing.get_context()
def not_decorated(argument, keyword_argument=0):
return argument + keyword_argument
@asynchronous.process(context=mp_context)
def decorated(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
@asynchronous.process(context=mp_context)
def error_decorated():
raise RuntimeError("BOOM!")
@asynchronous.process(context=mp_context)
def error_returned():
return RuntimeError("BOOM!")
@asynchronous.process(context=mp_context)
def pickling_error_decorated():
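    # A threading.Event wraps locks and cannot be pickled, so returning it
    # fails when the worker serialises the result back to the parent.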
event = threading.Event()
return event
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
@asynchronous.process(context=mp_context)
def frozen_error_decorated():
raise FrozenError()
@asynchronous.process(context=mp_context)
def critical_decorated():
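    # Exit abruptly without raising so the parent observes a dead worker
    # (ProcessExpired with exitcode 123).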
os._exit(123)
@asynchronous.process(context=mp_context)
def decorated_cancel():
time.sleep(10)
@asynchronous.process(timeout=0.1, context=mp_context)
def long_decorated():
time.sleep(10)
@asynchronous.process(timeout=0.1, context=mp_context)
def sigterm_decorated():
signal.signal(signal.SIGTERM, signal.SIG_IGN)
time.sleep(10)
@asynchronous.process(context=mp_context)
def name_keyword_argument(name='function_kwarg'):
return name
@asynchronous.process(name='asynchronous_process_name', context=mp_context)
def name_keyword_decorated():
return multiprocessing.current_process().name
@asynchronous.process(name='decorator_kwarg', context=mp_context)
def name_keyword_decorated_and_argument(name='bar'):
return (multiprocessing.current_process().name, name)
@asynchronous.process(daemon=False, context=mp_context)
def daemon_keyword_decorated():
return multiprocessing.current_process().daemon
@asynchronous.process(pool=ProcessPool(1, context=mp_context))
def pool_decorated(_argument, _keyword_argument=0):
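    # The single-worker pool serves every call from the same process, so
    # repeated invocations report the same PID.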
return multiprocessing.current_process().pid
class ProcessAsynchronousObj:
a = 0
def __init__(self):
self.b = 1
@classmethod
@asynchronous.process(context=mp_context)
def clsmethod(cls):
return cls.a
@asynchronous.process(context=mp_context)
def instmethod(self):
return self.b
@staticmethod
@asynchronous.process(context=mp_context)
def stcmethod():
return 2
class ProcessAsynchronousSub1(ProcessAsynchronousObj):
@classmethod
@asynchronous.process(context=mp_context)
def clsmethod(cls):
return cls.a + 1
@asynchronous.process(context=mp_context)
def instmethod(self):
return self.b + 1
@staticmethod
@asynchronous.process(context=mp_context)
def stcmethod():
return 2 + 1
class ProcessAsynchronousSub2(ProcessAsynchronousObj):
@classmethod
@asynchronous.process(context=mp_context)
def clsmethod(cls):
return cls.a + 2
@asynchronous.process(context=mp_context)
def instmethod(self):
return self.b + 2
@staticmethod
@asynchronous.process(context=mp_context)
def stcmethod():
return 2 + 2
class CallableClass:
def __call__(self, argument, keyword_argument=0):
return argument + keyword_argument
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessAsynchronous(unittest.TestCase):
def setUp(self):
self.results = 0
self.exception = None
self.event = None
self.asynchronousobj = ProcessAsynchronousObj()
self.asynchronousobj1 = ProcessAsynchronousSub1()
self.asynchronousobj2 = ProcessAsynchronousSub2()
def callback(self, future):
try:
self.results = future.result()
except (ProcessExpired, RuntimeError, TimeoutError) as error:
self.exception = error
finally:
self.event.set()
def test_docstring(self):
"""Process Spawn docstring is preserved."""
self.assertEqual(decorated.__doc__, "A docstring.")
def test_wrong_timeout(self):
"""Process Spawn TypeError is raised if timeout is not number."""
with self.assertRaises(TypeError):
@asynchronous.process(timeout='Foo', context=mp_context)
def function():
return
def test_class_method(self):
"""Process Spawn decorated classmethods."""
async def test0():
return await ProcessAsynchronousObj.clsmethod()
self.assertEqual(asyncio.run(test0()), 0)
async def test1():
return await ProcessAsynchronousSub1.clsmethod()
self.assertEqual(asyncio.run(test1()), 1)
async def test2():
return await ProcessAsynchronousSub2.clsmethod()
self.assertEqual(asyncio.run(test2()), 2)
def test_instance_method(self):
"""Process Spawn decorated instance methods."""
async def test0():
return await self.asynchronousobj.instmethod()
self.assertEqual(asyncio.run(test0()), 1)
async def test1():
return await self.asynchronousobj1.instmethod()
self.assertEqual(asyncio.run(test1()), 2)
async def test2():
return await self.asynchronousobj2.instmethod()
self.assertEqual(asyncio.run(test2()), 3)
def test_not_decorated_results(self):
"""Process Spawn results are produced."""
non_decorated = asynchronous.process(not_decorated, context=mp_context)
async def test():
return await non_decorated(1, 1)
self.assertEqual(asyncio.run(test()), 2)
def test_decorated_results(self):
"""Process Spawn results are produced."""
async def test():
return await decorated(1, 1)
self.assertEqual(asyncio.run(test()), 2)
def test_decorated_results_callback(self):
"""Process Spawn results are forwarded to the callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = decorated(1, 1)
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertEqual(self.results, 2)
def test_error_decorated(self):
"""Process Spawn errors are raised by future.result."""
async def test():
return await error_decorated()
with self.assertRaises(RuntimeError):
asyncio.run(test())
def test_error_returned(self):
"""Process Spawn errors are returned by future.result."""
async def test():
return await error_returned()
self.assertIsInstance(asyncio.run(test()), RuntimeError)
def test_error_decorated_callback(self):
"""Process Spawn errors are forwarded to callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = error_decorated()
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertTrue(isinstance(self.exception, RuntimeError),
msg=str(self.exception))
def test_pickling_error_decorated(self):
"""Process Spawn pickling errors are raised by future.result."""
async def test():
return await pickling_error_decorated()
with self.assertRaises((pickle.PicklingError, TypeError)):
asyncio.run(test())
def test_frozen_error_decorated(self):
"""Process Spawn frozen errors are raised by future.result."""
async def test():
return await frozen_error_decorated()
with self.assertRaises(FrozenError):
asyncio.run(test())
def test_timeout_decorated(self):
"""Process Spawn raises TimeoutError if so."""
async def test():
return await long_decorated()
with self.assertRaises(TimeoutError):
asyncio.run(test())
def test_timeout_decorated_callback(self):
"""Process Spawn TimeoutError is forwarded to callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = long_decorated()
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertTrue(isinstance(self.exception, TimeoutError),
msg=str(self.exception))
def test_decorated_dead_process(self):
"""Process Spawn ProcessExpired is raised if process dies."""
async def test():
return await critical_decorated()
with self.assertRaises(ProcessExpired) as exc_ctx:
asyncio.run(test())
self.assertEqual(exc_ctx.exception.exitcode, 123)
self.assertIsInstance(exc_ctx.exception.pid, int)
    def test_decorated_dead_process_callback(self):
"""Process Spawn ProcessExpired is forwarded to callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = critical_decorated()
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertTrue(isinstance(self.exception, ProcessExpired),
msg=str(self.exception))
def test_cancel_decorated(self):
"""Process Spawn raises CancelledError if future was cancelled."""
async def test():
future = decorated_cancel()
future.cancel()
return await future
with self.assertRaises(asyncio.CancelledError):
asyncio.run(test())
@unittest.skipIf(os.name == 'nt', "Test won't run on Windows.")
def test_decorated_ignoring_sigterm(self):
"""Process Spawn Asynchronous ignored SIGTERM signal are handled on Unix."""
async def test():
return await sigterm_decorated()
with self.assertRaises(TimeoutError):
asyncio.run(test())
def test_name_keyword_argument(self):
"""name keyword can be passed to a decorated function process without name"""
async def test():
return await name_keyword_argument()
self.assertEqual(asyncio.run(test()), "function_kwarg")
def test_name_keyword_decorated(self):
"""
Check that a simple use case of the name keyword passed to the decorator works
"""
async def test():
return await name_keyword_decorated()
self.assertEqual(asyncio.run(test()), "asynchronous_process_name")
def test_daemon_keyword_decorated(self):
"""Daemon keyword can be passed to a decorated function and spawns correctly."""
async def test():
return await daemon_keyword_decorated()
self.assertEqual(asyncio.run(test()), False)
def test_callable_objects(self):
"""Callable objects are correctly handled."""
callable_object = asynchronous.process(context=mp_context)(CallableClass())
async def test():
return await callable_object(1)
self.assertEqual(asyncio.run(test()), 1)
def test_pool_decorated(self):
"""Process Spawn results are produced."""
async def test():
return await pool_decorated(1, 1)
self.assertEqual(asyncio.run(test()), asyncio.run(test()))
pebble-5.1.1/test/test_asynchronous_thread.py
import asyncio
import unittest
import threading
import dataclasses
from pebble import ThreadPool
from pebble import asynchronous
def not_decorated(argument, keyword_argument=0):
return argument + keyword_argument
@asynchronous.thread
def decorated(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
@asynchronous.thread
def error_decorated():
raise RuntimeError("BOOM!")
@asynchronous.thread
def error_returned():
return RuntimeError("BOOM!")
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
@asynchronous.thread()
def frozen_error_decorated():
raise FrozenError()
@asynchronous.thread()
def name_keyword_argument(name='function_kwarg'):
return name
@asynchronous.thread(name='asynchronous_thread_name')
def name_keyword_decorated():
return threading.current_thread().name
@asynchronous.thread(name='decorator_kwarg')
def name_keyword_decorated_and_argument(name='bar'):
return (threading.current_thread().name, name)
@asynchronous.thread(daemon=False)
def daemon_keyword_decorated():
return threading.current_thread().daemon
@asynchronous.thread(pool=ThreadPool(1))
def pool_decorated(_argument, _keyword_argument=0):
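    # The single-worker thread pool serves every call from the same thread,
    # so repeated invocations report the same thread ident.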
return threading.current_thread().ident
class ThreadAsynchronousObj:
a = 0
def __init__(self):
self.b = 1
@classmethod
@asynchronous.thread
def clsmethod(cls):
return cls.a
@asynchronous.thread
def instmethod(self):
return self.b
@staticmethod
@asynchronous.thread
def stcmethod():
return 2
class TestThreadAsynchronous(unittest.TestCase):
def setUp(self):
self.results = 0
self.exception = None
self.event = None
self.asynchronousobj = ThreadAsynchronousObj()
def callback(self, future):
try:
self.results = future.result()
        except RuntimeError as error:
self.exception = error
finally:
self.event.set()
def test_docstring(self):
"""Thread docstring is preserved."""
self.assertEqual(decorated.__doc__, "A docstring.")
def test_class_method(self):
"""Thread decorated classmethods."""
async def test():
return await ThreadAsynchronousObj.clsmethod()
self.assertEqual(asyncio.run(test()), 0)
def test_instance_method(self):
"""Thread decorated instance methods."""
async def test():
return await self.asynchronousobj.instmethod()
self.assertEqual(asyncio.run(test()), 1)
def test_static_method(self):
"""Thread decorated static methods ( startmethod only)."""
async def test():
return await self.asynchronousobj.stcmethod()
self.assertEqual(asyncio.run(test()), 2)
def test_not_decorated_results(self):
"""Process Fork results are produced."""
non_decorated = asynchronous.thread(not_decorated)
async def test():
return await non_decorated(1, 1)
self.assertEqual(asyncio.run(test()), 2)
def test_decorated_results(self):
"""Thread results are produced."""
async def test():
return await decorated(1, 1)
self.assertEqual(asyncio.run(test()), 2)
def test_decorated_results_callback(self):
"""Thread results are forwarded to the callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = decorated(1, 1)
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertEqual(self.results, 2)
def test_error_decorated(self):
"""Thread errors are raised by future.result."""
async def test():
return await error_decorated()
with self.assertRaises(RuntimeError):
asyncio.run(test())
def test_error_returned(self):
"""Thread errors are raised by future.result."""
async def test():
return await error_returned()
self.assertIsInstance(asyncio.run(test()), RuntimeError)
def test_error_decorated_callback(self):
"""Thread errors are forwarded to callback."""
async def test():
self.event = asyncio.Event()
self.event.clear()
future = error_decorated()
future.add_done_callback(self.callback)
await self.event.wait()
asyncio.run(test())
self.assertTrue(isinstance(self.exception, RuntimeError),
msg=str(self.exception))
def test_frozen_error_decorated(self):
"""Thread frozen errors are raised by future.result."""
async def test():
return await frozen_error_decorated()
with self.assertRaises(FrozenError):
asyncio.run(test())
def test_name_keyword_argument(self):
"""name keyword can be passed to a decorated function process without name """
async def test():
return await name_keyword_argument()
self.assertEqual(asyncio.run(test()), "function_kwarg")
def test_name_keyword_decorated(self):
"""
Check that a simple use case of the name keyword passed to the decorator works
"""
async def test():
return await name_keyword_decorated()
self.assertEqual(asyncio.run(test()), "asynchronous_thread_name")
def test_name_keyword_decorated_result(self):
"""name kwarg is handled without modifying the function kwargs"""
async def test():
return await name_keyword_decorated_and_argument(
name="function_kwarg")
dec_out, fn_out = asyncio.run(test())
self.assertEqual(dec_out, "decorator_kwarg")
self.assertEqual(fn_out, "function_kwarg")
def test_daemon_keyword_decorated(self):
"""Daemon keyword can be passed to a decorated function and spawns correctly."""
async def test():
return await daemon_keyword_decorated()
self.assertEqual(asyncio.run(test()), False)
def test_pool_decorated(self):
"""Thread pool decorated function."""
async def test():
return await pool_decorated(1, 1)
self.assertEqual(asyncio.run(test()), asyncio.run(test()))
pebble-5.1.1/test/test_concurrent_process_fork.py
import os
import time
import pickle
import signal
import unittest
import threading
import dataclasses
import multiprocessing
from concurrent.futures import CancelledError, TimeoutError
from pebble import concurrent, ProcessExpired, ProcessPool
# set start method
supported = False
mp_context = None
methods = multiprocessing.get_all_start_methods()
if 'fork' in methods:
try:
mp_context = multiprocessing.get_context('fork')
if mp_context.get_start_method() == 'fork':
supported = True
else:
raise Exception(mp_context.get_start_method())
except RuntimeError: # child process
pass
else:
mp_context = multiprocessing.get_context()
def not_decorated(argument, keyword_argument=0):
return argument + keyword_argument
@concurrent.process(context=mp_context)
def decorated(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
@concurrent.process(context=mp_context)
def error_decorated():
raise RuntimeError("BOOM!")
@concurrent.process(context=mp_context)
def error_returned():
return RuntimeError("BOOM!")
@concurrent.process(context=mp_context)
def pickling_error_decorated():
event = threading.Event()
return event
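# Frozen dataclass exceptions reject attribute assignment, which makes them
# an edge case for re-raising errors across the process boundary.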
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
@concurrent.process(context=mp_context)
def frozen_error_decorated():
raise FrozenError()
@concurrent.process(context=mp_context)
def critical_decorated():
os._exit(123)
@concurrent.process(context=mp_context)
def decorated_cancel():
time.sleep(10)
@concurrent.process(timeout=0.1, context=mp_context)
def long_decorated():
time.sleep(10)
@concurrent.process(timeout=0.1, context=mp_context)
def sigterm_decorated():
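    # Ignore SIGTERM so terminating the worker politely has no effect and
    # the 0.1s timeout must still be enforced.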
signal.signal(signal.SIGTERM, signal.SIG_IGN)
time.sleep(10)
@concurrent.process(context=mp_context)
def name_keyword_argument(name='function_kwarg'):
return name
@concurrent.process(name='concurrent_process_name', context=mp_context)
def name_keyword_decorated():
return multiprocessing.current_process().name
@concurrent.process(name='decorator_kwarg', context=mp_context)
def name_keyword_decorated_and_argument(name='bar'):
return (multiprocessing.current_process().name, name)
@concurrent.process(daemon=False, context=mp_context)
def daemon_keyword_decorated():
return multiprocessing.current_process().daemon
@concurrent.process(pool=ProcessPool(1, context=mp_context))
def pool_decorated(_argument, _keyword_argument=0):
return multiprocessing.current_process().pid
class ProcessConcurrentObj:
a = 0
def __init__(self):
self.b = 1
@classmethod
@concurrent.process(context=mp_context)
def clsmethod(cls):
return cls.a
@concurrent.process(context=mp_context)
def instmethod(self):
return self.b
@staticmethod
@concurrent.process(context=mp_context)
def stcmethod():
return 2
class ProcessConcurrentSub1(ProcessConcurrentObj):
@classmethod
@concurrent.process(context=mp_context)
def clsmethod(cls):
return cls.a + 1
@concurrent.process(context=mp_context)
def instmethod(self):
return self.b + 1
@staticmethod
@concurrent.process(context=mp_context)
def stcmethod():
return 2 + 1
class ProcessConcurrentSub2(ProcessConcurrentObj):
@classmethod
@concurrent.process(context=mp_context)
def clsmethod(cls):
return cls.a + 2
@concurrent.process(context=mp_context)
def instmethod(self):
return self.b + 2
@staticmethod
@concurrent.process(context=mp_context)
def stcmethod():
return 2 + 2
class CallableClass:
def __call__(self, argument, keyword_argument=0):
return argument + keyword_argument
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessConcurrent(unittest.TestCase):
def setUp(self):
self.results = 0
self.exception = None
self.event = threading.Event()
self.event.clear()
self.concurrentobj = ProcessConcurrentObj()
self.concurrentobj1 = ProcessConcurrentSub1()
self.concurrentobj2 = ProcessConcurrentSub2()
def callback(self, future):
try:
self.results = future.result()
except (ProcessExpired, RuntimeError, TimeoutError) as error:
self.exception = error
finally:
self.event.set()
def test_docstring(self):
"""Process Fork docstring is preserved."""
self.assertEqual(decorated.__doc__, "A docstring.")
def test_wrong_timeout(self):
"""Process Fork TypeError is raised if timeout is not number."""
with self.assertRaises(TypeError):
@concurrent.process(timeout='Foo', context=mp_context)
def function():
return
def test_class_method(self):
"""Process Fork decorated classmethods."""
future = ProcessConcurrentObj.clsmethod()
self.assertEqual(future.result(), 0)
future = ProcessConcurrentSub1.clsmethod()
self.assertEqual(future.result(), 1)
future = ProcessConcurrentSub2.clsmethod()
self.assertEqual(future.result(), 2)
def test_instance_method(self):
"""Process Fork decorated instance methods."""
future = self.concurrentobj.instmethod()
self.assertEqual(future.result(), 1)
future = self.concurrentobj1.instmethod()
self.assertEqual(future.result(), 2)
future = self.concurrentobj2.instmethod()
self.assertEqual(future.result(), 3)
def test_static_method(self):
"""Process Fork decorated static methods (Fork startmethod only)."""
future = self.concurrentobj.stcmethod()
self.assertEqual(future.result(), 2)
future = self.concurrentobj1.stcmethod()
self.assertEqual(future.result(), 3)
future = self.concurrentobj2.stcmethod()
self.assertEqual(future.result(), 4)
def test_not_decorated_results(self):
"""Process Fork results are produced."""
non_decorated = concurrent.process(not_decorated, context=mp_context)
future = non_decorated(1, 1)
self.assertEqual(future.result(), 2)
def test_decorated_results(self):
"""Process Fork results are produced."""
future = decorated(1, 1)
self.assertEqual(future.result(), 2)
def test_decorated_results_callback(self):
"""Process Fork results are forwarded to the callback."""
future = decorated(1, 1)
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertEqual(self.results, 2)
def test_error_decorated(self):
"""Process Fork errors are raised by future.result."""
future = error_decorated()
with self.assertRaises(RuntimeError):
future.result()
def test_error_returned(self):
"""Process Fork returned errors are returned by future.result."""
future = error_returned()
self.assertIsInstance(future.result(), RuntimeError)
def test_error_decorated_callback(self):
"""Process Fork errors are forwarded to callback."""
future = error_decorated()
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertTrue(isinstance(self.exception, RuntimeError),
msg=str(self.exception))
def test_pickling_error_decorated(self):
"""Process Fork pickling errors are raised by future.result."""
future = pickling_error_decorated()
with self.assertRaises((pickle.PicklingError, TypeError)):
future.result()
def test_frozen_error_decorated(self):
"""Process Fork frozen errors are raised by future.result."""
future = frozen_error_decorated()
with self.assertRaises(FrozenError):
future.result()
def test_timeout_decorated(self):
"""Process Fork raises TimeoutError if so."""
future = long_decorated()
with self.assertRaises(TimeoutError):
future.result()
def test_timeout_decorated_callback(self):
"""Process Fork TimeoutError is forwarded to callback."""
future = long_decorated()
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertTrue(isinstance(self.exception, TimeoutError),
msg=str(self.exception))
def test_decorated_dead_process(self):
"""Process Fork ProcessExpired is raised if process dies."""
future = critical_decorated()
with self.assertRaises(ProcessExpired) as exc_ctx:
future.result()
self.assertEqual(exc_ctx.exception.exitcode, 123)
self.assertIsInstance(exc_ctx.exception.pid, int)
    def test_decorated_dead_process_callback(self):
"""Process Fork ProcessExpired is forwarded to callback."""
future = critical_decorated()
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertTrue(isinstance(self.exception, ProcessExpired),
msg=str(self.exception))
def test_cancel_decorated(self):
"""Process Fork raises CancelledError if future was cancelled."""
future = decorated_cancel()
future.cancel()
self.assertRaises(CancelledError, future.result)
@unittest.skipIf(os.name == 'nt', "Test won't run on Windows.")
def test_decorated_ignoring_sigterm(self):
"""Process Fork Concurrent ignored SIGTERM signal are handled on Unix."""
future = sigterm_decorated()
with self.assertRaises(TimeoutError):
future.result()
def test_name_keyword_argument(self):
"""name keyword can be passed to a decorated function process without name"""
f = name_keyword_argument()
fn_out = f.result()
self.assertEqual(fn_out, "function_kwarg")
def test_name_keyword_decorated(self):
"""
Check that a simple use case of the name keyword passed to the decorator works
"""
f = name_keyword_decorated()
dec_out = f.result()
self.assertEqual(dec_out, "concurrent_process_name")
    def test_name_keyword_decorated_result_collision(self):
"""name kwarg is handled without modifying the function kwargs"""
f = name_keyword_decorated_and_argument(name="function_kwarg")
dec_out, fn_out = f.result()
self.assertEqual(dec_out, "decorator_kwarg")
self.assertEqual(fn_out, "function_kwarg")
def test_daemon_keyword_decorated(self):
"""Daemon keyword can be passed to a decorated function and spawns correctly."""
f = daemon_keyword_decorated()
dec_out = f.result()
self.assertEqual(dec_out, False)
def test_callable_objects(self):
"""Callable objects are correctly handled."""
callable_object = concurrent.process(context=mp_context)(CallableClass())
f = callable_object(1)
self.assertEqual(f.result(), 1)
def test_pool_decorated(self):
"""Process Fork pool decorated function."""
future1 = pool_decorated(1, 1)
future2 = pool_decorated(1, 1)
self.assertEqual(future1.result(), future2.result())
pebble-5.1.1/test/test_concurrent_process_forkserver.py
import os
import time
import pickle
import signal
import unittest
import threading
import dataclasses
import multiprocessing
from concurrent.futures import CancelledError, TimeoutError
from pebble import concurrent, ProcessExpired, ProcessPool
# set start method
supported = False
mp_context = None
methods = multiprocessing.get_all_start_methods()
if 'forkserver' in methods:
try:
mp_context = multiprocessing.get_context('forkserver')
if mp_context.get_start_method() == 'forkserver':
supported = True
else:
raise Exception(mp_context.get_start_method())
except RuntimeError: # child process
pass
else:
mp_context = multiprocessing.get_context()
def not_decorated(argument, keyword_argument=0):
return argument + keyword_argument
@concurrent.process(context=mp_context)
def decorated(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
@concurrent.process(context=mp_context)
def error_decorated():
raise RuntimeError("BOOM!")
@concurrent.process(context=mp_context)
def error_returned():
return RuntimeError("BOOM!")
@concurrent.process(context=mp_context)
def pickling_error_decorated():
event = threading.Event()
return event
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
@concurrent.process(context=mp_context)
def frozen_error_decorated():
raise FrozenError()
@concurrent.process(context=mp_context)
def critical_decorated():
os._exit(123)
@concurrent.process(context=mp_context)
def decorated_cancel():
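    # Sleep long enough for the test to cancel the future before the
    # worker can produce a result.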
time.sleep(10)
@concurrent.process(timeout=0.1, context=mp_context)
def long_decorated():
time.sleep(10)
@concurrent.process(timeout=0.1, context=mp_context)
def sigterm_decorated():
signal.signal(signal.SIGTERM, signal.SIG_IGN)
time.sleep(10)
@concurrent.process(daemon=False, context=mp_context)
def daemon_keyword_decorated():
return multiprocessing.current_process().daemon
@concurrent.process(pool=ProcessPool(1, context=mp_context))
def pool_decorated(_argument, _keyword_argument=0):
return multiprocessing.current_process().pid
class ProcessConcurrentObj:
a = 0
def __init__(self):
self.b = 1
@classmethod
@concurrent.process(context=mp_context)
def clsmethod(cls):
return cls.a
@concurrent.process(context=mp_context)
def instmethod(self):
return self.b
class ProcessConcurrentSub1(ProcessConcurrentObj):
@classmethod
@concurrent.process(context=mp_context)
def clsmethod(cls):
return cls.a + 1
@concurrent.process(context=mp_context)
def instmethod(self):
return self.b + 1
class ProcessConcurrentSub2(ProcessConcurrentObj):
@classmethod
@concurrent.process(context=mp_context)
def clsmethod(cls):
return cls.a + 2
@concurrent.process(context=mp_context)
def instmethod(self):
return self.b + 2
class CallableClass:
def __call__(self, argument, keyword_argument=0):
return argument + keyword_argument
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessConcurrent(unittest.TestCase):
def setUp(self):
self.results = 0
self.exception = None
self.event = threading.Event()
self.event.clear()
self.concurrentobj = ProcessConcurrentObj()
self.concurrentobj1 = ProcessConcurrentSub1()
self.concurrentobj2 = ProcessConcurrentSub2()
def callback(self, future):
try:
self.results = future.result()
except (ProcessExpired, RuntimeError, TimeoutError) as error:
self.exception = error
finally:
self.event.set()
def test_docstring(self):
"""Process Forkserver docstring is preserved."""
self.assertEqual(decorated.__doc__, "A docstring.")
def test_wrong_timeout(self):
"""Process Forkserver TypeError is raised if timeout is not number."""
with self.assertRaises(TypeError):
@concurrent.process(timeout='Foo', context=mp_context)
def function():
return
def test_class_method(self):
"""Process Forkserver decorated classmethods."""
future = ProcessConcurrentObj.clsmethod()
self.assertEqual(future.result(), 0)
future = ProcessConcurrentSub1.clsmethod()
self.assertEqual(future.result(), 1)
future = ProcessConcurrentSub2.clsmethod()
self.assertEqual(future.result(), 2)
def test_instance_method(self):
"""Process Forkserver decorated instance methods."""
future = self.concurrentobj.instmethod()
self.assertEqual(future.result(), 1)
future = self.concurrentobj1.instmethod()
self.assertEqual(future.result(), 2)
future = self.concurrentobj2.instmethod()
self.assertEqual(future.result(), 3)
def test_not_decorated_results(self):
"""Process Forkserver results are produced."""
non_decorated = concurrent.process(not_decorated, context=mp_context)
future = non_decorated(1, 1)
self.assertEqual(future.result(), 2)
def test_decorated_results(self):
"""Process Forkserver results are produced."""
future = decorated(1, 1)
self.assertEqual(future.result(), 2)
def test_decorated_results_callback(self):
"""Process Forkserver results are forwarded to the callback."""
future = decorated(1, 1)
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertEqual(self.results, 2)
def test_error_decorated(self):
"""Process Forkserver errors are raised by future.result."""
future = error_decorated()
with self.assertRaises(RuntimeError):
future.result()
def test_error_returned(self):
"""Process Forkserver returned errors are returned by future.result."""
future = error_returned()
self.assertIsInstance(future.result(), RuntimeError)
def test_error_decorated_callback(self):
"""Process Forkserver errors are forwarded to callback."""
future = error_decorated()
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertTrue(isinstance(self.exception, RuntimeError),
msg=str(self.exception))
def test_pickling_error_decorated(self):
"""Process Forkserver pickling errors are raised by future.result."""
future = pickling_error_decorated()
with self.assertRaises((pickle.PicklingError, TypeError)):
future.result()
def test_frozen_error_decorated(self):
"""Process Fork frozen errors are raised by future.result."""
future = frozen_error_decorated()
with self.assertRaises(FrozenError):
future.result()
def test_timeout_decorated(self):
"""Process Forkserver raises TimeoutError if so."""
future = long_decorated()
with self.assertRaises(TimeoutError):
future.result()
def test_timeout_decorated_callback(self):
"""Process Forkserver TimeoutError is forwarded to callback."""
future = long_decorated()
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertTrue(isinstance(self.exception, TimeoutError),
msg=str(self.exception))
def test_decorated_dead_process(self):
"""Process Forkserver ProcessExpired is raised if process dies."""
future = critical_decorated()
with self.assertRaises(ProcessExpired) as exc_ctx:
future.result()
self.assertEqual(exc_ctx.exception.exitcode, 123)
self.assertIsInstance(exc_ctx.exception.pid, int)
    def test_decorated_dead_process_callback(self):
"""Process Forkserver ProcessExpired is forwarded to callback."""
future = critical_decorated()
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertTrue(isinstance(self.exception, ProcessExpired),
msg=str(self.exception))
def test_cancel_decorated(self):
"""Process Forkserver raises CancelledError if future was cancelled."""
future = decorated_cancel()
future.cancel()
self.assertRaises(CancelledError, future.result)
@unittest.skipIf(os.name == 'nt', "Test won't run on Windows.")
def test_decorated_ignoring_sigterm(self):
"""Process Forkserver Concurrent ignored SIGTERM signal are handled on Unix."""
future = sigterm_decorated()
with self.assertRaises(TimeoutError):
future.result()
def test_daemon_keyword_decorated(self):
"""Daemon keyword can be passed to a decorated function and spawns correctly."""
f = daemon_keyword_decorated()
dec_out = f.result()
self.assertEqual(dec_out, False)
def test_callable_objects(self):
"""Callable objects are correctly handled."""
callable_object = concurrent.process(context=mp_context)(CallableClass())
f = callable_object(1)
self.assertEqual(f.result(), 1)
def test_pool_decorated(self):
"""Process Forkserver pool decorated function."""
future1 = pool_decorated(1, 1)
future2 = pool_decorated(1, 1)
self.assertEqual(future1.result(), future2.result())
pebble-5.1.1/test/test_concurrent_process_spawn.py
import os
import time
import pickle
import signal
import unittest
import threading
import dataclasses
import multiprocessing
from concurrent.futures import CancelledError, TimeoutError
from pebble import concurrent, ProcessExpired, ProcessPool
# set start method
supported = False
mp_context = None
methods = multiprocessing.get_all_start_methods()
if 'spawn' in methods:
try:
mp_context = multiprocessing.get_context('spawn')
if mp_context.get_start_method() == 'spawn':
supported = True
else:
raise Exception(mp_context.get_start_method())
except RuntimeError: # child process
pass
else:
mp_context = multiprocessing.get_context()
def not_decorated(argument, keyword_argument=0):
return argument + keyword_argument
@concurrent.process(context=mp_context)
def decorated(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
@concurrent.process(context=mp_context)
def error_decorated():
raise RuntimeError("BOOM!")
@concurrent.process(context=mp_context)
def error_returned():
return RuntimeError("BOOM!")
@concurrent.process(context=mp_context)
def pickling_error_decorated():
event = threading.Event()
return event
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
@concurrent.process(context=mp_context)
def frozen_error_decorated():
raise FrozenError()
@concurrent.process(context=mp_context)
def critical_decorated():
os._exit(123)
@concurrent.process(context=mp_context)
def decorated_cancel():
time.sleep(10)
@concurrent.process(timeout=0.1, context=mp_context)
def long_decorated():
time.sleep(10)
@concurrent.process(timeout=0.1, context=mp_context)
def sigterm_decorated():
signal.signal(signal.SIGTERM, signal.SIG_IGN)
time.sleep(10)
@concurrent.process(daemon=False, context=mp_context)
def daemon_keyword_decorated():
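    # Report the worker's daemon flag; the decorator forwards daemon=False
    # to the spawned process.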
return multiprocessing.current_process().daemon
@concurrent.process(pool=ProcessPool(1, context=mp_context))
def pool_decorated(_argument, _keyword_argument=0):
return multiprocessing.current_process().pid
class ProcessConcurrentObj:
a = 0
def __init__(self):
self.b = 1
@classmethod
@concurrent.process(context=mp_context)
def clsmethod(cls):
return cls.a
@concurrent.process(context=mp_context)
def instmethod(self):
return self.b
class ProcessConcurrentSub1(ProcessConcurrentObj):
@classmethod
@concurrent.process(context=mp_context)
def clsmethod(cls):
return cls.a + 1
@concurrent.process(context=mp_context)
def instmethod(self):
return self.b + 1
class ProcessConcurrentSub2(ProcessConcurrentObj):
@classmethod
@concurrent.process(context=mp_context)
def clsmethod(cls):
return cls.a + 2
@concurrent.process(context=mp_context)
def instmethod(self):
return self.b + 2
class CallableClass:
def __call__(self, argument, keyword_argument=0):
return argument + keyword_argument
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessConcurrent(unittest.TestCase):
def setUp(self):
self.results = 0
self.exception = None
self.event = threading.Event()
self.event.clear()
self.concurrentobj = ProcessConcurrentObj()
self.concurrentobj1 = ProcessConcurrentSub1()
self.concurrentobj2 = ProcessConcurrentSub2()
def callback(self, future):
try:
self.results = future.result()
except (ProcessExpired, RuntimeError, TimeoutError) as error:
self.exception = error
finally:
self.event.set()
def test_docstring(self):
"""Process Spawn docstring is preserved."""
self.assertEqual(decorated.__doc__, "A docstring.")
def test_wrong_timeout(self):
"""Process Spawn TypeError is raised if timeout is not number."""
with self.assertRaises(TypeError):
@concurrent.process(timeout='Foo', context=mp_context)
def function():
return
def test_class_method(self):
"""Process Spawn decorated classmethods."""
future = ProcessConcurrentObj.clsmethod()
self.assertEqual(future.result(), 0)
future = ProcessConcurrentSub1.clsmethod()
self.assertEqual(future.result(), 1)
future = ProcessConcurrentSub2.clsmethod()
self.assertEqual(future.result(), 2)
def test_instance_method(self):
"""Process Spawn decorated instance methods."""
future = self.concurrentobj.instmethod()
self.assertEqual(future.result(), 1)
future = self.concurrentobj1.instmethod()
self.assertEqual(future.result(), 2)
future = self.concurrentobj2.instmethod()
self.assertEqual(future.result(), 3)
def test_not_decorated_results(self):
"""Process Spawn results are produced."""
non_decorated = concurrent.process(not_decorated, context=mp_context)
future = non_decorated(1, 1)
self.assertEqual(future.result(), 2)
def test_decorated_results(self):
"""Process Spawn results are produced."""
future = decorated(1, 1)
self.assertEqual(future.result(), 2)
def test_decorated_results_callback(self):
"""Process Spawn results are forwarded to the callback."""
future = decorated(1, 1)
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertEqual(self.results, 2)
def test_error_decorated(self):
"""Process Spawn errors are raised by future.result."""
future = error_decorated()
with self.assertRaises(RuntimeError):
future.result()
def test_error_returned(self):
"""Process Spawn returned errors are returned by future.result."""
future = error_returned()
self.assertIsInstance(future.result(), RuntimeError)
def test_error_decorated_callback(self):
"""Process Spawn errors are forwarded to callback."""
future = error_decorated()
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertTrue(isinstance(self.exception, RuntimeError),
msg=str(self.exception))
def test_pickling_error_decorated(self):
"""Process Spawn pickling errors are raised by future.result."""
future = pickling_error_decorated()
with self.assertRaises((pickle.PicklingError, TypeError)):
future.result()
def test_timeout_decorated(self):
"""Process Spawn raises TimeoutError if so."""
future = long_decorated()
with self.assertRaises(TimeoutError):
future.result()
def test_timeout_decorated_callback(self):
"""Process Spawn TimeoutError is forwarded to callback."""
future = long_decorated()
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertTrue(isinstance(self.exception, TimeoutError),
msg=str(self.exception))
def test_decorated_dead_process(self):
"""Process Spawn ProcessExpired is raised if process dies."""
future = critical_decorated()
with self.assertRaises(ProcessExpired) as exc_ctx:
future.result()
self.assertEqual(exc_ctx.exception.exitcode, 123)
self.assertIsInstance(exc_ctx.exception.pid, int)
    def test_decorated_dead_process_callback(self):
"""Process Spawn ProcessExpired is forwarded to callback."""
future = critical_decorated()
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertTrue(isinstance(self.exception, ProcessExpired),
msg=str(self.exception))
def test_cancel_decorated(self):
"""Process Spawn raises CancelledError if future was cancelled."""
future = decorated_cancel()
future.cancel()
self.assertRaises(CancelledError, future.result)
@unittest.skipIf(os.name == 'nt', "Test won't run on Windows.")
def test_decorated_ignoring_sigterm(self):
"""Process Spawn Concurrent ignored SIGTERM signal are handled on Unix."""
future = sigterm_decorated()
with self.assertRaises(TimeoutError):
future.result()
def test_daemon_keyword_decorated(self):
"""Daemon keyword can be passed to a decorated function and spawns correctly."""
f = daemon_keyword_decorated()
dec_out = f.result()
self.assertEqual(dec_out, False)
def test_callable_objects(self):
"""Callable objects are correctly handled."""
callable_object = concurrent.process(context=mp_context)(CallableClass())
f = callable_object(1)
self.assertEqual(f.result(), 1)
def test_pool_decorated(self):
"""Process Spawn pool decorated function."""
future1 = pool_decorated(1, 1)
future2 = pool_decorated(1, 1)
self.assertEqual(future1.result(), future2.result())
pebble-5.1.1/test/test_concurrent_thread.py
import unittest
import threading
from pebble import concurrent
from pebble import ThreadPool
def not_decorated(argument, keyword_argument=0):
return argument + keyword_argument
@concurrent.thread
def decorated(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
@concurrent.thread
def error_decorated():
raise RuntimeError("BOOM!")
@concurrent.thread
def error_returned():
return RuntimeError("BOOM!")
@concurrent.thread()
def name_keyword_argument(name='function_kwarg'):
return name
@concurrent.thread(name='concurrent_thread_name')
def name_keyword_decorated():
return threading.current_thread().name
@concurrent.thread(name='decorator_kwarg')
def name_keyword_decorated_and_argument(name='bar'):
return (threading.current_thread().name, name)
@concurrent.thread(daemon=False)
def daemon_keyword_decorated():
return threading.current_thread().daemon
@concurrent.thread(pool=ThreadPool(1))
def pool_decorated(_argument, _keyword_argument=0):
return threading.current_thread().ident
class ThreadConcurrentObj:
a = 0
def __init__(self):
self.b = 1
@classmethod
@concurrent.thread
def clsmethod(cls):
return cls.a
@concurrent.thread
def instmethod(self):
return self.b
@staticmethod
@concurrent.thread
def stcmethod():
return 2
class TestThreadConcurrent(unittest.TestCase):
def setUp(self):
self.results = 0
self.exception = None
self.event = threading.Event()
self.event.clear()
self.concurrentobj = ThreadConcurrentObj()
def callback(self, future):
try:
self.results = future.result()
        except RuntimeError as error:
self.exception = error
finally:
self.event.set()
def test_docstring(self):
"""Thread docstring is preserved."""
self.assertEqual(decorated.__doc__, "A docstring.")
def test_class_method(self):
"""Thread decorated classmethods."""
future = ThreadConcurrentObj.clsmethod()
self.assertEqual(future.result(), 0)
def test_instance_method(self):
"""Thread decorated instance methods."""
future = self.concurrentobj.instmethod()
self.assertEqual(future.result(), 1)
def test_static_method(self):
"""Thread decorated static methods ( startmethod only)."""
future = self.concurrentobj.stcmethod()
self.assertEqual(future.result(), 2)
def test_not_decorated_results(self):
"""Process Fork results are produced."""
non_decorated = concurrent.thread(not_decorated)
future = non_decorated(1, 1)
self.assertEqual(future.result(), 2)
def test_decorated_results(self):
"""Thread results are produced."""
future = decorated(1, 1)
self.assertEqual(future.result(), 2)
def test_decorated_results_callback(self):
"""Thread results are forwarded to the callback."""
future = decorated(1, 1)
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertEqual(self.results, 2)
def test_error_decorated(self):
"""Thread errors are raised by future.result."""
future = error_decorated()
with self.assertRaises(RuntimeError):
future.result()
def test_error_returned(self):
"""Thread returned errors are returned by future.result."""
future = error_returned()
self.assertIsInstance(future.result(), RuntimeError)
def test_error_decorated_callback(self):
"""Thread errors are forwarded to callback."""
future = error_decorated()
future.add_done_callback(self.callback)
self.event.wait(timeout=1)
self.assertTrue(isinstance(self.exception, RuntimeError),
msg=str(self.exception))
def test_name_keyword_argument(self):
"""name keyword can be passed to a decorated function process without name """
f = name_keyword_argument()
fn_out = f.result()
self.assertEqual(fn_out, "function_kwarg")
def test_name_keyword_decorated(self):
"""
Check that a simple use case of the name keyword passed to the decorator works
"""
f = name_keyword_decorated()
dec_out = f.result()
self.assertEqual(dec_out, "concurrent_thread_name")
def test_name_keyword_decorated_result(self):
"""name kwarg is handled without modifying the function kwargs"""
f = name_keyword_decorated_and_argument(name="function_kwarg")
dec_out, fn_out = f.result()
self.assertEqual(dec_out, "decorator_kwarg")
self.assertEqual(fn_out, "function_kwarg")
def test_daemon_keyword_decorated(self):
"""Daemon keyword can be passed to a decorated function and spawns correctly."""
f = daemon_keyword_decorated()
dec_out = f.result()
self.assertEqual(dec_out, False)
def test_pool_decorated(self):
"""Thread pool decorated function."""
future1 = pool_decorated(1, 1)
future2 = pool_decorated(1, 1)
self.assertEqual(future1.result(), future2.result())
pebble-5.1.1/test/test_pebble.py
import os
import time
import signal
import unittest
import threading
from queue import Queue
from pebble import decorators
from pebble.common import launch_thread
from pebble import synchronized, sighandler
from pebble import waitforthreads, waitforqueues
results = 0
semaphore = threading.Semaphore()
@synchronized
def synchronized_function():
"""A docstring."""
return decorators._synchronized_lock.acquire(False)
@synchronized(semaphore)
def custom_synchronized_function():
"""A docstring."""
return semaphore.acquire(False)
try:
from signal import SIGALRM, SIGFPE, SIGIO
@sighandler(SIGALRM)
def signal_handler(signum, frame):
"""A docstring."""
global results
results = 1
@sighandler((SIGFPE, SIGIO))
def signals_handler(signum, frame):
pass
except ImportError:
pass
def thread_function(value):
time.sleep(value)
return value
def queue_function(queues, index, value):
time.sleep(value)
queues[index].put(value)
return value
def spurious_wakeup_function(value, lock):
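    # Split the sleep and acquire the caller's lock midway; the tests use
    # this to simulate a spurious wakeup while the thread is still running.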
value = value / 2
time.sleep(value)
lock.acquire()
time.sleep(value)
return value
class TestSynchronizedDecorator(unittest.TestCase):
def test_wrapper_decorator_docstring(self):
"""Synchronized docstring of the original function is preserved."""
self.assertEqual(synchronized_function.__doc__, "A docstring.")
    def test_synchronized_locked(self):
"""Synchronized Lock is acquired
during execution of decorated function."""
self.assertFalse(synchronized_function())
    def test_synchronized_released(self):
"""Synchronized Lock is released
during execution of decorated function."""
synchronized_function()
self.assertTrue(decorators._synchronized_lock.acquire(False))
decorators._synchronized_lock.release()
    def test_custom_synchronized_locked(self):
"""Synchronized semaphore is acquired
during execution of decorated function."""
self.assertFalse(custom_synchronized_function())
    def test_custom_synchronized_released(self):
        """Synchronized semaphore is released
        during execution of decorated function."""
custom_synchronized_function()
self.assertTrue(semaphore.acquire(False))
semaphore.release()
class TestSigHandler(unittest.TestCase):
def test_wrapper_decorator_docstring(self):
"""Sighandler docstring of the original function is preserved."""
if os.name != 'nt':
self.assertEqual(signal_handler.__doc__, "A docstring.")
def test_sighandler(self):
"""Sighandler installs SIGALRM."""
if os.name != 'nt':
self.assertEqual(signal.getsignal(signal.SIGALRM).__name__,
signal_handler.__name__)
def test_sighandler_multiple(self):
"""Sighandler installs SIGFPE and SIGIO."""
if os.name != 'nt':
self.assertEqual(signal.getsignal(signal.SIGFPE).__name__,
signals_handler.__name__)
self.assertEqual(signal.getsignal(signal.SIGIO).__name__,
signals_handler.__name__)
def test_sigalarm_sighandler(self):
"""Sighandler for SIGALARM works."""
if os.name != 'nt':
os.kill(os.getpid(), signal.SIGALRM)
time.sleep(0.1)
self.assertEqual(results, 1)
class TestWaitForThreads(unittest.TestCase):
def test_waitforthreads_single(self):
"""Waitforthreads waits for a single thread."""
thread = launch_thread(None, thread_function, True, 0.01)
self.assertEqual(list(waitforthreads([thread]))[0], thread)
def test_waitforthreads_multiple(self):
"""Waitforthreads waits for multiple threads."""
threads = []
for _ in range(5):
threads.append(launch_thread(None, thread_function, True, 0.01))
time.sleep(0.1)
self.assertEqual(list(waitforthreads(threads)), threads)
def test_waitforthreads_timeout(self):
"""Waitforthreads returns empty list if timeout."""
thread = launch_thread(None, thread_function, True, 0.1)
self.assertEqual(list(waitforthreads([thread], timeout=0.01)), [])
def test_waitforthreads_restore(self):
"""Waitforthreads get_ident is restored to original one."""
if hasattr(threading, 'get_ident'):
expected = threading.get_ident
else:
expected = threading._get_ident
thread = launch_thread(None, thread_function, True, 0)
time.sleep(0.01)
waitforthreads([thread])
if hasattr(threading, 'get_ident'):
self.assertEqual(threading.get_ident, expected)
else:
self.assertEqual(threading._get_ident, expected)
def test_waitforthreads_spurious(self):
"""Waitforthreads tolerates spurious wakeups."""
lock = threading.RLock()
thread = launch_thread(None, spurious_wakeup_function, True, 0.1, lock)
self.assertEqual(list(waitforthreads([thread])), [thread])
class TestWaitForQueues(unittest.TestCase):
def setUp(self):
self.queues = [Queue(), Queue(), Queue()]
def test_waitforqueues_single(self):
"""Waitforqueues waits for a single queue."""
launch_thread(None, queue_function, True, self.queues, 0, 0.01)
self.assertEqual(list(waitforqueues(self.queues))[0], self.queues[0])
def test_waitforqueues_multiple(self):
"""Waitforqueues waits for multiple queues."""
for index in range(3):
launch_thread(None, queue_function, True, self.queues, index, 0.01)
time.sleep(0.1)
self.assertEqual(list(waitforqueues(self.queues)), self.queues)
def test_waitforqueues_timeout(self):
"""Waitforqueues returns empty list if timeout."""
launch_thread(None, queue_function, True, self.queues, 0, 0.1)
self.assertEqual(list(waitforqueues(self.queues, timeout=0.01)), [])
def test_waitforqueues_restore(self):
"""Waitforqueues Queue object is restored to original one."""
expected = sorted(dir(self.queues[0]))
launch_thread(None, queue_function, True, self.queues, 0, 0)
waitforqueues(self.queues)
self.assertEqual(sorted(dir(self.queues[0])), expected)
pebble-5.1.1/test/test_process_pool_fork.py
import os
import sys
import time
import pickle
import signal
import asyncio
import unittest
import threading
import concurrent
import dataclasses
import multiprocessing
from concurrent.futures import CancelledError, TimeoutError
import pebble
from pebble import ProcessPool, ProcessExpired
from pebble.pool.base_pool import PoolStatus
# set start method
supported = False
mp_context = None
methods = multiprocessing.get_all_start_methods()
if 'fork' in methods:
try:
mp_context = multiprocessing.get_context('fork')
if mp_context.get_start_method() == 'fork':
supported = True
except RuntimeError: # child process
pass
initarg = 0
def initializer(value):
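    # Store the value in the worker's module globals so scheduled tasks
    # can observe the initializer's effect.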
global initarg
initarg = value
def long_initializer():
time.sleep(60)
def broken_initializer():
raise BaseException("BOOM!")
def function(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
def initializer_function():
return initarg
def error_function():
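    # Raise BaseException rather than Exception to check that even
    # non-Exception errors propagate through the future.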
raise BaseException("BOOM!")
def return_error_function():
return BaseException("BOOM!")
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
def frozen_error_function():
raise FrozenError()
def pickle_error_function():
return threading.Lock()
def long_function(value=1):
time.sleep(value)
return value
def pid_function():
time.sleep(0.1)
return os.getpid()
def sigterm_function():
signal.signal(signal.SIGTERM, signal.SIG_IGN)
time.sleep(10)
def suicide_function():
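    # Kill the worker immediately so the pool must detect an abruptly
    # dead process.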
os._exit(1)
def process_function():
p = multiprocessing.Process(target=function, args=[1])
p.start()
p.join()
return 1
def pool_function():
pool = multiprocessing.Pool(1)
result = pool.apply(function, args=[1])
pool.close()
pool.join()
return result
def pebble_function():
with ProcessPool(max_workers=1) as pool:
f = pool.schedule(function, args=[1])
return f.result()
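# process_function, pool_function and pebble_function check that a pool worker
# can itself start processes and pools, presumably because pebble workers are
# not created as daemonic processes.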
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPool(unittest.TestCase):
def setUp(self):
global initarg
initarg = 0
self.event = threading.Event()
self.event.clear()
self.result = None
self.exception = None
def callback(self, future):
try:
self.result = future.result()
except BaseException as error:
self.exception = error
finally:
self.event.set()
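    # The callback records the future's outcome and sets the event, so tests
    # can block on self.event.wait() until the done-callback has fired.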
def test_process_pool_single_future(self):
"""Process Pool Fork single future."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
self.assertEqual(future.result(), 2)
def test_process_pool_multiple_futures(self):
"""Process Pool Fork multiple futures."""
futures = []
with ProcessPool(max_workers=2, context=mp_context) as pool:
for _ in range(5):
futures.append(pool.schedule(function, args=[1]))
self.assertEqual(sum([f.result() for f in futures]), 5)
def test_process_pool_callback(self):
"""Process Pool Fork result is forwarded to the callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(
function, args=[1], kwargs={'keyword_argument': 1})
future.add_done_callback(self.callback)
self.event.wait()
self.assertEqual(self.result, 2)
def test_process_pool_error(self):
"""Process Pool Fork errors are raised by future get."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(error_function)
self.assertRaises(BaseException, future.result)
def test_process_pool_error_returned(self):
"""Process Pool Fork returned errors are returned by future get."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(return_error_function)
self.assertIsInstance(future.result(), BaseException)
def test_process_pool_error_callback(self):
"""Process Pool Fork errors are forwarded to callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(error_function)
future.add_done_callback(self.callback)
self.event.wait()
self.assertTrue(isinstance(self.exception, BaseException))
def test_process_pool_pickling_error_task(self):
"""Process Pool Fork task pickling errors
are raised by future.result."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(function, args=[threading.Lock()])
self.assertRaises((pickle.PicklingError, TypeError), future.result)
def test_process_pool_pickling_error_result(self):
"""Process Pool Fork result pickling errors
are raised by future.result."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(pickle_error_function)
self.assertRaises((pickle.PicklingError, TypeError), future.result)
def test_process_pool_frozen_error(self):
"""Process Pool Fork frozen errors are raised by future get."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(frozen_error_function)
self.assertRaises(FrozenError, future.result)
def test_process_pool_timeout(self):
"""Process Pool Fork future raises TimeoutError if so."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function, timeout=0.1)
self.assertRaises(TimeoutError, future.result)
def test_process_pool_timeout_callback(self):
"""Process Pool Fork TimeoutError is forwarded to callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function, timeout=0.1)
future.add_done_callback(self.callback)
self.event.wait()
self.assertTrue(isinstance(self.exception, TimeoutError))
def test_process_pool_cancel(self):
"""Process Pool Fork future raises CancelledError if so."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function)
time.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
self.assertRaises(CancelledError, future.result)
def test_process_pool_cancel_callback(self):
"""Process Pool Fork CancelledError is forwarded to callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function)
future.add_done_callback(self.callback)
time.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
self.event.wait()
self.assertTrue(isinstance(self.exception, CancelledError))
    @unittest.skipIf(sys.platform == 'darwin', "Not supported on macOS")
def test_process_pool_different_process(self):
"""Process Pool Fork multiple futures are handled by different processes."""
futures = []
with ProcessPool(max_workers=2, context=mp_context) as pool:
for _ in range(0, 5):
futures.append(pool.schedule(pid_function))
self.assertEqual(len(set([f.result() for f in futures])), 2)
def test_process_pool_future_limit(self):
"""Process Pool Fork tasks limit is honored."""
futures = []
with ProcessPool(max_workers=1, max_tasks=2, context=mp_context) as pool:
for _ in range(0, 4):
futures.append(pool.schedule(pid_function))
self.assertEqual(len(set([f.result() for f in futures])), 2)
def test_process_pool_stop_timeout(self):
"""Process Pool Fork workers are stopped if future timeout."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future1 = pool.schedule(pid_function)
pool.schedule(long_function, timeout=0.1)
future2 = pool.schedule(pid_function)
self.assertNotEqual(future1.result(), future2.result())
def test_process_pool_stop_cancel(self):
"""Process Pool Fork workers are stopped if future is cancelled."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future1 = pool.schedule(pid_function)
cancel_future = pool.schedule(long_function)
time.sleep(0.1) # let the process pick up the task
cancel_future.cancel()
future2 = pool.schedule(pid_function)
self.assertNotEqual(future1.result(), future2.result())
def test_process_pool_initializer(self):
"""Process Pool Fork initializer is correctly run."""
with ProcessPool(initializer=initializer, initargs=[1], context=mp_context) as pool:
future = pool.schedule(initializer_function)
self.assertEqual(future.result(), 1)
def test_process_pool_broken_initializer(self):
"""Process Pool Fork broken initializer is notified."""
with self.assertRaises(RuntimeError):
with ProcessPool(initializer=broken_initializer, context=mp_context) as pool:
pool.active
time.sleep(0.4)
pool.schedule(function)
def test_process_pool_running(self):
"""Process Pool Fork is active if a future is scheduled."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
pool.schedule(function, args=[1])
self.assertTrue(pool.active)
def test_process_pool_stopped(self):
"""Process Pool Fork is not active once stopped."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
pool.schedule(function, args=[1])
self.assertFalse(pool.active)
def test_process_pool_close_futures(self):
"""Process Pool Fork all futures are performed on close."""
futures = []
pool = ProcessPool(max_workers=1, context=mp_context)
for index in range(10):
futures.append(pool.schedule(function, args=[index]))
pool.close()
pool.join()
        for future in futures:
            self.assertTrue(future.done())
def test_process_pool_close_stopped(self):
"""Process Pool Fork is stopped after close."""
pool = ProcessPool(max_workers=1, context=mp_context)
pool.schedule(function, args=[1])
pool.close()
pool.join()
self.assertFalse(pool.active)
def test_process_pool_stop_futures(self):
"""Process Pool Fork not all futures are performed on stop."""
futures = []
pool = ProcessPool(max_workers=1, context=mp_context)
for index in range(10):
futures.append(pool.schedule(function, args=[index]))
pool.stop()
pool.join()
self.assertTrue(len([f for f in futures if not f.done()]) > 0)
def test_process_pool_stop_stopped(self):
"""Process Pool Fork is stopped after stop."""
pool = ProcessPool(max_workers=1, context=mp_context)
pool.schedule(function, args=[1])
pool.stop()
pool.join()
self.assertFalse(pool.active)
def test_process_pool_stop_stopped_callback(self):
"""Process Pool Fork is stopped in callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
def stop_pool_callback(_):
pool.stop()
future = pool.schedule(function, args=[1])
future.add_done_callback(stop_pool_callback)
with self.assertRaises(RuntimeError):
for index in range(10):
time.sleep(0.1)
pool.schedule(long_function, args=[index])
self.assertFalse(pool.active)
def test_process_pool_large_data(self):
"""Process Pool Fork large data is sent on the channel."""
data = "a" * 1098 * 1024 * 100 # 100 Mb
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(
function, args=[data], kwargs={'keyword_argument': ''})
self.assertEqual(data, future.result())
def test_process_pool_stop_large_data(self):
"""Process Pool Fork is stopped if large data is sent on the channel."""
data = "a" * 1098 * 1024 * 100 # 100 Mb
pool = ProcessPool(max_workers=1, context=mp_context)
pool.schedule(function, args=[data])
time.sleep(1)
pool.stop()
pool.join()
self.assertFalse(pool.active)
def test_process_pool_join_workers(self):
"""Process Pool Fork no worker is running after join."""
pool = ProcessPool(max_workers=4, context=mp_context)
pool.schedule(function, args=[1])
pool.stop()
pool.join()
self.assertEqual(len(pool._pool_manager.worker_manager.workers), 0)
def test_process_pool_join_running(self):
"""Process Pool Fork RuntimeError is raised if active pool joined."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
pool.schedule(function, args=[1])
self.assertRaises(RuntimeError, pool.join)
def test_process_pool_join_futures_timeout(self):
"""Process Pool Fork TimeoutError is raised if join on long futures."""
pool = ProcessPool(max_workers=1, context=mp_context)
for _ in range(2):
pool.schedule(long_function)
pool.close()
self.assertRaises(TimeoutError, pool.join, 0.4)
pool.stop()
pool.join()
def test_process_pool_callback_error(self):
"""Process Pool Fork does not stop if error in callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
future.add_done_callback(self.callback)
# sleep enough to ensure callback is run
time.sleep(0.1)
pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
def test_process_pool_exception_isolated(self):
"""Process Pool Fork an BaseException does not affect other futures."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(error_function)
try:
future.result()
except BaseException:
pass
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
self.assertEqual(future.result(), 2)
    @unittest.skipIf(os.name == 'nt', "Test won't run on Windows.")
def test_process_pool_ignoring_sigterm(self):
"""Process Pool Fork ignored SIGTERM signal are handled on Unix."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(sigterm_function, timeout=0.2)
with self.assertRaises(TimeoutError):
future.result()
def test_process_pool_expired_worker(self):
"""Process Pool Fork unexpect death of worker raises ProcessExpired."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(suicide_function)
worker_pid = list(pool._pool_manager.worker_manager.workers)[0]
with self.assertRaises(ProcessExpired) as exc_ctx:
future.result()
self.assertEqual(exc_ctx.exception.exitcode, 1)
self.assertEqual(exc_ctx.exception.pid, worker_pid)
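    # ProcessExpired carries the worker's exit code and pid (asserted above),
    # so a crashed worker can be identified from the exception alone.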
def test_process_pool_map(self):
"""Process Pool Fork map simple."""
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_empty(self):
"""Process Pool Fork map no elements."""
elements = []
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_single(self):
"""Process Pool Fork map one element."""
elements = [0]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_multi(self):
"""Process Pool Fork map multiple iterables."""
expected = (2, 4)
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, (1, 2, 3), (1, 2))
generator = future.result()
self.assertEqual(tuple(generator), expected)
def test_process_pool_map_one_chunk(self):
"""Process Pool Fork map chunksize 1."""
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements, chunksize=1)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_zero_chunk(self):
"""Process Pool Fork map chunksize 0."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(ValueError):
pool.map(function, [], chunksize=0)
def test_process_pool_map_timeout(self):
"""Process Pool Fork map with timeout."""
raised = []
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(long_function, elements, timeout=0.1)
generator = future.result()
while True:
try:
next(generator)
except TimeoutError as error:
raised.append(error)
except StopIteration:
break
self.assertTrue(all((isinstance(e, TimeoutError) for e in raised)))
def test_process_pool_map_timeout_chunks(self):
"""Process Pool Fork map timeout is assigned per chunk."""
elements = [0.1]*20
with ProcessPool(max_workers=1, context=mp_context) as pool:
            # each chunk of 5 elements takes ~0.5s; the 1.8s timeout applies per chunk
future = pool.map(
long_function, elements, chunksize=5, timeout=1.8)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_error(self):
"""Process Pool Fork errors do not stop the iteration."""
raised = None
elements = [1, 'a', 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
while True:
try:
result = next(generator)
except TypeError as error:
raised = error
except StopIteration:
break
self.assertEqual(result, 3)
self.assertTrue(isinstance(raised, TypeError))
def test_process_pool_map_cancel(self):
"""Process Pool Fork cancel iteration."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(long_function, range(5))
generator = future.result()
self.assertEqual(next(generator), 0)
future.cancel()
for _ in range(4):
with self.assertRaises(CancelledError):
next(generator)
def test_process_pool_map_broken_pool(self):
"""Process Pool Fork Broken Pool."""
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(long_function, elements, timeout=1)
generator = future.result()
pool._context.status = PoolStatus.ERROR
while True:
try:
next(generator)
except TimeoutError as error:
self.assertFalse(pool.active)
future.cancel()
break
except StopIteration:
break
def test_process_pool_child_process(self):
"""Process Pool Fork worker starts process."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(process_function)
self.assertEqual(future.result(), 1)
def test_process_pool_child_pool(self):
"""Process Pool Fork worker starts multiprocessing.Pool."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(pool_function)
self.assertEqual(future.result(), 1)
def test_process_pool_child_pebble(self):
"""Process Pool Fork worker starts pebble.ProcessPool."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(pebble_function)
self.assertEqual(future.result(), 1)
@unittest.skipIf(not supported, "Start method is not supported")
class TestAsyncIOProcessPool(unittest.TestCase):
def setUp(self):
self.event = None
self.result = None
self.exception = None
def callback(self, future):
try:
self.result = future.result()
            # asyncio.exceptions.CancelledError inherits from BaseException, not Exception
except BaseException as error:
self.exception = error
finally:
self.event.set()
def test_process_pool_single_future(self):
"""Process Pool Fork single future."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, function, None, 1)
with ProcessPool(max_workers=1, context=mp_context) as pool:
self.assertEqual(asyncio.run(test(pool)), 1)
def test_process_pool_multiple_futures(self):
"""Process Pool Fork multiple futures."""
async def test(pool):
futures = []
loop = asyncio.get_running_loop()
for _ in range(5):
futures.append(loop.run_in_executor(pool, function, None, 1))
return await asyncio.wait(futures)
with ProcessPool(max_workers=2, context=mp_context) as pool:
self.assertEqual(sum(r.result()
for r in asyncio.run(test(pool))[0]), 5)
def test_process_pool_callback(self):
"""Process Pool Fork result is forwarded to the callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, function, None, 1)
future.add_done_callback(self.callback)
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertEqual(self.result, 1)
def test_process_pool_error(self):
"""Process Pool Fork errors are raised by future get."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, error_function, None)
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(BaseException):
asyncio.run(test(pool))
def test_process_pool_error_returned(self):
"""Process Pool Fork returned errors are returned by future get."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, return_error_function, None)
with ProcessPool(max_workers=1, context=mp_context) as pool:
self.assertIsInstance(asyncio.run(test(pool)), BaseException)
def test_process_pool_error_callback(self):
"""Process Pool Fork errors are forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, error_function, None)
future.add_done_callback(self.callback)
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, BaseException))
def test_process_pool_timeout(self):
"""Process Pool Fork future raises TimeoutError if so."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, long_function, 0.1)
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(asyncio.TimeoutError):
asyncio.run(test(pool))
def test_process_pool_timeout_callback(self):
"""Process Pool Fork TimeoutError is forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, long_function, 0.1)
future.add_done_callback(self.callback)
await asyncio.sleep(0.1) # let the process pick up the task
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, asyncio.TimeoutError))
def test_process_pool_cancel(self):
"""Process Pool Fork future raises CancelledError if so."""
async def test(pool):
loop = asyncio.get_running_loop()
future = loop.run_in_executor(pool, long_function, None)
await asyncio.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
return await future
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(asyncio.CancelledError):
asyncio.run(test(pool))
def test_process_pool_cancel_callback(self):
"""Process Pool Fork CancelledError is forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, long_function, None)
future.add_done_callback(self.callback)
await asyncio.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, asyncio.CancelledError))
def test_process_pool_stop_timeout(self):
"""Process Pool Fork workers are stopped if future timeout."""
async def test(pool):
loop = asyncio.get_running_loop()
future1 = loop.run_in_executor(pool, pid_function, None)
with self.assertRaises(asyncio.TimeoutError):
await loop.run_in_executor(pool, long_function, 0.1)
future2 = loop.run_in_executor(pool, pid_function, None)
self.assertNotEqual(await future1, await future2)
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
def test_process_pool_stop_cancel(self):
"""Process Pool Fork workers are stopped if future is cancelled."""
async def test(pool):
loop = asyncio.get_running_loop()
future1 = loop.run_in_executor(pool, pid_function, None)
cancel_future = loop.run_in_executor(pool, long_function, None)
await asyncio.sleep(0.1) # let the process pick up the task
self.assertTrue(cancel_future.cancel())
future2 = loop.run_in_executor(pool, pid_function, None)
self.assertNotEqual(await future1, await future2)
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
# DEADLOCK TESTS
def broken_worker_process_tasks(_, channel):
"""Process failing in receiving new tasks."""
with channel.mutex.reader:
os._exit(1)
def broken_worker_process_result(_, channel):
"""Process failing in delivering result."""
try:
for _ in pebble.pool.process.worker_get_next_task(channel, 2):
with channel.mutex.writer:
os._exit(1)
except OSError:
os._exit(1)
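# These replacement workers die while holding one side of the channel lock,
# the worst-case deadlock scenario; the tests rely on the shortened
# channel_lock_timeout (set in setUp) to detect the dead worker and recover.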
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPoolDeadlockOnNewFutures(unittest.TestCase):
def setUp(self):
self.worker_process = pebble.pool.process.worker_process
pebble.pool.process.worker_process = broken_worker_process_tasks
pebble.CONSTS.channel_lock_timeout = 0.1
def tearDown(self):
pebble.pool.process.worker_process = self.worker_process
pebble.CONSTS.channel_lock_timeout = 60
def test_pool_deadlock_stop(self):
"""Process Pool Fork reading deadlocks are stopping the Pool."""
with self.assertRaises(RuntimeError):
pool = pebble.ProcessPool(max_workers=1, context=mp_context)
for _ in range(10):
pool.schedule(function)
time.sleep(0.2)
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPoolDeadlockOnResult(unittest.TestCase):
def setUp(self):
self.worker_process = pebble.pool.process.worker_process
pebble.pool.process.worker_process = broken_worker_process_result
pebble.CONSTS.channel_lock_timeout = 0.1
def tearDown(self):
pebble.pool.process.worker_process = self.worker_process
pebble.CONSTS.channel_lock_timeout = 60
def test_pool_deadlock(self):
"""Process Pool Fork no deadlock if writing worker dies locking channel."""
with pebble.ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(pebble.ProcessExpired):
pool.schedule(function).result()
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPoolDeadlockOnCancelLargeData(unittest.TestCase):
def test_pool_deadlock_stop_cancel(self):
"""Pool is stopped when futures are cancelled on large data."""
data = b'A' * 1024 * 1024 * 100
with pebble.ProcessPool() as pool:
futures = [pool.schedule(function, args=[data]) for _ in range(10)]
concurrent.futures.wait(
futures,
return_when=concurrent.futures.FIRST_COMPLETED
)
for f in futures:
f.cancel()
pool.stop()
pebble-5.1.1/test/test_process_pool_forkserver.py
import os
import sys
import time
import pickle
import signal
import asyncio
import unittest
import threading
import concurrent
import dataclasses
import multiprocessing
from concurrent.futures import CancelledError, TimeoutError
import pebble
from pebble import ProcessPool, ProcessExpired
from pebble.pool.base_pool import PoolStatus
# set start method
supported = False
mp_context = None
methods = multiprocessing.get_all_start_methods()
if 'forkserver' in methods:
try:
mp_context = multiprocessing.get_context('forkserver')
if mp_context.get_start_method() == 'forkserver':
supported = True
else:
raise BaseException(mp_context.get_start_method())
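            # Unlike the fork variant, a start-method mismatch here fails
            # loudly instead of silently skipping; it reads as a debug guard.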
except RuntimeError: # child process
pass
initarg = 0
def initializer(value):
global initarg
initarg = value
def long_initializer():
time.sleep(60)
def broken_initializer():
raise BaseException("BOOM!")
def function(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
def initializer_function():
return initarg
def error_function():
raise BaseException("BOOM!")
def return_error_function():
return BaseException("BOOM!")
def pickle_error_function():
return threading.Lock()
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
def frozen_error_function():
raise FrozenError()
def long_function(value=1):
time.sleep(value)
return value
def pid_function():
time.sleep(0.1)
return os.getpid()
def sigterm_function():
signal.signal(signal.SIGTERM, signal.SIG_IGN)
time.sleep(10)
def suicide_function():
os._exit(1)
def process_function():
p = multiprocessing.Process(target=function, args=[1])
p.start()
p.join()
return 1
def pool_function():
pool = multiprocessing.Pool(1)
result = pool.apply(function, args=[1])
pool.close()
pool.join()
return result
def pebble_function():
with ProcessPool(max_workers=1) as pool:
f = pool.schedule(function, args=[1])
return f.result()
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPool(unittest.TestCase):
def setUp(self):
global initarg
initarg = 0
self.event = threading.Event()
self.event.clear()
self.result = None
self.exception = None
def callback(self, future):
try:
self.result = future.result()
except BaseException as error:
self.exception = error
finally:
self.event.set()
def test_process_pool_single_future(self):
"""Process Pool Forkserver single future."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
self.assertEqual(future.result(), 2)
def test_process_pool_multiple_futures(self):
"""Process Pool Forkserver multiple futures."""
futures = []
with ProcessPool(max_workers=1, context=mp_context) as pool:
for _ in range(5):
futures.append(pool.schedule(function, args=[1]))
self.assertEqual(sum([f.result() for f in futures]), 5)
def test_process_pool_callback(self):
"""Process Pool Forkserver result is forwarded to the callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(
function, args=[1], kwargs={'keyword_argument': 1})
future.add_done_callback(self.callback)
self.event.wait()
self.assertEqual(self.result, 2)
def test_process_pool_error(self):
"""Process Pool Forkserver errors are raised by future get."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(error_function)
self.assertRaises(BaseException, future.result)
def test_process_pool_error_returned(self):
"""Process Pool Forkserver returned errors are returned by future get."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(return_error_function)
self.assertIsInstance(future.result(), BaseException)
def test_process_pool_error_callback(self):
"""Process Pool Forkserver errors are forwarded to callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(error_function)
future.add_done_callback(self.callback)
self.event.wait()
self.assertTrue(isinstance(self.exception, BaseException))
def test_process_pool_pickling_error_task(self):
"""Process Pool Forkserver task pickling errors
are raised by future.result."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(function, args=[threading.Lock()])
self.assertRaises((pickle.PicklingError, TypeError), future.result)
def test_process_pool_pickling_error_result(self):
"""Process Pool Forkserver result pickling errors
are raised by future.result."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(pickle_error_function)
self.assertRaises((pickle.PicklingError, TypeError), future.result)
def test_process_pool_frozen_error(self):
"""Process Pool Forkserver frozen errors are raised by future get."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(frozen_error_function)
self.assertRaises(FrozenError, future.result)
def test_process_pool_timeout(self):
"""Process Pool Forkserver future raises TimeoutError if so."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function, timeout=0.1)
self.assertRaises(TimeoutError, future.result)
def test_process_pool_timeout_callback(self):
"""Process Pool Forkserver TimeoutError is forwarded to callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function, timeout=0.1)
future.add_done_callback(self.callback)
self.event.wait()
self.assertTrue(isinstance(self.exception, TimeoutError))
def test_process_pool_cancel(self):
"""Process Pool Forkserver future raises CancelledError if so."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function)
time.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
self.assertRaises(CancelledError, future.result)
def test_process_pool_cancel_callback(self):
"""Process Pool Forkserver CancelledError is forwarded to callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function)
future.add_done_callback(self.callback)
time.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
self.event.wait()
self.assertTrue(isinstance(self.exception, CancelledError))
    @unittest.skipIf(sys.platform == 'darwin', "Not supported on macOS")
def test_process_pool_different_process(self):
"""Process Pool Forkserver multiple futures are handled by different processes."""
futures = []
with ProcessPool(max_workers=2, context=mp_context) as pool:
for _ in range(0, 5):
futures.append(pool.schedule(pid_function))
self.assertEqual(len(set([f.result() for f in futures])), 2)
def test_process_pool_future_limit(self):
"""Process Pool Forkserver tasks limit is honored."""
futures = []
with ProcessPool(max_workers=1, max_tasks=2, context=mp_context) as pool:
for _ in range(0, 4):
futures.append(pool.schedule(pid_function))
self.assertEqual(len(set([f.result() for f in futures])), 2)
def test_process_pool_stop_timeout(self):
"""Process Pool Forkserver workers are stopped if future timeout."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future1 = pool.schedule(pid_function)
pool.schedule(long_function, timeout=0.1)
future2 = pool.schedule(pid_function)
self.assertNotEqual(future1.result(), future2.result())
def test_process_pool_stop_cancel(self):
"""Process Pool Forkserver workers are stopped if future is cancelled."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future1 = pool.schedule(pid_function)
cancel_future = pool.schedule(long_function)
time.sleep(0.1) # let the process pick up the task
cancel_future.cancel()
future2 = pool.schedule(pid_function)
self.assertNotEqual(future1.result(), future2.result())
def test_process_pool_initializer(self):
"""Process Pool Forkserver initializer is correctly run."""
with ProcessPool(initializer=initializer, initargs=[1], context=mp_context) as pool:
future = pool.schedule(initializer_function)
self.assertEqual(future.result(), 1)
def test_process_pool_broken_initializer(self):
"""Process Pool Forkserver broken initializer is notified."""
with self.assertRaises(RuntimeError):
with ProcessPool(initializer=broken_initializer, context=mp_context) as pool:
pool.active
time.sleep(1)
pool.schedule(function)
def test_process_pool_running(self):
"""Process Pool Forkserver is active if a future is scheduled."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
pool.schedule(function, args=[1])
self.assertTrue(pool.active)
def test_process_pool_stopped(self):
"""Process Pool Forkserver is not active once stopped."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
pool.schedule(function, args=[1])
self.assertFalse(pool.active)
def test_process_pool_close_futures(self):
"""Process Pool Forkserver all futures are performed on close."""
futures = []
pool = ProcessPool(max_workers=1, context=mp_context)
for index in range(10):
futures.append(pool.schedule(function, args=[index]))
pool.close()
pool.join()
        for future in futures:
            self.assertTrue(future.done())
def test_process_pool_close_stopped(self):
"""Process Pool Forkserver is stopped after close."""
pool = ProcessPool(max_workers=1, context=mp_context)
pool.schedule(function, args=[1])
pool.close()
pool.join()
self.assertFalse(pool.active)
def test_process_pool_stop_futures(self):
"""Process Pool Forkserver not all futures are performed on stop."""
futures = []
pool = ProcessPool(max_workers=1, context=mp_context)
for index in range(10):
futures.append(pool.schedule(function, args=[index]))
pool.stop()
pool.join()
self.assertTrue(len([f for f in futures if not f.done()]) > 0)
def test_process_pool_stop_stopped(self):
"""Process Pool Forkserver is stopped after stop."""
pool = ProcessPool(max_workers=1, context=mp_context)
pool.schedule(function, args=[1])
pool.stop()
pool.join()
self.assertFalse(pool.active)
def test_process_pool_stop_stopped_callback(self):
"""Process Pool Forkserver is stopped in callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
def stop_pool_callback(_):
pool.stop()
future = pool.schedule(function, args=[1])
future.add_done_callback(stop_pool_callback)
with self.assertRaises(RuntimeError):
for index in range(10):
time.sleep(0.1)
pool.schedule(long_function, args=[index])
self.assertFalse(pool.active)
def test_process_pool_large_data(self):
"""Process Pool Forkserver large data is sent on the channel."""
data = "a" * 1098 * 1024 * 100 # 100 Mb
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(
function, args=[data], kwargs={'keyword_argument': ''})
self.assertEqual(data, future.result())
def test_process_pool_stop_large_data(self):
"""Process Pool Forkserver is stopped if large data is sent on the channel."""
data = "a" * 1098 * 1024 * 100 # 100 Mb
pool = ProcessPool(max_workers=1, context=mp_context)
pool.schedule(function, args=[data])
time.sleep(1)
pool.stop()
pool.join()
self.assertFalse(pool.active)
def test_process_pool_join_workers(self):
"""Process Pool Forkserver no worker is running after join."""
pool = ProcessPool(max_workers=4, context=mp_context)
pool.schedule(function, args=[1])
pool.stop()
pool.join()
self.assertEqual(len(pool._pool_manager.worker_manager.workers), 0)
def test_process_pool_join_running(self):
"""Process Pool Forkserver RuntimeError is raised if active pool joined."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
pool.schedule(function, args=[1])
self.assertRaises(RuntimeError, pool.join)
def test_process_pool_join_futures_timeout(self):
"""Process Pool Forkserver TimeoutError is raised if join on long tasks."""
pool = ProcessPool(max_workers=1, context=mp_context)
for _ in range(2):
pool.schedule(long_function)
pool.close()
self.assertRaises(TimeoutError, pool.join, 0.4)
pool.stop()
pool.join()
def test_process_pool_callback_error(self):
"""Process Pool Forkserver does not stop if error in callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
future.add_done_callback(self.callback)
# sleep enough to ensure callback is run
time.sleep(0.1)
pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
def test_process_pool_exception_isolated(self):
"""Process Pool Forkserver an BaseException does not affect other futures."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(error_function)
try:
future.result()
except BaseException:
pass
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
self.assertEqual(future.result(), 2)
    @unittest.skipIf(os.name == 'nt', "Test won't run on Windows.")
def test_process_pool_ignoring_sigterm(self):
"""Process Pool Forkserver ignored SIGTERM signal are handled on Unix."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(sigterm_function, timeout=0.2)
with self.assertRaises(TimeoutError):
future.result()
def test_process_pool_expired_worker(self):
"""Process Pool Forkserver unexpect death of worker raises ProcessExpired."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(suicide_function)
worker_pid = list(pool._pool_manager.worker_manager.workers)[0]
with self.assertRaises(ProcessExpired) as exc_ctx:
future.result()
self.assertEqual(exc_ctx.exception.exitcode, 1)
self.assertEqual(exc_ctx.exception.pid, worker_pid)
def test_process_pool_map(self):
"""Process Pool Forkserver map simple."""
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_empty(self):
"""Process Pool Forkserver map no elements."""
elements = []
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_single(self):
"""Process Pool Forkserver map one element."""
elements = [0]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_multi(self):
"""Process Pool Forkserver map multiple iterables."""
expected = (2, 4)
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, (1, 2, 3), (1, 2))
generator = future.result()
self.assertEqual(tuple(generator), expected)
def test_process_pool_map_one_chunk(self):
"""Process Pool Forkserver map chunksize 1."""
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements, chunksize=1)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_zero_chunk(self):
"""Process Pool Forkserver map chunksize 0."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(ValueError):
pool.map(function, [], chunksize=0)
def test_process_pool_map_timeout(self):
"""Process Pool Forkserver map with timeout."""
raised = []
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(long_function, elements, timeout=0.1)
generator = future.result()
while True:
try:
next(generator)
except TimeoutError as error:
raised.append(error)
except StopIteration:
break
self.assertTrue(all((isinstance(e, TimeoutError) for e in raised)))
def test_process_pool_map_timeout_chunks(self):
"""Process Pool Forkserver map timeout is assigned per chunk."""
elements = [0.1]*20
with ProcessPool(max_workers=1, context=mp_context) as pool:
            # each chunk of 5 elements takes ~0.5s; the 1.8s timeout applies per chunk
future = pool.map(
long_function, elements, chunksize=5, timeout=1.8)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_error(self):
"""Process Pool Forkserver errors do not stop the iteration."""
raised = None
elements = [1, 'a', 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
while True:
try:
next(generator)
except TypeError as error:
raised = error
except StopIteration:
break
self.assertTrue(isinstance(raised, TypeError))
def test_process_pool_map_cancel(self):
"""Process Pool Forkserver cancel iteration."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(long_function, range(5))
generator = future.result()
self.assertEqual(next(generator), 0)
future.cancel()
for _ in range(4):
with self.assertRaises(CancelledError):
next(generator)
def test_process_pool_map_broken_pool(self):
"""Process Pool Forkserver Broken Pool."""
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(long_function, elements, timeout=1)
generator = future.result()
pool._context.status = PoolStatus.ERROR
while True:
try:
next(generator)
except TimeoutError as error:
self.assertFalse(pool.active)
future.cancel()
break
except StopIteration:
break
def test_process_pool_child_process(self):
"""Process Pool Forkserver worker starts process."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(process_function)
self.assertEqual(future.result(), 1)
def test_process_pool_child_pool(self):
"""Process Pool Forkserver worker starts multiprocessing.Pool."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(pool_function)
self.assertEqual(future.result(), 1)
def test_process_pool_child_pebble(self):
"""Process Pool Forkserver worker starts pebble.ProcessPool."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(pebble_function)
self.assertEqual(future.result(), 1)
@unittest.skipIf(not supported, "Start method is not supported")
class TestAsyncIOProcessPool(unittest.TestCase):
def setUp(self):
self.event = None
self.result = None
self.exception = None
def callback(self, future):
try:
self.result = future.result()
            # asyncio.exceptions.CancelledError inherits from BaseException, not Exception
except BaseException as error:
self.exception = error
finally:
self.event.set()
def test_process_pool_single_future(self):
"""Process Pool Forkserver single future."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, function, None, 1)
with ProcessPool(max_workers=1, context=mp_context) as pool:
self.assertEqual(asyncio.run(test(pool)), 1)
def test_process_pool_multiple_futures(self):
"""Process Pool Forkserver multiple futures."""
async def test(pool):
futures = []
loop = asyncio.get_running_loop()
for _ in range(5):
futures.append(loop.run_in_executor(pool, function, None, 1))
return await asyncio.wait(futures)
with ProcessPool(max_workers=2, context=mp_context) as pool:
self.assertEqual(sum(r.result()
for r in asyncio.run(test(pool))[0]), 5)
def test_process_pool_callback(self):
"""Process Pool Forkserver result is forwarded to the callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, function, None, 1)
future.add_done_callback(self.callback)
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertEqual(self.result, 1)
def test_process_pool_error(self):
"""Process Pool Forkserver errors are raised by future get."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, error_function, None)
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(BaseException):
asyncio.run(test(pool))
def test_process_pool_error_returned(self):
"""Process Pool Forkserver returned errors are returned by future get."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, return_error_function, None)
with ProcessPool(max_workers=1, context=mp_context) as pool:
self.assertIsInstance(asyncio.run(test(pool)), BaseException)
def test_process_pool_error_callback(self):
"""Process Pool Forkserver errors are forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, error_function, None)
future.add_done_callback(self.callback)
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, BaseException))
def test_process_pool_timeout(self):
"""Process Pool Forkserver future raises TimeoutError if so."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, long_function, 0.1)
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(asyncio.TimeoutError):
asyncio.run(test(pool))
def test_process_pool_timeout_callback(self):
"""Process Pool Forkserver TimeoutError is forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, long_function, 0.1)
future.add_done_callback(self.callback)
await asyncio.sleep(0.1) # let the process pick up the task
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, asyncio.TimeoutError))
def test_process_pool_cancel(self):
"""Process Pool Forkserver future raises CancelledError if so."""
async def test(pool):
loop = asyncio.get_running_loop()
future = loop.run_in_executor(pool, long_function, None)
await asyncio.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
return await future
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(asyncio.CancelledError):
asyncio.run(test(pool))
def test_process_pool_cancel_callback(self):
"""Process Pool Forkserver CancelledError is forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, long_function, None)
future.add_done_callback(self.callback)
await asyncio.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, asyncio.CancelledError))
def test_process_pool_stop_timeout(self):
"""Process Pool Forkserver workers are stopped if future timeout."""
async def test(pool):
loop = asyncio.get_running_loop()
future1 = loop.run_in_executor(pool, pid_function, None)
with self.assertRaises(asyncio.TimeoutError):
await loop.run_in_executor(pool, long_function, 0.1)
future2 = loop.run_in_executor(pool, pid_function, None)
self.assertNotEqual(await future1, await future2)
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
def test_process_pool_stop_cancel(self):
"""Process Pool Forkserver workers are stopped if future is cancelled."""
async def test(pool):
loop = asyncio.get_running_loop()
future1 = loop.run_in_executor(pool, pid_function, None)
cancel_future = loop.run_in_executor(pool, long_function, None)
await asyncio.sleep(0.1) # let the process pick up the task
self.assertTrue(cancel_future.cancel())
future2 = loop.run_in_executor(pool, pid_function, None)
self.assertNotEqual(await future1, await future2)
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
# DEADLOCK TESTS
def broken_worker_process_tasks(_, channel):
"""Process failing in receiving new tasks."""
with channel.mutex.reader:
os._exit(1)
def broken_worker_process_result(_, channel):
"""Process failing in delivering result."""
try:
for _ in pebble.pool.process.worker_get_next_task(channel, 2):
with channel.mutex.writer:
os._exit(1)
except OSError:
os._exit(1)
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPoolDeadlockOnNewFutures(unittest.TestCase):
def setUp(self):
self.worker_process = pebble.pool.process.worker_process
pebble.pool.process.worker_process = broken_worker_process_tasks
pebble.CONSTS.channel_lock_timeout = 0.1
def tearDown(self):
pebble.pool.process.worker_process = self.worker_process
pebble.CONSTS.channel_lock_timeout = 60
def test_pool_deadlock_stop(self):
"""Process Pool Forkserver reading deadlocks are stopping the Pool."""
with self.assertRaises(RuntimeError):
pool = pebble.ProcessPool(max_workers=1, context=mp_context)
for _ in range(10):
pool.schedule(function)
time.sleep(0.2)
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPoolDeadlockOnResult(unittest.TestCase):
def setUp(self):
self.worker_process = pebble.pool.process.worker_process
pebble.pool.process.worker_process = broken_worker_process_result
pebble.CONSTS.channel_lock_timeout = 0.1
def tearDown(self):
pebble.pool.process.worker_process = self.worker_process
pebble.CONSTS.channel_lock_timeout = 60
def test_pool_deadlock(self):
"""Process Pool Forkserver no deadlock if writing worker dies locking channel."""
with pebble.ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(pebble.ProcessExpired):
pool.schedule(function).result()
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPoolDeadlockOnCancelLargeData(unittest.TestCase):
def test_pool_deadlock_stop_cancel(self):
"""Process Pool Forkserver is stopped when futures are cancelled on large data."""
data = b'A' * 1024 * 1024 * 100
with pebble.ProcessPool() as pool:
futures = [pool.schedule(function, args=[data]) for _ in range(10)]
concurrent.futures.wait(
futures,
return_when=concurrent.futures.FIRST_COMPLETED
)
for f in futures:
f.cancel()
pool.stop()
pebble-5.1.1/test/test_process_pool_spawn.py
import os
import sys
import time
import pickle
import signal
import asyncio
import unittest
import threading
import concurrent
import dataclasses
import multiprocessing
from concurrent.futures import CancelledError, TimeoutError
import pebble
from pebble import ProcessPool, ProcessExpired
from pebble.pool.base_pool import PoolStatus
# set start method
supported = False
mp_context = None
methods = multiprocessing.get_all_start_methods()
if 'spawn' in methods:
try:
mp_context = multiprocessing.get_context('spawn')
if mp_context.get_start_method() == 'spawn':
supported = True
except RuntimeError: # child process
pass
initarg = 0
def initializer(value):
global initarg
initarg = value
def long_initializer():
time.sleep(60)
def broken_initializer():
raise BaseException("BOOM!")
def function(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
def initializer_function():
return initarg
def error_function():
raise BaseException("BOOM!")
def return_error_function():
return BaseException("BOOM!")
def pickle_error_function():
return threading.Lock()
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
def frozen_error_function():
raise FrozenError()
def long_function(value=1):
time.sleep(value)
return value
def pid_function():
time.sleep(0.1)
return os.getpid()
def sigterm_function():
signal.signal(signal.SIGTERM, signal.SIG_IGN)
time.sleep(10)
def suicide_function():
os._exit(1)
def process_function():
p = multiprocessing.Process(target=function, args=[1])
p.start()
p.join()
return 1
def pool_function():
pool = multiprocessing.Pool(1)
result = pool.apply(function, args=[1])
pool.close()
pool.join()
return result
def pebble_function():
with ProcessPool(max_workers=1) as pool:
f = pool.schedule(function, args=[1])
return f.result()
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPool(unittest.TestCase):
def setUp(self):
global initarg
initarg = 0
self.event = threading.Event()
self.event.clear()
self.result = None
self.exception = None
def callback(self, future):
try:
self.result = future.result()
except BaseException as error:
self.exception = error
finally:
self.event.set()
def test_process_pool_single_future(self):
"""Process Pool Spawn single future."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
self.assertEqual(future.result(), 2)
def test_process_pool_multiple_futures(self):
"""Process Pool Spawn multiple futures."""
futures = []
with ProcessPool(max_workers=1, context=mp_context) as pool:
for _ in range(5):
futures.append(pool.schedule(function, args=[1]))
self.assertEqual(sum([f.result() for f in futures]), 5)
def test_process_pool_callback(self):
"""Process Pool Spawn result is forwarded to the callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(
function, args=[1], kwargs={'keyword_argument': 1})
future.add_done_callback(self.callback)
self.event.wait()
self.assertEqual(self.result, 2)
def test_process_pool_error(self):
"""Process Pool Spawn errors are raised by future get."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(error_function)
self.assertRaises(BaseException, future.result)
def test_process_pool_error_returned(self):
"""Process Pool Spawn returned errors are returned by future get."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(return_error_function)
self.assertIsInstance(future.result(), BaseException)
def test_process_pool_error_callback(self):
"""Process Pool Spawn errors are forwarded to callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(error_function)
future.add_done_callback(self.callback)
self.event.wait()
self.assertTrue(isinstance(self.exception, BaseException))
def test_process_pool_pickling_error_task(self):
"""Process Pool Spawn task pickling errors
are raised by future.result."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(function, args=[threading.Lock()])
self.assertRaises((pickle.PicklingError, TypeError), future.result)
def test_process_pool_pickling_error_result(self):
"""Process Pool Spawn result pickling errors
are raised by future.result."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(pickle_error_function)
self.assertRaises((pickle.PicklingError, TypeError), future.result)
def test_process_pool_frozen_error(self):
"""Process Pool Spawn frozen errors are raised by future get."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(frozen_error_function)
self.assertRaises(FrozenError, future.result)
def test_process_pool_timeout(self):
"""Process Pool Spawn future raises TimeoutError if so."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function, timeout=0.1)
self.assertRaises(TimeoutError, future.result)
def test_process_pool_timeout_callback(self):
"""Process Pool Spawn TimeoutError is forwarded to callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function, timeout=0.1)
future.add_done_callback(self.callback)
self.event.wait()
self.assertTrue(isinstance(self.exception, TimeoutError))
def test_process_pool_cancel(self):
"""Process Pool Spawn future raises CancelledError if so."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function)
time.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
self.assertRaises(CancelledError, future.result)
def test_process_pool_cancel_callback(self):
"""Process Pool Spawn CancelledError is forwarded to callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(long_function)
future.add_done_callback(self.callback)
time.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
self.event.wait()
self.assertTrue(isinstance(self.exception, CancelledError))
    @unittest.skipIf(sys.platform == 'darwin', "Not supported on macOS")
def test_process_pool_different_process(self):
"""Process Pool Spawn futures are handled by different processes."""
futures = []
with ProcessPool(max_workers=2, context=mp_context) as pool:
for _ in range(0, 5):
futures.append(pool.schedule(pid_function))
self.assertEqual(len(set([f.result() for f in futures])), 2)
def test_process_pool_future_limit(self):
"""Process Pool Spawn tasks limit is honored."""
futures = []
with ProcessPool(max_workers=1, max_tasks=2, context=mp_context) as pool:
for _ in range(0, 4):
futures.append(pool.schedule(pid_function))
self.assertEqual(len(set([f.result() for f in futures])), 2)
def test_process_pool_stop_timeout(self):
"""Process Pool Spawn workers are stopped if future timeout."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future1 = pool.schedule(pid_function)
pool.schedule(long_function, timeout=0.1)
future2 = pool.schedule(pid_function)
self.assertNotEqual(future1.result(), future2.result())
def test_process_pool_stop_cancel(self):
"""Process Pool Spawn workers are stopped if future is cancelled."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future1 = pool.schedule(pid_function)
cancel_future = pool.schedule(long_function)
time.sleep(0.1) # let the process pick up the task
cancel_future.cancel()
future2 = pool.schedule(pid_function)
self.assertNotEqual(future1.result(), future2.result())
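# In both stop tests above, the worker serving the timed-out or cancelled
# task is terminated and replaced, which is why the surrounding tasks
# report different PIDs.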
def test_process_pool_initializer(self):
"""Process Pool Spawn initializer is correctly run."""
with ProcessPool(initializer=initializer, initargs=[1], context=mp_context) as pool:
future = pool.schedule(initializer_function)
self.assertEqual(future.result(), 1)
def test_process_pool_broken_initializer(self):
"""Process Pool Spawn broken initializer is notified."""
with self.assertRaises(RuntimeError):
with ProcessPool(initializer=broken_initializer, context=mp_context) as pool:
pool.active
time.sleep(2)
pool.schedule(function)
def test_process_pool_running(self):
"""Process Pool Spawn is active if a future is scheduled."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
pool.schedule(function, args=[1])
self.assertTrue(pool.active)
def test_process_pool_stopped(self):
"""Process Pool Spawn is not active once stopped."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
pool.schedule(function, args=[1])
self.assertFalse(pool.active)
def test_process_pool_close_futures(self):
"""Process Pool Spawn all futures are performed on close."""
futures = []
pool = ProcessPool(max_workers=1, context=mp_context)
for index in range(10):
futures.append(pool.schedule(function, args=[index]))
pool.close()
pool.join()
for future in futures:
self.assertTrue(future.done())
def test_process_pool_close_stopped(self):
"""Process Pool Spawn is stopped after close."""
pool = ProcessPool(max_workers=1, context=mp_context)
pool.schedule(function, args=[1])
pool.close()
pool.join()
self.assertFalse(pool.active)
def test_process_pool_stop_futures(self):
"""Process Pool Spawn not all futures are performed on stop."""
futures = []
pool = ProcessPool(max_workers=1, context=mp_context)
for index in range(10):
futures.append(pool.schedule(function, args=[index]))
pool.stop()
pool.join()
self.assertTrue(len([f for f in futures if not f.done()]) > 0)
def test_process_pool_stop_stopped(self):
"""Process Pool Spawn is stopped after stop."""
pool = ProcessPool(max_workers=1, context=mp_context)
pool.schedule(function, args=[1])
pool.stop()
pool.join()
self.assertFalse(pool.active)
def test_process_pool_stop_stopped_callback(self):
"""Process Pool Spawn is stopped in callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
def stop_pool_callback(_):
pool.stop()
future = pool.schedule(function, args=[1])
future.add_done_callback(stop_pool_callback)
with self.assertRaises(RuntimeError):
for index in range(30):
time.sleep(0.1)
pool.schedule(long_function, args=[index])
self.assertFalse(pool.active)
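# Once the callback has stopped the pool, any further schedule() call
# raises RuntimeError; the loop above keeps submitting until that happens.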
def test_process_pool_large_data(self):
"""Process Pool Spawn large data is sent on the channel."""
data = "a" * 1098 * 1024 * 100 # 100 Mb
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(
function, args=[data], kwargs={'keyword_argument': ''})
self.assertEqual(data, future.result())
def test_process_pool_stop_large_data(self):
"""Process Pool Spawn stopped if large data is sent on the channel."""
data = "a" * 1098 * 1024 * 100 # 100 Mb
pool = ProcessPool(max_workers=1, context=mp_context)
pool.schedule(function, args=[data])
time.sleep(1)
pool.stop()
pool.join()
self.assertFalse(pool.active)
def test_process_pool_join_workers(self):
"""Process Pool Spawn no worker is running after join."""
pool = ProcessPool(max_workers=4, context=mp_context)
pool.schedule(function, args=[1])
pool.stop()
pool.join()
self.assertEqual(len(pool._pool_manager.worker_manager.workers), 0)
def test_process_pool_join_running(self):
"""Process Pool Spawn RuntimeError is raised if active pool joined."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
pool.schedule(function, args=[1])
self.assertRaises(RuntimeError, pool.join)
def test_process_pool_join_futures_timeout(self):
"""Process Pool Spawn TimeoutError is raised if join on long tasks."""
pool = ProcessPool(max_workers=1, context=mp_context)
for _ in range(2):
pool.schedule(long_function)
pool.close()
self.assertRaises(TimeoutError, pool.join, 0.4)
pool.stop()
pool.join()
def test_process_pool_callback_error(self):
"""Process Pool Spawn does not stop if error in callback."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
future.add_done_callback(self.callback)
# sleep enough to ensure callback is run
time.sleep(0.1)
pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
def test_process_pool_exception_isolated(self):
"""Process Pool Spawn an BaseException does not affect other futures."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(error_function)
try:
future.result()
except BaseException:
pass
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
self.assertEqual(future.result(), 2)
@unittest.skipIf(os.name == 'nt', "Test won't run on Windows.")
def test_process_pool_ignoring_sigterm(self):
"""Process Pool Spawn ignored SIGTERM signal are handled on Unix."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(sigterm_function, timeout=0.2)
with self.assertRaises(TimeoutError):
future.result()
def test_process_pool_expired_worker(self):
"""Process Pool Spawn unexpect death of worker raises ProcessExpired."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(suicide_function)
worker_pid = list(pool._pool_manager.worker_manager.workers)[0]
with self.assertRaises(ProcessExpired) as exc_ctx:
future.result()
self.assertEqual(exc_ctx.exception.exitcode, 1)
self.assertEqual(exc_ctx.exception.pid, worker_pid)
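# The map tests below rely on pool.map() returning a single future whose
# result() is a generator: results come back in submission order, and
# per-element failures are re-raised as the corresponding item is consumed.
# A minimal consumption sketch (kept in a comment so the module stays
# importable; names are illustrative):
#
#     iterator = pool.map(function, elements, timeout=5).result()
#     while True:
#         try:
#             item = next(iterator)
#         except TimeoutError:
#             pass  # this element exceeded the timeout
#         except StopIteration:
#             break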
def test_process_pool_map(self):
"""Process Pool Spawn map simple."""
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_empty(self):
"""Process Pool Spawn map no elements."""
elements = []
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_single(self):
"""Process Pool Spawn map one element."""
elements = [0]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_multi(self):
"""Process Pool Spawn map multiple iterables."""
expected = (2, 4)
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, (1, 2, 3), (1, 2))
generator = future.result()
self.assertEqual(tuple(generator), expected)
def test_process_pool_map_one_chunk(self):
"""Process Pool Spawn map chunksize 1."""
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements, chunksize=1)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_zero_chunk(self):
"""Process Pool Spawn map chunksize 0."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(ValueError):
pool.map(function, [], chunksize=0)
def test_process_pool_map_timeout(self):
"""Process Pool Spawn map with timeout."""
raised = []
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(long_function, elements, timeout=0.1)
generator = future.result()
while True:
try:
next(generator)
except TimeoutError as error:
raised.append(error)
except StopIteration:
break
self.assertTrue(raised)
self.assertTrue(all(isinstance(e, TimeoutError) for e in raised))
def test_process_pool_map_timeout_chunks(self):
"""Process Pool Spawn map timeout is assigned per chunk."""
elements = [0.1]*20
with ProcessPool(max_workers=1, context=mp_context) as pool:
# it takes 1s to process a chunk
future = pool.map(
long_function, elements, chunksize=5, timeout=1.8)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_process_pool_map_error(self):
"""Process Pool Spawn errors do not stop the iteration."""
raised = None
elements = [1, 'a', 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(function, elements)
generator = future.result()
while True:
try:
next(generator)
except TypeError as error:
raised = error
except StopIteration:
break
self.assertTrue(isinstance(raised, TypeError))
def test_process_pool_map_cancel(self):
"""Process Pool Spawn cancel iteration."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(long_function, range(5))
generator = future.result()
self.assertEqual(next(generator), 0)
future.cancel()
for _ in range(4):
with self.assertRaises(CancelledError):
next(generator)
def test_process_pool_map_broken_pool(self):
"""Process Pool Spawn Broken Pool."""
elements = [1, 2, 3]
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.map(long_function, elements, timeout=1)
generator = future.result()
pool._context.status = PoolStatus.ERROR
while True:
try:
next(generator)
except TimeoutError:
self.assertFalse(pool.active)
future.cancel()
break
except StopIteration:
break
def test_process_pool_child_process(self):
"""Process Pool Spawn worker starts process."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(process_function)
self.assertEqual(future.result(), 1)
def test_process_pool_child_pool(self):
"""Process Pool Spawn worker starts multiprocessing.Pool."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(pool_function)
self.assertEqual(future.result(), 1)
def test_process_pool_child_pebble(self):
"""Process Pool Spawn worker starts pebble.ProcessPool."""
with ProcessPool(max_workers=1, context=mp_context) as pool:
future = pool.schedule(pebble_function)
self.assertEqual(future.result(), 1)
@unittest.skipIf(not supported, "Start method is not supported")
class TestAsyncIOProcessPool(unittest.TestCase):
def setUp(self):
self.event = None
self.result = None
self.exception = None
def callback(self, future):
try:
self.result = future.result()
# asyncio.exceptions.CancelledError inherits from BaseException, not Exception
except BaseException as error:
self.exception = error
finally:
self.event.set()
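# Note: loop.run_in_executor() forwards its positional arguments to
# pool.submit(). For pebble's ProcessPool the first forwarded argument is
# the per-task timeout, which is why the calls below pass an explicit None
# (no timeout) or a small float before the real function arguments.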
def test_process_pool_single_future(self):
"""Process Pool Spawn single future."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, function, None, 1)
with ProcessPool(max_workers=1, context=mp_context) as pool:
self.assertEqual(asyncio.run(test(pool)), 1)
def test_process_pool_multiple_futures(self):
"""Process Pool Spawn multiple futures."""
async def test(pool):
futures = []
loop = asyncio.get_running_loop()
for _ in range(5):
futures.append(loop.run_in_executor(pool, function, None, 1))
return await asyncio.wait(futures)
with ProcessPool(max_workers=2, context=mp_context) as pool:
self.assertEqual(sum(r.result()
for r in asyncio.run(test(pool))[0]), 5)
def test_process_pool_callback(self):
"""Process Pool Spawn result is forwarded to the callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, function, None, 1)
future.add_done_callback(self.callback)
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertEqual(self.result, 1)
def test_process_pool_error(self):
"""Process Pool Spawn errors are raised by future get."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, error_function, None)
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(BaseException):
asyncio.run(test(pool))
def test_process_pool_error_returned(self):
"""Process Pool Spawn returned errors are returned by future get."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, return_error_function, None)
with ProcessPool(max_workers=1, context=mp_context) as pool:
self.assertIsInstance(asyncio.run(test(pool)), BaseException)
def test_process_pool_error_callback(self):
"""Process Pool Spawn errors are forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, error_function, None)
future.add_done_callback(self.callback)
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, BaseException))
def test_process_pool_timeout(self):
"""Process Pool Spawn future raises TimeoutError if so."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, long_function, 0.1)
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(asyncio.TimeoutError):
asyncio.run(test(pool))
def test_process_pool_timeout_callback(self):
"""Process Pool Spawn TimeoutError is forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, long_function, 0.1)
future.add_done_callback(self.callback)
await asyncio.sleep(0.1) # let the process pick up the task
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, asyncio.TimeoutError))
def test_process_pool_cancel(self):
"""Process Pool Spawn future raises CancelledError if so."""
async def test(pool):
loop = asyncio.get_running_loop()
future = loop.run_in_executor(pool, long_function, None)
await asyncio.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
return await future
with ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(asyncio.CancelledError):
asyncio.run(test(pool))
def test_process_pool_cancel_callback(self):
"""Process Pool Spawn CancelledError is forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, long_function, None)
future.add_done_callback(self.callback)
await asyncio.sleep(0.1) # let the process pick up the task
self.assertTrue(future.cancel())
await self.event.wait()
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, asyncio.CancelledError))
def test_process_pool_stop_timeout(self):
"""Process Pool Spawn workers are stopped if future timeout."""
async def test(pool):
loop = asyncio.get_running_loop()
future1 = loop.run_in_executor(pool, pid_function, None)
with self.assertRaises(asyncio.TimeoutError):
await loop.run_in_executor(pool, long_function, 0.1)
future2 = loop.run_in_executor(pool, pid_function, None)
self.assertNotEqual(await future1, await future2)
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
def test_process_pool_stop_cancel(self):
"""Process Pool Spawn workers are stopped if future is cancelled."""
async def test(pool):
loop = asyncio.get_running_loop()
future1 = loop.run_in_executor(pool, pid_function, None)
cancel_future = loop.run_in_executor(pool, long_function, None)
await asyncio.sleep(0.1) # let the process pick up the task
self.assertTrue(cancel_future.cancel())
future2 = loop.run_in_executor(pool, pid_function, None)
self.assertNotEqual(await future1, await future2)
with ProcessPool(max_workers=1, context=mp_context) as pool:
asyncio.run(test(pool))
# DEADLOCK TESTS
def broken_worker_process_tasks(_, channel):
"""Process failing in receiving new tasks."""
with channel.mutex.reader:
os._exit(1)
def broken_worker_process_result(_, channel):
"""Process failing in delivering result."""
try:
for _ in pebble.pool.process.worker_get_next_task(channel, 2):
with channel.mutex.writer:
os._exit(1)
except OSError:
os._exit(1)
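# The fixtures below monkey-patch pebble.pool.process.worker_process with
# one of the broken workers above, so a worker dies while holding the
# channel's reader or writer lock. A short pebble.CONSTS.channel_lock_timeout
# lets the pool detect the stuck lock and recover instead of deadlocking.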
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPoolDeadlockOnNewFutures(unittest.TestCase):
def setUp(self):
self.worker_process = pebble.pool.process.worker_process
pebble.pool.process.worker_process = broken_worker_process_tasks
pebble.CONSTS.channel_lock_timeout = 0.1
def tearDown(self):
pebble.pool.process.worker_process = self.worker_process
pebble.CONSTS.channel_lock_timeout = 60
def test_pool_deadlock_stop(self):
"""Process Pool Spawn reading deadlocks are stopping the Pool."""
with self.assertRaises(RuntimeError):
pool = pebble.ProcessPool(max_workers=1, context=mp_context)
for _ in range(10):
pool.schedule(function)
time.sleep(0.2)
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPoolDeadlockOnResult(unittest.TestCase):
def setUp(self):
self.worker_process = pebble.pool.process.worker_process
pebble.pool.process.worker_process = broken_worker_process_result
pebble.CONSTS.channel_lock_timeout = 0.1
def tearDown(self):
pebble.pool.process.worker_process = self.worker_process
pebble.CONSTS.channel_lock_timeout = 60
def test_pool_deadlock(self):
"""Process Pool Spawn no deadlock if writing worker dies locking channel."""
with pebble.ProcessPool(max_workers=1, context=mp_context) as pool:
with self.assertRaises(pebble.ProcessExpired):
pool.schedule(function).result()
@unittest.skipIf(not supported, "Start method is not supported")
class TestProcessPoolDeadlockOnCancelLargeData(unittest.TestCase):
def test_pool_deadlock_stop_cancel(self):
"""Process Pool Spawn is stopped when futures are cancelled on large data."""
data = b'A' * 1024 * 1024 * 100
with pebble.ProcessPool() as pool:
futures = [pool.schedule(function, args=[data]) for _ in range(10)]
concurrent.futures.wait(
futures,
return_when=concurrent.futures.FIRST_COMPLETED
)
for f in futures:
f.cancel()
pool.stop()
pebble-5.1.1/test/test_thread_pool.py
import time
import asyncio
import unittest
import threading
import dataclasses
from pebble import ThreadPool
from concurrent.futures import CancelledError, TimeoutError
from pebble.pool.base_pool import PoolStatus
initarg = 0
def error_callback(future):
raise BaseException("BOOM!")
def initializer(value):
global initarg
initarg = value
def broken_initializer():
raise BaseException("BOOM!")
def function(argument, keyword_argument=0):
"""A docstring."""
return argument + keyword_argument
def initializer_function():
return initarg
def error_function():
raise BaseException("BOOM!")
@dataclasses.dataclass(frozen=True)
class FrozenError(Exception):
pass
def frozen_error_function():
raise FrozenError()
def long_function(value=0):
time.sleep(1)
return value
def tid_function():
time.sleep(0.1)
return threading.current_thread()
class TestThreadPool(unittest.TestCase):
def setUp(self):
global initarg
initarg = 0
self.event = threading.Event()
self.event.clear()
self.results = None
self.exception = None
def callback(self, future):
try:
self.results = future.result()
except BaseException as error:
self.exception = error
finally:
self.event.set()
def test_thread_pool_single_future(self):
"""Thread Pool single future."""
with ThreadPool(max_workers=1) as pool:
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
self.assertEqual(future.result(), 2)
def test_thread_pool_multiple_futures(self):
"""Thread Pool multiple futures."""
futures = []
with ThreadPool(max_workers=1) as pool:
for _ in range(5):
futures.append(pool.schedule(function, args=[1]))
self.assertEqual(sum([t.result() for t in futures]), 5)
def test_thread_pool_callback(self):
"""Thread Pool results are forwarded to the callback."""
with ThreadPool(max_workers=1) as pool:
future = pool.schedule(
function, args=[1], kwargs={'keyword_argument': 1})
future.add_done_callback(self.callback)
self.event.wait()
self.assertEqual(self.results, 2)
def test_thread_pool_error(self):
"""Thread Pool errors are raised by future get."""
with ThreadPool(max_workers=1) as pool:
future = pool.schedule(error_function)
with self.assertRaises(BaseException):
future.result()
def test_thread_pool_error_callback(self):
"""Thread Pool errors are forwarded to callback."""
with ThreadPool(max_workers=1) as pool:
future = pool.schedule(error_function)
future.add_done_callback(self.callback)
self.event.wait()
self.assertTrue(isinstance(self.exception, BaseException))
def test_thread_pool_frozen_error(self):
"""Thread Pool frozen errors are raised by future get."""
with ThreadPool(max_workers=1) as pool:
future = pool.schedule(frozen_error_function)
self.assertRaises(FrozenError, future.result)
def test_thread_pool_cancel_callback(self):
"""Thread Pool FutureCancelled is forwarded to callback."""
with ThreadPool(max_workers=1) as pool:
pool.schedule(long_function)
future = pool.schedule(long_function)
future.add_done_callback(self.callback)
future.cancel()
self.event.wait()
self.assertTrue(isinstance(self.exception, CancelledError))
def test_thread_pool_different_thread(self):
"""Thread Pool multiple futures are handled by different threades."""
futures = []
with ThreadPool(max_workers=2) as pool:
for _ in range(0, 5):
futures.append(pool.schedule(tid_function))
self.assertEqual(len(set([t.result() for t in futures])), 2)
def test_thread_pool_tasks_limit(self):
"""Thread Pool future limit is honored."""
futures = []
with ThreadPool(max_workers=1, max_tasks=2) as pool:
for _ in range(0, 4):
futures.append(pool.schedule(tid_function))
self.assertEqual(len(set([t.result() for t in futures])), 2)
def test_thread_pool_initializer(self):
"""Thread Pool initializer is correctly run."""
with ThreadPool(initializer=initializer, initargs=[1]) as pool:
future = pool.schedule(initializer_function)
self.assertEqual(future.result(), 1)
def test_thread_pool_broken_initializer(self):
"""Thread Pool broken initializer is notified."""
with self.assertRaises(RuntimeError):
with ThreadPool(initializer=broken_initializer) as pool:
pool.active
time.sleep(0.3)
pool.schedule(function)
def test_thread_pool_running(self):
"""Thread Pool is active if a future is scheduled."""
with ThreadPool(max_workers=1) as pool:
pool.schedule(function, args=[1])
self.assertTrue(pool.active)
def test_thread_pool_stopped(self):
"""Thread Pool is not active once stopped."""
with ThreadPool(max_workers=1) as pool:
pool.schedule(function, args=[1])
self.assertFalse(pool.active)
def test_thread_pool_close_futures(self):
"""Thread Pool all futures are performed on close."""
futures = []
pool = ThreadPool(max_workers=1)
for index in range(10):
futures.append(pool.schedule(function, args=[index]))
pool.close()
pool.join()
for future in futures:
self.assertTrue(future.done())
def test_thread_pool_close_stopped(self):
"""Thread Pool is stopped after close."""
pool = ThreadPool(max_workers=1)
pool.schedule(function, args=[1])
pool.close()
pool.join()
self.assertFalse(pool.active)
def test_thread_pool_stop_futures(self):
"""Thread Pool not all futures are performed on stop."""
futures = []
pool = ThreadPool(max_workers=1)
for index in range(10):
futures.append(pool.schedule(long_function, args=[index]))
pool.stop()
pool.join()
self.assertTrue(len([t for t in futures if not t.done()]) > 0)
def test_thread_pool_stop_stopped(self):
"""Thread Pool is stopped after stop."""
pool = ThreadPool(max_workers=1)
pool.schedule(function, args=[1])
pool.stop()
pool.join()
self.assertFalse(pool.active)
def test_thread_pool_stop_stopped_function(self):
"""Thread Pool is stopped in function."""
with ThreadPool(max_workers=1) as pool:
def stop_pool():
pool.stop()
pool.schedule(stop_pool)
self.assertFalse(pool.active)
def test_thread_pool_stop_stopped_callback(self):
"""Thread Pool is stopped in callback."""
with ThreadPool(max_workers=1) as pool:
def stop_pool_callback(_):
pool.stop()
future = pool.schedule(function, args=[1])
future.add_done_callback(stop_pool_callback)
with self.assertRaises(RuntimeError):
for index in range(10):
time.sleep(0.1)
pool.schedule(long_function, args=[index])
self.assertFalse(pool.active)
def test_thread_pool_join_workers(self):
"""Thread Pool no worker is running after join."""
pool = ThreadPool(max_workers=4)
pool.schedule(function, args=[1])
pool.stop()
pool.join()
self.assertEqual(len(pool._pool_manager.workers), 0)
def test_thread_pool_join_running(self):
"""Thread Pool RuntimeError is raised if active pool joined."""
with ThreadPool(max_workers=1) as pool:
pool.schedule(function, args=[1])
self.assertRaises(RuntimeError, pool.join)
def test_thread_pool_join_futures_timeout(self):
"""Thread Pool TimeoutError is raised if join on long futures."""
pool = ThreadPool(max_workers=1)
for _ in range(2):
pool.schedule(long_function)
pool.close()
self.assertRaises(TimeoutError, pool.join, 0.4)
pool.stop()
pool.join()
def test_thread_pool_exception_isolated(self):
"""Thread Pool an BaseException does not affect other futures."""
with ThreadPool(max_workers=1) as pool:
future = pool.schedule(error_function)
try:
future.result()
except BaseException:
pass
future = pool.schedule(function, args=[1],
kwargs={'keyword_argument': 1})
self.assertEqual(future.result(), 2)
def test_thread_pool_map(self):
"""Thread Pool map simple."""
elements = [1, 2, 3]
with ThreadPool(max_workers=1) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_thread_pool_map_empty(self):
"""Thread Pool map no elements."""
elements = []
with ThreadPool(max_workers=1) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_thread_pool_map_single(self):
"""Thread Pool map one element."""
elements = [0]
with ThreadPool(max_workers=1) as pool:
future = pool.map(function, elements)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_thread_pool_map_multi(self):
"""Thread Pool map multiple iterables."""
expected = (2, 4)
with ThreadPool(max_workers=1) as pool:
future = pool.map(function, (1, 2, 3), (1, 2))
generator = future.result()
self.assertEqual(tuple(generator), expected)
def test_thread_pool_map_one_chunk(self):
"""Thread Pool map chunksize 1."""
elements = [1, 2, 3]
with ThreadPool(max_workers=1) as pool:
future = pool.map(function, elements, chunksize=1)
generator = future.result()
self.assertEqual(list(generator), elements)
def test_thread_pool_map_zero_chunk(self):
"""Thread Pool map chunksize 0."""
with ThreadPool(max_workers=1) as pool:
with self.assertRaises(ValueError):
pool.map(function, [], chunksize=0)
def test_thread_pool_map_error(self):
"""Thread Pool errors do not stop the iteration."""
raised = None
elements = [1, 'a', 3]
with ThreadPool(max_workers=1) as pool:
future = pool.map(function, elements)
generator = future.result()
while True:
try:
next(generator)
except TypeError as error:
raised = error
except StopIteration:
break
self.assertTrue(isinstance(raised, TypeError))
def test_thread_pool_map_cancel(self):
"""Thread Pool cancel iteration."""
with ThreadPool(max_workers=1) as pool:
future = pool.map(long_function, range(5))
generator = future.result()
self.assertEqual(next(generator), 0)
future.cancel()
# either gets computed or it gets cancelled
try:
self.assertEqual(next(generator), 1)
except CancelledError:
pass
for _ in range(3):
with self.assertRaises(CancelledError):
next(generator)
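# The broken-pool test below forces the pool's internal status to
# PoolStatus.ERROR, simulating an unrecoverable failure: consuming the map
# generator then raises TimeoutError and the pool reports itself inactive.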
def test_thread_pool_map_broken_pool(self):
"""Thread Pool Fork Broken Pool."""
elements = [1, 2, 3]
with ThreadPool(max_workers=1) as pool:
future = pool.map(long_function, elements, timeout=1)
generator = future.result()
pool._context.status = PoolStatus.ERROR
while True:
try:
next(generator)
except TimeoutError:
self.assertFalse(pool.active)
future.cancel()
break
except StopIteration:
break
class TestAsyncIOThreadPool(unittest.TestCase):
def setUp(self):
global initarg
initarg = 0
self.event = None
self.result = None
self.exception = None
def callback(self, future):
try:
self.result = future.result()
# asyncio.exceptions.CancelledError inherits from BaseException, not Exception
except BaseException as error:
self.exception = error
finally:
self.event.set()
def test_thread_pool_single_future(self):
"""Thread Pool single future."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, function, 1)
with ThreadPool(max_workers=1) as pool:
self.assertEqual(asyncio.run(test(pool)), 1)
def test_thread_pool_multiple_futures(self):
"""Thread Pool multiple futures."""
async def test(pool):
futures = []
loop = asyncio.get_running_loop()
for _ in range(5):
futures.append(loop.run_in_executor(pool, function, 1))
return await asyncio.wait(futures)
with ThreadPool(max_workers=2) as pool:
self.assertEqual(sum(r.result()
for r in asyncio.run(test(pool))[0]), 5)
def test_thread_pool_callback(self):
"""Thread Pool results are forwarded to the callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, function, 1)
future.add_done_callback(self.callback)
await self.event.wait()
with ThreadPool(max_workers=1) as pool:
asyncio.run(test(pool))
self.assertEqual(self.result, 1)
def test_thread_pool_error(self):
"""Thread Pool errors are raised by future get."""
async def test(pool):
loop = asyncio.get_running_loop()
return await loop.run_in_executor(pool, error_function)
with ThreadPool(max_workers=1) as pool:
with self.assertRaises(BaseException):
asyncio.run(test(pool))
def test_thread_pool_error_callback(self):
"""Thread Pool errors are forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, error_function)
future.add_done_callback(self.callback)
await self.event.wait()
with ThreadPool(max_workers=1) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, BaseException))
def test_thread_pool_cancel_callback(self):
"""Thread Pool FutureCancelled is forwarded to callback."""
async def test(pool):
loop = asyncio.get_running_loop()
self.event = asyncio.Event()
self.event.clear()
future = loop.run_in_executor(pool, long_function)
future.add_done_callback(self.callback)
await asyncio.sleep(0.1) # let the thread pick up the task
self.assertTrue(future.cancel())
await self.event.wait()
with ThreadPool(max_workers=1) as pool:
asyncio.run(test(pool))
self.assertTrue(isinstance(self.exception, asyncio.CancelledError))