Installation
============

Debian Jessie and Ubuntu 14.04
------------------------------

You can install :term:`Plainbox` straight from the archive:

.. code-block:: bash

    $ sudo apt-get install plainbox


Ubuntu (Development PPA)
------------------------

Plainbox can be installed from a :abbr:`PPA (Personal Package Archive)` on
Ubuntu Precise (12.04) or newer.

.. code-block:: bash

    $ sudo add-apt-repository ppa:checkbox-dev/ppa && sudo apt-get update && sudo apt-get install plainbox


From the Python Package Index
-----------------------------

Plainbox can be installed from :abbr:`PyPI (the Python Package Index)`. Keep
in mind that you will need the Python 3 version of ``pip``:

.. code-block:: bash

    $ pip3 install plainbox

We recommend using a virtualenv or installing with the ``--user`` option.


From a .snap (for Ubuntu Snappy)
--------------------------------

You can build a local version of plainbox.snap and install it on any snappy
device (it is architecture-independent for now, as it doesn't bundle Python
itself). You will need access to the Checkbox source repository for this.

.. code-block:: bash

    $ bzr branch lp:checkbox
    $ cd checkbox/plainbox/
    $ make

This will give you a new .snap file in the ``dist/`` directory. You can
install that snap on a physical or virtual machine running snappy with the
``snappy-remote`` tool. Note that you will need the latest version of the
tool, which at this time is only available in the snappy PPA. Refer to the
snappy documentation for details.


If you followed the snappy documentation to run an amd64 image in KVM you can
try this snippet to get started. Note that you can pass the ``-snapshot``
option to kvm to make all disk changes temporary. This lets you make
destructive changes inside the image without having to re-create the original
image each time.

.. code-block:: bash

    $ wget http://releases.ubuntu.com/15.04/ubuntu-15.04-snappy-amd64-generic.img.xz
    $ unxz ubuntu-15.04-snappy-amd64-generic.img.xz
    $ kvm -snapshot -m 512 -redir :8090::80 -redir :8022::22 ubuntu-15.04-snappy-amd64-generic.img
    $ snappy-remote --url=ssh://localhost:8022 install plainbox_0.22.dev0_all.snap

The password for the ``ubuntu`` user is ``ubuntu``. After installing you can
log in (or use the KVM window) and invoke the ``plainbox.plainbox`` executable
directly.


===================
plainbox-device (1)
===================
.. argparse::
    :ref: plainbox.impl.box.get_parser_for_sphinx
    :prog: plainbox
    :path: device
    :manpage:
    :nodefault:

====================
plainbox-session (1)
====================
.. argparse::
    :ref: plainbox.impl.box.get_parser_for_sphinx
    :prog: plainbox
    :manpage:
    :path: session
    :nodefault:


See Also
========

:doc:`plainbox-session-list`, :doc:`plainbox-session-remove`,
:doc:`plainbox-session-show`, :doc:`plainbox-session-archive`,
:doc:`plainbox-session-export`


==========================
plainbox-startprovider (1)
==========================
.. argparse::
    :ref: plainbox.impl.box.get_parser_for_sphinx
    :prog: plainbox
    :manpage:
    :path: startprovider
    :nodefault:


About naming providers
======================

Plainbox tries to be intuitive where possible, but provider names keep
causing issues and people struggle to come up with correct, meaningful names.
See :doc:`plainbox-provider-names` for a detailed discussion of provider
names.


======================
plainbox-self-test (1)
======================
.. argparse::
    :ref: plainbox.impl.box.get_parser_for_sphinx
    :prog: plainbox
    :manpage:
    :path: self-test
    :nodefault:


=================
CHECKBOX_DATA (7)
=================
Synopsis
========

Legacy name of :doc:`PLAINBOX_SESSION_SHARE`.

Description
===========

This environment variable may be used in scripts embedded in Plainbox job
definitions. It is discouraged and will eventually be deprecated and removed.

See Also
========

:doc:`PLAINBOX_SESSION_SHARE`

===========================
plainbox-category-units (7)
===========================
Synopsis
========

This page documents the Plainbox category unit syntax and runtime behavior.

Description
===========

The category unit is a normalized implementation of a "test category" concept.
Using category units one can define logical groups of tests that deal with some
specific testing area (for example, suspend-resume or USB support).
Job definitions can be associated with at most one category. Categories can
be used by particular applications to facilitate test selection.

Category Fields
---------------

There are two fields that are used by the category unit:

``id``:
    This field defines the partial identifier of the category. It is similar
    to the ``id`` field on job definition units.

    This field is mandatory.

``name``:
    This field defines a human-readable name of the category. It may be used
    in application user interfaces for displaying a group of tests.

    This field is translatable.

    This field is mandatory.


Rationale
=========

The unit is a separate entity so that it can be shipped separately from job
definitions, and so that it can gain a localizable name that can still be
referred to uniquely by any job definition.

In the future it is likely that the unit will be extended with additional
fields, for example to define an icon.

Note
====

Association between job definitions and categories can be overridden by a
particular test plan. Please refer to the test plan unit documentation for
details.

Examples
========

Given the following definition of a category unit::

    unit: category
    id: audio
    _name: Audio tests

And the following definition of a job unit::

    id: audio/speaker-headphone-plug-detection
    category_id: audio
    plugin: manual
    _description: Plug in your headphones and ensure the system detected them

The job definition will be part of the audio category.

======================
plainbox-qml-shell (1)
======================
.. argparse::
    :ref: plainbox.qml_shell.qml_shell.get_parser_for_sphinx
    :prog: plainbox-qml-shell
    :manpage:
    :nodefault:

This command runs a QML job provided by the specified file.

See Also
========

:doc:`plainbox-run`


=====================
plainbox-dev-list (1)
=====================
.. argparse::
    :ref: plainbox.impl.box.get_parser_for_sphinx
    :prog: plainbox
    :manpage:
    :path: dev list
    :nodefault:


See Also
========

:doc:`plainbox-dev`


=========
manage.py
=========
.. argparse::
    :ref: plainbox.provider_manager.get_parser_for_sphinx
    :manpage:
    :nodefault:

This manual page documents the typical aspects of the ``manage.py`` file
initially generated for each Plainbox provider by ``plainbox startprovider``.
It is not to be confused with the ``manage.py`` files used by web
applications written using the Django framework.

Working With Providers
======================
Plainbox is pretty flexible and allows developers and testers alike to work
with providers in several different ways. First of all, providers are
typically packaged into Debian packages. Such packages are installed in
system-wide locations (look at the output of ``./manage.py install --help``).

One particular file that is part of such providers, and that you don't
typically see in the source directory, is a file with the ``.provider``
extension. Plainbox looks for files like that in several places (see the
plainbox(1) discussion of PROVIDERPATH). When working *on* a provider (either
writing a new provider from scratch or extending an existing one) that would
be a quite tedious process to go through. For that you can use the
``manage.py develop`` command to create a ``.provider`` file in your
``$XDG_DATA_HOME/plainbox-providers-1/`` directory. Plainbox will
automatically pick it up and you will be able to run jobs from it directly,
without having to reinstall.

Caveats
=======
The behavior of each management script may be different. Plainbox offers APIs
to extend or override available commands so this man page should be seen as a
spiritual intent rather than concrete behavior.

Building Provider-Specific Executables
======================================

Plainbox assists in building provider-specific executables. Those are
additional architecture-specific binary executables that can be used in job
scripts.

Typically such additional executables are written in C and built with make.
If your provider doesn't require any sophisticated build system then all you
need to do is to create a ``src/`` directory (alongside all the other
provider directories) and create at least the following files inside:

Makefile:
    The makefile that will build your executables. This assumes it is not
    generated (for example, with automake). It should place the resulting
    executables in the *current directory*. It will be invoked from a
    different directory though, with ``make -f /path/to/Makefile``, so be
    aware of that when writing your rules. Fortunately makefiles tend to
    just work so this is not an issue in practice.

EXECUTABLES:
    This file lists all the executables (one per line) that will be built by
    the particular build system. It is used to ensure that Plainbox knows up
    front about executables built from source and to know which files to
    copy.

(sources):
    You obviously need to provide source files for your executables. Just
    add them alongside all the other files in the ``src/`` directory.


Once that is done, you should be able to run ``./manage.py build``. It will
attempt to identify the build system that is being used (it understands C,
Go and autotools, to some extent) and then carry on to build everything as
expected.

The resulting executables will be placed in ``build/bin``. When working in
development mode (via ``manage.py develop``) that will all magically just
work. Plainbox will figure out where each executable is, coping with files
both in ``build/bin`` and in ``bin/`` directories transparently. When
installing (``manage.py install``), either locally or as a part of the
packaging step, that will also just work so you don't have to do anything
else.


Overriding / Extending Commands
===============================

Plainbox offers a decorator that can be used to extend any of the manage.py
subcommands with additional functionality. The general syntax for extending
existing commands is (here illustrated by changes to the ``sdist``
command)::

    from plainbox.provider_manager import SourceDistributionCommand
    from plainbox.provider_manager import manage_py_extension


    @manage_py_extension
    class SourceDistributionCommandExt(SourceDistributionCommand):
        __doc__ = SourceDistributionCommand.__doc__

        def invoked(self, ns):
            super().invoked(ns)
            # Do something else as well


Note that in some cases you need to define the command name to match the
original command name (for example, the install command requires this).
Otherwise Plainbox will derive the command name from the class name, which
may not be what you expected::

    from plainbox.provider_manager import InstallCommand
    from plainbox.provider_manager import manage_py_extension


    @manage_py_extension
    class InstallCommandExt(InstallCommand):
        __doc__ = InstallCommand.__doc__
        name = 'install'


Further Reading
===============

The Checkbox project comes with a number of providers that use various niche
and under-documented features. It's always good to learn from existing
examples. Have a look at the project source directory, go to ``providers/``
and explore each provider there.

===========================
plainbox-session-remove (1)
===========================
.. argparse::
    :ref: plainbox.impl.box.get_parser_for_sphinx
    :prog: plainbox
    :manpage:
    :path: session remove

============================
plainbox-test-plan-units (7)
============================
Synopsis
========

This page documents the Plainbox test plan unit syntax and runtime behavior.

Description
===========

The test plan unit is an evolution of the Plainbox whitelist concept, that is,
a facility that describes a sequence of job definitions that should be executed
together.

As in whitelists, job definitions are *selected* by either listing their
identifier or a regular expression that matches their identifier. Selected
jobs are executed in the sequence they appear in the list, unless they need
to be reordered to satisfy dependencies, which always take priority.

Unlike whitelists, test plans can contain additional meta-data which can be
used in a graphical user interface. You can assign a translatable name and
description to each test plan. This used to be done informally by naming the
``.whitelist`` file appropriately, with some unique filename and including
some #-based comments at the top of the file.

Test plans are also typical units, so they can be defined with the familiar
RFC822-like syntax that is also used for job definitions. There can also be
multiple test plan definitions per file, just like with all the other units,
including job definitions.


Test Plan Fields
----------------

The following fields can be used in a test plan. Note that **not all** fields
need to be used or even should be used. Please remember that Checkbox needs
to maintain backwards compatibility, so some of the test plans it defines may
have non-typical constructs required to ensure proper behavior. You don't
have to copy such constructs when working on a new test plan from scratch.

``id``:
    Each test plan needs to have a unique identifier. This is exactly the
    same as with other units that have an identifier (like job definitions
    and categories).

    This field is not used for display purposes but you may need to refer
    to it on the command line, so keeping it descriptive is useful.

``name``:
    A human-readable name of the test plan. The name should be relatively
    short as it may be used to display a list of test plans to the test
    operator. Remember that the user or the test operator may not always be
    familiar with the scope of testing that you are focusing on. Also
    consider that multiple test providers may be installed at the same
    time. The translated version of the name (and icon, see below) is the
    only thing that needs to allow the test operator to pick the right test
    plan.

    Please use short and concrete names like:

    - "Storage Device Certification Tests"
    - "Ubuntu Core Application's Clock Acceptance Tests"
    - "Default Ubuntu Hardware Certification Tests"

    The field has a soft limit of eighty characters. It cannot have
    multiple lines. This field should be marked as translatable by
    prepending the underscore character (\_) in front. This field is
    mandatory.

``description``:
    A human-readable description of this test plan. Here you can include as
    many or as few details as you'd like. Some applications may offer a way
    of viewing this data. In general it is recommended to include a
    description of what is being tested so that users can make an informed
    decision, but please keep in mind that the ``name`` field alone must be
    sufficient to discriminate between distinct test plans, so you don't
    have to duplicate that information in the description.

    If your tests require any special set-up (procuring external hardware,
    setting some devices or software in a special test mode) it is
    recommended to include this information here.

    The field has no size limit. It can contain newline characters. This
    field should be marked as translatable by prepending the underscore
    character (\_) in front. This field is optional.

``include``:
    A multi-line list of job identifiers, or patterns matching such
    identifiers, that should be included for execution.

    This is the most important field in any test plan. It decides which job
    definitions are selected by (included in) the test plan. Separate
    entries need to be placed on separate lines. White space does not
    separate entries, as the id field may (sic!) actually include spaces.

    You have two options for selecting tests:

    - You can simply list the identifier (either partial or fully
      qualified) of the job you want to include in the test plan directly.
      This is very common and most test plans used by Checkbox actually
      look like that.
    - You can use regular expressions to select many tests at the same
      time. This is the only way to select generated jobs (created either
      by template units or by job definitions using the legacy 'local'
      plugin type). Please remember that the dot character has a special
      meaning, so unless you actually want to match *any character*, escape
      the dot with the backslash character (\\).

    Regardless of whether you use patterns or literal job identifiers, you
    can use the fully qualified name (the one that includes the namespace
    the job resides in) or an abbreviated form. The abbreviated form is
    applicable for job definitions that reside in the same namespace (but
    not necessarily the same provider) as the provider that is defining the
    test plan.

    Plainbox will catch incorrect references to unknown jobs, so you should
    be relatively safe. Have a look at the examples section below to see
    how you can refer to jobs from other providers (you simply use their
    fully qualified name for that).

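As an illustration of the selection rules described above, here is a small Python sketch (hypothetical, not the actual Plainbox implementation) that resolves an ``include`` list against a pool of known job identifiers, treating each entry as a literal identifier first and as a regular expression otherwise:

```python
import re


def select_jobs(include_entries, job_ids):
    """Resolve ``include`` entries against known job identifiers.

    Each entry selects either the literal id or, failing that, every id
    fully matched by the entry treated as a regular expression.  Real test
    plan selection also handles namespaces, ordering and dependencies.
    """
    selected = []
    for entry in include_entries:
        if entry in job_ids:
            matches = [entry]
        else:
            pattern = re.compile(entry)
            matches = [job for job in job_ids if pattern.fullmatch(job)]
        for job in matches:
            if job not in selected:  # keep first-seen order, no duplicates
                selected.append(job)
    return selected


jobs = ["usb/insert", "usb/remove", "disk/read", "disk/write"]
picked = select_jobs(["disk/read", r"usb/.*"], jobs)
# picked == ["disk/read", "usb/insert", "usb/remove"]
```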
``mandatory_include``:
    A multi-line list of job identifiers, or patterns matching such
    identifiers, that should always be executed.

    This optional field can be used to specify the jobs that should always
    run. This is particularly useful for specifying jobs that gather vital
    information about the tested system, as it makes it impossible to
    generate a report with no information about the system under test.

    For example, session results meant to be sent to the Ubuntu
    certification website must include the special job:
    miscellanea/submission-resources

    Example::

        mandatory_include:
            miscellanea/submission-resources

    Note that mandatory jobs will always be run first (along with their
    dependent jobs).

``bootstrap_include``:
    A multi-line list of job identifiers that should be run first, before
    the main body of testing begins. The jobs that should be included in
    the bootstrapping section are the ones generating, or helping to
    generate, other jobs.

    Example::

        bootstrap_include:
            graphics/generator_driver_version

    Note that each entry in the bootstrap_include section must be a valid
    job identifier and cannot be a regular expression pattern. Also note
    that only local and resource jobs are allowed in this section.

``exclude``:
    A multi-line list of job identifiers, or patterns matching such
    identifiers, that should be excluded from execution.

    This optional field can be used to prevent some jobs from being
    selected for execution. It follows the similarly-named ``-x`` command
    line option to the ``plainbox run`` command.

    This field may be used when a general (broad) selection is made by the
    ``include`` field and it must be trimmed down (for example, to prevent
    a specific dangerous job from running). It has the same syntax as
    ``include``.

    When a job is both included and excluded, exclusion always takes
    priority.

``category-overrides``:
    A multi-line list of category override statements.

    This optional field can be used to alter the natural job definition
    category association. Currently Plainbox allows each job definition to
    associate itself with at most one category (see
    plainbox-category-units(7) and plainbox-job-units(7) for details). This
    is sub-optimal as some tests can be assigned equally well to two
    categories at the same time.

    For that reason, it may be necessary, in a particular test plan, to
    override the natural category association with one that more correctly
    reflects the purpose of a specific job definition in the context of a
    specific test plan.

    For example, let's consider a job definition that tests if a specific
    piece of hardware works correctly after a suspend-resume cycle. Let's
    assume that the job definition has a natural association with the
    category describing such hardware devices. In one test plan, this test
    will be associated with the hardware-specific category (using the
    natural association). In a special suspend-resume test plan the same
    job definition can be associated with a special suspend-resume
    category.

    The actual rules as to when to use category overrides and how to
    assign a natural category to a specific test are not documented here.
    We believe that each project should come up with a workflow and
    semantics that best match its users.

    The syntax of this field is a list of statements defined on separate
    lines. Each override statement has the following form::

        apply CATEGORY-IDENTIFIER to JOB-DEFINITION-PATTERN

    Both ``apply`` and ``to`` are literal strings. CATEGORY-IDENTIFIER is
    the identifier of a category unit. The JOB-DEFINITION-PATTERN has the
    same syntax as the ``include`` field. That is, it can be either a
    simple string or a regular expression that is compared to the
    identifiers of all the known job definitions. The pattern can be either
    partially or fully qualified. That is, it may or may not include the
    namespace component of the job definition identifier.

    Overrides are applied in order and the last applied override is the
    effective override in a given test plan. For example, given the
    following two overrides::

        apply cat-1 to .*
        apply cat-2 to foo

    The job definition with the partial identifier ``foo`` will be
    associated with the ``cat-2`` category.

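The override semantics can be sketched in a few lines of Python (an illustration only, not the code Plainbox actually uses):

```python
import re


def effective_category(job_id, natural_category, overrides):
    """Apply 'apply CATEGORY to PATTERN' statements in order.

    The last statement whose pattern matches the job wins; jobs matched by
    no statement keep their natural category association.
    """
    category = natural_category
    for statement in overrides:
        keyword_apply, cat, keyword_to, pattern = statement.split(None, 3)
        assert keyword_apply == "apply" and keyword_to == "to"
        if re.fullmatch(pattern, job_id):
            category = cat
    return category


overrides = ["apply cat-1 to .*", "apply cat-2 to foo"]
print(effective_category("foo", "audio", overrides))  # cat-2 (last match wins)
print(effective_category("bar", "audio", overrides))  # cat-1
```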

.. _testplan_estimated_duration:

``estimated_duration``:
    An approximate time to execute this test plan, in seconds.

    Since plainbox version 0.24 this field can be expressed in two formats.
    The old format, a floating point number of seconds, is somewhat
    difficult to read for larger values. To avoid mistakes, test designers
    can use the second format, with separate sections for the number of
    hours, minutes and seconds. The format, as a regular expression, is
    ``(\d+h)?[: ]*(\d+m)?[: ]*(\d+s)?``. That is: an optional number of
    hours followed by the ``h`` character, followed by any number of spaces
    or ``:`` characters, followed by an optional number of minutes followed
    by the ``m`` character, again followed by any number of spaces or
    ``:`` characters, followed by an optional number of seconds followed by
    the ``s`` character.

    The values can no longer be fractional (you cannot say ``2.5m``, you
    need to say ``2m 30s``). We feel that sub-second granularity is too
    unpredictable to be useful, so it will not be supported in the future.

    This field is optional. If it is missing it is automatically computed
    from the identical field that may be specified on particular job
    definitions. Since it is sometimes easier to think in terms of test
    plans (they are typically executed more often than a specific job
    definition) this estimate may be more accurate, as it doesn't include
    the accumulated sum of mis-estimates from all of the job definitions
    selected by a particular test plan.

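To illustrate the two accepted formats, here is a hypothetical Python parser for the documented grammar, treating each of the hours, minutes and seconds sections as optional (an illustration only, not the parser Plainbox actually uses):

```python
import re


def parse_duration(text):
    """Parse an estimated_duration value into a number of seconds.

    Accepts either the old floating-point format ('330.0') or the
    hours/minutes/seconds format ('1h 30m 10s', '2m 30s', '1h:30m:10s').
    """
    match = re.fullmatch(r"(?:(\d+)h)?[: ]*(?:(\d+)m)?[: ]*(?:(\d+)s)?", text)
    if match and any(match.groups()):
        hours, minutes, seconds = (int(group or 0) for group in match.groups())
        return hours * 3600 + minutes * 60 + seconds
    # Fall back to the old plain-number-of-seconds format.
    return float(text)


print(parse_duration("1h 30m 10s"))  # 5410
print(parse_duration("2m 30s"))      # 150
```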

Migrating From Whitelists
-------------------------

Migrating from whitelists is optional but strongly recommended. Whitelists
are discouraged but neither deprecated nor unsupported. As we progress with
the transition we are likely to fully deprecate, and subsequently remove,
the classic form of whitelists (as typically found in many ``*.whitelist``
files).

The first thing you need to do is to create a file that will hold your test
plans. You should put that file in the ``units/`` directory of your
provider. Note that a file that holds a test plan may also hold any other
units. The decision on how to structure your provider is up to you and the
particular constraints and recommended practices of the project you are
participating in.

Having selected an appropriate file, simply copy your old whitelist (just
one) and paste it into the *template* below::

    unit: test plan
    id: << DERIVE A PROPER IDENTIFIER FROM THE NAME OF THE WHITELIST FILE >>
    _name: << COME UP WITH A PROPER NAME OF THIS TEST PLAN >>
    _description:
        << COME UP WITH A PROPER DESCRIPTION OF THIS TEST PLAN >>
    include:
        << PASTE THE FULL TEXT OF YOUR OLD WHITELIST >>

Note that you may also add the ``estimated_duration`` field but this is not
required. Sometimes it is easier to provide a rough estimate of a whole test
plan rather than having to compute it from all the job definitions it
selects.


Examples
--------

A simple test plan that selects several jobs::

    id: foo-bar-and-froz
    _name: Tests Foo, Bar and Froz
    _description:
        This example test plan selects the following three jobs:
            - Foo
            - Bar
            - Froz
    include:
        foo
        bar
        froz

A test plan that uses jobs from another provider's namespace in addition
to some of its own definitions::

    id: extended-tests
    _name: Extended Storage Tests (By Corp Inc.)
    _description:
        This test plan runs an extended set of storage tests, customized
        by the Corp Inc. corporation. In addition to the standard Ubuntu
        set of storage tests, this test plan includes the following tests:
            - Multipath I/O Tests
            - Degraded Array Recovery Tests
    include:
        2013.com.canonical.certification:disk/.*
        multipath-io
        degrade-array-recovery

A test plan that generates jobs using the bootstrap_include section::

    unit: test plan
    id: test-plan-with-bootstrapping
    _name: Tests with a bootstrapping stage
    _description:
        This test plan uses the bootstrap_include field to generate
        additional jobs depending on the output of the generator job.
    include: .*
    bootstrap_include:
        generator

    unit: job
    id: generator
    plugin: resource
    _description: Job that generates Foo and Bar resources
    command:
        echo "my_resource: Foo"
        echo
        echo "my_resource: Bar"

    unit: template
    template-unit: job
    template-resource: generator
    plugin: shell
    estimated_duration: 1
    id: generated_job_{my_resource}
    command: echo {my_resource}
    _description: Job instantiated from a template that echoes {my_resource}

A test plan that marks some jobs as mandatory::

    unit: test plan
    id: test-plan-with-mandatory-jobs
    _name: Test plan with mandatory jobs
    _description:
        This test plan runs some jobs regardless of user selection.
    include:
        Foo
    mandatory_include:
        Bar

    unit: job
    id: Foo
    _name: Foo job
    _description: Job that might be deselected by the user
    plugin: shell
    command: echo Foo job

    unit: job
    id: Bar
    _name: Bar job (mandatory)
    _description: Job that should *always* run
    plugin: shell
    command: echo Bar job


=======================
plainbox-dev-script (1)
=======================
.. argparse::
    :ref: plainbox.impl.box.get_parser_for_sphinx
    :prog: plainbox
    :manpage:
    :path: dev script
    :nodefault:


See Also
========

:doc:`plainbox-dev`


=========================
plainbox-session-show (1)
=========================
.. argparse::
    :ref: plainbox.impl.box.get_parser_for_sphinx
    :prog: plainbox
    :manpage:
    :path: session show

======================
plainbox-job-units (7)
======================
Synopsis
========

This page documents the syntax of Plainbox job units.

Description
===========

A job unit is the smallest unit of testing that can be performed by either
Checkbox or Plainbox. All jobs have a unique name. There are many types of
jobs: some are fully automated, others are fully manual. Some jobs are only
an implementation detail and a part of the internal architecture of
Checkbox.


File format and location
------------------------

Jobs are expressed as sections in text files that conform somewhat to the
``rfc822`` specification format. Our variant of the format is described in
rfc822. Each record defines a single job.

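For illustration, a minimal reader for this record syntax could look like the following Python sketch (the real Plainbox parser is more featureful, e.g. it tracks origin information for error reporting):

```python
def parse_records(text):
    """Parse RFC822-style records: 'key: value' lines, multi-line values
    continued on indented lines, records separated by blank lines."""
    records, current, last_key = [], {}, None
    for line in text.splitlines():
        if not line.strip():
            if current:  # blank line ends the current record
                records.append(current)
                current, last_key = {}, None
        elif line[0] in " \t" and last_key is not None:
            # Indented line continues the previous field's value
            current[last_key] += "\n" + line.strip()
        else:
            key, _, value = line.partition(":")
            last_key = key.strip()
            current[last_key] = value.strip()
    if current:
        records.append(current)
    return records


parsed = parse_records(
    "id: audio/playback\n"
    "plugin: manual\n"
    "_description:\n"
    " Play a sound file and\n"
    " verify it is audible.\n"
    "\n"
    "id: cpu/stress\n"
    "plugin: shell\n"
    "command: stress --cpu 4\n"
)
```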

Job Fields
----------

The following fields may be used by the job unit:

``id``:
    (mandatory) - A name for the job. It should be unique; an error will be
    generated if there are duplicates. It should contain only characters
    from the set [a-z0-9/-].

    This field used to be called ``name``. That name is now deprecated. For
    backwards compatibility it is still recognized and used if ``id`` is
    missing.

``summary``:
    (mandatory) - A human-readable name for the job. This value is
    available for translation into other languages. It is used when listing
    jobs. It must be one line long; ideally it should be short (50-70
    characters at most).

``plugin``:
    (mandatory) - For historical reasons it's called "plugin" but it's
    better thought of as describing the "type" of job. The allowed types
    are:

    :manual: jobs that require the user to perform an action and then
        decide on the test's outcome.
    :shell: jobs that run without user intervention and automatically set
        the test's outcome.
    :user-interact: jobs that require the user to perform an interaction,
        after which the outcome is automatically set.
    :user-interact-verify: jobs that require the user to perform an
        interaction and run a command, after which the user is asked to
        decide on the test's outcome. This is essentially a manual job
        with a command.
    :attachment: jobs whose command output will be attached to the test
        report or submission.
    :local: jobs whose command output needs to be in the Checkbox job
        format. Jobs output by a local job will be added to the set of
        available jobs to be run.
    :resource: jobs whose command output results in a set of rfc822
        records, containing key/value pairs, that can be used in other
        jobs' ``requires`` expressions.
    :qml: jobs that run a custom QML payload within a test shell (a QML
        application or a generic, minimalistic QML test shell) using the
        test API described in CEP-5.

    .. warning::

        The following plugin names are deprecated:

        :user-verify: jobs that automatically perform an action or test
            and then request the user to decide on the test's outcome.
            This was deprecated because the user had no chance to read
            the instructions prior to the test. Use user-interact-verify
            instead; that will present instructions, ask the user to
            click a button before running the command, and finally prompt
            for the outcome assessment.

``requires``:
(optional). If specified, the job will only run if the conditions
expressed in this field are met.
Conditions are of the form ``.
'value' (and|or) ...`` . Comparison operators can be ==, != and ``in``.
Values to compare to can be scalars or (in the case of the ``in``
operator) arrays or tuples. The ``not in`` operator is explicitly
unsupported.
Requirements can be logically chained with the ``or`` and
``and`` operators. They can also be placed on multiple lines,
respecting the rfc822 multi-line syntax, in which case all
requirements must be met for the job to run (they are ``and``-ed together).
The Plainbox resource program evaluator is extensively documented; for a
detailed description, including the rationale and the implementation of
Checkbox "legacy" compatibility, see
http://plainbox.readthedocs.org/en/latest/dev/resources.html#resources-in-plainbox
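As an illustration, a hypothetical job that only runs when a resource expression holds (the job identifier, resource job and attribute values are made up for this sketch):

```ini
id: example/wireless-scan
plugin: shell
requires: device.category == 'WIRELESS'
command: iw dev wlan0 scan
```

If no ``device`` resource record has ``category`` equal to ``WIRELESS``, the job is inhibited from running.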
``depends``:
(optional). If specified, the job will only run if all the listed
jobs have run and passed. Multiple job names, separated by spaces,
can be specified.
``after``:
(optional). If specified, the job will only run if all the listed jobs have
run (regardless of the outcome). Multiple job names, separated by spaces,
can be specified.
This feature is available since plainbox 0.24.
``command``:
(optional). A command can be provided, to be executed under specific
circumstances. For ``manual``, ``user-interact`` and ``user-verify``
jobs, the command will be executed when the user presses a "test"
button present in the user interface. For ``shell`` jobs, the
command will be executed unconditionally as soon as the job is
started. In both cases the exit code from the command (0 for
success, !0 for failure) will be used to set the test's outcome. For
``manual``, ``user-interact`` and ``user-verify`` jobs, the user can
override the command's outcome. The command will be run using the
default system shell. If a specific shell is needed it should be
instantiated in the command. A multi-line command or shell script
can be used with the usual multi-line syntax.
Note that a ``shell`` job without a command will do nothing.
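A multi-line command follows the usual rfc822 continuation syntax. A hypothetical example (identifier and script invented for illustration):

```ini
id: example/multi-line-command
plugin: shell
command:
 set -e
 echo "collecting kernel version"
 uname -r
```

The whole indented block is passed to the default system shell as one script.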
``description``:
(mandatory). Provides a textual description for the job. This is
mostly to aid people reading job descriptions in figuring out what a
job does.
The description field, however, is used specially in ``manual``,
``user-interact`` and ``user-verify`` jobs. For these jobs, the
description will be shown in the user interface, and in these cases
it's expected to contain instructions for the user to follow, as
well as the criteria for them to decide whether the job passes or fails.
For these types of jobs, the description needs to contain a few
sub-fields, in order:
:PURPOSE: This indicates the purpose or intent of the test.
:STEPS: A numbered list of steps for the user to follow.
:INFO:
(optional). Additional information about the test. This is
commonly used to present command output for the user to validate.
For this purpose, the ``$output`` substitution variable can be used
(actually, it can be used anywhere in the description). If present,
it will be replaced by the standard output generated from running
the job's command (commonly when the user presses the "Test"
button).
:VERIFICATION:
A question for the user to answer, deciding whether the test
passes or fails. The question should be phrased in such a way
that an answer of **Yes** means the test passed, and an answer of
**No** means it failed.
.. warning::
    Since version 0.17, the purpose, steps and verification fields should
    be used instead of description.
``Example:``

old-way::

    _description:
     PURPOSE:
         This test will check that internal speakers work correctly
     STEPS:
         1. Make sure that no external speakers or headphones are connected
            When testing a desktop, you can skip this test if there is no
            internal speaker, we will test the external output later
         2. Click the Test button to play a brief tone on your audio device
     VERIFICATION:
         Did you hear a tone?

new-way::

    _purpose:
     This test will check that internal speakers work correctly
    _steps:
     1. Make sure that no external speakers or headphones are connected
        When testing a desktop, you can skip this test if there is no
        internal speaker, we will test the external output later
     2. Click the Test button to play a brief tone on your audio device
    _verification:
     Did you hear a tone?
Note that if client code references the description field, plainbox will
combine the purpose, steps and verification fields into one and use the
result.
``purpose``:
(optional). The purpose field is used in tests requiring human interaction
to describe what a given test is supposed to do. User interfaces should
display the content of this field prior to test execution. This field may
be omitted if the summary field is supplied.
Note that this field is applicable only for human interaction jobs.
``steps``:
(optional). The steps field describes the actions that the user should
perform as part of the job execution. User interfaces should display the
content of this field upon starting the test.
Note that this field is applicable only for jobs requiring the user to
perform some actions.
``verification``:
(optional). The verification field is used to inform the user how they can
decide on a given job's outcome.
Note that this field is applicable only for jobs the result of which is
determined by the user.
``user``:
(optional). If specified, the job will be run as the user specified
here. This is most commonly used to run jobs as the superuser
(root).
``environ``:
(optional). If specified, the listed environment variables
(separated by spaces) will be taken from the invoking environment
(i.e. the one Checkbox is run under) and set to that value on the
job execution environment (i.e. the one the job will run under).
Note that only the *variable names* should be listed, not the
*values*, which will be taken from the existing environment. This
only makes sense for jobs that also have the ``user`` attribute.
This key provides a mechanism to account for security policies in
``sudo`` and ``pkexec``, which provide a sanitized execution
environment, with the downside that useful configuration specified
in environment variables may be lost in the process.
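For example, a hypothetical job that runs as root but still needs the invoking user's proxy settings might look like this (identifier and command are invented for illustration):

```ini
id: example/proxy-reachability
plugin: shell
user: root
environ: http_proxy https_proxy
command: wget -O /dev/null http://example.com
```

Only the variable names are listed; their values are copied from the environment Checkbox itself was started in.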
.. _job_estimated_duration:
``estimated_duration``:
(optional) This field contains metadata about how long the job is
expected to run for, as a positive float value indicating
the estimated job duration in seconds.
Since plainbox version 0.24 this field can be expressed in two formats. The
old format, a floating point number of seconds, is somewhat difficult to
read for larger values. To avoid mistakes, test designers can use the second
format, with separate sections for the number of hours, minutes and seconds.
The format, as a regular expression, is ``(\d+h)?[: ]*(\d+m?)[: ]*(\d+s)?``.
The regular expression expresses an optional number of hours, followed by the
``h`` character, followed by any number of spaces or ``:`` characters,
followed by an optional number of minutes, followed by the ``m`` character,
again followed by any number of spaces or ``:`` characters, followed by the
number of seconds, ultimately followed by the ``s`` character.
The values can no longer be fractional (you cannot say ``2.5m``; you need to
say ``2m 30s``). We feel that sub-second granularity is too unpredictable to
be useful, so it will not be supported.
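To illustrate the second format, here is a small Python sketch that converts such strings into seconds. This is not plainbox's own implementation, just an illustrative parser for the ``Nh Nm Ns`` style described above:

```python
import re

# Illustrative parser for the "1h 2m 30s" style duration format;
# a sketch, not the actual plainbox code.
_DURATION_RE = re.compile(r'^(?:(\d+)h)?[: ]*(?:(\d+)m)?[: ]*(?:(\d+)s)?$')


def parse_duration(text):
    """Convert a duration like '1h 2m 30s' into a number of seconds."""
    match = _DURATION_RE.match(text.strip())
    if match is None or not any(match.groups()):
        raise ValueError('not a valid duration: {!r}'.format(text))
    hours, minutes, seconds = (int(group or 0) for group in match.groups())
    return hours * 3600 + minutes * 60 + seconds
```

With this sketch, ``parse_duration('1h 2m 30s')`` yields 3750 and ``parse_duration('2m 30s')`` yields 150.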
``flags``:
(optional) This field contains a list of flags, separated by spaces or
commas, that may cause plainbox to run the job in a particular way.
Currently, the following flags are inspected by plainbox:
``preserve-locale``:
This flag makes plainbox carry the locale settings over to the job's
command. If this flag is not set, plainbox will neuter the locale settings.
Attach this flag to all job definitions with commands that use translations.
``win32``:
This flag makes plainbox run jobs' commands in a Windows-specific manner.
Attach this flag to jobs that are run on the Windows OS.
``noreturn``:
This flag makes plainbox suspend execution after the job's command is run.
This prevents a scenario where plainbox continues to operate (writing
session data to disk and so on) while another process kills it (leaving the
plainbox session in an unwanted/undefined state).
Attach this flag to jobs that kill the plainbox process during their
operation, e.g. jobs that run shutdown, reboot, etc.
.. _job_flag_explicit_fail:
``explicit-fail``:
Use this flag to make entering a comment mandatory when the user
manually fails the job.
.. _job_flag_has_leftovers:
``has-leftovers``:
This flag makes plainbox silently ignore (and not log) any files left
over by the execution of the command associated with a job. This flag
is useful for jobs that don't bother with maintenance of temporary
directories and just want to rely on the one already created by
plainbox.
.. _job_flag_simple:
``simple``:
This flag makes plainbox disable certain validation advice and use
some sensible defaults for automated test cases. This simplification
is meant to cut the boilerplate on jobs that are closer to unit tests
than to elaborate manual interactions.
In practice the following changes are in effect when this flag is set:
- the *plugin* field defaults to *shell*
- the *description* field is entirely optional
- the *estimated_duration* field is entirely optional
- the *preserve-locale* flag is entirely optional
A minimal job using the simple flag looks as follows::
id: foo
command: echo "Jobs are simple!"
flags: simple
Additional flags may be present in a job definition; they are ignored.
``imports``:
(optional) This field lists all the resource jobs that will have to be
imported from other namespaces. This enables jobs to use resources from
other namespaces.
You can use the "as ..." syntax to import jobs that have dashes, slashes or
other characters that would make them invalid as identifiers and give them
a correct identifier name. E.g.::
imports: from 2013.com.canonical.certification import cpuinfo
requires: 'armhf' in cpuinfo.platform
imports: from 2013.com.canonical.certification import cpu-01-info as cpu01
requires: 'avx2' in cpu01.other
The syntax of each imports line is::
IMPORT_STMT ::  "from" <NAMESPACE> "import" <PARTIAL_ID>
              | "from" <NAMESPACE> "import" <PARTIAL_ID> AS <IDENTIFIER>
===========================
Extension of the job format
===========================
The Checkbox job format can be considered "extensible", in that
additional keys can be added to existing jobs to contain additional
data that may be needed.
In order for these extra fields to be exposed through the API (i.e. as
properties of JobDefinition instances), they need to be declared as
properties in (`plainbox.impl.job`). This is a good place to document,
via a docstring, what the field is for and how to interpret it.
Implementation note: if additional fields are added, Checkbox also needs
to be told about them. The reason is that Checkbox *does* perform
validation of job descriptions, ensuring that they contain only known
fields and that the fields contain the expected data types. The jobs_info
plugin contains the job schema declaration and can be consulted to verify
the known fields, whether they are optional or mandatory, and the type of
data they're expected to contain.
Also, Checkbox validates that fields contain data of a specific type,
so care must be taken not to simply change contents of fields if
Checkbox compatibility of jobs is desired.
Plainbox does this validation on a per-accessor basis, so data in each
field must make sense as defined by that field's accessor. There is no need,
however, to declare field type beforehand.
==============================
plainbox-session-structure (7)
==============================
Synopsis
========
This page documents the structure of the Plainbox per-session directory.
Description
===========
Each session is represented by a directory. Typically all sessions are stored
in the ``$XDG_CACHE_HOME/plainbox/sessions/`` directory. Each directory there
is a randomly-named session comprised of the following files and directories.
session:
A file with the serialized state of the session. Currently it is a JSON
document compressed with the gzip compression scheme. You can preview the
contents of this file with ``zcat session | json_pp``, where ``zcat`` (1)
and ``json_pp`` (1) are external system utilities.
The session file stores the *state* of the session. State is represented by
several structures which are further documented in
:doc:`plainbox-session-state`. This file is essential for resuming a
session but is also useful for debugging.
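Since the format is just gzip-compressed JSON, it can also be inspected programmatically. A small Python sketch follows; the file path and contents here are stand-ins created for the demonstration, not a real session:

```python
import gzip
import json
import tempfile


def load_session_state(path):
    """Load a gzip-compressed JSON session state file."""
    with gzip.open(path, 'rt', encoding='utf-8') as stream:
        return json.load(stream)


# Create a tiny stand-in file to demonstrate; a real session file lives at
# $XDG_CACHE_HOME/plainbox/sessions/<session>/session.
with tempfile.NamedTemporaryFile(suffix='.session', delete=False) as f:
    path = f.name
with gzip.open(path, 'wt', encoding='utf-8') as stream:
    json.dump({'flags': ['incomplete']}, stream)

state = load_session_state(path)
```

The loaded ``state`` dictionary mirrors the structures documented in :doc:`plainbox-session-state`.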
io-logs:
A directory with files representing input-output operations performed by
particular jobs. There are three files for each job. One for Plainbox
itself and two more for human-readable debugging. The files are:
\*.record.gz:
A file internal to Plainbox, containing representation of all of the
input-output operations performed by the specified job definition's
command process.
The format for this file is a gzip-compressed sequence of records,
represented as separate lines, terminated with the newline character.
Each record is a small JSON list of exactly three elements. The first
element is a JSON number representing the delay since the previous record
was generated (or, for the first record, the delay before process
startup). The second element is the name of the communication stream;
currently only `stdout` and `stderr` are used. The third and last element
of each record is a base64-encoded binary string representing the
communication that took place.
The leading part of the filename is currently the identifier of the job
definition but this is subject to change to allow for multiple log
files associated with a single job in a given session.
To figure out which log file is associated with each job definition,
refer to the state file (``session``).
\*.stdout:
Plain-text representation of the entire `stdout` stream as it was
printed by the command process. This file is purely for debugging and
is ignored by Plainbox. It may cease to be generated at some future
time.
\*.stderr:
Similarly to ``.stdout`` but for the `stderr` stream.
CHECKBOX_DATA:
A directory associated with the :doc:`PLAINBOX_SESSION_SHARE` per-session
runtime directory where jobs may deposit files to perform a primitive form
of IJC (inter-job-communication).
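The ``*.record.gz`` record format described above lends itself to simple processing. A Python sketch (the sample record is fabricated for illustration, as it would appear on one line of a record file after gzip decompression):

```python
import base64
import json


def decode_record(line):
    """Decode one io-log record: [delay, stream-name, base64 data]."""
    delay, stream_name, payload = json.loads(line)
    return delay, stream_name, base64.b64decode(payload)


# A fabricated record for demonstration purposes.
sample = json.dumps(
    [0.07, "stdout", base64.b64encode(b"hello\n").decode("ascii")])
delay, stream, data = decode_record(sample)
```

Decoding the sample yields the delay ``0.07``, the stream name ``stdout`` and the raw bytes ``b"hello\n"``.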
========================
plainbox-dev-special (1)
========================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:manpage:
:path: dev special
:nodefault:
See Also
========
:doc:`plainbox-dev`
==================
CHECKBOX_SHARE (7)
==================
Synopsis
========
Legacy name of :doc:`PLAINBOX_PROVIDER_DATA`
Description
===========
This environment variable may be used in scripts embedded in Plainbox job
definitions. It is discouraged and will eventually be deprecated and removed.
The word `SHARE` comes from the fact that it used to point to
``/usr/share/checkbox``.
See Also
========
:doc:`PLAINBOX_PROVIDER_DATA`
=================
plainbox.conf (5)
=================
Synopsis
========
``/etc/xdg/plainbox.conf``
``$XDG_CONFIG_HOME/plainbox.conf``
Description
===========
Plainbox (and its derivatives) uses a configuration system composed of
variables arranged in sections. All configuration files follow the well-known
INI-style syntax. While Plainbox itself does not really use any variables,
knowing where they can be defined is useful for working with derivative
applications, such as Checkbox.
The [environment] section
-------------------------
The ``[environment]`` section deserves special attention. If a job advertises
usage of environment variable ``FOO`` (by using the `environ: FOO` declaration)
and ``FOO`` is not available in the environment of the user starting plainbox,
then the value is obtained from the ``[environment]`` section. This mechanism
is useful for distributing both site-wide and per-user configuration for jobs.
Files
=====
``/etc/xdg/plainbox.conf``
System-wide configuration file (lowest priority).
``$XDG_CONFIG_HOME/plainbox.conf``
Per-user configuration (highest priority).
Examples
========
/etc/xdg/plainbox.conf::
[environment]
OPEN_BG_SSID=my-ap-ssid
See Also
========
``plainbox-check-config`` (1)
=========================
plainbox-session-list (1)
=========================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:manpage:
:path: session list
The `plainbox session list` command simply prints a list of available
sessions. Each session has the following attributes displayed:
storage identifier:
A random-looking identifier string starting with 'pbox-' that
identifies the session in the repository it comes from. The repository
is typically specific to the user's home directory:
``$XDG_CACHE_HOME/plainbox/sessions``.
app:
The name of the application that created the session. Typically
`plainbox` or `checkbox`. Plainbox only resumes sessions it has itself
created.
flags:
A list of flags. Existing flags are:
incomplete:
The session has some jobs left to run. Sessions with this flag can
be resumed.
submitted:
The session was complete and the results were processed somehow.
Typically this means they were saved to a file or sent to the
certification website.
Note that other flags are possible and they are perfectly fine.
Applications can define their own flags that are not documented here or
even understood by the core.
title:
An arbitrary "title" of the session. Plainbox typically uses the
command line that was used to launch the session, but other applications
may come up with more interesting titles. Plainbox also uses the title
to find a possible resume candidate.
plainbox-0.25/docs/manpages/plainbox-run.rst 0000664 0001750 0001750 00000040631 12627266441 021765 0 ustar pierre pierre 0000000 0000000 ================
plainbox-run (1)
================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:manpage:
:path: run
:nodefault:
This command runs zero or more Plainbox jobs as a part of a single session
and saves the test results. Plainbox will follow the following high-level
algorithm during the execution of this command.
1. Parse command line arguments and look if there's a session that can be
resumed (see **RESUMING** below). If so, offer the user a choice to
resume that session. If the resume operation fails move to the next
qualifying session. Finally offer to create a new session.
2. If the session is being resumed, replay the effects of the session
execution from the on-disk state. This recreates generated jobs and
re-introduces the same resources into the session state. In other words,
no jobs that have run in the past are re-run.
If the resumed session was about to execute a job then offer to skip the
job. This allows test operators to skip jobs that have caused the system
to crash in the past (e.g. system suspend tests).
If the session is not being resumed (a new session was created), set the
`incomplete` flag.
3. Use the job selection (see **SELECTING JOBS** below) to derive the run
list. This step involves resolving job dependencies and reordering jobs
if required.
4. Follow the run list, executing each job in sequence if possible. Jobs
can be inhibited from execution by failed dependencies or failed
(evaluating to non-True result) resource expressions.
If at any time a new job is being re-introduced into the system (see
**GENERATED JOBS** below) then the loop is aborted and control jumps
back to step 3 to re-select jobs. Existing results are not discarded so
jobs that already have some results are not executed again.
Before and after executing any job the session state is saved to disk to
allow resuming from a job that somehow crashes the system or crashes
Plainbox itself.
5. Remove the `incomplete` flag.
6. Export the state of the session to the desired format (see **EXPORTING
RESULTS**) and use the desired transport to send the results (see
**TRANSPORTING RESULTS**).
7. Set the `submitted` flag.
SELECTING JOBS
==============
Plainbox offers two mechanisms for selecting jobs. Both can be used at the
same time, both can be used multiple times.
Selecting jobs with patterns
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The first mechanism is exposed through the ``--include-pattern PATTERN``
command-line option. It instructs Plainbox to `select` any job whose
fully-qualified identifier matches the regular expression ``PATTERN``.
Jobs selected this way will be, if possible, ordered according to the order
of command line arguments. For example, having the following command line
would run the job `foo` before running the job `bar`:
plainbox run -i '.*::foo' -i '.*::bar'
Selecting jobs with whitelists
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The second mechanism is the ``--whitelist WHITELIST`` command-line option,
which selects jobs using whitelists (or test plans, a term that is somewhat
easier to relate to). Whitelists are simple text files composed of a list
of regular expressions, identical to those that may be passed with the
``-i`` option.
Unlike the ``-i`` option though, there are two kinds of whitelists.
Standalone whitelists are not associated with any Plainbox Provider. Such
whitelists can be distributed entirely separately from any other component
and thus have no association with any namespace.
Therefore, to be fully qualified, each pattern must include both the
namespace and the partial identifier components. For example, this is a
valid, fully qualified whitelist::
2013.com.canonical.plainbox::stub/.*
It will unambiguously select some of the jobs from the special, internal
StubBox provider that is built into Plainbox. It can be saved under any
filename and stored in any directory and it will always select the same set
of jobs.
In contrast, whitelists that are associated with a particular provider, by
being stored in the per-provider ``whitelists/`` directory, carry an
implicit namespace. Such whitelists are typically written without
mentioning the namespace component.
For example, the same "stub/.*" pattern can be abbreviated to::
stub/.*
Typically this syntax is used in all whitelists specific to a particular
provider unless the provider maintainer explicitly wants to include a job
from another namespace (for example, one of the well-known Checkbox job
definitions).
GENERATED JOBS
==============
Plainbox offers a way to generate jobs at runtime. There are two
motivations for this feature.
Instantiating Tests for Multiple Devices
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The classic example is to probe the hardware (for example, to enumerate all
storage devices) and then duplicate each of the storage-specific tests so
that all devices are tested separately.
At this time jobs can be generated only from jobs using the plugin type
`local`. Jobs of this kind are expected to print fully conforming job
definitions on stdout. Generated jobs cause a few complexities and one
limitation that is currently enforced is that generated jobs cannot
generate additional jobs if any of the affected jobs need to run as another
user.
Another limitation is that jobs cannot override existing definitions.
Creating Parent-Child Association
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A relatively niche and legacy feature of generated jobs is to print a
verbatim copy of existing job definitions from a ``local`` job definition
named after a generic testing theme or category. For example, the Checkbox
job definition ``__wireless__`` prints, with the help of ``cat`` (1), all
of the job definitions defined in the file ``wireless.txt``.
This behavior is special-cased not to cause redefinition errors. Instead,
existing definitions gain the ``via`` attribute that links them to the
generator job. This feature is used by derivative application such as
Checkbox. Plainbox is not using it at this time.
RESUMING
========
Plainbox offers session resume functionality whereby a session that was
interrupted (either purposefully or due to a malfunction) can be resumed
and effectively continued where it was left off.
When resuming a session you may be given an option to either re-run, pass,
fail or skip the test job that was being executed before the session was
interrupted. This is intended to handle both normal situations, such as a
"system reboot test" where it is perfectly fine to "pass" the test without
re-running the command, and anomalous cases where the machine misbehaves
and re-running the same test would cause the problem to occur again
indefinitely.
Limitations
^^^^^^^^^^^
This functionality does not allow interrupting and resuming a test job
that is already being executed. Such a job will be restarted from scratch.
Plainbox tries to ensure that a single session is consistent and the
assumptions that held at the start of the session are maintained at the
end. To that end, Plainbox will try to ensure that job definitions have not
changed between two separate invocations that worked with a single session.
If such a situation is detected the session will not be resumed.
EXPORTING RESULTS
=================
Plainbox offers a way to export the internal state of the session into a
more useful format for further processing.
Selecting Exporters
^^^^^^^^^^^^^^^^^^^
The exporter can be selected using the ``--output-format FORMAT``
command-line option. A list of available exporters (which may include 3rd
party exporters) can be obtained by passing the ``--output-format ?``
option.
Some formats are more useful than others in that they are capable of
transferring more of the internal state. Depending on your application you
may wish to choose the most generic format (json) and process it further
with additional tools, choose the most basic format (text) just to get a
simple summary of the results or lastly choose one of the two specialized
formats (xml and html) that are specific to the Checkbox workflow.
Out of the box the following exporters are supported:
html
----
This exporter creates a static HTML page with human-readable test report.
It is useful for communicating with other humans and since it is entirely
standalone and off-line it can be sent by email or archived.
json
----
This exporter creates a JSON document with the internal representation
of the session state. It is the most versatile exporter and it is useful
and easy for further processing. It is not particularly human-readable
but can be quite useful for high-level debugging without having to use
pdb and know the internals of Plainbox.
rfc822
------
This exporter creates quasi-RFC822 documents. It is rather limited and not
used much. Still, it can be useful in some circumstances.
text
----
This is the default exporter. It simply prints a human-readable
representation of test results without much detail. It discards nearly all
of the internal state though.
xlsx
----
This exporter creates a standalone .xlsx (XML format for Microsoft Excel)
file that contains a human-readable test report. It is quite similar to the
HTML report but it is easier to edit. It is useful for communicating with
other humans and since it is entirely standalone and off-line it can be
sent by email or archived.
It depends on the ``python3-xlsxwriter`` package.
hexr
----
This exporter creates a rather confusingly named XML document only
applicable for internal Canonical Hardware Certification Team workflow.
It is not a generic XML representation of test results and instead it
carries quite a few legacy constructs that are only retained for
compatibility with other internal tools. If you want generic processing
look for JSON instead.
Selecting Exporter Options
^^^^^^^^^^^^^^^^^^^^^^^^^^
Certain exporters offer a set of options that can further customize the
exported data. A full list of options available for each exporter can be
obtained by passing the ``--output-options ?`` command-line option.
Options may be specified as a comma-separated list. Some options act as
simple flags, other options can take an argument with the ``option=value``
syntax.
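The comma-separated syntax described above (bare flags plus ``option=value`` pairs) can be parsed along these lines. This is an illustrative sketch, not the actual plainbox option parser:

```python
def parse_exporter_options(text):
    """Split an 'a,b=c' style option list into a dict: bare flags map to
    True, 'option=value' entries map to their value."""
    options = {}
    for item in text.split(','):
        item = item.strip()
        if not item:
            continue
        key, sep, value = item.partition('=')
        options[key] = value if sep else True
    return options
```

For instance, ``parse_exporter_options('with-io-log,client-name=other')`` yields ``{'with-io-log': True, 'client-name': 'other'}``.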
Known exporter options are documented below:
json
----
with-io-log:
Exported data will include the input/output log associated with each
job result. The data is included in its native three-tuple form unless
one of the `squash-io-log` or `flatten-io-log` options are used as
well.
IO logs are representations of the data produced by the process created
from the shell command associated with some jobs.
squash-io-log:
When used together with the `with-io-log` option, it causes Plainbox to
discard the stream name and time-stamp and just include a list of
base64-encoded binary strings. This option is most useful for
reconstructing simple "log files".
flatten-io-log:
When used together with the `with-io-log` option, it causes Plainbox to
concatenate all of the separate base64-encoded records into one large
base64-encoded binary string representing the whole communication that
took place.
with-run-list:
Exported data will include the run list (sequence of jobs computed from
the desired job list).
with-job-list:
Exported data will include the full list of jobs known to the system.
with-resource-map:
Exported data will include the full resource map. Resources are records
of key-value sets that are associated with each job result for jobs
that have plugin type `resource`. They are expected to be printed to
`stdout` by such `resource jobs` and are parsed and stored by Plainbox.
with-job-defs:
Exported data will include some of the properties of each job
definition. Currently this set includes the following fields: `plugin`,
`requires`, `depends`, `command` and `description`.
with-attachments:
Exported data will include attachments. Attachments are created from
`stdout` stream of each job having plugin type `attachment`. The actual
attachments are base64-encoded.
with-comments:
Exported data will include comments added by the test operator to each
job result that has them.
with-job-via:
Exported data will include the ``via`` attribute alongside each job
result. The via attribute contains the checksum of the job definition
that generated a particular job definition. This is useful for tracking
jobs generated by jobs with the plugin type `local`.
with-job-hash:
Exported data will include the ``hash`` attribute alongside each job
result. The hash attribute is the checksum of the job definition's
data. It can be useful alongside with `with-job-via`.
machine-json:
The generated JSON document will be minimal (devoid of any optional
whitespace). This option is best to be used if the result is not
intended to be read by humans as it saves some space.
rfc822
------
All of the options have the same meaning as for the `json` exporter:
`with-io-log`, `squash-io-log`, `flatten-io-log`, `with-run-list`,
`with-job-list`, `with-resource-map`, `with-job-defs`, `with-attachments`,
`with-comments`, `with-job-via`, `with-job-hash`. The only exception is
the `machine-json` option which doesn't exist for this exporter.
text
----
Same as with rfc822.
xlsx
----
with-sys-info:
Exported spreadsheet will include a worksheet detailing the hardware
devices based on lspci, lsusb, udev, etc.
with-summary:
Exported spreadsheet will include test figures. This includes the
percentage of tests that have passed, have failed, have been skipped
and the total count.
with-job-description:
Exported spreadsheet will include job descriptions on a separate sheet.
with-text-attachments:
Exported spreadsheet will include text attachments on a separate sheet.
xml
---
client-name:
This option allows clients to override the name of the application
generating the XML document. By default that name is `plainbox`. To
use this option pass ``--output-options client-name=other-name``
command-line option.
TRANSPORTING RESULTS
====================
Exported results can be either saved to a file (this is the most basic,
default transport) or can be handed to one of the transport systems for
further processing. The idea is that specialized users can provide their
own transport systems (often coupled with a specific exporter) to move the
test results from the system-under-test to a central testing result
repository.
Transport can be selected with the ``--transport`` option. Again, as with
exporters, a list of known transports can be obtained by passing the
``--transport ?`` option. Transports need a destination URL which can be
specified with the ``--transport-where=`` option. The syntax of the URL
varies by transport type.
Plainbox comes equipped with the following transports:
launchpad
^^^^^^^^^
This transport can send the results exported using ``xml`` exporter to the
Launchpad Hardware Database. This is a little-known feature offered by the
https://launchpad.net/ website.
certification
^^^^^^^^^^^^^
This transport can send the results exported using the ``xml`` exporter to
the Canonical Certification Website (https://certification.canonical.com).
This transport is of little use to anyone but the Canonical Hardware
Certification Team that also maintains Plainbox and Checkbox but it is
mentioned here for completeness.
See Also
========
:doc:`plainbox-dev-analyze`
============
plainbox (1)
============
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:manpage:
:nodefault:
:nosubcommands:
Plainbox is a toolkit consisting of a python3 library, development
tools, documentation and examples. It is targeted at developers working
on testing or certification applications and authors creating tests for
such applications.
Plainbox Sub-Commands
=====================
Plainbox uses a number of sub-commands for performing specific operations.
Since it targets several different audiences, commands are arranged into
three parts: test authors, test users and core developers.
Test Users
----------
plainbox run
Run a test job. This is the swiss army knife of a swiss army knife: it
has lots of options that affect job selection, execution and the
handling of results.
plainbox check-config
Check and display the Plainbox configuration. While this command doesn't
allow editing any settings, it is very useful for figuring out what
variables are available and which configuration files are consulted.
Test Authors
------------
plainbox startprovider
Create a new provider (directory). This command allows test authors to
create a new collection (provider) of test definitions for Plainbox.
plainbox dev script
Run the command from a job the way it would run as a part of a normal
run, ignoring all dependencies / requirements and providing additional
diagnostic messages.
plainbox dev analyze
Analyze how selected jobs would be executed. Takes almost the same
arguments as ``plainbox run`` does. Additional optional arguments
control the type of analysis performed.
plainbox dev parse
Parse stdin with the specified parser. Plainbox comes with a system for
plugging parser definitions so that shell programs (and developers) get
access to structured data exported from otherwise hard-to-parse output.
plainbox dev list
List and describe various objects. Run without arguments to see all the
high-level objects Plainbox knows about. Optional argument can restrict
the list to objects of one kind.
Core Developers
---------------
plainbox self-test
Run unit and integration tests. Unit tests work also after installation
so this command can verify a local installation at any time.
plainbox dev special
Access to special/internal commands.
plainbox dev crash
Crash the application. Useful for testing the crash handler and crash
log files.
plainbox dev logtest
Log messages at various levels. Useful for testing the logging system.
Files
=====
The following files and directories affect Plainbox:
Created or written to
---------------------
``$XDG_CACHE_HOME/plainbox/logs``
Plainbox keeps all internal log files in this directory. In particular the
``crash.log`` is generated there on abnormal termination. If extended
logging / tracing is enabled via ``--debug`` or ``--trace`` then
``debug.log`` will be created in this directory. The files are generated on
demand and are rotated if they grow too large. It is safe to remove them at
any time.
``$XDG_CACHE_HOME/plainbox/sessions``
Plainbox keeps internal state of all running and dormant (suspended or
complete) sessions here. Each session is kept in a separate directory with
a randomly generated name. This directory may also contain a symlink
``last-session`` that points at one of those sessions. The symlink may be
broken as a part of normal operation.
Sessions may accumulate, in some cases, and they are not garbage collected
at this time. In general it is safe to remove sessions when Plainbox is not
running.
Looked up or read from
----------------------
``/usr/local/share/plainbox-providers-1/*.provider``
System wide, locally administered directory with provider definitions. See
PROVIDERS for more information. Jobs defined here have access to
``plainbox-trusted-launcher(1)`` and may run as root without prompting
(depending on configuration).
``/usr/share/plainbox-providers-1/*.provider``
Like ``/usr/local/share/plainbox-providers-1`` but maintained by the local
package management system. This is where packaged providers add their
definitions.
``$XDG_DATA_HOME/plainbox-providers-1/*.provider``
Per-user directory with provider definitions. This directory may be used to
install additional test definitions that are only available to a particular
user. Jobs defined there will not have access to
``plainbox-trusted-launcher(1)`` and will use ``pkexec(1)`` or ``sudo(1)``
to run as root, if needed.
Typically this directory is used by test provider developers transparently
by invoking ``manage.py develop`` (manage.py is the per-provider management
script generated by ``plainbox startprovider``).
In addition, refer to the list of files mentioned by ``plainbox.conf`` (5).
Environment Variables
=====================
The following environment variables affect Plainbox:
``PROVIDERPATH``
Determines the lookup of test providers. Note that, unless essential, it
is recommended to install test providers into one of the aforementioned
directories instead of using PROVIDERPATH.
The default value is composed of a ':'-joined list of:
* ``/usr/local/share/plainbox-providers-1``
* ``/usr/share/plainbox-providers-1``
* ``$XDG_DATA_HOME/plainbox-providers-1``
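Assuming ``$XDG_DATA_HOME`` falls back to ``~/.local/share`` when unset (the
XDG Base Directory default), the effective default value can be sketched as
follows (a sketch, not Plainbox's actual lookup code):

```python
import os

def default_provider_path():
    """Compute the default PROVIDERPATH value (a sketch)."""
    xdg_data_home = os.environ.get(
        "XDG_DATA_HOME", os.path.expanduser("~/.local/share"))
    # The ':'-joined list of system-wide and per-user locations.
    return ":".join([
        "/usr/local/share/plainbox-providers-1",
        "/usr/share/plainbox-providers-1",
        os.path.join(xdg_data_home, "plainbox-providers-1"),
    ])

print(default_provider_path())
```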
``PLAINBOX_SESSION_REPOSITORY``
Alters the default location of the session storage repository. In practical
terms this is where all the test sessions are stored in the filesystem. By
default the effective value is ``$XDG_CACHE_HOME/plainbox/sessions``.
``PLAINBOX_LOCALE_DIR``
Alters the lookup directory for translation catalogs. When unset uses
system-wide locations. Developers working with a local copy should set it
to ``build/mo`` (after running ``./setup.py build_i18n``).
``PLAINBOX_I18N_MODE``
Alters behavior of the translation subsystem. This is only useful to
developers that wish to see fake translations of all the strings marked as
translatable. Available values include ``no-op``, ``gettext`` (default),
``lorem-ipsum-XX`` where ``XX`` is the language code of the faked
translations. Supported faked translations are: ``ar`` (Arabic), ``ch``
(Chinese), ``he`` (Hebrew), ``jp`` (Japanese), ``kr`` (Korean), ``pl``
(Polish) and ``ru`` (Russian).
``PLAINBOX_DEBUG``
Setting this to a non-empty string enables early logging support. This is
somewhat equivalent to running ``plainbox --debug`` except that it also
affects code that runs before command line parsing is finished. One
particular value that can be used here is "console". It enables console
traces (similar to ``plainbox --debug-console`` command-line argument).
``PLAINBOX_LOG_LEVEL``
This variable is only inspected if ``PLAINBOX_DEBUG`` is not empty. It is
equivalent to the ``plainbox --log-level=`` command-line argument. The
default (assuming ``PLAINBOX_DEBUG`` is set) is ``DEBUG``, which turns on
everything.
``PLAINBOX_TRACE``
This variable is only inspected if ``PLAINBOX_DEBUG`` is not empty. It is
equivalent to the ``plainbox --trace=`` command-line argument. Unlike the
command line argument, it handles a comma-separated list of loggers to
trace. By default it is empty.
See Also
========
:doc:`plainbox-run`, :doc:`plainbox-session`, :doc:`plainbox-check-config`
:doc:`plainbox-self-test`, :doc:`plainbox-startprovider`, :doc:`plainbox-dev`
:doc:`plainbox.conf`
=======================
plainbox-file-units (7)
=======================
Synopsis
========
This page documents the Plainbox file units syntax and runtime behavior
Description
===========
The file unit is an internal implementation detail at this time.
It is technically a Unit but it currently cannot be defined in a unit definition
file as the 'unit: file' association is not exposed.
File units are useful as an abstraction under which everything is a unit. This
allows the core to validate file properties (name, role, permissions) in
context. Currently the unit is very fresh and relatively under-used but it is
expected to replace many internal ad-hoc enumeration systems that deal with
files.
File Fields
-----------
There are two fields that are used by the file unit:
``path``:
This field defines the full, absolute path of the file that the unit is
describing. Note that this is not an identifier as it is more natural to
discuss files in terms of filenames rather than some abstract identifiers.
``role``:
This field defines the purpose of the file in a given provider. This field
may hold one of several supported values:
'unit-source':
The file is a source of unit definitions. Currently this is the only
actually implemented value.
'legacy-whitelist':
This file is a legacy whitelist.
'script':
This file is an architecture-independent executable.
'binary':
This file is an architecture-specific executable.
'data':
This file is a binary blob (a data file).
'i18n':
This file is a part of the internationalization subsystem. Typically
this would apply to the translation catalogues.
'manage_py':
This file is the provider management script, manage.py.
'legal':
This file contains copyright and licensing information.
'docs':
This file contains documentation.
Manual Pages
============
.. toctree::
:maxdepth: 1
:glob:
*
============================
plainbox-session-archive (1)
============================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:manpage:
:path: session archive
The ``plainbox session archive`` command can be used to create an archive
of all the files associated with a single SESSION. Sessions are represented
by a unique randomly-named directory.
See Also
========
:doc:`plainbox-session-structure`, :doc:`plainbox-session`
=================================
plainbox-manifest-entry-units (7)
=================================
Synopsis
========
This page documents the syntax of the plainbox manifest entry units
Description
===========
A manifest entry unit describes a single entry in a *manifest* that describes
the machine or device under test. The purpose of each entry is to define one
specific fact. Plainbox uses such units to create a manifest that associates
each entry with a value.
The values themselves can come from multiple sources, the simplest one is the
test operator who can provide an answer. In more complex cases a specialized
application might look up the type of the device using some identification
method (such as DMI data) from a server, thus removing the extra interaction
steps.
File format and location
------------------------
Manifest entry units are regular plainbox units and are contained and shipped
with plainbox providers. In other words, they are just the same as job and test
plan units, for example.
Fields
------
The following fields may be used by a manifest entry unit.
``id``:
(mandatory) - Unique identifier of the entry. This field is used to look up
and store data so please keep it stable across the lifetime of your
provider.
``name``:
(mandatory) - A human readable name of the entry. This should read as in a
feature matrix of a device in a store (e.g., "802.11ac wireless
capability", or "Thunderbolt support", "Number of hard drive bays"). This
is not a sentence, don't end it with a dot. Please capitalize the first
letter. The name is used in various listings so it should be kept
reasonably short.
The name is a translatable field so please prefix it with ``_`` as in
``_name: Example``.
``value-type``:
(mandatory) - Type of value for this entry. Currently two values are
allowed: ``bool`` for a yes/no value and ``natural`` for any natural number
(negative numbers are rejected).
``value-units``:
(optional) - Units in which value is measured in. This is only used when
``value-type`` is equal to ``natural``. For example a *"Screen size"*
manifest entry could be measured in *"inch"* units.
``resource-key``:
(optional) - Name of the resource key used to store the manifest value when
representing the manifest as a resource record. This field defaults to the
so-called *partial id* which is just the ``id:`` field as spelled in the
unit definition file (so without the name space of the provider)
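The ``value-type`` constraint can be illustrated with a small, hypothetical
validation helper. This is not Plainbox's actual validator, only a sketch of
the stated rules (``bool`` for yes/no, ``natural`` for non-negative integers):

```python
def validate_manifest_value(value_type, value):
    """Check a manifest value against its declared value-type (a sketch)."""
    if value_type == "bool":
        # A yes/no value.
        return isinstance(value, bool)
    if value_type == "natural":
        # Any natural number: negative values are rejected.
        # (bool is excluded because it is a subclass of int in Python.)
        return (isinstance(value, int)
                and not isinstance(value, bool)
                and value >= 0)
    raise ValueError("unsupported value-type: {}".format(value_type))

print(validate_manifest_value("bool", True))   # True
print(validate_manifest_value("natural", -1))  # False
```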
Example
-------
This is an example manifest entry definition::
unit: manifest entry
id: has_thunderbolt
_name: Thunderbolt Support
value-type: bool
Naming Manifest Entries
-----------------------
To keep the code consistent there's one naming scheme that should be followed.
Entries for boolean values must use the ``has_XXX`` naming scheme. This will
allow us to avoid issues later on where multiple people develop manifest
entries and it's unclear whether ``has_thunderbolt``, ``thunderbolt_supported``,
``tb`` or whatever else we come up with all mean the same thing. It's a
convention, please stick to it.
Using Manifest Entries in Jobs
------------------------------
Manifest data can be used to decide if a given test is applicable for a given
device under test or not. When used as a resource they behave in a standard
way, like all other resources. The only special thing is the unique name-space
of the resource job as it is provided by plainbox itself. The name of the
resource job is: ``2013.com.canonical.plainbox``. In practice a simple job that
depends on data from the manifest can look like this::
unit: job
id: ...
plugin: ...
requires:
manifest.has_thunderbolt == 'True' and manifest.ns == '2013.com.canonical.checkbox'
imports: from 2013.com.canonical.plainbox import manifest
Note that the job uses the ``manifest`` job from the
``2013.com.canonical.plainbox`` name-space. It has to be imported using the
``imports:`` field as it is in a different name-space than the one the example
unit is defined in (which is arbitrary). Having that resource it can then check
for the ``has_thunderbolt`` field manifest entry in the
``2013.com.canonical.checkbox`` name-space. Note that the name-space of the
``manifest`` job is not related to the ``manifest.ns`` value. Since any
provider can ship additional manifest entries and then all share the flat
name-space of resource attributes looking at the ``.ns`` attribute is a way to
uniquely identify a given manifest entry.
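Conceptually, a ``requires:`` expression like the one above is evaluated
against each resource record, with all attribute values stored as strings
(hence the ``== 'True'`` comparison). A rough sketch with hypothetical
records:

```python
# Each manifest entry becomes one resource record (a flat dict of strings).
manifest = [
    {"has_thunderbolt": "True", "ns": "2013.com.canonical.checkbox"},
    {"has_thunderbolt": "False", "ns": "2015.example.org"},
]

# A requires: expression is satisfied if any record matches it.
satisfied = any(
    record.get("has_thunderbolt") == "True"
    and record.get("ns") == "2013.com.canonical.checkbox"
    for record in manifest)
print(satisfied)  # True
```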
Collecting Manifest Data
------------------------
To interactively collect manifest data from a user please include this job
somewhere early in your test plan:
``2013.com.canonical.plainbox::collect-manifest``.
Supplying External Manifest
---------------------------
The manifest file is stored in
``$HOME/.local/share/plainbox/machine-manifest.json``.
If the provisioning method ships a valid manifest file there, it can be used
for fully automatic, manifest-based deployments.
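A provisioning script could therefore pre-fill the manifest before the
session starts. The sketch below assumes the file is a flat JSON mapping of
fully-qualified entry identifiers to values; verify the exact format used by
your Plainbox version before relying on it:

```python
import json
import os

# Hypothetical entry id and value; real ids come from your providers.
manifest_path = os.path.expanduser(
    "~/.local/share/plainbox/machine-manifest.json")
manifest = {"2013.com.canonical.checkbox::has_thunderbolt": True}

# Create the directory if needed and write the manifest.
os.makedirs(os.path.dirname(manifest_path), exist_ok=True)
with open(manifest_path, "w") as stream:
    json.dump(manifest, stream, indent=2)
```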
======================
plainbox-dev-parse (1)
======================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:manpage:
:path: dev parse
:nodefault:
See Also
========
:doc:`plainbox-dev`
==========================
PLAINBOX_PROVIDER_DATA (7)
==========================
Synopsis
========
``command: example-command $PLAINBOX_PROVIDER_DATA/data-file.dat``
Running an example-command on a provider-specific data file.
Description
===========
Plainbox providers can require arbitrary data files for successful testing.
The absolute path of the provider ``data/`` directory is exposed as the
environment variable ``$PLAINBOX_PROVIDER_DATA``. Job commands can use that
variable to refer to the data directory in an unambiguous way.
Typical Use Cases
-----------------
Typically the data file is used by the job command. For example, let's say that
an audio file ``test.wav`` is stored in the ``data/`` directory of the provider
and the intent is to have a job definition which plays that file::
id: play-audio-file
plugin: user-verify
summary: play the test.wav file
command: paplay $PLAINBOX_PROVIDER_DATA/test.wav
description:
Plays the test sound file (test.wav)
Did the sound file play correctly?
The job ``play-audio-file`` will use the ``paplay`` (1) executable to play an
audio file shipped by the provider. Since the actual location of the audio file
may vary, depending on environment and installation method, the test definition
uses the environment variable ``$PLAINBOX_PROVIDER_DATA`` to access it in an
uniform way.
Checkbox Compatibility
----------------------
Jobs designed to work with pre-Plainbox-based Checkbox may still refer to the
old, somewhat confusing, environment variable :doc:`CHECKBOX_SHARE`. It points
to the same directory.
See Also
========
:doc:`CHECKBOX_SHARE`
===========================
plainbox-trusted-launcher-1
===========================
.. argparse::
:ref: plainbox.impl.secure.launcher1.get_parser_for_sphinx
:prog: plainbox-trusted-launcher-1
:manpage:
:nodefault:
This command is a part of the implementation of :doc:`plainbox`. It is not
intended to be invoked directly and the command line arguments and behavior
may freely change between versions.
Technically this program is used to run a command associated with a job
definition as another user (typically as root). The existing technologies
such as ``sudo`` (8) and ``pkexec`` (1) don't have enough granularity to
restrict arbitrary commands while still allowing commands that are inside
system-wide installed locations (and thus safe, as one needs root access to
install them in the first place). An additional complication is that some
commands are themselves generated by other jobs.
Execution
=========
Warm-up Mode
------------
If the ``--warmup`` option is specified then nothing more happens and the
program exits immediately. This is intended to 'warm-up' the tool that
executes ``plainbox-trusted-launcher-1`` itself (typically ``pkexec`` or
``sudo``).
Normal Execution
----------------
In normal execution mode, the launcher looks up the job with the checksum
specified by ``--target`` and executes the command embedded inside. Environment
passed via ``--target-environment`` is appended to the environment variables
inherited from the parent process.
The standard output, standard error and exit code of
``plainbox-trusted-launcher-1`` are exactly the values from the command
embedded in the selected job itself.
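The overall behavior can be sketched as follows. This is greatly simplified
and hypothetical: the real launcher resolves checksums against job
definitions found in the system-wide provider directories, not a hard-coded
table:

```python
import os
import subprocess

# Hypothetical checksum -> command table; the real launcher derives this
# from job definitions in the trusted, system-wide providers.
trusted_jobs = {
    "abc123": "echo hello from a trusted job",
}

def run_trusted(target_checksum, extra_env):
    command = trusted_jobs[target_checksum]  # unknown checksums are refused
    env = dict(os.environ)
    env.update(extra_env)  # --target-environment entries are appended
    # stdout, stderr and the exit code pass through unchanged.
    return subprocess.run(command, shell=True, env=env,
                          capture_output=True, text=True)

result = run_trusted("abc123", {"EXAMPLE": "1"})
print(result.stdout.strip())  # hello from a trusted job
```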
Indirect Execution
------------------
In indirect execution mode, the launcher first looks up the job with the
checksum specified by ``--generator``, executes it, discarding stderr and
re-interpreting stdout as a set of job definitions. Environment passed via the
``--generator-environment`` is appended (but just to the generator job, the
``--target`` job has independent environment). All of the additional job
definitions are added to the global pool of jobs the launcher knows about.
After that the launcher continues as with normal execution, returning the same
stdout, stderr and exit code.
Environment Variables
=====================
The following environment variables *DO NOT* affect ``plainbox-trusted-launcher-1``
``PROVIDERPATH``
For :doc:`plainbox` this would affect the set of directories where Plainbox
looks for provider definitions. The trusted launcher has a fixed list of
directories that cannot be extended.
The fixed list is composed of two system-wide locations:
* ``/usr/local/share/plainbox-providers-1``
* ``/usr/share/plainbox-providers-1``
All the other environment variables mentioned in :doc:`plainbox` work the
same way.
Bugs
====
Currently it is impossible to use ``plainbox-trusted-launcher-1`` with a
``local`` job that needs to run as root and that generates another ``local``
job that needs to run as root, to generate any additional jobs that also need
to run as root. In other words, only one-level job generation is supported.
The launcher is somewhat inefficient, in that it has to re-run all of the
dependencies of the ``local`` job over and over. Ideally those would be cached,
per-session, but that would significantly increase the complexity of the code
running as root.
See Also
========
:doc:`plainbox`
==========================
PLAINBOX_SESSION_SHARE (7)
==========================
Synopsis
========
Saving files to session share directory::
``command: do-something > $PLAINBOX_SESSION_SHARE/some-action.log``
Loading files from session-share directory::
``command: cat $PLAINBOX_SESSION_SHARE/some-action.log``
Description
===========
Plainbox sessions allow jobs to communicate by referring to the
$PLAINBOX_SESSION_SHARE environment variable. Files generated
therein are explicitly meant to be accessible to all the other jobs
participating in the session.
Typical Use Cases
-----------------
Typically a session will involve one or more pairs of jobs such as::
id: some-action
plugin: shell
summary: do something and save the log file to disk
command: do-something > $PLAINBOX_SESSION_SHARE/some-action.log
id: some-action-attachment
plugin: attachment
summary: log file of the do-something command
command: cat $PLAINBOX_SESSION_SHARE/some-action.log
The job ``some-action`` will use the ``do-something`` executable
to perform some tests. The log file of that action will be saved on
the device executing the test, in the directory exposed through the
environment variable ``$PLAINBOX_SESSION_SHARE``.
The ``some-action-attachment`` job will use that same directory and
the agreed-upon name of the log file and ``cat`` (1) it, which coupled
with the plugin type `attachment` will cause Plainbox to attach the log
file to the resulting document.
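The same producer/consumer pattern can be sketched outside of Plainbox by
treating ``$PLAINBOX_SESSION_SHARE`` as just a directory exposed through the
environment (here a temporary directory stands in for the real session
share):

```python
import os
import tempfile

# Stand-in for the directory Plainbox would provide.
os.environ["PLAINBOX_SESSION_SHARE"] = tempfile.mkdtemp()
share = os.environ["PLAINBOX_SESSION_SHARE"]
log_path = os.path.join(share, "some-action.log")

# First job: do something and save the log file.
with open(log_path, "w") as stream:
    stream.write("test completed\n")

# Second job: read the log back (what `cat` would attach).
with open(log_path) as stream:
    print(stream.read().strip())  # test completed
```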
Checkbox Compatibility
----------------------
Jobs designed to work with pre-Plainbox-based Checkbox may still refer
to the old, somewhat confusing, environment variable
``$CHECKBOX_DATA``. It points to the same directory.
Multi-Node Sessions
-------------------
When a test session involves multiple devices this directory is
separately instantiated for each device. Jobs executing on separate
devices cannot use this facility to communicate. If communication
is required, jobs are expected to use the LAVA-inspired, MPI-based
communication API. For details see ``plainbox-multi-node-api`` (7).
Bugs
====
Within the session directory the name of this directory is still
``CHECKBOX_DATA`` (literally, this is not a variable name). It may be changed
at any point in time since jobs cannot form any meaningful paths to this
directory without referring to either ``$PLAINBOX_SESSION_SHARE`` or
``$CHECKBOX_DATA``.
See Also
========
:doc:`PLAINBOX_PROVIDER_DATA`, :doc:`CHECKBOX_DATA`
=========================
plainbox-check-config (1)
=========================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:manpage:
:path: check-config
:nodefault:
This command can be used to validate and display Plainbox configuration.
It is also commonly available for Plainbox derivatives such as Checkbox,
where it displays configuration files with additional variables not used by
Plainbox.
See Also
========
:doc:`plainbox.conf`
======================
plainbox-dev-crash (1)
======================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:manpage:
:path: dev crash
:nodefault:
This command is designed to crash or hang the application.
Using this command a developer can inspect the built-in development and
debugging features available in Plainbox. Specifically, there are several
options available to the top-level plainbox command (they *have to* be used
before the ``dev crash`` syntax) that allow enabling one of the following
actions:
Jumping Into PDB on Uncaught Exception
--------------------------------------
If ``plainbox`` is invoked with the ``--pdb`` command line option then all
uncaught exceptions are handled by starting a debugger session. Using the
debugger a developer can inspect the execution stack, including all the
threads, local and global variables, etc.
Jumping into PDB on KeyboardInterrupt
-------------------------------------
If ``plainbox`` is invoked with both the ``--pdb`` and the
``--debug-interrupt`` command line options then a ``KeyboardInterrupt``
exception is not ignored, as it usually is, and is instead allowed to
bubble up the command line implementation call stack until it starts the
interactive debugger session.
Examples
========
A debugger session on exception::
plainbox --pdb dev crash --crash
A debugger session on keyboard interrupt::
plainbox --pdb --debug-interrupt dev crash --hang
See Also
========
:doc:`plainbox-dev`, :doc:`plainbox`, ``pdb3`` (1)
========================
plainbox-dev-logtest (1)
========================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:manpage:
:path: dev logtest
:nodefault:
See Also
========
:doc:`plainbox-dev`
================
plainbox-dev (1)
================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:path: dev
:manpage:
:nodefault:
All of the commands in the ``plainbox dev`` group are intended for Plainbox
developers and may be unstable and may change from release to release
without notice.
Some of the commands are of general use and, most importantly, are of value
to Plainbox provider maintainers. Such commands may be promoted to be
top-level commands with the next release.
See Also
========
:doc:`plainbox-dev-script`
:doc:`plainbox-dev-special`
:doc:`plainbox-dev-analyze`
:doc:`plainbox-dev-parse`
:doc:`plainbox-dev-crash`
:doc:`plainbox-dev-logtest`
:doc:`plainbox-dev-list`
========================
plainbox-dev-analyze (1)
========================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:path: dev analyze
:manpage:
:nodefault:
The ``plainbox dev analyze`` command is a direct replacement for ``plainbox
run`` that doesn't really run most of the jobs. Instead it offers a set of
reports that can be enabled (confusingly, by default no reports are enabled
and the command prints nothing at all) to inspect certain aspects of the
hypothetical session.
The only exception to the rule above is the ``--run-local`` option. With that
option all local jobs and their dependencies *are* started. This is
technically required to correctly emulate the behavior of ``plainbox run``
that does so unconditionally. Still, local jobs can cause harm so don't run
untrusted code this way (the author of this man page recalls one local job
that ran ``sudo reboot`` to measure bootchart data).
Report Types
============
Plainbox ``dev analyze`` command offers a number of reports that can be
selected with their respective command line options. By default no reports
are enabled, which may be a little confusing, but all options can be enabled
at the same time.
Dependency Report
-----------------
This report shows if any of the jobs have missing dependencies. It almost
never happens but the report is here for completeness.
Interactivity Report
--------------------
This report shows, for each job, if it is fully automatic or if it requires
human interaction.
Estimated Duration Report
-------------------------
This report shows if Plainbox would be able to accurately estimate the
duration of the session. It shows details for both fully automatic and
interactive jobs.
Validation Report
-----------------
This report shows if all of the selected jobs are valid. It is of lesser
use now that we have provider-wide validation via ``./manage.py validate``.
Two Kinds of Job Lists
======================
Desired Job List
----------------
This list is displayed with the ``-S`` option. It contains the ordered
sequence of jobs that are "desired" by the test operator to execute. This
list contrasts with the so-called `run list` mentioned below.
Run List
--------
This list is displayed with the ``-R`` option. It contains the ordered
sequence of jobs that should be executed to satisfy the `desired list`
mentioned above. It is always a superset of the desired job list and almost
always includes additional jobs (such as resource jobs and other
dependencies).
The run list is of great importance. Most of the time the test operator will
see tests in precisely this order. The only exception is that some test
applications choose to pre-run local jobs. Still, if your job ordering is
wrong in any way, inspecting the run list is the best way to debug the
problem.
See Also
========
:doc:`plainbox-run`
======================================
plainbox-packaging-meta-data-units (7)
======================================
Synopsis
========
This page documents the syntax of the plainbox packaging meta-data units
Description
===========
The packaging meta-data unit describes system-level dependencies of a provider
in a machine readable way. Dependencies can be specified separately for
different distributions. Dependencies can also be specified for a common base
distribution (e.g. for Debian rather than Ubuntu). The use of packaging
meta-data units can greatly simplify management of dependencies of binary
packages as it brings those decisions closer to the changes to the actual
provider and makes package management largely automatic.
File format and location
------------------------
Packaging meta-data units are regular plainbox units and are contained and
shipped with plainbox providers. In other words, they are just the same as job
and test plan units, for example.
Fields
------
The following fields may be used by a packaging meta-data unit.
``os-id``:
(mandatory) - the identifier of the operating system this rule applies to.
This is the same value as the ``ID`` field in the file ``/etc/os-release``.
Typical values include ``debian``, ``ubuntu`` or ``fedora``.
``os-version-id``:
(optional) - the identifier of the specific version of the operating system
this rule applies to. This is the same as the ``VERSION_ID`` field in the
file ``/etc/os-release``. If this field is not present then the rule
applies to all versions of a given operating system.
The remaining fields are custom and depend on the packaging driver. The values
for **Debian** are:
``Depends``:
(optional) - a comma separated list of dependencies for the binary package.
The syntax is the same as in normal Debian control files (including package
version dependencies). This field can be split into multiple lines, for
readability, as newlines are discarded.
``Suggests``:
(optional) - same syntax as ``Depends``.
``Recommends``:
(optional) - same syntax as ``Depends``.
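For example, a rule that applies only to one specific Ubuntu release could look
like this (the dependency is a made-up illustration):

```
unit: packaging meta-data
os-id: ubuntu
os-version-id: 14.04
Depends: some-vendor-tool (>= 1.0)
```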
Matching Packaging Meta-Data Units
----------------------------------
The base Linux distribution driver parses the ``/etc/os-release`` file and
looks at the ``ID``, ``VERSION_ID`` and optionally the ``ID_LIKE`` fields. They
are used as a standard way to determine the distribution for which packaging
meta-data is being collected.
The *id and version match* strategy requires that both the ``os-id`` and
``os-version-id`` fields are present and that they match the ``ID`` and
``VERSION_ID`` values. This strategy allows the test maintainer to express each
dependency accurately for each operating system they wish to support.
The *id match* strategy is only used when ``os-version-id`` is not defined.
It is useful when a single definition is applicable to many subsequent
releases. This is especially useful when a job works well with a sufficiently
old version of a third party dependency and there is no need to repeatedly
re-state the same dependency for each later release of the operating system.
The *id_like match* strategy is only used as a last resort and can be seen as a
weaker *id match* strategy. This time the ``os-id`` field is compared to the
``ID_LIKE`` field (if present). It is useful for working with Debian
derivatives, like Ubuntu.
Each matching packaging meta-data unit is then passed to the driver to generate
packaging meta-data.
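The three strategies above can be sketched in a few lines of Python (an
assumed model for illustration, not the actual plainbox driver code; field
names follow the unit syntax described earlier):

```python
def unit_matches(unit, os_release):
    """Decide whether a packaging meta-data unit applies to this OS."""
    os_id = unit.get("os-id")
    version = unit.get("os-version-id")
    # 1. id and version match: both fields must agree exactly.
    if version is not None:
        return (os_id == os_release.get("ID")
                and version == os_release.get("VERSION_ID"))
    # 2. id match: applies to all versions of the operating system.
    if os_id == os_release.get("ID"):
        return True
    # 3. id_like match: last resort, e.g. Debian derivatives.
    return os_id in os_release.get("ID_LIKE", "").split()

ubuntu = {"ID": "ubuntu", "VERSION_ID": "14.04", "ID_LIKE": "debian"}
print(unit_matches({"os-id": "debian"}, ubuntu))  # True, via id_like
```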
Example
-------
This is an example packaging meta-data unit, as taken from the resource provider::
unit: packaging meta-data
os-id: debian
Depends:
python3-checkbox-support (>= 0.2),
python3 (>= 3.2),
Recommends:
dmidecode,
dpkg (>= 1.13),
lsb-release,
wodim
This will cause the binary provider package to depend on the appropriate
version of ``python3-checkbox-support`` and ``python3`` in *Debian*,
*Ubuntu* and, for example, *Elementary OS*. In addition the package will
recommend some utilities that are used by some of the jobs contained in this
provider.
Using Packaging Meta-Data in Debian
-----------------------------------
To make use of the packaging meta-data, follow these steps:
- Ensure that ``/etc/os-release`` exists in your build chroot. On Debian it is
a part of the ``base-files`` package which is not something you have to worry
about but other distributions may use different strategies.
- Mark the binary package that contains the provider with the
``X-Plainbox-Provider: yes`` header.
- Add the ``${plainbox:Depends}``, ``${plainbox:Recommends}`` and
``${plainbox:Suggests}`` variables to the binary package that contains the
provider.
- Override the ``dh_gencontrol`` debhelper rule and run the ``python3 manage.py
packaging`` command in addition to running ``dh_gencontrol``::
override_dh_gencontrol:
python3 manage.py packaging
dh_gencontrol
plainbox-0.25/docs/manpages/plainbox-exporter-units.rst

===========================
plainbox-exporter-units (7)
===========================
Synopsis
========
This page documents the syntax of the plainbox exporter units
Description
===========
The purpose of exporter units is to provide an easy way to customize the
plainbox reports by delegating the customization bits to providers.
Each exporter unit expresses a binding between code (the entry point) and data.
Data can be new options, different Jinja2 templates and/or new paths to load
them.
File format and location
------------------------
Exporter entry units are regular plainbox units and are contained and shipped
with plainbox providers. In other words, they are just the same as job and test
plan units, for example.
Fields
------
The following fields may be used by an exporter unit.
``id``:
(mandatory) - Unique identifier of the exporter. This field is used to look
up and store data so please keep it stable across the lifetime of your
provider.
``summary``:
(optional) - A human readable name for the exporter. This value is
available for translation into other languages. It is used when listing
exporters. It must be one line long, ideally it should be short (50-70
characters max).
``entry_point``:
(mandatory) - This is a key for a pkg_resources entry point from the
plainbox.exporters namespace.
Allowed values are: jinja2, text, xlsx, json and rfc822.
``file_extension``:
(mandatory) - Filename extension to use when the exporter stream is saved
to a file.
``options``:
(optional) - comma/space/semicolon separated list of options for this
exporter entry point. Only the following options are currently supported.
text and rfc822:
- with-io-log
- squash-io-log
- flatten-io-log
- with-run-list
- with-job-list
- with-resource-map
- with-job-defs
- with-attachments
- with-comments
- with-job-via
- with-job-hash
- with-category-map
- with-certification-status
json:
Same as for *text* and additionally:
- machine-json
xlsx:
- with-sys-info
- with-summary
- with-job-description
- with-text-attachments
- with-unit-categories
jinja2:
No options available
``data``:
(optional) - Extra data sent to the exporter code. To allow all kinds of
data types, the data field only accepts valid JSON. For exporters using the
jinja2 entry point, the template name and any additional paths to load
files from must be defined in this field. See examples below.
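As noted above, the ``options`` value is a comma/space/semicolon separated
list. A tolerant way to split such a value might look like this (an
illustration, not the actual plainbox parser):

```python
import re

def parse_options(text):
    """Split an options string on commas, semicolons and whitespace."""
    return [opt for opt in re.split(r"[,;\s]+", text) if opt]

print(parse_options("with-io-log, with-comments;machine-json"))
# ['with-io-log', 'with-comments', 'machine-json']
```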
Example
-------
This is an example exporter definition::
unit: exporter
id: my_html
_summary: Generate my own version of the HTML report
entry_point: jinja2
file_extension: html
options:
with-foo
with-bar
data: {
"template": "my_template.html",
"extra_paths": [
"/usr/share/javascript/lib1/",
"/usr/share/javascript/lib2/",
"/usr/share/javascript/lib3/"]
}
The provider shipping such a unit could be laid out as follows::
├── data
│  ├── my_template.css
│  └── my_template.html
├── units
  ├── my_test_plans.pxu
  └── exporters.pxu
Note that exporters.pxu is not strictly needed to store the exporter units, but
keeping them in a dedicated file is a good practice.
How to use exporter units?
--------------------------
In order to call an exporter unit from provider foo, you just need to add the
unit id to the cli or the gui launcher in the exporter section:
Example of a gui launcher:
#!/usr/bin/checkbox-gui
[welcome]
title = "Foo"
text = "bar"
[exporter]
HTML = "2013.com.foo.bar::my_html"
Example of a cli launcher:
#!/usr/bin/env checkbox-launcher
[welcome]
text = Foo
[suite]
whitelist_filter = ^.*$
whitelist_selection = ^default$
[exporter]
2013.com.foo.bar::my_html
2013.com.foo.bar::my_json
2015.com.foo.baz::my_html
plainbox-0.25/docs/manpages/plainbox-template-units.rst

===========================
plainbox-template-units (7)
===========================
Synopsis
========
This page documents the Plainbox template units syntax and runtime behavior
Description
===========
The template unit is a variant of Plainbox unit types. A template is a skeleton
for defining additional units, typically job definitions. A template is defined
as a typical RFC822-like Plainbox unit (like a typical job definition) with the
exception that all the fields starting with the string ``template-`` are
reserved for the template itself while all the other fields are a definition of
all the eventual instances of the template.
Template-Specific Fields
------------------------
There are four fields that are specific to the template unit:
``template-unit``:
Name of the unit type this template will generate. By default job
definition units are generated (as if the field was specified with the
value ``job``) but other values may be used as well.
This field is optional.
``template-resource``:
Name of the resource job (if it is a compatible resource identifier) to use
to parametrize the template. This must either be a name of a resource job
available in the namespace the template unit belongs to *or* a valid
resource identifier matching the definition in the ``template-imports``
field.
This field is mandatory.
``template-imports``:
A resource import statement. It can be used to refer to arbitrary resource
job by its full identifier and (optionally) give it a short variable name.
The syntax of each imports line is::
IMPORT_STMT ::  "from" <NAMESPACE> "import" <PARTIAL_ID>
              | "from" <NAMESPACE> "import" <PARTIAL_ID> AS <IDENTIFIER>
The short syntax exposes ``PARTIAL_ID`` as the variable name available
within all the fields defined within the template unit. If it is not a
valid variable name then the second form must be used.
This field is sometimes optional. It becomes mandatory when the resource
job definition is from another provider namespace or when it is not a valid
resource identifier and needs to be aliased.
``template-filter``:
A resource program that limits the set of records from which template
instances will be made. The syntax of this field is the same as the syntax
of typical job definition unit's ``requires`` field, that is, it is a
python expression.
When defined, the expression is evaluated once for each resource object and
if it evaluates successfully to a True value then that particular resource
object is used to instantiate a new unit.
This field is optional.
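Conceptually, the filter expression is evaluated once per resource record,
with the record exposed under the resource name. A minimal sketch (the record
data, class and variable names are made up for illustration):

```python
class ResourceObject:
    """Expose a record's fields as attributes, as resource programs do."""
    def __init__(self, record):
        self.__dict__.update(record)

records = [
    {"path": "/dev/sda", "physical": "yes"},
    {"path": "/dev/loop0", "physical": "no"},
]
# The filter is an ordinary Python expression, evaluated per record.
template_filter = "device.physical == 'yes'"
selected = [r for r in records
            if eval(template_filter, {}, {"device": ResourceObject(r)})]
print([r["path"] for r in selected])  # ['/dev/sda']
```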
Instantiation
-------------
When a template is instantiated, a single record object is used to fill in the
parametric values to all the applicable fields. Each field is formatted using
the python formatting language. Within each field the record is exposed as the
variable named by the ``template-resource`` field. Record data is exposed as
attributes of that object.
The special parameter ``__index__`` can be used to iterate over the devices
matching the ``template-filter`` field.
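The substitution step can be sketched with plain ``str.format()`` (a
simplification: here the record fields are passed directly as format
arguments, and the field values are made-up examples):

```python
# One resource record and two parametric template fields.
record = {"path": "/dev/sda", "has_media": "yes", "physical": "yes"}
fields = {
    "id": "test-storage-{path}",
    "command": "perform-testing-on --device {path}",
}
# Instantiation: format every field with the record's data.
instance = {key: value.format(**record) for key, value in fields.items()}
print(instance["id"])  # test-storage-/dev/sda
```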
Migrating From Local Jobs
-------------------------
Migration from local jobs is mostly straightforward. Apart from one gotcha the
process is as follows:
1. Look at the data that was used to *instantiate* job definitions by the old
local job. Write them down.
2. Ensure that all of the instantiated template data is exposed by exactly one
resource. This may be the commonly-used checkbox ``device`` resource job or
any custom resource job but it has to be all contained in one resource. Data
that used to be computed partially by the resource and partially by the
local job needs to be computed as additional attributes (fields) of the
resource instead.
3. Replace the boilerplate of the local job (typically a ``cat``, here-document
piped to ``run-templates`` and ``filter-templates``) with the equivalent
``template-resource`` and ``template-filter`` fields.
4. Remove the indentation so that all of the job definition is aligned to the
left of the paragraph.
5. Re-validate the provider to ensure that everything looks okay.
6. Re-test the job by running it.
The only gotcha is related to step two. It is very common for local jobs to do
some additional computation. For example many storage tests compute the path
name of some ``sysfs`` file. This has to be converted to a readily-available
path that is provided by the resource job.
Examples
========
The following example contains a simplified template that instantiates to a
simple storage test. The test is only instantiated for devices that are
considered *physical*. In this example we don't want to spam the user with a
long list of loopback devices. This is implemented by exposing that data in the
resource job itself::
id: device
plugin: resource
command:
echo 'path: /dev/sda'
echo 'has_media: yes'
echo 'physical: yes'
echo
echo 'path: /dev/cdrom'
echo 'has_media: no'
echo 'physical: yes'
echo
echo 'path: /dev/loop0'
echo 'has_media: yes'
echo 'physical: no'
The template defines a ``test-storage-XXX`` test where ``XXX`` is replaced by
the path of the device. Only devices which are *physical* according to some
definition are considered for testing. This means that the record related to
``/dev/loop0`` will be ignored and will not instantiate a test job for that
device. This feature can be coupled with the existing resource requirement to
let the user know that we did see their CD-ROM device but it was not tested as
there was no inserted media at the time::
unit: template
template-resource: device
template-filter: device.physical == 'yes'
requires: device.has_media == 'yes'
id: test-storage-{path}
plugin: shell
command: perform-testing-on --device {path}
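With the resource output above, this template instantiates into two jobs along
these lines (a sketch of the net effect, not literal plainbox output; the
``/dev/loop0`` record is filtered out entirely):

```
id: test-storage-/dev/sda
plugin: shell
requires: device.has_media == 'yes'
command: perform-testing-on --device /dev/sda

id: test-storage-/dev/cdrom
plugin: shell
requires: device.has_media == 'yes'
command: perform-testing-on --device /dev/cdrom
```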
plainbox-0.25/docs/manpages/plainbox-session-export.rst

===========================
plainbox-session-export (1)
===========================
.. argparse::
:ref: plainbox.impl.box.get_parser_for_sphinx
:prog: plainbox
:manpage:
:path: session export
The `plainbox session export` command allows you to export any existing session
(that can still be resumed) with any combination of exporters and exporter
options.
The exported session representation can be printed to stdout (default) or
saved to a specified file. You can pass a question mark (?) to both
``--output-format`` and ``--output-options`` for a list of available
values.
Limitations
===========
Sessions that cannot be resumed cannot be exported. Two common causes for that
are known.
First of all, a session can fail to resume because of missing or changed job
definitions. For that you need to re-install the exact same provider version as
was available on the machine that generated the session you are trying to work
with.
The second case is when a session was copied from another machine and some of
the log files point to a different user's account. This can be worked
around by providing appropriate symbolic links from /home/some-user/ to
/home/your-user/.
plainbox-0.25/docs/author/provider-template.rst

=================
Provider Template
=================
Plainbox comes with a built-in template for a new provider. You can use it to
quickly start working on your own collection of tests.
This is not the :doc:`tutorial`, mind you, this is the actual template. It is
here though as an additional learning resource. To create this template locally,
for easier editing / experiments, just run::
plainbox startprovider 2013.com.example:template
Provider Template Layout
========================
The following files and directories are generated::
2013.com.example:template/
├── bin
│  ├── custom-executable
│  └── README.md
├── data
│  ├── example.dat
│  └── README.md
├── jobs
│  ├── examples-intermediate.txt
│  ├── examples-normal.txt
│  └── examples-trivial.txt
├── manage.py
├── po
│  └── POTFILES.in
├── README.md
└── whitelists
├── normal.whitelist
└── trivial.whitelist
Generated Content
=================
README.md
---------
::
Skeleton for a new Plainbox provider
====================================
This is a skeleton Plainbox provider that was generated using
``plainbox startprovider ...``.
It is just the starting point, there is nothing here of value to you
yet. If you know how this works then just remove this file along with
other example content and start working on your new tests,
otherwise, read on.
Inside the ``jobs/`` directory you will find several files that define
a number of "jobs" (more than one job per file actually). A job, in
Plainbox parlance, is the smallest piece of executable test code. Each
job has a name and a number of other attributes.
Jobs can be arranged in lists, test plans if you will that are known
as "whitelists". Those are defined in the ``whitelists/`` directory,
this time one per file. You can create as many whitelists as you need,
referring to arbitrary subsets of your jobs.
Then there are the ``bin/`` and ``data/`` directories. Those are
entirely for custom content you may need. You can put arbitrary
executables in ``bin/``, and those will be available to your job
definitions. Similarly you can keep any data your jobs might need
inside the ``data/`` directory. Referring to that directory at runtime
is a little bit trickier but one of the examples generated in this
skeleton shows how to do that.
Lastly there is the ``manage.py`` script. It requires python3 to run.
It depends on the python3-plainbox Debian package (or just the
Plainbox 0.5 upstream package) being installed. This script can automate and
simplify a number of tasks that you will want to do as a test
developer.
Run ``./manage.py --help`` to see what sub-commands are available. You
can additionally pass ``--help`` to each sub command, for example
``./manage.py install --help`` will print the description of the
install command and all the arguments it supports.
That is it for now. You should check out the official documentation
for test authors at
http://plainbox.readthedocs.org/en/latest/author/index.html
If you find bugs or would like to see additional features developed
you can file bugs on the parent project page:
https://bugs.launchpad.net/checkbox/+filebug
manage.py
---------
::
#!/usr/bin/env python3
from plainbox.provider_manager import setup, N_
# You can inject other stuff here but please don't go overboard.
#
# In particular, if you need comprehensive compilation support to get
# your bin/ populated then please try to discuss that with us in the
# upstream project IRC channel #checkbox on irc.freenode.net.
# NOTE: one thing that you could do here, that makes a lot of sense,
# is to compute version somehow. This may vary depending on the
# context of your provider. Future version of Plainbox will offer git,
# bzr and mercurial integration using the versiontools library
# (optional)
setup(
name='2013.com.example:template',
version="1.0",
description=N_("The 2013.com.example:template provider"),
gettext_domain="2013_com_example_template",
)
bin/README.md
-------------
::
Container for arbitrary executables needed by tests
===================================================
You can execute files from this directory without any additional
setup; they are automatically added to the PATH of the executing
job. See the job examples/bin-access for details.
You should delete this file as anything here is automatically
distributed in the source tarball or installed.
bin/custom-executable
---------------------
::
#!/bin/sh
echo "Custom script executed"
data/README.md
--------------
::
Container for arbitrary data needed by tests
============================================
You can refer to files from this directory, in your scripts, using
the $PLAINBOX\_PROVIDER\_DATA environment variable. See the job
examples/data-access for details.
You should delete this file as anything here is automatically
distributed in the source tarball or installed.
data/example.dat
----------------
::
DATA
jobs/examples-trivial.txt
-------------------------
::
# Two example jobs, both using the 'shell' "plugin". See the
# documentation for examples of other test cases including
# interactive tests, "resource" tests and a few other types.
#
# The summary and description keys are prefixed with _
# to indicate that they can be translated.
#
# http://plainbox.rtfd.org/en/latest/author/jobs.html
id: examples/trivial/always-pass
_summary: A test that always passes
_description:
A test that always passes
.
This simple test will always succeed, assuming your
platform has a 'true' command that returns 0.
plugin: shell
estimated_duration: 0.01
command: true
id: examples/trivial/always-fail
_summary: A test that always fails
_description:
A test that always fails
.
This simple test will always fail, assuming your
platform has a 'false' command that returns 1.
plugin: shell
estimated_duration: 0.01
command: false
jobs/examples-normal.txt
------------------------
::
id: examples/normal/data-access
_summary: Example job using provider-specific data
_description:
This test illustrates that custom data can be accessed using
the $PLAINBOX_PROVIDER_DATA environment variable. It points to
the absolute path of the data directory of the provider.
plugin: shell
estimated_duration: 0.01
command:
test "$(cat $PLAINBOX_PROVIDER_DATA/example.dat)" = "DATA"
id: examples/normal/bin-access
_summary: Example job using provider-specific executable
_description:
This test illustrates that custom executables can be accessed
directly, if placed in the bin/ directory of the provider.
.
Those are made available in the PATH, at runtime. This job
succeeds because the custom-executable script returns 0.
plugin: shell
estimated_duration: 0.01
command: custom-executable
id: examples/normal/info-collection
_summary: Example job attaching command output to results
_description:
This test illustrates that output of a job may be collected
for analysis using the plugin type ``attachment``
.
Attachment jobs may fail and behave almost the same as shell
jobs (exit status decides their outcome)
.
The output is saved but, depending on how tests are run and how results
are handled, may not be displayed. You can save attachments
using, for example, the JSON test result exporter, like this:
``plainbox run -f json -p with-attachments``
plugin: attachment
estimated_duration: 0.01
command: cat /proc/cpuinfo
jobs/examples-intermediate.txt
------------------------------
::
id: examples/intermediate/dependency-target
_summary: Example job that some other job depends on
_description:
This test illustrates how a job can be a dependency of another
job. The dependency graph can be arbitrarily complex, it just
cannot have any cycles. Plainbox will discover various problems
related to dependencies, including cyclic dependencies and
jobs that are depended upon without being defined.
.
This job simply "passes" all the time but realistic examples
may include multi-stage manipulation (detect a device, set it
up, perform some automatic and some manual tests and summarise
the results, for example)
plugin: shell
command: true
estimated_duration: 0.01
id: examples/intermediate/dependency-source
_summary: Example job that depends on another job
_description:
This test illustrates how a job can depend on another job.
.
If you run this example unmodified (selecting just this job)
you will see that Plainbox will automatically run the
'dependency-target' job before attempting to run this one.
This will happen, even if you explicitly order the jobs
incorrectly.
.
If you edit the 'dependency-target' job to run 'false' instead
of 'true' and rerun this job you will see that it automatically
fails without being started. This is because of a rule which
automatically fails any job that has a failed dependency.
plugin: shell
command: true
depends: examples/intermediate/dependency-target
estimated_duration: 0.01
# TODO: this should be possible:
# name: examples/intermediate/detected-device
# resource-object: examples.intermediate.detected_device
id: detected_device
_summary: Example job producing structured resource data
_description:
This job illustrates that not all jobs are designed to be a
"test". Plainbox has a system of the so-called resources.
.
Technically a resource is a list of records with named fields.
Any program that prints RFC822-like output can be considered a
valid resource. Here a hypothetical resource program has
detected (fake) two devices which are represented as records
with the field ``device``.
.
Resources are run on demand, their output parsed and stored.
All resources are made available to jobs that use resource
programs. See the next job for an example of how that can be
useful.
plugin: resource
command:
echo "type: WEBCAM"
echo ""
echo "type: WIFI"
estimated_duration: 0.03
id: examples/intermediate/test-webcam
_summary: Example job depending on structured resource
_description:
This test illustrates two concepts. It is the first test that
uses manual jobs (totally not automated test type). It also
uses a resource dependency, via a resource program, to limit
this test only on a machine that has a hypothetical webcam.
.
If you run this example unmodified (selecting just this job)
you will see that Plainbox will automatically run the
'detected_device' job before attempting to run this one. This
will happen, even if you explicitly order the jobs incorrectly.
.
If you edit the resource job to not print information about the
hypothetical WEBCAM device (just remove that line) and rerun
this job you will see that it automatically gets skipped
without being started. This is because of a rule which
automatically skips any job that has an unmet requirement.
.
Resources are documented in detail here:
http://plainbox.rtfd.org/en/latest/search.html?q=resources
Please look at the ``Resources`` chapter there (it may move so
a search link is more reliable)
plugin: manual
requires:
detected_device.type == "WEBCAM"
estimated_duration: 30
po/POTFILES.in
--------------
::
[encoding: UTF-8]
[type: gettext/rfc822deb] jobs/examples-trivial.txt
[type: gettext/rfc822deb] jobs/examples-normal.txt
[type: gettext/rfc822deb] jobs/examples-intermediate.txt
manage.py
whitelists/trivial.whitelist
----------------------------
::
# select two trivial jobs by directly selecting their names
examples/trivial/always-pass
examples/trivial/always-fail
whitelists/normal.whitelist
---------------------------
::
# use regular expression to select all normal jobs
examples/normal/.*
plainbox-0.25/docs/author/provider-i18n.rst

=============================
Provider Internationalization
=============================
About
-----
:term:`Plainbox` offers a way for test authors to create localized testing
experience. This allows test developers to mark certain strings as
translatable and make them a part of existing internationalization and
localization frameworks.
Working with translations
-------------------------
In practical terms, the summary and description of each job definition can now
be translated to other languages. The provider management tool (``manage.py``)
can now extract, merge and build translation catalogs that will be familiar to
many developers.
The job definition file format already supported this syntax but it was not
acted upon by Plainbox before. If you are maintaining an existing provider,
the only new things for you may be that a job name (summary) is now also
translatable and that there are dedicated tools that make the process easier.
Looking at an example job definition from the :doc:`provider-template`::
id: examples/trivial/always-pass
_summary: A test that always passes
_description:
A test that always passes
.
This simple test will always succeed, assuming your
platform has a 'true' command that returns 0.
plugin: shell
estimated_duration: 0.01
command: true
The summary and description fields are prefixed with ``_`` which allows their
value to be collected to a translation catalog.
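At runtime the translated strings are looked up through standard gettext,
using the provider's gettext domain. A minimal sketch of that lookup (with no
compiled catalog installed, the original English string is returned):

```python
import gettext

# With fallback=True a missing catalog yields NullTranslations, which
# simply returns the original message; with a catalog compiled under
# build/mo/ the translated summary would be returned instead.
t = gettext.translation("2013_com_example_template", fallback=True)
print(t.gettext("A test that always passes"))
```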
Updating Translations
---------------------
Whenever you edit those fields you should run ``./manage.py i18n``. This
command will perform several steps:
* All files mentioned in ``po/POTFILES.in`` will be scanned and translatable
messages will be extracted.
* The ``po/*.pot`` file will be rewritten based on all of the extracted
strings.
* The ``po/*.po`` files will be merged with the new template. New strings may
be added, similar but changed strings will be marked as *fuzzy* so that a
human translator can ensure they are okay (and typically make small changes)
by removing the fuzzy tag. Unused strings will be commented out but not
removed.
* Each ``po/*.po`` file will be compiled to a ``build/mo/*/LC_MESSAGES/*.mo``
file. Those files are what is actually used at runtime. If you ran
``manage.py develop`` on your provider you should now see translated values
being available.
Each of those actions can be individually disabled. See ``manage.py i18n
--help`` for details. This may be something you need to do in your build
system.
Translating Job Definitions
---------------------------
After generating the template file at least once you can translate all of the
job definitions into other languages.
There are many tools available to make this task easier. To get started just
copy the ``.pot`` file to ``LL.po``, where *LL* is the code of the language you
want to translate to, and start editing. Run ``manage.py i18n`` often to spot syntax
issues and get updated values as you typically will edit code and translations
at the same time. Make sure that your editor can detect when a file is being
overwritten and offers to refresh the edited copy; ``manage.py i18n`` almost
always changes the layout of the file.
Once you commit the template file to your version control then you can use
tooling support offered by code hosting sites, such as Launchpad.net, to allow
the community to contribute translations. You can also seek paid services that
offer professional translations on a deadline. In both cases you should end up
with additional ``.po`` files in your repository.
.. note::
If English is not your first language it's a very good idea to try to keep
all of the strings translated to your language of choice and use the
translated version daily. This process allows you to think about the
English text, correct confusing statements, reword sentences and think
about the terminology used throughout your tests. It will also show missing
strings (those that are not marked for translation) or missing translator
comments.
Remember: If you, the author of the test, cannot reasonably translate your
test definitions into your native language, how can anyone else do it?
Translating Test Programs
-------------------------
Test definitions are not the whole story. It is probably even more important to
translate various testing programs or utilities that your test definitions
depend on.
Standard development practices apply, you should make properly translated
testing applications. It is advisable to reuse the same gettext domain as your
test definitions so that you can reasonably measure how much of your test
definition content is available in a given language.
For third party applications you may consider ensuring that they can be
localized and translated, file bugs or contribute patches, including
translations, for the languages that you care about.
Working with Version Control Systems
------------------------------------
It is advisable to separate commits that change the original string to the
commits that update the translation template file and individual translation
catalogues. The latter tend to be very long and almost impossible for anyone to
review without specialized tools.
Keep in mind that changes to actual translations that are *not* caused by
updates to the template file should be separated as well. This will allow
reviewers to actually look at the changes in text (assuming that more than one
person on the team knows that language).
Lastly you should never commit any of the build/ files (especially the
generated, compiled ``.mo`` files) into the version control system.
Further Reading
---------------
You may find those links handy:
* https://help.launchpad.net/Translations/YourProject
* https://help.launchpad.net/Translations/StartingToTranslate
* https://www.transifex.com/
* https://www.gnu.org/software/gettext/manual/gettext.html
=========================
Provider Definition Files
=========================
Provider Definition Files are how :term:`Plainbox` learns about
:term:`providers <provider>`.
.. warning::
Normally provider definition files are generated automatically by
manage.py. They are generated both by ``manage.py install`` and
``manage.py develop``. It should not be necessary to create such
a file by hand.
Lookup Directories
==================
Plainbox discovers and loads providers based on ``.provider`` files placed in one
of the following three directories:
* ``/usr/local/share/plainbox-providers-1``
* ``/usr/share/plainbox-providers-1``
* ``$XDG_DATA_HOME/plainbox-providers-1``, typically
``$HOME/.local/share/plainbox-providers-1``
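A rough sketch of how these lookup directories could be computed in Python (assuming the usual XDG fallback of ``~/.local/share``; the real Plainbox code may differ in detail):

```python
import os

# $XDG_DATA_HOME defaults to ~/.local/share when unset or empty.
xdg_data_home = os.environ.get("XDG_DATA_HOME") or os.path.expanduser(
    "~/.local/share")

provider_search_dirs = [
    "/usr/local/share/plainbox-providers-1",
    "/usr/share/plainbox-providers-1",
    os.path.join(xdg_data_home, "plainbox-providers-1"),
]
```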
File Structure
==============
Each provider file has a similar structure, based on the well-known ``.ini``
file syntax. Square brackets denote sections, each of which contains arbitrary
key-value entries.
Currently only one section is used, *Plainbox Provider*.
The [Plainbox Provider] Section
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following keys may be defined in this section:
name
The format for the provider name is an RFC3720 IQN. This is specified in
:rfc:`3720#section-3.2.6.3.1`. It is used by Plainbox to uniquely identify
the provider.
version
The version of this provider. It must be a sequence of decimal numbers with
arbitrarily many dots separating particular parts of the version string.
description
A short description of the provider. This value can be localized.
jobs_dir
Absolute pathname to a directory with :term:`job definitions <job>`
as individual ``.txt`` files using the :doc:`job file format `.
whitelists_dir
Absolute pathname to a directory with :term:`whitelists <whitelist>`
as individual ``.whitelist`` files using the
:doc:`whitelist format `.
bin_dir
Absolute pathname to a directory with additional executables required by
any of the job definitions.
data_dir
Absolute pathname to a directory with additional data files required by
any of the job definitions.
locale_dir
Absolute pathname to a directory with translation catalogues.
The value should be suitable for :py:func:`bindtextdomain()`. This should
not be specified except in special circumstances.
location
Absolute pathname to a *base* directory that can be used to derive all of
the other directories. If defined, each of the dir variables mentioned above
gets an implicit default value:
================ =====================
Variable Default Value
================ =====================
jobs_dir $location/jobs
whitelists_dir $location/whitelists
bin_dir $location/bin
data_dir $location/data
locale_dir $location/locale
locale_dir (alt) $location/build/mo
================ =====================
Example
=======
An example provider definition file looks like this::
[Plainbox Provider]
name = 2013.com.canonical:myprovider
version = 1.0
description = My Plainbox test provider
location = /opt/2013.com.canonical.myprovider/
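Since the format is plain ``.ini``, such a file can be read with Python's ``configparser``. The sketch below is not Plainbox's actual loader (which validates far more and knows every key listed above); it only illustrates applying the implicit ``$location/jobs`` default:

```python
import configparser


def read_provider_definition(path):
    # Minimal sketch of reading a .provider file.
    parser = configparser.ConfigParser()
    parser.read(path)
    section = parser["Plainbox Provider"]
    location = section.get("location")
    # jobs_dir falls back to the implicit $location/jobs default.
    jobs_dir = section.get(
        "jobs_dir", location.rstrip("/") + "/jobs" if location else None)
    return {
        "name": section["name"],
        "version": section["version"],
        "jobs_dir": jobs_dir,
    }
```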
.. _tutorial:
========
Tutorial
========
To best illustrate how providers work, we will walk through creating one
step-by-step. At the end of this tutorial you will have a provider which adds
a new :term:`whitelist`, several new jobs and the scripts and test data
supporting those jobs. Before starting this tutorial you will need to have a
running version of :term:`Plainbox` installed. You can either install it from
the repositories of Debian or its derivatives by running ``apt-get install
plainbox``, or if you prefer to work with the source, see :doc:`Getting
started with development <../dev/intro>`. There is also a Launchpad PPA with
the very latest development build for Ubuntu, which is ``ppa:checkbox-dev/ppa``.
#. To get started we create an initial template for our provider by running
``plainbox startprovider 2014.com.example:myprovider``.
#. This will create a directory called ``2014.com.example:myprovider``
where this year is of course the current year (2014 is when this document
was written). Change to this directory and you will see that it contains::
/bin
/data
/integration-tests
/jobs
manage.py
README.md
/whitelists
The ``manage.py`` script is a helper script for developing the provider.
It provides a set of commands which assist in validating the correctness
of the provider and making it ready for distribution.
#. Let’s create some jobs first by changing to the jobs directory. It currently
contains a file called ``category.txt`` which serves as an example of how
jobs should look. Let’s delete it and instead create a file called
``myjobs.txt``. This can contain the following simple jobs::
plugin: shell
name: myjobs/shell_command
command: true
_description:
An example job that uses a command provided by the shell.
plugin: shell
name: myjobs/provider_command
command: mycommand
_description:
An example job that uses a test command provided by this provider.
At this point we can check that everything looks okay by running the command
``./manage.py info`` which displays some information about the provider. The
output should be something like::
[Provider MetaData]
name: 2014.com.example:myprovider
version: 1.0
[Job Definitions]
'myjobs/shell_command', from jobs/myjobs.txt:1-5
'myjobs/provider_command', from jobs/myjobs.txt:7-11
[White Lists]
'category', from whitelists/category.whitelist:1-1
This shows both jobs from the job file we added - great!
#. Next we need to change directory to ``bin`` to add the command used by the
job ``myjobs/provider_command``. We create a file there called
``mycommand`` which contains the following text::
    #!/bin/sh
    test "$(cat "$CHECKBOX_SHARE/data/testfile")" = 'expected'
This needs to be executable to be used in the job command so we need to run
``chmod a+x mycommand`` to make it executable.
You'll notice the command uses a file in ``$CHECKBOX_SHARE/data`` - we'll
add this file to our provider next.
#. Because the command we wrote relies on a file that we expect to be located in
``$CHECKBOX_SHARE/data``, we need to add this file to our provider so that
after the provider is installed this file is available in that location.
First we need to change to the directory called ``data``, then as indicated
by the contents of the script we wrote in the previous step, we need to
create a file there called ``testfile`` with the contents::
expected
As simple as that!
#. Lastly we need to add a :term:`whitelist` that utilizes the jobs we created
earlier. We need to change to the directory called ``whitelists``. As with
the ``jobs`` directory there is already an example file there called
``category.whitelist``. We can delete that and add a file called
``mywhitelist.whitelist``. The contents should be::
myjobs/shell_command
myjobs/provider_command
Jobs from other providers, such as ``miscellanea/submission_resources`` from
the default provider that is part of Plainbox, can also be listed in a whitelist.
We can check that everything is correct with the whitelist by running the
``./manage.py info`` command again. The output should be like::
[Provider MetaData]
name: 2014.com.example:myprovider
version: 1.0
[Job Definitions]
'myjobs/shell_command', from jobs/myjobs.txt:1-5
'myjobs/provider_command', from jobs/myjobs.txt:7-11
[White Lists]
'mywhitelist', from whitelists/mywhitelist.whitelist:1-2
Our new :term:`whitelist` is listed there.
#. Now we have a provider we need to test it to make sure everything is
correct. The first thing to do is to install the provider so that it
is visible to Plainbox. Run ``./manage.py develop``, then run
``plainbox dev list provider``. Your provider should be in the list
that is displayed.
#. We should also make sure the whole provider works end-to-end by running
the :term:`whitelist` which it provides. Run the following command -
``plainbox run -w whitelists/mywhitelist.whitelist``.
#. Assuming everything works okay, we can now package the provider for
distribution. This involves creating a basic ``debian`` directory
containing all of the files needed for packaging your provider. Create
a directory called ``debian`` at the base of your provider, and then
create the following files within it.
``compat``::
9
``control``::
Source: plainbox-myprovider
Section: utils
Priority: optional
Maintainer: Brendan Donegan
Standards-Version: 3.9.3
X-Python3-Version: >= 3.2
Build-Depends: debhelper (>= 9.2),
lsb-release,
python3 (>= 3.2),
python3-plainbox
Package: plainbox-myprovider
Architecture: all
Depends: plainbox-provider-checkbox
Description: My whitelist provider
A provider for Plainbox.
``rules``::
    #!/usr/bin/make -f
    %:
    	dh "$@"

    override_dh_auto_build:
    	$(CURDIR)/manage.py install
Note that the ``rules`` file must be executable. Make it so with
``chmod a+x rules``. Also, be careful with the indentation in the
file - all indents must be actual TAB characters, not four spaces
for example.
``source/format``::
3.0 (native)
Finally we should create a ``changelog`` file. The easiest way to do this
is to run the command ``dch --create 'Initial release.'``. You'll need to
edit the field ``PACKAGE`` to the name of your provider and the field
``VERSION`` to something like ``0.1``.
====================
Provider Name-Spaces
====================
Name-spaces are a new feature in the 0.5 release. They alter typically short
job identifiers (names) and prefix them with a long and centrally-managed name
space identifier to ensure that jobs created by different non-cooperating but
well-behaving authors are uniquely distinguishable.
Theoretical Considerations
==========================
About name-spaces
-----------------
Starting with the 0.5 release, Plainbox supports name-spaces for job
identifiers. Each job has a partial identifier which is encoded by the ``id:``
or the legacy ``name:`` field in job definition files. That partial identifier
is prefixed with the name-space of the provider that job belongs to. This
creates unique names for all jobs.
Rationale
---------
Historically the :term:`Checkbox` project used to ship with a collection of job
definitions for various testing tasks. Since there was only one organization
controlling all jobs there was no problem of undesired clashes as all the
involved developers could easily coordinate and resolve issues.
With the rewrite that brought :term:`Plainbox` the core code and the pluggable
data concept were becoming easier to work with, and during the 0.4 development
cycle we had decided to offer first-class support for external developers to
work on their own test definitions separately of the Canonical Hardware
Certification team that maintained the Checkbox project.
The first concern that became obvious as we introduced test providers was that
the name-space for all identifiers (job names at the time) was flat. As
additional test authors started using providers and, devoid of the baggage of
experience with legacy Checkbox, used natural, generic names for job
definitions it became clear that in order to work each test author needs to
have a private space where no clashes are possible.
Name-Space Organization Guide
-----------------------------
This section documents some guidelines for using name-spaces in practice.
Provider Name Spaces and IQN
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Plainbox name-spaces are based on the iSCSI IQN concept. It is a simple
extension of the usage of DNS names to create name-spaces. As DNS is externally
managed anyone owning a domain name can use that domain name and have a high
chance of avoiding clashes (as long as no party is maliciously trying to create
clashing names). IQN extends that with a year code. Since DNS name ownership
can and does change (people don't extend domains, companies change ownership,
etc.) it was important to prevent people from having to own a domain forever to
ensure name-space collisions are avoided. By prepending the four-digit year
number when a domain was owned by a particular entity, anyone that ever owned a
domain can create unique identifiers.
Sole Developers
^^^^^^^^^^^^^^^
If you are a sole developer you need to own at least one domain name at least
once. Assuming you owned example.com in 2014, you can create arbitrarily many
name-spaces starting with ``2014.example.com``. It is advisable to use at least
one sub-domain if you know up front that the tests you are working on are for a
particular, well-defined task. For example, you could use
``2014.example.com.project1``.
Within that name-space you can create arbitrarily many test providers (typically
to organize your dependencies so that not everything needs to be installed at
once). An example provider could be called
``2014.example.com.project1:acceptance-tests``. If you have two jobs inside
that provider, say ``test-1`` and ``test-2``, they would be called (**surprise**)
``2014.example.com.project1::test-1`` and
``2014.example.com.project1::test-2``.
Organizations and Companies
^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you are working as a part of an organization you should coordinate within
that organization and use the same rules as the sole developer above. The
primary difference is that you should really always use a sub-domain (so for
example, ``2014.example.com.department``) to differentiate your tests from
tests that may be independently developed by other people within the same
company. It is recommended that managers of particular organizational units
decide on the particular name-space to use.
Important Notes
^^^^^^^^^^^^^^^
There are two important notes that apply to everyone:
.. note::
Remember that the provider namespace is **derived** from the provider name: the
part after the colon, including the colon itself, is discarded. Providers are a
way to organize tests together for dependencies. Namespaces are a way to
organize tests regardless of dependencies.
.. warning::
If you are reading this in 2015 and beyond, don't bump the year component.
Unless you are the new owner of ``example.com`` and you want to
differentiate your tests from whoever used to own *example.com* in 2014 you
should **keep using the same year forever**. If you bump the year all the
time you will create lots of small namespaces and you will most likely
break other people that may run your tests with a fixed identifier
hardcoded in a package name or script.
Technical Details
=================
Implicit Provider Name-Space
----------------------------
As mentioned above, the provider name-space is derived from the provider name::
    2014.com.example.project:acceptance
    ^----------------------^
               |
      provider namespace

    ^---------------------------------^
                     |
                provider name
The part of the provider name before the colon is used as the name-space. The
colon is *not* a part of the name-space.
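The derivation rule is simple enough to capture in a few lines of Python (a sketch; the function names are illustrative, not part of the Plainbox API):

```python
def provider_namespace(provider_name):
    # Everything before the colon; the colon itself is not part of it.
    return provider_name.partition(":")[0]


def qualified_job_id(provider_name, partial_id):
    # Non-partial job IDs join the namespace and the partial ID with "::".
    return provider_namespace(provider_name) + "::" + partial_id
```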
The implicit name-space is used to construct non-partial job definition names
as well as to implicitly prefix each pattern inside :term:`whitelists <whitelist>`.
Using Explicit Name-Spaces
--------------------------
Explicit name-spaces need to be used in two situations:
1. When running a single job by name, e.g.: ``plainbox run -i
2013.com.canonical.plainbox::stub/true``.
This is required as any partial ID may silently change the job it resolves
to and we didn't want to introduce that ambiguity.
2. When including a job from another name-space inside a whitelist, e.g.::
~/2014.com.example.some:provider$ cat whitelists/cross.whitelist
job-a
job-b
2014\.com\.example\.other::job-a
~/2014.com.example.some:provider$
Here the whitelist names three jobs:
* 2014.com.example.some::job-a
* 2014.com.example.some::job-b
* 2014.com.example.other::job-a
Note that the dots are escaped with ``\`` to prevent them from matching an
arbitrary character.
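The escaping matters because whitelist patterns are regular expressions. A quick Python illustration (``re.escape`` is used here only as a convenience; in a whitelist the backslashes are written by hand):

```python
import re

# Unescaped, '.' matches any single character, so this pattern also
# matches identifiers it was never meant to:
loose = "2014.com.example.other::job-a"
assert re.fullmatch(loose, "2014Xcom.example.other::job-a")

# Escaping the dots pins the pattern to the literal namespace:
strict = re.escape("2014.com.example.other") + "::job-a"
assert re.fullmatch(strict, "2014.com.example.other::job-a")
assert not re.fullmatch(strict, "2014Xcom.example.other::job-a")
```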
Custom Executables & Execution Environment
------------------------------------------
When Plainbox needs to execute a job with a shell command it constructs a
special execution environment that includes additional executables specific to
some providers. The execution environment is comprised of a directory with
symbolic links to all the private executables of all of the providers that have
the same name-space as the provider that owns the job that is to be executed.
Names of custom executables should be treated identically to job identifiers;
they share a private name-space (though separate from job names) and need to be
managed in the same way.
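The symlink farm described above can be sketched in Python as follows (illustrative only; the real implementation handles name collisions, cleanup and more):

```python
import os
import tempfile


def build_exec_env(providers, namespace):
    # Symlink the private executables of every provider sharing `namespace`
    # into one directory, mimicking the execution environment.
    exec_dir = tempfile.mkdtemp(prefix="plainbox-exec-")
    for provider in providers:
        if provider["namespace"] != namespace:
            continue
        for name in os.listdir(provider["bin_dir"]):
            os.symlink(os.path.join(provider["bin_dir"], name),
                       os.path.join(exec_dir, name))
    return exec_dir
```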
Limitations and Known Issues
============================
List of issues as of version 0.5
--------------------------------
* It is impossible to use a resource from one name-space in a job definition
from another name-space. This restriction should be lifted with the
introduction of additional syntax in subsequent versions.
* It is impossible for a local job to generate a new job definition in a
different name-space than the one of the local job itself. This limitation is
likely not to be lifted.
=========
Providers
=========
Providers are a new feature introduced in Plainbox 0.5. They allow third party
developers to produce and maintain private and public test collections.
All :term:`jobs <job>` and :term:`whitelists <whitelist>` are now loaded from a provider. This
also affects the :term:`Checkbox` project that now produces a custom user
interface and a number of providers for various purposes.
.. toctree::
provider-template.rst
provider-namespaces.rst
provider-i18n.rst
provider-files.rst
=======================
Job and Test Developers
=======================
This chapter organizes information useful for developers creating and
maintaining jobs and test scripts but not directly involved in changing the
core.
.. toctree::
intro.rst
tutorial.rst
qml-job-tutorial.rst
providers.rst
whitelists.rst
rfc822.rst
faq.rst
.. warning::
This chapter is very much under development. The list of stories below is a
guiding point for subsequent editions that will expand and provide real
value.
Personas and stories
--------------------
* I'm a developer working on the checkbox project. With my *job developer* hat
on:
* how does plainbox help me do my job when...
* ... I'm fixing a bug in existing jobs or scripts?
* ... I'm working on a new job from scratch?
* ... I'm working on private collection of jobs?
* how can I check for syntax correctness, simple errors, etc?
* how can I write automated tests for my jobs?
* how can I run automated tests for my jobs?
* how can I document my jobs so that others can understand and use them
better?
* I'm a developer working on a derivative of the checkbox project. I don't know
much about plainbox. What should I be aware of and how can I use plainbox to
do my job better?
* (same as above but with different assumptions about initial familiarity
with plainbox)
* how can I find about all the existing jobs?
* how can I find about all the existing resource jobs?
Key topics
----------
.. note::
The list here should always be based on the personas and stories section
above.
* Introduction to plainbox
* Where is plainbox getting the jobs from?
* Creating and maintaining jobs with plainbox
.. _rfc822:
=============================
Plainbox RFC822 Specification
=============================
The syntax is only loosely inspired by the actual :RFC:`822` syntax. Since
Plainbox is not processing email, the original specification is used only as an
inspiration. One of the most important aspects of the syntax we're using is
relative familiarity for other users of the system and ease-of-use when using
general, off-the-shelf text editors.
BNF
---
An approximated syntax can be summarized as the following BNF::
    record-list: record-list '\n\n' record
               | record
    record: entry-list
    entry-list: entry-list '\n' entry
              | entry
    entry: KEY ':' VALUE
    KEY: ^[^:]+
    VALUE: .+\n([ ].+)*
There are two quirks which are not handled by this syntax (see below). Otherwise
the syntax is very simple. It defines a list of records. Each record is a list
of entries. Each entry is a key-value pair. Values can be multi-line, which
allows for convenient expression of longer text fragments.
Quirk 1 -- the magic dot
------------------------
Due to the way the multi-line VALUE syntax is defined, it would be impossible
(or possible but dependent only on whitespace, which is not friendly) to
include two consecutive newlines. For that reason a line consisting of a single
space, followed by a single dot is translated to an empty line.
The example below::
    key:
     .
     value
Is parsed as an ENTRY (in python syntax)::
    ("key", "\nvalue")
Quirk 2 -- the # comments
-------------------------
Since it's a line-oriented format and people are used to being able to insert
comments anywhere with the ``# comment`` notation, any line that *starts* with
a hash or pound character is discarded. This happens earlier than other parts
of parsing so comments are invisible to the rest of the parser. They can be
included anywhere, including in the middle of a multi-line value.
Example::
    # this is a comment
    key: value
     multi-line
    # comment!
     and more
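Putting both quirks together, a minimal Python parser for this format could look like the sketch below (the real Plainbox parser is considerably more careful about errors and origin tracking):

```python
def parse_records(text):
    """Sketch of the simplified RFC822-ish parser described above."""
    records, entry, key, parts = [], {}, None, None

    def flush_entry():
        nonlocal key, parts
        if key is not None:
            entry[key] = "\n".join(parts)
            key, parts = None, None

    for line in text.splitlines():
        if line.startswith("#"):        # quirk 2: comment lines vanish early
            continue
        if not line.strip():            # a blank line separates records
            flush_entry()
            if entry:
                records.append(entry)
                entry = {}
            continue
        if line.startswith(" ") and key is not None:
            cont = line[1:]
            if cont == ".":             # quirk 1: " ." encodes an empty line
                cont = ""
            parts.append(cont)
            continue
        flush_entry()
        key, _, value = line.partition(":")
        parts = [value.strip()] if value.strip() else []

    flush_entry()
    if entry:
        records.append(entry)
    return records
```

Feeding it the magic-dot example above yields ``("key", "\nvalue")`` as described.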
========================
Checkbox Whitelist Files
========================
When creating a test suite for a particular purpose, it will be necessary to
specify which tests to run and which order they should run in. For this purpose
Checkbox provides the concept of Whitelists.
Whitelist Format
================
A whitelist is a text file containing a line-separated sequence of patterns,
each representing one or more 'jobs'. These patterns are in the Python regular
expression syntax. Comments may be included in the file by starting the line
with '#'.
Minimal Whitelist File
======================
In order to be useful a whitelist file needs to include a particular subset of
jobs which provide Checkbox with all of the information it needs to run tests
properly. These include jobs which attach hardware information and resource
jobs which provide other jobs with information about the environment they are
running in (available hardware, available packages, etc.). To make this easy
to do, a single job exists whose purpose is to execute all of these other jobs::
miscellanea/submission-resources
This should be included as the first job in any whitelist.
Job Categories
==============
In order to allow Checkbox to display jobs by category in the UI it is
necessary to include a particular local job which itself generates jobs which
belong to that category. This job will normally look like ``__<name>__``,
where ``<name>`` is the name of the job file which contains the job. This is
indicated again by the prefix of the job (before the ``/`` in the job name).
As a quick example, the job ``graphics/glxgears`` is contained in
``graphics.txt``. Therefore we should include the ``__graphics__`` job so that
the ``graphics/glxgears`` job shows correctly under the category. The
``__graphics__`` job itself looks like::
    name: __graphics__
    plugin: local
    _description: Graphics tests
    command:
     shopt -s extglob
     cat $CHECKBOX_SHARE/jobs/graphics.txt?(.in)
Checkbox will interpret this job as a request to display any job in
``graphics.txt`` (or its untranslated version ``graphics.txt.in``) under the
heading shown in the description of this job (in this case 'Graphics tests').
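The naming convention can be expressed as a one-line helper (hypothetical; Checkbox derives this internally):

```python
def category_job_for(job_id):
    # "graphics/glxgears" -> "__graphics__": the category is the part of
    # the job name before the "/".
    return "__%s__" % job_id.split("/", 1)[0]
```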
Tutorial
========
To compound what we discussed before, below is a brief tutorial which walks
through assembling a basic whitelist file.
1. First we need to create a file; let's name it ``tutorial.whitelist``.
Whitelists don't have to end with the ``.whitelist`` suffix but this is the
convention used to help identify them.
2. We start by adding the one job that is required for all whitelists, as
explained above in the section 'Minimal Whitelist File', so our whitelist file
looks like::
miscellanea/submission-resources
3. Next we should choose some jobs that we want to run. This all depends on
your specific use-case of course, but I've selected a few jobs that will help
clearly illustrate more of the concepts involved in whitelists. These jobs will
give us a whitelist file that looks like::
miscellanea/submission-resources
cpu/clocktest
ethernet/multi_nic
ethernet/multi_nic_eth0
graphics/glxgears
If we run this whitelist now then all of these jobs will be executed and a
valid test submission will be created, but we can still improve it in a couple
of ways.
4. The first way is by adding the necessary jobs to allow the Checkbox UI to
group the jobs into specific categories. To do this we need to add a job with
a name like ``__<category>__`` for each category. We have three categories in
our whitelist file - cpu, ethernet and graphics. The category of the job is
the prefix of the job name prior to the ``/``. So now our whitelist file looks
like::
miscellanea/submission-resources
__cpu__
__ethernet__
__graphics__
cpu/clocktest
ethernet/multi_nic
ethernet/multi_nic_eth0
graphics/glxgears
Now the Checkbox UI will group the jobs into these categories.
5. Although it's not immediately apparent there is another problem with this
whitelist. As written, it only includes the job generated for the ethernet
port 'eth0' (``ethernet/multi_nic_eth0``). It would be better if we included all of the
jobs generated by 'ethernet/multi_nic', no matter how many ethernet ports are
present on the system under test. The best way to do this is to write the
pattern so that it matches all of the possible job names. We can take advantage
of the Python regular expression syntax and use the ``\d`` special character
to match any decimal number. After doing this the whitelist file will look
like this::
miscellanea/submission-resources
__cpu__
__ethernet__
__graphics__
cpu/clocktest
ethernet/multi_nic
ethernet/multi_nic_eth\d
graphics/glxgears
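To see why the ``\d`` pattern picks up every generated job, here is a simplified Python model of whitelist matching (actual Checkbox matching is more involved, but each pattern must match a whole job name):

```python
import re

whitelist_patterns = [
    "cpu/clocktest",
    "ethernet/multi_nic",
    r"ethernet/multi_nic_eth\d",
    "graphics/glxgears",
]
# A hypothetical set of jobs generated on the system under test:
available_jobs = [
    "cpu/clocktest",
    "ethernet/multi_nic_eth0",
    "ethernet/multi_nic_eth1",
    "wireless/scan",
]
# A job is selected when some pattern matches its whole name.
selected = [job for job in available_jobs
            if any(re.fullmatch(pattern, job)
                   for pattern in whitelist_patterns)]
```

Both ``multi_nic_eth0`` and ``multi_nic_eth1`` are selected, while ``wireless/scan`` is not.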
Frequently Asked Questions
==========================
FAQ 1
-----
Q: What does "advice: please use .pxu as the extension for all files with
plainbox units" mean?
A: It means that you should just rename your ``.txt`` or ``.txt.in`` files
to ``.pxu``. We're doing this because we want to standardize the new file
extension and provide syntax highlighting in common text editors.
For now you can also look at the ``plainbox/contrib/pxu.vim`` directory to use
our experimental syntax highlighting file for Vim. Improvements to support other
editors are highly welcome!
FAQ 2
-----
Q: What's the difference between description and purpose/steps/verification
fields in job definition and how should I use them?
A: Description should contain all the information needed to perform the test.
For tests requiring human interaction, description field should contain
information about the purpose of the test, all the steps that the user has to
perform and instructions on how to verify the outcome of the test. In order to
draw a finer distinction between the aforementioned stages of test execution,
the use of purpose, steps and verification fields is recommended. Since version
0.17 of plainbox some user interfaces take advantage of the new fields set.
They will display the purpose of the test prior to its execution, steps
information while executing them and verification instruction when the test is
done. Note that the purpose, steps and verification fields are used only in job
definitions requiring human interaction, i.e. ones of plugin type 'manual',
'user-interact', and 'user-interact-verify'.
========================
QML-native Jobs Tutorial
========================
.. contents::
What is a qml-native job
------------------------
A qml-native job is a simple Qt Quick application (usually a single .qml file)
designed to test computer systems like any other plainbox job, the difference
being that it can have a full-blown GUI and communicates with the checkbox
stack using a predefined interface.
Software requirements
---------------------
To develop and run qml-native jobs you need two things:
Ubuntu-SDK and Plainbox
Ubuntu-SDK installation
```````````````````````
To install Ubuntu-SDK just run
``# apt-get install ubuntu-sdk``
Ubuntu-SDK, once opened, will ask you if you want to create any kit.
.. image:: qml-tut-0.png
:scale: 100
:alt: ubuntu-sdk kit creation wizard.
Go ahead and create one matching the architecture you're running on, and grab
a coffee, as this may take a while. If prompted about emulator installation,
skip that screen.
Plainbox installation
`````````````````````
add checkbox-dev PPA:
``# apt-add-repository ppa:checkbox-dev/ppa``
retrieve the list of packages:
``# apt-get update``
install latest plainbox
``# apt-get install plainbox``
If you want to work on the greatest and latest of Plainbox, you might want to
use trunk version. To do that follow these steps::
$ bzr checkout --lightweight lp:checkbox
$ cd checkbox
$ ./mk-venv venv
$ . venv/bin/activate
Now you should be able to launch ``plainbox-qml-shell`` command.
First qml-native job - Smoke test
---------------------------------
Let's build a very basic test that shows pass and fail buttons. All
qml-native jobs start as an ordinary QtQuick ``Item {}``, with a
``testingShell`` property and a ``testDone`` signal, i.e.::
    import QtQuick 2.0

    Item {
        property var testingShell;
        signal testDone(var test);
    }
That's the boilerplate code every qml-native job will have.
Now let's add two buttons::
    import QtQuick 2.0
    import Ubuntu.Components 0.1

    Item {
        property var testingShell;
        signal testDone(var test);

        Column {
            Button {
                text: "pass"
                onClicked: testDone({outcome: "pass"})
            }
            Button {
                text: "fail"
                onClicked: testDone({outcome: "fail"})
            }
        }
    }
Save the above code as ``simple-job.qml``. We will run it in a minute.
``{outcome: "pass"}`` - this code creates an object with one property,
``outcome``, that is set to the value ``"pass"``.
``testDone({outcome: "pass"})`` - triggers the ``testDone`` signal, sending the
newly created object. This informs the governing infrastructure that the test is
done and the test passed.
How to run jobs
---------------
Now we're ready to test the newly developed qml job. Run::
$ plainbox-qml-shell simple-job.qml
.. image:: qml-tut-1.png
:scale: 100
:alt: the simple qml-native job running in plainbox-qml-shell.
It's not the prettiest qml code in the world, but it is a proper qml-native
plainbox job!
Multi-page tests
----------------
Two common approaches when developing a multi-page qml app are a flat page
structure or page navigation using a page stack.
Flat page hierarchy
```````````````````
The simplest way is to create two Page components and switch their visibility
properties. E.g.::
    Item {
        id: root
        property var testingShell;

        Page {
            id: firstPage
            Button {
                onClicked: {
                    firstPage.visible = false;
                    secondPage.visible = true;
                }
            }
        }
        Page {
            id: secondPage
            visible: false
        }
    }
Using page stack
````````````````
``testingShell`` defines a ``pageStack`` property that you can use for
multi-page tests with navigation. E.g.::
    Item {
        id: root
        property var testingShell;

        Page {
            id: firstPage
            visible: false
            Button {
                onClicked: testingShell.pageStack.push(secondPage)
            }
        }
        Page {
            id: secondPage
            visible: false
        }

        Component.onCompleted: testingShell.pageStack.push(firstPage)
    }
Migrating QtQuick app to a qml-native test
------------------------------------------
Start by creating an ordinary "QML App with Simple UI" project.
.. image:: qml-tut-2.png
:scale: 100
:alt: ubuntu-sdk new project wizard.
The code generated by SDK should look like this:
.. image:: qml-tut-3.png
:height: 525
:width: 840
:alt: ubuntu-sdk kit creation wizard.
Now you can do a typical iterative process of developing an app that should
have the look and feel of the test you would like to create.
Let's say you're satisfied with the following app::
import QtQuick 2.0
import Ubuntu.Components 1.1
MainView {
useDeprecatedToolbar: false
width: units.gu(100)
height: units.gu(75)
Page {
Column {
spacing: units.gu(1)
anchors {
margins: units.gu(2)
fill: parent
}
Label {
id: label
text: i18n.tr("4 x 7 = ?")
}
TextField {
id: input
}
Button {
text: i18n.tr("Check")
onClicked: {
if (input.text == 28) {
console.log("Correct!");
} else {
console.log("Error!");
}
}
}
}
}
}
Notice that the app has a ``MainView`` component and one ``Page`` component.
These are not needed in qml-native jobs, as the view is managed by the testing
shell. Also, the outcome of the app is a simple ``console.log()`` statement.
To convert this app to a proper qml-native job we need to do three things:
* remove the bits responsible for managing the view
* add ``testingShell`` property and the ``testDone`` signal
* call ``testDone`` once we have a result
Final result::
import QtQuick 2.0
import Ubuntu.Components 1.1
Item {
property var testingShell;
signal testDone(var test);
Column {
spacing: units.gu(1)
anchors {
margins: units.gu(2)
fill: parent
}
Label {
id: label
text: i18n.tr("4 x 7 = ?")
}
TextField {
id: input
}
Button {
text: i18n.tr("Check")
onClicked: {
if (input.text == 28) {
testDone({outcome: "pass"});
} else {
testDone({outcome: "fail"});
}
}
}
}
}
Plainbox job definition for the test
````````````````````````````````````
The qml file we've created cannot be considered a plainbox job until it is
defined as a unit in a plainbox provider.
Consider this definition::
id: quazi-captcha
category_id: Captcha
plugin: qml
_summary: Basic math captcha
_description:
This test requires user to do simple multiplication
qml_file: simple.qml
estimated_duration: 5
Two bits that are different in qml jobs are ``plugin: qml`` and
``qml_file: simple.qml``.
The ``plugin`` field specifies the type of the plainbox job. The value ``qml``
informs checkbox applications that the job should be run in a QML environment
(the testing shell), and the ``qml_file`` field specifies which file serves as
the entry point to the job. The file must be located in the ``data`` directory
of the provider the job is defined in.
For other information regarding plainbox job units see:
http://plainbox.readthedocs.org/en/latest/manpages/plainbox-job-units.html
To add this job to the plainbox provider with other qml jobs, paste the job
definition into:
``checkbox/providers/2015.com.canonical.certification:qml-tests/units/qml-tests.pxu``
Testing qml job in Checkbox Touch on Ubuntu device
``````````````````````````````````````````````````
With the job definition in the qml-tests provider, and the qml file copied to
its data directory, we can build and install the checkbox click package.
In ``checkbox/checkbox-touch`` run::
./get-libs
./build-me --provider ../providers/2015.com.canonical.certification\:qml-tests/ \
--install
Launch the "Checkbox" app on the device and your test should be live.
Confined Qml jobs
-----------------
Sometimes there is a need to run a job with a different set of policies.
Checkbox makes this possible by embedding such jobs into the resulting click
package as separate apps. Each of those apps has its own apparmor
declaration, so each one has its own, separate entry in the Trust database.
To request that Checkbox run a qml job as confined, add the 'confined' flag to
its definition.
E.g.::
id: confined-job
category_id: confinement-tests
plugin: qml
_summary: Job that runs as a separate app
_description:
Checkbox should run this job with a separate set of policies.
qml_file: simple.qml
flags: confined
estimated_duration: 5
After the confined jobs are defined, run ``generate-confinement.py`` in the
root directory of the provider, naming all confined jobs that have been
declared.
E.g.::
cd my_provider
~/checkbox/checkbox-touch/confinement/generate-confinement.py confined-job
The tool will print all the hook declarations you need to add to the
``manifest.json`` file.
Now, your multi-app click is ready to be built.
Introduction to Plainbox
========================
.. contents::
What is Plainbox?
-----------------
Many years ago, a dark sorcerer known only as CR3 created a testing tool
called ``hw-test`` with the vision of running tests against hardware to
bless the hardware and deem it as Ubuntu Certified. There was great
rejoicing. From the crowd that gathered around this tool came requests and
requirements for new features, new tests and new methods of doing things.
Over the subsequent years, a tool called Checkbox was created. It was the
product of the design by committee philosophy and soon grew ponderous and
difficult to understand except by a few known only as "The Developers."
Checkbox's goal was to function as a universal testing engine that could
drive several types of testing: end-users running tests on their systems,
certification testing with a larger set of tests, and even OEM-specific
testing with custom tests.
A couple of years ago Checkbox started showing its age. The architecture
was difficult to understand and to extend and the core didn't really scale
to some things we wanted to do; however, the test suite itself was still
quite valuable.
Thus Plainbox was created, as a "plain Checkbox" and again, there was much
rejoicing. It was originally meant to be a simpler library for creating
testing applications and as a requirement, it was designed to be compatible
with the Checkbox test/job definition format.
Since then, Plainbox has become a large set of libraries and tools, but the
central aim is still to write testing applications. Note that the term
*Checkbox* is still used to refer to the test suite generically; *Plainbox*
is used to refer to the new tool set "under the hood."
Goal
----
The goal of these tools is of course to run tests. They use a test
description language that was inherited from Checkbox, so it has many
interesting quirks. Since Checkbox itself is now deprecated, we have been
adding new features and improving the test description language so this is
in some flux.
Terminology
-----------
In developing or using Plainbox, you'll run into several unfamiliar terms.
Check the :doc:`../glossary` to learn what they mean. In fact, you should
probably check it now. Pay particular attention to the terms *Checkbox*,
*Plainbox*, *job*, *provider*, and *whitelist*.
Getting Started
---------------
To get started, we'll install Plainbox and ``checkbox-ng`` along with some
tests and look at how they are organized and packaged.
The newest versions are in our PPAs. We'll use the development PPA at
``ppa:checkbox-dev/ppa``. From there we'll install ``plainbox``,
``checkbox-ng``, and ``plainbox-provider-checkbox``.
As an end user, this is all you need to run some tests. We can quickly run
``checkbox-cli``, which will show a series of screens to facilitate running
tests. First up is a welcome screen:
.. image:: cc1.png
:height: 178
:width: 800
:scale: 100
:alt: checkbox-cli presents an introductory message before enabling you to
select tests.
When you press the Enter key, ``checkbox-cli`` lets you select which
whitelist to use:
.. image:: cc2.png
:height: 343
:width: 300
:scale: 100
:alt: checkbox-cli enables you to select which test suite to run.
With a whitelist selected, you can choose the individual tests to run:
.. image:: cc3.png
:height: 600
:width: 800
:scale: 100
:alt: checkbox-cli enables you to select or de-select specific tests.
When the tests are run, the results are saved to files and the program
prompts to submit them to Launchpad.
As mentioned, ``checkbox-cli`` is just a convenient front-end for some
Plainbox features but it lets us see some aspects of Plainbox.
Looking Deeper
--------------
Providers
`````````
First, we installed some "provider" packages. Providers were designed to
encapsulate test descriptions and their related tools and data. Providers
are shipped in Debian packages, which allows us to express dependencies to
ensure required external packages are installed, and we can also separate
those dependencies; for instance, the provider used for server testing
doesn't actually contain the server-specific test definitions (we try to
keep all the test definitions in the Checkbox provider), but it does depend
on all the packages needed for server testing. Most users will want the
resource and Checkbox providers which contain many premade tests, but this
organization allows shipping the tiny core and a fully customized provider
without extraneous dependencies.
A provider is described in a configuration file (stored in
``/usr/share/plainbox-providers-1``). This file describes where to find all
the files from the provider. This file is usually managed automatically
(more on this later). A provider can ship jobs, binaries, data and
whitelists.
A **job** or **test** is the smallest unit or description that Plainbox
knows about. It describes a single test (historically they're called
jobs). The simplest possible job is::
id: a-job
plugin: manual
description: Ensure your computer is turned on. Is the computer turned on?
Jobs are shipped in a provider's jobs directory. This ultra-simple example
has three fields: ``id``, ``plugin``, and ``description``. (A real job
should include a ``_summary`` field, too.) The ``id`` identifies the job
(of course) and the ``description`` provides a plain-text description of
the job. In the case of this example, the description is shown to the user,
who must respond because the ``plugin`` type is ``manual``. ``plugin``
types include (but are not limited to):
* ``manual`` -- A test that requires the user to perform some action and
report the results.
* ``shell`` -- An automated test that requires no user interaction; the
test is passed or failed on the basis of the return value of the script
or command.
* ``local`` -- This type of job is similar to a ``shell`` test, but it
supports creating multiple tests from a single definition (say, to test
all the Ethernet ports on a computer). Jobs using the ``local`` plugin
are run when Plainbox is initialized.
* ``user-interact`` -- A test that asks the user to perform some action
*before* the test is performed. The test then passes or fails
automatically based on the output of the test. An example is
``keys/media-control``, which runs a tool to detect keypresses, asks the
user to press volume keys, and then exits automatically once the last
key has been pressed or the user clicks the skip button in the tool.
* ``user-interact-verify`` -- This type of test is similar to the
``user-interact`` test, except that the test's output is displayed for
the user, who must then decide whether it has passed or failed. An
example of this would be the ``usb/disk_detect`` test, which asks the
user to insert a USB key, click the ``test`` button, and then verify
manually that the USB key was detected correctly.
* ``user-verify`` -- A test that the user manually performs or runs
automatically and requires the user to verify the result as passed or
failed. An example of this is the graphics maximum resolution test
which probes the system to determine the maximum supported resolution
and then asks the user to confirm that the resolution is correct.
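In its simplest form, an automated ``shell`` job is little more than an id, a
plugin and a command that exits 0 on success; for instance (a hypothetical
job, not one shipped by any provider):

```
id: misc/internet-ping
plugin: shell
command: ping -c 1 8.8.8.8
_summary: Check that a well-known host responds to ping
estimated_duration: 5.0
```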
A fairly complex example definition is::
plugin: local
_summary: Automated test to walk multiple network cards and test each one in sequence.
id: ethernet/multi_nic
requires:
device.category == 'NETWORK'
_description: Automated test to walk multiple network cards and test each one in sequence.
command:
cat <<'EOF' | run_templates -s 'udev_resource | filter_templates -w "category=NETWORK" | awk "/path: / { print \$2 }" | xargs -n 1 sh -c "for i in \`ls /sys\$0/net 2>/dev/null\`; do echo \$0 \$i; done"'
plugin: shell
id: ethernet/multi_nic_$2
requires:
package.name == 'ethtool'
package.name == 'nmap'
device.path == "$1"
user: root
environ: TEST_TARGET_FTP TEST_TARGET_IPERF TEST_USER TEST_PASS
command: network test -i $2 -t iperf --fail-threshold 80
estimated_duration: 330.0
description:
Testing for NIC $2
EOF
Key points to note include:
* If a field name begins with an underscore, its value can be localized.
* The values of fields can appear on the same line as their field names,
as in ``plugin: local``; or they can appear on a subsequent line, which
is indented, as in the preceding example's ``requires: device.category
== 'NETWORK'``.
* The ``requires`` field can be used to specify dependencies; if the
specified condition is not met, the test does not run.
* The ``command`` field specifies the command that's used to run the test.
This can be a standard Linux command (or even a set of commands) or a
Checkbox test script. In this example's ``local`` test definition, the
first ``command`` line generates a list of network devices that is fed
to an embedded test, which is defined beginning with the second
``plugin`` line immediately following the first ``command`` line.
* In this example, the line that reads ``EOF`` ends the
``ethernet/multi_nic_$2`` test's command; it's matched to the
``EOF`` that's part of ``cat <<'EOF'`` near the start of that command.
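The RFC822-ish structure described above (blank-line separated records of
``name: value`` fields, with indented continuation lines) can be illustrated
with a tiny parser. This is a simplified sketch for illustration only, not
Plainbox's actual loader:

```python
# Sketch of loading RFC822-style job definitions.  NOT Plainbox's actual
# loader; it only illustrates the format: records are separated by blank
# lines, fields are "name: value" pairs, and indented lines continue the
# previous field's value.

def parse_jobs(text):
    jobs = []
    for record in text.split("\n\n"):
        job, key = {}, None
        for line in record.splitlines():
            if line[:1] in (" ", "\t") and key is not None:
                job[key] += "\n" + line.strip()   # continuation line
            elif ":" in line:
                key, _, value = line.partition(":")
                key = key.strip()
                job[key] = value.strip()
        if job:
            jobs.append(job)
    return jobs

text = """\
id: a-job
plugin: manual
description: Ensure your computer is turned on. Is the computer turned on?

id: another-job
plugin: shell
command: true
"""
jobs = parse_jobs(text)
print(jobs[0]["id"], jobs[1]["plugin"])  # → a-job shell
```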
Each provider has a ``bin`` directory and all binaries there are available
in the path.
Whitelists
``````````
In the job files we have a "universe" of known jobs. We don't normally want
to run them all; rather we want to select a subset depending on what we're
testing, and maybe give the user a way to fine-tune that selection. Also,
we need a way to determine the order in which they will run, beyond what
dependencies may provide. This is where the whitelist comes in; think of it
as a mask or selection filter from the universe of jobs. Whitelists support
regular expressions, and Plainbox will attempt to run tests in the order
shown in the whitelist. Again, providers ship whitelists in a specific
directory, and you can use ``plainbox`` to run a specific whitelist with
the ``-w`` option.
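Conceptually a whitelist is just an ordered list of regular expressions
applied to the universe of job ids. A simplified sketch of that selection
logic follows (illustration only, not the actual Plainbox implementation,
and the job ids are made up):

```python
import re

# A whitelist is an ordered list of regular expressions; jobs are selected
# from the known-job "universe" in whitelist order, each pattern matching
# the whole job id.

def apply_whitelist(job_ids, patterns):
    selected = []
    for pattern in patterns:
        regex = re.compile(pattern + "$")      # anchor to the whole job id
        for job_id in job_ids:
            if regex.match(job_id) and job_id not in selected:
                selected.append(job_id)
    return selected

universe = ["networking/modem", "cpu/freq", "networking/ping", "audio/playback"]
print(apply_whitelist(universe, ["networking/.*", "cpu/freq"]))
# → ['networking/modem', 'networking/ping', 'cpu/freq']
```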
You can also use ``plainbox`` to run a test with the ``-i`` syntax. This is
good for quickly running a job and ensuring it works well.
Let's look at ``checkbox-cli`` for a moment. This is a "launcher"; it
specifies a set of configuration options for a specific testing purpose.
This enables us to create mini-clients for each testing purpose, without
changing the core utility (``checkbox-launcher``). For instance, let's look
at the launcher for ``canonical-certification-server``, which appears in
``./providers/plainbox-provider-certification-server/launcher/canonical-certification-server``
in the Checkbox source tree::
#!/usr/bin/env checkbox-launcher
[welcome]
text = Welcome to System Certification!
This application will gather information from your system. Then you will be
asked manual tests to confirm that the system is working properly. Finally,
you will be asked for the Secure ID of the computer to submit the
information to the certification.canonical.com database.
To learn how to create or locate the Secure ID, please see here:
https://certification.canonical.com/
[suite]
# Whitelist(s) displayed in the suite selection screen
whitelist_filter = ^((network|storage|usb|virtualization)-only)|(server-(full|functional)-14.04)$
# Whitelist(s) pre-selected in the suite selection screen, default whitelist(s)
whitelist_selection = ^server-full-14.04$
[transport]
submit_to = certification
[config]
config_filename = canonical-certification.conf
A launcher such as this sets up an environment that includes introductory
text to be shown to users, a filter to determine what whitelists to present
as options, information on where to (optionally) submit results, and a
configuration filename. This allows each provider to ship a launcher or
binary with which to launch its relevant tests.
Developing Tests
````````````````
One way to deliver tests via Plainbox is to start your own provider. To
learn how to do that, see the :ref:`tutorial`.
In other cases you want to add tests to the main Checkbox repository (which
is also what we recommend to keep tests centralized, unless they're so
purpose-specific that this makes no sense).
This is a bit easier because the provider in question already exists. So
let's get started by branching a copy of ``lp:checkbox``. In brief, you
should change to your software development directory and type ``bzr branch
lp:checkbox my-branch`` to create a copy of the ``checkbox`` Launchpad
project in the ``my-branch`` subdirectory. You can then edit the files in
that subdirectory, upload the results to your own Launchpad account, and
request a merge.
To begin, consider the files and subdirectories in the main Checkbox
development directory (``my-branch`` if you used the preceding ``bzr``
command without change):
* ``checkbox-gui`` -- Checkbox GUI components, used in desktop/laptop
testing
* ``checkbox-ng`` -- The Plainbox-based version of Checkbox
* ``checkbox-support`` -- Support code for many providers
* ``checkbox-touch`` -- A Checkbox frontend optimized for touch/tablet
devices
* ``mk-venv`` -- A symbolic link to a script used to set up an environment
for testing Checkbox
* ``plainbox`` -- A Python3 library and development tools at the heart of
Plainbox
* ``plainbox-client`` -- Unfinished Python3 interface for Checkbox
* ``providers`` -- Provider definitions, including test scripts
* ``README.md`` -- A file describing the contents of the subdirectory in
greater detail
* ``setup.py`` -- A setup script
* ``support`` -- Support code that's not released
* ``tarmac-verify`` -- A support script
* ``test-in-lxc.sh`` -- A support script for testing in an LXC
* ``test-in-vagrant.sh`` -- A support script for testing with Vagrant
* ``test-with-coverage`` -- A link to a support script for testing with
coverage
* ``Vagrantfile`` -- A Vagrant configuration file
Let's say we want to write a test to ensure that the ``ubuntu`` user exists in
``/etc/passwd``. You need to remove any existing Checkbox provider
packages, lest they interfere with your new or modified tests. The
``setup.py`` script will set up a Plainbox development environment for you.
We can write a simple job here, then add a requirement, perhaps a
dependency, then a script in the directory. Note that scripts can be
anything that's executable; we usually prefer either shell or Python, but
anything goes.
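For the ``/etc/passwd`` check described above, a minimal job could be written
entirely in the ``command`` field (hypothetical id; a sketch, not a job that
actually exists in the provider):

```
id: misc/ubuntu-user-exists
plugin: shell
command: grep -q '^ubuntu:' /etc/passwd
_summary: Check that the ubuntu user exists in /etc/passwd
estimated_duration: 1.0
```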
Plainbox will supply two environment variables, ``PLAINBOX_PROVIDER_DATA``
and ``PLAINBOX_SESSION_SHARE``. We usually try to use them in the job
description only, not in the scripts, to keep the scripts Plainbox-agnostic
if possible.
Once the test is running correctly, we can create a whitelist with a few
tests and name it.
Once we get everything running correctly we can prepare and propose a merge
request using ``bzr`` as usual.
Other Questions
---------------
**What Python modules are useful?**
I usually Google for the description of the problem I'm trying to solve,
and/or peruse the Python documentation in my spare time. I recommend the
*Dive Into Python* books if you have experience with another language, as
they are very focused on how to translate what you know into Python. This
applies also to Pythonisms like iterators, comprehensions, and
dictionaries which are quite versatile, and others. Again, the *Dive*
books will show you how these work.
**Are there other tools to use?**
``flake8`` or ``pyflakes``; it's always a good idea to run one of these if
you wrote a Python script, to ensure consistent syntax. ``manage.py
validate`` and ``plainbox dev analyze`` are also good tools to know
about.
**Is there a preferred editor for Python programming?**
I don't really know of a good editor/IDE that will provide a lot of help
when developing Python, as I usually prefer a minimalistic editor. I'm
partial to ``vim`` as it has syntax coloring, decent formatting
assistance, can interface with ``git`` and ``pyflakes`` and is just
really fast. We even have a plugin for Plainbox job files. Another good
option if you're not married to an editor is Sublime Text; Zygmunt has
been happy with it and it seems easy to extend, plus it's very
nice-looking. A recent survey identified Kate as a good alternative. The
same survey identified ``gedit`` as *not* a good alternative so I'd avoid
that one. Finally, if you're into cloud, ``cloud9.io`` may be an option
although we don't have a specific Plainbox development setup for it.
References
----------
:doc:`Reference on Plainbox test authoring `
:doc:`jobs`
:doc:`Plainbox provider template `
:doc:`Provider and job writing tutorial `
:doc:`../dev/intro`
:doc:`What resources are and how they work <../dev/resources>`
:doc:`Man pages on special variables available to jobs <../manpages/PLAINBOX_SESSION_SHARE>`
:doc:`All the manpages <../manpages/index>`
`The Checkbox stack diagram`_
.. _The Checkbox stack diagram:
http://checkbox.readthedocs.org/en/latest/stack.html
`Old Checkbox documentation for nostalgia`_
.. _Old Checkbox documentation for nostalgia:
https://wiki.ubuntu.com/Testing/Automation/Checkbox
`Usual Python modules`_
.. _Usual Python modules: https://docs.python.org/3.3/
`Document on upcoming template units feature`_
.. _Document on upcoming template units feature:
http://bazaar.launchpad.net/~checkbox-dev/checkbox/trunk/view/head:/plainbox/docs/manpages/plainbox-template-units.rst
`A quick introduction to Bazaar and bzr`_
.. _A quick introduction to Bazaar and bzr:
http://doc.bazaar.canonical.com/bzr.dev/en/mini-tutorial/
`A tool to use git locally but be able to pull/push from Launchpad`_
.. _A tool to use git locally but be able to pull/push from Launchpad: http://zyga.github.io/git-lp/
`A video on using git with Launchpad`_
.. _A video on using git with Launchpad:
https://plus.google.com/115602646184989903283/posts/RCepekrA5gu
`A video on how to set up Sublime Text for Plainbox development`_
.. _A video on how to set up Sublime Text for Plainbox development:
https://www.youtube.com/watch?v=mrfyAgDg4ME&list=UURGrmUhQo5P9hTbVskIIjoQ
`Checkbox(ng) documentation home`_
.. _Checkbox(ng) documentation home: http://checkbox.readthedocs.org
{% extends "basic/page.html" %}
{% block body %}
{{ body }}
{%- if theme_show_disqus|tobool %}
comments powered by
{%- endif %}
{%- endblock %}
[theme]
inherit = default
[options]
show_disqus = ''
.. Plainbox documentation master file, created by
sphinx-quickstart on Wed Feb 13 11:18:39 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Plainbox
========
.. seealso:: See what's new in :ref:`version_0_17`
:term:`Plainbox` is a toolkit consisting of python3 library, development tools,
documentation and examples. It is targeted at developers working on testing or
certification applications and authors creating tests for such applications.
Plainbox can be used to both create simple and comprehensive test tools as well
as to develop and execute test jobs and test scenarios. It was created as a
refined and rewritten core of the :term:`Checkbox` project. It has a well
tested and documented core, small but active development community and a
collection of associated projects that use it as a lower-level engine/back-end
library.
Plainbox has a novel approach to discovering (and probing) hardware and
software that is extensible and not hardwired into the system. It allows test
developers to express association between a particular test and the hardware,
software and configuration constraints that must be met for the test to execute
meaningfully. This feature, along with pluggable test definitions, makes
Plainbox flexible and applicable to many diverse testing situations, ranging
from mobile phones, traditional desktop computers, servers and up to testing
"cloud" installations.
What are you interested in?
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Are you a :doc:`test author `, :doc:`application developer
` or :doc:`core developer `?
Table of contents
=================
.. toctree::
:maxdepth: 2
install.rst
usage.rst
manpages/index.rst
changelog.rst
author/index.rst
appdev/index.rst
dev/index.rst
ref/index.rst
glossary.rst
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
Old Architecture Notes
======================
.. warning::
This section needs maintenance
Application Skeleton
^^^^^^^^^^^^^^^^^^^^
This skeleton represents a typical application based on Plainbox. It enumerates
the essential parts of the APIs from the point of view of an application
developer.
1. Instantiate :class:`plainbox.impl.checkbox.Checkbox` then call
:meth:`plainbox.impl.checkbox.Checkbox.get_builtin_jobs()` to discover all
known jobs. In the future this might be replaced by a step that obtains jobs
from a named provider.
2. Instantiate :class:`plainbox.impl.runner.JobRunner` so that we can run jobs.
3. Instantiate :class:`plainbox.impl.session.SessionState` so that we can keep
track of application state.
- Potentially restore an earlier, interrupted, testing session by calling
:meth:`plainbox.impl.session.SessionState.restore()`
- Potentially remove an earlier, interrupted, testing session by calling
:meth:`plainbox.impl.session.SessionState.discard()`
- Potentially start a new test session by calling
:meth:`plainbox.impl.session.SessionState.open()`
4. Allow the user to select jobs that should be executed and update session
state by calling
:meth:`plainbox.impl.session.SessionState.update_desired_job_list()`
5. For each job in :attr:`plainbox.impl.session.SessionState.run_list`:
1. Check if we want to run the job (if we have a result for it from previous
runs) or if we must run it (for jobs that cannot be persisted across
suspend)
2. Check if the job can be started by looking at
:meth:`plainbox.impl.session.JobState.can_start()`
- optionally query for additional data on why a job cannot be started and
present that to the user.
- optionally abort the sequence and go to step 4 or the outer loop.
3. Call :meth:`plainbox.impl.runner.JobRunner.run_job()` with the current
job and store the result.
- optionally ask the user to perform some manipulation
- optionally ask the user to qualify the outcome
- optionally ask the user for additional comments
4. Call :meth:`plainbox.impl.session.SessionState.update_job_result()` to
update readiness of jobs that depend on the outcome or output of the current
job.
5. Call :meth:`plainbox.impl.session.SessionState.checkpoint()` to ensure
that testing can resume after a system crash or shutdown.
6. Instantiate the selected state exporter, for example
:class:`plainbox.impl.exporters.json.JSONSessionStateExporter`, so that we
can use it to save test results.
- optionally pass configuration options to customize the subset and the
presentation of the session state
7. Call
:meth:`plainbox.impl.exporters.SessionStateExporterBase.get_session_data_subset()`
followed by :meth:`plainbox.impl.exporters.SessionStateExporterBase.dump()`
to save results to a file.
8. Call :meth:`plainbox.impl.session.SessionState.close()` to remove any
nonvolatile temporary storage that was needed for the session.
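Schematically, the core of the skeleton above reduces to a loop like this
(pseudocode assembled from the method names listed; signatures are
simplified, error handling and the suspend/resume and local-job cases are
omitted, and the ``job_state_map`` attribute name is illustrative):

```
session.update_desired_job_list(selected_jobs)
for job in session.run_list:
    if session.job_state_map[job.id].can_start():   # JobState.can_start()
        result = runner.run_job(job)                # JobRunner.run_job()
        session.update_job_result(job, result)
        session.checkpoint()
data = exporter.get_session_data_subset(session)
exporter.dump(data, stream)
session.close()
```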
Essential classes
=================
:class:`~plainbox.impl.session.SessionState`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Class representing all state needed during a single program session.
Usage
-----
The general idea is that you feed the session with a list of known jobs and
a subset of jobs that you want to run and in return get an ordered list of
jobs to run.
It is expected that the user will select / deselect and run jobs. This
class can react to both actions by recomputing the dependency graph and
updating the read states accordingly.
As the user runs subsequent jobs the results of those jobs are exposed to
the session with :meth:`update_job_result()`. This can cause subsequent
jobs to become available (not inhibited by anything). Note that there is no
notification of changes at this time.
The session does almost nothing by itself, it learns about everything by
observing job results coming from the job runner
(:class:`plainbox.impl.runner.JobRunner`) that applications need to
instantiate.
Suspend and resume
------------------
The session can save check-point data after each job is executed. This
allows the system to survive and continue after a catastrophic failure
(broken suspend, power failure) or continue across tests that require the
machine to reboot.
.. todo::
Create a section on suspend/resume design
Implementation notes
--------------------
Internally it ties into :class:`plainbox.impl.depmgr.DependencySolver` for
resolving dependencies. The way the session objects are used allows them to
return various problems back to the UI level - those are all the error
classes from :mod:`plainbox.impl.depmgr`:
- :class:`plainbox.impl.depmgr.DependencyCycleError`
- :class:`plainbox.impl.depmgr.DependencyDuplicateError`
- :class:`plainbox.impl.depmgr.DependencyMissingError`
Normally *none* of those errors should ever happen, they are only provided
so that we don't choke when a problem really happens. Everything is checked
and verified early before starting a job so typical unit and integration
testing should capture broken job definitions (for example, with cyclic
dependencies) being added to the repository.
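The solver's task is essentially a topological sort over job dependencies. A
minimal sketch of the idea follows (not the actual ``DependencySolver``
implementation; ``ValueError`` and ``KeyError`` stand in for
``DependencyCycleError`` and ``DependencyMissingError``, and the job ids are
made up):

```python
# Depth-first topological sort over job dependencies, raising on cycles
# and on references to unknown jobs.

def resolve(jobs, depends):
    order, state = [], {}          # state: 1 = visiting, 2 = done

    def visit(job, chain):
        if state.get(job) == 2:
            return
        if state.get(job) == 1:
            raise ValueError("dependency cycle: " + " -> ".join(chain + [job]))
        if job not in jobs:
            raise KeyError("missing dependency: " + job)
        state[job] = 1
        for dep in depends.get(job, ()):
            visit(dep, chain + [job])
        state[job] = 2
        order.append(job)

    for job in jobs:
        visit(job, [])
    return order

print(resolve(["net/ping", "net/iface", "cpu/freq"], {"net/ping": ["net/iface"]}))
# → ['net/iface', 'net/ping', 'cpu/freq']
```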
Implementation issues
---------------------
There are two issues that are known at this time:
* There is too much checkbox-specific knowledge which really belongs
elsewhere. We are working to remove that so that non-checkbox jobs
can be introduced later. There is a branch in progress that entirely
removes that and moves it to a new concept called SessionController.
In that design the session delegates understanding of results to a
per-job session controller and exposes some APIs to alter the state
that was previously internal (most notably a way to add new jobs and
resources).
* The way jobs are currently selected is unfortunate because of local jobs
that can add new jobs to the system. This causes considerable complexity
at the application level where the application must check if each
executed job is a 'local' job and re-compute the desired_job_list. This
should be replaced by a matcher function that can be passed to
SessionState once so that desired_job_list is re-evaluated internally
whenever job_list changes.
:class:`~plainbox.impl.job.JobDefinition`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:term:`Checkbox` has a concept of a :term:`job`. Jobs are named units of
testing work that can be executed. Typical jobs range from automated CPU power
management checks, BIOS tests, semi-automated peripherals testing to all manual
validation by following a script (intended for humans).
Jobs are distributed in plain text files, formatted as loose RFC822 documents,
where typically a single text file contains a few dozen different jobs that
belong to one topic, for example, all bluetooth tests.
Tests have a number of properties that will not be discussed in detail here;
they are all documented in :class:`plainbox.impl.job.JobDefinition`. From the
architecture point of view the four essential properties of a job are *name*,
*plugin*, *requires* and *depends*. Those are discussed in detail below.
JobDefinition.name
------------------
The *name* field must be unique and is referred to by other parts of the system
(such as whitelists). Typically jobs follow a simple naming pattern
'category/detail', e.g., 'networking/modem_connection'. Uniqueness of names is
enforced by the core.
JobDefinition.plugin
--------------------
The *plugin* field is an archaism from Checkbox and a misnomer (as Plainbox
does not have any plugins). In the Checkbox architecture it would instruct the
core which plugin should process that job. In Plainbox it encodes what type of
job is being processed. There is a finite set of types, documented below.
plugin == "shell"
#################
This value is used for fully automated jobs. Everything the job needs to do is
automated (preparation, execution, verification) and fully handled by the
command that is associated with a job.
plugin == "manual"
##################
This value is used for fully manual jobs. It has no special handling in the core
apart from requiring a human-provided outcome (pass/fail classification).
.. _local:
plugin == "local"
#################
This value is used for special job generator jobs. The output of such jobs is
interpreted as additional jobs and is identical in effect to loading such jobs
from a job definition file.
There are two practical uses for such jobs:
* Some local jobs are used to generate a number of jobs for each object.
This is needed where the tested machine may have a number of such objects
and each requires unique testing. A good example is a computer where all
network tests are explicitly "instantiated" for each network card
present.
This is a valid use case but it is rather unfortunate for the architecture of
Plainbox and there is a desire to replace it with equally-expressive
pattern jobs. The advantage is that, unlike local jobs (which cannot be
"discovered" without incurring any potential side effects that may be
caused by the job script command), pattern jobs would allow the core to
determine the names of jobs that can be generated and, for example,
automatically determine that a pattern job needs to be executed as a
dependency of a phantom (yet undetermined) job with a given name.
The solution with "pattern" jobs may be implemented in future phases of
Plainbox development. Currently there is no support for it at all.
Currently Plainbox cannot determine job dependencies across local jobs.
That is, unless a local job is explicitly requested (in the desired job
list) Plainbox will not be able to run a job that is generated by a local
job at all and will treat it as if that job never existed.
* Some local jobs are used to create a form of informal "category".
Typically all such jobs have a leading and trailing double underscore,
for example '__audio__'. This is currently being used by Checkbox for
building a hierarchical tree of tests that the user may select.
Since this has the same flaws as described above (for pattern jobs) it
will likely be replaced by an explicit category field that can be
specified on each job.
plugin == "resource"
####################
This value is used for special "data" or "environment" jobs. Their output is
parsed as a list of RFC822 records and is kept by the core during a testing session.
They are primarily used to determine if a given job can be started. For
example, a particular bluetooth test may use the *requires* field to indicate
that it depends (via a resource dependency) on a job that enumerates devices
and that one of those devices must be a bluetooth device.
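As a sketch, the output of a ``package``-style resource job and the records
the core would keep might look like this (the data and the parsing one-liner
are illustrative only; the real parser also handles multi-line values):

```python
# Sketch: a resource job prints RFC822-ish records separated by blank lines;
# the core keeps them as a list of key-value mappings (illustrative data).
output = """\
name: bash
version: 4.3

name: fwts
version: 14.12.00
"""

records = [
    dict(line.split(": ", 1) for line in block.splitlines())
    for block in output.strip().split("\n\n")
]
```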
plugin == "user-interact"
#########################
For all intents and purposes it is equivalent to "manual". The actual
difference is that a user is expected to perform some physical manipulation
before an automated test.
plugin == "user-verify"
#######################
For all intents and purposes it is equivalent to "manual". The actual
difference is that a user is expected to perform manual verification after an
automated test.
JobDefinition.depends
---------------------
The *depends* field is used to express dependencies between two jobs. If job A
depends on job B then A cannot start unless B has finished and was
successful. Plainbox understands this dependency and can automatically sort and
execute jobs in the proper order. In many places in the code this is referred
to as a "direct dependency" (in contrast to a "resource dependency").
The actual syntax is not strictly specified; Plainbox interprets this field as
a list of tokens delimited by commas or any whitespace (including newlines).
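The tokenization described above can be sketched as follows (an illustration,
not the actual Plainbox parser):

```python
import re

def parse_depends(field):
    # Split on commas and/or any whitespace (including newlines),
    # dropping empty tokens.
    return [token for token in re.split(r"[,\s]+", field.strip()) if token]

deps = parse_depends("networking/modem_connection, power/suspend\n audio/playback")
```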
A job may depend on any number of other jobs. There are a number of failure
modes associated with this feature, all of which are detected and handled by
Plainbox. Typically they only arise during Checkbox job development
(editing actual job files) and are always a sign of a human error. No released
version of Checkbox or Plainbox should ever encounter any of those issues.
The actual problems are:
* dependency cycles, where a job either directly or indirectly depends on
itself
* missing dependencies, where some job refers to a job that is not defined
anywhere.
* duplicate jobs, where two jobs with the same name (but different
definitions) are being introduced to the system.
In all of those cases the core removes the offending job and tries to work
regardless of the problem. This is intended as a development aid rather
than a reliability feature, as no released version of either project should
cause this problem.
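The first two failure modes, cycles and missing dependencies, can be detected
with a standard depth-first search over the dependency graph. A sketch (not
the Plainbox implementation; the job names are made up):

```python
def find_missing_and_cycles(jobs):
    # jobs maps a job name to the list of names it depends on.
    missing = {dep for deps in jobs.values() for dep in deps if dep not in jobs}
    visiting, done, cyclic = set(), set(), set()

    def visit(name):
        if name in done or name not in jobs:
            return
        if name in visiting:
            # We came back to a node on the current DFS path: a cycle.
            cyclic.add(name)
            return
        visiting.add(name)
        for dep in jobs[name]:
            visit(dep)
        visiting.discard(name)
        done.add(name)

    for name in jobs:
        visit(name)
    return missing, cyclic

missing, cyclic = find_missing_and_cycles({
    "a": ["b"], "b": ["a"],   # direct cycle between a and b
    "c": ["ghost"],           # missing dependency
    "d": [],
})
```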
JobDefinition.command
---------------------
The *command* field is used when the job needs to call an external command.
Typically all shell jobs define a command to run.
"Manual" jobs can also define a command to run as part of the test procedure.
JobDefinition.user
------------------
The *user* field is used when the job needs to run as a specific user
(e.g. root).
The job command will be run via pkexec to get the necessary
permissions.
.. _environ:
JobDefinition.environ
---------------------
The *environ* field is used to pass additional environment keys from the user's
session to the new environment set up when the job command is run by another
user (root, most of the time).
The actual syntax is not strictly specified; Plainbox interprets this field as
a list of tokens delimited by commas or any whitespace (including newlines).
Plainbox Configuration System
=============================
Plainbox has a modular configuration system. The system allows one to define
static configuration models that are composed of variables. This is all
implemented in :mod:`plainbox.impl.secure.config` as two classes
:class:`plainbox.impl.secure.config.Config` and
:class:`plainbox.impl.secure.config.Variable`::
>>> from plainbox.impl.secure.config import Config, Variable
Configuration models
^^^^^^^^^^^^^^^^^^^^
Each subclass of :class:`plainbox.impl.secure.config.Config` defines a new
configuration model. The model is composed of named variables and sections
defined as members of the class using a quasi-declarative syntax::
>>> class AppConfig(Config):
... log_level = Variable()
... log_file = Variable()
If you've ever used Django this will feel just like models and fields.
Using Config objects and Variables
----------------------------------
Each configuration class can be simply instantiated and used as an object with
attributes::
>>> config = AppConfig()
Accessing any of the Variable attributes is intercepted and actually accesses
data in an underlying in-memory store::
>>> config.log_level = 'DEBUG'
>>> assert config.log_level == 'DEBUG'
Writes are validated (see validators below), reads go to the backing store and,
if missing, pick the default from the variable declaration. By default values
are not constrained in any way.
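The attribute interception described above maps naturally onto Python's
descriptor protocol. A minimal sketch follows; this is *not* the actual
Plainbox ``Variable`` (which also performs validation, sections and the
``Unset`` handling described below), just an illustration of the mechanism:

```python
class Variable:
    # Sketch of a descriptor-backed variable (not the real Plainbox class).
    def __init__(self, default=None):
        self.default = default
        self.name = None

    def __set_name__(self, owner, name):
        # Remember the attribute name this descriptor was bound to.
        self.name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        # Reads go to the per-instance store, falling back to the default.
        return instance.__dict__.get(self.name, self.default)

    def __set__(self, instance, value):
        # Writes land in the per-instance store (real code validates here).
        instance.__dict__[self.name] = value

class AppConfig:
    log_level = Variable(default="INFO")

config = AppConfig()
level_before = config.log_level
config.log_level = "DEBUG"
level_after = config.log_level
```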
The Unset value
---------------
Apart from handling arbitrary values, variables can store the ``Unset`` value,
which is of the special ``UnsetType``. Unset variables are used as the implicit
default values so understanding them is important.
The ``Unset`` value is always false in a boolean context. This makes it easier
to accommodate but applications are still expected to handle it correctly. One
way to do that is to provide a default value for **every** variable used.
Another is to use the :class:`~plainbox.impl.secure.config.NotUnsetValidator`
to prevent such values from reaching the application.
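A falsy singleton sentinel of this sort can be sketched as follows (an
illustration only, not the actual ``plainbox.impl.secure.config.UnsetType``):

```python
class UnsetType:
    # Sketch: a singleton sentinel that is always false in a boolean
    # context (not the real plainbox.impl.secure.config.UnsetType).
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __bool__(self):
        return False

    def __repr__(self):
        return "Unset"

Unset = UnsetType()
```

Using a dedicated sentinel instead of ``None`` lets ``None`` remain a valid
stored value while unset variables stay distinguishable.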
Using Variable with custom default values
-----------------------------------------
Each variable has a default value that is used when the variable is accessed
but was not assigned or loaded from a config file before. By default that value
is a special :class:`~plainbox.impl.secure.config.Unset` object, but it can be
changed using the ``default`` keyword argument::
>>> class AppConfig(Config):
... log_level = Variable(default='INFO')
... log_file = Variable()
Here a freshly instantiated AppConfig class has a value in the ``log_level``
attribute. Note that there is a difference between values that have been
assigned and values that are loaded from defaults, as will be explained
later::
>>> config = AppConfig()
>>> assert config.log_level == 'INFO'
Using Variables with custom sections
------------------------------------
Each variable has a section name that is used to look up data in an INI-like
config file. By default that section is set to ``'DEFAULT'``.
Particular variables can be assigned to a non-default section. This can help
manage multiple groups of unrelated settings in one class / file. To specify
a section simply use the ``section`` keyword::
>>> class AppConfig(Config):
... log_level = Variable(section='logging', default='WARNING')
... log_file = Variable(
... section='logging',
... default='/var/log/plainbox.log')
... debug = Variable(default=False)
Using sections has no impact on how particular variables are used by the
application; it is only a utility for managing complexity.
Using Variable with custom kind
-------------------------------
Variables cannot hold values of arbitrary python type. In fact only a fixed
set of types is supported: ``str``, ``bool``, ``int``
and ``float``. By default all variables are treated as strings.
A different *kind* can be selected with the ``kind`` keyword argument. Setting
it to a type (as listed above) will have two effects:
1) Only values of that type will be allowed upon assignment. This acts as an
implicit validator. It is also true for using the default ``str`` kind.
2) When reading configuration files from disk, the content of the file will be
interpreted accordingly.
Let's expand our example to indicate that the ``debug`` variable is actually a
boolean::
>>> class AppConfig(Config):
... log_level = Variable(section='logging', default='WARNING')
... log_file = Variable(
... section='logging',
... default='/var/log/plainbox.log')
... debug = Variable(default=False, kind=bool)
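The ``kind``-based interpretation of values read from configuration files can
be sketched like this (a simplification; the ``convert`` helper is
hypothetical and the real conversion logic lives in
``plainbox.impl.secure.config``):

```python
def convert(raw, kind):
    # Sketch of kind-based interpretation of INI string values.
    if kind is bool:
        # Accept the usual INI spellings of truth.
        return raw.strip().lower() in ("1", "yes", "true", "on")
    if kind in (int, float):
        return kind(raw)
    return raw  # str: keep as-is

debug = convert("true", bool)
timeout = convert("30", int)
```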
Specifying Custom Validators
----------------------------
As mentioned above in the kind section, values are validated upon assignment.
By default all values are validated to check if the value is appropriate for
the variable's ``kind``.
In certain cases additional constraints may be necessary. Those can be
expressed as any callable object (function, method or anything else with a
``__call__`` method). Let's expand the example to ensure that ``log_level`` is
only one of fixed possible choices::
>>> class ChoiceValidator:
...
... def __init__(self, choices):
... self.choices = choices
...
... def __call__(self, variable, value):
... if value not in self.choices:
... return "unsupported value"
Whenever the validator returns None, it is assumed that everything is
okay. Otherwise the returned string is used as a message and
:class:`plainbox.impl.secure.config.ValidationError` is raised.
To use the new validator simply pass it to the ``validator_list`` keyword
argument::
>>> class AppConfig(Config):
... log_level = Variable(
... section='logging',
... default='WARNING',
... validator_list=[
... ChoiceValidator([
... "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"])])
...
... log_file = Variable(
... section='logging',
... default='/var/log/plainbox.log')
...
... debug = Variable(default=False, kind=bool)
.. note::
Validators that want to see the ``Unset`` value need to be explicitly
tagged, otherwise they will never see that value (they will not be called)
but can assume that the value is of correct type (bool, int, float or str).
If you need to write a validator that understands and somehow handles the
Unset value, decorate it with the
:func:`~plainbox.impl.secure.config.understands_Unset` decorator.
Using Section objects
---------------------
Sometimes it is necessary to allow the user to add arbitrary key=value
data to the configuration file. This is possible using the
:class:`plainbox.impl.secure.config.Section` class. Consider this example::
>>> class AppConfig(Config):
... log_level = Variable(
... section='logging',
... default='WARNING',
... validator_list=[
... ChoiceValidator([
... "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"])])
...
... log_file = Variable(
... section='logging',
... default='/var/log/plainbox.log')
...
... debug = Variable(default=False, kind=bool)
...
... logger_levels = Section()
This is the same application config example we've been using. This time it's
extended with a ``logger_levels`` attribute. The intent for this attribute is
to allow the user to customise the logging level for any named logger. This
could be implemented by iterating over all the values of that section and
setting the level accordingly.
.. note::
Accessing Section objects returns a dictionary of the key-value pairs that
were defined in that section.
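The ``logger_levels`` idea above could be implemented along these lines
(a sketch; ``apply_logger_levels`` and the section contents are hypothetical):

```python
import logging

def apply_logger_levels(logger_levels):
    # Iterate over the key=value pairs from the (hypothetical)
    # ``logger_levels`` section and set each named logger's level.
    for name, level in logger_levels.items():
        logging.getLogger(name).setLevel(getattr(logging, level.upper()))

# The dictionary mimics what reading the section from a config file yields.
apply_logger_levels({"plainbox.session": "DEBUG", "requests": "WARNING"})
debug_level = logging.getLogger("plainbox.session").level
```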
Loading configuration from file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Configuration objects are not of much use without being able to load data from
actual files. This is fully supported with just one call to
:meth:`plainbox.impl.secure.config.Config.read()`. It takes a list of files
to read as its argument and tries to parse and load data from each existing
file. Missing files are silently ignored.
Configuration files may be corrupted, have typos, incorrectly specified
values or other human-caused mistakes. Because of that, the read() operation
never fails, as the application probably does not want to block on errors
unconditionally. Instead, after calling read() the application may inspect two
instance attributes:
:attr:`plainbox.impl.secure.config.Config.problem_list` and
:attr:`plainbox.impl.secure.config.Config.filename_list`. They contain the list
of exceptions raised while trying to load and use the configuration files and
the list of files that were actually loaded, respectively.
.. note:: The only supported delimiter is ``=``.
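A forgiving read of this shape can be sketched with the standard
``configparser`` module (an illustration of the behaviour described above,
including the ``=``-only delimiter; ``read_config`` is hypothetical, not the
Plainbox implementation):

```python
import configparser

def read_config(filenames):
    # Missing files are silently ignored; parse errors are collected in
    # problem_list instead of being raised, mirroring the description above.
    parser = configparser.ConfigParser(delimiters=("=",))
    problem_list, filename_list = [], []
    for filename in filenames:
        try:
            filename_list.extend(parser.read(filename))
        except configparser.Error as exc:
            problem_list.append(exc)
    return parser, problem_list, filename_list

parser, problems, loaded = read_config(["/nonexistent/plainbox.conf"])
```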
The Config.Meta class
^^^^^^^^^^^^^^^^^^^^^
Each Config class or subclass has a special Meta class as an attribute. This is
*not* about the python metaclass system. This is a special helper class that
contains meta-data about each Config class.
The Meta class has several attributes that are used internally but can be
sometimes useful for applications.
Meta.variable_list
------------------
This attribute holds a list of all the Variable objects defined in the parent
Config class. The order is maintained exactly as defined by the source code.
Meta.section_list
-----------------
This attribute holds a list of all the Section objects defined in the parent
Config class. The order is maintained exactly as defined in the source code.
Meta.filename_list
------------------
This attribute is an empty list by default. The intent is to hold a list of all
the possible pathnames that the configuration should be loaded from. This field
is used by the :func:`plainbox.impl.secure.config.Config.get()` method.
Typically this field is specified in a custom version of the Meta class to
encode where the configuration files are typically stored.
Notes on subclassing Meta
-------------------------
A Config sub-class can define a custom Meta class with any attributes that may
be desired. That class will be merged with an internal
:class:`plainbox.impl.secure.config.ConfigMetaData` class. In effect the actual
Meta attribute will be a new type that inherits from both the custom class that
was specified in the source code and the standard ConfigMetaData class.
This mechanism is fully transparent to the user. There is no need to explicitly
inherit from ConfigMetaData directly.
The Unset value
^^^^^^^^^^^^^^^
The config system uses a special value :obj:`plainbox.impl.secure.config.Unset`
which is the only instance of :class:`plainbox.impl.secure.config.UnsetType`.
Unset is used instead of ``None`` as the implicit default for each ``Variable``.
The only thing that makes ``Unset`` special is that it evaluates to false in a
boolean context.
Core Developers
===============
This chapter organizes information useful for developers working on the core,
i.e., Plainbox itself.
.. note::
The Plainbox project hopes to be a friendly developer environment. We
invested in a lot of tools to make your life easier. Despite being a
business-centric software project we welcome and encourage contributions
from both Canonical and Community members.
.. toctree::
:maxdepth: 3
intro.rst
architecture.rst
Plainbox Architecture
=====================
This document explains the architecture of Plainbox internals. It should
always be up-to-date and accurate within the scope of this overview.
.. toctree::
:maxdepth: 3
trusted-launcher.rst
config.rst
resources.rst
old.rst
General design considerations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Plainbox is a reimplementation of Checkbox that replaces a reactor / event /
plugin architecture with a monolithic core and tightly integrated components.
The implementation models a few of the externally-visible concepts such as
jobs, resources and resource programs but also has some additional design that
was not present in Checkbox before.
The goal of the rewrite is to provide the right model and APIs for user
interfaces in order to build the kind of end-user solution that we could not
build with Checkbox.
This is expressed by additional functionality that is there only to provide the
higher layers with the right data (failure reason, descriptions, etc.). The
code is also intended to be highly testable. Test coverage at the time of
writing this document exceeded 80%.
The core requirement for the current phase of Plainbox development is feature
parity with Checkbox and gradual shift from one to another in the daily
responsibilities of the Hardware Certification team. Currently Plainbox
implements a large chunk of the core / essential features from Checkbox. While
not all features are present, the core is considered almost feature complete at
this stage.
.. _resources:
Resources
=========
Resources are a mechanism that allows certain :term:`jobs <job>` to be
constrained to run only on devices with appropriate hardware or software.
The mechanism allows some types of jobs to publish resource objects to an
abstract namespace and provides a way to evaluate a resource program to
determine if a job can be started.
Resources in Plainbox
=====================
The following chapters explain how resources actually work in :term:`Plainbox`.
Currently there *is* a subtle difference between this and the original
:term:`Checkbox` implementation.
Resource programs
-----------------
Resource programs are multi-line statements that can be embedded in job
definitions. By far the most common use case is to check that a required package
is installed, and thus, that the job can use it as part of a test. Such a check
looks like this::
package.name == "fwts"
This resource program codifies that the job needs the ``fwts`` package to run.
There is a companion job with the same name that interrogates the local package
database and publishes a set of resource objects. Each such object is a
collection of arbitrary key-value pairs. The ``package`` job simply publishes
the ``name`` and ``version`` of each installed package but the mechanism is
generic and applies to all resources.
As stated, resource programs can be multi-line, a real world example of that is
presented below::
device.category == 'CDROM'
optical_drive.cd == 'writable'
This example is much like the one above, referring to some resources, here
coming from jobs ``device`` and ``optical_drive``. What is important to point
out is that, as a rule of thumb, multi-line programs have an implicit ``and``
operator between each line. This program would only evaluate to True if there
is a writable CD-ROM available.
Each resource program is composed of resource expressions. Each line maps
directly onto one expression so the example program above uses two resource
expressions.
Resource expressions
--------------------
Resource expressions are evaluated like normal python programs. They use all of
the same syntax, semantics and behavior. None of the operators are overridden
to do anything unexpected. The evaluator tries to follow the principle of least
surprise but this is not always possible.
Resource expressions cannot execute arbitrary python code. In general almost
everything is disallowed, except as noted below:
* Expressions can use any literals (strings, numbers, True, False, lists and tuples)
* Expressions can use boolean operators (``and``, ``or``, ``not``)
* Expressions can use all comparison operators
* Expressions can use all binary and unary operators
* Expressions can use the set membership operator (``in``)
* Expressions can use read-only attribute access
Anything else is rejected as an invalid resource expression.
In addition to that, each resource expression must use at least one variable,
which must be used like an object with attributes. The name of that variable
must correspond to the name of the job that generates resources. You can use
the ``imports`` field (at a job definition level) to rename a resource job to
be compatible with the identifier syntax. It can also be used to refer to
resources from another namespace.
In the examples elsewhere in this page the ``package`` resources are generated
by the ``package`` job. Plainbox uses this to know which resources to try but
also to implicitly to express dependencies so that the ``package`` job does not
have to be explicitly selected and marked for execution prior to the job that
in fact depends on it. This is all done automatically.
Evaluation
----------
Due to mandatory compatibility with existing :term:`Checkbox` jobs there are
some unexpected aspects of how evaluation is performed. Those are marked as
**unexpected** below:
1. First Plainbox looks at the resource program and splits it into lines. Each
non-empty line is parsed and converted to a resource expression.
2. **unexpected** Each resource expression is repeatedly evaluated, once for
each resource from the group determined by the variable name. All exceptions
are silently ignored and treated as if the iteration had evaluated to False.
The whole resource expression evaluates to ``True`` if any of the iterations
evaluated to ``True``. In other words, there is an implicit ``any()`` around
each resource expression, iterating over all resources.
3. **unexpected** The resource program evaluates to ``True`` only if all
resource expressions evaluated to ``True``. In other words, there is an
implicit ``and`` between each line.
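The three rules above can be sketched in a few lines of Python. This is an
illustration only: the variable-name detection and the use of plain ``eval()``
are simplifications (real Plainbox restricts the allowed syntax, as described
in the previous section):

```python
from types import SimpleNamespace

def evaluate_program(program, resource_map):
    # Rule 1: each non-empty line is one resource expression.
    for line in program.splitlines():
        line = line.strip()
        if not line:
            continue
        # Naive variable detection: assumes the line starts with the
        # resource name (a simplification for this sketch).
        name = line.split(".", 1)[0]
        # Rule 2: implicit any() over all resources bound to the variable;
        # exceptions count as a False iteration.
        matched = False
        for record in resource_map.get(name, []):
            try:
                if eval(line, {}, {name: SimpleNamespace(**record)}):
                    matched = True
                    break
            except Exception:
                pass
        # Rule 3: implicit "and" between lines.
        if not matched:
            return False
    return True

ok = evaluate_program(
    "package.name == 'fwts'",
    {"package": [{"name": "bash"}, {"name": "fwts"}]},
)
```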
Limitations
-----------
The design of resource programs has the following shortcomings. The list is
non-exhaustive; it only contains issues that we came across and found not to
work in practice.
Joins are not optimized
^^^^^^^^^^^^^^^^^^^^^^^
Starting with plainbox 0.24, a resource expression can use more than one
resource object (resource job) at the same time. This allows the use of joins
as the whole expression is evaluated over the cartesian product of all the
resource records. This operation is not optimized, you can think of it as a
JOIN that is performed on a database without any indices.
Let's look at a practical example::
package.name == desired_package.name
Here, two resource jobs would run. The classic *package* resource (that
produces, typically, a great number of resource records, one for each package
installed on the system) and a hypothetical *desired_package* resource (for
this example let's pretend that it is a simple constant resource that just
contains one object). Here, this operation is not any worse than before because
``size(desired_package) * size(package)`` is not any larger. If, however,
*desired_package* were on the same order as *package* (approximately a thousand
resource objects), then the computational cost of evaluating that expression
would be quadratic.
In general, the cost, assuming all resources have the same order, is
exponential with the number of distinct resource jobs referenced by the
expression.
Exactly one resource bound to a variable at once
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It's not possible to refer to two different resources, from the same resource
group, in one resource expression. In other words, the variable always points
to one object; it is not a collection of objects.
For example, let's consider this program::
package.name == 'xorg' and package.name == 'procps'
Seemingly the intent was to ensure that both ``xorg`` and ``procps`` are
installed. The reason why this does not work is that at each iteration of
the expression evaluator, the name ``package`` refers to exactly one resource
object. In other words, that expression is equivalent to this one::
A == True and A == False
This type of error is not captured by our limited semantic analyzer. It will
silently evaluate to False and inhibit the job from being started.
To work around this, split the expression into two consecutive lines. As stated
in rule 3 in the list above, there is an implicit ``and`` operator between all
expressions. A working example that expresses the same intent looks like this::
package.name == 'xorg'
package.name == 'procps'
Operator != is useless
^^^^^^^^^^^^^^^^^^^^^^
This is strange at first but quickly becomes obvious once you recall rule 2
from the list above. That rule states that the expression is evaluated
repeatedly, once for each resource from a particular group, and that any
``True`` iteration marks the whole expression as ``True``.
Let's look at a real-world example::
xinput.device_class == 'XITouchClass' and xinput.touch_mode != 'dependent'
So seemingly, the intent here was to have at least one ``xinput`` resource
with a ``device_class`` attribute equal to ``XITouchClass`` and a
``touch_mode`` attribute equal to anything but ``dependent``.
Now let's assume that we have exactly two resources in the ``xinput`` group::
device_class: XITouchClass
touch_mode: dependent
device_class: XITouchClass
touch_mode: something else
Now, this expression will evaluate to ``True``, as the second resource fulfils
the requirements. Is this what the test designer had expected? That's hard to
say. The problem here is that this expression can be understood as *at least
one resource isn't something* **or** *all resources weren't something*. Both
are equally valid desires and, depending on how the test is implemented, may or
may not work correctly in practice.
Currently there is no workaround. We are considering adding a new syntax that
would allow specifying this explicitly. The proposal is documented below as
"implicit any(), explicit all()".
Everything is a string
^^^^^^^^^^^^^^^^^^^^^^
Resource programs are regular python programs evaluated in unusual ways, but
all of the variables that are exposed through the resource object are strings.
This has considerable impact on comparisons: unless you are comparing to a
string, the comparison will always silently fail, as python has dynamic but
strict, not loose, typing (there is no implicit type conversion). To alleviate
this problem several type names / conversion functions are allowed in
requirement programs. Those are:
* :py:class:`int`, to convert to integer numbers
* :py:class:`float`, to convert to floating point numbers
* :py:class:`bool`, to convert to a boolean context
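A small illustration of the pitfall and the fix, in plain Python, mirroring
what happens inside a resource expression (the ``count`` value is made up):

```python
# Resource attributes are always strings, so a hypothetical resource
# expression should read  int(cpu.count) >= 2  rather than  cpu.count >= 2.
count = "4"           # what the resource object actually holds
wrong = (count == 4)  # silently False: str and int never compare equal
right = int(count) >= 2
```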
Considered enhancements
-----------------------
We are currently considering one improvement to resource programs. It would
allow us to introduce a fix that resolves some issues in a backwards-compatible
way. Technical aspects are not yet resolved, as that extension would not be
available in :term:`Checkbox` until Checkbox can be built on top of
:term:`Plainbox`.
Implicit any(), explicit all()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This proposal changes the way resource expressions are evaluated.
The implicit ``any()`` implemented as a loop over all resources from the
resource group designated by variable name would be configurable.
A developer may choose to wrap the whole expression in the ``all()`` function
to indicate that the expression inside ``all()`` must evaluate to ``True`` for
**all** iterations (all resources).
This would allow solving the case where a job can only run, for example, when a
certain package is **not** installed. This could be expressed as::
all(package.name != 'ubuntu-desktop')
Resources in Checkbox
=====================
The following chapters explain how resources originally worked in
:term:`Checkbox`. Only notable differences from the :term:`Plainbox`
implementation are listed.
Getting started with development
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Plainbox uses python3 for development. The core is really system independent
but you will need Ubuntu to really make the best of it and experience it as we
do. We encourage everyone to use the most recent Ubuntu release for
development. Usually this brings the best, most recent tools without having to
search for software on the Internet.
Plainbox has almost no dependencies itself, almost, because we depend on the
mighty :term:`Checkbox` project to provide us with a lot of existing
infrastructure. Testing Plainbox requires additional packages and some
non-packaged software. You will typically want to install it and take advantage
of the integration we provide.
.. note::
If you are working with the source please be aware that Plainbox requires
an installed copy of Checkbox. Checkbox in turn has many scripts that
depend on various system packages, including python packages that cannot be
installed from pypi. If you were planning on using :command:`virtualenv`
then please make sure to create it with the ``--system-site-packages``
option.
Get the source
--------------
Source code for Plainbox is kept along with several other related projects in
the `checkbox` project on launchpad. You will need to use bzr to get a local
copy.
.. code-block:: bash
$ bzr branch lp:checkbox
.. note::
If you would rather use ``git`` you can also do that (and in fact, some of
us already do). Head to `git-lp homepage `_
and follow the guide there to use git-lp with this project.
Get the dependencies
--------------------
You will need some tools to work on Checkbox. Scripted installation of almost
everything required is available (except for VirtualBox and Vagrant, those are
still manual).
From the top of the checkbox checkout run `mk-venv`, that script will install
all the missing dependencies and set you up for work on your machine.
Getting Vagrant
---------------
While developing Plainbox you will often need to run potentially dangerous
commands on your system, such as asking it to suspend and wake up
automatically. We also need to support a range of Ubuntu releases, going all
the way back to Ubuntu 12.04. This may cause compatibility issues that go
unnoticed until they hit our CI system. To minimize this, Plainbox uses
:term:`Vagrant` to create lightweight execution environments that transparently
share your source tree and allow you to quickly create and share testing
environments that can be deployed by any developer in minutes. Vagrant uses
:term:`VirtualBox` and while both are packaged in Ubuntu, unless you are
running Ubuntu 13.04 you should download and install the software from their
upstream projects.
If you are running Ubuntu 13.04:
.. code-block:: bash
$ sudo apt-get install vagrant
If you are running an earlier version of Ubuntu follow these two links to get started:
* http://downloads.vagrantup.com/
* https://www.virtualbox.org/wiki/Downloads
If you have not installed VirtualBox before, you must add yourself to the
``vboxusers`` group, log out and log back in again.
.. code-block:: bash
$ sudo usermod -G vboxusers -a $USER
Initialize virtualenv
---------------------
Plainbox will use a few unpackaged and bleeding-edge releases from
:term:`pypi`; these are installed by an additional script. By default the
script assumes you have a `/ramdisk` directory, but you can pass any path as an
argument to use an alternate location.
.. code-block:: bash
$ ./mk-venv
After everything is set up you can activate the virtualenv environment with the
dot command. Note that there *is* a space between the dot and the forward
slash. You can repeat this command in as many shells as you like.
.. code-block:: bash
$ . /ramdisk/venv/bin/activate
Once virtualenv is activated your shell prompt will be changed to reflect that.
You should now be able to run :command:`plainbox --help` to ensure everything
is working properly.
Initialize vagrant
------------------
Vagrant allows us to ship a tiny text file :file:`Vagrantfile` that describes
the development and testing environment. This file tells :command:`vagrant` how
to prepare a virtual machine for testing. If you never used it before you may
want to keep a tab open on `vagrant getting started guide
`_
We did all the hard work so that you don't have to. To get everything ready
just run one command:
.. code-block:: bash
$ vagrant up
This will download vanilla Ubuntu cloud images, initialize VirtualBox,
provision virtual machines (one for each supported Ubuntu release) and allow
you to ssh into them for testing with one command.
This will take a moment, depending on the speed of your network. Once that is
done you should be able to log into, say, ``precise`` and run
:command:`plainbox --help` to see if everything is all right.
.. code-block:: bash
$ vagrant ssh precise
vagrant@vagrant-ubuntu-precise-32:~$ plainbox --help
usage: plainbox [-h] [-v] {run,special,self-test} ...
positional arguments:
{run,special,self-test}
run run a test job
special special/internal commands
self-test run integration tests
optional arguments:
-h, --help show this help message and exit
-v, --version show program's version number and exit
$ exit
Getting and setting up LXC
--------------------------
An alternative way to run tests in isolated environments for various Ubuntu
releases is to use `LXC `_. LXC is lighter on
resources and doesn't require hardware virtualization support, but since it
doesn't do real, full virtualization, it may be inadequate for some kinds of
tests. It's up to you to decide whether you want to use it.
If you want to use LXC, the easiest way is to use Ubuntu 14.04, and just
install the lxc package:
.. code-block:: bash
$ sudo apt-get install lxc
Setting LXC up for plainbox testing is easy: simply configure your system so
that the user that will run the tests can use `sudo` to execute lxc subcommands
without requiring a password. For example, if your user is called `peter`, run
`sudo visudo` and paste this configuration at the very end of that file; this
will allow running lxc tests as that user:
.. code-block:: bash
Cmnd_Alias LXC_COMMANDS = /usr/bin/lxc-create, /usr/bin/lxc-start, \
/usr/bin/lxc-destroy, /usr/bin/lxc-attach, /usr/bin/lxc-start-ephemeral, \
/usr/bin/lxc-stop, /usr/bin/lxc-ls
peter ALL=NOPASSWD: LXC_COMMANDS
The first time you use lxc, it will download the base files for each release
you test, which will be slow; afterwards, it will use a locally cached copy to
speed things up.
Running Plainbox tests
^^^^^^^^^^^^^^^^^^^^^^
Plainbox is designed to be testable, so it would be silly if it was hard to run
tests. In fact there are many different ways to run them; they all run the
same code, so don't worry.
To test the current code you are working on you can:
- Run the :command:`./test-in-vagrant.sh` script from the top-level directory.
  This will take the longest but will go over *all* the tests on *all* the
  supported versions of Ubuntu. It will run Checkbox unit tests, Plainbox unit
  tests and it will even run integration tests that actually execute jobs.
- Run the :command:`./test-in-lxc.sh` script from the top-level directory. This
  also executes *all* the tests on *all* the supported versions of Ubuntu,
  however it uses LXC containers instead of a VirtualBox virtual machine.
- Run :command:`plainbox self-test --unit-tests` or
:command:`plainbox self-test --integration-tests`. This will execute all the
tests right on your machine, without any virtualization (well, unless you do
that after running :command:`vagrant ssh`). Typically you would run unit
tests while being in a ``virtualenv`` with the ``plainbox`` package in
development mode, as created by running :command:`python setup.py develop`.
- Run :command:`./setup.py test`. This will install any required test
  dependencies from pypi and run the unit tests.
- Run the script :command:`test-with-coverage.sh` while being in a virtualenv.
This will also compute testing code coverage and is very much recommended
while working on new code and tests.
Submitting Patches
^^^^^^^^^^^^^^^^^^
We use `Launchpad `_ for most of our project management.
All code changes should be submitted as merge requests. Launchpad has
`extensive documentation `_ on how to use various
facilities it provides.
In general we are open to contributions but we reserve the right to reject
patches if they don't fit into the needs of the :term:`Hardware Certification`.
If you have an idea go and talk to us on :abbr:`IRC (Internet Relay Chat)` on
the `#ubuntu-quality `_ channel.
We have some basic rules for patch acceptance:
0. Be prepared to alter your changes.
This is a meta-rule. One of the points of code reviews is to improve the
proposal. That implies the proposal may need to change. You must be prepared
and able to change your code after getting feedback.
To do that efficiently you must structure your work in a way where each
committed change works for you rather than against you. The rules listed
below are a reflection of this.
1. Each patch should be a single logical change that can be applied.
Don't clump lots of changes into one big patch. That will only delay review
and make acting on feedback difficult and annoying. This may mean that the
history has many small patches that can land in trunk in a FIFO manner. The
oldest patch of your branch may be allowed to land and should make sense on
its own. This has implications on how general code editing should be
performed. If you break some APIs then first introduce a working replacement,
then change usage of the API, and lastly remove any dead code.
2. Don't keep junk patches in your branch.
Don't keep patches such as "fix typo" in your branch, as that makes the review
process more difficult; some reviewers will read your patches one by one.
This is especially important if your changes are substantial.
3. Don't merge with trunk, rebase on trunk.
This way you can keep your local delta as a collection of meaningful,
readable patches. Reading the full diff and following the complex merge
history (especially for long-lived branches) is difficult in practice.
4. Keep unrelated changes in separate branches.
If you are working on something and find a bug that needs immediate fixing,
a typo, or anything else that is small and quick to fix, do it. Then
take that patch out of your development branch and into a dedicated branch
and propose it. As the small change is reviewed and lands you can remove
that patch from your development branch.
This is intended to help both the developer and the reviewer. Seemingly
trivial patches may turn out to be more complicated than initially assumed
(and may have their own feedback cycle and iterations). The reviewer can
focus on logical changes and not on a collection of unrelated alterations.
Lastly we may need to apply some fixes to other supported branches and
release those.
5. Don't propose untested code.
We generally like tests for new code. This is not a super-strict requirement
but unless writing tests is incredibly hard we'd rather wait. If testing is
hard we'd rather invest some time in refactoring the code or building
required support infrastructure.
Running jobs as root
====================
:term:`Plainbox` is started without any privileges, but several tests need to
run commands that require them.
Such tests will call a trusted launcher, a standalone script which does not
depend on the :term:`Plainbox` core modules.
`polkit `_ will control access
to system resources. The trusted launcher has to be started using
`pkexec `_
so that the related policy file works as expected.
To avoid a security hole that would allow anyone to run anything as root, the
launcher can only run jobs installed in a system-wide directory. This way we
do not weaken the trust system, as root access is required to install both
components (the trusted launcher and the jobs). The :term:`Plainbox` process will
send an identifier which is matched by a well-known list in the trusted
launcher. This identifier is the job hash:
.. code-block:: bash
$ pkexec plainbox-trusted-launcher-1 --hash JOB-HASH
See :attr:`plainbox.impl.secure.job.BaseJob.checksum` for details about job
hashes.
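As an illustration of why a hash makes a good identifier, the sketch below computes a stable digest of a job definition. This is not the actual Plainbox implementation; it merely assumes, for the sake of the example, that the checksum is a SHA-256 digest over a canonical (sorted-key) serialization of the job's fields:

```python
import hashlib
import json


def job_hash(job_fields):
    """Return a stable identifier for a job definition.

    Illustrative only: serialize the fields canonically (sorted
    keys) and hash the result, so the same definition always
    yields the same identifier regardless of field order.
    """
    canonical = json.dumps(job_fields, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


# Field order does not matter; the digest is 64 hex characters.
a = job_hash({"id": "stub/true", "command": "true"})
b = job_hash({"command": "true", "id": "stub/true"})
assert a == b and len(a) == 64
```

Because only jobs installed in the system-wide directory are considered, knowing a hash is not enough to run arbitrary code; it can only select one of the already-installed definitions.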
Using Polkit
^^^^^^^^^^^^
Available authentication methods
--------------------------------
.. note::
Only applicable to the package version of Plainbox
Plainbox comes with two authentication methods; both aim to retain the
granted privileges for the life of the :term:`Plainbox` process.
* The first method will ask the password only once and show the following
agent on desktop systems (a text-based agent is available for servers):
.. code-block:: text
+-----------------------------------------------------------------------------+
| [X] Authenticate |
+-----------------------------------------------------------------------------+
| |
| [Icon] Please enter your password. Some tests require root access to run |
| properly. Your password will never be stored and will never be |
| submitted with test results. |
| |
| An application is attempting to perform an action that requires |
| privileges. |
| Authentication as the super user is required to perform this action. |
| |
| Password: [________________________________________________________] |
| |
| [V] Details: |
| Action: org.freedesktop.policykit.pkexec.run-plainbox-job |
| Vendor: Plainbox |
| |
| [Cancel] [Authenticate] |
+-----------------------------------------------------------------------------+
The following policy file has to be installed in
:file:`/usr/share/polkit-1/actions/` on Ubuntu systems. Asking for the
password just once and keeping the authentication for forthcoming calls
is provided by the **allow_active** element with the **auth_admin_keep**
value.
Check the `polkit actions `_
documentation for details about the other parameters.
.. code-block:: xml

   <?xml version="1.0" encoding="UTF-8"?>
   <!DOCTYPE policyconfig PUBLIC
    "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
    "http://www.freedesktop.org/standards/PolicyKit/1.0/policyconfig.dtd">
   <policyconfig>
     <vendor>Plainbox</vendor>
     <vendor_url>https://launchpad.net/checkbox</vendor_url>
     <icon_name>checkbox</icon_name>
     <action id="org.freedesktop.policykit.pkexec.run-plainbox-job">
       <description>Run Job command</description>
       <message>Authentication is required to run a job command.</message>
       <defaults>
         <allow_any>no</allow_any>
         <allow_inactive>no</allow_inactive>
         <allow_active>auth_admin_keep</allow_active>
       </defaults>
       <annotate key="org.freedesktop.policykit.exec.path">/usr/bin/plainbox-trusted-launcher-1</annotate>
       <annotate key="org.freedesktop.policykit.exec.allow_gui">TRUE</annotate>
     </action>
   </policyconfig>
* The second method is only intended to be used in headless mode (like `SRU`).
The only difference with the above method is that **allow_active** will be
set to **yes**.
.. note::
The two policy files are available in the Plainbox :file:`contrib/`
directory.
Environment settings with pkexec
--------------------------------
`pkexec `_
allows an authorized user to execute a command as another user. However, the
environment that ``command`` will run in is set to a minimal, known and
safe environment, in order to avoid injecting code through ``LD_LIBRARY_PATH``
or similar mechanisms.
However, some job commands require specific environment variables, such as the
name of an access point for a wireless test. Those variables must be
made available to the trusted launcher. To do so, the environment mapping is
sent to the launcher as key/value pairs, the same way they are passed to the
env(1) command:
.. code-block:: bash
$ pkexec trusted-launcher JOB-HASH [NAME=VALUE [NAME=VALUE ...]]
Each NAME will be set to VALUE in the environment, provided that they are known
and defined in the :ref:`JobDefinition.environ ` parameter.
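The parsing and filtering described above can be sketched in a few lines. The helper below is hypothetical (not the trusted launcher's real code) and only illustrates the idea of splitting ``NAME=VALUE`` arguments and dropping any name that is not on the job's allow-list:

```python
def filtered_environ(pairs, allowed):
    """Parse NAME=VALUE strings, keeping only allowed names.

    ``pairs`` mimics the trailing command line arguments of the
    trusted launcher; ``allowed`` mimics the job's environ
    allow-list. Unknown names are silently dropped.
    """
    env = {}
    for pair in pairs:
        name, _, value = pair.partition("=")
        if name in allowed:
            env[name] = value
    return env


# A dangerous variable that is not on the allow-list is ignored.
env = filtered_environ(
    ["WPA_SSID=lab-ap", "LD_LIBRARY_PATH=/tmp/evil"],
    allowed={"WPA_SSID"})
assert env == {"WPA_SSID": "lab-ap"}
```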
plainbox-trusted-launcher-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The trusted launcher is the minimal code needed to be able to run a
:term:`Checkbox` job command.
Internally the checkbox trusted launcher looks for jobs in the system locations
defined in :attr:`plainbox.impl.secure.providers.v1.all_providers` which
defaults to :file:`/usr/share/plainbox-trusted-launcher-1/*.provider`.
Usage
-----
.. code-block:: text
plainbox-trusted-launcher-1 [-h] (--hash HASH | --warmup)
[--via LOCAL-JOB-HASH]
[NAME=VALUE [NAME=VALUE ...]]
positional arguments:
NAME=VALUE Set each NAME to VALUE in the string environment
optional arguments:
-h, --help show this help message and exit
--hash HASH job hash to match
--warmup Return immediately, only useful when used with
pkexec(1)
--via LOCAL-JOB-HASH Local job hash to use to match the generated job
.. note::
Check all job hashes with ``plainbox special -J``
As stated in the polkit chapter, only a trusted subset of the environment
mapping will be set when using `subprocess.call` to run the command. Only the
variables defined in the job's environ property are allowed, to avoid
compromising the root environment. Needed modifications, like adding
``CHECKBOX_SHARE`` and new paths to scripts, are managed by
plainbox-trusted-launcher-1.
Authentication on Plainbox startup
----------------------------------
To avoid prompting the password at the first test requiring privileges,
:term:`Plainbox` will call the ``plainbox-trusted-launcher-1`` with the
``--warmup`` option. This is essentially a no-op that returns immediately, but
thanks to the installed policy file the authentication will be kept.
.. note::
When running the development version from a branch, the usual polkit
authentication agent will pop up to ask the password each and every time.
This is the only difference.
Special case of jobs using the Checkbox local plugin
----------------------------------------------------
For jobs generated from :ref:`local ` jobs (e.g.
disk/read_performance.*) the trusted launcher is started with ``--via``,
meaning that we first have to eval a local job to find a hash match. Once a
match is found, the job command is executed.
.. code-block:: bash
$ pkexec plainbox-trusted-launcher-1 --hash JOB-HASH --via LOCAL-JOB-HASH
.. note::
It will obviously fail if a local job ever generates another local job.
.. _usage:
Basic Usage
===========
Currently :term:`Plainbox` has no graphical user interface. To use it you need
to use the command line.
Plainbox has built-in help system so running :command:`plainbox run --help`
will give you instant information about all the various arguments and options
that are available. This document is not intended to replace that.
Running a specific job
^^^^^^^^^^^^^^^^^^^^^^
Basically there is just one command that does everything we can do so far:
:command:`plainbox run`. It has a number of options that tell it which
:term:`job` to run and what to do with the results.
To run a specific :term:`job` pass it to the ``--include-pattern`` or ``-i``
option.
For example, to run one of the internal "smoke" test jobs:
.. code-block:: bash
$ plainbox run -i 2013.com.canonical.plainbox::stub/true
.. note::
The option ``-i`` can be provided any number of times.
Running jobs related to a specific area
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Plainbox has no concept of job categories, but you can simulate them by
running all jobs that follow a specific naming pattern. For example, to run
all of the USB tests you can run the following command:
.. code-block:: bash
$ plainbox run -i 'usb/.*'
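The ``-i`` argument is treated as a pattern matched against job identifiers, so the selection above can be approximated with Python's ``re`` module (the job names here are invented for illustration):

```python
import re

jobs = ["usb/storage", "usb/hid", "cpu/topology"]
pattern = re.compile("usb/.*")

# Keep only the jobs whose whole identifier matches the pattern.
selected = [job for job in jobs if pattern.fullmatch(job)]
assert selected == ["usb/storage", "usb/hid"]
```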
To list all known jobs run:
.. code-block:: bash
plainbox dev special --list-jobs
Running a white list
^^^^^^^^^^^^^^^^^^^^
To run a :term:`whitelist` pass the ``--whitelist`` or ``-w`` option.
For example, to run the default white list run:
.. code-block:: bash
$ plainbox run -w /path/to/some/file.whitelist
Saving test results
^^^^^^^^^^^^^^^^^^^
Anything that Plainbox captures and stores during test execution can be
exported to a file using the exporter system. The two most commonly used
exporters are JSON (versatile and general) and XML (for internal Canonical use).
JSON Exporter
-------------
To generate a JSON file with all of the internally available data (for storage,
processing or other automation) you will need to pass three additional
arguments to ``plainbox run``:
#. ``--output-format=2013.com.canonical.plainbox::json``
#. ``--output-options=OPTION1,OPTION2`` where *OPTIONx* are option names.
#. ``--output-file=NAME`` where *NAME* is a file name.
Pass ``?`` to ``--output-options`` for a list of available options. Multiple
exporter options can be specified, separated with commas.
.. code-block:: bash
$ plainbox run --whitelist=/path/to/some/file.whitelist --output-format=2013.com.canonical.plainbox::json --output-file=results.json
XML Exporter
------------
To generate an XML file that can be sent to the :term:`certification website`
you need to pass two additional arguments to ``plainbox run``:
#. ``--output-format=2013.com.canonical.plainbox::hexr``
#. ``--output-file=NAME`` where *NAME* is a file name
For example, to get the default certification tests ready to be submitted
run this command:
.. code-block:: bash
$ plainbox run --whitelist=/path/to/some/file.whitelist --output-format=2013.com.canonical.plainbox::hexr --output-file=submission.xml
Other Exporters
---------------
You can discover the full list of known exporters at runtime, by passing ``?``
to ``--output-format``.
Custom Exporters
----------------
Exporters can be provided by third-party packages and are very simple to
write. If you don't want to transform JSON to your preferred format yourself,
you can copy the JSON exporter and use it as a template for writing your own.
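For example, instead of writing a full exporter you can post-process the JSON output. The snippet below assumes a simplified document layout (a ``result_map`` object keyed by job name, with an ``outcome`` field per job); the real file produced by the JSON exporter may be structured differently, so adapt the keys to what you actually find in it:

```python
import csv
import io
import json

# A simplified stand-in for the content of results.json.
raw = json.loads('{"result_map": {"usb/storage": {"outcome": "pass"}}}')

# Flatten the job/outcome mapping into CSV rows.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["job", "outcome"])
for job, data in sorted(raw["result_map"].items()):
    writer.writerow([job, data["outcome"]])

assert "usb/storage,pass" in buf.getvalue()
```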
ChangeLog
=========
.. note::
This changelog contains only a summary of changes. For a more accurate
accounting of development history please inspect the source history
directly.
.. _version_0_25:
Plainbox 0.25 (unreleased)
^^^^^^^^^^^^^^^^^^^^^^^^^^
* `plainbox startprovider` may now be run with an `--empty` option that
  generates a very basic provider containing only a `./manage.py` file. Use
  this option when you know your way around and you want to quickly start
  developing plainbox jobs without any other jobs polluting your provider.
* Plainbox now supports a new flag :ref:`explicit-fail
  `. Using that flag makes manually failing a job
  require a comment to be entered. This flag naturally makes sense only for
  'manual', 'user-interact-verify' and 'user-verify' jobs.
.. _version_0_24:
Plainbox 0.24
^^^^^^^^^^^^^
* Add a dependency on guacamole.
* Plainbox ignores trailing garbage after EOF while reading IOLog zip.
See https://bugs.python.org/issue24301.
* Session assistant now preserves job ordering from test plans.
* Session assistant ignores calls to finalize_session when the session has
already been finalized. This lets application call finalization freely
without having to keep that state information in them.
* Plainbox expands the SessionAssistant initializer API:
**app_version**: so that we can use this implicitly in some places,
e.g. don't resumes sessions created by future versions, etc.
**api_version**: so that we can change usage expectations over time
but let applications stay compatible by using a fixed API version.
This can be changed to a __new__ call that returns a versioned
SA class instead of doing if-then-else magic in all the places.
**api_flags**: so that we can allow applications to opt-into optional
features and so that we can adjust expectations accordingly. This
will also allow us to easily compare applications for feature
parity.
For now all new arguments have sane defaults. Once all applications are
patched the defaults will go away.
* Plainbox now supports a new way to express the estimated duration of
:ref:`jobs ` and
:ref:`test plans ` that is much easier for
humans to read and write. Instead of having to mentally parse ``3725`` you
can just write ``1h 2m 5s`` or ``1h:2m:5s``.
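A sketch of the conversion between the human-readable notation and plain seconds (not the actual parser used by Plainbox, which may accept more forms):

```python
import re


def parse_duration(text):
    """Convert strings like '1h 2m 5s' or '1h:2m:5s' to seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    total = 0
    for value, unit in re.findall(r"(\d+)([hms])", text):
        total += int(value) * units[unit]
    return total


assert parse_duration("1h 2m 5s") == 3725
assert parse_duration("1h:2m:5s") == 3725
```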
* Plainbox now supports an *after* job ordering constraint. This constraint is
very similar to the existing *depends* constraint, except that the outcome of
the referenced job is not important. In practical terms, even if one job runs
and fails, another job that runs *after* it, will run.
This constraint is immediately useful to all *attachment* jobs that want to
collect a log file from some other operation, regardless of the outcome of
that operation. In the past those would have to be carefully placed in the
test plan, in the right order. By using the *after* constraint, the
attachment jobs will automatically pull in their log-generating cousins and
will run at the right time no matter what happens.
* Plainbox now allows more than one resource object to be used in a resource
expression. This can be used to construct resource expressions that combine
facts from multiple sources (e.g. the manifest resource with something else).
As an **important** implementation limitation please remember that the
complexity of such resource programs is proportional to the product of the
number of resource objects associated with each resource in an expression.
In practice it is not advised to use resource objects with more than a few
resource records associated with them. This is just an implementation detail
that can be lifted in subsequent versions.
Examples of the usage of this feature can be found in the TPM (Trusted
Platform Module) provider.
* https://launchpad.net/plainbox/+milestone/0.24
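The complexity remark about resource expressions can be made concrete with a sketch: evaluating an expression that references two resources means considering every combination of their records, so the cost grows with the product of the record counts. The resource names and fields below are invented for illustration:

```python
from itertools import product

# Two hypothetical resources with one and two records respectively.
manifest = [{"has_tpm": "True"}]
package = [{"name": "tpm-tools"}, {"name": "vim"}]

# Every combination of records is evaluated, hence
# len(manifest) * len(package) checks in total.
matches = [
    (m, p)
    for m, p in product(manifest, package)
    if m["has_tpm"] == "True" and p["name"] == "tpm-tools"
]
assert len(matches) == 1
```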
.. _version_0_23:
Plainbox 0.23
^^^^^^^^^^^^^
* Mandatory jobs - jobs may be marked as mandatory so that they are always
  executed. This is useful for jobs that gather information about hardware.
  Use the mandatory_include test plan field to mark the jobs you want always
  to be run.
* Bootstrapping jobs - applications may run jobs that generate other jobs
  prior to the execution of the 'normal' list of jobs. Use the
  bootstrap_include field of the test plan to list all jobs that generate
  other jobs.
Read more about mandatory and bootstrapping jobs in
:doc:`plainbox test plan unit `
* Plainbox now supports a new flag :ref:`has-leftovers
`, that governs the behavior of leftover file
detection feature. When this flag is added to a job definition files left
over by the execution of a command are silently ignored.
* Plainbox now supports a new flag on job definitions :ref:`simple
` that is meant to cut the boiler-plate from fully automated
test cases. When this flag is added to a job definition then many otherwise
mandatory or recommended features are disabled.
.. _version_0_18:
Plainbox 0.18
^^^^^^^^^^^^^
.. note::
This version is under active development. The details in the milestone page
may vary before the release is finalized.
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.18
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_17:
Plainbox 0.17
^^^^^^^^^^^^^
This is an (out-of-cycle) periodic release, containing both bug fixes and some
minor new features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.17
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_16:
Plainbox 0.16
^^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.16
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_15:
Plainbox 0.15
^^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.15
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_14:
Plainbox 0.14
^^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.14
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_13:
Plainbox 0.13
^^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.13
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_12:
Plainbox 0.12
^^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.12
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_11:
Plainbox 0.11
^^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.11
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_10:
Plainbox 0.10
^^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.10
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_9:
Plainbox 0.9
^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.9
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_8:
Plainbox 0.8
^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.8
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_7:
Plainbox 0.7
^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.7
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_6:
Plainbox 0.6
^^^^^^^^^^^^
This is a periodic release, containing both bug fixes and some minor new
features. Details are available at:
* https://launchpad.net/plainbox/+milestone/0.6
.. warning::
API changes were not documented for this release. We are working on a new
system that will allow us to automatically generate API changes between
releases without the added manual maintenance burden.
.. _version_0_5:
Plainbox 0.5.4
^^^^^^^^^^^^^^
This is a maintenance release of the 0.5 series.
Bugs fixed in this release are assigned to the following milestone:
* Bugfixes: https://launchpad.net/plainbox/+milestone/0.5.4
Plainbox 0.5.3
^^^^^^^^^^^^^^
This is a maintenance release of the 0.5 series.
Bug fixes
---------
Bugs fixed in this release are assigned to the following milestone:
* Bugfixes: https://launchpad.net/plainbox/+milestone/0.5.3
API changes
-----------
* Plainbox now has an interface for transport classes.
:class:`plainbox.abc.ISessionStateTransport` that differs from the old
implementation of the certification transport (the only one that used to
exist). The new interface has well-defined return value, error semantics and
takes one more argument (session state). This change was required to
implement the launchpad transport.
* Plainbox now has support for pluggable build systems that supply an
  automatic value for the build_cmd argument in manage.py's setup() call. The
  existing build systems are available in the
  :mod:`plainbox.impl.buildsystems` module.
* All exporters can now make use of key=value options.
* The XML exporter can now be customized to set the client name option. This is
available using the standard exporter option list and is available both at
API level and on command line.
* The provider class can now keep track of the src/ directory and the build/bin
directory, which are important for providers under development. This feature
is used to run executables from the build/bin directory.
* Plainbox will now load the src/EXECUTABLES file, if present, to enumerate
executables built from source. This allows manage.py install to be more
accurate and allows manage.py info do display executables even before they
are built.
Plainbox 0.5.2
^^^^^^^^^^^^^^
This is a maintenance release of the 0.5 series.
Bug fixes
---------
Bugs fixed in this release are assigned to the following milestone:
* Bugfixes: https://launchpad.net/checkbox/+milestone/plainbox-0.5.2
API changes
-----------
* Plainbox now remembers the base directory (aka location) associated with
  each provider. This is available as
  :attr:`plainbox.impl.secure.providers.v1.Provider1.base_dir`.
* The :class:`plainbox.impl.commands.checkbox.CheckboxInvocationMixIn` gained a
new required argument to pass the configuration object around. This is
required to fix bug https://bugs.launchpad.net/checkbox/+bug/1298166. This
API change is backwards incompatible and breaks checkbox-ng << 0.3.
* Plainbox now offers the generic extensibility point for build systems for
provider executables. Entry points for classes implementing the
:class:`plainbox.abc.IBuildSystem` interface can be registered in the
``plainbox.buildsystems`` pkg-resources entry point.
* Plainbox has a better job validation subsystem. Job validation parameters
(eventually passed to
:meth:`plainbox.impl.job.CheckboxJobValidator.validate()`) can be set on the
provider loader class and they will propagate across the stack. Along with
more fine-tuned controls for strict validation and deprecated fields
validation this offers tools better ways to discover potential problems.
Plainbox 0.5.1
^^^^^^^^^^^^^^
First working release of the 0.5 series; 0.5 was missing one critical patch
and didn't work. Basically, the tag was applied to the wrong revision.
Plainbox 0.5
^^^^^^^^^^^^
New Features
------------
* Plainbox is now a better development tool for test authors. With the new
'plainbox startprovider' command it is easy to bootstrap development of
third party test collections. This is further explained in the new
:ref:`tutorial`. The template is described in :doc:`provider template
`.
* Test providers now control namespaces for job definitions, allowing test
authors to freely name job definitions without any central coordination
authority. See more about :doc:`provider namespaces
`.
* Plainbox is now fully internationalized, making it possible to translate all
of the user interface. Certain extensible features such as commands and test
job providers are also translatable and can be shipped by third party
developers. All the translations are seamlessly enabled, even if they come
from different sources. See more about :doc:`provider internationalization
`.
Command Line Interfaces Changes
-------------------------------
* The -c | --checkbox option was removed. It used to select which "provider" to
load (out of packaged providers, special source provider and special stub
provider) but with the introduction of :term:`namespaces ` this
option became meaningless. To support a subset of reasons why it was being
used, a new option was added in its place. The new --providers option can
decide if plainbox will load **all** providers (default), just the special
**src** provider or just the special **stub** provider. We hope that nobody
will need to use this option.
* The ``plainbox run -i``, ``plainbox dev analyze -i`` and similar
--include-patterns options no longer work with simple job definition
identifier patterns. It now requires fully qualified patterns that also
include the name-space of the defining provider. In practical terms instead
of ``plainbox run -i foo`` one needs to use ``plainbox run -i
2013.example.com::foo``. If one really needs to run *any* job ``foo`` from
any provider that can be achieved with ``plainbox run -i '.*::foo'``.
Workflow Changes
----------------
* Plainbox is now available in Debian as the ``python3-plainbox`` and
``plainbox`` packages. Several of the Checkbox project developers are
maintaining packages for the core library, test providers and whole test
applications.
* Plainbox dropped support for Ubuntu 13.04 (Raring Ringtail), following
scheduled end-of-life of that release.
* Plainbox dropped support for Ubuntu 13.10 (Saucy Salamander) given the
imminent release of the next version of Ubuntu.
* Plainbox now supports Ubuntu 14.04 (Trusty Tahr), scheduled for release on
the 17th of April 2014.
This implies that any patch merged into trunk is only tested on Ubuntu 12.04
(with python3.2) and Ubuntu 14.04 (with python3.3, which will switch to python
3.4 later, before the final release).
Internal Changes
----------------
General Changes
...............
* Plainbox now supports Python 3.4. This includes existing support for Python
3.2 and 3.3. Effective Ubuntu coverage now spans two LTS releases.
This will be maintained until the end of Ubuntu 12.04 support.
New Modules
...........
* Plainbox now has a dedicated module for implementing versatile command line
utilities :mod:`plainbox.impl.clitools`. This module is used to implement the
new :mod:`plainbox.provider_manager` which is what backs the per-provider
management script.
* The new :mod:`plainbox.provider_manager` module contains the implementation
of the ``manage.py`` script, which is generated for each new provider. The
script implements a set of subcommands for working with the provider from a
developer's point of view.
* The vendor package now contains a pre-release version of
:mod:`~plainbox.impl.vendor.textland` - a work-in-progress, text-mode
compositor for console applications. TextLand is used to implement certain
screens displayed by checkbox-ng. This makes it easier to test, easier to
develop (without having to rely on complex curses APIs) and more portable as
the basic TextLand API (to display a buffer and provide various events) can
be implemented on many platforms.
API changes (Job Definitions)
.............................
* Plainbox now offers two new properties for identifying (naming) job
definitions, :meth:`plainbox.impl.job.JobDefinition.id` and
:meth:`plainbox.impl.job.JobDefinition.partial_id`. The ``id`` property is
the full, effective identifier composed of ``partial_id`` and
``provider.namespace``, with the C++ scope resolution operator ``::``
joining both into one string. The ``partial_id`` field is loaded from the
``id`` key in RFC822-like job definition syntax and is the part without the
name-space. Plainbox now uses the ``id`` everywhere where ``name`` used to be
used before. If the ``id`` field (which defines ``partial_id``) is not present
in an RFC822 job definition then it defaults to ``name``, making this change
fully backwards compatible.
* The :meth:`plainbox.impl.job.JobDefinition.name` property is now deprecated.
It is still available but has been entirely replaced by the new ``id`` and
``partial_id`` properties. It will be removed as a property in the next
release of Plainbox.
* Plainbox now offers the new :meth:`plainbox.impl.job.JobDefinition.summary`
which is a short, one-line description of the job definition. It should be
used whenever a job definition needs to be listed (in user interfaces,
reports, etc). It can be translated and a localized version is available as
:meth:`plainbox.impl.job.JobDefinition.tr_summary()`
* Plainbox now offers a localized version of a job description as
:meth:`plainbox.impl.job.JobDefinition.tr_description()`.
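The identifier composition described above can be sketched in a few lines of Python. This is an illustration only; the ``full_id`` helper is hypothetical and the real logic lives in the ``id`` property of ``JobDefinition``:

```python
def full_id(namespace, partial_id):
    # Compose the effective job identifier from the provider
    # namespace and the partial id, joined with "::".
    if namespace is None:
        # Jobs without a namespace keep their partial id.
        return partial_id
    return "{}::{}".format(namespace, partial_id)

print(full_id("2013.example.com", "foo"))  # prints 2013.example.com::foo
```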
API changes (White Lists)
.........................
* Plainbox now offers new and improved APIs for loading whitelists
:meth:`plainbox.impl.secure.qualifiers.WhiteList.from_string()` and
:meth:`plainbox.impl.secure.qualifiers.WhiteList.from_file()`.
* Plainbox now tracks the origin of each whitelist, recording where it was
defined. The origin is available as
:meth:`plainbox.impl.secure.qualifiers.WhiteList.origin`
* Plainbox can now optionally store and use the implicit name-space of
WhiteList objects. This name-space will be used to qualify all the patterns
that don't use the scope resolution operator ``::``.
The implicit name-space is available as
:meth:`plainbox.impl.secure.qualifiers.WhiteList.implicit_namespace`.
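The implicit name-space qualification can be pictured with a short sketch. The ``qualify_pattern`` helper is a hypothetical illustration, not the actual WhiteList code:

```python
import re


def qualify_pattern(pattern, implicit_namespace):
    # Patterns that already use the "::" scope resolution operator
    # are left alone; bare patterns are prefixed with the implicit
    # namespace (escaped, since the namespace contains dots).
    if "::" in pattern:
        return pattern
    return re.escape(implicit_namespace) + "::" + pattern


qualify_pattern("foo.*", "2013.example.com")
```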
API changes (Providers)
.......................
* Plainbox can validate providers, jobs and whitelists better than before. In
particular, broken providers are now verbosely ignored. This is implemented
as a number of additional validators on
:class:`plainbox.impl.secure.providers.v1.Provider1Definition`
* Plainbox can now enumerate all the executables of a provider
:meth:`plainbox.abc.IProvider1.get_all_executables()`
* Plainbox now offers new APIs for applications to load as much of provider
content as possible, without stopping on the first encountered problem.
:meth:`plainbox.impl.secure.providers.v1.Provider1.load_all_jobs()`
* The ``Provider1.load_jobs()`` method has been removed. It was only used
internally by the class itself. Identical functionality is now offered by
:class:`plainbox.impl.secure.plugins.FsPlugInCollection` and
:class:`plainbox.impl.secure.providers.v1.JobDefinitionPlugIn`.
* Plainbox now associates a gettext domain with each provider. This
information is available both in
:attr:`plainbox.impl.secure.providers.v1.Provider1Definition.gettext_domain`
and :attr:`plainbox.impl.secure.providers.v1.Provider1.gettext_domain`
* Plainbox now derives a namespace from the name of the provider. The namespace
is defined as the part of the provider name up to the colon. For example
provider name ``2013.com.canonical.certification:resources`` defines provider
namespace ``2013.com.canonical.certification``. The computed namespace is
available as :meth:`plainbox.impl.secure.providers.v1.Provider1.namespace`
* Plainbox now offers a localized version of the provider description string as
:meth:`plainbox.impl.secure.providers.v1.Provider1.tr_description()`
* Plainbox now passes the provider namespace to both whitelist and job
definition loaders, thus making them fully aware of the namespace they come
from.
* The implementation of various directory properties on the
:class:`plainbox.impl.secure.providers.v1.Provider1` class has changed. They
are now explicitly configurable and are not derived from the now-gone
``location`` property. This affects
:meth:`plainbox.impl.secure.providers.v1.Provider1.jobs_dir`,
:meth:`plainbox.impl.secure.providers.v1.Provider1.whitelists_dir`,
:meth:`plainbox.impl.secure.providers.v1.Provider1.data_dir`,
:meth:`plainbox.impl.secure.providers.v1.Provider1.bin_dir`, and the new
:meth:`plainbox.impl.secure.providers.v1.Provider1.locale_dir`. This change
makes the runtime layout of each directory flexible and more suitable for
packaging requirements of particular distributions.
* Plainbox now associates an optional directory with per-provider locale data.
This allows Plainbox to pass it to ``bindtextdomain()``. The locale directory is
available as :meth:`plainbox.impl.secure.providers.v1.Provider1.locale_dir`.
* Plainbox now offers a utility method,
:meth:`plainbox.impl.secure.providers.v1.Provider1.from_definition()`, to
instantiate a new provider from
:class:`plainbox.impl.secure.providers.v1.Provider1Definition`
* The :class:`plainbox.impl.secure.providers.v1.Provider1Definition` class now
offers a set of properties that compute the implicit values of certain
directories. Those all depend on a non-Unset ``location`` field. Those
include:
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.implicit_jobs_dir`,
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.implicit_whitelists_dir`,
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.implicit_data_dir`,
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.implicit_bin_dir`,
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.implicit_locale_dir`,
and
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.implicit_build_locale_dir`.
* The :class:`plainbox.impl.secure.providers.v1.Provider1Definition` class now
offers a set of properties that compute the effective values of certain
directories:
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.effective_jobs_dir`,
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.effective_whitelists_dir`,
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.effective_data_dir`,
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.effective_bin_dir`,
and
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.effective_locale_dir`.
* The :class:`plainbox.impl.secure.providers.v1.Provider1Definition` class now
offers the
:meth:`plainbox.impl.secure.providers.v1.Provider1Definition.effective_gettext_domain`
property.
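The namespace derivation rule mentioned above (everything up to the colon in the provider name) is simple enough to sketch directly; the ``provider_namespace`` helper is illustrative, not the actual property implementation:

```python
def provider_namespace(provider_name):
    # The namespace is the part of the provider name up to the
    # first colon; names without a colon are their own namespace.
    return provider_name.split(":", 1)[0]


provider_namespace("2013.com.canonical.certification:resources")
# → '2013.com.canonical.certification'
```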
API changes (Qualifiers)
........................
* Plainbox now has additional APIs that correctly preserve the order of jobs
selected by a :term:`WhiteList`, see:
:func:`plainbox.impl.secure.qualifiers.select_jobs`.
* Plainbox has new APIs for converting any qualifier into a list of primitive
(non-divisible) qualifiers that express the same selection,
:meth:`plainbox.abc.IJobQualifier.get_primitive_qualifiers()` and
:meth:`plainbox.abc.IJobQualifier.is_primitive()`.
* Plainbox has new APIs for qualifiers to uniformly include and exclude jobs
from the selection list. This is implemented as a voting system described in
the :meth:`plainbox.abc.IJobQualifier.get_vote()` method.
* Plainbox has new APIs for creating almost arbitrary job qualifiers out of the
:class:`plainbox.impl.secure.qualifiers.FieldQualifier` and
:class:`plainbox.impl.secure.qualifiers.IMatcher` implementations such as
:class:`plainbox.impl.secure.qualifiers.OperatorMatcher` or
:class:`plainbox.impl.secure.qualifiers.PatternMatcher`. Older qualifiers
will likely be entirely dropped and replaced by one of the subsequent
releases.
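The voting system can be illustrated with a small sketch. The vote constants and qualifier callables below are hypothetical stand-ins, not the real ``IJobQualifier`` API:

```python
# Hypothetical vote values standing in for the real IJobQualifier ones.
VOTE_INCLUDE, VOTE_IGNORE, VOTE_EXCLUDE = 1, 0, -1


def select_ids(job_ids, qualifiers):
    # A job is selected when at least one qualifier votes to include
    # it and no qualifier votes to exclude it.
    selected = []
    for job_id in job_ids:
        votes = {qualifier(job_id) for qualifier in qualifiers}
        if VOTE_INCLUDE in votes and VOTE_EXCLUDE not in votes:
            selected.append(job_id)
    return selected


include_ns = lambda jid: VOTE_INCLUDE if jid.startswith("ns::") else VOTE_IGNORE
exclude_bar = lambda jid: VOTE_EXCLUDE if jid == "ns::bar" else VOTE_IGNORE
select_ids(["ns::foo", "ns::bar", "other::x"], [include_ns, exclude_bar])
# → ['ns::foo']
```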
API changes (command line tools)
--------------------------------
* :class:`plainbox.impl.clitools.ToolBase` now offers additional methods for
setting up translations for a specific tool. This allows a library
(such as Plainbox) to offer a basic tool that other libraries or applications
subclass and customize; part of the tool implementation (including
translations) will come from one library while the rest will come from
another. This allows various strings to use different gettext domains. This
is implemented in the new set of methods:
:meth:`plainbox.impl.clitools.ToolBase.get_gettext_domain()`
:meth:`plainbox.impl.clitools.ToolBase.get_locale_dir()` and
:meth:`plainbox.impl.clitools.ToolBase.setup_i18n()` last of which is now
being called by the existing
:meth:`plainbox.impl.clitools.ToolBase.early_init()` method.
* :class:`plainbox.impl.clitools.CommandBase` now offers additional methods for
setting up sub-commands that rely on the docstring of the subcommand
implementation class. Those are
:meth:`plainbox.impl.clitools.CommandBase.get_command_name()`
:meth:`plainbox.impl.clitools.CommandBase.get_command_help()`,
:meth:`plainbox.impl.clitools.CommandBase.get_command_description()` and
:meth:`plainbox.impl.clitools.CommandBase.get_command_epilog()`. Those
methods return values suitable for argparse. They are all used from one
high-level method :meth:`plainbox.impl.clitools.CommandBase.add_subcommand()`
which is now used in the implementation of various new subcommand classes.
All of those methods are aware of i18n and hide all of the associated
complexity.
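The docstring-driven sub-command setup can be sketched with plain argparse. The method names mirror the ones listed above, but the implementation here is an assumption for illustration, not the plainbox one:

```python
import argparse


class CommandSketch:
    # Derive argparse metadata from the subcommand class itself:
    # the name from the class name, the help text from the docstring.

    def get_command_name(self):
        return self.__class__.__name__.lower().replace("command", "")

    def get_command_help(self):
        doc = (self.__doc__ or "").strip()
        return doc.splitlines()[0] if doc else None

    def add_subcommand(self, subparsers):
        return subparsers.add_parser(
            self.get_command_name(), help=self.get_command_help())


class RunCommand(CommandSketch):
    """run all selected jobs"""


parser = argparse.ArgumentParser(prog="example")
RunCommand().add_subcommand(parser.add_subparsers())
parser.parse_args(["run"])
```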
API changes (Resources)
-----------------------
* :class:`plainbox.impl.resource.ResourceExpression` now accepts, stores and
uses an optional implicit name-space that qualifies all resource
identifiers. It is also available as
:meth:`plainbox.impl.resource.ResourceExpression.implicit_namespace`.
* :class:`plainbox.impl.resource.ResourceProgram` now accepts and uses an
optional implicit name-space that is being forwarded to the resource
expressions.
API changes (Execution Controllers)
-----------------------------------
* :class:`plainbox.impl.ctrl.CheckboxExecutionController` no longer puts all of
the provider-specific executables onto the PATH of the execution environment
for each job definition. Now only executables from providers that have the
same name-space as the job that needs to be executed are added to PATH. This
brings the behavior of execution controllers in sync with all the other
name-space-aware components.
API changes (Other)
...................
* :class:`plainbox.impl.secure.plugins.FsPlugInCollection` can now load plug-ins
from files with various extensions. The ``ext`` argument can now be a list of
extensions to load.
* :class:`plainbox.impl.secure.plugins.FsPlugInCollection` now takes a list of
directories instead of a PATH-like argument that had to be split with the
platform-specific path separator.
* :class:`plainbox.impl.secure.rfc822.Origin` gained the
:meth:`plainbox.impl.secure.rfc822.Origin.relative_to()` method which is
useful for presenting origin objects in a human-friendly form.
* Implementations of :class:`plainbox.impl.secure.plugins.IPlugIn` can now
raise :class:`plainbox.impl.secure.plugins.PlugInError` to prevent being
added to a plug-in collection.
* :class:`plainbox.impl.secure.config.Config` gained
:meth:`plainbox.impl.secure.config.Config.get_parser_obj()` and
:meth:`plainbox.impl.secure.config.Config.write()` which allow configuration
changes to be written back to the filesystem.
* :class:`plainbox.impl.secure.config.Config` now has special support for the
new :class:`plainbox.impl.secure.config.NotUnsetValidator`. Unlike all other
validators, it is allowed to inspect the special
:data:`plainbox.impl.secure.config.Unset` value.
* Plainbox now stores an application identifier
:meth:`plainbox.impl.session.state.SessionMetaData.app_id` which complements
the existing application-specific blob property
:meth:`plainbox.impl.session.state.SessionMetaData.app_blob` to allow
applications to resume only the session that they have created. This feature
will allow multiple plainbox-based applications to co-exist without
clashing over each other's sessions.
* Plainbox now stores both the normalized and raw version of the data produced
by the RFC822 parser. The raw form is suitable as keys to gettext. This is
exposed through the RFC822 and Job Definition classes.
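The raw-versus-normalized distinction can be pictured with a tiny sketch. The normalization shown is an assumption for illustration; the actual parser lives in the RFC822 module:

```python
def normalize(raw_value):
    # Fold a multi-line RFC822-style value into one line; the raw
    # value is kept separately so it can serve as a gettext key.
    return " ".join(
        line.strip() for line in raw_value.splitlines() if line.strip())


raw = "This is a\n multi-line\n description\n"
normalized = normalize(raw)  # → 'This is a multi-line description'
```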
Bug fixes
---------
Bugs fixed in this release are assigned to the following milestones:
* https://launchpad.net/checkbox/+milestone/plainbox-0.5a1
* https://launchpad.net/checkbox/+milestone/plainbox-0.5b1
* https://launchpad.net/checkbox/+milestone/plainbox-0.5
Plainbox 0.4
^^^^^^^^^^^^
* Bugfixes: https://launchpad.net/checkbox/+milestone/plainbox-0.4
Plainbox 0.4 beta 2
^^^^^^^^^^^^^^^^^^^
* Bugfixes: https://launchpad.net/checkbox/+milestone/plainbox-0.4b2
Plainbox 0.4 beta 1
^^^^^^^^^^^^^^^^^^^
* Lots of production usage, bug fixes and improvements. Too many to
list here but we shipped one commercial product on top of plainbox
and it basically works.
* Better internal abstractions, job runner, execution controller,
session state controller, session manager, suspend and resume
helpers, on-disk format version and upgrade support. Lots of very
important internal plumbing done better to improve maintainability
of the code.
* Switched from a model where checkbox and plainbox are tied closely
together to a model where plainbox is a back-end for multiple
different products and job definitions (all kinds of "test
payload") are orthogonal to the interaction/work-flow/user
interface. This opens up the path for a separate "test payload
market" to form around plainbox where various projects can just
focus on producing and maintaining tests rather than complete
solutions by themselves. Such parties don't have to coordinate with
anyone or manage their code inside our repository.
* Generalized the trusted launcher concept to run any job wrapped
inside a job provider. This allows any job, regardless of where it
comes from, to run as another user securely and easily.
* DBus service (present throughout the development cycle) moved to
checkbox-ng as it was not mature enough. Makes plainbox easier to
test by hiding the complexity in another project. We are not sure if we
will keep the DBus interface, so this was a good move for the core
itself.
Plainbox 0.3
^^^^^^^^^^^^
* Added support for all job types (manual, user-interact, user-verify, attachment, local)
* Added support for running as another user
* Added support for creating session checkpoints and resuming testing across reboots
* Added support for exporting test results to JSON, plain text and XML
* Added support for handling binary data (e.g. binary attachments)
* Added support for using sub-commands to the main plainbox executable
* Added documentation to the project
* Numerous internal re-factorings, changes and improvements.
* Improved unit and integration testing coverage
Plainbox 0.2
^^^^^^^^^^^^
* Last release made from the standalone github tree.
* Added support for discovering dependencies and automatic dependency
resolution (for both job dependencies and resource dependencies)
Plainbox 0.1
^^^^^^^^^^^^
* Initial release
Glossary
========
.. glossary::
hardware certification
A process of ensuring that a specific device works well with Ubuntu.
For more details see our certification program:
http://www.canonical.com/engineering-services/certification/hardware-certification
hardware certification team
A team inside Canonical working on :term:`Hardware Certification`.
Checkbox
Checkbox is a hardware testing tool developed by Canonical for
certifying hardware with Ubuntu. Checkbox is free software and is
available at http://launchpad.net/checkbox. The ``checkbox`` package is
pre-installed on all Ubuntu systems.
Checkbox-ng
This is the actual direct replacement for Checkbox. It provides a
few binaries that can do end-user testing, and which leverage
Plainbox as a library to do the heavy lifting. This lives in the
``checkbox-ng`` package for the binaries, and
``python3-checkbox-ng`` for the core functionality.
Plainbox
Plainbox is a rewrite of Checkbox with the aim of improving internal
architecture, testability, robustness, quality and speed. It is
currently under active development. It is not pre-installed on Ubuntu.
It is developed inside the Checkbox code repository. In common
use, the term *Plainbox* can refer to either of two things:
* The core library (``python3-plainbox``). ``python3-plainbox`` is
usually installed implicitly, as most of our tools depend on it.
* The ``plainbox`` utility/binary, which is essentially a
command-line swiss-army frontend to all of the library's
functionality. It's useful for development and diagnostics but not
necessary for end-user work. ``plainbox`` is usually installed
explicitly if needed.
whitelist
Whitelists are text files used by Checkbox to select jobs for
execution. They can include simple regular expressions to match and
pick many similar jobs at once. For more information see
:doc:`Checkbox Whitelist Files `
job
Jobs are the smallest units of testing that can be performed by either
Checkbox or Plainbox. All jobs have a unique name. There are many
types of jobs; some are fully automated, others are fully manual. Some
jobs are only an implementation detail and a part of the internal
architecture of Checkbox.
provider
A container for jobs, whitelists, private executables and data.
Providers are the foundation of Plainbox as they *provide* all of the
content. Providers can be created and managed by any entity, separately
from the Checkbox project.
namespace
A private space for naming job definitions. Each job definition has a
partial identifier and a full identifier (typically just called job
id). The partial identifier is encoded in the job definition file. The
full identifier is composed of the namespace of a job provider and the
partial identifier, joined with the double-colon string ``::``.
resources
Resources are collections of key-value data sets that are generated by
special resource jobs. They are extensively used to indicate hardware
or software dependencies. For example, a bluetooth test may indicate that it
requires bluetooth hardware and appropriate software packages to be
installed.
requirement program
Requirement programs are small (one to a few lines) programs that use a
subset of python to execute some code against resources. They are what
actually describe the relationship of a job to some resources. For
example a resource program ``package.name == "bluez"`` indicates that
at least one resource generated by the ``package`` job has a key
``name`` equal to the string ``bluez``.
attachment
Attachments are a special type of job that creates an attachment
record in the submission.xml file. They are commonly used to include
basic system information files and output of certain commands which can
aid in system certification.
certification website
The website https://certification.canonical.com/
Canonical ID
A number assigned to the specific device (laptop, desktop or server) by
Canonical. This number is used on the Certification Website and by the
Hardware Certification Team. It is an internal bookkeeping identifier
used in our labs.
Secure ID
An identifier, similar to Canonical ID, used for hardware
certification. This identifier is used when interacting with the
Certification Website; it does not reveal anything about the actual
hardware (like the manufacturer name or device name).
pypi
The Python Package Index where any developer can share their python
programs and libraries. Pypi is available at:
https://pypi.python.org/pypi.
Vagrant
Vagrant is a command line program intended for software developers to
quickly create portable virtual environments for testing their software
in a production operating system. Vagrant is free software and is
available at http://www.vagrantup.com/
VirtualBox
VirtualBox is free, powerful desktop virtualization software.
VirtualBox is available in the Ubuntu Software Center and at
https://www.virtualbox.org/
.. currentmodule:: plainbox.impl.resource
.. automodule:: plainbox.impl.resource
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.exporter.json
.. automodule:: plainbox.impl.exporter.json
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.commands.checkbox
.. automodule:: plainbox.impl.commands.checkbox
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.unit.template
.. automodule:: plainbox.impl.unit.template
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.secure.qualifiers
.. automodule:: plainbox.impl.secure.qualifiers
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.commands.dev
.. automodule:: plainbox.impl.commands.dev
:members:
:undoc-members:
.. currentmodule:: plainbox.vendor.textland
:mod:`plainbox.vendor.textland` -- TextLand
===========================================
This package contains a bundled copy of the upstream TextLand project. Over
time it will be updated with subsequent releases. Eventually it will be
replaced by a dependency on an API-stable TextLand release.
.. seealso::
TextLand upstream project: https://github.com/zyga/textland/
.. automodule:: plainbox.vendor.textland
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.exporter.rfc822
.. automodule:: plainbox.impl.exporter.rfc822
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.logging
.. automodule:: plainbox.impl.logging
:members:
:undoc-members:
.. currentmodule:: plainbox.testing_utils.resource
.. automodule:: plainbox.testing_utils.resource
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.commands.parse
.. automodule:: plainbox.impl.commands.parse
:members:
:undoc-members:
.. currentmodule:: plainbox.testing_utils.testcases
.. automodule:: plainbox.testing_utils.testcases
:members:
:undoc-members:
.. currentmodule:: plainbox.provider_manager
.. automodule:: plainbox.provider_manager
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.symbol
.. automodule:: plainbox.impl.symbol
:members:
:undoc-members:
.. currentmodule:: plainbox.testing_utils.cwd
.. automodule:: plainbox.testing_utils.cwd
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.commands.session
.. automodule:: plainbox.impl.commands.session
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.session.state
.. automodule:: plainbox.impl.session.state
:members:
:undoc-members:
.. currentmodule:: plainbox.testing_utils.io
.. automodule:: plainbox.testing_utils.io
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.providers
.. automodule:: plainbox.impl.providers
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.commands.check_config
.. automodule:: plainbox.impl.commands.check_config
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.testing_utils
.. automodule:: plainbox.impl.testing_utils
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.color
.. automodule:: plainbox.impl.color
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.unit
.. automodule:: plainbox.impl.unit
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.ctrl
.. automodule:: plainbox.impl.ctrl
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.session.manager
.. automodule:: plainbox.impl.session.manager
:members:
:undoc-members:
.. currentmodule:: plainbox.impl.unit.job
.. automodule:: plainbox.impl.unit.job
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.depmgr.rst
.. currentmodule:: plainbox.impl.depmgr
.. automodule:: plainbox.impl.depmgr
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.result.rst
.. currentmodule:: plainbox.impl.result
.. automodule:: plainbox.impl.result
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.commands.selftest.rst
.. currentmodule:: plainbox.impl.commands.selftest
.. automodule:: plainbox.impl.commands.selftest
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.rst
.. currentmodule:: plainbox.impl
.. automodule:: plainbox.impl
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.testing_utils.rst
.. currentmodule:: plainbox.testing_utils
.. automodule:: plainbox.testing_utils
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.commands.special.rst
.. currentmodule:: plainbox.impl.commands.special
.. automodule:: plainbox.impl.commands.special
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.vendor.rst
.. currentmodule:: plainbox.vendor
.. automodule:: plainbox.vendor
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.secure.rfc822.rst
.. currentmodule:: plainbox.impl.secure.rfc822
.. automodule:: plainbox.impl.secure.rfc822
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.rst
.. currentmodule:: plainbox
.. automodule:: plainbox
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.session.jobs.rst
.. currentmodule:: plainbox.impl.session.jobs
.. automodule:: plainbox.impl.session.jobs
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.secure.origin.rst
.. currentmodule:: plainbox.impl.secure.origin
.. automodule:: plainbox.impl.secure.origin
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.highlevel.rst
.. currentmodule:: plainbox.impl.highlevel
.. automodule:: plainbox.impl.highlevel
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.xparsers.rst
.. currentmodule:: plainbox.impl.xparsers
.. automodule:: plainbox.impl.xparsers
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.secure.rst
.. currentmodule:: plainbox.impl.secure
.. automodule:: plainbox.impl.secure
:members:
:undoc-members:
plainbox-0.25/docs/ref/index.rst
.. _apiref:
===============
API Reference
===============
:Release: |version|
:Date: |today|
.. toctree::
:maxdepth: 1
:glob:
plainbox*
plainbox-0.25/docs/ref/plainbox.impl.commands.logtest.rst
.. currentmodule:: plainbox.impl.commands.logtest
.. automodule:: plainbox.impl.commands.logtest
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.commands.list.rst
.. currentmodule:: plainbox.impl.commands.list
.. automodule:: plainbox.impl.commands.list
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.secure.providers.v1.rst
.. currentmodule:: plainbox.impl.secure.providers.v1
.. automodule:: plainbox.impl.secure.providers.v1
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.secure.config.rst
.. currentmodule:: plainbox.impl.secure.config
.. automodule:: plainbox.impl.secure.config
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.secure.providers.rst
.. currentmodule:: plainbox.impl.secure.providers
.. automodule:: plainbox.impl.secure.providers
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.commands.run.rst
.. currentmodule:: plainbox.impl.commands.run
.. automodule:: plainbox.impl.commands.run
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.box.rst
.. currentmodule:: plainbox.impl.box
.. automodule:: plainbox.impl.box
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.commands.script.rst
.. currentmodule:: plainbox.impl.commands.script
.. automodule:: plainbox.impl.commands.script
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.commands.crash.rst
.. currentmodule:: plainbox.impl.commands.crash
.. automodule:: plainbox.impl.commands.crash
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.runner.rst
.. currentmodule:: plainbox.impl.runner
.. automodule:: plainbox.impl.runner
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.transport.rst
.. currentmodule:: plainbox.impl.transport
.. automodule:: plainbox.impl.transport
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.commands.analyze.rst
.. currentmodule:: plainbox.impl.commands.analyze
.. automodule:: plainbox.impl.commands.analyze
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.clitools.rst
.. currentmodule:: plainbox.impl.clitools
.. automodule:: plainbox.impl.clitools
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.integration_tests.rst
.. currentmodule:: plainbox.impl.integration_tests
.. automodule:: plainbox.impl.integration_tests
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.exporter.rst
.. currentmodule:: plainbox.impl.exporter
.. automodule:: plainbox.impl.exporter
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.session.storage.rst
.. currentmodule:: plainbox.impl.session.storage
.. automodule:: plainbox.impl.session.storage
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.parsers.rst
.. currentmodule:: plainbox.impl.parsers
.. automodule:: plainbox.impl.parsers
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.providers.special.rst
.. currentmodule:: plainbox.impl.providers.special
.. automodule:: plainbox.impl.providers.special
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.exporter.text.rst
.. currentmodule:: plainbox.impl.exporter.text
.. automodule:: plainbox.impl.exporter.text
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.session.suspend.rst
.. currentmodule:: plainbox.impl.session.suspend
.. automodule:: plainbox.impl.session.suspend
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.abc.rst
.. currentmodule:: plainbox.abc
.. automodule:: plainbox.abc
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.session.rst
.. currentmodule:: plainbox.impl.session
.. automodule:: plainbox.impl.session
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.exporter.jinja2.rst
.. currentmodule:: plainbox.impl.exporter.jinja2
.. automodule:: plainbox.impl.exporter.jinja2
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.providers.v1.rst
.. currentmodule:: plainbox.impl.providers.v1
.. automodule:: plainbox.impl.providers.v1
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.applogic.rst
.. currentmodule:: plainbox.impl.applogic
.. automodule:: plainbox.impl.applogic
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.secure.launcher1.rst
.. currentmodule:: plainbox.impl.secure.launcher1
.. automodule:: plainbox.impl.secure.launcher1
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.buildsystems.rst
.. currentmodule:: plainbox.impl.buildsystems
.. automodule:: plainbox.impl.buildsystems
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.secure.plugins.rst
.. currentmodule:: plainbox.impl.secure.plugins
.. automodule:: plainbox.impl.secure.plugins
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.censoREd.rst
.. currentmodule:: plainbox.impl.censoREd
.. automodule:: plainbox.impl.censoREd
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.pod.rst
.. currentmodule:: plainbox.impl.pod
.. automodule:: plainbox.impl.pod
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.public.rst
.. currentmodule:: plainbox.public
.. automodule:: plainbox.public
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.commands.rst
.. currentmodule:: plainbox.impl.commands
.. automodule:: plainbox.impl.commands
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.session.resume.rst
.. currentmodule:: plainbox.impl.session.resume
.. automodule:: plainbox.impl.session.resume
:members:
:undoc-members:
plainbox-0.25/docs/ref/plainbox.impl.job.rst
.. currentmodule:: plainbox.impl.job
.. automodule:: plainbox.impl.job
:members:
:undoc-members:
plainbox-0.25/docs/appdev/index.rst
======================
Application developers
======================
This chapter organizes information useful for developers working on testing
systems and :term:`Checkbox` derivatives.
.. warning::
This chapter is very much under development. The list of stories below is a
guiding point for subsequent editions that will expand and provide real
value.
Personas and stories
--------------------
* I'm a Checkbox, Checkbox derivative or third party developer:
* What use cases should require a new application?
* How should I be using Plainbox APIs?
* Which parts of Plainbox APIs are stable?
* How can I have *special sauce* while using Plainbox at the core?
* What is covered by Checkbox?
* I'm a Checkbox developer:
* I'm adding a new feature, should that feature go to Checkbox or Plainbox?
* I'm writing a new job, should that job go to Checkbox or JobBox?
* I'm a developer working on a test system different from, but not unlike,
plainbox (this is in the same chapter but should link heavily to the
derivative systems and application development chapters)
* Why would I depend on plainbox rather than do everything I need myself?
* Do I need to create a derivative or can I just create jobs for what
plainbox supports?
* What are the stability guarantees if I choose to build with plainbox?
* How can I use plainbox as a base for my automated or manual testing
system?
* What does an example third party test system built on top of plainbox
look like?
Key topics
----------
.. note::
The list here should always be based on the personas and stories section
above.
* Introduction to plainbox
* Where is plainbox getting the jobs from?
* Creating and maintaining jobs with plainbox
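Until this chapter is fleshed out, the sketch below illustrates the translation contract that the ``plainbox.i18n`` helpers (see the API reference) build on. It deliberately uses only the standard-library ``gettext`` module, so no specific plainbox API is assumed: with no message catalogue loaded, lookups fall back to the ``msgid`` strings, which is also what plainbox translators return when no translation is available.

```python
import gettext

# NullTranslations is the stdlib's "no catalogue" translator: every
# lookup falls back to the identifiers that were passed in.
translator = gettext.NullTranslations()

# Singular lookup: the msgid itself comes back untranslated.
print(translator.gettext("job"))                    # -> job

# Plural lookup: msgid1 is returned for n == 1, msgid2 otherwise.
print(translator.ngettext("%d job", "%d jobs", 1))  # -> %d job
print(translator.ngettext("%d job", "%d jobs", 2))  # -> %d jobs
```

Applications embedding plainbox can rely on the same fallback behaviour: calling the translation helpers is always safe even before any locale set-up has happened.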
plainbox-0.25/docs/conf.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# Plainbox documentation build configuration file, created by
# sphinx-quickstart on Wed Feb 13 11:18:39 2013.
#
# This file is execfile()d with the current directory set to its containing
# dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
try:
import plainbox
except ImportError as exc:
raise SystemExit("plainbox has to be importable")
else:
modules_to_mock = [
'xlsxwriter',
'xlsxwriter.workbook',
'xlsxwriter.utility',
'requests',
'requests.exceptions'
]
# Inject mock modules so that we can build the
# documentation without having the real stuff available
from plainbox.vendor import mock
for mod_name in modules_to_mock:
sys.modules[mod_name] = mock.Mock()
# If extensions (or modules to document with autodoc) are in another
# directory, add these directories to sys.path here. If the directory is
# relative to the documentation root, use os.path.abspath to make it absolute,
# like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.todo',
'sphinx.ext.coverage', 'sphinx.ext.viewcode',
'plainbox.vendor.sphinxarg.ext']
autodoc_default_flags = ['members', 'undoc-members', 'inherited-members',
'show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'Plainbox'
copyright = '2012-2014 Canonical Ltd'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = "{0[0]}.{0[1]}".format(plainbox.__version__)
# The full version, including alpha/beta/rc tags.
release = "{0[0]}.{0[1]}.{0[2]}.{0[3]}.{0[4]}".format(plainbox.__version__)
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
# Use our custom theme. For now it only adds Disqus.com support but we may
# customize it further later on. The theme is called 'plainbox' and has one
# option which controls if disqus is active or not.
html_theme = 'plainbox'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# Due to the way disqus works, it's only going to work on
# plainbox.readthedocs.org so only use it if building for readthedocs.
html_theme_options = {
'show_disqus': 'true' if os.environ.get(
"READTHEDOCS", None) == 'True' else ''
}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ['_theme']
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is
# True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Plainboxdoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author,
# documentclass [howto/manual]).
latex_documents = [
('index', 'Plainbox.tex', 'Plainbox Documentation',
'Zygmunt Krynicki', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top
# of the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
_authors = ['Zygmunt Krynicki & Checkbox Contributors']
man_pages = [
# Section 1
('manpages/plainbox', 'plainbox',
'toolkit for software and hardware integration testing',
_authors, 1),
('manpages/plainbox-trusted-launcher-1', 'plainbox-trusted-launcher-1',
'execute job command as another user', _authors, 1),
('manpages/plainbox-run', 'plainbox-run',
'run a test job', _authors, 1),
('manpages/plainbox-check-config', 'plainbox-check-config',
'check and display plainbox configuration', _authors, 1),
('manpages/plainbox-startprovider', 'plainbox-startprovider',
'create a new plainbox provider', _authors, 1),
('manpages/plainbox-self-test', 'plainbox-self-test',
'run unit and integration tests', _authors, 1),
('manpages/plainbox-manage.py', 'manage.py',
'plainbox provider management script', _authors, 1),
('manpages/plainbox-session', 'plainbox-session',
'session management sub-commands', _authors, 1),
('manpages/plainbox-session-list', 'plainbox-session-list',
'list available sessions', _authors, 1),
('manpages/plainbox-session-remove', 'plainbox-session-remove',
'remove one or more sessions', _authors, 1),
('manpages/plainbox-session-show', 'plainbox-session-show',
'show a single session', _authors, 1),
('manpages/plainbox-session-archive', 'plainbox-session-archive',
'archive a single session', _authors, 1),
('manpages/plainbox-session-export', 'plainbox-session-export',
'(re-)export an existing session', _authors, 1),
('manpages/plainbox-dev', 'plainbox-dev',
'commands for test developers', _authors, 1),
('manpages/plainbox-dev-script', 'plainbox-dev-script',
'run a command from a job', _authors, 1),
('manpages/plainbox-dev-special', 'plainbox-dev-special',
'special/internal commands', _authors, 1),
('manpages/plainbox-dev-analyze', 'plainbox-dev-analyze',
'analyze how selected jobs would be executed', _authors, 1),
('manpages/plainbox-dev-parse', 'plainbox-dev-parse',
'parse stdin with the specified parser', _authors, 1),
('manpages/plainbox-dev-crash', 'plainbox-dev-crash',
'crash the application', _authors, 1),
('manpages/plainbox-dev-logtest', 'plainbox-dev-logtest',
'log messages at various levels', _authors, 1),
('manpages/plainbox-dev-list', 'plainbox-dev-list',
'list and describe various objects', _authors, 1),
('manpages/plainbox-device', 'plainbox-device',
'device management commands', _authors, 1),
('manpages/plainbox-qml-shell', 'plainbox-qml-shell',
'standalone qml-native shell', _authors, 1),
# Section 5
('manpages/plainbox.conf', 'plainbox.conf',
'plainbox configuration file', _authors, 5),
# Section 7
('manpages/plainbox-session-structure', 'plainbox-session-structure',
'structure of per-session directory', _authors, 7),
('manpages/plainbox-template-units', 'plainbox-template-units',
'syntax and semantics of Plainbox template unit type', _authors, 7),
('manpages/plainbox-category-units', 'plainbox-category-units',
'syntax and semantics of Plainbox category unit type', _authors, 7),
('manpages/plainbox-file-units', 'plainbox-file-units',
'syntax and semantics of Plainbox file unit type', _authors, 7),
('manpages/plainbox-test-plan-units', 'plainbox-test-plan-units',
'syntax and semantics of Plainbox test plan unit type', _authors, 7),
('manpages/plainbox-job-units', 'plainbox-job-units',
'syntax and semantics of Plainbox job unit type', _authors, 7),
('manpages/plainbox-manifest-entry-units', 'plainbox-manifest-entry-units',
'syntax and semantics of Plainbox manifest entry unit type',
_authors, 7),
('manpages/plainbox-exporter-units', 'plainbox-exporter-units',
'syntax and semantics of Plainbox exporter unit type',
_authors, 7),
('manpages/plainbox-packaging-meta-data-units',
'plainbox-packaging-meta-data-units',
'syntax and semantics of Plainbox package meta-data unit type',
_authors, 7),
('manpages/PLAINBOX_SESSION_SHARE', 'PLAINBOX_SESSION_SHARE',
'per-session runtime shared-state directory', _authors, 7),
('manpages/PLAINBOX_PROVIDER_DATA', 'PLAINBOX_PROVIDER_DATA',
'per-provider data directory', _authors, 7),
('manpages/CHECKBOX_DATA', 'CHECKBOX_DATA',
'legacy name for PLAINBOX_SESSION_SHARE', _authors, 7),
('manpages/CHECKBOX_SHARE', 'CHECKBOX_SHARE',
'legacy name for PLAINBOX_PROVIDER_DATA', _authors, 7),
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'Plainbox', 'Plainbox Documentation',
'Zygmunt Krynicki', 'Plainbox', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
plainbox-0.25/plainbox/data/report/hardware-1_0.rng
(RELAX NG schema, version 1.0; the XML markup was lost in extraction and only
the embedded documentation and value lists survive.)

The attribute "plugin" must be set, if the question is generated by a plugin.
Values: multiple_choice, measurement.

Convenience for Python code: 'True'/'False' for boolean values instead of
'true'/'false' as defined by http://www.w3.org/2001/XMLSchema-datatypes.
Values: True, False.

Allowed types and values: the dbus... data types are used for HAL properties;
the data types are specified in
http://dbus.freedesktop.org/doc/dbus-specification.html. The other data types
are Python data types, defined in http://docs.python.org/lib/types.html:

dbus.Boolean, bool, dbus.String, dbus.UTF8String, str, dbus.Byte, dbus.Int16,
dbus.Int32, dbus.Int64, dbus.UInt16, dbus.UInt32, dbus.UInt64, int, long,
dbus.Double, float, dbus.Array, list, dbus.Dictionary, dict
plainbox-0.25/plainbox/data/plainbox-qml-modules/Plainbox/QmlJob.qml
/*
* This file is part of Checkbox
*
* Copyright 2015 Canonical Ltd.
*
* Authors:
* - Maciej Kisielewski
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 3.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see .
*/
import QtQuick 2.0
import Ubuntu.Components 1.1
Item {
signal testDone(var testResult)
property var testingShell;
property var clearedPermissions: [];
}
plainbox-0.25/plainbox/data/plainbox-qml-modules/Plainbox/qmldir
#
# This file is part of Checkbox
#
# Copyright 2015 Canonical Ltd.
#
# Authors:
# - Maciej Kisielewski
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 3.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
module Plainbox
QmlJob 0.1 QmlJob.qml
plainbox-0.25/plainbox/i18n.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.i18n` -- i18n support
====================================
This module provides public APIs for the plainbox translation system.
"""
from abc import ABCMeta, abstractmethod
import collections
import gettext as gettext_module
import logging
import os
import random
import re
__all__ = [
'bindtextdomain',
'dgettext',
'dngettext',
'gettext',
'ngettext',
'pdgettext',
'pdngettext',
'pgettext',
'pngettext',
'textdomain',
]
_logger = logging.getLogger("plainbox.i18n")
class ITranslator(metaclass=ABCMeta):
"""
Interface for all translators
"""
@abstractmethod
def gettext(self, msgid):
"""
Translate a message
:param msgid:
Identifier of the message to translate
:returns:
Translated message or msgid if translation is not available
"""
@abstractmethod
def ngettext(self, msgid1, msgid2, n):
"""
Translate a message involving plural form
:param msgid1:
Identifier of the singular form of the message to translate
:param msgid2:
Identifier of the plural form of message to translate
:param n:
Any integer number
:returns:
Translated message appropriate for the specified number, if
available. If the translation is not available, either msgid1 or
msgid2 is returned, depending on the value of n.
"""
# Context aware gettext + ngettext
@abstractmethod
def pgettext(self, msgctxt, msgid):
"""
Translate a message within a context.
:param msgctxt:
Context that specifies which translation of msgid to pick
:param msgid:
Identifier of the message to translate
:returns:
Translated message or msgid if translation is not available
"""
@abstractmethod
def pngettext(self, msgctxt, msgid1, msgid2, n):
"""
Translate a message involving plural form
:param msgctxt:
Context that specifies which translation of msgid1/msgid2 to pick
:param msgid1:
Identifier of the singular form of the message to translate
:param msgid2:
Identifier of the plural form of message to translate
:param n:
Any integer number
:returns:
Translated message appropriate for the specified number, if
available. If the translation is not available, either msgid1 or
msgid2 is returned, depending on the value of n.
"""
# Explicit domain gettext + ngettext
@abstractmethod
def dgettext(self, domain, msgid):
"""
Translate a message using a specific domain
:param domain:
Name of the domain from which translations are obtained
:param msgid:
Identifier of the message to translate
:returns:
Translated message or msgid if translation is not available
"""
@abstractmethod
def dngettext(self, domain, msgid1, msgid2, n):
"""
Translate a message involving plural form using a specific domain
:param domain:
Name of the domain from which translations are obtained
:param msgid1:
Identifier of the singular form of the message to translate
:param msgid2:
Identifier of the plural form of message to translate
:param n:
Any integer number
:returns:
Translated message appropriate for the specified number, if
available. If the translation is not available, either msgid1 or
msgid2 is returned, depending on the value of n.
"""
# Explicit domain and context gettext + ngettext
@abstractmethod
def pdgettext(self, msgctxt, domain, msgid):
"""
Translate a message using a specific context and domain
:param msgctxt:
Context that specifies which translation of msgid to pick
:param domain:
Name of the domain from which translations are obtained
:param msgid:
Identifier of the message to translate
:returns:
Translated message or msgid if translation is not available
"""
@abstractmethod
def pdngettext(self, msgctxt, domain, msgid1, msgid2, n):
"""
Translate a message involving plural form using a specific context and
domain
:param msgctxt:
Context that specifies which translation of msgid1/msgid2 to pick
:param domain:
Name of the domain from which translations are obtained
:param msgid1:
Identifier of the singular form of the message to translate
:param msgid2:
Identifier of the plural form of message to translate
:param n:
Any integer number
:returns:
Translated message appropriate for the specified number, if
available. If the translation is not available, either msgid1 or
msgid2 is returned, depending on the value of n.
"""
@abstractmethod
def textdomain(self, domain):
"""
Set global gettext domain
:param domain:
Name of the global gettext domain. This domain will be used for all
unqualified calls to gettext() and ngettext().
.. note::
gettext and ngettext exposed from this module transparently use
"plainbox" as the domain name. This call affects all *other*,
typical gettext calls.
"""
@abstractmethod
def bindtextdomain(self, domain, localedir=None):
"""
Set the directory for gettext messages for a specific domain
:param domain:
Name of the domain to configure
:param localedir:
Name of the directory with translation catalogs.
"""
class NoOpTranslator(ITranslator):
"""
A translator that doesn't translate anything
"""
def gettext(self, msgid):
return msgid
def ngettext(self, msgid1, msgid2, n):
return msgid1 if n == 1 else msgid2
def pgettext(self, msgctxt, msgid):
return self.gettext(msgid)
def pngettext(self, msgctxt, msgid1, msgid2, n):
return self.ngettext(msgid1, msgid2, n)
def dgettext(self, domain, msgid):
return self.gettext(msgid)
def dngettext(self, domain, msgid1, msgid2, n):
return self.ngettext(msgid1, msgid2, n)
def pdgettext(self, msgctxt, domain, msgid):
return self.gettext(msgid)
def pdngettext(self, msgctxt, domain, msgid1, msgid2, n):
return self.ngettext(msgid1, msgid2, n)
def textdomain(self, domain):
pass
def bindtextdomain(self, domain, localedir=None):
pass
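The NoOpTranslator's pass-through behaviour can be exercised in isolation. A minimal standalone sketch (re-declared here rather than imported, so it runs on its own) showing that every lookup returns the message identifier and plural selection follows the English rule:

```python
# Standalone sketch of the NoOpTranslator fallback behaviour.
class NoOpTranslatorSketch:

    def gettext(self, msgid):
        # No catalog lookup: the identifier is the translation.
        return msgid

    def ngettext(self, msgid1, msgid2, n):
        # English plural rule: singular only when n == 1.
        return msgid1 if n == 1 else msgid2


t = NoOpTranslatorSketch()
print(t.gettext("hello"))              # hello
print(t.ngettext("file", "files", 1))  # file
print(t.ngettext("file", "files", 5))  # files
```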
class LoremIpsumTranslator(NoOpTranslator):
LOREM_IPSUM = {
"ch": ('', """å°ç¶“ 施消 了稱 能文 安種 之用 無心 å‹å¸‚ 景內 èªžæ ¼ã€‚å¡å°
轉醫 題苦 å€‘æœƒå“¡ï¼ æˆ‘è¦ªå°± è—äº†åƒ é–“é€šã€‚ 有發 è½‰å‰ è—¥æƒ³
äºžæ²’ï¼Œé€šé ˆ æ‡‰ç®¡ã€æ‰“者 å°æˆ 公出? 般記 䏿ˆåŒ– ä»–å››è¯ åˆ†åœ‹è¶Š
分ä½é›¢ï¼Œæ›´ç‚ºè€… 文難 我如 我布?經動 著為 安經, 們天然 我親 唱顯
ä¸ï¼›å¾—ç•¶ 出一來得金 著作 到到 æ“弟 人望ï¼åŽ»æŒ‡ åœ¨æ ¼æ“šï¼"""),
"kr": (' ' """ë§ì„ í•˜ê³ ê³ì—서 ì¼ ë§ë ¤ê°€ê³ 그걸로 하다 ê°™ì€ ì—†ë„¤
ì•‰ì€ ë¿Œë¦¬ì¹˜ë”니 ë™ì†Œë¬¸ ì¼ ë³´ì§€ 재우쳤다 분량 ë§ì„ 가지ê³
ê¹€ì²¨ì§€ì˜ ì‹œìž‘í•˜ì˜€ë‹¤ 내리는 나를 김첨지는 ì¢ìŒ€ 준 반가운지
김첨지는 ë†“ì¹˜ê² êµ¬ë¨¼ 늦추잡았다 ì¸ë ¥ê±° ì† ìƒê°í•˜ê²Œ ëˆì„ 시체를
한 ì •ê±°ìž¥ê¹Œì§€ ëŠë¼ì—ˆë‹¤ ê·€ì— ë„˜ì–´ 왜목 ê²ƒì„ ì‹¶ì–´ ì„¤ë ˆëŠ” 맞붙들ê³
하네 오늘 ë°°ê°€ í•˜ëŠ˜ì€ í•˜ìžë§ˆìž ë§žë¬¼ê³ ì¼ì´ì—ˆë‹¤ 운수가 못쓸
ëˆì˜ ë¼ê³ ì–´ì´ ì—†ì§€ë§Œ 받아야 ì•„ë‚´ì˜ ì‹œìž‘í•˜ì˜€ë‹¤ ì°¨ë„ ì™œ
사용ìžë¡œë¶€í„° ì¶”ì–´íƒ•ì„ ì²˜ìŒ ë³´ë¼ ì¶œíŒì‚¬ ì°¨ì› ë”°ë¼ì„œ 펴서 í’€ì´
ì‚¬ëžŒì€ ê·¼ì‹¬ê³¼ 초조해온다 íŠ¸ê³ ì œ ì°½ì„ ë‚´ë¦¬ì—ˆë‹¤ ì¸ë ¥ê±°í•˜ê³
같으면 í° ì´ë†ˆì•„ ì–´ë¦°ì• ê·¸ 넘어 울었다V"""),
"he": (' ', """תורת ×§×¨×™×ž×™× ×•×œ×•×’×™×” ×ל ×תה הטבע לחיבור ×× ×חר מדע ×—×™× ×•×š
×ž×ž×•× ×¨×›×™×” ×’× ×¤× ××™ ××—×¨×™× ×”×ž×§×•×‘×œ ×ת ×תה ×ª× ×š ××—×¨×™× ×œ×˜×™×¤×•×œ של ×ת
תי×טרון ו××œ×§×˜×¨×•× ×™×§×” מתן דת ×•×”× ×“×¡×” ×©×™×ž×•×©×™×™× ×¡×“×¨ בה סרבול
××™× ×˜×¨× ×˜ שתי ב ×× × ×ª×•×›×œ לערך רוסית כדי ×ת תוכל ×›× ×™×¡×” המלחמה
עוד מה מיזמי ×ודות ×•×ž×”×™×ž× ×”"""),
"ar": (' ', """ دار أن منتص٠أوراقهم الرئيسية هو الا Ø§Ù„ØØ±Ø¨ الجبهة لان
مع تنÙّس للصين لإنعدام نتيجة الثقيلة أي شيء عقبت وأزيز لألمانيا
ÙˆÙÙŠ كل ØØ¯Ù‰ إختار المنتصرة أي به، بغزو بالسيطرة أن جدول
Ø¨Ø§Ù„ÙØ´Ù„ إيطاليا قام كل هنا؟ ÙØ±Ù†Ø³Ø§ الهجوم هذه مع ØÙ‚ول
الإمبراطورية لها أي قدما اليابانية عام مع جنود أراضي السوÙييتي،
هو بلا لم وجهان Ø§Ù„Ø³Ø§ØØ© الإمبراطورية لان ما بØÙ‚ ألمانيا الياباني،
ÙØ¹Ù„ ÙØ§ØªÙ‘بع الشّعبين المعركة، ما الى ما يطول المشتّتون وكسبت
وإيطالي ذات أم تلك ثم القص٠قبضتهم قد وأزيز إستمات ونستون غزو
الأرض الأولية عن بين بـ دÙّة كانت Ø§Ù„Ù†ÙØ· لمّ تلك Ùهرست الأرض
Ø§Ù„Ø¥ØªÙØ§Ù‚ية مع"""),
"ru": (' ', """Магна азжюывырит мÑль ут нам ыт видырÑÑ€ такематыш кибо
ыррор ут квюо Ð’Ñш аппарÑат пондÑрюм интылльÑгÑбат Ñи про ед
еллум дикунт Квюо Ñкз льаборÑж нужквюам анкилльаы мÑль омйттам
мÑÐ½Ð°Ð½Ð´Ñ€Ñ ÐµÐ´ МÑль Ñи Ñ€ÑктÑÐºÐ²ÑƒÑ ÐºÐ¾Ð½ÑÑквюат контынтёонÑж ты ёужто
Ñ„ÑугÑат вивÑндюм шÑа Ðтквюе трётанё ÑÑŽ квуй омнеж латины Ñкз
вимi"""),
"jp": ('', """戸ã¶ã ã®æ„ 化巡奇 ä¾› クソリヤ ç„¡æ– ãƒ¨ã‚µãƒªãƒ² 念休ã°ã‚¤
例会 コトヤ 耕智ㆠã°ã£ã‚ƒ ä½å‘Šæ±ºã† ã§æ‰“表 ãž ã¼ã³æƒ…記ト レ表関銀
ãƒãƒ¢ã‚¢ ãƒ‹æ¬¡å· ã‚ˆå…¨å コãƒãƒ• ソ政象 ä½å²³ã´ èªãƒ¯ ä¸€é‡ ãƒ˜æ–
首画リ ã®ã½ ã›è¶³ 決属 è¡“ã“ ã¦ãƒ© é ˜ 技 ã‘リ㴠分率㴠ããœã£
物味ドン ãŠãŽä¸€ç”°ã´ ã¶ã®è¬™ 調ヲ星度 レã¼ã‚€å›² 舗åŒè„ˆ 鶴挑ã’
ã»ã¶ã€‚ç„¡ç„¡ ツ縄第㌠本公作 ゅゃ㵠ã質失フ ç±³ä¸Šè° ã‚¢è¨˜æ²» ãˆã‚Œæœ¬
æ„ã¤ã‚“ ãŽãƒ¬å±€ ç·ã‚±ç›› 載テ ã‚³éƒ¨æ¢ ãƒ¡ãƒ„è¼ª å¸°æ´ å°±äº›ãƒ« ã£ã"""),
"pl": (' ', """
litwo ojczyzno moja ty jesteś jak zdrowie ile cię stracił
dziś piękność widziana więc wszyscy dokoła brali stronę kusego
albo sam wewnątrz siebie czuł się położył co by stary
dąbrowskiego usłyszeć mazurek biegał po stole i krwi tonęła
gdy sędziego służono niedbale słudzy nie na utrzymanie lecz
mniej piękne niż myśliwi młodzi tak nie zmruża jako swe
osadzał dziwna rzecz miejsca wkoło pali nawet stary który
teraz za nim psów gromada gracz szarak skoro poczuł wszystkie
charty w drobne strączki białe dziwnie ozdabiał głowę bo tak
przekradł się uparta coraz głośniejsza kłótnia o wiejskiego
pożycia nudach i długie paznokcie przedstawiając dwa tysiące
jako jenerał dąbrowski z wysogierdem radziwiłł z drzewa lecz
lekki odgadniesz że pewnie na jutro solwuję i na kształt
ogrodowych grządek że ją bardzo szybko suwała się na
przeciwnej zajadłość dowiodę że dziś z lasu wracało towarzystwo
całe wesoło lecz go grzecznie na złość rejentowi że u
wieczerzy będzie jego upadkiem domy i bagnami skradał się tłocz
i jak bawić się nie było bo tak na jutro solwuję i przepraszał
sędziego sędzia sam na początek dać małą kiedy"""),
}
def __init__(self, kind):
self.kind = kind
self.space = self.LOREM_IPSUM[self.kind][0]
self.words = self.LOREM_IPSUM[self.kind][1].split()
self.n_words = collections.defaultdict(list)
for word in self.words:
self.n_words[len(word)].append(word)
def _get_ipsum(self, text):
return re.sub(
'(%[sdr]|{[^}]*}|[a-zA-Z]+)',
lambda match: self._tr_word(match.group(1)),
text)
def _tr_word(self, word):
if re.search("(%[sdr])|({[^}]*})", word):
return word
elif word.startswith("--"):
return "--{}".format(self._tr_word(word[2:]))
elif word.startswith("-"):
return "-{}".format(self._tr_word(word[1:]))
elif word.startswith("[") and word.endswith("]"):
return "[{}]".format(self._tr_word(word[1:-1]))
elif word.startswith("<") and word.endswith(">"):
return "<{}>".format(self._tr_word(word[1:-1]))
else:
tr_word = self._tr_approx(len(word))
if word.isupper():
return tr_word.upper()
if word[0].isupper():
return tr_word.capitalize()
else:
return tr_word
def _tr_approx(self, desired_length):
for avail_length in sorted(self.n_words):
if desired_length <= avail_length:
break
return random.choice(self.n_words[avail_length])
def gettext(self, msgid):
return self.dgettext("plainbox", msgid)
def ngettext(self, msgid1, msgid2, n):
if n == 1:
return self._get_ipsum(msgid1)
else:
return self._get_ipsum(msgid2)
def dgettext(self, domain, msgid):
return "<{}: {}>".format(domain, self._get_ipsum(msgid))
class GettextTranslator(ITranslator):
"""
A translator using native stdlib gettext
# NOTE: The gettext API is a bit wrong as it doesn't respect the
# textdomain/bindtextdomain calls.
"""
def __init__(self, domain, locale_dir=None):
self._domain = domain
self._translations = {}
self._locale_dir_map = {
domain: locale_dir
}
def _get_translation(self, domain):
try:
return self._translations[domain]
except KeyError:
try:
translation = gettext_module.translation(
domain, self._locale_dir_map.get(domain))
except IOError:
translation = gettext_module.NullTranslations()
self._translations[domain] = translation
return translation
def _contextualize(self, ctx, msg):
"""
Contextualize message identifier
This method combines the context string with the message identifier
using the character used by gettext (END OF TRANSMISSION, U+0004)
"""
GETTEXT_CONTEXT_GLUE = "\004"
return ctx + GETTEXT_CONTEXT_GLUE + msg
def gettext(self, msgid):
return self._get_translation(self._domain).gettext(msgid)
def ngettext(self, msgid1, msgid2, n):
return self._get_translation(self._domain).ngettext(msgid1, msgid2, n)
def pgettext(self, msgctxt, msgid):
effective_msgid = self._contextualize(msgctxt, msgid)
msgstr = self.gettext(effective_msgid)
# If we got the untranslated version then we want to just return msgid
# back, without msgctxt prepended in front.
if msgstr == effective_msgid:
return msgid
else:
return msgstr
def pngettext(self, msgctxt, msgid1, msgid2, n):
effective_msgid1 = self._contextualize(msgctxt, msgid1)
effective_msgid2 = self._contextualize(msgctxt, msgid2)
msgstr = self.ngettext(effective_msgid1, effective_msgid2, n)
# If we got the untranslated version then we want to just return msgid1
# or msgid2 back, without msgctxt prepended in front.
if msgstr == effective_msgid1:
return msgid1
elif msgstr == effective_msgid2:
return msgid2
else:
return msgstr
def dgettext(self, domain, msgid):
return self._get_translation(domain).gettext(msgid)
def dngettext(self, domain, msgid1, msgid2, n):
return self._get_translation(domain).ngettext(msgid1, msgid2, n)
def pdgettext(self, msgctxt, domain, msgid):
effective_msgid = self._contextualize(msgctxt, msgid)
msgstr = self._get_translation(domain).gettext(effective_msgid)
# If we got the untranslated version then we want to just return msgid
# back, without msgctxt prepended in front.
if msgstr == effective_msgid:
return msgid
else:
return msgstr
def pdngettext(self, msgctxt, domain, msgid1, msgid2, n):
effective_msgid1 = self._contextualize(msgctxt, msgid1)
effective_msgid2 = self._contextualize(msgctxt, msgid2)
msgstr = self._get_translation(domain).ngettext(
effective_msgid1, effective_msgid2, n)
# If we got the untranslated version then we want to just return msgid1
# or msgid2 back, without msgctxt prepended in front.
if msgstr == effective_msgid1:
return msgid1
elif msgstr == effective_msgid2:
return msgid2
else:
return msgstr
def textdomain(self, domain):
"""
Set global gettext domain
:param domain:
Name of the global gettext domain. This domain will be used for all
unqualified calls to gettext() and ngettext().
.. note::
gettext and ngettext exposed from this module transparently use
"plainbox" as the domain name. This call affects all *other*,
typical gettext calls.
"""
_logger.debug("textdomain(%r)", domain)
self._domain = domain
gettext_module.textdomain(domain)
def bindtextdomain(self, domain, localedir=None):
"""
Set the directory for gettext messages for a specific domain
:param domain:
Name of the domain to configure
:param localedir:
Name of the directory with translation catalogs.
"""
_logger.debug("bindtextdomain(%r, %r)", domain, localedir)
self._locale_dir_map[domain] = localedir
gettext_module.bindtextdomain(domain, localedir)
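The msgctxt handling above follows the gettext convention of joining context and msgid with U+0004 (END OF TRANSMISSION) for the catalog lookup, falling back to the bare msgid when no translation exists. A standalone sketch of that round-trip (the catalog dict is an illustrative stand-in for a compiled .mo file):

```python
GETTEXT_CONTEXT_GLUE = "\004"


def contextualize(ctx, msg):
    # Combine context and message the way gettext catalogs store them.
    return ctx + GETTEXT_CONTEXT_GLUE + msg


catalog = {}  # empty catalog: nothing is translated yet


def pgettext_sketch(msgctxt, msgid):
    key = contextualize(msgctxt, msgid)
    msgstr = catalog.get(key, key)
    # An untranslated lookup returns the key itself; strip the context
    # so callers never see the glued form.
    return msgid if msgstr == key else msgstr


print(pgettext_sketch("verb", "Open"))  # Open
```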
def docstring(docstring):
"""
Decorator factory for assigning docstrings to functions.
This decorator is intended for functions that reuse their docstring
as translatable text that needs to be tagged with gettext_noop.
Example:
@docstring("the foo function")
def foo():
pass
@docstring("the Foo class")
class Foo:
pass
"""
def decorator(cls_or_func):
try:
cls_or_func.__doc__ = docstring
return cls_or_func
except AttributeError:
assert isinstance(cls_or_func, type)
return type(
cls_or_func.__name__,
(cls_or_func,),
{'__doc__': docstring})
return decorator
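A standalone usage sketch of the decorator factory above (re-declared locally so it runs on its own); both the function and class branches are shown:

```python
def docstring(text):
    """Decorator factory that assigns ``text`` as the target's docstring."""
    def decorator(cls_or_func):
        try:
            cls_or_func.__doc__ = text
            return cls_or_func
        except AttributeError:
            # Older interpreters disallow assigning class __doc__;
            # fall back to building a subclass with the docstring baked in.
            assert isinstance(cls_or_func, type)
            return type(cls_or_func.__name__, (cls_or_func,),
                        {'__doc__': text})
    return decorator


@docstring("the foo function")
def foo():
    pass


@docstring("the Foo class")
class Foo:
    pass


print(foo.__doc__)  # the foo function
print(Foo.__doc__)  # the Foo class
```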
def gettext_noop(msgid):
"""
No-operation gettext implementation.
:param msgid:
The message not to translate
:returns:
msgid itself
This function should be used (typically aliased as ``N_``) to mark strings
that don't require translation at the place where they are defined but will
be translated later on. This is just a hint to the message extraction
system.
"""
return msgid
# This is the global plainbox-specific translator.
try:
_translator = {
"gettext": GettextTranslator(
"plainbox", os.getenv("PLAINBOX_LOCALE_DIR", None)),
"no-op": NoOpTranslator(),
"lorem-ipsum-ar": LoremIpsumTranslator("ar"),
"lorem-ipsum-ch": LoremIpsumTranslator("ch"),
"lorem-ipsum-he": LoremIpsumTranslator("he"),
"lorem-ipsum-jp": LoremIpsumTranslator("jp"),
"lorem-ipsum-kr": LoremIpsumTranslator("kr"),
"lorem-ipsum-pl": LoremIpsumTranslator("pl"),
"lorem-ipsum-ru": LoremIpsumTranslator("ru"),
}[os.getenv("PLAINBOX_I18N_MODE", "gettext")]
except KeyError as exc:
raise RuntimeError(
"Unsupported PLAINBOX_I18N_MODE: {!r}".format(exc.args[0]))
# This is the public API of this module
gettext = _translator.gettext
ngettext = _translator.ngettext
pgettext = _translator.pgettext
pngettext = _translator.pngettext
dgettext = _translator.dgettext
dngettext = _translator.dngettext
pdgettext = _translator.pdgettext
pdngettext = _translator.pdngettext
bindtextdomain = _translator.bindtextdomain
textdomain = _translator.textdomain
plainbox-0.25/plainbox/tests.py
# This file is part of Checkbox.
#
# Copyright 2012 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.tests` -- auxiliary test loaders for plainbox
============================================================
"""
import os
from unittest.loader import defaultTestLoader
from plainbox.impl import get_plainbox_dir
def load_unit_tests():
"""
Load all unit tests and return a TestSuite object
"""
# Discover all unit tests. By simple convention those are kept in
# python modules that start with the word 'test_' .
start_dir = get_plainbox_dir()
top_level_dir = os.path.normpath(os.path.join(start_dir, '..'))
return defaultTestLoader.discover(start_dir, top_level_dir=top_level_dir)
def load_integration_tests():
"""
Load all integration tests and return a TestSuite object
"""
# Discover all integration tests. By simple convention those are kept in
# python modules that start with the word 'integration_' .
return defaultTestLoader.discover(
get_plainbox_dir(), pattern="integration_*.py")
def test_suite():
"""
Test suite function used by setuptools test loader.
Uses unittest test discovery system to get a list of test cases defined
inside plainbox. See setup.py setup(test_suite=...) for a matching entry
"""
return load_unit_tests()
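Both loaders above lean on unittest's discovery conventions. A standalone sketch of the same call against an empty temporary directory (so the result is deterministic):

```python
import tempfile
import unittest
from unittest.loader import defaultTestLoader

# Discover test modules matching the convention used by plainbox;
# an empty directory yields an empty (but valid) TestSuite.
with tempfile.TemporaryDirectory() as start_dir:
    suite = defaultTestLoader.discover(start_dir, pattern="test_*.py")

print(type(suite).__name__, suite.countTestCases())  # TestSuite 0
```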
plainbox-0.25/plainbox/__main__.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.__main__` -- execute plainbox
============================================
This module allows plainbox to be executed with:
python3 -m plainbox
"""
from plainbox.public import main
if __name__ == '__main__':
main()
plainbox-0.25/plainbox/public.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
Public, stable, high-level API for third party developers.
:mod:`plainbox.public`
======================
They are actually implemented by the plainbox.impl package. This module is here
so that the essential API concepts are in a single spot and are easier to
understand (by not being mixed with additional source code).
.. warning::
This module is ironically UNSTABLE until the 1.0 release
.. note::
This module has API stability guarantees. We are not going to break or
introduce backwards incompatible interfaces here without following our API
deprecation policy. All existing features will be retained for at least
three releases. All deprecated symbols will warn before they cease to be
available.
"""
from plainbox._lazymod import LazyModule, far
_mod = LazyModule.shadow_normal_module()
_mod.lazily('main', far, ('plainbox.impl.box:main',))
_mod.lazily('get_providers', far, ('plainbox.impl.providers:get_providers',))
plainbox-0.25/plainbox/impl/
plainbox-0.25/plainbox/impl/test_validation.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.test_validation
=============================
Test definitions for plainbox.impl.validation module
"""
from unittest import TestCase
from plainbox.impl.validation import ValidationError
from plainbox.impl.validation import Issue
from plainbox.vendor import mock
class ValidationErrorTests(TestCase):
def test_smoke__no_hint(self):
err = ValidationError('field', 'problem')
self.assertEqual(str(err), "Problem with field field: problem")
self.assertEqual(repr(err), (
"ValidationError("
"field='field', problem='problem', hint=None, origin=None)"))
def test_smoke__hint(self):
err = ValidationError('field', 'problem', 'hint')
self.assertEqual(str(err), "Problem with field field: problem")
self.assertEqual(repr(err), (
"ValidationError("
"field='field', problem='problem', hint='hint', origin=None)"))
def test_smoke__origin(self):
err = ValidationError('field', 'problem', origin='origin')
self.assertEqual(str(err), "Problem with field field: problem")
self.assertEqual(repr(err), (
"ValidationError("
"field='field', problem='problem', hint=None, origin='origin')"))
class IssueTests(TestCase):
def setUp(self):
self.message = mock.MagicMock(name='message')
self.severity = mock.MagicMock(name='severity')
self.kind = mock.MagicMock(name='kind')
self.origin = mock.MagicMock(name='origin')
self.issue = Issue(self.message, self.severity, self.kind, self.origin)
def test_init(self):
self.assertIs(self.issue.message, self.message)
self.assertIs(self.issue.severity, self.severity)
self.assertIs(self.issue.kind, self.kind)
self.assertIs(self.issue.origin, self.origin)
def test_str__with_origin(self):
self.message.__str__.return_value = '<message>'
self.origin.__str__.return_value = '<origin>'
self.kind.__str__.return_value = '<kind>'
self.severity.__str__.return_value = '<severity>'
self.assertEqual(str(self.issue), "<origin>: <severity>: <message>")
def test_str__without_origin(self):
self.issue.origin = None
self.message.__str__.return_value = '<message>'
self.kind.__str__.return_value = '<kind>'
self.severity.__str__.return_value = '<severity>'
self.assertEqual(str(self.issue), "<severity>: <message>")
def test_repr__with_origin(self):
self.message.__repr__ = lambda mock: '(message)'
self.origin.__repr__ = lambda mock: '(origin)'
self.kind.__repr__ = lambda mock: '(kind)'
self.severity.__repr__ = lambda mock: '(severity)'
self.assertEqual(
repr(self.issue), (
'Issue(message=(message), severity=(severity),'
' kind=(kind), origin=(origin))'))
def test_relative_to__with_origin(self):
path = 'path'
issue2 = self.issue.relative_to(path)
self.issue.origin.relative_to.assert_called_with(path)
self.assertIs(self.issue.message, issue2.message)
self.assertIs(self.issue.severity, issue2.severity)
self.assertIs(self.issue.kind, issue2.kind)
self.assertIs(self.issue.origin.relative_to(path), issue2.origin)
def test_relative_to__without_origin(self):
path = 'path'
self.issue.origin = None
issue2 = self.issue.relative_to(path)
self.assertIs(issue2.message, self.issue.message)
self.assertIs(issue2.severity, self.issue.severity)
self.assertIs(issue2.kind, self.issue.kind)
self.assertIs(issue2.origin, None)
plainbox-0.25/plainbox/impl/test_testing_utils.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.test_testing_utils
================================
Test definitions for plainbox.impl.testing_utils module
"""
from unittest import TestCase
from warnings import warn, catch_warnings
import os
from plainbox.impl.secure.origin import PythonFileTextSource
from plainbox.impl.testing_utils import make_job
from plainbox.impl.testing_utils import suppress_warnings
class SuppressWarningTests(TestCase):
def test_suppress_warnings_works(self):
"""
suppress_warnings() hides all warnings
"""
@suppress_warnings
def func():
warn("this is a warning!")
with catch_warnings(record=True) as warning_list:
func()
self.assertEqual(warning_list, [])
def test_suppress_warnings_is_a_good_decorator(self):
"""
suppress_warnings() does not clobber function name and docstring
"""
@suppress_warnings
def func_with_name():
"""and docstring"""
self.assertEqual(func_with_name.__name__, 'func_with_name')
self.assertEqual(func_with_name.__doc__, 'and docstring')
class MakeJobTests(TestCase):
"""
Tests for the make_job() function
"""
def setUp(self):
self.job = make_job('job')
def test_origin_is_set(self):
"""
verify that jobs created with make_job() have a non-None origin
"""
self.assertIsNot(self.job.origin, None)
def test_origin_source_is_special(self):
"""
verify that jobs created with make_job() use PythonFileTextSource as
the origin.source attribute.
"""
self.assertIsInstance(self.job.origin.source, PythonFileTextSource)
def test_origin_source_filename_is_correct(self):
"""
verify that make_job() can properly trace the filename of the python
module that called make_job()
"""
self.assertEqual(
os.path.basename(self.job.origin.source.filename),
"test_testing_utils.py")
plainbox-0.25/plainbox/impl/buildsystems.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.buildsystems` -- build system interfaces
============================================================
"""
import glob
import shlex
import os
from plainbox.abc import IBuildSystem
from plainbox.impl.secure.plugins import PkgResourcesPlugInCollection
# python3.2 doesn't have shlex.quote
# so let's use the bundled copy here
if not hasattr(shlex, 'quote'):
from ._shlex import quote
shlex.quote = quote
class MakefileBuildSystem(IBuildSystem):
"""
A build system for projects using classic makefiles
"""
def probe(self, src_dir: str) -> int:
# If a configure script exists (autotools?) then let's not pretend we
# do the whole thing and bail out. It's better to let test authors
# customize everything.
if os.path.isfile(os.path.join(src_dir, "configure")):
return 0
if os.path.isfile(os.path.join(src_dir, "Makefile")):
return 90
return 0
def get_build_command(self, src_dir: str, build_dir: str) -> str:
return "VPATH={} make -f {}".format(
shlex.quote(os.path.relpath(src_dir, build_dir)),
shlex.quote(os.path.relpath(
os.path.join(src_dir, 'Makefile'), build_dir)))
class AutotoolsBuildSystem(IBuildSystem):
"""
A build system for projects using autotools
"""
def probe(self, src_dir: str) -> int:
if os.path.isfile(os.path.join(src_dir, "configure")):
return 90
return 0
def get_build_command(self, src_dir: str, build_dir: str) -> str:
return "{}/configure && make".format(
shlex.quote(os.path.relpath(src_dir, build_dir)))
class GoBuildSystem(IBuildSystem):
"""
A build system for projects written in go
"""
def probe(self, src_dir: str) -> int:
if glob.glob("{}/*.go".format(src_dir)) != []:
return 50
return 0
def get_build_command(self, src_dir: str, build_dir: str) -> str:
return "go build {}/*.go".format(os.path.relpath(src_dir, build_dir))
# Collection of all buildsystems
all_buildsystems = PkgResourcesPlugInCollection('plainbox.buildsystem')
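The probe() methods implement a simple confidence-scoring contract: each build system rates a source tree and the highest score wins. A standalone sketch mirroring the Makefile scores above (probe_makefile is a local re-declaration for illustration, not plainbox API):

```python
import os
import tempfile


def probe_makefile(src_dir):
    """Score a source tree the way MakefileBuildSystem.probe() does."""
    if os.path.isfile(os.path.join(src_dir, "configure")):
        return 0   # autotools territory: bail out and defer
    if os.path.isfile(os.path.join(src_dir, "Makefile")):
        return 90
    return 0


with tempfile.TemporaryDirectory() as src_dir:
    open(os.path.join(src_dir, "Makefile"), "w").close()
    score_make = probe_makefile(src_dir)
    # Adding a configure script makes the Makefile backend stand down.
    open(os.path.join(src_dir, "configure"), "w").close()
    score_conf = probe_makefile(src_dir)

print(score_make, score_conf)  # 90 0
```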
plainbox-0.25/plainbox/impl/_textwrap.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
#
# Parts copied from Python3.4:
# Copyright (C) 1999-2001 Gregory P. Ward.
# Copyright (C) 2002, 2003 Python Software Foundation.
# Written by Greg Ward
#
# PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
# --------------------------------------------
#
# 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"),
# and the Individual or Organization ("Licensee") accessing and otherwise
# using this software ("Python") in source or binary form and its associated
# documentation.
#
# 2. Subject to the terms and conditions of this License Agreement, PSF hereby
# grants Licensee a nonexclusive, royalty-free, world-wide license to
# reproduce, analyze, test, perform and/or display publicly, prepare
# derivative works, distribute, and otherwise use Python alone or in any
# derivative version, provided, however, that PSF's License Agreement and
# PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004,
# 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014 Python Software
# Foundation; All Rights Reserved" are retained in Python alone or in any
# derivative version prepared by Licensee.
#
# 3. In the event Licensee prepares a derivative work that is based on or
# incorporates Python or any part thereof, and wants to make the derivative
# work available to others as provided herein, then Licensee hereby agrees
# to include in any such work a brief summary of the changes made to Python.
#
# 4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES
# NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE,
# BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR
# WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT
# THE USE OF PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
#
# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY
# INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
# MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE
# THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
#
# 6. This License Agreement will automatically terminate upon a material breach
# of its terms and conditions.
#
# 7. Nothing in this License Agreement shall be deemed to create any
# relationship of agency, partnership, or joint venture between PSF and
# Licensee. This License Agreement does not grant permission to use PSF
# trademarks or trade name in a trademark sense to endorse or promote
# products or services of Licensee, or any third party.
#
# 8. By copying, installing or otherwise using Python, Licensee agrees to be
# bound by the terms and conditions of this License Agreement.
"""
:mod:`plainbox.impl._textwrap` -- support code for textwrap compatibility
=========================================================================
This module contains a copy of the textwrap source code from Python 3.4
"""
def _textwrap_indent(text, prefix, predicate=None):
"""Adds 'prefix' to the beginning of selected lines in 'text'.
If 'predicate' is provided, 'prefix' will only be added to the lines
where 'predicate(line)' is True. If 'predicate' is not provided,
it will default to adding 'prefix' to all non-empty lines that do not
consist solely of whitespace characters.
"""
if predicate is None:
def predicate(line):
return line.strip()
def prefixed_lines():
for line in text.splitlines(True):
yield (prefix + line if predicate(line) else line)
return ''.join(prefixed_lines())
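# A minimal usage sketch: with the default predicate, whitespace-only lines
# are left untouched. The behaviour matches textwrap.indent() from the
# Python 3.3+ standard library, of which this function is a copy (the
# comparison below assumes such an interpreter).
if __name__ == "__main__":
    import textwrap
    sample = "a\n\nb\n"
    expected = "> a\n\n> b\n"
    # the blank middle line is skipped by the default predicate
    assert _textwrap_indent(sample, "> ") == expected
    # identical to the stdlib implementation
    assert textwrap.indent(sample, "> ") == expected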
# This file is part of Checkbox.
#
# Copyright 2012, 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.test_ctrl
=======================
Test definitions for plainbox.impl.ctrl module
"""
from subprocess import CalledProcessError
from unittest import TestCase
import os
from plainbox.abc import IJobResult
from plainbox.abc import IProvider1
from plainbox.abc import IProviderBackend1
from plainbox.impl.applogic import PlainBoxConfig
from plainbox.impl.ctrl import CheckBoxExecutionController
from plainbox.impl.ctrl import CheckBoxSessionStateController
from plainbox.impl.ctrl import QmlJobExecutionController
from plainbox.impl.ctrl import RootViaPTL1ExecutionController
from plainbox.impl.ctrl import RootViaPkexecExecutionController
from plainbox.impl.ctrl import RootViaSudoExecutionController
from plainbox.impl.ctrl import SymLinkNest
from plainbox.impl.ctrl import UserJobExecutionController
from plainbox.impl.ctrl import gen_rfc822_records_from_io_log
from plainbox.impl.ctrl import get_via_cycle
from plainbox.impl.job import JobDefinition
from plainbox.impl.resource import Resource
from plainbox.impl.resource import ResourceExpression
from plainbox.impl.result import MemoryJobResult
from plainbox.impl.secure.origin import JobOutputTextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.providers.v1 import Provider1
from plainbox.impl.secure.rfc822 import RFC822Record
from plainbox.impl.secure.rfc822 import RFC822SyntaxError
from plainbox.impl.session import InhibitionCause
from plainbox.impl.session import JobReadinessInhibitor
from plainbox.impl.session import JobState
from plainbox.impl.session import SessionState
from plainbox.vendor import extcmd
from plainbox.vendor import mock
class CheckBoxSessionStateControllerTests(TestCase):
def setUp(self):
self.ctrl = CheckBoxSessionStateController()
def test_get_dependency_set(self):
# Job with no dependencies
job_a = JobDefinition({})
self.assertEqual(
self.ctrl.get_dependency_set(job_a), set())
# Job with direct dependencies
job_b = JobDefinition({
'depends': 'j1, j2'
})
self.assertEqual(
self.ctrl.get_dependency_set(job_b),
{('direct', 'j1'), ('direct', 'j2')})
# Job with resource dependencies
job_c = JobDefinition({
'requires': 'j3.attr == 1'
})
self.assertEqual(
self.ctrl.get_dependency_set(job_c),
{('resource', 'j3')})
# Job with ordering dependencies
job_d = JobDefinition({
'after': 'j1, j2'
})
self.assertEqual(
self.ctrl.get_dependency_set(job_d),
{('ordering', 'j1'), ('ordering', 'j2')})
# Job with both direct and resource dependencies
job_e = JobDefinition({
'depends': 'j4',
'requires': 'j5.attr == 1'
})
self.assertEqual(
self.ctrl.get_dependency_set(job_e),
{('direct', 'j4'), ('resource', 'j5')})
# Job with both direct and resource dependencies
# on the same job (j6)
job_f = JobDefinition({
'depends': 'j6',
'requires': 'j6.attr == 1'
})
self.assertEqual(
self.ctrl.get_dependency_set(job_f),
{('direct', 'j6'), ('resource', 'j6')})
def test_get_inhibitor_list_PENDING_RESOURCE(self):
# verify that jobs that require a resource that hasn't been
# invoked yet produce the PENDING_RESOURCE inhibitor
j1 = JobDefinition({
'id': 'j1',
'requires': 'j2.attr == "ok"'
})
j2 = JobDefinition({
'id': 'j2'
})
session_state = mock.MagicMock(spec=SessionState)
session_state.job_state_map['j2'].job = j2
session_state.resource_map = {}
self.assertEqual(
self.ctrl.get_inhibitor_list(session_state, j1),
[JobReadinessInhibitor(
InhibitionCause.PENDING_RESOURCE,
j2, ResourceExpression('j2.attr == "ok"'))])
def test_get_inhibitor_list_FAILED_RESOURCE(self):
# verify that jobs that require a resource that has been
# invoked and produced resources but the expression doesn't
# evaluate to True produce the FAILED_RESOURCE inhibitor
j1 = JobDefinition({
'id': 'j1',
'requires': 'j2.attr == "ok"'
})
j2 = JobDefinition({
'id': 'j2'
})
session_state = mock.MagicMock(spec=SessionState)
session_state.job_state_map['j2'].job = j2
session_state.resource_map = {
'j2': [Resource({'attr': 'not-ok'})]
}
self.assertEqual(
self.ctrl.get_inhibitor_list(session_state, j1),
[JobReadinessInhibitor(
InhibitionCause.FAILED_RESOURCE,
j2, ResourceExpression('j2.attr == "ok"'))])
def test_get_inhibitor_list_good_resource(self):
# verify that jobs that require a resource that has been invoked and
# produced resources for which the expression evaluates to True don't
# have any inhibitors
j1 = JobDefinition({
'id': 'j1',
'requires': 'j2.attr == "ok"'
})
j2 = JobDefinition({
'id': 'j2'
})
session_state = mock.MagicMock(spec=SessionState)
session_state.resource_map = {
'j2': [Resource({'attr': 'ok'})]
}
session_state.job_state_map['j2'].job = j2
self.assertEqual(
self.ctrl.get_inhibitor_list(session_state, j1), [])
def test_get_inhibitor_list_PENDING_DEP(self):
# verify that jobs that depend on another job or wait (via after) for
# another that hasn't been invoked yet produce the PENDING_DEP
# inhibitor
j1 = JobDefinition({
'id': 'j1',
'depends': 'j2',
'after': 'j3',
})
j2 = JobDefinition({
'id': 'j2'
})
j3 = JobDefinition({
'id': 'j3'
})
session_state = mock.MagicMock(spec=SessionState)
session_state.job_state_map = {
'j1': mock.Mock(spec_set=JobState),
'j2': mock.Mock(spec_set=JobState),
'j3': mock.Mock(spec_set=JobState),
}
jsm_j2 = session_state.job_state_map['j2']
jsm_j2.job = j2
jsm_j2.result.outcome = IJobResult.OUTCOME_NONE
jsm_j3 = session_state.job_state_map['j3']
jsm_j3.job = j3
jsm_j3.result.outcome = IJobResult.OUTCOME_NONE
self.assertEqual(self.ctrl.get_inhibitor_list(session_state, j1), [
JobReadinessInhibitor(InhibitionCause.PENDING_DEP, j2, None),
JobReadinessInhibitor(InhibitionCause.PENDING_DEP, j3, None),
])
def test_get_inhibitor_list_FAILED_DEP(self):
# verify that jobs that depend on another job that ran but
# didn't result in OUTCOME_PASS produce the FAILED_DEP
# inhibitor.
j1 = JobDefinition({
'id': 'j1',
'depends': 'j2',
'after': 'j3',
})
j2 = JobDefinition({
'id': 'j2'
})
j3 = JobDefinition({
'id': 'j3'
})
session_state = mock.MagicMock(spec=SessionState)
session_state.job_state_map = {
'j1': mock.Mock(spec_set=JobState),
'j2': mock.Mock(spec_set=JobState),
'j3': mock.Mock(spec_set=JobState),
}
jsm_j2 = session_state.job_state_map['j2']
jsm_j2.job = j2
jsm_j2.result.outcome = IJobResult.OUTCOME_FAIL
jsm_j3 = session_state.job_state_map['j3']
jsm_j3.job = j3
jsm_j3.result.outcome = IJobResult.OUTCOME_FAIL
self.assertEqual(
self.ctrl.get_inhibitor_list(session_state, j1),
[JobReadinessInhibitor(
InhibitionCause.FAILED_DEP, j2, None)])
def test_get_inhibitor_list_good_dep(self):
# verify that jobs that depend on another job that ran and has outcome
# equal to OUTCOME_PASS don't have any inhibitors
j1 = JobDefinition({
'id': 'j1',
'depends': 'j2',
'after': 'j3'
})
j2 = JobDefinition({
'id': 'j2'
})
j3 = JobDefinition({
'id': 'j3'
})
session_state = mock.MagicMock(spec=SessionState)
session_state.job_state_map = {
'j1': mock.Mock(spec_set=JobState),
'j2': mock.Mock(spec_set=JobState),
'j3': mock.Mock(spec_set=JobState),
}
jsm_j2 = session_state.job_state_map['j2']
jsm_j2.job = j2
jsm_j2.result.outcome = IJobResult.OUTCOME_PASS
jsm_j3 = session_state.job_state_map['j3']
jsm_j3.job = j3
jsm_j3.result.outcome = IJobResult.OUTCOME_PASS
self.assertEqual(
self.ctrl.get_inhibitor_list(session_state, j1), [])
def test_observe_result__normal(self):
job = mock.Mock(spec=JobDefinition)
result = mock.Mock(spec=IJobResult)
session_state = mock.MagicMock(spec=SessionState)
self.ctrl.observe_result(session_state, job, result)
# Ensure that result got stored
self.assertIs(
session_state.job_state_map[job.id].result, result)
# Ensure that signals got fired
session_state.on_job_state_map_changed.assert_called_once_with()
session_state.on_job_result_changed.assert_called_once_with(
job, result)
def test_observe_result__OUTCOME_NONE(self):
job = mock.Mock(spec=JobDefinition, plugin='resource')
result = mock.Mock(spec=IJobResult, outcome=IJobResult.OUTCOME_NONE)
session_state = mock.MagicMock(spec=SessionState)
self.ctrl.observe_result(session_state, job, result)
# Ensure that result got stored
self.assertIs(
session_state.job_state_map[job.id].result, result)
# Ensure that signals got fired
session_state.on_job_state_map_changed.assert_called_once_with()
session_state.on_job_result_changed.assert_called_once_with(
job, result)
# Ensure that a resource was *not* defined
self.assertEqual(session_state.set_resource_list.call_count, 0)
def test_observe_result__resource(self):
job = mock.Mock(spec=JobDefinition, plugin='resource')
result = mock.Mock(spec=IJobResult, outcome=IJobResult.OUTCOME_PASS)
result.get_io_log.return_value = [
(0, 'stdout', b'attr: value1\n'),
(0, 'stdout', b'\n'),
(0, 'stdout', b'attr: value2\n')]
session_state = mock.MagicMock(spec=SessionState)
self.ctrl.observe_result(session_state, job, result)
# Ensure that result got stored
self.assertIs(
session_state.job_state_map[job.id].result, result)
# Ensure that signals got fired
session_state.on_job_state_map_changed.assert_called_once_with()
session_state.on_job_result_changed.assert_called_once_with(
job, result)
# Ensure that new resource was defined
session_state.set_resource_list.assert_called_once_with(
job.id, [
Resource({'attr': 'value1'}), Resource({'attr': 'value2'})])
@mock.patch('plainbox.impl.ctrl.logger')
def test_observe_result__broken_resource(self, mock_logger):
job = mock.Mock(spec=JobDefinition, plugin='resource')
result = mock.Mock(spec=IJobResult, outcome=IJobResult.OUTCOME_PASS)
result.get_io_log.return_value = [(0, 'stdout', b'barf\n')]
session_state = mock.MagicMock(spec=SessionState)
self.ctrl.observe_result(session_state, job, result)
# Ensure that result got stored
self.assertIs(
session_state.job_state_map[job.id].result, result)
# Ensure that signals got fired
session_state.on_job_state_map_changed.assert_called_once_with()
session_state.on_job_result_changed.assert_called_once_with(
job, result)
# Ensure that a warning was logged
mock_logger.warning.assert_called_once_with(
"local script %s returned invalid RFC822 data: %s",
job.id, RFC822SyntaxError(
None, 1, "Unexpected non-empty line: 'barf\\n'"))
@mock.patch('plainbox.impl.ctrl.gen_rfc822_records_from_io_log')
def test_observe_result__local_typical(self, mock_gen):
"""
verify side effects of using observe_result() that would define a new
job
"""
# Job A is any example job
job_a = JobDefinition({'id': 'a', 'plugin': 'shell', 'command': ':'})
# Job B is a job that prints the definition of job A
job_b = JobDefinition({'id': 'b', 'plugin': 'local'})
# Result B is a fake result of running job B
result_b = MemoryJobResult({'outcome': 'pass'})
# Session knows about just B
session_state = SessionState([job_b])
# Mock gen_rfc822_records_from_io_log to produce one mock record
mock_gen.return_value = [RFC822Record({})]
# Mock job B to create job A as a child if asked to
with mock.patch.object(job_b, 'create_child_job_from_record') as fn:
fn.side_effect = lambda record: job_a
# Pretend that we are observing a 'result_b' of 'job_b'
self.ctrl.observe_result(session_state, job_b, result_b)
# Ensure that result got stored
self.assertIs(session_state.job_state_map[job_b.id].result, result_b)
# Ensure that job A is now via-connected to job B
self.assertIs(session_state.job_state_map[job_a.id].via_job, job_b)
@mock.patch('plainbox.impl.ctrl.gen_rfc822_records_from_io_log')
@mock.patch('plainbox.impl.ctrl.logger')
def test_observe_result__local_imperfect_clash(
self, mock_logger, mock_gen):
"""
verify side effects of using observe_result() that would define an
already existing job with a non-identical definition.
We basically hope to see the old job being there intact and a warning
to be logged.
"""
# Jobs A1 and A2 are simple example jobs (different, with same id)
job_a1 = JobDefinition(
{'id': 'a', 'plugin': 'shell', 'command': 'true'})
job_a2 = JobDefinition(
{'id': 'a', 'plugin': 'shell', 'command': 'false'})
# Job B is a job that prints the definition of job A2
job_b = JobDefinition({'id': 'b', 'plugin': 'local'})
# Result B is a fake result of running job B
result_b = MemoryJobResult({'outcome': 'pass'})
# Session knows about A1 and B
session_state = SessionState([job_a1, job_b])
# Mock gen_rfc822_records_from_io_log to produce one mock record
mock_gen.return_value = [RFC822Record({})]
# Mock job B to create job A2 as a child if asked to
with mock.patch.object(job_b, 'create_child_job_from_record') as fn:
fn.side_effect = lambda record: job_a2
# Pretend that we are observing a 'result_b' of 'job_b'
self.ctrl.observe_result(session_state, job_b, result_b)
# Ensure that result got stored
self.assertIs(session_state.job_state_map[job_b.id].result, result_b)
# Ensure that we didn't change via_job of the job A1
self.assertIsNot(session_state.job_state_map[job_a1.id].via_job, job_b)
# Ensure that a warning was logged
mock_logger.warning.assert_called_once_with(
("Local job %s produced job %s that collides with"
" an existing job %s (from %s), the new job was"
" discarded"),
job_b.id, job_a2.id, job_a1.id, job_a1.origin)
@mock.patch('plainbox.impl.ctrl.gen_rfc822_records_from_io_log')
def test_observe_result__local_perfect_clash(self, mock_gen):
"""
verify side effects of using observe_result() that would define an
already existing job with an exactly identical definition.
We basically hope to see the old job being there but the origin field
should be updated to reflect the new association between 'existing_job'
and 'job'
"""
# Job A is any example job
job_a = JobDefinition({'id': 'a', 'plugin': 'shell', 'command': ':'})
# Job B is a job that prints the definition of job A
job_b = JobDefinition({'id': 'b', 'plugin': 'local'})
# Result B is a fake result of running job B
result_b = MemoryJobResult({'outcome': 'pass'})
# Session knows about A and B
session_state = SessionState([job_a, job_b])
# Mock gen_rfc822_records_from_io_log to produce one mock record
mock_gen.return_value = [RFC822Record({})]
# Mock job B to create job A as a child if asked to
with mock.patch.object(job_b, 'create_child_job_from_record') as fn:
fn.side_effect = lambda record: job_a
# Pretend that we are observing a 'result_b' of 'job_b'
self.ctrl.observe_result(session_state, job_b, result_b)
# Ensure that result got stored
self.assertIs(session_state.job_state_map[job_b.id].result, result_b)
# Ensure that job A is now via-connected to job B
self.assertIs(session_state.job_state_map[job_a.id].via_job, job_b)
class FunctionTests(TestCase):
"""
unit tests for gen_rfc822_records_from_io_log() and other functions.
"""
def test_parse_typical(self):
"""
verify typical operation without any parsing errors
"""
# Setup a mock job and result, give some io log to the result
job = mock.Mock(spec=JobDefinition)
result = mock.Mock(spec=IJobResult)
result.get_io_log.return_value = [
(0, 'stdout', b'attr: value1\n'),
(0, 'stdout', b'\n'),
(0, 'stdout', b'attr: value2\n')]
# Parse the IO log records
records = list(gen_rfc822_records_from_io_log(job, result))
# Ensure that we saw both records
self.assertEqual(records, [
RFC822Record(
{'attr': 'value1'}, Origin(JobOutputTextSource(job), 1, 1)),
RFC822Record(
{'attr': 'value2'}, Origin(JobOutputTextSource(job), 3, 3)),
])
@mock.patch('plainbox.impl.ctrl.logger')
def test_parse_error(self, mock_logger):
# Setup a mock job and result, give some io log to the result
job = mock.Mock(spec=JobDefinition)
result = mock.Mock(spec=IJobResult)
result.get_io_log.return_value = [
(0, 'stdout', b'attr: value1\n'),
(0, 'stdout', b'\n'),
(0, 'stdout', b'error\n'),
(0, 'stdout', b'\n'),
(0, 'stdout', b'attr: value2\n')]
# Parse the IO log records
records = list(gen_rfc822_records_from_io_log(job, result))
# Ensure that only the first record was generated
self.assertEqual(records, [
RFC822Record(
{'attr': 'value1'}, Origin(JobOutputTextSource(job), 1, 1)),
])
# Ensure that a warning was logged
mock_logger.warning.assert_called_once_with(
"local script %s returned invalid RFC822 data: %s",
job.id, RFC822SyntaxError(
None, 3, "Unexpected non-empty line: 'error\\n'"))
def test_get_via_cycle__no_cycle(self):
job_a = mock.Mock(spec_set=JobDefinition, name='job_a')
job_a.id = 'a'
job_state_a = mock.Mock(spec_set=JobState, name='job_state_a')
job_state_a.job = job_a
job_state_a.via_job = None
job_state_map = {job_a.id: job_state_a}
self.assertEqual(get_via_cycle(job_state_map, job_a), ())
def test_get_via_cycle__trivial(self):
job_a = mock.Mock(spec_set=JobDefinition, name='job_a')
job_a.id = 'a'
job_state_a = mock.Mock(spec_set=JobState, name='job_state_a')
job_state_a.job = job_a
job_state_a.via_job = job_a
job_state_map = {job_a.id: job_state_a}
self.assertEqual(get_via_cycle(job_state_map, job_a), [job_a, job_a])
def test_get_via_cycle__indirect(self):
job_a = mock.Mock(spec_set=JobDefinition, name='job_a')
job_a.id = 'a'
job_b = mock.Mock(spec_set=JobDefinition, name='job_b')
job_b.id = 'b'
job_state_a = mock.Mock(spec_set=JobState, name='job_state_a')
job_state_a.job = job_a
job_state_a.via_job = job_b
job_state_b = mock.Mock(spec_set=JobState, name='job_state_b')
job_state_b.job = job_b
job_state_b.via_job = job_a
job_state_map = {
job_a.id: job_state_a,
job_b.id: job_state_b,
}
self.assertEqual(
get_via_cycle(job_state_map, job_a),
[job_a, job_b, job_a])
class SymLinkNestTests(TestCase):
"""
Tests for SymLinkNest class
"""
NEST_DIR = "nest"
def setUp(self):
self.nest = SymLinkNest(self.NEST_DIR)
def test_init(self):
"""
verify that SymLinkNest.__init__() stores its argument
"""
self.assertEqual(self.nest._dirname, self.NEST_DIR)
def test_add_provider(self):
"""
verify that add_provider() adds each executable
"""
provider = mock.Mock(name='provider', spec=Provider1)
provider.executable_list = ['exec1', 'exec2']
with mock.patch.object(self.nest, 'add_executable'):
self.nest.add_provider(provider)
self.nest.add_executable.assert_has_calls([
(('exec1',), {}),
(('exec2',), {})])
@mock.patch('os.symlink')
def test_add_executable(self, mock_symlink):
self.nest.add_executable('/usr/lib/foo/exec')
mock_symlink.assert_called_with(
'/usr/lib/foo/exec', 'nest/exec')
class CheckBoxExecutionControllerTestsMixIn:
"""
Mix-in class that defines tests for CheckBoxExecutionController
"""
SESSION_DIR = 'session-dir'
PROVIDER_LIST = [] # we don't need any here
NEST_DIR = 'nest-dir' # used as fake data only
CLS = CheckBoxExecutionController
@mock.patch('plainbox.impl.ctrl.check_output')
def setUp(self, mock_check_output):
self.ctrl = self.CLS(self.PROVIDER_LIST)
# Create mocked job definition.
# Put a mocked provider on the job and give it some values for:
# * extra_PYTHONPATH (optional, set it to None),
# * CHECKBOX_SHARE (mandatory)
self.job = mock.Mock(
name='job',
spec=JobDefinition,
provider=mock.Mock(
name='provider',
spec=IProvider1,
extra_PYTHONPATH=None,
CHECKBOX_SHARE='CHECKBOX_SHARE',
data_dir='data_dir', units_dir='units_dir'))
self.job_state = mock.Mock(name='job_state', spec=JobState)
# Mock the default flags (empty set)
self.job.get_flag_set.return_value = frozenset()
# Create mocked config.
# Put an empty dictionary of environment overrides
# that is expected by get_execution_environment()
self.config = mock.Mock(
name='config',
spec=PlainBoxConfig,
environment={})
# Create a mocked extcmd_popen
self.extcmd_popen = mock.Mock(
name='extcmd_popen',
spec=extcmd.ExternalCommand)
@mock.patch('plainbox.impl.ctrl.check_output')
def test_init(self, mock_check_output):
"""
verify that __init__() stores the provider list
"""
provider_list = mock.Mock()
ctrl = self.CLS(provider_list)
self.assertIs(ctrl._provider_list, provider_list)
@mock.patch('os.path.isdir')
@mock.patch('os.makedirs')
def test_execute_job(self, mock_makedirs, mock_os_path_isdir):
"""
verify that execute_job() correctly glues all the basic pieces
"""
# Call the tested method, execute_job() but mock-away
# methods that we're not testing here,
# get_execution_{command,environment}() and configured_filesystem()
with mock.patch.object(self.ctrl, 'get_execution_command'), \
mock.patch.object(self.ctrl, 'get_execution_environment'), \
mock.patch.object(self.ctrl, 'configured_filesystem'), \
mock.patch.object(self.ctrl, 'temporary_cwd'):
retval = self.ctrl.execute_job(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.extcmd_popen)
# Ensure that call was invoked with command and environment (passed
# as keyword argument). Extract the return value of
# configured_filesystem() as nest_dir so that we can pass it to
# other calls to get their mocked return values.
# Urgh! is this doable somehow without all that?
nest_dir = self.ctrl.configured_filesystem().__enter__()
cwd_dir = self.ctrl.temporary_cwd().__enter__()
self.extcmd_popen.call.assert_called_with(
self.ctrl.get_execution_command(
self.job, self.job_state, self.config, self.SESSION_DIR,
nest_dir),
env=self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
nest_dir),
cwd=cwd_dir)
# Ensure that execute_job() returns the return value of call()
self.assertEqual(retval, self.extcmd_popen.call())
# Ensure that presence of CHECKBOX_DATA directory was checked for
mock_os_path_isdir.assert_called_with(
self.ctrl.get_CHECKBOX_DATA(self.SESSION_DIR))
def test_get_score_for_random_jobs(self):
# Ensure that score for random jobs is -1
self.assertEqual(self.ctrl.get_score(mock.Mock()), -1)
def test_get_score_for_checkbox_jobs(self):
# Ensure that the score for a mock of JobDefinition (which is a checkbox
# job in disguise) is whatever get_checkbox_score() returns.
with mock.patch.object(
self.ctrl, 'get_checkbox_score') as mock_get_checkbox_score:
self.assertEqual(
self.ctrl.get_score(mock.Mock(spec=JobDefinition)),
mock_get_checkbox_score())
def test_CHECKBOX_DATA(self):
"""
verify the value of CHECKBOX_DATA
"""
self.assertEqual(
self.ctrl.get_CHECKBOX_DATA(self.SESSION_DIR),
"session-dir/CHECKBOX_DATA")
@mock.patch('json.dumps')
@mock.patch('json.loads')
@mock.patch('os.fdopen')
def test_noreturn_flag_hangs(self, mock_os_fdopen, mock_json_loads,
mock_json_dumps):
"""
verify that jobs having 'noreturn' flag call _halt after executing
command
"""
self.job.get_flag_set.return_value = {'noreturn'}
with mock.patch.object(self.ctrl, 'get_execution_command'), \
mock.patch.object(self.ctrl, 'get_execution_environment'), \
mock.patch.object(self.ctrl, 'configured_filesystem'), \
mock.patch.object(self.ctrl, 'temporary_cwd'), \
mock.patch.object(self.ctrl, '_halt'):
self.ctrl.execute_job(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.extcmd_popen)
self.ctrl._halt.assert_called_once_with()
class UserJobExecutionControllerTests(CheckBoxExecutionControllerTestsMixIn,
TestCase):
"""
Tests for UserJobExecutionController
"""
CLS = UserJobExecutionController
def test_get_command(self):
"""
verify that we simply execute the command via job.shell
"""
self.assertEqual(
self.ctrl.get_execution_command(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR),
[self.job.shell, '-c', self.job.command])
def test_get_checkbox_score_for_jobs_without_user(self):
"""
verify that score for jobs without user override is one
"""
self.job.user = None
self.assertEqual(self.ctrl.get_checkbox_score(self.job), 1)
@mock.patch('sys.platform')
@mock.patch('os.getuid')
def test_get_checkbox_score_for_jobs_with_user(
self, mock_getuid, mock_plat):
"""
verify that score for jobs with a user override is minus one
"""
mock_plat.return_value = 'linux'
# Ensure we're not root, in case test suite *is* run by root.
mock_getuid.return_value = 1000
self.job.user = 'root'
self.assertEqual(self.ctrl.get_checkbox_score(self.job), -1)
@mock.patch('sys.platform')
@mock.patch('os.getuid')
def test_get_checkbox_score_as_root(self, mock_getuid, mock_plat):
"""
verify that score for jobs with a user override is 4 if I am root
"""
mock_plat.return_value = 'linux'
mock_getuid.return_value = 0 # Pretend to be root
self.job.user = 'root'
self.assertEqual(self.ctrl.get_checkbox_score(self.job), 4)
@mock.patch.dict('os.environ', clear=True)
def test_get_execution_environment_resets_locales(self):
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that LANG is reset to C.UTF-8
self.assertEqual(env['LANG'], 'C.UTF-8')
@mock.patch.dict('os.environ', clear=True, LANG='fake_LANG',
LANGUAGE='fake_LANGUAGE', LC_ALL='fake_LC_ALL')
def test_get_execution_environment_preserves_locales_if_requested(self):
self.job.get_flag_set.return_value = {'preserve-locale'}
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that locale variables are what we mocked them to be
self.assertEqual(env['LANG'], 'fake_LANG')
self.assertEqual(env['LANGUAGE'], 'fake_LANGUAGE')
self.assertEqual(env['LC_ALL'], 'fake_LC_ALL')
@mock.patch.dict('os.environ', clear=True, PYTHONPATH='PYTHONPATH')
def test_get_execution_environment_keeps_PYTHONPATH(self):
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that the existing PYTHONPATH is preserved unchanged
self.assertEqual(env['PYTHONPATH'], 'PYTHONPATH')
@mock.patch.dict('os.environ', clear=True)
def test_get_execution_environment_uses_extra_PYTHONPATH(self):
# Set an extra_PYTHONPATH on the provider object
self.job.provider.extra_PYTHONPATH = 'extra_PYTHONPATH'
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that extra_PYTHONPATH is prepended to PYTHONPATH
self.assertTrue(env['PYTHONPATH'].startswith(
self.job.provider.extra_PYTHONPATH))
@mock.patch.dict('os.environ', clear=True, PYTHONPATH='PYTHONPATH')
def test_get_execution_environment_merges_PYTHONPATH(self):
# Set an extra_PYTHONPATH on the provider object
self.job.provider.extra_PYTHONPATH = 'extra_PYTHONPATH'
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that extra_PYTHONPATH is prepended to PYTHONPATH
self.assertTrue(env['PYTHONPATH'].startswith(
self.job.provider.extra_PYTHONPATH))
self.assertTrue(env['PYTHONPATH'].endswith('PYTHONPATH'))
@mock.patch.dict('os.environ', clear=True)
def test_get_execution_environment_sets_CHECKBOX_SHARE(self):
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that CHECKBOX_SHARE is set to what the job provider wants
self.assertEqual(
env['CHECKBOX_SHARE'], self.job.provider.CHECKBOX_SHARE)
@mock.patch.dict('os.environ', clear=True)
def test_get_execution_environment_sets_CHECKBOX_DATA(self):
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that CHECKBOX_DATA is set to what the controller wants
self.assertEqual(
env['CHECKBOX_DATA'],
self.ctrl.get_CHECKBOX_DATA(self.SESSION_DIR))
@mock.patch.dict('os.environ', clear=True)
def test_get_execution_environment_respects_config_environment(self):
self.config.environment['key'] = 'value'
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that key=value was passed to the environment
self.assertEqual(env['key'], 'value')
@mock.patch.dict('os.environ', clear=True, key='old-value')
def test_get_execution_environment_preferes_existing_environment(self):
self.config.environment['key'] = 'value'
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that 'old-value' takes priority over 'value'
self.assertEqual(env['key'], 'old-value')
class RootViaPTL1ExecutionControllerTests(
CheckBoxExecutionControllerTestsMixIn, TestCase):
"""
Tests for RootViaPTL1ExecutionController
"""
CLS = RootViaPTL1ExecutionController
def test_get_execution_environment_is_None(self):
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that the environment is None
self.assertEqual(env, None)
@mock.patch.dict('os.environ', clear=True, PATH='vanilla-path')
def test_get_command(self):
"""
verify that we run plainbox-trusted-launcher-1 as the desired user
"""
self.job.get_environ_settings.return_value = []
self.job_state.via_job = mock.Mock(
name='generator_job',
spec=JobDefinition,
provider=mock.Mock(
name='provider',
spec=IProviderBackend1,
extra_PYTHONPATH=None,
data_dir="data_dir-generator",
units_dir="units_dir-generator",
CHECKBOX_SHARE='CHECKBOX_SHARE-generator'))
# Mock the default flags (empty set)
self.job_state.via_job.get_flag_set.return_value = frozenset()
PATH = os.pathsep.join([self.NEST_DIR, 'vanilla-path'])
expected = [
'pkexec', '--user', self.job.user,
'plainbox-trusted-launcher-1',
'--generator', self.job_state.via_job.checksum,
'-G', 'CHECKBOX_DATA=session-dir/CHECKBOX_DATA',
'-G', 'CHECKBOX_SHARE=CHECKBOX_SHARE-generator',
'-G', 'LANG=C.UTF-8',
'-G', 'LANGUAGE=',
'-G', 'LC_ALL=C.UTF-8',
'-G', 'PATH={}'.format(PATH),
'-G', 'PLAINBOX_PROVIDER_DATA=data_dir-generator',
'-G', 'PLAINBOX_PROVIDER_UNITS=units_dir-generator',
'-G', 'PLAINBOX_SESSION_SHARE=session-dir/CHECKBOX_DATA',
'--target', self.job.checksum,
'-T', 'CHECKBOX_DATA=session-dir/CHECKBOX_DATA',
'-T', 'CHECKBOX_SHARE=CHECKBOX_SHARE',
'-T', 'LANG=C.UTF-8',
'-T', 'LANGUAGE=',
'-T', 'LC_ALL=C.UTF-8',
'-T', 'PATH={}'.format(PATH),
'-T', 'PLAINBOX_PROVIDER_DATA=data_dir',
'-T', 'PLAINBOX_PROVIDER_UNITS=units_dir',
'-T', 'PLAINBOX_SESSION_SHARE=session-dir/CHECKBOX_DATA',
]
actual = self.ctrl.get_execution_command(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
self.assertEqual(actual, expected)
@mock.patch.dict('os.environ', clear=True, PATH='vanilla-path')
def test_get_command_without_via(self):
"""
verify that we run plainbox-trusted-launcher-1 as the desired user
"""
self.job.get_environ_settings.return_value = []
self.job_state.via_job = None
PATH = os.pathsep.join([self.NEST_DIR, 'vanilla-path'])
expected = [
'pkexec', '--user', self.job.user,
'plainbox-trusted-launcher-1',
'--target', self.job.checksum,
'-T', 'CHECKBOX_DATA=session-dir/CHECKBOX_DATA',
'-T', 'CHECKBOX_SHARE=CHECKBOX_SHARE',
'-T', 'LANG=C.UTF-8',
'-T', 'LANGUAGE=',
'-T', 'LC_ALL=C.UTF-8',
'-T', 'PATH={}'.format(PATH),
'-T', 'PLAINBOX_PROVIDER_DATA=data_dir',
'-T', 'PLAINBOX_PROVIDER_UNITS=units_dir',
'-T', 'PLAINBOX_SESSION_SHARE=session-dir/CHECKBOX_DATA',
]
actual = self.ctrl.get_execution_command(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
self.assertEqual(actual, expected)
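Both expected command lists above follow the same construction rule: a ``pkexec`` prefix, the trusted launcher, then key-sorted ``KEY=VALUE`` pairs flagged ``-T`` (target) or ``-G`` (generator). A toy version of that flag expansion (a sketch of the pattern, not the controller's actual helper):

```python
def env_flags(flag, env):
    # Expand an environment dict into alternating flag/KEY=VALUE
    # arguments, sorted by key, as in the expected command lines above.
    out = []
    for key in sorted(env):
        out.extend([flag, '{}={}'.format(key, env[key])])
    return out


assert env_flags('-T', {'LANG': 'C.UTF-8', 'CHECKBOX_SHARE': 'share'}) == [
    '-T', 'CHECKBOX_SHARE=share', '-T', 'LANG=C.UTF-8']
```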
def test_get_checkbox_score_for_other_providers(self):
# Ensure that the job provider is not Provider1
self.assertNotIsInstance(self.job.provider, Provider1)
# Ensure that we get a negative score of minus one
self.assertEqual(self.ctrl.get_checkbox_score(self.job), -1)
def test_get_checkbox_score_for_insecure_provider1(self):
# Assume that the job is coming from Provider1 provider
# but the provider itself is insecure
self.job.provider = mock.Mock(spec=Provider1, secure=False)
# Ensure that we get a negative score of minus one
self.assertEqual(self.ctrl.get_checkbox_score(self.job), -1)
@mock.patch.dict('plainbox.impl.ctrl.os.environ', clear=True)
def test_get_checkbox_score_for_secure_provider_and_user_job(self):
# Assume that the job is coming from Provider1 provider
# and the provider is secure
self.job.provider = mock.Mock(spec=Provider1, secure=True)
# Assume that the job runs as the current user
self.job.user = None
# Ensure that we get a neutral score of zero
self.assertEqual(self.ctrl.get_checkbox_score(self.job), 0)
@mock.patch.dict('plainbox.impl.ctrl.os.environ', clear=True)
@mock.patch('plainbox.impl.ctrl.check_output')
def test_get_checkbox_score_for_secure_provider_root_job_with_policy(
self, mock_check_output):
# Assume that the job is coming from Provider1 provider
# and the provider is secure
self.job.provider = mock.Mock(spec=Provider1, secure=True)
# Assume that the job runs as root
self.job.user = 'root'
# Ensure we get the right action id from pkaction(1)
mock_check_output.return_value = \
b"org.freedesktop.policykit.pkexec.run-plainbox-job\n"
# Ensure that we get a positive score of three
ctrl = self.CLS(self.PROVIDER_LIST)
self.assertEqual(ctrl.get_checkbox_score(self.job), 3)
@mock.patch.dict('plainbox.impl.ctrl.os.environ', clear=True)
@mock.patch('plainbox.impl.ctrl.check_output')
def test_get_checkbox_score_for_secure_provider_root_job_with_policy_2(
self, mock_check_output):
# Assume that the job is coming from Provider1 provider
# and the provider is secure
self.job.provider = mock.Mock(spec=Provider1, secure=True)
# Assume that the job runs as root
self.job.user = 'root'
# Ensure we get the right action id from pkaction(1) even with
        # polkit version < 0.110 (pkaction always exits with status 1), see:
# https://bugs.freedesktop.org/show_bug.cgi?id=29936#attach_78263
mock_check_output.side_effect = CalledProcessError(
1, '', b"org.freedesktop.policykit.pkexec.run-plainbox-job\n")
# Ensure that we get a positive score of three
ctrl = self.CLS(self.PROVIDER_LIST)
self.assertEqual(ctrl.get_checkbox_score(self.job), 3)
@mock.patch.dict('plainbox.impl.ctrl.os.environ', values={
'SSH_CONNECTION': '1.2.3.4 123 1.2.3.5 22'
})
@mock.patch('plainbox.impl.ctrl.check_output')
def test_get_checkbox_score_for_normally_supported_job_over_ssh(
self, mock_check_output):
# Assume that the job is coming from Provider1 provider
# and the provider is secure
self.job.provider = mock.Mock(spec=Provider1, secure=True)
# Assume that the job runs as root
self.job.user = 'root'
# Assume we get the right action id from pkaction(1) even with
        # polkit version < 0.110 (pkaction always exits with status 1), see:
# https://bugs.freedesktop.org/show_bug.cgi?id=29936#attach_78263
mock_check_output.side_effect = CalledProcessError(
1, '', b"org.freedesktop.policykit.pkexec.run-plainbox-job\n")
        # Ensure that we get a negative score of minus one over ssh
ctrl = self.CLS(self.PROVIDER_LIST)
self.assertEqual(ctrl.get_checkbox_score(self.job), -1)
@mock.patch.dict('plainbox.impl.ctrl.os.environ', clear=True)
@mock.patch('plainbox.impl.ctrl.check_output')
def test_get_checkbox_score_for_secure_provider_root_job_no_policy(
self, mock_check_output):
# Assume that the job is coming from Provider1 provider
# and the provider is secure
self.job.provider = mock.Mock(spec=Provider1, secure=True)
# Assume that the job runs as root
self.job.user = 'root'
        # Ensure that pkaction(1) returns nothing useful
mock_check_output.return_value = "No action with action id BLAHBLAH"
        # Ensure that we get a neutral score of zero
ctrl = self.CLS(self.PROVIDER_LIST)
self.assertEqual(ctrl.get_checkbox_score(self.job), 0)
class RootViaPkexecExecutionControllerTests(
CheckBoxExecutionControllerTestsMixIn, TestCase):
"""
Tests for RootViaPkexecExecutionController
"""
CLS = RootViaPkexecExecutionController
def test_get_execution_environment_is_None(self):
# Call the tested method
env = self.ctrl.get_execution_environment(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR)
# Ensure that the environment is None
self.assertEqual(env, None)
@mock.patch.dict('os.environ', clear=True, PATH='vanilla-path')
def test_get_command(self):
"""
verify that we run env(1) + job.shell as the target user
"""
self.job.get_environ_settings.return_value = []
self.assertEqual(
self.ctrl.get_execution_command(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR),
['pkexec', '--user', self.job.user,
'env',
'CHECKBOX_DATA=session-dir/CHECKBOX_DATA',
'CHECKBOX_SHARE=CHECKBOX_SHARE',
'LANG=C.UTF-8',
'LANGUAGE=',
'LC_ALL=C.UTF-8',
'PATH={}'.format(
os.pathsep.join([self.NEST_DIR, 'vanilla-path'])),
'PLAINBOX_PROVIDER_DATA=data_dir',
'PLAINBOX_PROVIDER_UNITS=units_dir',
'PLAINBOX_SESSION_SHARE=session-dir/CHECKBOX_DATA',
self.job.shell, '-c', self.job.command])
def test_get_checkbox_score_for_user_jobs(self):
# Assume that the job runs as the current user
self.job.user = None
# Ensure that we get a neutral score of zero
self.assertEqual(self.ctrl.get_checkbox_score(self.job), 0)
def test_get_checkbox_score_for_root_jobs(self):
# Assume that the job runs as the root user
self.job.user = 'root'
# Ensure that we get a positive score of one
self.assertEqual(self.ctrl.get_checkbox_score(self.job), 1)
class RootViaSudoExecutionControllerTests(
CheckBoxExecutionControllerTestsMixIn, TestCase):
"""
Tests for RootViaSudoExecutionController
"""
CLS = RootViaSudoExecutionController
@mock.patch.dict('os.environ', clear=True, PATH='vanilla-path')
def test_get_command(self):
"""
verify that we run sudo(8)
"""
self.job.get_environ_settings.return_value = []
self.assertEqual(
self.ctrl.get_execution_command(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR),
['sudo', '-u', self.job.user, 'env',
'CHECKBOX_DATA=session-dir/CHECKBOX_DATA',
'CHECKBOX_SHARE=CHECKBOX_SHARE',
'LANG=C.UTF-8',
'LANGUAGE=',
'LC_ALL=C.UTF-8',
'PATH={}'.format(
os.pathsep.join([self.NEST_DIR, 'vanilla-path'])),
'PLAINBOX_PROVIDER_DATA=data_dir',
'PLAINBOX_PROVIDER_UNITS=units_dir',
'PLAINBOX_SESSION_SHARE=session-dir/CHECKBOX_DATA',
self.job.shell, '-c', self.job.command])
SUDO, ADMIN = range(2)
    # Mock gids for 'sudo' and 'admin'
def fake_getgrnam(self, name):
if name == 'sudo':
return mock.Mock(gr_gid=self.SUDO)
elif name == 'admin':
return mock.Mock(gr_gid=self.ADMIN)
else:
raise ValueError("unexpected group name")
@mock.patch('grp.getgrnam')
@mock.patch('posix.getgroups')
def test_user_can_sudo__sudo_group(self, mock_getgroups, mock_getgrnam):
        # Mock gids for 'sudo' and 'admin'
mock_getgrnam.side_effect = self.fake_getgrnam
        # Mock that the current user is a member of group 0 ('sudo')
mock_getgroups.return_value = [self.SUDO]
# Create a fresh execution controller
ctrl = self.CLS(self.PROVIDER_LIST)
# Ensure that the user can use sudo
self.assertTrue(ctrl.user_can_sudo)
@mock.patch('grp.getgrnam')
@mock.patch('posix.getgroups')
def test_user_can_sudo__admin_group(self, mock_getgroups, mock_getgrnam):
        # Mock gids for 'sudo' and 'admin'
mock_getgrnam.side_effect = self.fake_getgrnam
# Mock that the current user is a member of group 1 ('admin')
mock_getgroups.return_value = [self.ADMIN]
# Create a fresh execution controller
ctrl = self.CLS(self.PROVIDER_LIST)
# Ensure that the user can use sudo
self.assertTrue(ctrl.user_can_sudo)
@mock.patch('grp.getgrnam')
@mock.patch('posix.getgroups')
def test_user_can_sudo__no_groups(self, mock_getgroups, mock_getgrnam):
        # Mock gids for 'sudo' and 'admin'
mock_getgrnam.side_effect = self.fake_getgrnam
        # Mock that the current user is not a member of any group
mock_getgroups.return_value = []
# Create a fresh execution controller
ctrl = self.CLS(self.PROVIDER_LIST)
# Ensure that the user can use sudo
self.assertFalse(ctrl.user_can_sudo)
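The three ``user_can_sudo`` tests above all exercise the same underlying rule: stripped of mocks, it amounts to checking whether any of the user's supplementary gids belongs to the ``sudo`` or ``admin`` group. A standalone sketch of that rule (the gid values here are made up for illustration; the real ones come from ``grp.getgrnam()``):

```python
# Hypothetical gids for the 'sudo' and 'admin' groups.
SUDO_GID = 27
ADMIN_GID = 115


def user_can_sudo(user_gids):
    """True if any supplementary gid matches 'sudo' or 'admin'."""
    return SUDO_GID in user_gids or ADMIN_GID in user_gids


assert user_can_sudo([20, 27])   # member of 'sudo'
assert user_can_sudo([115])      # member of 'admin'
assert not user_can_sudo([])     # member of no groups
```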
def test_get_checkbox_score_without_sudo(self):
# Assume that the user cannot use sudo
self.ctrl.user_can_sudo = False
# Ensure that we get a negative score for this controller
self.assertEqual(self.ctrl.get_checkbox_score(self.job), -1)
def test_get_checkbox_score_with_sudo(self):
# Assume that the user can use sudo
self.ctrl.user_can_sudo = True
# Ensure that we get a positive score for this controller
# The score is actually 2 to be better than the pkexec controller
self.assertEqual(self.ctrl.get_checkbox_score(self.job), 2)
def test_get_checkbox_score_for_non_root_jobs(self):
# Assume that the user can use sudo
self.ctrl.user_can_sudo = True
# But don't require root for the jobs itself
self.job.user = None
# Ensure that we get a negative score for this controller
self.assertEqual(self.ctrl.get_checkbox_score(self.job), -1)
class QmlJobExecutionControllerTests(CheckBoxExecutionControllerTestsMixIn,
TestCase):
"""
Tests for QmlJobExecutionController
"""
CLS = QmlJobExecutionController
SHELL_OUT_FD = 6
SHELL_IN_FD = 7
def test_job_repr(self):
self.assertEqual(
self.ctrl.gen_job_repr(self.job),
{'id': self.job.id,
'summary': self.job.tr_summary(),
'description': self.job.tr_description()})
def test_get_execution_command(self):
"""
Tests gluing of commandline arguments when running QML exec. ctrl.
"""
self.assertEqual(
self.ctrl.get_execution_command(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.NEST_DIR, self.SHELL_OUT_FD, self.SHELL_IN_FD),
['qmlscene', '-I', self.ctrl.QML_MODULES_PATH, '--job',
self.job.qml_file, '--fd-out', self.SHELL_OUT_FD, '--fd-in',
self.SHELL_IN_FD, self.ctrl.QML_SHELL_PATH])
@mock.patch('json.dumps')
@mock.patch('os.path.isdir')
@mock.patch('os.fdopen')
@mock.patch('os.pipe')
@mock.patch('os.write')
@mock.patch('os.close')
def test_execute_job(self, mock_os_close, mock_os_write, mock_os_pipe,
mock_os_fdopen, mock_os_path_isdir, mock_json_dumps):
"""
Test if qml exec. ctrl. correctly runs piping
"""
mock_os_pipe.side_effect = [("pipe0_r", "pipe0_w"),
("pipe1_r", "pipe1_w")]
with mock.patch.object(self.ctrl, 'get_execution_command'), \
mock.patch.object(self.ctrl, 'get_execution_environment'), \
mock.patch.object(self.ctrl, 'configured_filesystem'), \
mock.patch.object(self.ctrl, 'temporary_cwd'), \
mock.patch.object(self.ctrl, 'gen_job_repr', return_value={}):
retval = self.ctrl.execute_job(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.extcmd_popen)
        # Ensure that call was invoked with command and environment (passed
# as keyword argument). Extract the return value of
# configured_filesystem() as nest_dir so that we can pass it to
# other calls to get their mocked return values.
# Urgh! is this doable somehow without all that?
nest_dir = self.ctrl.configured_filesystem().__enter__()
cwd_dir = self.ctrl.temporary_cwd().__enter__()
self.extcmd_popen.call.assert_called_with(
self.ctrl.get_execution_command(
self.job, self.config, self.SESSION_DIR, nest_dir),
env=self.ctrl.get_execution_environment(
self.job, self.config, self.SESSION_DIR, nest_dir),
cwd=cwd_dir,
pass_fds=["pipe0_w", "pipe1_r"])
# Ensure that execute_job() returns the return value of call()
self.assertEqual(retval, self.extcmd_popen.call())
# Ensure that presence of CHECKBOX_DATA directory was checked for
mock_os_path_isdir.assert_called_with(
self.ctrl.get_CHECKBOX_DATA(self.SESSION_DIR))
self.assertEqual(mock_os_pipe.call_count, 2)
self.assertEqual(mock_os_fdopen.call_count, 2)
self.assertEqual(mock_os_close.call_count, 6)
# Ensure that testing_shell_data is properly created
mock_json_dumps.assert_called_once_with({
"job_repr": {},
"session_dir": self.ctrl.get_CHECKBOX_DATA(self.SESSION_DIR)
})
mock_os_fdopen().write.assert_called_with(mock_json_dumps())
@mock.patch('os.path.isdir')
@mock.patch('os.fdopen')
@mock.patch('os.pipe')
@mock.patch('os.write')
@mock.patch('os.close')
def test_pipes_closed_when_cmd_raises(
self, mock_os_close, mock_os_write, mock_os_pipe, mock_os_fdopen,
mock_os_path_isdir):
"""
Test if all pipes used by execute_job() are properly closed if
exception is raised during execution of command
"""
mock_os_pipe.side_effect = [("pipe0_r", "pipe0_w"),
("pipe1_r", "pipe1_w")]
with mock.patch.object(self.ctrl, 'get_execution_command'), \
mock.patch.object(self.ctrl, 'get_execution_environment'), \
mock.patch.object(self.ctrl, 'configured_filesystem'), \
mock.patch.object(self.ctrl, 'temporary_cwd'), \
mock.patch.object(self.ctrl, 'gen_job_repr', return_value={}), \
mock.patch.object(self.extcmd_popen, 'call',
side_effect=Exception('Boom')):
with self.assertRaises(Exception):
self.ctrl.execute_job(
self.job, self.job_state, self.config, self.SESSION_DIR,
self.extcmd_popen)
os.close.assert_any_call('pipe0_r')
os.close.assert_any_call('pipe1_r')
os.close.assert_any_call('pipe0_w')
os.close.assert_any_call('pipe1_w')
def test_get_checkbox_score_for_qml_job(self):
self.job.plugin = 'qml'
self.assertEqual(self.ctrl.get_checkbox_score(self.job), 4)
plainbox-0.25/plainbox/impl/color.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.color` -- ANSI color codes
==============================================
"""
import sys
class ansi_on:
"""
ANSI control codes for various useful stuff.
Reference source: wikipedia
"""
class f:
"""
Foreground color attributes
"""
BLACK = 30
RED = 31
GREEN = 32
YELLOW = 33
BLUE = 34
MAGENTA = 35
CYAN = 36
WHITE = 37
        # 38 is the extended (256-color) foreground selector, unused here
RESET = 39
class b:
"""
Background color attributes
"""
BLACK = 40
RED = 41
GREEN = 42
        YELLOW = 43
BLUE = 44
MAGENTA = 45
CYAN = 46
WHITE = 47
        # 48 is the extended (256-color) background selector, unused here
RESET = 49
class s:
"""
Style attributes
"""
BRIGHT = 1
DIM = 2
NORMAL = 22
RESET_ALL = 0
class ansi_off:
class f:
pass
class b:
pass
class s:
pass
# Convert from numbers to full escape sequences
for obj_on, obj_off in zip(
(ansi_on.f, ansi_on.b, ansi_on.s),
(ansi_off.f, ansi_off.b, ansi_off.s)):
for name in [name for name in dir(obj_on) if name.isupper()]:
setattr(obj_on, name, "\033[%sm" % getattr(obj_on, name))
setattr(obj_off, name, "")
# XXX: Temporary hack that disables colors on win32 until
# all of the codebase has been ported over to use colorama
if sys.platform == 'win32':
try:
import colorama
except ImportError:
ansi_on = ansi_off
else:
colorama.init()
def get_color_for_tty(stream=None):
"""
Get ``ansi_on`` if stdout is a tty, ``ansi_off`` otherwise.
:param stream:
Alternate stream to use (sys.stdout by default)
:returns:
        ``ansi_on`` or ``ansi_off``, depending on whether the stream is a
        tty.
"""
if stream is None:
stream = sys.stdout
return ansi_on if stream.isatty() else ansi_off
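A minimal standalone version of the tty check above, using plain strings in place of the two palette classes (assumed names, not imports from this module):

```python
import io
import sys


def palette_for(stream=None):
    # Mirrors get_color_for_tty(): colors only when the stream is a tty.
    if stream is None:
        stream = sys.stdout
    return "ansi_on" if stream.isatty() else "ansi_off"


# A StringIO is never a tty, so the colorless palette is chosen.
assert palette_for(io.StringIO()) == "ansi_off"
```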
class Colorizer:
"""
Colorizing helper for various kinds of content we need to handle
"""
# NOTE: Ideally result and all would be handled by multi-dispatch __call__
def __init__(self, color=None):
if color is True:
self.c = ansi_on
elif color is False:
self.c = ansi_off
elif color is None:
self.c = get_color_for_tty()
else:
self.c = color
@property
def is_enabled(self):
"""
if true, this colorizer is actually using colors
This property is useful to let applications customize their
behavior if they know color support is desired and enabled.
"""
return self.c is ansi_on
def result(self, result):
return self.custom(
result.tr_outcome(), result.outcome_color_ansi())
def header(self, text, color_name='WHITE', bright=True, fill='='):
return self("[ {} ]".format(text).center(80, fill), color_name, bright)
def f(self, color_name):
return getattr(self.c.f, color_name.upper())
def b(self, color_name):
return getattr(self.c.b, color_name.upper())
def s(self, style_name):
return getattr(self.c.s, style_name.upper())
def __call__(self, text, color_name="WHITE", bright=True):
return ''.join([
self.f(color_name),
self.c.s.BRIGHT if bright else '', str(text),
self.c.s.RESET_ALL])
def custom(self, text, ansi_code):
"""
Render a piece of text with custom ANSI styling sequence
:param text:
The text to stylize
:param ansi_code:
A string containing ANSI escape sequence to use.
:returns:
A combination of ``ansi_code``, ``text`` and a fixed
reset sequence that resets text styles.
.. note::
When the colorizer is not really doing anything (see
:meth:`is_enabled`) then custom text is not used at all. This is
            done to ensure that any custom styling is not permanently enabled
if colors are to be disabled.
"""
return ''.join([
ansi_code if self.is_enabled else "",
text,
self.c.s.RESET_ALL])
def BLACK(self, text, bright=True):
return self(text, "BLACK", bright)
def RED(self, text, bright=True):
return self(text, "RED", bright)
def GREEN(self, text, bright=True):
return self(text, "GREEN", bright)
def YELLOW(self, text, bright=True):
return self(text, "YELLOW", bright)
def BLUE(self, text, bright=True):
return self(text, "BLUE", bright)
def MAGENTA(self, text, bright=True):
return self(text, "MAGENTA", bright)
def CYAN(self, text, bright=True):
return self(text, "CYAN", bright)
def WHITE(self, text, bright=True):
return self(text, "WHITE", bright)
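For the enabled palette, the composition performed by ``__call__`` above reduces to concatenating the foreground code, an optional brightness code, the text, and a full reset. A self-contained sketch of that pattern, with the escape sequences spelled out rather than imported from this module:

```python
# SGR sequences corresponding to ansi_on.f.RED, ansi_on.s.BRIGHT and
# ansi_on.s.RESET_ALL after the conversion loop above has run.
RED = "\033[31m"
BRIGHT = "\033[1m"
RESET_ALL = "\033[0m"


def colorize(text, color=RED, bright=True):
    # Mirror of Colorizer.__call__: color + optional brightness +
    # text + full reset.
    return "".join([color, BRIGHT if bright else "", str(text), RESET_ALL])


assert colorize("fail") == "\033[31m\033[1mfail\033[0m"
```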
class CanonicalColors:
"""
Canonical Color Palette.
Colour is an effective, powerful and instantly recognisable medium for
visual communications. To convey the brand personality and brand values,
there is a sophisticated colour palette.
We have introduced a palette which includes both a fresh, lively orange,
and a rich, mature aubergine. The use of aubergine indicates commercial
involvement, while orange is a signal of community engagement.
These colours are used widely in the brand communications, to convey the
precise, reliable and free personality.
Ubuntu core colours.
The Ubuntu colour palette has been created to reflect the spirit of our
brand. :attr:`ubuntu_orange` for a community feel. :attr:`white` for a
clean, fresh and light feel.
:attr:`black` is used in some versions of the brandmark for flexibility of
application and where print restrictions apply. It can also be used for
body copy.
Supporting colours
In addition, there is a supporting colour palette for when communications
have a consumer or enterprise focus.
- :attr:`light_aubergine` for a consumer focus
- :attr:`dark_aubergine` for an enterprise focus
- :attr:`mid_aubergine` for a balance of both
Neutral colours.
:attr:`warm_grey`
For balance. The addition of warm grey softens the combination of
orange and aubergine and provides a bridge between the two.
Warm grey can be used for; backgrounds, graphics, pictograms, dot
patterns, charts and diagrams. It can also be used for large size text.
:attr:`cool_grey`
For typography, particularly body copy. Black can be quite harsh in
combination with aubergine, but grey delivers more balance while still
being legible.
Cool grey can also be used within charts and diagrams.
:attr:`text_grey`
Text grey is used for small size headings, sub-headings and body
copy text only.
Canonical core colours.
The Canonical colour palette has been created to reflect the spirit of our
brand. Aubergine for a smart, focussed feel. White for a clean, fresh and
light feel.
.. see::
http://design.ubuntu.com/brand/colour-palette
"""
#: Ubuntu orange color
ubuntu_orange = (0xdd, 0x48, 0x14)
#: White color
white = (0xff, 0xff, 0xff)
#: Black color
black = (0x00, 0x00, 0x00)
#: Light aubergine color
light_aubergine = (0x77, 0x21, 0x6f)
#: Mid aubergine color
mid_aubergine = (0x5e, 0x27, 0x50)
#: Dark aubergine color
dark_aubergine = (0x2c, 0x00, 0x1e)
#: Warm grey color
warm_grey = (0xae, 0xa7, 0x9f)
#: Cool grey color
cool_grey = (0x33, 0x33, 0x33)
#: Color for small grey dots
small_dot_grey = (0xae, 0xa7, 0x9f)
#: Canonical aubergine color
canonical_aubergine = (0x77, 0x29, 0x53)
#: Text gray color
text_grey = (0x33, 0x33, 0x33)
plainbox-0.25/plainbox/impl/resource.py
# This file is part of Checkbox.
#
# Copyright 2012 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.resource` -- job resources
==============================================
.. warning::
THIS MODULE DOES NOT HAVE STABLE PUBLIC API
"""
import ast
import itertools
import logging
from plainbox.i18n import gettext as _
logger = logging.getLogger("plainbox.resource")
class ExpressionFailedError(Exception):
"""
    Exception raised when a resource expression failed to produce a true value.
This class is meant to be consumed by the UI layers to provide meaningful
error messages to the operator. The expression attribute can be used to
obtain the text of the expression that failed as well as the resource id
that is used by that expression. The resource id can be used to lookup
the (resource) job that produces such values.
"""
def __init__(self, expression):
self.expression = expression
def __str__(self):
return _("expression {!r} evaluated to a non-true result").format(
self.expression.text)
def __repr__(self):
return "<{} expression:{!r}>".format(
self.__class__.__name__, self.expression)
class ExpressionCannotEvaluateError(ExpressionFailedError):
"""
Exception raised when a resource could not be evaluated because it requires
an unavailable resource.
Unlike the base class, this exception is raised before even running the
expression. As in the base class the exception object is meant to have
enough data to provide rich and meaningful error messages to the operator.
"""
def __init__(self, expression, resource_id):
self.expression = expression
self.resource_id = resource_id
def __str__(self):
return _("expression {!r} needs unavailable resource {!r}").format(
self.expression.text, self.resource_id)
class Resource:
"""
A simple container for key-value data
Resource objects are used when evaluating expressions as containers for
data read from resource scripts. Each RFC822 record produced by a resource
    script is converted to a new Resource object.
"""
    __slots__ = ('_data',)
def __init__(self, data=None):
if data is None:
data = {}
object.__setattr__(self, '_data', data)
def __iter__(self):
data = object.__getattribute__(self, '_data')
return iter(data)
def __setattr__(self, attr, value):
if attr.startswith("_"):
raise AttributeError(attr)
data = object.__getattribute__(self, '_data')
data[attr] = value
def __delattr__(self, attr):
data = object.__getattribute__(self, '_data')
if attr in data:
del data[attr]
else:
raise AttributeError(attr)
def __getattr__(self, attr):
data = object.__getattribute__(self, '_data')
if attr in data:
return data[attr]
else:
raise AttributeError(attr, "don't poke at %r" % attr)
def __getattribute__(self, attr):
if attr != "_data":
return object.__getattribute__(self, attr)
else:
raise AttributeError("don't poke at _data")
def __getitem__(self, item):
data = object.__getattribute__(self, '_data')
return data[item]
def __setitem__(self, item, value):
data = object.__getattribute__(self, '_data')
data[item] = value
def __delitem__(self, item):
data = object.__getattribute__(self, '_data')
del data[item]
def __repr__(self):
data = object.__getattribute__(self, '_data')
return "Resource({!r})".format(data)
def __eq__(self, other):
if not isinstance(other, Resource):
return False
return (
object.__getattribute__(self, '_data')
== object.__getattribute__(other, '_data'))
def __ne__(self, other):
if not isinstance(other, Resource):
return True
return (
object.__getattribute__(self, '_data')
!= object.__getattribute__(other, '_data'))
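The attribute and item protocols above both route to the same private dict. A condensed stand-in (just the access pattern, not the class above) shows the duality that resource expressions rely on:

```python
class MiniResource:
    """Condensed illustration of Resource's attribute/item duality."""

    def __init__(self, data=None):
        object.__setattr__(self, '_data', data if data is not None else {})

    def __setattr__(self, attr, value):
        # All writes land in the private dict, not instance __dict__
        object.__getattribute__(self, '_data')[attr] = value

    def __getattr__(self, attr):
        # Only called when normal lookup fails, i.e. for record fields
        try:
            return object.__getattribute__(self, '_data')[attr]
        except KeyError:
            raise AttributeError(attr)

    def __getitem__(self, item):
        return object.__getattribute__(self, '_data')[item]


r = MiniResource()
r.name = 'fwts'                        # stored in the private dict...
assert r.name == r['name'] == 'fwts'   # ...reachable both ways
```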
class FakeResource:
"""
A resource that seemingly has any accessed attribute.
    All attributes resolve back to their name. All accessed attributes are
recorded and can be referenced from a set that needs to be passed to the
initializer. Knowledge about accessed attributes can be helpful in various
forms of static analysis.
"""
def __init__(self, accessed_attributes=None):
"""
Initialize a fake resource object.
:param accessed_attributes:
An optional set object that will record all accessed resource
attributes.
"""
self._accessed_attributes = accessed_attributes
def _notice(self, attr):
if self._accessed_attributes is not None:
self._accessed_attributes.add(attr)
def __getattr__(self, attr):
self._notice(attr)
return attr
def __getitem__(self, item):
self._notice(item)
return item
def __contains__(self, item):
return True
class ResourceProgram:
"""
Class for storing and executing resource programs.
This is used by job requirement expressions
"""
def __init__(self, program_text, implicit_namespace=None, imports=None):
"""
Analyze the requirement program and prepare it for execution
The requirement program must be a string (of possibly many lines), each
of which must be a valid ResourceExpression. Empty lines are ignored.
May raise ResourceProgramError (including CodeNotAllowed) or a
SyntaxError
"""
self._expression_list = []
for line in program_text.splitlines():
if line.strip() != "":
self._expression_list.append(
ResourceExpression(line, implicit_namespace, imports))
@property
def expression_list(self):
"""
A list of ResourceExpression instances
"""
return self._expression_list
@property
def required_resources(self):
"""
A set() of resource ids that are needed to evaluate this program
"""
ids = set()
for expression in self._expression_list:
for resource_id in expression.resource_id_list:
ids.add(resource_id)
return ids
def evaluate_or_raise(self, resource_map):
"""
Evaluate the program with the given map of resources.
        Raises an ExpressionFailedError exception if any of the expressions
that make up this program cannot be executed or executes but produces a
non-true value.
Returns True
Resources must be a dictionary of mapping resource id to a list of
Resource objects.
"""
# First check if we have all required resources
for expression in self._expression_list:
for resource_id in expression.resource_id_list:
if resource_id not in resource_map:
raise ExpressionCannotEvaluateError(
expression, resource_id)
# Then evaluate all expressions
for expression in self._expression_list:
result = expression.evaluate(*[
resource_map[resource_id]
for resource_id in expression.resource_id_list
])
if not result:
raise ExpressionFailedError(expression)
return True
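The evaluation loop above can be pictured with a toy stand-in: each expression line is evaluated once per record of its resource, and the program passes if every line matches at least one record. (The real ResourceExpression restricts the code via the AST whitelist below; this sketch uses plain ``eval`` purely for brevity and is unsafe for untrusted input.)

```python
from types import SimpleNamespace

# Each resource is a list of records; records behave like objects whose
# attributes are the record fields (cf. the Resource class above).
resource_map = {
    'package': [SimpleNamespace(name='fwts'),
                SimpleNamespace(name='plainbox')],
}


def expression_matches(text, resource_id, resource_map):
    # True if the expression evaluates true for at least one record.
    return any(eval(text, {}, {resource_id: record})
               for record in resource_map[resource_id])


assert expression_matches("package.name == 'fwts'", 'package', resource_map)
assert not expression_matches("package.name == 'nope'", 'package',
                              resource_map)
```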
class ResourceProgramError(Exception):
"""
Base class for errors in requirement programs.
This class of errors are based on static analysis, not runtime execution.
Typically they encode unsupported or disallowed python code being used by
an expression somewhere.
"""
class CodeNotAllowed(ResourceProgramError):
"""
    Exception raised when unsupported code is detected inside a requirement
    expression.
"""
def __init__(self, node):
self.node = node
def __repr__(self):
return "CodeNotAllowed({!r})".format(self.node)
def __str__(self):
return _("this kind of python code is not allowed: {}").format(
ast.dump(self.node))
class ResourceNodeVisitor(ast.NodeVisitor):
"""
A NodeVisitor subclass used to analyze requirement expressions.
.. warning::
Implementation of this class requires understanding of
some of the lower levels of python. The general idea is
to use the ast (abstract syntax tree) module to allow
the ResourceExpression class to execute safely (by
not permitting various unsafe operations) and quickly
(by knowing which resources are required so no O(n)
        operations over all resources are ever needed).
Resource expressions are written one per line, each line is like a
    separate mini-program. This visitor will be applied to the root (module)
node resulting from parsing each of those lines.
Each actual expression can only use a small subset of python syntax, most
stuff is actually disallowed. Only basic expressions are permitted.
Function calls are also disallowed, with the notable exception of 'bool',
'int', 'float' and 'len'.
One very important aspect of each expression is the id of the resource it
is computing against. This is visible as the 'object' the expressions are
operating on, such as:
package.name == 'fwts'
    As a rule of thumb exactly one such id is allowed per expression. This
allows the code that evaluates this to know which resource to use. As
resources are actually lists of records (where record values are available
as object attribute) only one object/record is exposed to each expression.
    Using more than one object (by intent or simple typo) would lead to an
    expression that will never match. This visitor class facilitates detecting
that by computing the ids_seen set.
One notable fact is that storing is not allowed so it is (presumably) safe
to evaluate the code in the context of the current python interpreter.
How this works:
Using the ast.NodeVisitor we can visit any node type by defining the
visit_ method. We care about Name and Call nodes and they have
custom validation implemented. For all other nodes the generic_visit()
method is called instead.
On each visit to ast.Name node we record the referenced 'id' (the id of
the object being referenced, in simple terms)
On each visit to ast.Call node we check if the called function is in the
allowed list of ids. This also takes care of stuff like foo()() which
would call the return value of foo.
On each visit to any other ast.Node we check if the class is in the
white-list.
    All violations cause a CodeNotAllowed exception to be raised with the
node that was rejected as argument.
"""
# Allowed function calls
_allowed_call_func_list = (
'len',
'bool',
'int',
'float',
)
# A tuple of allowed types of ast.Node that are white-listed by
# _check_node()
_allowed_node_cls_list = (
# Allowed statements (ast.stmt sub-classes)
ast.Expr, # expressions
# Allowed 'mod's (ast.mod sub-classes)
ast.Module,
# Allowed expressions (ast.expr sub-classes)
ast.Attribute, # attribute access
ast.BinOp, # binary operators
ast.BoolOp, # boolean operations (and/or)
ast.Compare, # comparisons
ast.List, # lists
ast.Name, # name access (top-level name references)
ast.Num, # numbers
ast.Str, # strings
ast.Tuple, # tuples
ast.UnaryOp, # unary operators
# Allow all comparison operators
ast.cmpop, # this allows ast.Eq, ast.Gt and so on
# Allow all boolean operators
ast.boolop, # this allows ast.And, ast.Or
# Allowed expression context (ast.expr_context)
ast.Load, # allow all loads
)
def __init__(self):
"""
Initialize a ResourceNodeVisitor with empty trace of seen identifiers
"""
self._ids_seen_set = set()
self._ids_seen_list = []
@property
def ids_seen_set(self):
"""
set() of ast.Name().id values seen
"""
return self._ids_seen_set
@property
def ids_seen_list(self):
"""
list() of ast.Name().id values seen
"""
return self._ids_seen_list
def visit_Name(self, node):
"""
Internal method of NodeVisitor.
This method is called whenever generic_visit() looks at an instance of
ast.Name(). It records the node identifier and calls _check_node()
"""
self._check_node(node)
if node.id not in self._ids_seen_set:
self._ids_seen_set.add(node.id)
self._ids_seen_list.append(node.id)
def visit_Call(self, node):
"""
Internal method of NodeVisitor.
This method is called whenever generic_visit() looks at an instance of
ast.Call(). Since white-listing Call in general would be unsafe only a
small subset of calls are allowed.
"""
# XXX: Do not call _check_node() here as Call is not on the whitelist
if node.func.id not in self._allowed_call_func_list:
raise CodeNotAllowed(node)
def generic_visit(self, node):
"""
Internal method of NodeVisitor.
Called for all ast.Node() subclasses that don't have a dedicated
visit_xxx() method here. Only needed to all the _check_node() method.
"""
self._check_node(node)
return super(ResourceNodeVisitor, self).generic_visit(node)
def _check_node(self, node):
"""
Internal method of ResourceNodeVisitor.
This method raises CodeNotAllowed() for any node that is outside
of the set of supported node classes.
"""
if not isinstance(node, self._allowed_node_cls_list):
raise CodeNotAllowed(node)
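The whitelist-based validation above can be exercised end-to-end with a condensed, standalone sketch (names here are illustrative, not plainbox API): an ast.NodeVisitor that records referenced names, allows only a handful of built-in calls, and rejects assignments.

```python
import ast

class MiniVisitor(ast.NodeVisitor):
    """Record referenced names; allow only len/bool/int/float calls."""
    _allowed_calls = ('len', 'bool', 'int', 'float')

    def __init__(self):
        self.ids_seen = set()

    def visit_Name(self, node):
        # A Name in Store/Del context means assignment or deletion
        if not isinstance(node.ctx, ast.Load):
            raise ValueError("assignment is not allowed")
        self.ids_seen.add(node.id)

    def visit_Call(self, node):
        # Only direct calls to the whitelisted builtins may appear
        if not (isinstance(node.func, ast.Name)
                and node.func.id in self._allowed_calls):
            raise ValueError("this call is not allowed")
        for arg in node.args:
            self.visit(arg)

visitor = MiniVisitor()
visitor.visit(ast.parse("len(package.name) > 0 and package.version == '1.2'"))
print(visitor.ids_seen)  # {'package'}
```

Note how attribute access (`package.name`) still resolves to a single top-level Name node, which is what makes the ids_seen bookkeeping work.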
class RequirementNodeVisitor(ast.NodeVisitor):
"""
A NodeVisitor subclass used to analyze package requirement expressions.
"""
def __init__(self):
"""
Initialize a RequirementNodeVisitor with an empty list of packages_seen
"""
self._packages_seen = []
@property
def packages_seen(self):
"""
list() of ast.Str().s values (package names) seen in the expression,
suitable for joining with the "|" operator in debian/control files
"""
return self._packages_seen
def visit_Str(self, node):
"""
Internal method of NodeVisitor.
This method is called whenever generic_visit() looks at an instance of
ast.Str().
"""
self._packages_seen.append(node.s)
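RequirementNodeVisitor boils down to collecting every string literal in a requirement expression. A standalone sketch of the same idea (covering both the pre-3.8 ast.Str node and the newer ast.Constant node):

```python
import ast

def packages_in(requirement):
    """Collect every string literal from a requirement expression."""
    seen = []

    class StrCollector(ast.NodeVisitor):
        def visit_Str(self, node):  # string literals on Python < 3.8
            seen.append(node.s)

        def visit_Constant(self, node):  # string literals on Python >= 3.8
            if isinstance(node.value, str):
                seen.append(node.value)

    StrCollector().visit(ast.parse(requirement))
    return seen

print(packages_in("package.name == 'fwts' or package.name == 'stress'"))
# ['fwts', 'stress']
```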
class NoResourcesReferenced(ResourceProgramError):
"""
Exception raised when an expression does not reference any resources.
"""
def __str__(self):
return _("expression did not reference any resources")
class ResourceSyntaxError(ResourceProgramError):
def __str__(self):
return _("syntax error in resource expression")
class ResourceExpression:
"""
Class representing a single line of a requirement program.
Each valid expression references at least one resource. In practical terms
each resource expression is a valid python expression that has no side
effects (calls almost no methods, does not assign anything) and that can
be evaluated against variables which reference Resource objects.
"""
def __init__(self, text, implicit_namespace=None, imports=None):
"""
Analyze the text and prepare it for execution
May raise ResourceProgramError
"""
self._implicit_namespace = implicit_namespace
self._resource_alias_list = self._analyze(text)
self._resource_id_list = []
if imports is None:
imports = ()
# Respect any import statements.
# They always take priority over anything we may know locally
for resource_alias in self._resource_alias_list:
for imported_resource_id, imported_alias in imports:
if imported_alias == resource_alias:
self._resource_id_list.append(imported_resource_id)
break
else:
self._resource_id_list.append(resource_alias)
self._text = text
self._lambda = eval("lambda {}: {}".format(
', '.join(self._resource_alias_list), self._text))
def __str__(self):
return self._text
def __repr__(self):
return "".format(self._text)
def __eq__(self, other):
if isinstance(other, ResourceExpression):
return self._text == other._text
return NotImplemented
def __ne__(self, other):
if isinstance(other, ResourceExpression):
return self._text != other._text
return NotImplemented
@property
def text(self):
"""
The text of the original expression
"""
return self._text
@property
def resource_id_list(self):
"""
The list of ids of the resources this expression depends on
This is different from :meth:`resource_alias_list` in that the ids may
not be valid python identifiers and are always (ideally) fully-qualified
job identifiers.
"""
return [
"{}::{}".format(self._implicit_namespace, resource_id)
if "::" not in resource_id and self._implicit_namespace
else resource_id
for resource_id in self._resource_id_list
]
@property
def resource_alias_list(self):
"""
The aliases of the resource objects this expression operates on
This is different from :meth:`resource_id_list` in that each alias is
always a valid python identifier. An alias is either the partial
identifier of the resource job or an arbitrary identifier, as used by the
job definition.
"""
return self._resource_alias_list
@property
def implicit_namespace(self):
"""
implicit namespace for partial identifiers, may be None
"""
return self._implicit_namespace
def evaluate(self, *resource_list_list):
"""
Evaluate the expression against lists of resources
Each combination of resources (one item drawn from each list) is bound
to the resource aliases in the expression. The return value is True if
any attempt produces a true value, otherwise the result is False.
"""
for resource_list in resource_list_list:
for resource in resource_list:
if not isinstance(resource, Resource):
raise TypeError(
"Each resource must be a Resource instance")
# Try each resource in sequence.
for resource_pack in itertools.product(*resource_list_list):
# Attempt to evaluate the code with the current resource
try:
result = self._lambda(*resource_pack)
except Exception as exc:
# Treat any exception as a non-fatal error
#
# XXX: it would be interesting to see if we have exceptions and
# why they happen. We could do deeper validation this way.
logger.debug(
_("Exception in requirement expression %r (with %s=%r):"
" %r"),
self._text, self._resource_id_list, resource, exc)
continue
# Treat any true result as a success
if result:
return True
# If we get here then the expression did not match. It's pointless (as
# python returns None implicitly) but it's more explicit on the
# documentation side.
return False
@classmethod
def _analyze(cls, text):
"""
Analyze the expression and return the list of ids of the required resources
May raise SyntaxError or a ResourceProgramError subclass
"""
# Use the ast module to build an abstract syntax tree of the expression
try:
node = ast.parse(text)
except SyntaxError:
raise ResourceSyntaxError
# Use ResourceNodeVisitor to see what kind of ast.Name objects are
# referenced by the expression. This may also raise CodeNotAllowed
# which should be captured by the higher layers.
visitor = ResourceNodeVisitor()
visitor.visit(node)
# Bail if the expression is not using exactly one resource id
if len(visitor.ids_seen_list) == 0:
raise NoResourcesReferenced()
else:
return list(visitor.ids_seen_list)
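Under the hood an expression is compiled once into a lambda over its resource aliases and then tried against each resource record; any exception counts as a non-match. A condensed, self-contained sketch of that evaluation loop (Record is a stand-in for plainbox's Resource class, not its real API):

```python
class Record:
    """Minimal stand-in for Resource: attribute access over a dict."""
    def __init__(self, data):
        self.__dict__.update(data)

def evaluate(text, alias, records):
    """Compile text into `lambda <alias>: <text>` and try each record."""
    fn = eval("lambda {}: {}".format(alias, text))
    for record in records:
        try:
            if fn(record):
                return True
        except Exception:
            # A record without the attribute is a non-match, not an error
            continue
    return False

records = [Record({'name': 'plainbox'}), Record({'name': 'fwts'})]
print(evaluate("package.name == 'fwts'", "package", records))  # True
```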
def parse_imports_stmt(imports):
"""
Parse the 'imports' line and compute the imported symbols.
Return a generator of pairs (job_id, identifier) that describe the
imported job identifiers from an arbitrary namespace.
The syntax of each imports line is:
IMPORT_STMT ::  "from" <NAMESPACE> "import" <PARTIAL_ID>
              | "from" <NAMESPACE> "import" <PARTIAL_ID>
                 AS <IDENTIFIER>
"""
# Poor man's parser. Replace this with our own parser once we get one
for lineno, line in enumerate(imports.splitlines()):
parts = line.split()
if len(parts) not in (4, 6):
raise ValueError(
_("unable to parse imports statement {0!r}: expected"
" exactly four or six tokens").format(line))
if parts[0] != "from":
raise ValueError(
_("unable to parse imports statement {0!r}: expected"
" 'from' keyword").format(line))
namespace = parts[1]
if "::" in namespace:
raise ValueError(
_("unable to parse imports statement {0!r}: expected"
" a namespace, not fully qualified job identifier"))
if parts[2] != "import":
raise ValueError(
_("unable to parse imports statement {0!r}: expected"
" 'import' keyword").format(line))
job_id = effective_id = parts[3]
if "::" in job_id:
raise ValueError(
_("unable to parse imports statement {0!r}: expected"
" a partial job identifier, not a fully qualified job"
" identifier").format(line))
if len(parts) == 6:
if parts[4] != "as":
raise ValueError(
_("unable to parse imports statement {0!r}: expected"
" 'as' keyword").format(line))
effective_id = parts[5]
yield ("{}::{}".format(namespace, job_id), effective_id)
plainbox-0.25/plainbox/impl/parsers.py

# This file is part of Checkbox.
#
# Copyright 2013-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.parsers` -- generic parser interface
========================================================
This module offers a high-level API for parsing text into hierarchical
data structures, in particular, JSON. Parsers like this can be used
to create abstract syntax trees of compatible inputs. For convenience
and scriptability any parser is expected to be able to dump its AST
as JSON.
"""
import abc
import inspect
import json
import logging
import re
from plainbox.i18n import gettext as _
from plainbox.impl.secure.plugins import PkgResourcesPlugInCollection, PlugIn
logger = logging.getLogger("plainbox.parsers")
Pattern = type(re.compile(""))
class IParser(metaclass=abc.ABCMeta):
"""
Abstract interface for parsers.
The interface is meant to be suitable for the implementation of the
`plainbox dev parse` command. It offers a simple API for parsing strings
and getting JSON in result.
"""
@abc.abstractproperty
def name(self):
"""
name of the parser
"""
@abc.abstractproperty
def summary(self):
"""
one-line description of the parser
"""
@abc.abstractmethod
def parse_text_to_ast(self, text):
"""
Parse the specified text and return a parser-specific native Abstract
Syntax Tree that represents the input.
Any exception gets logged and causes None to be returned.
"""
@abc.abstractmethod
def parse_text_to_json(self, text):
"""
Parse the specified text and return a JSON string representing the
result.
:returns: None in case of parse error
:returns: string representing JSON version of the parsed AST
"""
class ParserPlugIn(IParser, PlugIn):
"""
PlugIn wrapping a parser function.
Useful for wrapping checkbox parser functions.
"""
@property
def name(self):
"""
name of the parser
"""
return self.plugin_name
@property
def parser_fn(self):
"""
real parser function
"""
return self.plugin_object
@property
def summary(self):
"""
one-line description of the parser
This value is computed from the docstring of the wrapped function.
In fact, it is the first line of the docstring.
"""
return inspect.getdoc(self.parser_fn).split('\n', 1)[0]
def parse_text_to_json(self, text):
"""
Parse the specified text and return a JSON string representing the
result.
:returns: None in case of parse error
:returns: string representing JSON version of the parsed AST
"""
ast = self.parse_text_to_ast(text)
if ast is not None:
return json.dumps(ast, indent=4, sort_keys=True,
default=self._to_json)
def parse_text_to_ast(self, text):
"""
Parse the specified text and return a parser-specific native Abstract
Syntax Tree that represents the input.
Any exception gets logged and causes None to be returned.
"""
try:
return self.parser_fn(text)
except Exception:
# TODO: portable parser error would be nice, to know where it
# fails. This is difficult at this stage.
logger.exception(_("Cannot parse input"))
return None
def _to_json(self, obj):
"""
Helper method to convert arbitrary objects to their JSON
representation.
Anything that has an 'as_json' attribute will be converted to the result
of calling that method. For all other objects __dict__ is returned.
"""
if isinstance(obj, Pattern):
return ""
elif hasattr(obj, "as_json"):
return obj.as_json()
elif hasattr(obj, "__dict__"):
return obj.__dict__
elif hasattr(obj, "__slots__"):
return {slot: getattr(obj, slot) for slot in obj.__slots__}
else:
raise NotImplementedError(
"unable to json-ify {!r}".format(obj.__class__))
# Collection of all parsers
all_parsers = PkgResourcesPlugInCollection(
'plainbox.parsers', wrapper=ParserPlugIn)
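The `_to_json` hook above is the standard `default=` extension point of `json.dumps`; a minimal standalone sketch of the same pattern (Node is a toy class, not part of plainbox):

```python
import json

class Node:
    """Toy AST node with plain attributes."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def to_json(obj):
    """json.dumps default= hook: prefer as_json(), fall back to __dict__."""
    if hasattr(obj, "as_json"):
        return obj.as_json()
    if hasattr(obj, "__dict__"):
        return obj.__dict__
    raise TypeError("unable to json-ify {!r}".format(obj.__class__))

tree = Node("root", [Node("leaf")])
print(json.dumps(tree, default=to_json, sort_keys=True))
# {"children": [{"children": [], "name": "leaf"}], "name": "root"}
```

The hook is only consulted for objects json cannot serialize natively, so plain strings, lists and dicts pass through untouched.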
plainbox-0.25/plainbox/impl/test_resource.py

# This file is part of Checkbox.
#
# Copyright 2012 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.test_resource
===========================
Test definitions for plainbox.impl.resource module
"""
import ast
from unittest import TestCase
from plainbox.impl.resource import CodeNotAllowed
from plainbox.impl.resource import ExpressionCannotEvaluateError
from plainbox.impl.resource import ExpressionFailedError
from plainbox.impl.resource import FakeResource
from plainbox.impl.resource import NoResourcesReferenced
from plainbox.impl.resource import Resource
from plainbox.impl.resource import ResourceExpression
from plainbox.impl.resource import ResourceNodeVisitor
from plainbox.impl.resource import ResourceProgram
from plainbox.impl.resource import ResourceProgramError
from plainbox.impl.resource import ResourceSyntaxError
class ExpressionFailedTests(TestCase):
def test_smoke(self):
expression = ResourceExpression('resource.attr == "value"')
exc = ExpressionFailedError(expression)
self.assertIs(exc.expression, expression)
self.assertEqual(str(exc), (
"expression 'resource.attr == \"value\"' evaluated to a non-true"
" result"))
self.assertEqual(repr(exc), (
">"))
class ExpressionCannotEvaluateErrorTests(TestCase):
def test_smoke(self):
expression = ResourceExpression('resource.attr == "value"')
exc = ExpressionCannotEvaluateError(expression, 'resource')
self.assertIs(exc.expression, expression)
self.assertEqual(str(exc), (
"expression 'resource.attr == \"value\"' needs unavailable"
" resource 'resource'"))
self.assertEqual(repr(exc), (
">"))
class ResourceTests(TestCase):
def test_init(self):
res = Resource()
self.assertEqual(self._get_private_data(res), {})
res = Resource({'attr': 'value'})
self.assertEqual(self._get_private_data(res), {'attr': 'value'})
def test_private_data_is_somewhat_protected(self):
res = Resource()
self.assertRaises(AttributeError, getattr, res, "_data")
self.assertRaises(AttributeError, delattr, res, "_data")
self.assertRaises(AttributeError, setattr, res, "_data", None)
def test_private_data_is_not_that_protected(self):
res = Resource()
data = self._get_private_data(res)
self.assertEqual(data, {})
data['attr'] = 'value'
self.assertEqual(res.attr, 'value')
def test_getattr(self):
res = Resource()
self.assertRaises(AttributeError, getattr, res, "attr")
res = Resource({'attr': 'value'})
self.assertEqual(getattr(res, 'attr'), 'value')
def test_getitem(self):
res = Resource()
self.assertRaises(KeyError, lambda res: res["attr"], res)
res = Resource({'attr': 'value'})
self.assertEqual(res['attr'], 'value')
def test_setattr(self):
res = Resource()
res.attr = 'value'
self.assertEqual(res.attr, 'value')
res.attr = 'other value'
self.assertEqual(res.attr, 'other value')
def test_setitem(self):
res = Resource()
res['attr'] = 'value'
self.assertEqual(res['attr'], 'value')
res['attr'] = 'other value'
self.assertEqual(res['attr'], 'other value')
def test_delattr(self):
res = Resource()
self.assertRaises(AttributeError, delattr, res, "attr")
res = Resource({'attr': 'value'})
del res.attr
self.assertRaises(AttributeError, getattr, res, "attr")
self.assertRaises(AttributeError, lambda res: res.attr, res)
def test_delitem(self):
res = Resource()
with self.assertRaises(KeyError):
del res["attr"]
res = Resource({'attr': 'value'})
del res['attr']
self.assertRaises(KeyError, lambda res: res['attr'], res)
def test_repr(self):
self.assertEqual(repr(Resource()), "Resource({})")
self.assertEqual(repr(Resource({'attr': 'value'})),
"Resource({'attr': 'value'})")
def test_eq(self):
self.assertEqual(Resource(), Resource())
self.assertEqual(Resource({'attr': 'value'}),
Resource({'attr': 'value'}))
self.assertFalse(Resource() == object())
def test_ne(self):
self.assertNotEqual(Resource({'attr': 'value'}),
Resource({'attr': 'other value'}))
self.assertNotEqual(Resource({'attr': 'value'}),
Resource())
self.assertTrue(Resource() != object())
def _get_private_data(self, res):
return object.__getattribute__(res, '_data')
class FakeResourceTests(TestCase):
def test_resource_attributes(self):
"""
Verify that any accessed attribute / item resolves to its name
"""
resource = FakeResource()
self.assertEqual(resource.foo, 'foo')
self.assertEqual(resource['bar'], 'bar')
def test_set_membership(self):
"""
Verify that any item is present
"""
self.assertTrue('foo' in FakeResource())
def test_tracking_support(self):
"""
Verify that each accessed attribute / item is remembered
"""
accessed = set()
resource = FakeResource(accessed)
self.assertEqual(resource.foo, 'foo')
self.assertEqual(resource['bar'], 'bar')
self.assertEqual(accessed, {'foo', 'bar'})
class ResourceProgramErrorTests(TestCase):
def test_none(self):
exc = NoResourcesReferenced()
self.assertEqual(
str(exc), "expression did not reference any resources")
class CodeNotAllowedTests(TestCase):
def test_smoke(self):
node = ast.parse("foo")
exc = CodeNotAllowed(node)
self.assertIs(exc.node, node)
def test_inheritance(self):
self.assertTrue(issubclass(CodeNotAllowed, ResourceProgramError))
class ResourceNodeVisitorTests(TestCase):
def test_smoke(self):
visitor = ResourceNodeVisitor()
self.assertEqual(visitor.ids_seen_set, set())
self.assertEqual(visitor.ids_seen_list, [])
def test_ids_seen(self):
visitor = ResourceNodeVisitor()
node = ast.parse("package.name == 'fwts' and package.version == '1.2'")
visitor.visit(node)
self.assertEqual(visitor.ids_seen_set, {'package'})
self.assertEqual(visitor.ids_seen_list, ['package'])
def test_name_assignment_disallowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("package = 'fwts'")
self.assertRaises(CodeNotAllowed, visitor.visit, node)
def test_attribute_assignment_disallowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("package.name = 'fwts'")
self.assertRaises(CodeNotAllowed, visitor.visit, node)
def test_slice_assignment_disallowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("package[:] = 'fwts'")
self.assertRaises(CodeNotAllowed, visitor.visit, node)
def test_index_assignment_disallowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("package[0] = 'fwts'")
self.assertRaises(CodeNotAllowed, visitor.visit, node)
def test_raising_disallowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("raise foo")
self.assertRaises(CodeNotAllowed, visitor.visit, node)
def test_importing_disallowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("import foo")
self.assertRaises(CodeNotAllowed, visitor.visit, node)
def test_function_calls_disallowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("foo()")
self.assertRaises(CodeNotAllowed, visitor.visit, node)
def test_calling_int_is_allowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("len(a)")
visitor.visit(node)
def test_calling_len_is_allowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("len(a)")
visitor.visit(node)
def test_boolean_ops_are_allowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("package.name and package.version")
visitor.visit(node)
def test_comparisons_are_allowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("package.name == 'foo'")
visitor.visit(node)
def test_in_expressions_are_allowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("'foo' in package.name")
visitor.visit(node)
def test_in_expressions_with_list_are_allowed(self):
visitor = ResourceNodeVisitor()
node = ast.parse("package.name in ['foo', 'bar']")
visitor.visit(node)
class ResourceExpressionTests(TestCase):
def test_smoke_good(self):
text = "package.name == 'fwts'"
expr = ResourceExpression(text)
self.assertEqual(expr.text, text)
self.assertEqual(expr.resource_id_list, ["package"])
self.assertEqual(expr.implicit_namespace, None)
def test_namespace_support(self):
text = "package.name == 'fwts'"
expr = ResourceExpression(text, "2014.com.canonical")
self.assertEqual(expr.text, text)
self.assertEqual(expr.resource_id_list,
["2014.com.canonical::package"])
self.assertEqual(expr.implicit_namespace, "2014.com.canonical")
def test_imports_support(self):
text = "package.name == 'fwts'"
expr1 = ResourceExpression(text, "2014.com.example")
self.assertEqual(expr1.text, text)
self.assertEqual(expr1.resource_id_list, ["2014.com.example::package"])
self.assertEqual(expr1.implicit_namespace, "2014.com.example")
expr2 = ResourceExpression(text, "2014.com.example", imports=())
self.assertEqual(expr2.text, text)
self.assertEqual(expr2.resource_id_list, ["2014.com.example::package"])
self.assertEqual(expr2.implicit_namespace, "2014.com.example")
expr3 = ResourceExpression(
text, "2014.com.example", imports=[
('2014.com.canonical::package', 'package')])
self.assertEqual(expr3.text, text)
self.assertEqual(expr3.resource_id_list,
["2014.com.canonical::package"])
self.assertEqual(expr3.implicit_namespace, "2014.com.example")
def test_smoke_bad(self):
self.assertRaises(ResourceSyntaxError, ResourceExpression, "barf'")
self.assertRaises(CodeNotAllowed, ResourceExpression, "a = 5")
self.assertRaises(NoResourcesReferenced, ResourceExpression, "5 < 10")
def test_multiple_resources(self):
expr = ResourceExpression("a.foo == 1 and b.bar == 2")
self.assertEqual(expr.resource_id_list, ["a", "b"])
def test_evaluate_no_namespaces(self):
self.assertFalse(ResourceExpression("whatever").evaluate([]))
def test_evaluate_normal(self):
# NOTE: the actual expr.resource_id_list is irrelevant for this test
expr = ResourceExpression("obj.a == 2")
self.assertTrue(
expr.evaluate([
Resource({'a': 1}), Resource({'a': 2})]))
self.assertTrue(
expr.evaluate([
Resource({'a': 2}), Resource({'a': 1})]))
self.assertFalse(
expr.evaluate([
Resource({'a': 1}), Resource({'a': 3})]))
def test_evaluate_exception(self):
# NOTE: the actual expr.resource_id_list is irrelevant for this test
expr = ResourceExpression("obj.a == 2")
self.assertFalse(expr.evaluate([Resource()]))
def test_evaluate_checks_resource_type(self):
expr = ResourceExpression("obj.a == 2")
self.assertRaises(TypeError, expr.evaluate, [{'a': 2}])
class ResourceProgramTests(TestCase):
def setUp(self):
super(ResourceProgramTests, self).setUp()
self.prog = ResourceProgram(
"\n" # empty lines are ignored
"package.name == 'fwts'\n"
"platform.arch in ('i386', 'amd64')")
def test_expressions(self):
self.assertEqual(len(self.prog.expression_list), 2)
self.assertEqual(self.prog.expression_list[0].text,
"package.name == 'fwts'")
self.assertEqual(self.prog.expression_list[0].resource_id_list,
["package"])
self.assertEqual(self.prog.expression_list[1].text,
"platform.arch in ('i386', 'amd64')")
self.assertEqual(self.prog.expression_list[1].resource_id_list,
["platform"])
def test_required_resources(self):
self.assertEqual(self.prog.required_resources,
set(('package', 'platform')))
def test_evaluate_failure_not_true(self):
resource_map = {
'package': [
Resource({'name': 'plainbox'}),
],
'platform': [
Resource({'arch': 'i386'})]
}
with self.assertRaises(ExpressionFailedError) as call:
self.prog.evaluate_or_raise(resource_map)
self.assertEqual(call.exception.expression.text,
"package.name == 'fwts'")
def test_evaluate_no_match(self):
resource_map = {
'package': [],
'platform': []
}
with self.assertRaises(ExpressionFailedError) as call:
self.prog.evaluate_or_raise(resource_map)
self.assertEqual(call.exception.expression.text,
"package.name == 'fwts'")
def test_evaluate_failure_no_resource(self):
resource_map = {
'platform': [
Resource({'arch': 'i386'})]
}
with self.assertRaises(ExpressionCannotEvaluateError) as call:
self.prog.evaluate_or_raise(resource_map)
self.assertEqual(call.exception.expression.text,
"package.name == 'fwts'")
def test_evaluate_success(self):
resource_map = {
'package': [
Resource({'name': 'plainbox'}),
Resource({'name': 'fwts'})],
'platform': [
Resource({'arch': 'i386'})]
}
self.assertTrue(self.prog.evaluate_or_raise(resource_map))
def test_namespace_support(self):
prog = ResourceProgram(
"package.name == 'fwts'\n"
"platform.arch in ('i386', 'amd64')",
implicit_namespace="2014.com.canonical")
self.assertEqual(
prog.required_resources,
{'2014.com.canonical::package', '2014.com.canonical::platform'})
plainbox-0.25/plainbox/impl/testing_utils.py

# This file is part of Checkbox.
#
# Copyright 2012 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.testing_utils` -- plainbox specific test tools
==================================================================
.. warning::
THIS MODULE DOES NOT HAVE STABLE PUBLIC API
"""
from functools import wraps
from gzip import GzipFile
from io import TextIOWrapper
from tempfile import NamedTemporaryFile
import warnings
from plainbox.impl.job import JobDefinition
from plainbox.impl.result import IOLogRecordWriter
from plainbox.impl.result import MemoryJobResult
from plainbox.impl.secure.origin import Origin
from plainbox.vendor.mock import Mock
def MockJobDefinition(id, *args, **kwargs):
"""
Mock for JobDefinition class
"""
job = Mock(*args, name="job-with-id:{}".format(id),
spec_set=JobDefinition, **kwargs)
job.id = id
return job
def make_io_log(io_log, io_log_dir):
"""
Serialize the I/O log records to a gzipped file and return its pathname
WARNING: The caller has to remove the file once done with it!
"""
with NamedTemporaryFile(
delete=False, suffix='.record.gz', dir=io_log_dir) as byte_stream, \
GzipFile(fileobj=byte_stream, mode='wb') as gzip_stream, \
TextIOWrapper(gzip_stream, encoding='UTF-8') as text_stream:
writer = IOLogRecordWriter(text_stream)
for record in io_log:
writer.write_record(record)
return byte_stream.name
# Deprecated, use JobDefinition() directly
def make_job(id, plugin="dummy", requires=None, depends=None, **kwargs):
"""
Make and return a dummy JobDefinition instance
"""
data = {'id': id}
if plugin is not None:
data['plugin'] = plugin
if requires is not None:
data['requires'] = requires
if depends is not None:
data['depends'] = depends
# Add any custom key-value properties
data.update(kwargs)
return JobDefinition(data, Origin.get_caller_origin())
def make_job_result(outcome="dummy"):
"""
Make and return a dummy JobResult instance
"""
return MemoryJobResult({
'outcome': outcome
})
def suppress_warnings(func):
"""
Suppress all warnings from the decorated function
"""
@wraps(func)
def decorator(*args, **kwargs):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
return func(*args, **kwargs)
return decorator
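suppress_warnings in action: a self-contained sketch (duplicating the decorator so it runs standalone) showing that a warning raised inside a decorated function never reaches an outer recorder.

```python
import warnings
from functools import wraps

def suppress_warnings(func):
    """Suppress all warnings emitted by the decorated function."""
    @wraps(func)
    def decorator(*args, **kwargs):
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            return func(*args, **kwargs)
    return decorator

@suppress_warnings
def noisy():
    warnings.warn("this API is deprecated", DeprecationWarning)
    return 42

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = noisy()
print(result, len(caught))  # 42 0
```

Because catch_warnings saves and restores the filter state, the "ignore" filter applied inside the decorator does not leak into the caller's configuration.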
plainbox-0.25/plainbox/impl/unit/test_template.py

# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.unit.test_template
================================
Test definitions for plainbox.impl.unit.template module
"""
from unittest import TestCase
import warnings
from plainbox.abc import IProvider1
from plainbox.impl.resource import Resource
from plainbox.impl.resource import ResourceExpression
from plainbox.impl.unit.job import JobDefinition
from plainbox.impl.unit.template import TemplateUnit
from plainbox.impl.unit.test_unit import UnitFieldValidationTests
from plainbox.impl.unit.unit import Unit
from plainbox.impl.unit.unit_with_id import UnitWithId
from plainbox.impl.unit.validators import UnitValidationContext
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
from plainbox.impl.validation import ValidationError
from plainbox.vendor import mock
class TemplateUnitValidator(TestCase):
def setUp(self):
warnings.filterwarnings(
'ignore', 'validate is deprecated since version 0.11')
def tearDown(self):
warnings.resetwarnings()
def test_checks_if_template_resource_is_defined(self):
with self.assertRaises(ValidationError) as boom:
TemplateUnit({}).validate()
self.assertEqual(
boom.exception.field, TemplateUnit.fields.template_resource)
self.assertEqual(boom.exception.problem, Problem.missing)
def test_checks_if_template_filter_is_bad(self):
with self.assertRaises(ValidationError) as boom:
TemplateUnit({
'template-resource': 'resource',
'template-filter': 'this is not a valid program'
}).validate()
self.assertEqual(
boom.exception.field, TemplateUnit.fields.template_filter)
self.assertEqual(boom.exception.problem, Problem.wrong)
def test_checks_if_id_is_constant(self):
with self.assertRaises(ValidationError) as boom:
TemplateUnit({
'template-resource': 'resource',
'id': 'constant',
}).validate()
self.assertEqual(
boom.exception.field, JobDefinition.fields.id)
self.assertEqual(boom.exception.problem, Problem.constant)
def test_checks_if_plugin_is_variable(self):
with self.assertRaises(ValidationError) as boom:
TemplateUnit({
'template-resource': 'resource',
'id': 'variable-{attr}',
'plugin': 'variable-{attr}',
}).validate()
self.assertEqual(
boom.exception.field, JobDefinition.fields.plugin)
self.assertEqual(boom.exception.problem, Problem.variable)
def test_checks_if_summary_is_constant(self):
with self.assertRaises(ValidationError) as boom:
TemplateUnit({
'template-resource': 'resource',
'id': 'variable-{attr}',
'plugin': 'constant',
'summary': 'constant',
}).validate()
self.assertEqual(
boom.exception.field, JobDefinition.fields.summary)
self.assertEqual(boom.exception.problem, Problem.constant)
def test_checks_if_description_is_constant(self):
with self.assertRaises(ValidationError) as boom:
TemplateUnit({
'template-resource': 'resource',
'id': 'variable-{attr}',
'plugin': 'constant',
'summary': 'variable-{attr}',
'description': 'constant',
}).validate()
self.assertEqual(
boom.exception.field, JobDefinition.fields.description)
self.assertEqual(boom.exception.problem, Problem.constant)
def test_checks_if_user_is_variable(self):
with self.assertRaises(ValidationError) as boom:
TemplateUnit({
'template-resource': 'resource',
'id': 'variable-{attr}',
'plugin': 'constant',
'summary': 'variable-{attr}',
'description': 'variable-{attr}',
'command': 'variable-{attr}',
'user': 'variable-{attr}',
}).validate()
self.assertEqual(
boom.exception.field, JobDefinition.fields.user)
self.assertEqual(boom.exception.problem, Problem.variable)
def test_checks_instantiated_job(self):
template = TemplateUnit({
'template-resource': 'resource',
'id': 'variable-{attr}',
'plugin': 'constant',
'summary': 'variable-{attr}',
'description': 'variable-{attr}',
'command': 'variable-{attr}',
'user': 'constant',
})
job = mock.Mock(spec_set=JobDefinition)
with mock.patch.object(template, 'instantiate_one', return_value=job):
template.validate()
job.validate.assert_called_once_with(strict=False, deprecated=False)
class TemplateUnitTests(TestCase):
def test_resource_partial_id__empty(self):
"""
Ensure that ``resource_partial_id`` defaults to None
"""
self.assertEqual(TemplateUnit({}).resource_partial_id, None)
def test_resource_partial_id__bare(self):
"""
Ensure that ``resource_partial_id`` is looked up from the
``template-resource`` field
"""
self.assertEqual(TemplateUnit({
'template-resource': 'resource'
}).resource_partial_id, 'resource')
def test_resource_partial_id__explicit(self):
"""
Ensure that ``resource_partial_id`` is correctly parsed from a fully
qualified resource identifier.
"""
self.assertEqual(TemplateUnit({
'template-resource': 'explicit::resource'
}).resource_partial_id, 'resource')
def test_resource_namespace__empty(self):
"""
Ensure that ``resource_namespace`` defaults to None
"""
self.assertEqual(TemplateUnit({}).resource_namespace, None)
def test_resource_namespace__bare(self):
"""
Ensure that ``resource_namespace`` is correctly parsed from an
unqualified resource identifier
"""
self.assertEqual(TemplateUnit({
'template-resource': 'resource'
}).resource_namespace, None)
def test_resource_namespace__implicit(self):
"""
Ensure that ``resource_namespace``, if not parsed from a
fully-qualified resource identifier, defaults to the provider
namespace.
"""
provider = mock.Mock(spec=IProvider1)
self.assertEqual(TemplateUnit({
'template-resource': 'resource'
}, provider=provider).resource_namespace, provider.namespace)
def test_resource_namespace__explicit(self):
"""
Ensure that ``resource_namespace`` is correctly parsed from a
fully-qualified resource identifier
"""
self.assertEqual(TemplateUnit({
'template-resource': 'explicit::resource'
}).resource_namespace, 'explicit')
def test_resource_id__empty(self):
"""
Ensure that ``resource_id`` defaults to None
"""
self.assertEqual(TemplateUnit({}).resource_id, None)
def test_resource_id__bare(self):
"""
Ensure that ``resource_id`` is just the partial resource identifier
when both a fully-qualified resource identifier and the provider
namespace are absent.
"""
self.assertEqual(TemplateUnit({
'template-resource': 'resource'
}).resource_id, 'resource')
def test_resource_id__explicit(self):
"""
Ensure that ``resource_id`` is the fully-qualified resource identifier
when ``template-resource`` is also fully-qualified.
"""
self.assertEqual(TemplateUnit({
'template-resource': 'explicit::resource'
}).resource_id, 'explicit::resource')
def test_resource_id__template_imports(self):
"""
Ensure that ``resource_id`` is the fully-qualified resource identifier
when ``template-resource`` refers to a ``template-imports`` imported
name
"""
self.assertEqual(TemplateUnit({
'template-imports': (
'from 2014.com.example import resource/name as rc'),
'template-resource': 'rc'
}).resource_id, '2014.com.example::resource/name')
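The ``template-imports`` syntax exercised here, ``from <namespace> import <partial-id> as <alias>``, can be sketched with a small standalone parser. This is an illustrative approximation of the grammar only, not the parser plainbox actually uses:

```python
import re

# Illustrative sketch: parse one template-imports line of the form
# "from <namespace> import <partial-id> [as <alias>]".
_IMPORT_RE = re.compile(
    r'^from\s+(?P<namespace>\S+)\s+import\s+(?P<name>\S+)'
    r'(?:\s+as\s+(?P<alias>\S+))?$')


def parse_import(line):
    """Return (alias, fully-qualified-id) for a single import line."""
    match = _IMPORT_RE.match(line.strip())
    if match is None:
        raise ValueError("not a valid import statement: {!r}".format(line))
    namespace, name, alias = match.group('namespace', 'name', 'alias')
    # Without an explicit alias the partial identifier is used as the name.
    return (alias or name), '{}::{}'.format(namespace, name)
```

Applied to the import line used in the test above, this yields ``('rc', '2014.com.example::resource/name')``.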
def test_resource_id__template_imports_and_provider_ns(self):
"""
Ensure that ``resource_id`` is the fully-qualified resource identifier
when ``template-resource`` refers to a ``template-imports`` imported
name, even if the provider namespace could have otherwise been used.
We're essentially testing the priority of imports over the implicit
namespace.
"""
provider = mock.Mock(spec=IProvider1)
provider.namespace = 'namespace'
self.assertEqual(TemplateUnit({
'template-imports': (
'from 2014.com.example import resource/name as rc'),
'template-resource': 'rc'
}, provider=provider).resource_id, '2014.com.example::resource/name')
def test_resource_id__template_and_provider_ns(self):
"""
Ensure that ``resource_id`` is the fully-qualified resource identifier
when ``template-resource`` refers to a partial identifier but the
provider has a namespace we can use
"""
provider = mock.Mock(spec=IProvider1)
provider.namespace = 'namespace'
self.assertEqual(TemplateUnit({
'template-resource': 'rc'
}, provider=provider).resource_id, 'namespace::rc')
def test_template_resource__empty(self):
self.assertEqual(TemplateUnit({}).template_resource, None)
def test_template_resource__bare(self):
self.assertEqual(TemplateUnit({
'template-resource': 'resource'
}).template_resource, 'resource')
def test_template_resource__explicit(self):
self.assertEqual(TemplateUnit({
'template-resource': 'explicit::resource'
}).template_resource, 'explicit::resource')
def test_template_filter__empty(self):
"""
Ensure that ``template_filter`` defaults to None
"""
self.assertEqual(TemplateUnit({}).template_filter, None)
def test_template_filter__typical(self):
"""
Ensure that ``template_filter`` is looked up from the
``template-filter`` field.
"""
self.assertEqual(TemplateUnit({
'template-filter': 'resource.attr == "value"'
}).template_filter, 'resource.attr == "value"')
def test_template_filter__multi_line(self):
"""
Ensure that ``template_filter`` can have multiple lines
(corresponding to multiple conditions that must be met)
"""
self.assertEqual(TemplateUnit({
'template-filter': (
'resource.attr == "value"\n'
'resource.other == "some other value"\n')
}).template_filter, (
'resource.attr == "value"\n'
'resource.other == "some other value"\n'
))
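Each line of a multi-line ``template-filter`` is a separate condition and all of them must hold for a resource to match. A minimal sketch of that conjunction semantics, using plain ``eval`` and a stand-in resource object rather than plainbox's resource-program machinery:

```python
class Attrs:
    """Stand-in resource object exposing keyword arguments as attributes."""

    def __init__(self, **data):
        self.__dict__.update(data)


def filter_matches(template_filter, resource):
    """Evaluate every non-empty line; all conditions must be true."""
    return all(
        eval(line, {}, {'resource': resource})  # illustrative only
        for line in template_filter.splitlines()
        if line.strip())
```

The real implementation compiles each line into a ``ResourceExpression`` instead of calling ``eval`` directly, but the all-lines-must-match rule is the same.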
def test_get_filter_program__nothing(self):
# Without a template-filter field there is no filter program
self.assertEqual(TemplateUnit({}).get_filter_program(), None)
def test_get_filter_program__bare(self):
# Programs are properly represented
prog = TemplateUnit({
'template-filter': 'resource.attr == "value"'
}).get_filter_program()
# The program wraps the right expressions
self.assertEqual(
prog.expression_list,
[ResourceExpression('resource.attr == "value"')])
# The program references the right resources
self.assertEqual(prog.required_resources, set(['resource']))
def test_get_filter_program__explicit(self):
# Programs are properly represented
prog = TemplateUnit({
'template-resource': 'explicit::resource',
'template-filter': 'resource.attr == "value"'
}).get_filter_program()
# The program wraps the right expressions
self.assertEqual(
prog.expression_list,
[ResourceExpression('resource.attr == "value"')])
# The program references the right resources
self.assertEqual(prog.required_resources, set(['explicit::resource']))
def test_get_filter_program__inherited(self):
provider = mock.Mock(spec=IProvider1)
provider.namespace = 'inherited'
# Programs are properly represented
prog = TemplateUnit({
'template-resource': 'resource',
'template-filter': 'resource.attr == "value"'
}, provider=provider).get_filter_program()
# The program wraps the right expressions
self.assertEqual(
prog.expression_list,
[ResourceExpression('resource.attr == "value"')])
# The program references the right resources
self.assertEqual(prog.required_resources, set(['inherited::resource']))
def test_get_target_unit_cls(self):
t1 = TemplateUnit({})
self.assertIs(t1.get_target_unit_cls(), JobDefinition)
t2 = TemplateUnit({'template-unit': 'job'})
self.assertIs(t2.get_target_unit_cls(), JobDefinition)
t3 = TemplateUnit({'template-unit': 'unit'})
self.assertIs(t3.get_target_unit_cls(), Unit)
t4 = TemplateUnit({'template-unit': 'template'})
self.assertIs(t4.get_target_unit_cls(), TemplateUnit)
def test_instantiate_one(self):
template = TemplateUnit({
'template-resource': 'resource',
'id': 'check-device-{dev_name}',
'summary': 'Test {name} ({sys_path})',
'plugin': 'shell',
})
job = template.instantiate_one(Resource({
'dev_name': 'sda1',
'name': 'some device',
'sys_path': '/sys/something',
}))
self.assertIsInstance(job, JobDefinition)
self.assertEqual(job.partial_id, 'check-device-sda1')
self.assertEqual(job.summary, 'Test some device (/sys/something)')
self.assertEqual(job.plugin, 'shell')
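The instantiation verified here boils down to substituting resource attributes into each non-``template-`` field of the template. A rough sketch of just that substitution step (the real ``instantiate_one`` additionally builds a proper ``JobDefinition`` and tracks origins):

```python
import string


def instantiate_fields(template_data, resource_data):
    """Substitute {attr} placeholders in every non-template- field."""
    return {
        key: string.Formatter().vformat(value, (), resource_data)
        for key, value in template_data.items()
        if not key.startswith('template-')
    }
```

With the resource from the test above, ``id`` becomes ``check-device-sda1`` and ``summary`` becomes ``Test some device (/sys/something)``.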
def test_should_instantiate__filter(self):
template = TemplateUnit({
'template-resource': 'resource',
'template-filter': 'resource.attr == "value"',
})
self.assertTrue(
template.should_instantiate(Resource({'attr': 'value'})))
self.assertFalse(
template.should_instantiate(Resource({'attr': 'other value'})))
self.assertFalse(
template.should_instantiate(Resource()))
def test_should_instantiate__no_filter(self):
template = TemplateUnit({
'template-resource': 'resource',
})
self.assertTrue(
template.should_instantiate(Resource({'attr': 'value'})))
self.assertTrue(
template.should_instantiate(Resource({'attr': 'other value'})))
self.assertTrue(
template.should_instantiate(Resource()))
def test_instantiate_all(self):
template = TemplateUnit({
'template-resource': 'resource',
'template-filter': 'resource.attr == "value"',
'id': 'check-device-{dev_name}',
'summary': 'Test {name} ({sys_path})',
'plugin': 'shell',
})
unit_list = template.instantiate_all([
Resource({
'attr': 'value',
'dev_name': 'sda1',
'name': 'some device',
'sys_path': '/sys/something',
}),
Resource({
'attr': 'bad value',
'dev_name': 'sda2',
'name': 'some other device',
'sys_path': '/sys/something-else',
})
])
self.assertEqual(len(unit_list), 1)
self.assertEqual(unit_list[0].partial_id, 'check-device-sda1')
class TemplateUnitFieldValidationTests(UnitFieldValidationTests):
unit_cls = TemplateUnit
def test_template_unit__untranslatable(self):
issue_list = self.unit_cls({
# NOTE: the value must be a valid unit!
'_template-unit': 'unit'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.template_unit,
Problem.unexpected_i18n, Severity.warning)
def test_template_unit__present(self):
issue_list = self.unit_cls({
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.template_unit,
Problem.missing, Severity.advice)
def test_template_resource__untranslatable(self):
issue_list = self.unit_cls({
'_template-resource': 'template_resource'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.template_resource,
Problem.unexpected_i18n, Severity.warning)
def test_template_resource__refers_to_other_units(self):
unit = self.unit_cls({
'template-resource': 'some-unit'
}, provider=self.provider)
message = ("field 'template-resource',"
" unit 'ns::some-unit' is not available")
self.provider.unit_list = [unit]
self.provider.problem_list = []
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.template_resource,
Problem.bad_reference, Severity.error, message)
def test_template_resource__refers_to_other_jobs(self):
other_unit = UnitWithId({
'id': 'some-unit'
}, provider=self.provider)
unit = self.unit_cls({
'template-resource': 'some-unit'
}, provider=self.provider)
message = ("field 'template-resource',"
" the referenced unit is not a job")
self.provider.unit_list = [unit, other_unit]
self.provider.problem_list = []
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.template_resource,
Problem.bad_reference, Severity.error, message)
def test_template_filter__untranslatable(self):
issue_list = self.unit_cls({
'_template-filter': 'template-filter'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.template_filter,
Problem.unexpected_i18n, Severity.warning)
plainbox-0.25/plainbox/impl/unit/unit.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.unit.unit` -- unit definition
=================================================
"""
import abc
import collections
import hashlib
import json
import logging
import string
from plainbox.i18n import gettext as _
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.rfc822 import normalize_rfc822_value
from plainbox.impl.symbol import Symbol
from plainbox.impl.symbol import SymbolDef
from plainbox.impl.symbol import SymbolDefMeta
from plainbox.impl.symbol import SymbolDefNs
from plainbox.impl.unit import get_accessed_parameters
from plainbox.impl.unit._legacy import UnitLegacyAPI
from plainbox.impl.unit.validators import IFieldValidator
from plainbox.impl.unit.validators import MultiUnitFieldIssue
from plainbox.impl.unit.validators import PresentFieldValidator
from plainbox.impl.unit.validators import TemplateInvariantFieldValidator
from plainbox.impl.unit.validators import UnitFieldIssue
from plainbox.impl.unit.validators import UntranslatableFieldValidator
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
__all__ = ['Unit', 'UnitValidator']
logger = logging.getLogger("plainbox.unit")
class UnitValidator:
"""
Validator class for basic :class:`Unit` type
Typically validators are not used directly. Instead, please call
:meth:`Unit.check()` and iterate over the returned issues.
:attr issue_list:
A list of :class:`plainbox.impl.validation.Issue` objects
"""
def __init__(self):
"""
Initialize a new validator
"""
self.issue_list = []
def check(self, unit):
"""
Check a specific unit for correctness
:param unit:
The :class:`Unit` to check
:returns:
A generator yielding subsequent issues
"""
for field_validator, field in self.make_field_validators(unit):
for issue in field_validator.check(self, unit, field):
yield issue
def check_in_context(self, unit, context):
"""
Check a specific unit for correctness in a broader context
:param unit:
The :class:`Unit` to check
:param context:
A :class:`UnitValidationContext` to use as context
:returns:
A generator yielding subsequent issues
"""
for field_validator, field in self.make_field_validators(unit):
for issue in field_validator.check_in_context(
self, unit, field, context):
yield issue
def make_field_validators(self, unit):
"""
Convert unit meta-data to a sequence of validators
:returns:
A generator for pairs (field_validator, field) where
field_validator is an instance of :class:`IFieldValidator` and
field is a symbol with the field name.
"""
for field, spec in sorted(unit.Meta.field_validators.items()):
if isinstance(spec, type):
validator_list = [spec]
elif isinstance(spec, list):
validator_list = spec
else:
raise TypeError(_(
"{}.Meta.fields[{!r}] is not a validator"
).format(unit.__class__.__name__, field))
for index, spec in enumerate(validator_list):
# If it's a validator class, instantiate it
if isinstance(spec, type) \
and issubclass(spec, IFieldValidator):
yield spec(), field
# If it's a validator instance, just return it
elif isinstance(spec, IFieldValidator):
yield spec, field
else:
raise TypeError(_(
"{}.Meta.fields[{!r}][{}] is not a validator"
).format(unit.__class__.__name__, field, index))
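``make_field_validators`` accepts, per field, either a validator class, a validator instance, or a list mixing both. That normalization rule can be condensed into a standalone sketch with a stand-in base class in place of ``IFieldValidator``:

```python
class Validator:
    """Stand-in for IFieldValidator in this sketch."""


def normalize_spec(spec):
    """Yield validator instances from a class, instance, or list spec."""
    spec_list = spec if isinstance(spec, list) else [spec]
    for item in spec_list:
        if isinstance(item, type) and issubclass(item, Validator):
            yield item()  # classes are instantiated on the fly
        elif isinstance(item, Validator):
            yield item
        else:
            raise TypeError("not a validator: {!r}".format(item))
```

This mirrors the class-or-instance handling in the inner loop above; the real method also pairs each instance with its field symbol.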
def advice(self, unit, field, kind, message=None, *, offset=0,
origin=None):
"""
Shortcut for :meth:`report_issue` with severity=Severity.advice
"""
return self.report_issue(
unit, field, kind, Severity.advice, message,
offset=offset, origin=origin)
def warning(self, unit, field, kind, message=None, *, offset=0,
origin=None):
"""
Shortcut for :meth:`report_issue` with severity=Severity.warning
"""
return self.report_issue(
unit, field, kind, Severity.warning, message,
offset=offset, origin=origin)
def error(self, unit, field, kind, message=None, *, offset=0, origin=None):
"""
Shortcut for :meth:`report_issue` with severity=Severity.error
"""
return self.report_issue(
unit, field, kind, Severity.error, message,
offset=offset, origin=origin)
def report_issue(self, unit, field, kind, severity, message=None,
*, offset=0, origin=None):
"""
Helper method that aids in adding issues
:param unit:
A :class:`Unit` that the issue refers to or a list of such objects
:param field:
Name of the field the issue is specific to
:param kind:
Type of the issue, this can be an arbitrary
symbol. If it is not known to the :meth:`explain()`
then a message must be provided or a ValueError
will be raised.
:param severity:
A symbol that represents the severity of the issue.
See :class:`plainbox.impl.validation.Severity`.
:param message:
An (optional) message to use instead of a stock message.
This argument is required if :meth:`explain()` doesn't know
about the specific value of ``kind`` used
:param offset:
An (optional, keyword-only) offset within the field itself.
If specified it is used to point to a specific line in a multi-line
field.
:param origin:
An (optional, keyword-only) origin to use to report the issue.
If specified it totally overrides all implicit origin detection.
The ``offset`` is not applied in this case.
:returns:
The reported issue
:raises ValueError:
if ``kind`` is not known to :meth:`explain()` and
``message`` is None.
"""
# compute the actual message
message = self.explain(
unit[0] if isinstance(unit, list) else unit, field, kind, message)
if message is None:
raise ValueError(
_("unable to deduce message and no message provided"))
# compute the origin
        if isinstance(unit, list):
            cls = MultiUnitFieldIssue
            if origin is None:
                origin = unit[0].origin
                if field in unit[0].field_offset_map:
                    origin = origin.with_offset(
                        unit[0].field_offset_map[field] + offset
                    ).just_line()
                elif '_{}'.format(field) in unit[0].field_offset_map:
                    origin = origin.with_offset(
                        unit[0].field_offset_map['_{}'.format(field)]
                        + offset).just_line()
        else:
            cls = UnitFieldIssue
            if origin is None:
                origin = unit.origin
                if field in unit.field_offset_map:
                    origin = origin.with_offset(
                        unit.field_offset_map[field] + offset
                    ).just_line()
                elif '_{}'.format(field) in unit.field_offset_map:
                    origin = origin.with_offset(
                        unit.field_offset_map['_{}'.format(field)]
                        + offset).just_line()
issue = cls(message, severity, kind, origin, unit, field)
self.issue_list.append(issue)
return issue
def explain(self, unit, field, kind, message):
"""
Lookup an explanatory string for a given issue kind
:returns:
A string (explanation) or None if the issue kind
is not known to this method.
"""
stock_msg = self._explain_map.get(kind)
if message or stock_msg:
return _("field {field!a}, {message}").format(
field=str(field), message=message or stock_msg)
_explain_map = {
Problem.missing: _("required field missing"),
Problem.wrong: _("incorrect value supplied"),
Problem.useless: _("definition useless in this context"),
Problem.deprecated: _("deprecated field used"),
Problem.constant: _("value must be variant (parametrized)"),
Problem.variable: _("value must be invariant (unparametrized)"),
Problem.unknown_param: _("field refers to unknown parameter"),
Problem.not_unique: _("field value is not unique"),
Problem.expected_i18n: _("field should be marked as translatable"),
Problem.unexpected_i18n: (
_("field should not be marked as translatable")),
Problem.syntax_error: _("syntax error inside the field"),
Problem.bad_reference: _("bad reference to another unit"),
}
class UnitType(abc.ABCMeta):
"""
Meta-class for all Units
This metaclass is responsible for collecting meta-data about particular
units and exposing them in the special 'Meta' attribute of each class.
It also handles Meta inheritance so that SubUnit.Meta inherits from
Unit.Meta even if it was not specified directly.
"""
def __new__(mcls, name, bases, ns):
# mro = super().__new__(mcls, name, bases, ns).__mro__
base_meta_list = [
base.Meta for base in bases if hasattr(base, 'Meta')]
our_meta = ns.get('Meta')
if our_meta is not None and base_meta_list:
new_meta_ns = dict(our_meta.__dict__)
new_meta_ns['__doc__'] = """
Collection of meta-data about :class:`{}`
This class is partially automatically generated.
It always inherits the Meta class of the base unit type.
This class has (at most) three attributes:
`field_validators`:
A dictionary mapping from each field to a list of
:class:`IFieldValidator` objects that check that particular
field for correctness.
`fields`:
A :class:`SymbolDef` with a symbol for each field that
this unit defines. This does not include dynamically
created fields that are not a part of the unit itself.
`validator_cls`:
A :class:`UnitValidator` subclass that can be used to
check this unit for correctness
""".format(name)
new_meta_bases = tuple(base_meta_list)
# Merge custom field_validators with base unit validators
if 'field_validators' in our_meta.__dict__:
merged_validators = dict()
for base_meta in base_meta_list:
if hasattr(base_meta, 'field_validators'):
merged_validators.update(base_meta.field_validators)
merged_validators.update(our_meta.field_validators)
new_meta_ns['field_validators'] = merged_validators
# Merge fields with base unit fields
if 'fields' in our_meta.__dict__:
# Look at all the base Meta classes and collect each
# Meta.fields class as our (real) list of base classes.
assert our_meta.fields.__bases__ == (SymbolDef,)
merged_fields_bases = [
base_meta.fields
for base_meta in base_meta_list
if hasattr(base_meta, 'fields')]
# If there are no base classes then let's just inherit from the
# base SymbolDef class (note that we're actually ignoring any
# base classes on the our_meta.fields class, as it can only be
# SymbolDef; nothing else is supported or makes sense).
if not merged_fields_bases:
merged_fields_bases.append(SymbolDef)
# The list of base fields needs to be a tuple
merged_fields_bases = tuple(merged_fields_bases)
# Copy all of the Symbol objects out of the our_meta.fields
# class that we're re-defining.
merged_fields_ns = SymbolDefNs()
for sym_name in dir(our_meta.fields):
sym = getattr(our_meta.fields, sym_name)
if isinstance(sym, Symbol):
merged_fields_ns[sym_name] = sym
merged_fields_ns['__doc__'] = """
A symbol definition containing all fields used by :class:`{}`
This class is partially automatically generated. It always
inherits from the Meta.fields class of the base unit class.
""".format(name)
# Create a new class in place of the 'fields' defined in
# our_meta.fields.
fields = SymbolDefMeta(
'fields', merged_fields_bases, merged_fields_ns)
fields.__qualname__ = '{}.Meta.fields'.format(name)
new_meta_ns['fields'] = fields
# Ensure that Meta.name is explicitly defined
if 'name' not in our_meta.__dict__:
raise TypeError(_(
"Please define 'name' in {}.Meta"
).format(name))
ns['Meta'] = type('Meta', new_meta_bases, new_meta_ns)
ns['fields'] = ns['Meta'].fields
return super().__new__(mcls, name, bases, ns)
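The ``Meta``-merging performed by ``UnitType`` can be illustrated with a much smaller metaclass that merges a single dictionary attribute across the inheritance chain (hypothetical names, not plainbox code):

```python
class MergingMeta(type):
    """Merge each class's ``settings`` dict with those of its bases."""

    def __new__(mcls, name, bases, ns):
        merged = {}
        for base in bases:
            # Base classes contribute their settings first...
            merged.update(getattr(base, 'settings', {}))
        # ...and the class being defined overrides them.
        merged.update(ns.get('settings', {}))
        ns['settings'] = merged
        return super().__new__(mcls, name, bases, ns)


class Base(metaclass=MergingMeta):
    settings = {'a': 1}


class Child(Base):
    settings = {'b': 2}  # inherits 'a' from Base automatically
```

``UnitType`` applies the same override-the-bases pattern to ``Meta.field_validators`` and, with extra ``SymbolDef`` bookkeeping, to ``Meta.fields``.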
class Unit(UnitLegacyAPI, metaclass=UnitType):
"""
Units are representations of data loaded from RFC822 definitions
Units are used by plainbox to represent various important objects loaded
from the filesystem. All units have identical representation (RFC822
records) but each unit type has different semantics.
.. warning::
There is no metaclass to do it automatically yet so please be aware
that the Unit.Meta class (which is a collection of metadata, not a
meta-class) needs to be manually inherited in each subclass of the Unit
class.
"""
def __init__(self, data, raw_data=None, origin=None, provider=None,
parameters=None, field_offset_map=None, virtual=False):
"""
Initialize a new unit
:param data:
A dictionary of normalized data. This data is suitable for normal
application usage. It is not suitable for gettext lookups as the
original form is lost by the normalization process.
:param raw_data:
A dictionary of raw data (optional). Defaults to data. Values in
this dictionary are in their raw form, as they were loaded from a
unit file. This data is suitable for gettext lookups.
:param origin:
An (optional) Origin object. If omitted a fake origin object is
created. Normally the origin object should be obtained from the
RFC822Record object.
:param parameters:
An (optional) dictionary of parameters. Parameters allow for unit
properties to be altered while maintaining a single definition.
This is required to obtain translated summary and description
fields, while having a single translated base text and any
variation in the available parameters.
:param field_offset_map:
An optional dictionary with offsets (in line numbers) of each
field. Line numbers are relative to the value of origin.line_start
:param virtual:
An optional flag marking this unit as "virtual". It can be used
to annotate units synthesized by PlainBox itself so that certain
operations can treat them differently. It also helps with merging
non-virtual and virtual units.
"""
if raw_data is None:
raw_data = data
if origin is None:
origin = Origin.get_caller_origin()
if field_offset_map is None:
field_offset_map = {field: 0 for field in data}
self._data = data
self._raw_data = raw_data
self._origin = origin
self._field_offset_map = field_offset_map
self._provider = provider
self._checksum = None
self._parameters = parameters
self._virtual = virtual
@classmethod
def instantiate_template(cls, data, raw_data, origin, provider, parameters,
field_offset_map):
"""
Instantiate this unit from a template.
The point of this method is to have a fixed API, regardless of what the
API of a particular unit class ``__init__`` method actually looks like.
It is easier to standardize on a new method than to patch all of the
initializers, the code that uses them, and the tests, to get a uniform
initializer.
"""
# This assertion is a low-cost trick to ensure that we override this
# method in all of the subclasses to ensure that the initializer is
# called with correctly-ordered arguments.
assert cls is Unit, \
"{}.instantiate_template() not customized".format(cls.__name__)
return cls(data, raw_data, origin, provider, parameters,
field_offset_map)
def __eq__(self, other):
if not isinstance(other, Unit):
return False
return self.checksum == other.checksum
def __ne__(self, other):
if not isinstance(other, Unit):
return True
return self.checksum != other.checksum
def __hash__(self):
return hash(self.checksum)
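Equality and hashing above delegate to the unit checksum, so two units compare equal exactly when their content digests match. A sketch of how such a content-derived checksum can be computed from a unit's data dictionary (illustrative; the actual ``checksum`` property is defined outside this excerpt):

```python
import hashlib
import json


def compute_checksum(data):
    """Stable hex digest over a dict of unit fields (illustrative).

    Sorting the keys makes the serialization canonical, so field order
    in the source file does not affect the digest.
    """
    canonical = json.dumps(data, sort_keys=True)
    return hashlib.sha256(canonical.encode('utf-8')).hexdigest()
```

Because the digest depends only on content, it doubles as a stable identity for caching and de-duplication.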
@property
def unit(self):
"""
the value of the unit field
This property _may_ be overridden by certain subclasses but this
behavior is not generally recommended.
"""
return self.get_record_value('unit')
def tr_unit(self):
"""
Translated (optionally) value of the unit field (overridden)
The return value is always 'self.Meta.name' (translated)
"""
return _(self.Meta.name)
@property
def origin(self):
"""
The Origin object associated with this Unit
"""
return self._origin
@property
def field_offset_map(self):
"""
The field-to-line-number-offset mapping.
A dictionary mapping field name to offset (in lines) relative to the
origin where that field definition commences.
Note: the return value may be None
"""
return self._field_offset_map
@property
def provider(self):
"""
The provider object associated with this Unit
"""
return self._provider
@property
def parameters(self):
"""
The mapping of parameters supplied to this Unit
This may be either a dictionary or None.
.. seealso::
:meth:`is_parametric()`
"""
return self._parameters
@property
def virtual(self):
"""
Flag indicating if this unit is a virtual unit
Virtual units are created (synthesized) by PlainBox and don't exist
in any one specific file as normal units do.
"""
return self._virtual
@property
def is_parametric(self):
"""
If true, then this unit is parametric
Parametric units are instances of a template. To know which fields are
constant and which are parametrized call the support method
:meth:`get_accessed_parameters()`
"""
return self._parameters is not None
def get_accessed_parameters(self, *, force=False):
"""
Get a set of attributes accessed from each template attribute
:param force (keyword-only):
If specified then it will operate despite being invoked on a
non-parametric unit. This is only intended to be called by
TemplateUnit to inspect what the generated unit looks like in the
early validation code.
:returns:
A dictionary of sets with names of attributes accessed by each
template field. Note that for non-parametric Units the return value
is always a dictionary of empty sets, regardless of what the actual
parameter values look like.
This function computes a dictionary of sets mapping from each template
field (except from fields starting with the string 'template-') to a
set of all the resource object attributes accessed by that element.
"""
if force or self.is_parametric:
return {
key: get_accessed_parameters(value)
for key, value in self._data.items()
}
else:
return {key: frozenset() for key in self._data}
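The per-field parameter sets returned by ``get_accessed_parameters`` can be derived with ``string.Formatter.parse``, which walks a format string and yields the name referenced by each replacement field. A sketch of that extraction for a single value:

```python
import string


def accessed_parameters(value):
    """Set of {placeholder} names referenced by a format string."""
    return frozenset(
        field_name
        # parse() yields (literal_text, field_name, format_spec, conversion)
        for _, field_name, _, _ in string.Formatter().parse(value)
        if field_name is not None)
```

Mapping this over ``self._data`` produces exactly the dictionary-of-sets shape described in the docstring above.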
@classmethod
def from_rfc822_record(cls, record, provider=None):
"""
Create a new Unit from an RFC822 record. The resulting instance may not be
valid but will always be created.
:param record:
An RFC822Record object
:returns:
A new Unit
"""
# Strip the trailing newlines from all the raw values coming from the
# RFC822 parser. We don't need them and they don't match gettext keys
# (xgettext strips out those newlines)
changed_raw_data = {
key: value.rstrip('\n')
for key, value in record.raw_data.items()
}
return cls(record.data, origin=record.origin,
raw_data=changed_raw_data, provider=provider,
field_offset_map=record.field_offset_map)
def get_record_value(self, name, default=None):
"""
Obtain the normalized value of the specified record attribute
:param name:
Name of the field to access
:param default:
Default value, used if the field is not defined in the unit
:returns:
The value of the field, possibly with parameters inserted, or the
default value
:raises:
KeyError if the field is parametrized but parameters are incorrect
"""
value = self._data.get('_{}'.format(name))
if value is None:
value = self._data.get('{}'.format(name), default)
if value is not None and self.is_parametric:
value = string.Formatter().vformat(value, (), self.parameters)
return value
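The lookup order implemented above, translatable ``_``-prefixed key first, then the plain key, then the default, with parameter substitution applied last, can be condensed into a standalone sketch:

```python
import string


def record_value(data, name, parameters=None, default=None):
    """Mimic the _-prefixed-then-plain lookup with optional parameters."""
    value = data.get('_{}'.format(name), data.get(name, default))
    if value is not None and parameters is not None:
        # Parametric units get {placeholder} substitution on the way out.
        value = string.Formatter().vformat(value, (), parameters)
    return value
```

Note that substitution raises ``KeyError`` when a placeholder is missing from the parameters, matching the behaviour documented for ``get_record_value``.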
def get_raw_record_value(self, name, default=None):
"""
Obtain the raw value of the specified record attribute
:param name:
Name of the field to access
:param default:
Default value, used if the field is not defined in the unit
:returns:
The raw value of the field, possibly with parameters inserted, or
the default value
:raises:
KeyError if the field is parametrized but parameters are incorrect
The raw value may have additional whitespace or indentation around the
text. It will also not have the magic RFC822 dots removed. In general
the text will be just as it was parsed from the unit file.
"""
value = self._raw_data.get('_{}'.format(name))
if value is None:
value = self._raw_data.get('{}'.format(name), default)
if value is not None and self.is_parametric:
value = string.Formatter().vformat(value, (), self.parameters)
return value
def get_translated_record_value(self, name, default=None):
"""
Obtain the translated value of the specified record attribute
:param name:
Name of the field/attribute to access
:param default:
Default value, used if the field is not defined in the unit
:returns:
The (perhaps) translated value of the field with (perhaps)
parameters inserted, or the default value. The idea is to return
the best value we can but there are no guarantees on returning a
translated value.
:raises:
KeyError if the field is parametrized but parameters are incorrect
This may imply that the unit is invalid but it may also imply that
translations are broken. A malicious translation can break
formatting and prevent an otherwise valid unit from working.
"""
# Try to access the marked-for-translation record
msgid = self._raw_data.get('_{}'.format(name))
if msgid is not None:
# We now have a translatable message that we can look up in the
# provider translation database.
msgstr = self.get_translated_data(msgid)
assert msgstr is not None
# We now have the translation _or_ the untranslated msgid again.
# We can now normalize it so that it looks nice:
msgstr = normalize_rfc822_value(msgstr)
# We can now feed it through the template system to get parameters
# inserted.
if self.is_parametric:
# This should not fail if the unit validates okay but it still
# might fail due to broken translations. Perhaps we should
# handle exceptions here and hint that this might be the cause
# of the problem?
msgstr = string.Formatter().vformat(
msgstr, (), self.parameters)
return msgstr
# If there was no marked-for-translation value then let's just return
# the normal (untranslatable) version.
msgstr = self._data.get(name)
if msgstr is not None:
# NOTE: there is no need to normalize anything as we already got
# the non-raw value here.
if self.is_parametric:
msgstr = string.Formatter().vformat(
msgstr, (), self.parameters)
return msgstr
# If we have nothing better let's just return the default value
return default
def is_translatable_field(self, name):
"""
Check if a field is marked as translatable
:param name:
Name of the field to check
:returns:
True if the field is marked as translatable, False otherwise
"""
return '_{}'.format(name) in self._data
def qualify_id(self, some_id):
"""
Transform some unit identifier to be fully qualified
:param some_id:
A potentially unqualified unit identifier
:returns:
A fully qualified unit identifier
This method uses the namespace of the associated provider to transform
unqualified unit identifiers to qualified identifiers. Qualified
identifiers are left alone.
"""
if "::" not in some_id and self.provider is not None:
return "{}::{}".format(self.provider.namespace, some_id)
else:
return some_id
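The qualification rule can be sketched as a minimal helper, assuming a plain string namespace in place of the provider object:

```python
def qualify_id(some_id, namespace):
    # An identifier containing the "::" separator is already fully
    # qualified; anything else gets the namespace prepended.
    if "::" not in some_id and namespace is not None:
        return "{}::{}".format(namespace, some_id)
    return some_id
```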
@property
def checksum(self):
"""
Checksum of the unit definition.
This property can be used to compute the checksum of the canonical form
of the unit definition. The canonical form is the UTF-8 encoded JSON
serialization of the data that makes up the full definition of the unit
(all keys and values). The JSON serialization uses no indent and
minimal separators.
The checksum is defined as the SHA256 hash of the canonical form.
"""
if self._checksum is None:
self._checksum = self._compute_checksum()
return self._checksum
def _compute_checksum(self):
"""
Compute the value for :attr:`checksum`.
"""
# Ideally we'd use simplejson.dumps() with sorted keys to get
# predictable serialization but that's another dependency. To get
# something simple that is equally reliable, just sort all the keys
# manually and ask the standard json module to serialize that.
sorted_data = collections.OrderedDict(sorted(self._data.items()))
# Define a helper function to convert symbols to strings for the
# purpose of computing the checksum's canonical representation.
def default_fn(obj):
if isinstance(obj, Symbol):
return str(obj)
raise TypeError
# Compute the canonical form, which is arbitrarily defined as sorted
# json text with no indentation and minimal separators.
canonical_form = json.dumps(
sorted_data, indent=None, separators=(',', ':'),
default=default_fn)
text = canonical_form.encode('UTF-8')
# Parametric units also get a copy of their parameters stored as an
# additional piece of data
if self.is_parametric:
sorted_parameters = collections.OrderedDict(
sorted(self.parameters.items()))
canonical_parameters = json.dumps(
sorted_parameters, indent=None, separators=(',', ':'),
default=default_fn)
text += canonical_parameters.encode('UTF-8')
# Compute the sha256 hash of the UTF-8 encoding of the canonical form
# and return the hex digest as the checksum that can be displayed.
return hashlib.sha256(text).hexdigest()
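The canonicalization performed by `_compute_checksum()` can be illustrated with a self-contained helper; `compute_checksum` is a hypothetical name and it takes plain dicts instead of a unit instance:

```python
import collections
import hashlib
import json

def compute_checksum(data, parameters=None):
    # Canonical form: keys sorted, compact JSON (no indent, minimal
    # separators), encoded as UTF-8.
    text = json.dumps(
        collections.OrderedDict(sorted(data.items())),
        indent=None, separators=(',', ':')).encode('UTF-8')
    # Parametric units mix their (sorted) parameters into the hash too,
    # so two instantiations of one template get distinct checksums.
    if parameters is not None:
        text += json.dumps(
            collections.OrderedDict(sorted(parameters.items())),
            indent=None, separators=(',', ':')).encode('UTF-8')
    return hashlib.sha256(text).hexdigest()
```

Because the keys are sorted before serialization, the checksum is independent of field order in the unit definition file.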
def get_translated_data(self, msgid):
"""
Get a localized piece of data
:param msgid:
data to translate
:returns:
translated data obtained from the provider if this unit has one,
msgid itself otherwise.
"""
if msgid and self._provider:
return self._provider.get_translated_data(msgid)
else:
return msgid
def get_normalized_translated_data(self, msgid):
"""
Get a localized piece of data and filter it with RFC822 parser
normalization
:param msgid:
data to translate
:returns:
translated and normalized data obtained from the provider if this
unit has one, msgid itself otherwise.
"""
msgstr = self.get_translated_data(msgid)
if msgstr is not None:
return normalize_rfc822_value(msgstr)
else:
return msgid
def check(self, *, context=None, live=False):
"""
Check this unit for correctness
:param context:
A keyword-only argument, if specified it should be a
:class:`UnitValidationContext` instance used to validate a number
of units together.
:param live:
A keyword-only argument, if True the return value is a generator
that yields subsequent issues. Otherwise (default) the return value
is buffered and returned as a list. Checking everything takes
considerable time, for responsiveness, consider using live=True.
:returns:
A list of issues or a generator yielding subsequent issues. Each
issue is a :class:`plainbox.impl.validation.Issue`.
"""
if live:
return self._check_gen(context)
else:
return list(self._check_gen(context))
def _check_gen(self, context):
validator = self.Meta.validator_cls()
for issue in validator.check(self):
yield issue
if context is not None:
for issue in validator.check_in_context(self, context):
yield issue
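The buffered-versus-live split used by `check()` and `_check_gen()` is a small generic idiom; a sketch with hypothetical validator callables that each return an iterable of issues:

```python
def run_checks(validators, *, live=False):
    # With live=True the caller gets a lazy generator and can report
    # issues as they are produced; otherwise everything is buffered
    # into a list before returning.
    def _gen():
        for validate in validators:
            yield from validate()
    return _gen() if live else list(_gen())
```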
class Meta:
"""
Class containing additional meta-data about this unit.
:attr name:
Name of this unit as it can appear in unit definition files
:attr fields:
A :class:`plainbox.impl.symbol.SymbolDef` with a symbol for each of
the fields used by this unit.
:attr validator_cls:
A custom validator class specific to this unit
:attr field_validators:
A dictionary mapping each field to a list of field validators
"""
name = 'unit'
class fields(SymbolDef):
"""
Unit defines only one field, the 'unit'
"""
unit = 'unit'
validator_cls = UnitValidator
field_validators = {
fields.unit: [
# We don't want anyone marking unit type up for translation
UntranslatableFieldValidator,
# We want each instantiated template to define the same unit type
TemplateInvariantFieldValidator,
# We want to gently advise everyone to mark all units with
# an explicit unit type so that we can disable the default 'job'
PresentFieldValidator(
severity=Severity.advice,
message=_("unit should explicitly define its type")),
]
}
# --- plainbox-0.25/plainbox/impl/unit/test_init.py ---
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Sylvain Pineau
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.unit.test_init
============================
Test definitions for plainbox.impl.unit (package init file)
"""
from unittest import TestCase
from plainbox.impl.unit import get_accessed_parameters
class FunctionTests(TestCase):
def test_get_accessed_parameters(self):
self.assertEqual(
get_accessed_parameters("some text"), frozenset())
self.assertEqual(
get_accessed_parameters("some {parametric} text"),
frozenset(['parametric']))
self.assertEqual(
get_accessed_parameters("some {} text"),
frozenset(['']))
self.assertEqual(
get_accessed_parameters("some {1} {2} {3} text"),
frozenset(['1', '2', '3']))
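The behaviour pinned down by these assertions suggests a minimal implementation built on `string.Formatter.parse()`; this is a sketch consistent with the tests above, not necessarily the actual plainbox code:

```python
import string

def get_accessed_parameters(text):
    # string.Formatter().parse() yields (literal, field, spec,
    # conversion) tuples; the field name is None for purely literal
    # chunks, '' for '{}' and '1' for '{1}'.
    return frozenset(
        field for _, field, _, _ in string.Formatter().parse(text)
        if field is not None)
```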
# --- plainbox-0.25/plainbox/impl/unit/test_job.py ---
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Sylvain Pineau
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.unit.test_job
===========================
Test definitions for plainbox.impl.unit.job module
"""
from unittest import TestCase
import warnings
from plainbox.impl.providers.v1 import Provider1
from plainbox.impl.secure.origin import FileTextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.rfc822 import RFC822Record
from plainbox.impl.unit.job import JobDefinition
from plainbox.impl.unit.job import propertywithsymbols
from plainbox.impl.unit.test_unit_with_id import UnitWithIdFieldValidationTests
from plainbox.impl.unit.unit_with_id import UnitWithId
from plainbox.impl.unit.validators import UnitValidationContext
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
from plainbox.impl.validation import ValidationError
from plainbox.testing_utils.testcases import TestCaseWithParameters
from plainbox.vendor import mock
class DecoratorTests(TestCase):
def setUp(self):
self.symbols = mock.Mock(name='symbols')
class C:
@propertywithsymbols(symbols=self.symbols)
def prop(self):
"""a docstring"""
return 'prop'
self.C = C
def test_propertywithsymbols__fget_works(self):
self.assertEqual(self.C().prop, 'prop')
def test_propertywithsymbols__symbols_works(self):
self.assertIs(self.C.prop.symbols, self.symbols)
def test_propertywithsymbols__inherits_doc_from_fget(self):
self.assertEqual(self.C.prop.__doc__, 'a docstring')
def test_propertywithsymbols__honors_doc_argument(self):
class C:
@propertywithsymbols(doc='different', symbols=self.symbols)
def prop(self):
"""a docstring"""
return 'prop'
self.assertEqual(C.prop.__doc__, 'different')
class TestJobDefinitionDefinition(TestCase):
def test_get_raw_record_value(self):
"""
Ensure that get_raw_record_value() works okay
"""
job1 = JobDefinition({'key': 'value'}, raw_data={'key': 'raw-value'})
job2 = JobDefinition({'_key': 'value'}, raw_data={'_key': 'raw-value'})
self.assertEqual(job1.get_raw_record_value('key'), 'raw-value')
self.assertEqual(job2.get_raw_record_value('key'), 'raw-value')
def test_get_record_value(self):
"""
Ensure that get_record_value() works okay
"""
job1 = JobDefinition({'key': 'value'}, raw_data={'key': 'raw-value'})
job2 = JobDefinition({'_key': 'value'}, raw_data={'_key': 'raw-value'})
self.assertEqual(job1.get_record_value('key'), 'value')
self.assertEqual(job2.get_record_value('key'), 'value')
def test_properties(self):
"""
Ensure that properties are looked up in the non-raw copy of the data
"""
job = JobDefinition({
'plugin': 'plugin-value',
'command': 'command-value',
'environ': 'environ-value',
'user': 'user-value',
'shell': 'shell-value',
'flags': 'flags-value',
'category_id': 'category_id-value',
}, raw_data={
'plugin': 'plugin-raw',
'command': 'command-raw',
'environ': 'environ-raw',
'user': 'user-raw',
'shell': 'shell-raw',
'flags': 'flags-raw',
'category_id': 'category_id-raw',
})
self.assertEqual(job.plugin, "plugin-value")
self.assertEqual(job.command, "command-value")
self.assertEqual(job.environ, "environ-value")
self.assertEqual(job.user, "user-value")
self.assertEqual(job.shell, "shell-value")
self.assertEqual(job.flags, "flags-value")
self.assertEqual(job.category_id, "category_id-value")
def test_qml_file_property_none_when_missing_provider(self):
"""
Ensure that qml_file property is set to None when provider is not set.
"""
job = JobDefinition({
'qml_file': 'qml_file-value'
}, raw_data={
'qml_file': 'qml_file-raw'
})
self.assertEqual(job.qml_file, None)
def test_qml_file_property(self):
"""
Ensure that qml_file property is properly constructed
"""
mock_provider = mock.Mock()
type(mock_provider).data_dir = mock.PropertyMock(return_value='data')
job = JobDefinition({
'qml_file': 'qml_file-value'
}, raw_data={
'qml_file': 'qml_file-raw'
}, provider=mock_provider)
with mock.patch('os.path.join', return_value='path') as mock_join:
self.assertEqual(job.qml_file, 'path')
mock_join.assert_called_with('data', 'qml_file-value')
def test_properties_default_values(self):
"""
Ensure that properties fall back to their expected default values
"""
job = JobDefinition({})
self.assertEqual(job.plugin, None)
self.assertEqual(job.command, None)
self.assertEqual(job.environ, None)
self.assertEqual(job.user, None)
self.assertEqual(job.shell, 'bash')
self.assertEqual(job.flags, None)
self.assertEqual(job.category_id,
'2013.com.canonical.plainbox::uncategorised')
self.assertEqual(job.qml_file, None)
def test_checksum_smoke(self):
job1 = JobDefinition({'plugin': 'plugin', 'user': 'root'})
identical_to_job1 = JobDefinition({'plugin': 'plugin', 'user': 'root'})
# Two distinct but identical jobs have the same checksum
self.assertEqual(job1.checksum, identical_to_job1.checksum)
job2 = JobDefinition({'plugin': 'plugin', 'user': 'anonymous'})
# Two jobs with different definitions have different checksum
self.assertNotEqual(job1.checksum, job2.checksum)
# The checksum is stable and does not change over time
self.assertEqual(
job1.checksum,
"c47cc3719061e4df0010d061e6f20d3d046071fd467d02d093a03068d2f33400")
def test_get_environ_settings(self):
job1 = JobDefinition({})
self.assertEqual(job1.get_environ_settings(), set())
job2 = JobDefinition({'environ': 'a b c'})
self.assertEqual(job2.get_environ_settings(), set(['a', 'b', 'c']))
job3 = JobDefinition({'environ': 'a,b,c'})
self.assertEqual(job3.get_environ_settings(), set(['a', 'b', 'c']))
def test_get_flag_set(self):
job1 = JobDefinition({})
self.assertEqual(job1.get_flag_set(), set())
job2 = JobDefinition({'flags': 'a b c'})
self.assertEqual(job2.get_flag_set(), set(['a', 'b', 'c']))
job3 = JobDefinition({'flags': 'a,b,c'})
self.assertEqual(job3.get_flag_set(), set(['a', 'b', 'c']))
class JobDefinitionParsingTests(TestCaseWithParameters):
parameter_names = ('glue',)
parameter_values = (
('commas',),
('spaces',),
('tabs',),
('newlines',),
('spaces_and_commas',),
('multiple_spaces',),
('multiple_commas',)
)
parameters_keymap = {
'commas': ',',
'spaces': ' ',
'tabs': '\t',
'newlines': '\n',
'spaces_and_commas': ', ',
'multiple_spaces': ' ',
'multiple_commas': ',,,,'
}
def test_environ_parsing_with_various_separators(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin',
'environ': self.parameters_keymap[
self.parameters.glue].join(['foo', 'bar', 'froz'])})
expected = {'foo', 'bar', 'froz'}
observed = job.get_environ_settings()
self.assertEqual(expected, observed)
def test_environ_parsing_empty(self):
job = JobDefinition({'plugin': 'plugin'})
expected = set()
observed = job.get_environ_settings()
self.assertEqual(expected, observed)
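The separator handling exercised by the parameterized tests above (commas, whitespace, and arbitrary mixes and repetitions of both) can be reproduced with a single regular expression; `split_values` is a hypothetical helper illustrating the rule, not the JobDefinition API:

```python
import re

def split_values(value):
    # Runs of whitespace and/or commas all act as one separator,
    # so 'a b', 'a,b', 'a, b' and 'a,,,,b' parse identically.
    return set(re.split(r'[\s,]+', value.strip())) if value else set()
```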
class JobDefinitionFieldValidationTests(UnitWithIdFieldValidationTests):
unit_cls = JobDefinition
def test_unit__present(self):
# NOTE: this is overriding an identical method from the base class to
# disable this test.
pass
def test_name__untranslatable(self):
issue_list = self.unit_cls({
'_name': 'name'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.name,
Problem.unexpected_i18n, Severity.warning)
def test_name__template_variant(self):
issue_list = self.unit_cls({
'name': 'name'
}, parameters={}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.name,
Problem.constant, Severity.error)
def test_name__deprecated(self):
issue_list = self.unit_cls({
'name': 'name'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.name,
Problem.deprecated, Severity.advice)
def test_summary__translatable(self):
issue_list = self.unit_cls({
'summary': 'summary'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.summary,
Problem.expected_i18n, Severity.warning)
def test_summary__template_variant(self):
issue_list = self.unit_cls({
'summary': 'summary'
}, provider=self.provider, parameters={}).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.summary,
Problem.constant, Severity.error)
def test_summary__present(self):
issue_list = self.unit_cls({
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.summary,
Problem.missing, Severity.advice)
def test_summary__one_line(self):
issue_list = self.unit_cls({
'summary': 'line1\nline2'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.summary,
Problem.wrong, Severity.warning)
def test_summary__short_line(self):
issue_list = self.unit_cls({
'summary': 'x' * 81
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.summary,
Problem.wrong, Severity.warning)
def test_plugin__untranslatable(self):
issue_list = self.unit_cls({
'_plugin': 'plugin'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.plugin,
Problem.unexpected_i18n, Severity.warning)
def test_plugin__template_invariant(self):
issue_list = self.unit_cls({
'plugin': '{attr}'
}, parameters={'attr': 'plugin'}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.plugin,
Problem.variable, Severity.error)
def test_plugin__correct(self):
issue_list = self.unit_cls({
'plugin': 'foo'
}, provider=self.provider).check()
message = ("field 'plugin', valid values are: attachment, local,"
" manual, qml, resource, shell, user-interact,"
" user-interact-verify, user-verify")
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.plugin,
Problem.wrong, Severity.error, message)
def test_plugin__not_local(self):
issue_list = self.unit_cls({
'plugin': 'local'
}, provider=self.provider).check()
message = ("field 'plugin', please migrate to job templates, "
"see plainbox-template-unit(7) for details")
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.plugin,
Problem.deprecated, Severity.advice, message)
def test_plugin__not_user_verify(self):
issue_list = self.unit_cls({
'plugin': 'user-verify'
}, provider=self.provider).check()
message = "field 'plugin', please migrate to user-interact-verify"
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.plugin,
Problem.deprecated, Severity.advice, message)
def test_command__untranslatable(self):
issue_list = self.unit_cls({
'_command': 'command'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.command,
Problem.unexpected_i18n, Severity.warning)
def test_command__present__on_non_manual(self):
for plugin in self.unit_cls.plugin.symbols.get_all_symbols():
if plugin in ('manual', 'qml'):
continue
# TODO: switch to subTest() once we depend on python3.4
issue_list = self.unit_cls({
'plugin': plugin,
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.command,
Problem.missing, Severity.error)
def test_command__useless__on_manual(self):
issue_list = self.unit_cls({
'plugin': 'manual',
'command': 'command'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.command,
Problem.useless, Severity.warning)
def test_command__useless__on_qml(self):
issue_list = self.unit_cls({
'plugin': 'qml',
'command': 'command'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.command,
Problem.useless, Severity.warning)
def test_command__not_using_CHECKBOX_SHARE(self):
issue_list = self.unit_cls({
'command': '$CHECKBOX_SHARE'
}, provider=self.provider).check()
message = ("field 'command', please use PLAINBOX_PROVIDER_DATA"
" instead of CHECKBOX_SHARE")
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.command,
Problem.deprecated, Severity.advice, message)
def test_command__not_using_CHECKBOX_DATA(self):
issue_list = self.unit_cls({
'command': '$CHECKBOX_DATA'
}, provider=self.provider).check()
message = ("field 'command', please use PLAINBOX_SESSION_SHARE"
" instead of CHECKBOX_DATA")
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.command,
Problem.deprecated, Severity.advice, message)
def test_command__has_valid_syntax(self):
issue_list = self.unit_cls({
'command': """# Echo a few numbers
for i in 1 2 "3; do
echo $i
done"""
}, provider=self.provider).check()
message = ("field 'command', No closing quotation, near '2'")
issue = self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.command,
Problem.syntax_error, Severity.error, message)
# Make sure the offset was good too. Since offset is dependent on the
# place where we instantiate the unit in the self.unit_cls({}) line
# above let's just ensure that the reported error is at a +3 offset
# from that line. Note, the offset is a bit confusing since the error
# is on line reading 'for i in 1 2 "3; do' but shlex will actually only
# report it at the end of the input which is the line with 'done'
self.assertEqual(
issue.origin.line_start,
issue.unit.origin.line_start + 3)
def test_description__translatable(self):
issue_list = self.unit_cls({
'description': 'description'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.description,
Problem.expected_i18n, Severity.warning)
def test_description__template_variant(self):
issue_list = self.unit_cls({
'description': 'description'
}, parameters={}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.description,
Problem.constant, Severity.error)
def test_description__present__on_non_manual(self):
for plugin in self.unit_cls.plugin.symbols.get_all_symbols():
if plugin == 'manual':
continue
message = ("field 'description', all jobs should have a"
" description field, or a set of purpose, steps and"
" verification fields")
# TODO: switch to subTest() once we depend on python3.4
issue_list = self.unit_cls({
'plugin': plugin
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.description,
Problem.missing, Severity.advice, message)
def test_description__present__on_manual(self):
message = ("field 'description', manual jobs must have a description"
" field, or a set of purpose, steps, and verification"
" fields")
issue_list = self.unit_cls({
'plugin': 'manual'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.description,
Problem.missing, Severity.error, message)
def test_user__untranslatable(self):
issue_list = self.unit_cls({
'_user': 'user'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.user,
Problem.unexpected_i18n, Severity.warning)
def test_user__template_invariant(self):
issue_list = self.unit_cls({
'user': '{attr}'
}, parameters={'attr': 'user'}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.user,
Problem.variable, Severity.error)
def test_user__defined_but_not_root(self):
message = "field 'user', user can only be 'root'"
issue_list = self.unit_cls({
'user': 'user'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.user,
Problem.wrong, Severity.error, message)
def test_user__useless_without_command(self):
message = "field 'user', user without a command makes no sense"
issue_list = self.unit_cls({
'user': 'user'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.user,
Problem.useless, Severity.warning, message)
def test_environ__untranslatable(self):
issue_list = self.unit_cls({'_environ': 'environ'}).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.environ,
Problem.unexpected_i18n, Severity.warning)
def test_environ__template_invariant(self):
issue_list = self.unit_cls({
'environ': '{attr}'
}, parameters={'attr': 'environ'}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.environ,
Problem.variable, Severity.error)
def test_environ__useless_without_command(self):
message = "field 'environ', environ without a command makes no sense"
issue_list = self.unit_cls({
'environ': 'environ'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.environ,
Problem.useless, Severity.warning, message)
def test_estimated_duration__untranslatable(self):
issue_list = self.unit_cls({
'_estimated_duration': 'estimated_duration'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.estimated_duration,
Problem.unexpected_i18n, Severity.warning)
def test_estimated_duration__template_invariant(self):
issue_list = self.unit_cls({
'estimated_duration': '{attr}'
}, parameters={'attr': 'value'}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.estimated_duration,
Problem.variable, Severity.error)
def test_estimated_duration__present(self):
issue_list = self.unit_cls({
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.estimated_duration,
Problem.missing, Severity.advice)
def test_estimated_duration__positive(self):
issue_list = self.unit_cls({
'estimated_duration': '0'
}, provider=self.provider).check()
message = "field 'estimated_duration', value must be a positive number"
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.estimated_duration,
Problem.wrong, Severity.error, message)
def test_depends__untranslatable(self):
issue_list = self.unit_cls({
'_depends': 'depends'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.depends,
Problem.unexpected_i18n, Severity.warning)
def test_depends__refers_to_other_units(self):
unit = self.unit_cls({
'depends': 'some-unit'
}, provider=self.provider)
message = "field 'depends', unit 'ns::some-unit' is not available"
self.provider.unit_list = [unit]
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.depends,
Problem.bad_reference, Severity.error, message)
def test_depends__refers_to_other_jobs(self):
other_unit = UnitWithId({
'id': 'some-unit'
}, provider=self.provider)
unit = self.unit_cls({
'depends': 'some-unit'
}, provider=self.provider)
message = "field 'depends', the referenced unit is not a job"
self.provider.unit_list = [unit, other_unit]
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.depends,
Problem.bad_reference, Severity.error, message)
def test_after__untranslatable(self):
issue_list = self.unit_cls({
'_after': 'after'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.after,
Problem.unexpected_i18n, Severity.warning)
def test_after__refers_to_other_units(self):
unit = self.unit_cls({
'after': 'some-unit'
}, provider=self.provider)
message = "field 'after', unit 'ns::some-unit' is not available"
self.provider.unit_list = [unit]
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.after,
Problem.bad_reference, Severity.error, message)
def test_after__refers_to_other_jobs(self):
other_unit = UnitWithId({
'id': 'some-unit'
}, provider=self.provider)
unit = self.unit_cls({
'after': 'some-unit'
}, provider=self.provider)
message = "field 'after', the referenced unit is not a job"
self.provider.unit_list = [unit, other_unit]
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.after,
Problem.bad_reference, Severity.error, message)
def test_requires__untranslatable(self):
issue_list = self.unit_cls({
'_requires': 'requires'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.requires,
Problem.unexpected_i18n, Severity.warning)
def test_requires__refers_to_other_units(self):
unit = self.unit_cls({
'requires': 'some_unit.attr == "value"'
}, provider=self.provider)
message = "field 'requires', unit 'ns::some_unit' is not available"
self.provider.unit_list = [unit]
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.requires,
Problem.bad_reference, Severity.error, message)
def test_requires__refers_to_other_jobs(self):
other_unit = UnitWithId({
'id': 'some_unit'
}, provider=self.provider)
unit = self.unit_cls({
'requires': 'some_unit.attr == "value"'
}, provider=self.provider)
message = "field 'requires', the referenced unit is not a job"
self.provider.unit_list = [unit, other_unit]
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.requires,
Problem.bad_reference, Severity.error, message)
def test_requires__refers_to_other_resource_jobs(self):
other_unit = JobDefinition({
'id': 'some_unit', 'plugin': 'shell'
}, provider=self.provider)
unit = self.unit_cls({
'requires': 'some_unit.attr == "value"'
}, provider=self.provider)
message = "field 'requires', the referenced job is not a resource job"
self.provider.unit_list = [unit, other_unit]
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.requires,
Problem.bad_reference, Severity.error, message)
def test_shell__untranslatable(self):
issue_list = self.unit_cls({
'_shell': 'shell'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.shell,
Problem.unexpected_i18n, Severity.warning)
def test_shell__template_invariant(self):
issue_list = self.unit_cls({
'shell': '{attr}'
}, parameters={'attr': 'shell'}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.shell,
Problem.variable, Severity.error)
def test_shell__defined_but_invalid(self):
message = "field 'shell', only /bin/sh and /bin/bash are allowed"
issue_list = self.unit_cls({'shell': 'shell'},).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.shell,
Problem.wrong, Severity.error, message)
def test_category_id__untranslatable(self):
issue_list = self.unit_cls({
'_category_id': 'category_id'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.category_id,
Problem.unexpected_i18n, Severity.warning)
def test_category_id__template_invariant(self):
issue_list = self.unit_cls({
'category_id': '{attr}'
}, parameters={'attr': 'category_id'}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.category_id,
Problem.variable, Severity.error)
def test_category_id__refers_to_other_units(self):
unit = self.unit_cls({
'category_id': 'some-unit'
}, provider=self.provider)
message = "field 'category_id', unit 'ns::some-unit' is not available"
self.provider.unit_list = [unit]
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.category_id,
Problem.bad_reference, Severity.error, message)
def test_category_id__refers_to_other_jobs(self):
other_unit = UnitWithId({
'id': 'some-unit'
}, provider=self.provider)
unit = self.unit_cls({
'category_id': 'some-unit'
}, provider=self.provider)
message = "field 'category_id', the referenced unit is not a category"
self.provider.unit_list = [unit, other_unit]
context = UnitValidationContext([self.provider])
issue_list = unit.check(context=context)
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.category_id,
Problem.bad_reference, Severity.error, message)
def test_flags__preserve_locale_is_set(self):
message = ("field 'flags', please ensure that the command supports"
" non-C locale then set the preserve-locale flag")
issue_list = self.unit_cls({
'command': 'command'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.flags,
Problem.expected_i18n, Severity.advice, message)
def test_flags__useless_explicit_fail_on_shell_jobs(self):
message = ("field 'flags', explicit-fail makes no sense for job which "
"outcome is automatically determined.")
issue_list = self.unit_cls({
'plugin': 'shell',
'flags': 'explicit-fail'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.flags,
Problem.useless, Severity.advice, message)
class JobDefinitionValidatorTests(TestCase):
def setUp(self):
warnings.filterwarnings(
'ignore', 'validate is deprecated since version 0.11')
def tearDown(self):
warnings.resetwarnings()
def test_validate_checks_for_deprecated_name(self):
"""
verify that validate() checks if jobs use the deprecated 'name'
field.
"""
job = JobDefinition({
'name': 'name'
})
with self.assertRaises(ValidationError) as boom:
job.validate(deprecated=True)
self.assertEqual(boom.exception.field, JobDefinition.fields.name)
self.assertEqual(boom.exception.problem, Problem.deprecated)
def test_validate_checks_for_missing_id(self):
"""
verify that validate() checks if jobs have a value for the 'id'
field.
"""
job = JobDefinition({})
with self.assertRaises(ValidationError) as boom:
job.validate()
self.assertEqual(boom.exception.field, JobDefinition.fields.id)
self.assertEqual(boom.exception.problem, Problem.missing)
def test_validate_checks_for_missing_plugin(self):
"""
verify that validate() checks if jobs have a value for the 'plugin'
field.
"""
job = JobDefinition({
'id': 'id'
})
with self.assertRaises(ValidationError) as boom:
job.validate()
self.assertEqual(boom.exception.field, JobDefinition.fields.plugin)
self.assertEqual(boom.exception.problem, Problem.missing)
def test_validate_checks_for_unknown_plugins(self):
"""
verify that validate() checks if jobs have a known value for the
'plugin' field.
"""
job = JobDefinition({
'id': 'id',
'plugin': 'dummy'
})
with self.assertRaises(ValidationError) as boom:
job.validate()
self.assertEqual(boom.exception.field, JobDefinition.fields.plugin)
self.assertEqual(boom.exception.problem, Problem.wrong)
def test_validate_checks_for_useless_user(self):
"""
verify that validate() checks for jobs that have the 'user' field but
don't have the 'command' field.
"""
job = JobDefinition({
'id': 'id',
'plugin': 'shell',
'user': 'root'
})
with self.assertRaises(ValidationError) as boom:
job.validate(strict=True)
self.assertEqual(boom.exception.field, JobDefinition.fields.user)
self.assertEqual(boom.exception.problem, Problem.useless)
def test_validate_checks_for_useless_environ(self):
"""
verify that validate() checks for jobs that have the 'environ' field
but don't have the 'command' field.
"""
job = JobDefinition({
'id': 'id',
'plugin': 'shell',
'environ': 'VAR_NAME'
})
with self.assertRaises(ValidationError) as boom:
job.validate(strict=True)
self.assertEqual(boom.exception.field, JobDefinition.fields.environ)
self.assertEqual(boom.exception.problem, Problem.useless)
def test_validate_checks_for_description_on_manual_jobs(self):
"""
verify that validate() checks for manual jobs that don't have a value
for the 'description' field.
"""
job = JobDefinition({
'id': 'id',
'plugin': 'manual',
})
with self.assertRaises(ValidationError) as boom:
job.validate()
self.assertEqual(boom.exception.field,
JobDefinition.fields.description)
self.assertEqual(boom.exception.problem, Problem.missing)
def test_validate_checks_for_command_on_manual_jobs(self):
"""
verify that validate() checks for manual jobs that have a value for the
'command' field.
"""
job = JobDefinition({
'id': 'id',
'plugin': 'manual',
'description': 'Runs some test',
'command': 'run_some_test'
})
with self.assertRaises(ValidationError) as boom:
job.validate(strict=True)
self.assertEqual(boom.exception.field, JobDefinition.fields.command)
self.assertEqual(boom.exception.problem, Problem.useless)
class JobDefinitionValidatorTests2(TestCaseWithParameters):
"""
Continuation of unit tests for JobDefinition.validate().
Moved to a separate class because of limitations of TestCaseWithParameters
which operates on the whole class.
"""
parameter_names = ('plugin',)
parameter_values = (
('shell',), ('local',), ('resource',), ('attachment',),
('user-verify',), ('user-interact',),)
def setUp(self):
warnings.filterwarnings(
'ignore', 'validate is deprecated since version 0.11')
def tearDown(self):
warnings.resetwarnings()
def test_validate_checks_for_missing_command(self):
"""
verify that validate() checks if jobs have a value for the 'command'
field.
"""
job = JobDefinition({
'id': 'id',
'plugin': self.parameters.plugin
})
with self.assertRaises(ValidationError) as boom:
job.validate()
self.assertEqual(boom.exception.field, JobDefinition.fields.command)
self.assertEqual(boom.exception.problem, Problem.missing)
def test_validate_checks_for_wrong_user(self):
"""
verify that validate() checks if jobs have a wrong value for the 'user'
field.
This field has been limited to either not defined or 'root' for sanity.
While other choices _may_ be possible, having just the two makes our
job easier.
"""
job = JobDefinition({
'id': 'id',
'plugin': self.parameters.plugin,
'command': 'true',
'user': 'fred',
})
with self.assertRaises(ValidationError) as boom:
job.validate()
self.assertEqual(boom.exception.field, JobDefinition.fields.user)
self.assertEqual(boom.exception.problem, Problem.wrong)
class TestJobDefinition(TestCase):
def setUp(self):
self._full_record = RFC822Record({
'plugin': 'plugin',
'id': 'id',
'summary': 'summary-value',
'requires': 'requires',
'command': 'command',
'description': 'description-value'
}, Origin(FileTextSource('file.txt'), 1, 5))
self._full_gettext_record = RFC822Record({
'_plugin': 'plugin',
'_id': 'id',
'_summary': 'summary-value',
'_requires': 'requires',
'_command': 'command',
'_description': 'description-value'
}, Origin(FileTextSource('file.txt.in'), 1, 5))
self._min_record = RFC822Record({
'plugin': 'plugin',
'id': 'id',
}, Origin(FileTextSource('file.txt'), 1, 2))
self._split_description_record = RFC822Record({
'id': 'id',
'purpose': 'purpose-value',
'steps': 'steps-value',
'verification': 'verification-value'
}, Origin(FileTextSource('file.txt'), 1, 1))
def test_instantiate_template(self):
data = mock.Mock(name='data')
raw_data = mock.Mock(name='raw_data')
origin = mock.Mock(name='origin')
provider = mock.Mock(name='provider')
parameters = mock.Mock(name='parameters')
field_offset_map = mock.Mock(name='field_offset_map')
unit = JobDefinition.instantiate_template(
data, raw_data, origin, provider, parameters, field_offset_map)
self.assertIs(unit._data, data)
self.assertIs(unit._raw_data, raw_data)
self.assertIs(unit._origin, origin)
self.assertIs(unit._provider, provider)
self.assertIs(unit._parameters, parameters)
self.assertIs(unit._field_offset_map, field_offset_map)
def test_smoke_full_record(self):
job = JobDefinition(self._full_record.data)
self.assertEqual(job.plugin, "plugin")
self.assertEqual(job.id, "id")
self.assertEqual(job.requires, "requires")
self.assertEqual(job.command, "command")
self.assertEqual(job.description, "description-value")
def test_smoke_full_gettext_record(self):
job = JobDefinition(self._full_gettext_record.data)
self.assertEqual(job.plugin, "plugin")
self.assertEqual(job.id, "id")
self.assertEqual(job.requires, "requires")
self.assertEqual(job.command, "command")
self.assertEqual(job.description, "description-value")
def test_smoke_min_record(self):
job = JobDefinition(self._min_record.data)
self.assertEqual(job.plugin, "plugin")
self.assertEqual(job.id, "id")
self.assertEqual(job.requires, None)
self.assertEqual(job.command, None)
self.assertEqual(job.description, None)
def test_smoke_description_split(self):
job = JobDefinition(self._split_description_record.data)
self.assertEqual(job.id, "id")
self.assertEqual(job.purpose, "purpose-value")
self.assertEqual(job.steps, "steps-value")
self.assertEqual(job.verification, "verification-value")
def test_description_combining(self):
job = JobDefinition(self._split_description_record.data)
expected = ("PURPOSE:\npurpose-value\nSTEPS:\nsteps-value\n"
"VERIFICATION:\nverification-value")
self.assertEqual(job.description, expected)
def test_from_rfc822_record_full_record(self):
job = JobDefinition.from_rfc822_record(self._full_record)
self.assertEqual(job.plugin, "plugin")
self.assertEqual(job.id, "id")
self.assertEqual(job.requires, "requires")
self.assertEqual(job.command, "command")
self.assertEqual(job.description, "description-value")
def test_from_rfc822_record_min_record(self):
job = JobDefinition.from_rfc822_record(self._min_record)
self.assertEqual(job.plugin, "plugin")
self.assertEqual(job.id, "id")
self.assertEqual(job.requires, None)
self.assertEqual(job.command, None)
self.assertEqual(job.description, None)
def test_str(self):
job = JobDefinition(self._min_record.data)
self.assertEqual(str(job), "id")
def test_id(self):
# NOTE: this test will change when namespace support lands
job = JobDefinition(self._min_record.data)
self.assertEqual(job.id, "id")
def test_partial_id(self):
job = JobDefinition(self._min_record.data)
self.assertEqual(job.partial_id, "id")
def test_repr(self):
job = JobDefinition(self._min_record.data)
expected = "<JobDefinition id:'id' plugin:'plugin'>"
observed = repr(job)
self.assertEqual(expected, observed)
def test_hash(self):
job1 = JobDefinition(self._min_record.data)
job2 = JobDefinition(self._min_record.data)
job3 = JobDefinition(self._full_record.data)
self.assertEqual(hash(job1), hash(job2))
self.assertNotEqual(hash(job1), hash(job3))
def test_dependency_parsing_empty(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin'})
expected = set()
observed = job.get_direct_dependencies()
self.assertEqual(expected, observed)
def test_dependency_parsing_single_word(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin',
'depends': 'word'})
expected = set(['word'])
observed = job.get_direct_dependencies()
self.assertEqual(expected, observed)
def test_environ_parsing_empty(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin'})
expected = set()
observed = job.get_environ_settings()
self.assertEqual(expected, observed)
def test_dependency_parsing_quoted_word(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin',
'depends': '"quoted word"'})
expected = set(['quoted word'])
observed = job.get_direct_dependencies()
self.assertEqual(expected, observed)
def test_environ_parsing_single_word(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin',
'environ': 'word'})
expected = set(['word'])
observed = job.get_environ_settings()
self.assertEqual(expected, observed)
def test_resource_parsing_empty(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin'})
expected = set()
observed = job.get_resource_dependencies()
self.assertEqual(expected, observed)
def test_resource_parsing_typical(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin',
'requires': 'foo.bar == 10'})
expected = set(['foo'])
observed = job.get_resource_dependencies()
self.assertEqual(expected, observed)
def test_resource_parsing_many(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin',
'requires': (
"foo.bar == 10\n"
"froz.bot == 10\n")})
expected = set(['foo', 'froz'])
observed = job.get_resource_dependencies()
self.assertEqual(expected, observed)
def test_checksum_smoke(self):
job1 = JobDefinition({
'id': 'id',
'plugin': 'plugin'
})
identical_to_job1 = JobDefinition({
'id': 'id',
'plugin': 'plugin'
})
# Two distinct but identical jobs have the same checksum
self.assertEqual(job1.checksum, identical_to_job1.checksum)
job2 = JobDefinition({
'id': 'other id',
'plugin': 'plugin'
})
# Two jobs with different definitions have different checksum
self.assertNotEqual(job1.checksum, job2.checksum)
# The checksum is stable and does not change over time
self.assertEqual(
job1.checksum,
"cd21b33e6a2f4d1291977b60d922bbd276775adce73fca8c69b4821c96d7314a")
def test_estimated_duration(self):
self.assertEqual(JobDefinition({}).estimated_duration, None)
self.assertEqual(JobDefinition(
{'estimated_duration': 'foo'}).estimated_duration, None)
self.assertEqual(JobDefinition(
{'estimated_duration': '123.5'}).estimated_duration,
123.5)
self.assertEqual(JobDefinition(
{'estimated_duration': '5s'}).estimated_duration, 5)
self.assertEqual(JobDefinition(
{'estimated_duration': '1m 5s'}).estimated_duration, 65)
self.assertEqual(JobDefinition(
{'estimated_duration': '1h 1m 5s'}).estimated_duration, 3665)
self.assertEqual(JobDefinition(
{'estimated_duration': '1h'}).estimated_duration, 3600)
self.assertEqual(JobDefinition(
{'estimated_duration': '2m'}).estimated_duration, 120)
self.assertEqual(JobDefinition(
{'estimated_duration': '1h 1s'}).estimated_duration, 3601)
self.assertEqual(JobDefinition(
{'estimated_duration': '1m:5s'}).estimated_duration, 65)
self.assertEqual(JobDefinition(
{'estimated_duration': '1h:1m:5s'}).estimated_duration, 3665)
self.assertEqual(JobDefinition(
{'estimated_duration': '1h:1s'}).estimated_duration, 3601)
def test_summary(self):
job1 = JobDefinition({})
self.assertEqual(job1.summary, None)
job2 = JobDefinition({'name': 'name'})
self.assertEqual(job2.summary, 'name')
job3 = JobDefinition({'summary': 'summary'})
self.assertEqual(job3.summary, 'summary')
job4 = JobDefinition({'summary': 'summary', 'name': 'name'})
self.assertEqual(job4.summary, 'summary')
def test_tr_summary(self):
"""
Verify that Provider1.tr_summary() works as expected
"""
job = JobDefinition(self._full_record.data)
with mock.patch.object(job, "get_translated_record_value") as mgtrv:
retval = job.tr_summary()
# Ensure that get_translated_record_value() was called
mgtrv.assert_called_once_with('summary', job.partial_id)
# Ensure tr_summary() returned its return value
self.assertEqual(retval, mgtrv())
def test_tr_summary__falls_back_to_id(self):
"""
Verify that Provider1.tr_summary() falls back to job.id, if summary is
not defined
"""
job = JobDefinition({'id': 'id'})
self.assertEqual(job.tr_summary(), 'id')
def test_tr_description(self):
"""
Verify that Provider1.tr_description() works as expected
"""
job = JobDefinition(self._full_record.data)
with mock.patch.object(job, "get_translated_record_value") as mgtrv:
retval = job.tr_description()
# Ensure that get_translated_record_value() was called
mgtrv.assert_called_once_with('description')
# Ensure tr_description() returned its return value
self.assertEqual(retval, mgtrv())
def test_tr_description_combining(self):
"""
Verify that translated description is properly generated
"""
job = JobDefinition(self._split_description_record.data)
def side_effect(arg):
return {
'description': None,
'PURPOSE': 'TR_PURPOSE',
'STEPS': 'TR_STEPS',
'VERIFICATION': 'TR_VERIFICATION',
'purpose': 'tr_purpose_value',
'steps': 'tr_steps_value',
'verification': 'tr_verification_value'
}[arg]
with mock.patch.object(job, "get_translated_record_value") as mgtrv:
mgtrv.side_effect = side_effect
with mock.patch('plainbox.impl.unit.job._') as mock_gettext:
mock_gettext.side_effect = side_effect
retval = job.tr_description()
mgtrv.assert_any_call('description')
mgtrv.assert_any_call('purpose')
mgtrv.assert_any_call('steps')
mgtrv.assert_any_call('verification')
self.assertEqual(mgtrv.call_count, 4)
mock_gettext.assert_any_call('PURPOSE')
mock_gettext.assert_any_call('STEPS')
mock_gettext.assert_any_call('VERIFICATION')
self.assertEqual(mock_gettext.call_count, 3)
expected = ("TR_PURPOSE:\ntr_purpose_value\nTR_STEPS:\n"
"tr_steps_value\nTR_VERIFICATION:\ntr_verification_value")
self.assertEqual(retval, expected)
def test_tr_purpose(self):
"""
Verify that Provider1.tr_purpose() works as expected
"""
job = JobDefinition(self._split_description_record.data)
with mock.patch.object(job, "get_translated_record_value") as mgtrv:
retval = job.tr_purpose()
# Ensure that get_translated_record_value() was called
mgtrv.assert_called_once_with('purpose')
# Ensure tr_purpose() returned its return value
self.assertEqual(retval, mgtrv())
def test_tr_steps(self):
"""
Verify that Provider1.tr_steps() works as expected
"""
job = JobDefinition(self._split_description_record.data)
with mock.patch.object(job, "get_translated_record_value") as mgtrv:
retval = job.tr_steps()
# Ensure that get_translated_record_value() was called
mgtrv.assert_called_once_with('steps')
# Ensure tr_steps() returned its return value
self.assertEqual(retval, mgtrv())
def test_tr_verification(self):
"""
Verify that Provider1.tr_verification() works as expected
"""
job = JobDefinition(self._split_description_record.data)
with mock.patch.object(job, "get_translated_record_value") as mgtrv:
retval = job.tr_verification()
# Ensure that get_translated_record_value() was called
mgtrv.assert_called_once_with('verification')
# Ensure tr_verification() returned its return value
self.assertEqual(retval, mgtrv())
def test_imports(self):
job1 = JobDefinition({})
self.assertEqual(job1.imports, None)
job2 = JobDefinition({'imports': 'imports'})
self.assertEqual(job2.imports, 'imports')
def test_get_imported_jobs(self):
job1 = JobDefinition({})
self.assertEqual(list(job1.get_imported_jobs()), [])
job2 = JobDefinition({
'imports': 'from 2013.com.canonical.certification import package'
})
self.assertEqual(list(job2.get_imported_jobs()), [
('2013.com.canonical.certification::package', 'package')
])
job3 = JobDefinition({
'imports': ('from 2013.com.canonical.certification'
' import package as pkg')
})
self.assertEqual(list(job3.get_imported_jobs()), [
('2013.com.canonical.certification::package', 'pkg')
])
def test_get_resource_program_using_imports(self):
job = JobDefinition({
'imports': ('from 2013.com.canonical.certification'
' import package as pkg'),
'requires': 'pkg.name == "checkbox"',
})
prog = job.get_resource_program()
self.assertEqual(
prog.required_resources,
{'2013.com.canonical.certification::package'})
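The description-combining behaviour exercised by `test_description_combining` above can be sketched as a standalone function. This is a simplified re-implementation for illustration only; the real logic lives in the `JobDefinition.description` property:

```python
def combine_description(purpose=None, steps=None, verification=None):
    """Combine the split description fields the way JobDefinition does.

    Each present field contributes an upper-cased header followed by its
    value; the result is None when all three fields are missing.
    """
    description = ""
    for name, value in (("purpose", purpose), ("steps", steps),
                        ("verification", verification)):
        if value is not None:
            description += name.upper() + ":\n" + value + "\n"
    # An empty combination collapses to None, matching the property.
    description = description.strip()
    return description or None
```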
class TestJobDefinitionStartup(TestCaseWithParameters):
"""
Continuation of unit tests for TestJobDefinition.
Moved to a separate class because of limitations of TestCaseWithParameters
which operates on the whole class.
"""
parameter_names = ('plugin',)
parameter_values = (
('shell',),
('attachment',),
('resource',),
('local',),
('manual',),
('user-interact',),
('user-verify',),
('user-interact-verify',)
)
parameters_keymap = {
'shell': False,
'attachment': False,
'resource': False,
'local': False,
'manual': True,
'user-interact': True,
'user-verify': False,
'user-interact-verify': True,
}
def test_startup_user_interaction_required(self):
job = JobDefinition({
'id': 'id',
'plugin': self.parameters.plugin})
expected = self.parameters_keymap[self.parameters.plugin]
observed = job.startup_user_interaction_required
self.assertEqual(expected, observed)
class JobParsingTests(TestCaseWithParameters):
parameter_names = ('glue',)
parameter_values = (
('commas',),
('spaces',),
('tabs',),
('newlines',),
('spaces_and_commas',),
('multiple_spaces',),
('multiple_commas',)
)
parameters_keymap = {
'commas': ',',
'spaces': ' ',
'tabs': '\t',
'newlines': '\n',
'spaces_and_commas': ', ',
'multiple_spaces': ' ',
'multiple_commas': ',,,,'
}
def test_environ_parsing_with_various_separators(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin',
'environ': self.parameters_keymap[
self.parameters.glue].join(['foo', 'bar', 'froz'])})
expected = set({'foo', 'bar', 'froz'})
observed = job.get_environ_settings()
self.assertEqual(expected, observed)
def test_dependency_parsing_with_various_separators(self):
job = JobDefinition({
'id': 'id',
'plugin': 'plugin',
'depends': self.parameters_keymap[
self.parameters.glue].join(['foo', 'bar', 'froz'])})
expected = set({'foo', 'bar', 'froz'})
observed = job.get_direct_dependencies()
self.assertEqual(expected, observed)
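`JobParsingTests` above verifies that the `depends` and `environ` fields accept commas, spaces, tabs, and newlines as separators. A minimal sketch of such tokenisation (illustrative only; plainbox uses its own `WordList` parser, which this approximates, including the quoted-word case from `test_dependency_parsing_quoted_word`):

```python
import re


def parse_word_list(text):
    """Split a field value on commas and whitespace, returning a set.

    Double-quoted phrases are kept as single entries, so '"quoted word"'
    yields one element rather than two.
    """
    # First alternative captures a quoted phrase, second a bare token.
    tokens = re.findall(r'"([^"]*)"|([^\s,]+)', text)
    return {quoted or plain for quoted, plain in tokens}
```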
class RegressionTests(TestCase):
""" Regression tests. """
def test_1444242(self):
""" Regression test for http://pad.lv/1444242/. """
provider = mock.Mock(spec_set=Provider1, name='provider')
provider.namespace = '2013.com.canonical.certification'
job = JobDefinition({
'id': 'audio/playback_thunderbolt',
'imports': 'from 2013.com.canonical.plainbox import manifest',
'requires': (
"device.category == 'AUDIO'\n"
"manifest.has_thunderbolt == 'True'\n"),
}, provider=provider)
prog = job.get_resource_program()
self.assertEqual(prog.expression_list[-1].resource_id_list,
['2013.com.canonical.plainbox::manifest'])
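The `imports` field used in `test_1444242` has the form `from <namespace> import <name> [as <alias>]`. A rough sketch of how such a statement maps to `(fully qualified id, local name)` pairs, mirroring the behaviour that `test_get_imported_jobs` asserts (a simplified stand-in; the real parser is `parse_imports_stmt` in `plainbox.impl.resource`):

```python
import re

_IMPORT_RE = re.compile(
    r"^from\s+(?P<ns>\S+)\s+import\s+(?P<name>\w+)"
    r"(?:\s+as\s+(?P<alias>\w+))?$")


def parse_imports(imports_field):
    """Yield (fully qualified id, local name) for each import line."""
    for line in imports_field.splitlines():
        match = _IMPORT_RE.match(line.strip())
        if match:
            # The local name defaults to the imported name when no
            # 'as' alias is given.
            yield ("{}::{}".format(match.group("ns"),
                                   match.group("name")),
                   match.group("alias") or match.group("name"))
```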
plainbox-0.25/plainbox/impl/unit/job.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
# Sylvain Pineau <sylvain.pineau@canonical.com>
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.unit.job` -- job unit
=========================================
"""
import logging
import re
import os
from plainbox.abc import IJobDefinition
from plainbox.i18n import gettext as _
from plainbox.i18n import gettext_noop as N_
from plainbox.impl.resource import ResourceProgram
from plainbox.impl.resource import parse_imports_stmt
from plainbox.impl.secure.origin import JobOutputTextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.symbol import SymbolDef
from plainbox.impl.unit._legacy import JobDefinitionLegacyAPI
from plainbox.impl.unit.unit_with_id import UnitWithId
from plainbox.impl.unit.validators import CorrectFieldValueValidator
from plainbox.impl.unit.validators import DeprecatedFieldValidator
from plainbox.impl.unit.validators import PresentFieldValidator
from plainbox.impl.unit.validators import ReferenceConstraint
from plainbox.impl.unit.validators import ShellProgramValidator
from plainbox.impl.unit.validators import TemplateInvariantFieldValidator
from plainbox.impl.unit.validators import TemplateVariantFieldValidator
from plainbox.impl.unit.validators import TranslatableFieldValidator
from plainbox.impl.unit.validators import UnitReferenceValidator
from plainbox.impl.unit.validators import UntranslatableFieldValidator
from plainbox.impl.unit.validators import UselessFieldValidator
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
from plainbox.impl.xparsers import Error
from plainbox.impl.xparsers import Text
from plainbox.impl.xparsers import Visitor
from plainbox.impl.xparsers import WordList
__all__ = ['JobDefinition', 'propertywithsymbols']
logger = logging.getLogger("plainbox.unit.job")
class propertywithsymbols(property):
"""
A property that also keeps a group of symbols around
"""
def __init__(self, fget=None, fset=None, fdel=None, doc=None,
symbols=None):
"""
Initializes the property with the specified values
"""
super(propertywithsymbols, self).__init__(fget, fset, fdel, doc)
self.__doc__ = doc
self.symbols = symbols
def __getattr__(self, attr):
"""
Internal implementation detail.
Exposes all of the attributes of the SymbolDef group as attributes of
the property. Because of the way __getattr__() works, it can never hide
any existing attributes, so the property itself remains intact.
"""
return getattr(self.symbols, attr)
def __call__(self, fget):
"""
Internal implementation detail.
Used to construct the decorator with fget defined to the decorated
function.
"""
return propertywithsymbols(
fget, self.fset, self.fdel, self.__doc__ or fget.__doc__,
symbols=self.symbols)
class _PluginValues(SymbolDef):
"""
Symbols for each value of the JobDefinition.plugin field
"""
attachment = 'attachment'
local = 'local'
resource = 'resource'
manual = 'manual'
user_verify = "user-verify"
user_interact = "user-interact"
user_interact_verify = "user-interact-verify"
shell = 'shell'
qml = 'qml'
class _CertificationStatusValues(SymbolDef):
"""
Symbols for each value of the JobDefinition.certification_status field
Particular values have the following meanings.
unspecified:
One of the new possible certification status values. This value means
that a job was not analyzed in the context of certification status
classification and it has no classification at this time. This is also
the implicit certification status for all jobs.
not-part-of-certification:
One of the new possible certification status values. This value means
that a given job may fail and this will not affect the certification
process in any way. Typically jobs with this certification status are
not executed during the certification process. In the past this was
informally referred to as a *blacklist item*.
non-blocker:
One of the new possible certification status values. This value means
that a given job may fail and while that should be regarded as a
possible future problem it will not block the certification process. In
the past this was informally referred to as a *graylist item*.
Canonical reserves the right to promote jobs from the *non-blocker* to
*blocker*.
blocker:
One of the new possible certification status values. This value means
that a given job must pass for the certification process to succeed. In
the past this was informally referred to as a *whitelist item*. The
term *blocker* was chosen to disambiguate the meaning of the two
concepts.
"""
unspecified = 'unspecified'
not_part_of_certification = 'not-part-of-certification'
non_blocker = 'non-blocker'
blocker = 'blocker'
class JobDefinition(UnitWithId, JobDefinitionLegacyAPI, IJobDefinition):
"""
Job definition class.
Thin wrapper around the RFC822 record that defines a checkbox job
definition
"""
def __init__(self, data, origin=None, provider=None, controller=None,
raw_data=None, parameters=None, field_offset_map=None):
"""
Initialize a new JobDefinition instance.
:param data:
Normalized data that makes up this job definition
:param origin:
An (optional) Origin object. If omitted a fake origin object is
created. Normally the origin object should be obtained from the
RFC822Record object.
:param provider:
An (optional) Provider1 object. If omitted it defaults to None but
the actual job definition is not suitable for execution. All job
definitions are expected to have a provider.
:param controller:
An (optional) session state controller. If omitted a checkbox
session state controller is implicitly used. The controller defines
how this job influences the session it executes in.
:param raw_data:
An (optional) raw version of data, without whitespace
normalization. If omitted then raw_data is assumed to be data.
:param parameters:
An (optional) dictionary of parameters. Parameters allow for unit
properties to be altered while maintaining a single definition.
This is required to obtain translated summary and description
fields, while having a single translated base text and any
variation in the available parameters.
:param field_offset_map:
An optional dictionary with offsets (in line numbers) of each
field. Line numbers are relative to the value of origin.line_start
.. note::
You should almost always use :meth:`from_rfc822_record()` instead.
"""
if origin is None:
origin = Origin.get_caller_origin()
super().__init__(data, raw_data=raw_data, origin=origin,
provider=provider, parameters=parameters,
field_offset_map=field_offset_map)
# NOTE: controllers cannot be customized for instantiated templates so
# I wonder if we should start hard-coding it in. Nothing seems to be
# using custom controller functionality anymore.
if controller is None:
# XXX: moved here because of cyclic imports
from plainbox.impl.ctrl import checkbox_session_state_ctrl
controller = checkbox_session_state_ctrl
self._resource_program = None
self._controller = controller
@classmethod
def instantiate_template(cls, data, raw_data, origin, provider,
parameters, field_offset_map):
"""
Instantiate this unit from a template.
The point of this method is to have a fixed API, regardless of what the
API of a particular unit class ``__init__`` method actually looks like.
It is easier to standardize on a new method than to patch all of the
initializers, the code using them, and the tests to have a uniform
initializer.
"""
# This assertion is a low-cost trick to ensure that we override this
# method in all of the subclasses to ensure that the initializer is
# called with correctly-ordered arguments.
assert cls is JobDefinition, \
"{}.instantiate_template() not customized".format(cls.__name__)
return cls(data, origin, provider, None, raw_data, parameters,
field_offset_map)
def __str__(self):
return self.summary
def __repr__(self):
return "<JobDefinition id:{!r} plugin:{!r}>".format(
self.id, self.plugin)
@property
def unit(self):
"""
the value of the unit field (overridden)
The return value is always 'job'
"""
return 'job'
@property
def partial_id(self):
"""
Identifier of this job, without the provider name
This field should not be used anymore, except for display
"""
return self.get_record_value('id', self.get_record_value('name'))
@propertywithsymbols(symbols=_PluginValues)
def plugin(self):
plugin = self.get_record_value('plugin')
if plugin is None and 'simple' in self.get_flag_set():
plugin = 'shell'
return plugin
@property
def summary(self):
return self.get_record_value('summary', self.partial_id)
@property
def description(self):
# since version 0.17 description field should be replaced with
# purpose/steps/verification fields. To keep backwards compatibility
# description will be generated by combining new ones if description
# field is missing
description = self.get_record_value('description')
if description is None:
# try combining purpose/steps/verification fields
description = ""
for stage in ['purpose', 'steps', 'verification']:
stage_value = self.get_record_value(stage)
if stage_value is not None:
description += stage.upper() + ':\n' + stage_value + '\n'
description = description.strip()
if not description:
# combining new description yielded empty string
description = None
return description
@property
def purpose(self):
return self.get_record_value('purpose')
@property
def steps(self):
return self.get_record_value('steps')
@property
def verification(self):
return self.get_record_value('verification')
@property
def requires(self):
return self.get_record_value('requires')
@property
def depends(self):
return self.get_record_value('depends')
@property
def after(self):
return self.get_record_value('after')
@property
def command(self):
return self.get_record_value('command')
@property
def environ(self):
return self.get_record_value('environ')
@property
def user(self):
return self.get_record_value('user')
@property
def flags(self):
return self.get_record_value('flags')
@property
def shell(self):
"""
Shell that is used to interpret the command
Defaults to 'bash' for checkbox compatibility.
"""
return self.get_record_value('shell', 'bash')
@property
def imports(self):
return self.get_record_value('imports')
@property
def category_id(self):
"""
fully qualified identifier of the category unit this job belongs to
.. note::
Jobs that don't have an explicit category association (the "natural
category") automatically get assigned to the special, built-in
2013.com.canonical.plainbox::uncategorised category.
Note that to get the definition of that special category unit
applications need to include one of the special providers exposed
as :func:`plainbox.impl.providers.special:get_categories()`.
"""
return self.qualify_id(
self.get_record_value(
'category_id', '2013.com.canonical.plainbox::uncategorised'))
@property
def qml_file(self):
"""
path to a QML file that implements the test UI for this job
This property exposes a path to a QML file that follows the Plainbox QML
Test Specification. The file will be loaded either in the native test
shell of the application using plainbox or with a helper, generic
loader for all command-line applications.
To use this property, the plugin type should be set to 'qml'.
"""
qml_file = self.get_record_value('qml_file')
if qml_file is not None and self.provider is not None:
return os.path.join(self.provider.data_dir, qml_file)
@propertywithsymbols(symbols=_CertificationStatusValues)
def certification_status(self):
"""
Get the natural certification status of this job.
The default certification status of all jobs is
``CertificationStatus.unspecified``
.. note::
Remember that the certification status can be overridden by a test
plan. You should, instead, consider the effective certification
status that can be obtained from :class:`JobState`.
"""
return self.get_record_value('certification-status', 'unspecified')
@property
def estimated_duration(self):
"""
estimated duration of this job in seconds.
The value may be None, which indicates that the duration is basically
unknown. Fractional numbers are allowed and indicate fractions of a
second.
"""
value = self.get_record_value('estimated_duration')
# NOTE: Some tests do that, I'd rather not change them now
if isinstance(value, (int, float)):
return value
elif value is None:
return None
match = re.match(r'^(\d+h)?[ :]*(\d+m)?[ :]*(\d+s)?$', value)
if match:
g_hours = match.group(1)
if g_hours:
assert g_hours.endswith('h')
hours = int(g_hours[:-1])
else:
hours = 0
g_minutes = match.group(2)
if g_minutes:
assert g_minutes.endswith('m')
minutes = int(g_minutes[:-1])
else:
minutes = 0
g_seconds = match.group(3)
if g_seconds:
assert g_seconds.endswith('s')
seconds = int(g_seconds[:-1])
else:
seconds = 0
return seconds + minutes * 60 + hours * 3600
else:
try:
return float(value)
except ValueError:
pass
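The parsing above accepts either a plain number of seconds or an "XhYmZs" style string with optional components. A self-contained sketch of the same logic follows; ``parse_duration`` is an illustrative name, not a plainbox API, and it additionally guards against the all-empty match that the property would report as 0.

```python
# Standalone sketch of the estimated_duration parsing: either a plain
# number of seconds or an "XhYmZs" string with optional components.
import re


def parse_duration(value):
    match = re.match(r'^(\d+h)?[ :]*(\d+m)?[ :]*(\d+s)?$', value)
    if match and any(match.groups()):
        hours = int(match.group(1)[:-1]) if match.group(1) else 0
        minutes = int(match.group(2)[:-1]) if match.group(2) else 0
        seconds = int(match.group(3)[:-1]) if match.group(3) else 0
        return seconds + minutes * 60 + hours * 3600
    try:
        return float(value)
    except ValueError:
        return None  # unparseable, mirroring the property's None result
```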
@property
def controller(self):
"""
The controller object associated with this JobDefinition
"""
return self._controller
def tr_summary(self):
"""
Get the translated version of :meth:`summary`
"""
return self.get_translated_record_value('summary', self.partial_id)
def tr_description(self):
"""
Get the translated version of :meth:`description`
"""
tr_description = self.get_translated_record_value('description')
if tr_description is None:
# try combining purpose/steps/verification fields
tr_stages = {
'purpose': _('PURPOSE'),
'steps': _('STEPS'),
'verification': _('VERIFICATION')
}
tr_description = ""
for stage in ['purpose', 'steps', 'verification']:
stage_value = self.get_translated_record_value(stage)
if stage_value is not None:
tr_description += (tr_stages[stage] + ':\n' +
stage_value + '\n')
tr_description = tr_description.strip()
if not tr_description:
# combining new description yielded empty string
tr_description = None
return tr_description
def tr_purpose(self):
"""
Get the translated version of :meth:`purpose`
"""
return self.get_translated_record_value('purpose')
def tr_steps(self):
"""
Get the translated version of :meth:`steps`
"""
return self.get_translated_record_value('steps')
def tr_verification(self):
"""
Get the translated version of :meth:`verification`
"""
return self.get_translated_record_value('verification')
def get_environ_settings(self):
"""
Return a set of requested environment variables
"""
if self.environ is not None:
return {variable for variable in re.split(r'[\s,]+', self.environ)}
else:
return set()
def get_flag_set(self):
"""
Return a set of flags associated with this job
"""
if self.flags is not None:
return {flag for flag in re.split(r'[\s,]+', self.flags)}
else:
return set()
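Both get_environ_settings() and get_flag_set() tokenize their field on any mix of whitespace (including newlines) and commas. This standalone sketch shows the effect of that split pattern; ``split_field`` is an illustrative name only.

```python
# Tokenize a field on any mixture of whitespace and commas, as the
# environ and flags accessors do; returns the empty set for None.
import re


def split_field(value):
    return set(re.split(r'[\s,]+', value)) if value else set()
```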
def get_imported_jobs(self):
"""
Parse the 'imports' line and compute the imported symbols.
Return generator for a sequence of pairs (job_id, identifier) that
describe the imported job identifiers from arbitrary namespace.
The syntax of each imports line is:
IMPORT_STMT ::  "from" <NAMESPACE> "import" <PARTIAL_ID>
              | "from" <NAMESPACE> "import" <PARTIAL_ID> AS <IDENTIFIER>
"""
imports = self.imports or ""
return parse_imports_stmt(imports)
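A minimal, self-contained sketch of parsing one imports line in the grammar documented above ("from NAMESPACE import PARTIAL_ID [as IDENTIFIER]"). The real parser is parse_imports_stmt(); ``parse_import_line`` is an illustrative stand-in.

```python
# Parse a single imports line into a (job_id, identifier) pair, as
# described in the get_imported_jobs() docstring. Illustrative only.
def parse_import_line(line):
    tokens = line.split()
    if (len(tokens) == 4 and tokens[0] == 'from'
            and tokens[2] == 'import'):
        # "from NS import ID" -- the identifier defaults to the id
        return ('{}::{}'.format(tokens[1], tokens[3]), tokens[3])
    if (len(tokens) == 6 and tokens[0] == 'from'
            and tokens[2] == 'import' and tokens[4] == 'as'):
        # "from NS import ID as NAME"
        return ('{}::{}'.format(tokens[1], tokens[3]), tokens[5])
    raise ValueError('unsupported imports line: {!r}'.format(line))
```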
@property
def automated(self):
"""
Whether the job is fully automated and runs without any
intervention from the user
"""
return self.plugin in ['shell', 'resource',
'attachment', 'local']
@property
def startup_user_interaction_required(self):
"""
The job needs to be started explicitly by the test operator. This is
intended for things that may be timing-sensitive or may require the
tester to understand the necessary manipulations that he or she may
have to perform ahead of time.
The test operator may choose to skip certain tests; in that case the
outcome is 'skip'.
"""
return self.plugin in ['manual', 'user-interact',
'user-interact-verify']
def get_resource_program(self):
"""
Return a ResourceProgram based on the 'requires' expression.
The program instance is cached in the JobDefinition and is not
compiled or validated on subsequent calls.
:returns:
ResourceProgram if one is available or None
:raises ResourceProgramError:
If the program definition is incorrect
"""
if self.requires is not None and self._resource_program is None:
if self._provider is not None:
implicit_namespace = self._provider.namespace
else:
implicit_namespace = None
if self.imports is not None:
imports = list(self.get_imported_jobs())
else:
imports = None
self._resource_program = ResourceProgram(
self.requires, implicit_namespace, imports)
return self._resource_program
def get_direct_dependencies(self):
"""
Compute and return a set of direct dependencies
To combat a simple mistake where the jobs are space-delimited, any
mixture of white-space (including newlines) and commas is allowed.
"""
deps = set()
if self.depends is None:
return deps
class V(Visitor):
def visit_Text_node(visitor, node: Text):
deps.add(self.qualify_id(node.text))
def visit_Error_node(visitor, node: Error):
logger.warning(_("unable to parse depends: %s"), node.msg)
V().visit(WordList.parse(self.depends))
return deps
def get_after_dependencies(self):
"""
Compute and return a set of after dependencies.
After dependencies express the desire that given job A runs after a
given job B. This is spelled out as::
id: A
after: B
id: B
To combat a simple mistake where the jobs are space-delimited, any
mixture of white-space (including newlines) and commas is allowed.
"""
deps = set()
if self.after is None:
return deps
class V(Visitor):
def visit_Text_node(visitor, node: Text):
deps.add(self.qualify_id(node.text))
def visit_Error_node(visitor, node: Error):
logger.warning(_("unable to parse depends: %s"), node.msg)
V().visit(WordList.parse(self.after))
return deps
def get_resource_dependencies(self):
"""
Compute and return a set of resource dependencies
"""
program = self.get_resource_program()
if program:
return program.required_resources
else:
return set()
def get_category_id(self):
"""
Get the fully-qualified category id that this job belongs to
"""
maybe_partial_id = self.category_id
if maybe_partial_id is not None:
return self.qualify_id(maybe_partial_id)
@classmethod
def from_rfc822_record(cls, record, provider=None):
"""
Create a JobDefinition instance from rfc822 record. The resulting
instance may not be valid but will always be created. Only valid jobs
should be executed.
The record must be a RFC822Record instance.
"""
# Strip the trailing newlines from all the raw values coming from the
# RFC822 parser. We don't need them and they don't match gettext keys
# (xgettext strips out those newlines)
return cls(record.data, record.origin, provider=provider, raw_data={
key: value.rstrip('\n')
for key, value in record.raw_data.items()
}, field_offset_map=record.field_offset_map)
def create_child_job_from_record(self, record):
"""
Create a new JobDefinition from RFC822 record.
This method should only be used to create additional jobs from local
jobs (plugin local). This ensures that the child job shares the
embedded provider reference.
"""
if not isinstance(record.origin.source, JobOutputTextSource):
# TRANSLATORS: don't translate record.origin or JobOutputTextSource
raise ValueError(_("record.origin must be a JobOutputTextSource"))
if record.origin.source.job is not self:
# TRANSLATORS: don't translate record.origin.source.job
raise ValueError(_("record.origin.source.job must be this job"))
return self.from_rfc822_record(record, self.provider)
class Meta:
name = N_('job')
class fields(SymbolDef):
"""
Symbols for each field that a JobDefinition can have
"""
name = 'name'
summary = 'summary'
plugin = 'plugin'
command = 'command'
description = 'description'
user = 'user'
environ = 'environ'
estimated_duration = 'estimated_duration'
depends = 'depends'
after = 'after'
requires = 'requires'
shell = 'shell'
imports = 'imports'
flags = 'flags'
category_id = 'category_id'
purpose = 'purpose'
steps = 'steps'
verification = 'verification'
qml_file = 'qml_file'
certification_status = 'certification_status'
field_validators = {
fields.name: [
UntranslatableFieldValidator,
TemplateVariantFieldValidator,
DeprecatedFieldValidator(
_("use 'id' and 'summary' instead of 'name'")),
],
# NOTE: 'id' validators are "inherited" so we don't have it here
fields.summary: [
TranslatableFieldValidator,
TemplateVariantFieldValidator,
PresentFieldValidator(severity=Severity.advice),
# We want the summary to be a single line
CorrectFieldValueValidator(
lambda summary: summary.count("\n") == 0,
Problem.wrong, Severity.warning,
message=_("please use only one line"),
onlyif=lambda unit: unit.summary is not None),
# We want the summary to be relatively short
CorrectFieldValueValidator(
lambda summary: len(summary) <= 80,
Problem.wrong, Severity.warning,
message=_("please stay under 80 characters"),
onlyif=lambda unit: unit.summary is not None),
],
fields.plugin: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
PresentFieldValidator,
CorrectFieldValueValidator(
lambda plugin: (
plugin in JobDefinition.plugin.get_all_symbols()),
message=_('valid values are: {}').format(
', '.join(str(sym) for sym in sorted(
_PluginValues.get_all_symbols())))),
CorrectFieldValueValidator(
lambda plugin: plugin != 'local',
Problem.deprecated, Severity.advice,
message=_("please migrate to job templates, "
"see plainbox-template-unit(7) for details")),
CorrectFieldValueValidator(
lambda plugin: plugin != 'user-verify',
Problem.deprecated, Severity.advice,
message=_("please migrate to user-interact-verify")),
],
fields.command: [
UntranslatableFieldValidator,
# All jobs except for manual must have a command
PresentFieldValidator(
message=_("command is mandatory for non-manual jobs"),
onlyif=lambda unit: unit.plugin not in ('manual', 'qml')),
# Manual jobs cannot have a command
UselessFieldValidator(
message=_("command on a manual or qml job makes no sense"),
onlyif=lambda unit: unit.plugin in ('manual', 'qml')),
# We don't want to refer to CHECKBOX_SHARE anymore
CorrectFieldValueValidator(
lambda command: "CHECKBOX_SHARE" not in command,
Problem.deprecated, Severity.advice,
message=_("please use PLAINBOX_PROVIDER_DATA"
" instead of CHECKBOX_SHARE"),
onlyif=lambda unit: unit.command is not None),
# We don't want to refer to CHECKBOX_DATA anymore
CorrectFieldValueValidator(
lambda command: "CHECKBOX_DATA" not in command,
Problem.deprecated, Severity.advice,
message=_("please use PLAINBOX_SESSION_SHARE"
" instead of CHECKBOX_DATA"),
onlyif=lambda unit: unit.command is not None),
# We want to catch silly mistakes that shlex can detect
ShellProgramValidator,
],
fields.description: [
TranslatableFieldValidator,
TemplateVariantFieldValidator,
# Description is mandatory for manual jobs
PresentFieldValidator(
message=_("manual jobs must have a description field, or a"
" set of purpose, steps, and verification "
"fields"),
onlyif=lambda unit: unit.plugin == 'manual' and
unit.purpose is None and unit.steps is None and
unit.verification is None
),
# Description or a set of purpose, steps and verification
# fields is recommended for all other jobs
PresentFieldValidator(
severity=Severity.advice,
message=_("all jobs should have a description field, or a "
"set of purpose, steps and verification fields"),
onlyif=lambda unit: (
'simple' not in unit.get_flag_set() and
unit.plugin != 'manual' and (
unit.purpose is None and
unit.steps is None and
unit.verification is None))),
],
fields.purpose: [
TranslatableFieldValidator,
PresentFieldValidator(
severity=Severity.advice,
message=("please use purpose, steps, and verification"
" fields. See http://plainbox.readthedocs.org"
"/en/latest/author/faq.html#faq-2"),
onlyif=lambda unit:
unit.startup_user_interaction_required and
unit.get_record_value('summary') is None),
],
fields.steps: [
TranslatableFieldValidator,
PresentFieldValidator(
severity=Severity.advice,
message=("please use purpose, steps, and verification"
" fields. See http://plainbox.readthedocs.org"
"/en/latest/author/faq.html#faq-2"),
onlyif=lambda unit:
unit.startup_user_interaction_required),
],
fields.verification: [
TranslatableFieldValidator,
PresentFieldValidator(
severity=Severity.advice,
message=("please use purpose, steps, and verification"
" fields. See http://plainbox.readthedocs.org"
"/en/latest/author/faq.html#faq-2"),
onlyif=lambda unit: unit.plugin in (
'manual', 'user-verify', 'user-interact-verify')),
],
fields.user: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
# User should be either None or 'root'
CorrectFieldValueValidator(
message=_("user can only be 'root'"),
correct_fn=lambda user: user in (None, 'root')),
# User is useless without a command to run
UselessFieldValidator(
message=_("user without a command makes no sense"),
onlyif=lambda unit: unit.command is None)
],
fields.environ: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
# Environ is useless without a command to run
UselessFieldValidator(
message=_("environ without a command makes no sense"),
onlyif=lambda unit: unit.command is None),
],
fields.estimated_duration: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
PresentFieldValidator(
severity=Severity.advice,
onlyif=lambda unit: 'simple' not in unit.get_flag_set()
),
CorrectFieldValueValidator(
lambda duration: float(duration) > 0,
message="value must be a positive number",
onlyif=lambda unit: (
unit.get_record_value('estimated_duration'))),
],
fields.depends: [
UntranslatableFieldValidator,
CorrectFieldValueValidator(
lambda value, unit: (
unit.get_direct_dependencies() is not None)),
UnitReferenceValidator(
lambda unit: unit.get_direct_dependencies(),
constraints=[
ReferenceConstraint(
lambda referrer, referee: referee.unit == 'job',
message=_("the referenced unit is not a job"))])
# TODO: should not refer to deprecated jobs,
# onlyif job itself is not deprecated
],
fields.after: [
UntranslatableFieldValidator,
CorrectFieldValueValidator(
lambda value, unit: (
unit.get_after_dependencies() is not None)),
UnitReferenceValidator(
lambda unit: unit.get_after_dependencies(),
constraints=[
ReferenceConstraint(
lambda referrer, referee: referee.unit == 'job',
message=_("the referenced unit is not a job"))])
],
fields.requires: [
UntranslatableFieldValidator,
CorrectFieldValueValidator(
lambda value, unit: unit.get_resource_program(),
onlyif=lambda unit: unit.requires is not None),
UnitReferenceValidator(
lambda unit: unit.get_resource_dependencies(),
constraints=[
ReferenceConstraint(
lambda referrer, referee: referee.unit == 'job',
message=_("the referenced unit is not a job")),
ReferenceConstraint(
lambda referrer, referee: (
referee.plugin == 'resource'),
onlyif=lambda referrer, referee: (
referee.unit == 'job'),
message=_(
"the referenced job is not a resource job")),
]),
# TODO: should not refer to deprecated jobs,
# onlyif job itself is not deprecated
],
fields.shell: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
# Shell should be '/bin/sh', '/bin/bash' or 'bash' (the default)
CorrectFieldValueValidator(
lambda shell: shell in ('/bin/sh', '/bin/bash', 'bash'),
message=_("only /bin/sh and /bin/bash are allowed")),
],
fields.imports: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
CorrectFieldValueValidator(
lambda value, unit: (
list(unit.get_imported_jobs()) is not None)),
UnitReferenceValidator(
lambda unit: [
job_id
for job_id, identifier in unit.get_imported_jobs()],
constraints=[
ReferenceConstraint(
lambda referrer, referee: referee.unit == 'job',
message=_("the referenced unit is not a job"))]),
# TODO: should not refer to deprecated jobs,
# onlyif job itself is not deprecated
],
fields.category_id: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
UnitReferenceValidator(
lambda unit: (
[unit.get_category_id()] if unit.category_id else ()),
constraints=[
ReferenceConstraint(
lambda referrer, referee: (
referee.unit == 'category'),
message=_(
"the referenced unit is not a category"))]),
# TODO: should not refer to deprecated categories,
# onlyif job itself is not deprecated
],
fields.flags: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
CorrectFieldValueValidator(
lambda value, unit: (
'simple' in unit.get_flag_set() or
'preserve-locale' in unit.get_flag_set()),
Problem.expected_i18n, Severity.advice,
message=_(
'please ensure that the command supports'
' non-C locale then set the preserve-locale flag'
),
onlyif=lambda unit: unit.command),
CorrectFieldValueValidator(
lambda value, unit: (
not ('explicit-fail' in unit.get_flag_set() and
unit.plugin in {
'shell', 'user-interact', 'attachment',
'local', 'resource'})),
Problem.useless, Severity.advice,
message=_('explicit-fail makes no sense for jobs whose '
'outcome is automatically determined.')
),
# The has-leftovers flag is useless without a command
CorrectFieldValueValidator(
lambda value, unit: (
'has-leftovers' not in unit.get_flag_set()),
Problem.useless, Severity.advice,
message=_(
'has-leftovers makes no sense without a command'
),
onlyif=lambda unit: unit.command is None),
],
fields.qml_file: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
PresentFieldValidator(
onlyif=lambda unit: unit.plugin == 'qml'),
CorrectFieldValueValidator(
lambda value: value.endswith('.qml'),
Problem.wrong, Severity.advice,
message=_('use the .qml extension for all QML files'),
onlyif=lambda unit: (unit.plugin == 'qml' and
unit.qml_file)),
CorrectFieldValueValidator(
lambda value, unit: os.path.isfile(unit.qml_file),
message=_('please point to an existing QML file'),
onlyif=lambda unit: (unit.plugin == 'qml' and
unit.qml_file)),
],
fields.certification_status: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
CorrectFieldValueValidator(
lambda certification_status: (
certification_status in
_CertificationStatusValues.get_all_symbols()),
message=_('valid values are: {}').format(
', '.join(str(sym) for sym in sorted(
_CertificationStatusValues.get_all_symbols())))),
],
}
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
Packaging Meta-Data Unit.
This module contains the implementation of the packaging meta-data unit. This
unit can be used to describe a dependency in system packaging. This can be
used to associate jobs with system-level dependencies so that those
dependencies can be automatically added to the appropriate system packaging
meta-data.
For example, consider this unit::
plugin: shell
id: virtualization/xen_ok
requires: package.name == 'libvirt-bin'
user: root
estimated_duration: 1.0
command: virsh -c xen:/// domstate Domain-0
_description:
Test to verify that the Xen Hypervisor is running.
_summary:
Verify Xen is running
This unit depends on the ``virsh`` executable. This has to be ensured during
packaging or the test won't be able to execute correctly. To avoid having to
carefully track this at packaging time (where one may have to review many jobs)
it's better to express this inside the provider, as a unit.
A packaging meta-data unit that does exactly this, looks like this::
unit: packaging meta-data
os-id: debian
os-version: 8
Depends: libvirt-bin
unit: packaging meta-data
os-id: fedora
os-version: 21
Requires: libvirt-client
Having this additional data, one can generate runtime dependencies for a given
unit using management commands::
./manage.py packaging
This command uses the operating-system-specific driver to introspect the system
and see if each of the packaging meta-data units is applicable. There are
several strategies; they are tried in this order:
- id and version match
- id match
- id_like match
The base Linux distribution driver parses the ``/etc/os-release`` file, looks
at the ``ID``, ``VERSION_ID`` and optionally the ``ID_LIKE`` fields. They are
used as a standard way to determine the distribution for which packaging
meta-data is being collected.
The *id and version match* strategy requires that both the ``os-id`` and
``os-version-id`` fields are present and that they match the ``ID`` and
``VERSION_ID`` values. This strategy allows the test maintainer to express each
dependency accurately for each operating system they wish to support.
The *id match* strategy is only used when the ``os-version`` is not defined.
It is useful when a single definition is applicable to many subsequent
releases. This is especially useful when a job works well with a sufficiently
old version of a third party dependency and there is no need to repeatedly
re-state the same dependency for each later release of the operating system.
The *id_like match* strategy is only used as a last resort and can be seen as a
weaker *id match* strategy. This time the ``os-id`` field is compared to the
``ID_LIKE`` field (if present). It is useful for working with Debian
derivatives, like Ubuntu.
Each matching packaging meta-data unit is then passed to the driver to generate
packaging meta-data. The driver suitable for Debian-like systems uses the
following three fields from the unit: ``Depends``, ``Suggests``, ``Recommends``.
They can be accessed in packaging directly using the ``${plainbox:Depends}``,
``${plainbox:Suggests}`` and ``${plainbox:Recommends}`` syntax that is similar
to ``${misc:Depends}``.
To use it for packaging, place the following rule in your ``debian/rules``
file::
override_dh_gencontrol:
python3 manage.py packaging
dh_gencontrol
And add the following header to one of the binary packages that contains the
actual provider::
X-Plainbox-Provider: yes
A driver suitable for Fedora might be developed later, so at this time it is
not documented.
"""
import abc
import errno
import logging
import re
import sys
from plainbox.i18n import gettext as _
from plainbox.impl.device import get_os_release
from plainbox.impl.symbol import SymbolDef
from plainbox.impl.unit.unit import Unit
from plainbox.impl.unit.validators import PresentFieldValidator
from plainbox.impl.unit.validators import UntranslatableFieldValidator
_logger = logging.getLogger("plainbox.unit.packaging")
__all__ = ('PackagingMetaDataUnit', 'get_packaging_driver')
class PackagingMetaDataUnit(Unit):
"""
Unit representing a dependency between some unit and system packaging.
This unit can be used to describe a dependency in system packaging. This
can be used to associate jobs with system-level dependencies so that those
dependencies can be automatically added to the appropriate system packaging
meta-data.
"""
@property
def os_id(self):
"""Identifier of the operating system."""
return self.get_record_value(self.Meta.fields.os_id)
@property
def os_version_id(self):
"""Version of the operating system."""
return self.get_record_value(self.Meta.fields.os_version_id)
class Meta:
name = 'packaging meta-data'
class fields(SymbolDef):
"""Symbols for each field of a packaging meta-data unit."""
os_id = 'os-id'
os_version_id = 'os-version-id'
field_validators = {
fields.os_id: [
UntranslatableFieldValidator,
PresentFieldValidator,
],
fields.os_version_id: [
UntranslatableFieldValidator,
],
}
def __str__(self):
parts = [_("Operating System: {}").format(self.os_id)]
if self.os_id == 'debian' or self.os_id == 'ubuntu':
Depends = self.get_record_value('Depends')
Recommends = self.get_record_value('Recommends')
Suggests = self.get_record_value('Suggests')
if Depends:
parts.append(_("Depends: {}").format(Depends))
if Recommends:
parts.append(_("Recommends: {}").format(Recommends))
if Suggests:
parts.append(_("Suggests: {}").format(Suggests))
else:
parts.append("...")
return ', '.join(parts)
class PackagingDriverError(Exception):
"""Base for all packaging driver exceptions."""
class NoPackagingDetected(PackagingDriverError):
"""Exception raised when packaging cannot be found."""
class NoApplicableBinaryPackages(PackagingDriverError):
"""Exception raised when no applicable binary packages are found."""
class IPackagingDriver(metaclass=abc.ABCMeta):
"""Interface for all packaging drivers."""
@abc.abstractmethod
def __init__(self, os_release: 'Dict[str, str]'):
"""
Initialize the packaging driver.
:param os_release:
The dictionary that represents the contents of the
``/etc/os-release`` file. Using this file the packaging driver can
infer information about the target operating system that the
packaging will be built for.
This assumes that packages are built natively, not through a
cross-compiler of some sort where the target distribution is
different from the host distribution.
"""
@abc.abstractmethod
def inspect_provider(self, provider: 'Provider1') -> None:
"""
Inspect a provider looking for packaging meta-data.
:param provider:
A provider object to look at. All of the packaging meta-data units
there are inspected, if they are applicable (see
:meth:`is_applicable()`). Information from applicable units is
collected using the :meth:`collect()` method.
"""
@abc.abstractmethod
def is_applicable(self, unit: Unit) -> bool:
"""
Check if the given unit is applicable for collecting.
:param unit:
The unit to inspect. This doesn't have to be a packaging meta-data
unit. In fact, all units are checked with this method.
:returns:
True if the unit is applicable for collection.
Packaging meta-data units that have certain properties are applicable.
Refer to the documentation of the module for details.
"""
@abc.abstractmethod
def collect(self, unit: Unit) -> None:
"""
Collect information from the given applicable unit.
:param unit:
The unit to collect information from. This is usually expressed as
additional fields that are specific to the type of native packaging
for the system.
Collected information is recorded and made available for the
:meth:`modify_packaging_tree()` method later.
"""
@abc.abstractmethod
def inspect_packaging(self) -> None:
"""
Inspect the packaging tree for additional information.
:raises NoPackagingDetected:
Exception raised when packaging cannot be found.
:raises NoApplicableBinaryPackages:
Exception raised when no applicable binary packages are found.
This method looks at the packaging system located in the current
directory. This can be the ``debian/`` directory, a particular
``.spec`` file or anything else. Information obtained from the package
is used to infer additional properties that can aid in the packaging
process.
"""
@abc.abstractmethod
def modify_packaging_tree(self) -> None:
"""
Modify the packaging tree with information from the packaging units.
This method uses all of the available information collected from
particular packaging meta-data units and from the native packaging to
modify the packaging. Additional dependencies may be injected in
appropriate places. Please refer to the documentation specific to your
packaging system for details.
"""
def _strategy_id_version(unit, os_release):
_logger.debug(_("Considering strategy: %s"),
_("os-id == ID and os-version-id == VERSION_ID"))
return (
'ID' in os_release
and unit.os_id == os_release['ID']
and 'VERSION_ID' in os_release
and unit.os_version_id == os_release['VERSION_ID']
)
def _strategy_id(unit, os_release):
_logger.debug(_("Considering strategy: %s"),
_("os-id == ID and os-version-id == undefined"))
return (
'ID' in os_release
and unit.os_id == os_release['ID']
and unit.os_version_id is None
)
def _strategy_id_like(unit, os_release):
_logger.debug(_("Considering strategy: %s"),
_("os-id == ID_LIKE and os-version-id == undefined"))
return (
'ID_LIKE' in os_release
and unit.os_id == os_release['ID_LIKE']
and unit.os_version_id is None
)
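The three strategies above compose in the order used by PackagingDriverBase.is_applicable(). A self-contained illustration follows; the stand-in unit objects are hypothetical (the real code passes PackagingMetaDataUnit instances), and ``is_applicable`` here re-states the three checks inline.

```python
# Illustrative composition of the id+version, id, and id_like matching
# strategies against an /etc/os-release style dict.
from types import SimpleNamespace


def is_applicable(unit, os_release):
    id_version = ('ID' in os_release and unit.os_id == os_release['ID']
                  and 'VERSION_ID' in os_release
                  and unit.os_version_id == os_release['VERSION_ID'])
    id_only = ('ID' in os_release and unit.os_id == os_release['ID']
               and unit.os_version_id is None)
    id_like = ('ID_LIKE' in os_release
               and unit.os_id == os_release['ID_LIKE']
               and unit.os_version_id is None)
    return id_version or id_only or id_like


# Stand-ins for packaging meta-data units (hypothetical examples)
debian_8 = SimpleNamespace(os_id='debian', os_version_id='8')
any_debian = SimpleNamespace(os_id='debian', os_version_id=None)
```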
class PackagingDriverBase(IPackagingDriver):
"""Base implementation of a packaging driver."""
def __init__(self, os_release: 'Dict[str, str]'):
self.os_release = os_release
def is_applicable(self, unit: Unit) -> bool:
os_release = self.os_release
if unit.Meta.name != PackagingMetaDataUnit.Meta.name:
return False
if (not _strategy_id_version(unit, os_release)
and not _strategy_id(unit, os_release)
and not _strategy_id_like(unit, os_release)):
_logger.debug(_("All strategies unsuccessful"))
return False
_logger.debug(_("Last strategy was successful"))
return True
def inspect_provider(self, provider: 'Provider1') -> None:
for unit in provider.unit_list:
if self.is_applicable(unit):
self.collect(unit)
class NullPackagingDriver(PackagingDriverBase):
"""
Null implementation of a packaging driver.
This driver just does nothing at all. It is used as a fall-back when
nothing better is detected.
"""
def is_applicable(self, unit: Unit) -> bool:
return False
def collect(self, unit: Unit) -> None:
pass
def inspect_packaging(self) -> None:
pass
def modify_packaging_tree(self) -> None:
pass
NULL_DRIVER = NullPackagingDriver({})
class DebianPackagingDriver(PackagingDriverBase):
"""
Debian implementation of a packaging driver.
This packaging driver looks for binary packages (as listed by
``debian/control``) that contain the header ``X-Plainbox-Provider: yes``.
Each such package will have additional substitution variables in the form
of ``${plainbox:Depends}``, ``${plainbox:Suggests}`` and
``${plainbox:Recommends}``. The variables are filled with data from all the
packaging meta-data units present in the provider.
"""
def __init__(self, os_release: 'Dict[str, str]'):
super().__init__(os_release)
self._depends = []
self._suggests = []
self._recommends = []
self._pkg_list = []
def inspect_packaging(self) -> None:
self._pkg_list.extend(self._gen_provider_packages())
if self._pkg_list:
return
raise NoApplicableBinaryPackages(_(
"There are no applicable binary packages.\n"
"Add 'X-Plainbox-Provider: yes' to each binary package that "
"contains a provider"))
def modify_packaging_tree(self) -> None:
for pkg in self._pkg_list:
self._write_pkg_substvars(pkg)
def collect(self, unit: Unit) -> None:
def rel_list(field):
relations = unit.get_record_value(field, '').replace('\n', ' ')
return (
rel.strip()
for rel in re.split(', *', relations)
if rel.strip()
)
self._depends.extend(rel_list('Depends'))
self._suggests.extend(rel_list('Suggests'))
self._recommends.extend(rel_list('Recommends'))
def _write_pkg_substvars(self, pkg):
fname = 'debian/{}.substvars'.format(pkg)
_logger.info(_("Writing %s"), fname)
# NOTE: we're appending to that file
with open(fname, 'at', encoding='UTF-8') as stream:
if self._depends:
print('plainbox:Depends={}'.format(
', '.join(self._depends)), file=stream)
if self._suggests:
print('plainbox:Suggests={}'.format(
', '.join(self._suggests)), file=stream)
if self._recommends:
print('plainbox:Recommends={}'.format(
', '.join(self._recommends)), file=stream)
def _gen_provider_packages(self):
try:
_logger.info(_("Loading debian/control"))
with open('debian/control', 'rt', encoding='UTF-8') as stream:
from debian.deb822 import Deb822
for para in Deb822.iter_paragraphs(stream.readlines()):
if 'Package' not in para:
continue
if para.get('X-Plainbox-Provider') != 'yes':
continue
pkg = para['Package']
_logger.info(_("Found binary provider package: %s"), pkg)
yield pkg
except OSError as exc:
if exc.errno == errno.ENOENT:
raise NoPackagingDetected(_(
"There is no appropriate packaging in this directory.\n"
"The file debian/control could not be found"))
def get_packaging_driver() -> IPackagingDriver:
"""Get the packaging driver appropriate for the current platform."""
if sys.platform.startswith("linux"):
os_release = get_os_release()
if (os_release.get('ID') == 'debian'
or os_release.get('ID_LIKE') == 'debian'):
_logger.info(_("Using Debian packaging driver"))
return DebianPackagingDriver(os_release)
_logger.info(_("Using null packaging driver"))
return NULL_DRIVER
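# Illustrative usage sketch (comments only, not part of the original module):
# a provider's packaging helper would typically drive the packaging driver as
# follows, assuming a loaded Provider1 instance called ``provider``:
#
#     driver = get_packaging_driver()
#     driver.inspect_provider(provider)
#     driver.inspect_packaging()
#     driver.modify_packaging_tree()
#
# On Debian-like systems this appends plainbox:Depends, plainbox:Suggests and
# plainbox:Recommends lines to debian/<package>.substvars for every binary
# package marked with 'X-Plainbox-Provider: yes' in debian/control.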
# plainbox-0.25/plainbox/impl/unit/file.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.unit.file` -- file unit
===========================================
"""
import logging
import os
from plainbox.i18n import gettext as _
from plainbox.i18n import gettext_noop as N_
from plainbox.impl.symbol import SymbolDef
from plainbox.impl.unit.job import propertywithsymbols
from plainbox.impl.unit.unit import Unit, UnitValidator
from plainbox.impl.unit.validators import CorrectFieldValueValidator
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
__all__ = ['FileRole', 'FileUnit']
logger = logging.getLogger("plainbox.unit.file")
class FileRole(SymbolDef):
"""
Symbols that correspond to the role that a particular file plays.
Each file in a particular provider can be classified to belong to one
of the following roles. It is possible that the set of roles is not
exhaustive and new roles will be added in the future.
"""
unit_source = 'unit-source'
legacy_whitelist = 'legacy-whitelist'
script = 'script' # architecture independent executable
binary = 'binary' # architecture dependent executable
data = 'data' # data file
i18n = 'i18n' # translation catalog
manage_py = 'manage.py' # management script
legal = 'legal' # license & copyright
docs = 'docs' # documentation
unknown = 'unknown' # unknown / unclassified
build = 'build' # build artefact
invalid = 'invalid' # invalid file that will never be used
vcs = 'vcs' # version control system data
src = 'src' # source
class FileUnitValidator(UnitValidator):
"""
Validator for the FileUnit class.
The sole purpose of this class is to have a custom :meth:`explain()`
so that we can skip the 'field' part as nobody is really writing file
units and the notion of a field may be confusing.
"""
def explain(self, unit, field, kind, message):
stock_msg = self._explain_map.get(kind)
if message or stock_msg:
return message or stock_msg
class FileUnit(Unit):
"""
Unit that describes a single file.
Every file that is a part of a provider has a corresponding file unit.
Units like this are automatically generated by the provider itself.
The file unit can be still defined to provide any additional meta-data.
The file unit is used for contextual validation of job definitions and
other unit types. The sole purpose, for now, is to advise against using
the ``.txt`` or the ``.txt.in`` extensions in favour of the new
``.pxu`` extension.
"""
def __str__(self):
"""
Same as .path
"""
return self.path
def __repr__(self):
        return "<FileUnit path:{!r}, role:{!r}>".format(self.path, self.role)
@property
def path(self):
"""
Absolute path of the file this unit describes
Typically you may wish to construct a relative path, using some other
directory as the base directory, depending on context.
"""
return self.get_record_value('path')
@propertywithsymbols(symbols=FileRole)
def role(self):
"""
Role of the file within the provider
"""
return self.get_record_value('role')
class Meta:
name = N_('file')
validator_cls = FileUnitValidator
class fields(SymbolDef):
"""
Symbols for each field that a FileUnit can have
"""
path = 'path'
role = 'role'
base = 'base'
field_validators = {
fields.path: [
CorrectFieldValueValidator(
lambda value: os.path.splitext(value)[1] == '.pxu',
Problem.deprecated, Severity.advice,
onlyif=lambda unit: unit.role == FileRole.unit_source,
message=_(
"please use .pxu as an extension for all"
" files with plainbox units, see: {}"
).format(
'http://plainbox.readthedocs.org/en/latest/author/'
'faq.html#faq-1'
)),
],
fields.role: [
CorrectFieldValueValidator(
lambda value: value in FileRole.get_all_symbols(),
message=_('valid values are: {}').format(
', '.join(str(sym) for sym in sorted(
FileRole.get_all_symbols())))),
]
}
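# Illustrative example (comments only): file units are normally generated by
# the provider itself, but an explicit definition would look like this, using
# the fields and roles declared above:
#
#     unit: file
#     path: jobs/example.pxu
#     role: unit-source
#
# The path validator above would flag a 'unit-source' file that does not use
# the recommended .pxu extension with a deprecation advice.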
# plainbox-0.25/plainbox/impl/unit/_legacy.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.unit` -- unit definition
============================================
Module with implementation of legacy validation API for all the current units.
This module can be removed once that API is no longer needed.
"""
import itertools
from plainbox.i18n import gettext as _
from plainbox.impl import deprecated
from plainbox.impl.resource import Resource
from plainbox.impl.resource import ResourceProgramError
from plainbox.impl.validation import Problem
from plainbox.impl.validation import ValidationError
# --- validators ---
class UnitValidatorLegacyAPI:
@deprecated('0.11', 'use get_issues() instead')
def validate(self, unit, strict=False, deprecated=False):
"""
Validate data stored in the unit
:param validation_kwargs:
Validation parameters (may vary per subclass)
:raises ValidationError:
If the unit is incorrect somehow.
Non-parametric units are always valid. Parametric units are valid if
they don't violate the parametric constraints encoded in the
:class:`Unit.Meta` unit meta-data class'
:attr:`Unit.Meta.template_constraints` field.
"""
# Non-parametric units are always valid
if not unit.is_parametric:
return
# Parametric units should obey the parametric constraints (encoded in
# the helper meta-data class Meta's template_constraints field)
for field, param_set in unit.get_accessed_parameters().items():
constraint = unit.Meta.template_constraints.get(field)
# Fields cannot refer to parameters that we don't have
for param_name in param_set:
if param_name not in unit.parameters:
raise ValidationError(field, Problem.wrong)
# Fields without constraints are otherwise valid.
if constraint is None:
continue
assert constraint in ('vary', 'const')
# Fields that need to be variable cannot have a non-parametrized
# value
if constraint == 'vary' and len(param_set) == 0:
raise ValidationError(field, Problem.constant)
# Fields that need to be constant cannot have parametrized value
elif constraint == 'const' and len(param_set) != 0:
raise ValidationError(field, Problem.variable)
class UnitWithIdValidatorLegacyAPI(UnitValidatorLegacyAPI):
@deprecated('0.11', 'use .check() instead')
def validate(self, unit, strict=False, deprecated=False):
super().validate(unit, strict, deprecated)
# Check if the partial_id field is empty
if unit.partial_id is None:
raise ValidationError("id", Problem.missing)
class JobDefinitionValidatorLegacyAPI(UnitWithIdValidatorLegacyAPI):
@deprecated('0.11', 'use .check() instead')
def validate(self, job, strict=False, deprecated=False):
"""
Validate the specified job
:param strict:
Enforce strict validation. Non-conforming jobs will be rejected.
This is off by default to ensure that non-critical errors don't
prevent jobs from running.
:param deprecated:
Enforce deprecation validation. Jobs having deprecated fields will
be rejected. This is off by default to allow backwards compatible
jobs to be used without any changes.
"""
super().validate(job, strict, deprecated)
from plainbox.impl.unit.job import JobDefinition
# Check if name is still being used, if running in strict mode
if deprecated and job.get_record_value('name') is not None:
raise ValidationError(job.fields.name, Problem.deprecated)
# Check if the partial_id field is empty
if job.partial_id is None:
raise ValidationError(job.fields.id, Problem.missing)
# Check if summary is empty, if running in strict mode
if strict and job.summary is None:
raise ValidationError(job.fields.summary, Problem.missing)
# Check if plugin is empty
if job.plugin is None:
raise ValidationError(job.fields.plugin, Problem.missing)
# Check if plugin has a good value
elif job.plugin not in JobDefinition.plugin.get_all_symbols():
raise ValidationError(job.fields.plugin, Problem.wrong)
# Check if user is given without a command to run, if running in strict
# mode
if strict and job.user is not None and job.command is None:
raise ValidationError(job.fields.user, Problem.useless)
# Check if environ is given without a command to run, if running in
# strict mode
if strict and job.environ is not None and job.command is None:
raise ValidationError(job.fields.environ, Problem.useless)
# Verify that command is present on a job within the subset that should
# really have them (shell, local, resource, attachment, user-verify and
# user-interact)
if job.plugin in {JobDefinition.plugin.shell,
JobDefinition.plugin.local,
JobDefinition.plugin.resource,
JobDefinition.plugin.attachment,
JobDefinition.plugin.user_verify,
JobDefinition.plugin.user_interact,
JobDefinition.plugin.user_interact_verify}:
# Check if shell jobs have a command
if job.command is None:
raise ValidationError(job.fields.command, Problem.missing)
# Check if user has a good value
if job.user not in (None, "root"):
raise ValidationError(job.fields.user, Problem.wrong)
# Do some special checks for manual jobs as those should really be
# fully interactive, non-automated jobs (otherwise they are either
# user-interact or user-verify)
if job.plugin == JobDefinition.plugin.manual:
# Ensure that manual jobs have a description
if job.description is None:
raise ValidationError(
job.fields.description, Problem.missing)
# Ensure that manual jobs don't have command, if running in strict
# mode
if strict and job.command is not None:
raise ValidationError(job.fields.command, Problem.useless)
estimated_duration = job.get_record_value('estimated_duration')
if estimated_duration is not None:
try:
float(estimated_duration)
except ValueError:
raise ValidationError(
job.fields.estimated_duration, Problem.wrong)
elif strict and estimated_duration is None:
raise ValidationError(
job.fields.estimated_duration, Problem.missing)
# The resource program should be valid
try:
job.get_resource_program()
except ResourceProgramError:
raise ValidationError(job.fields.requires, Problem.wrong)
class TemplateUnitValidatorLegacyAPI(UnitValidatorLegacyAPI):
@deprecated('0.11', 'use .check() instead')
def validate(self, template, strict=False, deprecated=False):
"""
Validate the specified job template
:param strict:
Enforce strict validation. Non-conforming jobs will be rejected.
This is off by default to ensure that non-critical errors don't
prevent jobs from running.
:param deprecated:
Enforce deprecation validation. Jobs having deprecated fields will
be rejected. This is off by default to allow backwards compatible
jobs to be used without any changes.
"""
super().validate(template, strict, deprecated)
# All templates need the template-resource field
if template.template_resource is None:
raise ValidationError(
template.fields.template_resource, Problem.missing)
# All templates need a valid (or empty) template filter
try:
template.get_filter_program()
except (ResourceProgramError, SyntaxError) as exc:
raise ValidationError(
template.fields.template_filter, Problem.wrong,
hint=str(exc))
# All templates should use the resource object correctly. This is
# verified by the code below. It generally means that fields should or
# should not use variability induced by the resource object data.
accessed_parameters = template.get_accessed_parameters(force=True)
# The unit field must be constant.
if ('unit' in accessed_parameters
and len(accessed_parameters['unit']) != 0):
raise ValidationError(template.fields.id, Problem.variable)
# Now that we know it's constant we can look up the unit it is supposed
# to instantiate.
try:
unit_cls = template.get_target_unit_cls()
except LookupError:
raise ValidationError(template.fields.unit, Problem.wrong)
# Let's look at the template constraints for the unit
for field, constraint in unit_cls.Meta.template_constraints.items():
assert constraint in ('vary', 'const')
if constraint == 'vary':
if (field in accessed_parameters
and len(accessed_parameters[field]) == 0):
raise ValidationError(field, Problem.constant)
elif constraint == 'const':
if (field in accessed_parameters
and len(accessed_parameters[field]) != 0):
raise ValidationError(field, Problem.variable)
        # Lastly, an example unit generated with a fake resource should still
        # be valid.
resource = self._get_fake_resource(accessed_parameters)
unit = template.instantiate_one(resource, unit_cls_hint=unit_cls)
return unit.validate(strict=strict, deprecated=deprecated)
@classmethod
def _get_fake_resource(cls, accessed_parameters):
return Resource({
key: key.upper()
for key in set(itertools.chain(*accessed_parameters.values()))
})
class CategoryUnitValidatorLegacyAPI(UnitWithIdValidatorLegacyAPI):
@deprecated('0.11', 'use .check() instead')
def validate(self, unit, strict=False, deprecated=False):
"""
Validate the specified category
:param unit:
:class:`CategoryUnit` to validate
:param strict:
Enforce strict validation. Non-conforming categories will be
rejected. This is off by default to ensure that non-critical errors
don't prevent categories from being used.
:param deprecated:
Enforce deprecation validation. Categories having deprecated fields
will be rejected. This is off by default to allow backwards
compatible categories to be used without any changes.
"""
# Check basic stuff
super().validate(unit, strict=strict, deprecated=deprecated)
# Check if name is empty
if unit.name is None:
raise ValidationError(unit.fields.name, Problem.missing)
class TestPlanUnitValidatorLegacyAPI(UnitWithIdValidatorLegacyAPI):
"""
Validator for :class:`TestPlanUnit`
"""
@deprecated('0.11', 'use .check() instead')
def validate(self, unit, **validation_kwargs):
# Check basic stuff
super().validate(unit, **validation_kwargs)
# Check if name field is empty
if unit.name is None:
raise ValidationError("name", Problem.missing)
# Check that we can convert include + exclude into a list of qualifiers
# this is not perfect but it has some sort of added value
if unit.include is not None:
self._validate_selector(unit, "include")
if unit.exclude is not None:
self._validate_selector(unit, "exclude")
# check if .estimated_duration crashes on ValueError
try:
unit.estimated_duration
except ValueError:
raise ValidationError("estimated_duration", Problem.wrong)
def _validate_selector(self, unit, field_name):
field_value = getattr(unit, field_name)
matchers_gen = unit.parse_matchers(field_value)
for lineno_offset, matcher_field, matcher, error in matchers_gen:
if error is None:
continue
raise ValidationError(
field_name, Problem.wrong,
                hint=_("invalid regular expression: {0}").format(str(error)),
origin=unit.origin.with_offset(
lineno_offset + unit.field_offset_map[field_name]
).just_line())
# --- units ---
class UnitLegacyAPI:
@deprecated("0.7", "call unit.tr_unit() instead")
def get_unit_type(self):
return self.tr_unit()
@deprecated('0.11', 'use .check() instead')
def validate(self, **validation_kwargs):
"""
Validate data stored in the unit
:param validation_kwargs:
Validation parameters (may vary per subclass)
:raises ValidationError:
If the unit is incorrect somehow.
Non-parametric units are always valid. Parametric units are valid if
they don't violate the parametric constraints encoded in the
:class:`Unit.Meta` unit meta-data class'
:attr:`Unit.Meta.template_constraints` field.
"""
return UnitValidatorLegacyAPI().validate(self, **validation_kwargs)
class Meta:
template_constraints = {
'unit': 'const'
}
class UnitWithIdLegacyAPI(UnitLegacyAPI):
@deprecated('0.11', 'use .check() instead')
def validate(self, **validation_kwargs):
"""
Validate data stored in the unit
:param validation_kwargs:
Validation parameters (may vary per subclass)
:raises ValidationError:
If the unit is incorrect somehow.
Non-parametric units are always valid. Parametric units are valid if
they don't violate the parametric constraints encoded in the
:class:`Unit.Meta` unit meta-data class'
:attr:`Unit.Meta.template_constraints` field.
"""
return UnitWithIdValidatorLegacyAPI().validate(
self, **validation_kwargs)
class Meta(UnitLegacyAPI.Meta):
template_constraints = dict(UnitLegacyAPI.Meta.template_constraints)
template_constraints.update({
'id': 'vary'
})
class JobDefinitionLegacyAPI(UnitWithIdLegacyAPI):
@property
@deprecated('0.11', 'use .partial_id or .summary instead')
def name(self):
return self.get_record_value('name')
def validate(self, **validation_kwargs):
"""
Validate this job definition
:param validation_kwargs:
Keyword arguments to pass to the
:meth:`JobDefinitionValidator.validate()`
:raises ValidationError:
If the job has any problems that make it unsuitable for execution.
"""
JobDefinitionValidatorLegacyAPI().validate(
self, **validation_kwargs)
class Meta(UnitWithIdLegacyAPI.Meta):
template_constraints = {
'name': 'vary',
'unit': 'const',
# The 'id' field should be always variable (depending on at least
# resource reference) or clashes are inevitable (they can *still*
# occur but this is something we cannot prevent).
'id': 'vary',
# The summary should never be constant as that would be confusing
# to the test operator. If it is defined in the template it should
# be customized by at least one resource reference.
'summary': 'vary',
# The 'plugin' field should be constant as otherwise validation is
# very unreliable. There is no current demand for being able to
# customize it from a resource record.
'plugin': 'const',
# The description should never be constant as that would be
# confusing to the test operator. If it is defined in the template
# it should be customized by at least one resource reference.
'description': 'vary',
# There is no conceivable value in having a variable user field
'user': 'const',
'environ': 'const',
# TODO: what about estimated duration?
# 'estimated_duration': '?',
# TODO: what about depends and requires?
#
# If both are const then we can determine test ordering without any
# action and the ordering is not perturbed at runtime. This may be
# too strong of a limitation though. We'll see.
# 'depends': '?',
# 'requires': '?',
'shell': 'const',
'imports': 'const',
'category_id': 'const',
}
class CategoryUnitLegacyAPI(UnitWithIdLegacyAPI):
@deprecated('0.11', 'use .check() instead')
def validate(self, **validation_kwargs):
"""
Validate this job definition
:param validation_kwargs:
Keyword arguments to pass to the
:meth:`CategoryUnitValidator.validate()`
:raises ValidationError:
If the category has any problems.
"""
return CategoryUnitValidatorLegacyAPI().validate(
self, **validation_kwargs)
class Meta(UnitWithIdLegacyAPI.Meta):
template_constraints = dict(
UnitWithIdLegacyAPI.Meta.template_constraints)
template_constraints.update({
# The name field should vary so that instantiated categories
# have different user-visible names
'name': 'vary',
})
class TemplateUnitLegacyAPI(UnitLegacyAPI):
@deprecated('0.11', 'use .check() instead')
def validate(self, **validation_kwargs):
"""
Validate this job definition template
:param validation_kwargs:
Keyword arguments to pass to the
:meth:`TemplateUnitValidator.validate()`
:raises ValidationError:
If the template has any problems that make it unsuitable for
execution.
"""
return TemplateUnitValidatorLegacyAPI().validate(
self, **validation_kwargs)
class Meta(UnitLegacyAPI.Meta):
pass
class TestPlanUnitLegacyAPI(UnitWithIdLegacyAPI):
@deprecated('0.11', 'use .check() instead')
def validate(self, **validation_kwargs):
"""
Validate data stored in the unit
:param validation_kwargs:
Validation parameters (may vary per subclass)
:raises ValidationError:
If the unit is incorrect somehow.
Non-parametric units are always valid. Parametric units are valid if
they don't violate the parametric constraints encoded in the
:class:`Unit.Meta` unit meta-data class'
:attr:`Unit.Meta.template_constraints` field.
"""
return TestPlanUnitValidatorLegacyAPI().validate(
self, **validation_kwargs)
# plainbox-0.25/plainbox/impl/unit/test_category.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.unit.test_category
================================
Test definitions for plainbox.impl.unit.category module
"""
from unittest import TestCase
import warnings
from plainbox.impl.secure.origin import FileTextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.rfc822 import RFC822Record
from plainbox.impl.unit.category import CategoryUnit
from plainbox.impl.unit.test_unit_with_id import UnitWithIdFieldValidationTests
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
from plainbox.impl.validation import ValidationError
from plainbox.vendor import mock
class CategoryUnitTests(TestCase):
def setUp(self):
self._record = RFC822Record({
'id': 'id',
'name': 'name',
}, Origin(FileTextSource('file.txt'), 1, 2))
self._gettext_record = RFC822Record({
'_id': 'id',
'_name': 'name'
}, Origin(FileTextSource('file.txt.in'), 1, 2))
warnings.filterwarnings(
'ignore', 'validate is deprecated since version 0.11')
def tearDown(self):
warnings.resetwarnings()
def test_instantiate_template(self):
data = mock.Mock(name='data')
raw_data = mock.Mock(name='raw_data')
origin = mock.Mock(name='origin')
provider = mock.Mock(name='provider')
parameters = mock.Mock(name='parameters')
field_offset_map = mock.Mock(name='field_offset_map')
unit = CategoryUnit.instantiate_template(
data, raw_data, origin, provider, parameters, field_offset_map)
self.assertIs(unit._data, data)
self.assertIs(unit._raw_data, raw_data)
self.assertIs(unit._origin, origin)
self.assertIs(unit._provider, provider)
self.assertIs(unit._parameters, parameters)
self.assertIs(unit._field_offset_map, field_offset_map)
def test_smoke_record(self):
cat = CategoryUnit(self._record.data)
self.assertEqual(cat.id, "id")
self.assertEqual(cat.name, "name")
def test_smoke_gettext_record(self):
cat = CategoryUnit(self._gettext_record.data)
self.assertEqual(cat.id, "id")
self.assertEqual(cat.name, "name")
def test_str(self):
cat = CategoryUnit(self._record.data)
self.assertEqual(str(cat), "name")
def test_id(self):
cat = CategoryUnit(self._record.data)
self.assertEqual(cat.id, "id")
def test_partial_id(self):
cat = CategoryUnit(self._record.data)
self.assertEqual(cat.partial_id, "id")
def test_repr(self):
cat = CategoryUnit(self._record.data)
        expected = "<CategoryUnit id:'id' name:'name'>"
observed = repr(cat)
self.assertEqual(expected, observed)
def test_tr_name(self):
"""
        Verify that CategoryUnit.tr_name() works as expected
"""
cat = CategoryUnit(self._record.data)
with mock.patch.object(cat, "get_translated_record_value") as mgtrv:
retval = cat.tr_name()
# Ensure that get_translated_record_value() was called
mgtrv.assert_called_once_with(cat.name)
            # Ensure tr_name() returned its return value
self.assertEqual(retval, mgtrv())
def test_validate(self):
# NOTE: this test depends on the order of checks in UnitValidator
# Id is required
with self.assertRaises(ValidationError) as boom:
CategoryUnit({}).validate()
self.assertEqual(boom.exception.problem, Problem.missing)
self.assertEqual(boom.exception.field, 'id')
# Name is also required
with self.assertRaises(ValidationError) as boom:
CategoryUnit({'id': 'id'}).validate()
self.assertEqual(boom.exception.problem, Problem.missing)
self.assertEqual(boom.exception.field, 'name')
# When both id and name are present, everything is OK
self.assertIsNone(CategoryUnit({
'id': 'id', 'name': 'name'
}).validate())
class CategoryUnitFieldValidationTests(UnitWithIdFieldValidationTests):
unit_cls = CategoryUnit
def test_name__translatable(self):
issue_list = self.unit_cls({
'name': 'name'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.name,
Problem.expected_i18n, Severity.warning)
def test_name__template_variant(self):
issue_list = self.unit_cls({
'name': 'name'
}, provider=self.provider, parameters={}).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.name,
Problem.constant, Severity.error)
def test_name__present(self):
issue_list = self.unit_cls({
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.name,
Problem.missing, Severity.error)
def test_name__one_line(self):
issue_list = self.unit_cls({
'name': 'line1\nline2'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.name,
Problem.wrong, Severity.warning)
def test_name__short_line(self):
issue_list = self.unit_cls({
'name': 'x' * 81
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.name,
Problem.wrong, Severity.warning)
# plainbox-0.25/plainbox/impl/unit/exporter.py
# This file is part of Checkbox.
#
# Copyright 2015 Canonical Ltd.
# Written by:
# Sylvain Pineau
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""Exporter Entry Unit."""
import json
import logging
import os.path
import re
import pkg_resources
from plainbox.i18n import gettext as _
from plainbox.impl.symbol import SymbolDef
from plainbox.impl.unit.unit_with_id import UnitWithId
from plainbox.impl.unit.validators import CorrectFieldValueValidator
from plainbox.impl.unit.validators import PresentFieldValidator
from plainbox.impl.unit.validators import TranslatableFieldValidator
from plainbox.impl.unit.validators import UntranslatableFieldValidator
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
logger = logging.getLogger("plainbox.unit.exporter")
__all__ = ('ExporterUnit', )
class ExporterUnit(UnitWithId):
"""
Unit representing a session exporter.
This unit is used to define mechanisms for exporting session state data
into any format.
"""
def __str__(self):
return self.summary
def __repr__(self):
        return "<ExporterUnit id:{!r} entry_point:{!r}>".format(
self.id, self.entry_point)
@property
def support(self):
if not self.check():
return ExporterUnitSupport(self)
else:
return None
@property
def summary(self):
"""
Summary of this exporter.
.. note::
This value is not translated, see :meth:`tr_summary()` for
a translated equivalent.
"""
return self.get_record_value('summary', '')
def tr_summary(self):
"""Get the translated version of :meth:`summary`."""
return self.get_translated_record_value('summary', '')
@property
def entry_point(self):
"""Exporter EntryPoint to call."""
return self.get_record_value('entry_point')
@property
def file_extension(self):
"""Filename extension when the exporter stream is saved to a file."""
return self.get_record_value('file_extension')
@property
def options(self):
"""Configuration options to send to the exporter class."""
return self.get_record_value('options')
@property
def data(self):
"""Data to send to the exporter class."""
return self.get_record_value('data')
class Meta:
name = 'exporter'
class fields(SymbolDef):
"""Symbols for each field that an Exporter can have."""
summary = 'summary'
entry_point = 'entry_point'
file_extension = 'file_extension'
options = 'options'
data = 'data'
field_validators = {
fields.summary: [
PresentFieldValidator(severity=Severity.advice),
TranslatableFieldValidator,
# We want the summary to be a single line
CorrectFieldValueValidator(
lambda summary: summary.count("\n") == 0,
Problem.wrong, Severity.warning,
message=_("please use only one line"),
onlyif=lambda unit: unit.summary is not None),
# We want the summary to be relatively short
CorrectFieldValueValidator(
lambda summary: len(summary) <= 80,
Problem.wrong, Severity.warning,
message=_("please stay under 80 characters"),
onlyif=lambda unit: unit.summary is not None),
],
fields.entry_point: [
PresentFieldValidator,
UntranslatableFieldValidator,
CorrectFieldValueValidator(
lambda entry_point: pkg_resources.load_entry_point(
'plainbox', 'plainbox.exporter', entry_point),
Problem.wrong, Severity.error),
],
fields.file_extension: [
PresentFieldValidator,
UntranslatableFieldValidator,
CorrectFieldValueValidator(
                lambda extension: re.search(r"^[\w\.\-]+$", extension),
Problem.syntax_error, Severity.error),
],
fields.options: [
UntranslatableFieldValidator,
],
fields.data: [
UntranslatableFieldValidator,
CorrectFieldValueValidator(
lambda value, unit: json.loads(value),
Problem.syntax_error, Severity.error,
onlyif=lambda unit: unit.data),
CorrectFieldValueValidator(
lambda value, unit: os.path.isfile(os.path.join(
unit.provider.data_dir,
json.loads(value)['template'])),
Problem.wrong, Severity.error,
message=_("Jinja2 template not found"),
onlyif=lambda unit: unit.entry_point == 'jinja2'),
],
}
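The ``file_extension`` validator above only accepts word characters, dots and dashes. A quick standalone check of that pattern (plain ``re``, independent of plainbox):

```python
import re

# Same pattern as the file_extension validator: word chars, dots, dashes only.
FILE_EXTENSION_RE = re.compile(r"^[\w\.\-]+$")

for candidate in ("html", "tar.xz", "json", "bad/ext", "no spaces"):
    ok = FILE_EXTENSION_RE.search(candidate) is not None
    print(candidate, "->", "valid" if ok else "rejected")
```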
class ExporterUnitSupport():
"""
Helper class that distills exporter data into more usable form.
    This class serves to offload some of the code from :class:`ExporterUnit`.
    It takes a single exporter unit and extracts all the interesting
    information out of it. Subsequently it exposes that data so that some
    methods on the exporter unit class itself can be implemented in an easier
    way.
"""
def __init__(self, exporter):
self._data = self._get_data(exporter)
self._data_dir = exporter.provider.data_dir
self.exporter_cls = self._get_exporter_cls(exporter)
self._option_list = self._get_option_list(exporter)
self.file_extension = exporter.file_extension
self.summary = exporter.tr_summary()
if exporter.entry_point == 'jinja2':
self._template = self._data['template']
@property
def data(self):
return self._data
@property
def data_dir(self):
return self._data_dir
@property
def option_list(self):
return self._option_list
@property
def template(self):
return self._template
def _get_data(self, exporter):
"""Data to send to the exporter class."""
if exporter.data:
return json.loads(exporter.data)
else:
return {}
def _get_option_list(self, exporter):
"""Option list to send to the exporter class."""
if exporter.options:
return re.split(r'[;,\s]+', exporter.options)
else:
return []
def _get_exporter_cls(self, exporter):
"""Return the exporter class."""
return pkg_resources.load_entry_point(
'plainbox', 'plainbox.exporter', exporter.entry_point)
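``_get_option_list()`` above splits the raw ``options`` string on semicolons, commas, or runs of whitespace. A minimal standalone sketch of that behaviour (same regular expression, no plainbox imports; compare the ``test_options`` case in test_exporter.py below):

```python
import re

def parse_option_list(options):
    """Split an exporter options string on ';', ',' or runs of whitespace."""
    if not options:
        return []
    return re.split(r'[;,\s]+', options)

print(parse_option_list('a bc de=f, g ;h, ij-k\nlm=nop , q_r'))
# -> ['a', 'bc', 'de=f', 'g', 'h', 'ij-k', 'lm=nop', 'q_r']
```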
plainbox-0.25/plainbox/impl/unit/test_unit_with_id.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Sylvain Pineau
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.unit.test_unit_with_id
====================================
Test definitions for plainbox.impl.unit.unit_with_id module
"""
from plainbox.impl.unit.test_unit import UnitFieldValidationTests
from plainbox.impl.unit.unit_with_id import UnitWithId
from plainbox.impl.unit.validators import UnitValidationContext
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
class UnitWithIdFieldValidationTests(UnitFieldValidationTests):
unit_cls = UnitWithId
def test_id__untranslatable(self):
issue_list = self.unit_cls({
'_id': 'id'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.id,
Problem.unexpected_i18n, Severity.warning)
def test_id__template_variant(self):
issue_list = self.unit_cls({
'id': 'id'
}, parameters={}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.id,
Problem.constant, Severity.error)
def test_id__present(self):
issue_list = self.unit_cls({
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.id,
Problem.missing, Severity.error)
def test_id__unique(self):
unit = self.unit_cls({
'id': 'id'
}, provider=self.provider)
other_unit = self.unit_cls({
'id': 'id'
}, provider=self.provider)
self.provider.unit_list = [unit, other_unit]
self.provider.problem_list = []
context = UnitValidationContext([self.provider])
message_start = (
"{} 'id', field 'id', clashes with 1 other unit,"
" look at: "
).format(unit.tr_unit())
issue_list = unit.check(context=context)
issue = self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.id,
Problem.not_unique, Severity.error)
self.assertTrue(issue.message.startswith(message_start))
def test_id__without_namespace(self):
unit = self.unit_cls({
'id': 'some_ns::id'
}, provider=self.provider)
issue_list = unit.check()
message = (
"{} 'some_ns::id', field 'id', identifier cannot"
" define a custom namespace"
).format(unit.tr_unit())
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.id,
Problem.wrong, Severity.error, message)
plainbox-0.25/plainbox/impl/unit/test_exporter.py
# This file is part of Checkbox.
#
# Copyright 2015 Canonical Ltd.
# Written by:
# Sylvain Pineau
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.unit.test_exporter
================================
Test definitions for plainbox.impl.unit.exporter module
"""
from unittest import TestCase
import warnings
from plainbox.impl.secure.origin import FileTextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.rfc822 import RFC822Record
from plainbox.impl.unit.exporter import ExporterUnit
from plainbox.impl.unit.exporter import ExporterUnitSupport
from plainbox.impl.unit.test_unit_with_id import UnitWithIdFieldValidationTests
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
from plainbox.impl.validation import ValidationError
from plainbox.vendor import mock
class ExporterUnitTests(TestCase):
def setUp(self):
self._record = RFC822Record({
'id': 'id',
'unit': 'exporter',
'_summary': 'summary',
'entry_point': 'text',
'file_extension': 'file_extension',
}, Origin(FileTextSource('file.txt'), 1, 2))
warnings.filterwarnings(
'ignore', 'validate is deprecated since version 0.11')
def tearDown(self):
warnings.resetwarnings()
def test_smoke_record(self):
exp = ExporterUnit(self._record.data)
self.assertEqual(exp.id, "id")
self.assertEqual(exp.summary, "summary")
def test_str(self):
exp = ExporterUnit(self._record.data)
self.assertEqual(str(exp), "summary")
def test_id(self):
exp = ExporterUnit(self._record.data)
self.assertEqual(exp.id, "id")
def test_partial_id(self):
exp = ExporterUnit(self._record.data)
self.assertEqual(exp.partial_id, "id")
def test_repr(self):
exp = ExporterUnit(self._record.data)
        expected = "<ExporterUnit id:'id'>"
observed = repr(exp)
self.assertEqual(expected, observed)
def test_tr_summary(self):
"""Verify that ExporterUnit.tr_summary() works as expected."""
exp = ExporterUnit(self._record.data)
with mock.patch.object(exp, "get_translated_record_value") as mgtrv:
retval = exp.tr_summary()
# Ensure that get_translated_record_value() was called
mgtrv.assert_called_once_with(exp.summary, '')
# Ensure tr_summary() returned its return value
self.assertEqual(retval, mgtrv())
def test_options(self):
exp = mock.Mock(spec_set=ExporterUnit)
exp.data = "{}"
exp.entry_point = 'text'
exp.options = 'a bc de=f, g ;h, ij-k\nlm=nop , q_r'
exp.check.return_value = False
sup = ExporterUnitSupport(exp)
self.assertEqual(
sup.option_list,
['a', 'bc', 'de=f', 'g', 'h', 'ij-k', 'lm=nop', 'q_r'])
def test_validate(self):
# NOTE: this test depends on the order of checks in UnitValidator
# Id is required
with self.assertRaises(ValidationError) as boom:
ExporterUnit({}).validate()
self.assertEqual(boom.exception.problem, Problem.missing)
self.assertEqual(boom.exception.field, 'id')
# When both id, file_extension and entry_point are present, everything
# is OK
self.assertIsNone(ExporterUnit({
'id': 'id', 'entry_point': 'entry_point',
'file_extension': 'file_extension'
}).validate())
class ExporterUnitFieldValidationTests(UnitWithIdFieldValidationTests):
unit_cls = ExporterUnit
def a_test_summary__present(self):
issue_list = self.unit_cls({
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.summary,
Problem.missing, Severity.advice)
def test_summary__translatable(self):
issue_list = self.unit_cls({
'summary': 'summary'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.summary,
Problem.expected_i18n, Severity.warning)
def test_entry_point__untranslatable(self):
issue_list = self.unit_cls({
'_entry_point': 'entry_point'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.entry_point,
Problem.unexpected_i18n, Severity.warning)
def test_file_extension__present(self):
issue_list = self.unit_cls({
}, provider=self.provider).check()
self.assertIssueFound(issue_list,
self.unit_cls.Meta.fields.file_extension,
Problem.missing, Severity.error)
def test_file_extension__untranslatable(self):
issue_list = self.unit_cls({
'_file_extension': 'file_extension'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.file_extension,
Problem.unexpected_i18n, Severity.warning)
def test_entry_point__present(self):
issue_list = self.unit_cls({
}, provider=self.provider).check()
self.assertIssueFound(issue_list,
self.unit_cls.Meta.fields.entry_point,
Problem.missing, Severity.error)
def test_data__untranslatable(self):
issue_list = self.unit_cls({
'_data': '{"foo": "bar"}'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.data,
Problem.unexpected_i18n, Severity.warning)
def test_data__json_content(self):
issue_list = self.unit_cls({
'data': 'junk'
}, provider=self.provider).check()
self.assertIssueFound(
issue_list, self.unit_cls.Meta.fields.data,
Problem.syntax_error, Severity.error)
plainbox-0.25/plainbox/impl/unit/__init__.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.unit` -- package with all of the units
==========================================================
"""
import string
from plainbox.impl.secure.plugins import PkgResourcesPlugInCollection
__all__ = ['get_accessed_parameters', 'all_units']
def get_accessed_parameters(text):
"""
Parse a new-style python string template and return parameter names
:param text:
Text string to parse
:returns:
A frozenset() with a list of names (or indices) of accessed parameters
"""
# https://docs.python.org/3.4/library/string.html#string.Formatter.parse
#
# info[1] is the field_name (name of the referenced
# formatting field) it _may_ be None if there are no format
# parameters used
return frozenset(
info[1] for info in string.Formatter().parse(text)
if info[1] is not None)
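``get_accessed_parameters()`` above leans on ``string.Formatter().parse()``; a small standalone illustration of what it returns (same body, runnable on its own):

```python
import string

def get_accessed_parameters(text):
    """Return a frozenset of field names referenced by a format string."""
    # info[1] is the field_name; it is None for purely literal segments.
    return frozenset(
        info[1] for info in string.Formatter().parse(text)
        if info[1] is not None)

print(sorted(get_accessed_parameters('Hello {name}, you have {count} items')))
# -> ['count', 'name']
```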
# Collection of all unit classes
all_units = PkgResourcesPlugInCollection('plainbox.unit')
plainbox-0.25/plainbox/impl/unit/test_packging.py
# This file is part of Checkbox.
#
# Copyright 2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""Tests for the PackagingMetaDataUnit and friends."""
from unittest import TestCase
from plainbox.impl.unit.packaging import DebianPackagingDriver
from plainbox.impl.unit.packaging import PackagingMetaDataUnit
from plainbox.impl.unit.packaging import _strategy_id
from plainbox.impl.unit.packaging import _strategy_id_like
from plainbox.impl.unit.packaging import _strategy_id_version
class DebianPackagingDriverTests(TestCase):
"""Tests for the DebianPackagingDriver class."""
DEBIAN_JESSIE = {
'PRETTY_NAME': "Debian GNU/Linux 8 (jessie)",
'NAME': "Debian GNU/Linux",
'VERSION_ID': "8",
'VERSION': "8 (jessie)",
'ID': 'debian',
'HOME_URL': "http://www.debian.org/",
'SUPPORT_URL': "http://www.debian.org/support/",
'BUG_REPORT_URL': "https://bugs.debian.org/",
}
DEBIAN_SID = {
'PRETTY_NAME': "Debian GNU/Linux stretch/sid",
'NAME': "Debian GNU/Linux",
'ID': 'debian',
'HOME_URL': "https://www.debian.org/",
'SUPPORT_URL': "https://www.debian.org/support/",
'BUG_REPORT_URL': "https://bugs.debian.org/",
}
UBUNTU_VIVID = {
'NAME': "Ubuntu",
'VERSION': "15.04 (Vivid Vervet)",
'ID': 'ubuntu',
'ID_LIKE': 'debian',
'PRETTY_NAME': "Ubuntu 15.04",
'VERSION_ID': "15.04",
'HOME_URL': "http://www.ubuntu.com/",
'SUPPORT_URL': "http://help.ubuntu.com/",
'BUG_REPORT_URL': "http://bugs.launchpad.net/ubuntu/",
}
def test_fix_1476678(self):
"""Check https://bugs.launchpad.net/plainbox/+bug/1476678."""
driver = DebianPackagingDriver({})
driver.collect(PackagingMetaDataUnit({
'Depends': (
'python3-checkbox-support (>= 0.2),\n'
'python3 (>= 3.2),\n'),
'Recommends': (
'dmidecode,\n'
'dpkg (>= 1.13),\n'
'lsb-release,\n'
'wodim')
}))
self.assertEqual(driver._depends, [
'python3-checkbox-support (>= 0.2)',
'python3 (>= 3.2)',
])
self.assertEqual(driver._recommends, [
'dmidecode',
'dpkg (>= 1.13)',
'lsb-release',
'wodim'
])
self.assertEqual(driver._suggests, [])
def test_fix_1477095(self):
"""Check https://bugs.launchpad.net/plainbox/+bug/1477095."""
        # This unit is supposed to work for Debian (any version) and derivatives.
# Note below that id match lets both Debian Jessie and Debian Sid pass
# and that id_like match also lets Ubuntu Vivid pass.
unit = PackagingMetaDataUnit({
'os-id': 'debian',
})
# Using id and version match
self.assertFalse(_strategy_id_version(unit, {}))
self.assertFalse(_strategy_id_version(unit, self.DEBIAN_SID))
self.assertFalse(_strategy_id_version(unit, self.DEBIAN_JESSIE))
self.assertFalse(_strategy_id_version(unit, self.UBUNTU_VIVID))
# Using id match
self.assertFalse(_strategy_id(unit, {}))
self.assertTrue(_strategy_id(unit, self.DEBIAN_SID))
self.assertTrue(_strategy_id(unit, self.DEBIAN_JESSIE))
self.assertFalse(_strategy_id(unit, self.UBUNTU_VIVID))
# Using id like
self.assertFalse(_strategy_id_like(unit, {}))
self.assertFalse(_strategy_id_like(unit, self.DEBIAN_SID))
self.assertFalse(_strategy_id_like(unit, self.DEBIAN_JESSIE))
self.assertTrue(_strategy_id_like(unit, self.UBUNTU_VIVID))
        # This unit is supposed to work for Debian Jessie only. Note below that
# only Debian Jessie is passed and only by id and version match.
# Nothing else is allowed.
unit = PackagingMetaDataUnit({
'os-id': 'debian',
'os-version-id': '8'
})
# Using id and version match
self.assertFalse(_strategy_id_version(unit, {}))
self.assertFalse(_strategy_id_version(unit, self.DEBIAN_SID))
self.assertTrue(_strategy_id_version(unit, self.DEBIAN_JESSIE))
self.assertFalse(_strategy_id_version(unit, self.UBUNTU_VIVID))
# Using id match
self.assertFalse(_strategy_id(unit, {}))
self.assertFalse(_strategy_id(unit, self.DEBIAN_SID))
self.assertFalse(_strategy_id(unit, self.DEBIAN_JESSIE))
self.assertFalse(_strategy_id(unit, self.UBUNTU_VIVID))
# Using id like
self.assertFalse(_strategy_id_like(unit, {}))
self.assertFalse(_strategy_id_like(unit, self.DEBIAN_SID))
self.assertFalse(_strategy_id_like(unit, self.DEBIAN_JESSIE))
self.assertFalse(_strategy_id_like(unit, self.UBUNTU_VIVID))
        # This unit is supposed to work for Ubuntu (any version) and
        # derivatives. Note that none of the Debian versions pass anymore and
        # the only version that is allowed here is the one Vivid version we
        # test for. (If there were an Elementary test here it would have
        # passed as well, I hope.)
unit = PackagingMetaDataUnit({
'os-id': 'ubuntu',
})
# Using id and version match
self.assertFalse(_strategy_id_version(unit, {}))
self.assertFalse(_strategy_id_version(unit, self.DEBIAN_SID))
self.assertFalse(_strategy_id_version(unit, self.DEBIAN_JESSIE))
self.assertFalse(_strategy_id_version(unit, self.UBUNTU_VIVID))
# Using id match
self.assertFalse(_strategy_id(unit, {}))
self.assertFalse(_strategy_id(unit, self.DEBIAN_SID))
self.assertFalse(_strategy_id(unit, self.DEBIAN_JESSIE))
self.assertTrue(_strategy_id(unit, self.UBUNTU_VIVID))
# Using id like
self.assertFalse(_strategy_id_like(unit, {}))
self.assertFalse(_strategy_id_like(unit, self.DEBIAN_SID))
self.assertFalse(_strategy_id_like(unit, self.DEBIAN_JESSIE))
self.assertFalse(_strategy_id_like(unit, self.UBUNTU_VIVID))
        # This unit is supposed to work for Ubuntu Vivid only. Note that it
        # behaves
# exactly like the Debian Jessie test above. Only Ubuntu Vivid is
# passed and only by the id and version match.
unit = PackagingMetaDataUnit({
'os-id': 'ubuntu',
'os-version-id': '15.04'
})
# Using id and version match
self.assertFalse(_strategy_id_version(unit, {}))
self.assertFalse(_strategy_id_version(unit, self.DEBIAN_SID))
self.assertFalse(_strategy_id_version(unit, self.DEBIAN_JESSIE))
self.assertTrue(_strategy_id_version(unit, self.UBUNTU_VIVID))
# Using id match
self.assertFalse(_strategy_id(unit, {}))
self.assertFalse(_strategy_id(unit, self.DEBIAN_SID))
self.assertFalse(_strategy_id(unit, self.DEBIAN_JESSIE))
self.assertFalse(_strategy_id(unit, self.UBUNTU_VIVID))
# Using id like
self.assertFalse(_strategy_id_like(unit, {}))
self.assertFalse(_strategy_id_like(unit, self.DEBIAN_SID))
self.assertFalse(_strategy_id_like(unit, self.DEBIAN_JESSIE))
self.assertFalse(_strategy_id_like(unit, self.UBUNTU_VIVID))
plainbox-0.25/plainbox/impl/unit/manifest.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
""" Manifest Entry Unit. """
import logging
from plainbox.impl.symbol import SymbolDef
from plainbox.impl.unit.unit_with_id import UnitWithId
from plainbox.impl.unit.validators import CorrectFieldValueValidator
from plainbox.impl.unit.validators import PresentFieldValidator
from plainbox.impl.unit.validators import TemplateVariantFieldValidator
from plainbox.impl.unit.validators import TranslatableFieldValidator
from plainbox.impl.unit.validators import UntranslatableFieldValidator
logger = logging.getLogger("plainbox.unit.manifest")
__all__ = ('ManifestEntryUnit', )
class ManifestEntryUnit(UnitWithId):
"""
Unit representing a single entry in a hardware specification manifest.
This unit can be used to describe a single quality (either qualitative or
quantitative) of a device under test. Manifest data is provided externally
and cannot or should not be detected by the code running on the device.
"""
@property
def name(self):
""" Name of the entry. """
return self.get_record_value('name')
def tr_name(self):
""" Name of the entry (translated). """
return self.get_translated_record_value('name')
@property
def value_type(self):
"""
Type of value of the entry.
This field defines the kind of entry we wish to describe. Currently
only ``"natural"`` and ``"bool"`` are supported. This value is loaded
from the ``value-type`` field.
"""
return self.get_record_value('value-type')
@property
def value_unit(self):
"""
Type of unit the value is measured in.
Typically this will be the unit in which the quantity is measured, e.g.
"Mbit", "GB". This value is loaded from the ``value-unit`` field.
"""
return self.get_record_value('value-unit')
@property
def resource_key(self):
"""
Name of this manifest entry when presented as a resource.
This value is loaded from the ``resource-key`` field. It defaults to
the partial identifier of the unit.
"""
return self.get_record_value('resource-key', self.partial_id)
class Meta:
name = 'manifest entry'
class fields(SymbolDef):
            """ Symbols for each field that a manifest entry can have. """
name = 'name'
value_type = 'value-type'
value_unit = 'value-unit'
resource_key = 'resource-key'
field_validators = {
fields.name: [
TranslatableFieldValidator,
TemplateVariantFieldValidator,
PresentFieldValidator,
],
fields.value_type: [
UntranslatableFieldValidator,
PresentFieldValidator(),
CorrectFieldValueValidator(
lambda value_type: value_type in ('bool', 'natural')),
],
fields.value_unit: [
# OPTIONAL
],
fields.resource_key: [
UntranslatableFieldValidator,
]
}
plainbox-0.25/plainbox/impl/unit/test_validators.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Sylvain Pineau
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.unit.test_validators
==================================
Test definitions for plainbox.impl.validators
"""
from unittest import TestCase
from plainbox.impl.unit.validators import CorrectFieldValueValidator
from plainbox.impl.unit.validators import DeprecatedFieldValidator
from plainbox.impl.unit.validators import IFieldValidator
from plainbox.impl.unit.validators import PresentFieldValidator
from plainbox.impl.unit.validators import TemplateInvariantFieldValidator
from plainbox.impl.unit.validators import TemplateVariantFieldValidator
from plainbox.impl.unit.validators import TranslatableFieldValidator
from plainbox.impl.unit.validators import UniqueValueValidator
from plainbox.impl.unit.validators import UnitReferenceValidator
from plainbox.impl.unit.validators import UntranslatableFieldValidator
class NoTestsForAllThatCode(TestCase):
def test_fake(self):
# So that flake8 is silent
CorrectFieldValueValidator
DeprecatedFieldValidator
IFieldValidator
PresentFieldValidator
TemplateInvariantFieldValidator
TemplateVariantFieldValidator
TranslatableFieldValidator
UniqueValueValidator
UnitReferenceValidator
UntranslatableFieldValidator
self.assertTrue(True)
plainbox-0.25/plainbox/impl/unit/test_unit.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Sylvain Pineau
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.unit.test_unit
============================
Test definitions for the plainbox.impl.unit.unit module
"""
from unittest import TestCase
import warnings
from plainbox.abc import IProvider1
from plainbox.impl.unit.unit import Unit
from plainbox.impl.unit.unit_with_id import UnitWithId
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
from plainbox.impl.validation import ValidationError
from plainbox.vendor import mock
def setUpModule():
warnings.filterwarnings(
'ignore', 'validate is deprecated since version 0.11')
def tearDownModule():
warnings.resetwarnings()
class IssueMixIn:
"""
Mix in for TestCase to work with issues and issue lists
"""
def assertIssueFound(self, issue_list, field=None, kind=None,
severity=None, message=None):
"""
Raise an assertion unless an issue with the required fields is found
:param issue_list:
A list of issues to look through
:param field:
(optional) value that must match the same attribute on the Issue
:param kind:
(optional) value that must match the same attribute on the Issue
:param severity:
(optional) value that must match the same attribute on the Issue
:param message:
(optional) value that must match the same attribute on the Issue
:returns:
The issue matching those constraints, if found
"""
for issue in issue_list:
if field is not None and issue.field is not field:
continue
if severity is not None and issue.severity is not severity:
continue
if kind is not None and issue.kind is not kind:
continue
if message is not None and issue.message != message:
continue
return issue
else:
msg = "no issue matching:\n{}\nwas found in:\n{}".format(
'\n'.join(
' * {} is {!r}'.format(issue_attr, value)
for issue_attr, value in
[('field', field),
('severity', severity),
('kind', kind),
('message', message)]
if value is not None),
'\n'.join(" - {!r}".format(issue) for issue in issue_list))
return self.fail(msg)
class TestUnitDefinition(TestCase):
def test_instantiate_template(self):
data = mock.Mock(name='data')
raw_data = mock.Mock(name='raw_data')
origin = mock.Mock(name='origin')
provider = mock.Mock(name='provider')
parameters = mock.Mock(name='parameters')
field_offset_map = mock.Mock(name='field_offset_map')
unit = Unit.instantiate_template(
data, raw_data, origin, provider, parameters, field_offset_map)
self.assertIs(unit._data, data)
self.assertIs(unit._raw_data, raw_data)
self.assertIs(unit._origin, origin)
self.assertIs(unit._provider, provider)
self.assertIs(unit._parameters, parameters)
self.assertIs(unit._field_offset_map, field_offset_map)
def test_get_raw_record_value(self):
"""
Ensure that get_raw_record_value() works okay
"""
unit1 = Unit({'key': 'value'}, {'key': 'raw-value'})
unit2 = Unit({'_key': 'value'}, {'_key': 'raw-value'})
unit3 = Unit({'key': '{param}'}, {'key': 'raw-{param}'},
parameters={'param': 'value'})
unit4 = Unit({'key': '{missing_param}'},
{'key': 'raw-{missing_param}'},
parameters={'param': 'value'})
unit5 = Unit({})
unit6 = Unit({}, parameters={'param': 'value'})
self.assertEqual(unit1.get_raw_record_value('key'), 'raw-value')
self.assertEqual(unit2.get_raw_record_value('key'), 'raw-value')
self.assertEqual(unit3.get_raw_record_value('key'), 'raw-value')
with self.assertRaises(KeyError):
unit4.get_raw_record_value('key')
self.assertEqual(unit5.get_raw_record_value('key'), None)
self.assertEqual(
unit5.get_raw_record_value('key', 'default'), 'default')
self.assertEqual(unit6.get_raw_record_value('key'), None)
self.assertEqual(
unit6.get_raw_record_value('key', 'default'), 'default')
def test_get_record_value(self):
"""
Ensure that get_record_value() works okay
"""
unit1 = Unit({'key': 'value'}, {'key': 'raw-value'})
unit2 = Unit({'_key': 'value'}, {'_key': 'raw-value'})
unit3 = Unit({'key': '{param}'}, {'key': 'raw-{param}'},
parameters={'param': 'value'})
unit4 = Unit({'key': '{missing_param}'},
{'key': 'raw-{missing_param}'},
parameters={'param': 'value'})
unit5 = Unit({})
unit6 = Unit({}, parameters={'param': 'value'})
self.assertEqual(unit1.get_record_value('key'), 'value')
self.assertEqual(unit2.get_record_value('key'), 'value')
self.assertEqual(unit3.get_record_value('key'), 'value')
with self.assertRaises(KeyError):
unit4.get_record_value('key')
self.assertEqual(unit5.get_record_value('key'), None)
self.assertEqual(unit5.get_record_value('key', 'default'), 'default')
self.assertEqual(unit6.get_record_value('key'), None)
self.assertEqual(unit6.get_record_value('key', 'default'), 'default')
def test_validate(self):
# Empty units are valid, with or without parameters
Unit({}).validate()
Unit({}, parameters={}).validate()
# Fields cannot refer to parameters that are not supplied
with self.assertRaises(ValidationError) as boom:
Unit({'field': '{param}'}, parameters={}).validate()
self.assertEqual(boom.exception.field, 'field')
self.assertEqual(boom.exception.problem, Problem.wrong)
# Fields must obey template constraints. (id: vary)
with self.assertRaises(ValidationError) as boom:
UnitWithId({'id': 'a-simple-id'}, parameters={}).validate()
self.assertEqual(boom.exception.field, 'id')
self.assertEqual(boom.exception.problem, Problem.constant)
# Fields must obey template constraints. (unit: const)
with self.assertRaises(ValidationError) as boom:
Unit({'unit': '{parametric_id}'},
parameters={'parametric_id': 'foo'}).validate()
self.assertEqual(boom.exception.field, 'unit')
self.assertEqual(boom.exception.problem, Problem.variable)
def test_get_translated_data__typical(self):
"""
Verify the runtime behavior of get_translated_data()
"""
unit = Unit({})
with mock.patch.object(unit, "_provider") as mock_provider:
retval = unit.get_translated_data('foo')
mock_provider.get_translated_data.assert_called_with("foo")
self.assertEqual(retval, mock_provider.get_translated_data())
def test_get_translated_data__no_provider(self):
"""
Verify the runtime behavior of get_translated_data()
"""
unit = Unit({})
unit._provider = None
self.assertEqual(unit.get_translated_data('foo'), 'foo')
def test_get_translated_data__empty_msgid(self):
"""
Verify the runtime behavior of get_translated_data()
"""
unit = Unit({})
with mock.patch.object(unit, "_provider"):
self.assertEqual(unit.get_translated_data(''), '')
def test_get_translated_data__None_msgid(self):
"""
Verify the runtime behavior of get_translated_data()
"""
unit = Unit({})
with mock.patch.object(unit, "_provider"):
self.assertEqual(unit.get_translated_data(None), None)
@mock.patch('plainbox.impl.unit.unit.normalize_rfc822_value')
def test_get_normalized_translated_data__typical(self, mock_norm):
"""
verify the runtime behavior of get_normalized_translated_data()
"""
unit = Unit({})
with mock.patch.object(unit, "get_translated_data") as mock_tr:
retval = unit.get_normalized_translated_data('foo')
# get_translated_data('foo') was called
mock_tr.assert_called_with("foo")
# normalize_rfc822_value(x) was called
mock_norm.assert_called_with(mock_tr())
# return value was returned
self.assertEqual(retval, mock_norm())
@mock.patch('plainbox.impl.unit.unit.normalize_rfc822_value')
def test_get_normalized_translated_data__no_translation(self, mock_norm):
"""
verify the runtime behavior of get_normalized_translated_data()
"""
unit = Unit({})
with mock.patch.object(unit, "get_translated_data") as mock_tr:
mock_tr.return_value = None
retval = unit.get_normalized_translated_data('foo')
# get_translated_data('foo') was called
mock_tr.assert_called_with("foo")
# normalize_rfc822_value(x) was NOT called
self.assertEqual(mock_norm.call_count, 0)
# return value was returned
self.assertEqual(retval, 'foo')
def test_checksum_smoke(self):
unit1 = Unit({'plugin': 'plugin', 'user': 'root'})
identical_to_unit1 = Unit({'plugin': 'plugin', 'user': 'root'})
# Two distinct but identical units have the same checksum
self.assertEqual(unit1.checksum, identical_to_unit1.checksum)
unit2 = Unit({'plugin': 'plugin', 'user': 'anonymous'})
# Two units with different definitions have different checksum
self.assertNotEqual(unit1.checksum, unit2.checksum)
# The checksum is stable and does not change over time
self.assertEqual(
unit1.checksum,
"c47cc3719061e4df0010d061e6f20d3d046071fd467d02d093a03068d2f33400")
unit3 = Unit({'plugin': 'plugin', 'user': 'anonymous'},
parameters={'param': 'value'})
# Units with identical data but different parameters have different
# checksums
self.assertNotEqual(unit2.checksum, unit3.checksum)
# The checksum is stable and does not change over time
self.assertEqual(
unit3.checksum,
"5558e5231fb192e8126ed69d950972fa878375d1364a221ed6550852e7d5cde0")
def test_comparison(self):
# Ensure that units with equal data are equal
self.assertEqual(Unit({}), Unit({}))
# Ensure that units with equal data and equal parameters are equal
self.assertEqual(
Unit({}, parameters={'param': 'value'}),
Unit({}, parameters={'param': 'value'}))
# Ensure that units with equal data but different origin are still
# equal
self.assertEqual(
Unit({}, origin=mock.Mock()),
Unit({}, origin=mock.Mock()))
# Ensure that units with equal data but different provider are still
# equal
self.assertEqual(
Unit({}, provider=mock.Mock()),
Unit({}, provider=mock.Mock()))
# Ensure that units with equal data but different raw data are still
# equal
self.assertEqual(
Unit({}, raw_data={'key': 'raw-value-1'}),
Unit({}, raw_data={'key': 'raw-value-2'}))
# Ensure that units with different data are not equal
self.assertNotEqual(
Unit({'key': 'value'}), Unit({'key': 'other-value'}))
# Ensure that units with equal data but different parameters are not
# equal
self.assertNotEqual(
Unit({}, parameters={'param': 'value1'}),
Unit({}, parameters={'param': 'value2'}))
# Ensure that units are not equal to other classes
self.assertTrue(Unit({}) != object())
self.assertFalse(Unit({}) == object())
def test_get_accessed_parameters(self):
# There are no accessed parameters if the unit is not parameterized
self.assertEqual(
Unit({}).get_accessed_parameters(), {})
self.assertEqual(
Unit({'field': 'value'}).get_accessed_parameters(),
{'field': frozenset()})
self.assertEqual(
Unit({'field': '{param}'}).get_accessed_parameters(),
{'field': frozenset()})
# As soon as we enable parameters we get them exposed
self.assertEqual(
Unit({}, parameters={'param': 'value'}).get_accessed_parameters(),
{})
self.assertEqual(
Unit({
'field': 'value'}, parameters={'param': 'value'}
).get_accessed_parameters(), {'field': frozenset()})
self.assertEqual(
Unit({
'field': '{param}'}, parameters={'param': 'value'}
).get_accessed_parameters(), {'field': frozenset(['param'])})
# We can always use force=True to pretend any unit is parametric
self.assertEqual(Unit({}).get_accessed_parameters(force=True), {})
self.assertEqual(
Unit({'field': 'value'}).get_accessed_parameters(force=True),
{'field': frozenset()})
self.assertEqual(
Unit({'field': '{param}'}).get_accessed_parameters(force=True),
{'field': frozenset(['param'])})
def test_qualify_id__with_provider(self):
provider = mock.Mock(spec_set=IProvider1)
provider.namespace = 'ns'
unit = Unit({}, provider=provider)
self.assertEqual(unit.qualify_id('id'), 'ns::id')
self.assertEqual(unit.qualify_id('some-ns::id'), 'some-ns::id')
def test_qualify_id__without_provider(self):
unit = Unit({})
self.assertEqual(unit.qualify_id('id'), 'id')
self.assertEqual(unit.qualify_id('some-ns::id'), 'some-ns::id')
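The two tests above pin down the namespace-qualification rule: a bare identifier gains the provider namespace, an already-qualified identifier is returned unchanged, and without a provider nothing changes. A minimal standalone sketch of that rule (the helper name is illustrative, not part of the plainbox API):

```python
def qualify_id_sketch(partial_id, namespace=None):
    """Prefix partial_id with namespace unless it is already qualified."""
    if "::" in partial_id or namespace is None:
        return partial_id
    return "{}::{}".format(namespace, partial_id)

print(qualify_id_sketch("id", "ns"))           # ns::id
print(qualify_id_sketch("some-ns::id", "ns"))  # some-ns::id
print(qualify_id_sketch("id"))                 # id
```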
class UnitFieldValidationTests(TestCase, IssueMixIn):
unit_cls = Unit
def setUp(self):
self.provider = mock.Mock(spec_set=IProvider1)
self.provider.namespace = 'ns'
def test_unit__untranslatable(self):
issue_list = self.unit_cls({
'_unit': 'unit'
}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.unit,
Problem.unexpected_i18n, Severity.warning)
def test_unit__template_invariant(self):
issue_list = self.unit_cls({
'unit': '{attr}'
}, parameters={'attr': 'unit'}, provider=self.provider).check()
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.unit,
Problem.variable, Severity.error)
def test_unit__present(self):
issue_list = self.unit_cls({
}, provider=self.provider).check()
message = "field 'unit', unit should explicitly define its type"
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.unit,
Problem.missing, Severity.advice, message)
plainbox-0.25/plainbox/impl/unit/template.py 0000664 0001750 0001750 00000045101 12627266441 022010 0 ustar pierre pierre 0000000 0000000 # This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.unit.template` -- template unit
===================================================
"""
import itertools
import logging
from plainbox.i18n import gettext as _
from plainbox.i18n import gettext_noop as N_
from plainbox.impl.resource import ExpressionFailedError
from plainbox.impl.resource import Resource
from plainbox.impl.resource import ResourceProgram
from plainbox.impl.resource import parse_imports_stmt
from plainbox.impl.secure.origin import Origin
from plainbox.impl.symbol import SymbolDef
from plainbox.impl.unit import all_units
from plainbox.impl.unit._legacy import TemplateUnitLegacyAPI
from plainbox.impl.unit._legacy import TemplateUnitValidatorLegacyAPI
from plainbox.impl.unit.unit import Unit
from plainbox.impl.unit.unit import UnitValidator
from plainbox.impl.unit.validators import CorrectFieldValueValidator
from plainbox.impl.unit.validators import PresentFieldValidator
from plainbox.impl.unit.validators import ReferenceConstraint
from plainbox.impl.unit.validators import UnitReferenceValidator
from plainbox.impl.unit.validators import UntranslatableFieldValidator
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
__all__ = ['TemplateUnit']
logger = logging.getLogger("plainbox.unit.template")
class TemplateUnitValidator(UnitValidator, TemplateUnitValidatorLegacyAPI):
"""Validator for template unit."""
def check(self, unit):
for issue in super().check(unit):
yield issue
# Apart from all the per-field checks, ensure that the unit,
# if instantiated with fake resource, produces a valid target unit
accessed_parameters = unit.get_accessed_parameters(force=True)
resource = Resource({
key: key.upper()
for key in set(itertools.chain(*accessed_parameters.values()))
})
try:
new_unit = unit.instantiate_one(resource)
except Exception as exc:
self.error(unit, unit.Meta.fields.template_unit, Problem.wrong,
_("unable to instantiate template: {}").format(exc))
else:
# TODO: we may need some origin translation to correlate issues
# back to the template.
for issue in new_unit.check():
self.issue_list.append(issue)
yield issue
class TemplateUnit(Unit, TemplateUnitLegacyAPI):
"""
Template that can instantiate zero or more additional units.
Templates are a generalized replacement for the ``local job`` system from
Checkbox. Instead of running a job definition that prints additional job
definitions, a static template is provided. PlainBox has full visibility
into each of the fields in the template and can perform validation and
other analysis without having to run any external commands.
To instantiate a template, a resource object must be provided. This adds a
natural dependency from each template unit to a resource job definition
unit. Actual instantiation allows PlainBox to create an additional unit
instance for each eligible resource record. Eligible records are either all
records or the subset of records that causes the filter program to evaluate
to True. The filter program uses the familiar resource program syntax
available to normal job definitions.
:attr _filter_program:
Cached ResourceProgram computed (once) and returned by
:meth:`get_filter_program()`
"""
def __init__(self, data, origin=None, provider=None, raw_data=None,
parameters=None, field_offset_map=None):
"""
Initialize a new TemplateUnit instance.
:param data:
Normalized data that makes up this job template
:param origin:
An (optional) Origin object. If omitted a fake origin object is
created. Normally the origin object should be obtained from the
RFC822Record object.
:param provider:
An (optional) Provider1 object. If omitted it defaults to None but
the actual job template is not suitable for execution. All job
templates are expected to have a provider.
:param raw_data:
An (optional) raw version of data, without whitespace
normalization. If omitted then raw_data is assumed to be data.
:param parameters:
An (optional) dictionary of parameters. Parameters allow for unit
properties to be altered while maintaining a single definition.
This is required to obtain translated summary and description
fields, while having a single translated base text and any
variation in the available parameters.
.. note::
You should almost always use :meth:`from_rfc822_record()` instead.
"""
if origin is None:
origin = Origin.get_caller_origin()
super().__init__(
data, raw_data, origin, provider, parameters, field_offset_map)
self._filter_program = None
@classmethod
def instantiate_template(cls, data, raw_data, origin, provider, parameters,
field_offset_map):
"""
Instantiate this unit from a template.
The point of this method is to have a fixed API, regardless of what the
API of a particular unit class ``__init__`` method actually looks like.
It is easier to standardize on a new method than to patch all of the
initializers, the code using them, and the tests, to have a uniform
initializer.
"""
# This assertion is a low-cost trick to ensure that we override this
# method in all of the subclasses to ensure that the initializer is
# called with correctly-ordered arguments.
assert cls is TemplateUnit, \
"{}.instantiate_template() not customized".format(cls.__name__)
return cls(data, raw_data, origin, provider, parameters,
field_offset_map)
def __str__(self):
"""String representation of Template unit objects."""
return "{} <~ {}".format(self.id, self.resource_id)
@property
def partial_id(self):
"""
Identifier of this job, without the provider name.
This field should not be used anymore, except for display
"""
return self.get_record_value('id', '?')
@property
def id(self):
"""Identifier of this template unit."""
if self.provider:
return "{}::{}".format(self.provider.namespace, self.partial_id)
else:
return self.partial_id
@property
def resource_partial_id(self):
"""name of the referenced resource object."""
text = self.template_resource
if text is not None and "::" in text:
return text.split("::", 1)[1]
return text
@property
def resource_namespace(self):
"""namespace of the referenced resource object."""
text = self.template_resource
if text is not None and "::" in text:
return text.split("::", 1)[0]
elif self._provider is not None:
return self._provider.namespace
@property
def resource_id(self):
"""fully qualified identifier of the resource object."""
resource_partial_id = self.resource_partial_id
if resource_partial_id is None:
return None
imports = self.get_imported_jobs()
assert imports is not None
for imported_resource_id, imported_alias in imports:
if imported_alias == resource_partial_id:
return imported_resource_id
resource_namespace = self.resource_namespace
if resource_namespace is None:
return resource_partial_id
else:
return "{}::{}".format(resource_namespace, resource_partial_id)
@property
def template_resource(self):
"""value of the 'template-resource' field."""
return self.get_record_value('template-resource')
@property
def template_filter(self):
"""
value of the 'template-filter' field.
This attribute stores the text of a resource program (optional) that
selects a subset of available resource objects. If you wish to access
the actual resource program call :meth:`get_filter_program()`. In both
cases the value can be None.
"""
return self.get_record_value('template-filter')
@property
def template_imports(self):
"""
value of the 'template-imports' field.
This attribute stores the text of a resource import that is specific
to the template itself. In other words, it allows the template
to access resources from any namespace.
"""
return self.get_record_value('template-imports')
@property
def template_unit(self):
"""
value of the 'template-unit' field.
This attribute stores the type of the unit that this template intends
to instantiate. It defaults to 'job' for backwards compatibility and
simplicity.
"""
return self.get_record_value('template-unit', 'job')
def get_imported_jobs(self):
"""
Parse the 'imports' line and compute the imported symbols.
Return generator for a sequence of pairs (job_id, identifier) that
describe the imported job identifiers from arbitrary namespace.
The syntax of each imports line is:
IMPORT_STMT :: "from" <NAMESPACE> "import" <PARTIAL_ID>
| "from" <NAMESPACE> "import" <PARTIAL_ID> AS <IDENTIFIER>
"""
imports = self.template_imports or ""
return parse_imports_stmt(imports)
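The grammar in the docstring above can be illustrated with a small standalone parser. This is a sketch of the syntax only; the real work is done by ``parse_imports_stmt``, and the regex, the lowercase ``as`` keyword, and the helper name are assumptions, not the plainbox implementation:

```python
import re

# Matches: from <NAMESPACE> import <PARTIAL_ID> [as <IDENTIFIER>]
_IMPORT_RE = re.compile(
    r"^from\s+(?P<ns>\S+)\s+import\s+(?P<id>\S+)"
    r"(?:\s+as\s+(?P<alias>\S+))?$")


def parse_import_line_sketch(line):
    """Return (job_id, identifier) for one import line, or raise ValueError."""
    match = _IMPORT_RE.match(line.strip())
    if match is None:
        raise ValueError("not an import statement: {!r}".format(line))
    job_id = "{}::{}".format(match.group("ns"), match.group("id"))
    return job_id, match.group("alias") or match.group("id")

print(parse_import_line_sketch("from 2013.com.canonical import dmi"))
# ('2013.com.canonical::dmi', 'dmi')
print(parse_import_line_sketch("from 2013.com.canonical import dmi as d"))
# ('2013.com.canonical::dmi', 'd')
```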
def get_filter_program(self):
"""
Get filter program compiled from the template-filter field.
:returns:
ResourceProgram created out of the text of the template-filter
field.
"""
if self.template_filter is not None and self._filter_program is None:
self._filter_program = ResourceProgram(
self.template_filter, self.resource_namespace,
self.get_imported_jobs())
return self._filter_program
def get_target_unit_cls(self):
"""
Get the Unit subclass that implements the instantiated unit.
:returns:
A subclass of Unit the template will try to instantiate. If there
is no ``template-unit`` field in the template then a ``job``
template is assumed.
:raises KeyError:
if the field 'template-unit' refers to unknown unit or is undefined
.. note::
Typically this will return a JobDefinition class but it's not the
only possible value.
"""
all_units.load()
return all_units.get_by_name(self.template_unit).plugin_object
def instantiate_all(self, resource_list):
"""
Instantiate a list of job definitions.
By creating one from each non-filtered out resource records.
:param resource_list:
A list of resource objects with the correct name
(:meth:`template_resource`)
:returns:
A list of new Unit (or subclass) objects.
"""
unit_cls = self.get_target_unit_cls()
resources = []
index = 0
for resource in resource_list:
if self.should_instantiate(resource):
index += 1
resources.append(self.instantiate_one(resource,
unit_cls_hint=unit_cls,
index=index))
return resources
def instantiate_one(self, resource, unit_cls_hint=None, index=0):
"""
Instantiate a single job out of a resource and this template.
:param resource:
A Resource object to provide template data
:param unit_cls_hint:
A unit class to instantiate
:param index:
An integer parameter representing the current loop index
:returns:
A new JobDefinition created out of the template and resource data.
:raises AttributeError:
If the template referenced a value not defined by the resource
object.
Fields starting with the string 'template-' are discarded. All other
fields are interpolated by attributes from the resource object.
References to missing resource attributes cause the process to fail.
"""
# Look up the unit we're instantiating
if unit_cls_hint is not None:
unit_cls = unit_cls_hint
else:
unit_cls = self.get_target_unit_cls()
assert unit_cls is not None
# Filter out template- data fields as they are not relevant to the
# target unit.
data = {
key: value for key, value in self._data.items()
if not key.startswith('template-')
}
raw_data = {
key: value for key, value in self._raw_data.items()
if not key.startswith('template-')
}
# Override the value of the 'unit' field from 'template-unit' field
data['unit'] = raw_data['unit'] = self.template_unit
# XXX: extract raw dictionary from the resource object, there is no
# normal API for that due to the way resource objects work.
parameters = object.__getattribute__(resource, '_data')
# Add the special __index__ to the resource namespace variables
parameters['__index__'] = index
# Instantiate the class using the instantiation API
return unit_cls.instantiate_template(
data, raw_data, self.origin, self.provider, parameters,
self.field_offset_map)
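The field-filtering and interpolation step described above can be sketched in isolation: ``template-`` fields are dropped and the remaining values are formatted with the resource attributes plus the special ``__index__`` variable. Here ``str.format`` stands in for the unit's parameter interpolation, and a missing resource attribute raises ``KeyError``, mirroring the failure described in the docstring; the helper is illustrative, not the plainbox API:

```python
def instantiate_data_sketch(template_data, resource_attrs, index=0):
    """Drop 'template-' fields and interpolate the rest from the resource."""
    parameters = dict(resource_attrs)
    parameters['__index__'] = index
    return {
        key: value.format(**parameters)
        for key, value in template_data.items()
        if not key.startswith('template-')
    }

data = {
    'template-resource': 'device',
    'id': 'test-{name}',
    'summary': 'Test device {name} ({__index__})',
}
print(instantiate_data_sketch(data, {'name': 'eth0'}, index=1))
# {'id': 'test-eth0', 'summary': 'Test device eth0 (1)'}
```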
def should_instantiate(self, resource):
"""
Check if a job should be instantiated for a specific resource.
:param resource:
A Resource object to check
:returns:
True if a job should be instantiated for the resource object
Determine if a job instance should be created using the specific
resource object. This is the case if there is no filter or if the
specified resource object would make the filter program evaluate to
True.
"""
program = self.get_filter_program()
if program is None:
return True
try:
# NOTE: this is a little tricky. The interface for
# evaluate_or_raise() is {str: List[Resource]} but we are being
# called with Resource. The reason for that is that we wish to get
# per-resource answer not an aggregate 'yes' or 'no'.
return program.evaluate_or_raise({
self.resource_id: [resource]
})
except ExpressionFailedError:
return False
class Meta:
name = N_('template')
class fields(SymbolDef):
"""Symbols for each field that a TemplateUnit can have."""
template_unit = 'template-unit'
template_resource = 'template-resource'
template_filter = 'template-filter'
template_imports = 'template-imports'
validator_cls = TemplateUnitValidator
field_validators = {
fields.template_unit: [
UntranslatableFieldValidator,
CorrectFieldValueValidator(
lambda value, unit: (
unit.get_record_value('template-unit') is not None),
Problem.missing, Severity.advice, message=_(
"template should explicitly define instantiated"
" unit type")),
],
fields.template_resource: [
UntranslatableFieldValidator,
PresentFieldValidator,
UnitReferenceValidator(
lambda unit: (
[unit.resource_id] if unit.resource_id else []),
constraints=[
ReferenceConstraint(
lambda referrer, referee: referee.unit == 'job',
message=_("the referenced unit is not a job")),
ReferenceConstraint(
lambda referrer, referee: (
referee.plugin == 'resource'),
onlyif=lambda referrer, referee: (
referee.unit == 'job'),
message=_(
"the referenced job is not a resource job")),
]),
# TODO: should not refer to deprecated job,
# onlyif job itself is not deprecated
],
fields.template_filter: [
UntranslatableFieldValidator,
# All templates need a valid (or empty) template filter
CorrectFieldValueValidator(
lambda value, unit: unit.get_filter_program(),
onlyif=lambda unit: unit.template_filter is not None),
# TODO: must refer to the same job as template-resource
],
fields.template_imports: [
UntranslatableFieldValidator,
CorrectFieldValueValidator(
lambda value, unit: (
list(unit.get_imported_jobs()) is not None)),
CorrectFieldValueValidator(
lambda value, unit: (
len(list(unit.get_imported_jobs())) in (0, 1)),
message=_("at most one import statement is allowed")),
# TODO: must refer to known or possibly-known job
# TODO: should not refer to deprecated jobs,
# onlyif job itself is not deprecated
],
}
plainbox-0.25/plainbox/impl/unit/testplan.py 0000664 0001750 0001750 00000104517 12627266441 022036 0 ustar pierre pierre 0000000 0000000 # This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.unit.testplan` -- test plan unit
====================================================
"""
import collections
import logging
import operator
import re
from plainbox.i18n import gettext as _
from plainbox.impl.secure.qualifiers import CompositeQualifier
from plainbox.impl.secure.qualifiers import FieldQualifier
from plainbox.impl.secure.qualifiers import OperatorMatcher
from plainbox.impl.secure.qualifiers import PatternMatcher
from plainbox.impl.symbol import SymbolDef
from plainbox.impl.unit._legacy import TestPlanUnitLegacyAPI
from plainbox.impl.unit.unit_with_id import UnitWithId
from plainbox.impl.unit.validators import CorrectFieldValueValidator
from plainbox.impl.unit.validators import FieldValidatorBase
from plainbox.impl.unit.validators import PresentFieldValidator
from plainbox.impl.unit.validators import ReferenceConstraint
from plainbox.impl.unit.validators import TemplateInvariantFieldValidator
from plainbox.impl.unit.validators import TemplateVariantFieldValidator
from plainbox.impl.unit.validators import TranslatableFieldValidator
from plainbox.impl.unit.validators import UnitReferenceValidator
from plainbox.impl.unit.validators import UntranslatableFieldValidator
from plainbox.impl.unit.validators import compute_value_map
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
from plainbox.impl.xparsers import Error
from plainbox.impl.xparsers import FieldOverride
from plainbox.impl.xparsers import IncludeStmt
from plainbox.impl.xparsers import IncludeStmtList
from plainbox.impl.xparsers import OverrideFieldList
from plainbox.impl.xparsers import ReFixed
from plainbox.impl.xparsers import RePattern
from plainbox.impl.xparsers import Text
from plainbox.impl.xparsers import Visitor
from plainbox.impl.xparsers import WordList
logger = logging.getLogger("plainbox.unit.testplan")
__all__ = ['TestPlanUnit']
class NonEmptyPatternIntersectionValidator(FieldValidatorBase):
"""
We want to ensure that the field contains a good pattern; we need to parse
it to see the fine structure and know what it describes. We also want to
ensure that it describes a known job, either precisely or via a pattern
match.
"""
def check_in_context(self, parent, unit, field, context):
for issue in self._check_test_plan_in_context(
parent, unit, field, context):
yield issue
def _check_test_plan_in_context(self, parent, unit, field, context):
id_map = context.compute_shared(
"field_value_map[id]", compute_value_map, context, 'id')
# TODO: compute potential_id_map
advice = _("selector {!a} may not match any known or generated job")
# error = _("selector {!a} doesn't match any known or generated job")
qual_gen = unit._gen_qualifiers(
str(field), getattr(unit, str(field)), True)
for qual in qual_gen:
assert isinstance(qual, FieldQualifier)
if qual.field != 'id':
# NOTE: unsupported field
continue
if isinstance(qual.matcher, PatternMatcher):
# TODO: check potential_id map
for an_id in id_map:
if an_id is None:
# Don't report this twice.
# Each unit-with-id cares about having an id
continue
if qual.matcher.match(an_id):
break
else:
yield parent.advice(
unit, field, Problem.bad_reference,
advice.format(qual.matcher.pattern_text),
origin=qual.origin)
elif isinstance(qual.matcher, OperatorMatcher):
assert qual.matcher.op is operator.eq
target_id = qual.matcher.value
if target_id not in id_map:
assert qual.origin.source is unit.origin.source
yield parent.advice(
unit, field, Problem.bad_reference,
advice.format(target_id),
origin=qual.origin)
else:
# NOTE: unsupported matcher
raise NotImplementedError
class NoBaseIncludeValidator(FieldValidatorBase):
"""
We want to ensure it does not select jobs already selected by the 'include'
field patterns.
"""
def check_in_context(self, parent, unit, field, context):
for issue in self._check_test_plan_in_context(
parent, unit, field, context):
yield issue
def _check_test_plan_in_context(self, parent, unit, field, context):
included_job_id = []
id_map = context.compute_shared(
"field_value_map[id]", compute_value_map, context, 'id')
warning = _("selector {!a} will select a job already matched by the "
"'include' field patterns")
qual_gen = unit._gen_qualifiers(
'include', getattr(unit, 'include'), True)
# Build the list of all jobs already included with the normal include
# field
for qual in qual_gen:
assert isinstance(qual, FieldQualifier)
if qual.field != 'id':
continue
if isinstance(qual.matcher, PatternMatcher):
for an_id in id_map:
if an_id is None:
continue
if qual.matcher.match(an_id):
included_job_id.append(an_id)
elif isinstance(qual.matcher, OperatorMatcher):
assert qual.matcher.op is operator.eq
target_id = qual.matcher.value
if target_id in id_map:
included_job_id.append(target_id)
else:
raise NotImplementedError
# Now check that mandatory field patterns do not select a job already
# included with normal include.
qual_gen = unit._gen_qualifiers(
str(field), getattr(unit, str(field)), True)
for qual in qual_gen:
assert isinstance(qual, FieldQualifier)
if qual.field != 'id':
continue
if isinstance(qual.matcher, PatternMatcher):
for an_id in included_job_id:
if qual.matcher.match(an_id):
yield parent.warning(
unit, field, Problem.bad_reference,
warning.format(qual.matcher.pattern_text),
origin=qual.origin)
break
elif isinstance(qual.matcher, OperatorMatcher):
assert qual.matcher.op is operator.eq
target_id = qual.matcher.value
if target_id in included_job_id:
yield parent.warning(
unit, field, Problem.bad_reference,
warning.format(target_id),
origin=qual.origin)
else:
raise NotImplementedError
class TestPlanUnit(UnitWithId, TestPlanUnitLegacyAPI):
"""
Test plan class
A container for a named selection of jobs to run and additional meta-data
useful for various user interfaces.
"""
def __str__(self):
"""
same as .name
"""
return self.name
def __repr__(self):
return "<TestPlanUnit id:{!r} name:{!r}>".format(self.id, self.name)
@property
def name(self):
"""
name of this test plan
.. note::
This value is not translated, see :meth:`tr_name()` for
a translated equivalent.
"""
return self.get_record_value('name')
@property
def description(self):
"""
description of this test plan
.. note::
This value is not translated, see :meth:`tr_description()` for
a translated equivalent.
"""
return self.get_record_value('description')
@property
def include(self):
return self.get_record_value('include')
@property
def mandatory_include(self):
return self.get_record_value('mandatory_include')
@property
def bootstrap_include(self):
return self.get_record_value('bootstrap_include')
@property
def exclude(self):
return self.get_record_value('exclude')
@property
def icon(self):
return self.get_record_value('icon')
@property
def category_overrides(self):
return self.get_record_value('category-overrides')
@property
def certification_status_overrides(self):
return self.get_record_value('certification-status-overrides')
@property
def estimated_duration(self):
"""
estimated duration of this test plan in seconds.
The value may be None, which indicates that the duration is basically
unknown. Fractional numbers are allowed and indicate fractions of a
second.
"""
value = self.get_record_value('estimated_duration')
if value is None:
return None
match = re.match(r'^(\d+h)?[ :]*(\d+m)?[ :]*(\d+s)?$', value)
if match:
g_hours = match.group(1)
if g_hours:
assert g_hours.endswith('h')
hours = int(g_hours[:-1])
else:
hours = 0
g_minutes = match.group(2)
if g_minutes:
assert g_minutes.endswith('m')
minutes = int(g_minutes[:-1])
else:
minutes = 0
g_seconds = match.group(3)
if g_seconds:
assert g_seconds.endswith('s')
seconds = int(g_seconds[:-1])
else:
seconds = 0
return seconds + minutes * 60 + hours * 3600
else:
return float(value)
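The branch above accepts either an ``Hh Mm Ss`` style value or a plain number of seconds. A standalone restatement of the same rule (the function name is illustrative):

```python
import re

def parse_duration_sketch(value):
    """Return seconds for '1h 30m 10s' style values, or float(value)."""
    match = re.match(r'^(\d+h)?[ :]*(\d+m)?[ :]*(\d+s)?$', value)
    if match:
        # Each group is like '1h'/'30m'/'10s' or None when absent
        hours, minutes, seconds = (
            int(group[:-1]) if group else 0 for group in match.groups())
        return hours * 3600 + minutes * 60 + seconds
    return float(value)

print(parse_duration_sketch('1h 30m 10s'))  # 5410
print(parse_duration_sketch('90.5'))        # 90.5
```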
def tr_name(self):
"""
Get the translated version of :meth:`name`
"""
return self.get_translated_record_value('name')
def tr_description(self):
"""
Get the translated version of :meth:`description`
"""
return self.get_translated_record_value('description')
def get_bootstrap_job_ids(self):
"""Compute and return a set of job ids from bootstrap_include field."""
job_ids = set()
if self.bootstrap_include is None:
return job_ids
class V(Visitor):
def visit_Text_node(visitor, node: Text):
job_ids.add(self.qualify_id(node.text))
def visit_Error_node(visitor, node: Error):
logger.warning(_(
"unable to parse bootstrap_include: %s"), node.msg)
V().visit(WordList.parse(self.bootstrap_include))
return job_ids
def get_qualifier(self):
"""
Convert this test plan to an equivalent qualifier for job selection
:returns:
A CompositeQualifier corresponding to the contents of both
the include and exclude fields.
"""
qual_list = []
qual_list.extend(self._gen_qualifiers('include', self.include, True))
qual_list.extend(self._gen_qualifiers('exclude', self.exclude, False))
qual_list.extend([self.get_bootstrap_qualifier(excluding=True)])
return CompositeQualifier(qual_list)
def get_mandatory_qualifier(self):
"""
Convert this test plan to an equivalent qualifier for job selection
:returns:
A CompositeQualifier corresponding to the contents of the
mandatory_include field.
"""
qual_list = []
qual_list.extend(self._gen_qualifiers('include', self.mandatory_include, True))
return CompositeQualifier(qual_list)
def get_bootstrap_qualifier(self, excluding=False):
"""
Convert this test plan to an equivalent qualifier for job selection
"""
qual_list = []
if self.bootstrap_include is None:
return CompositeQualifier(qual_list)
field_origin = self.origin.just_line().with_offset(
self.field_offset_map['bootstrap_include'])
qual_list = [FieldQualifier(
'id', OperatorMatcher(operator.eq, target_id), field_origin,
not excluding) for target_id in self.get_bootstrap_job_ids()]
return CompositeQualifier(qual_list)
def _gen_qualifiers(self, field_name, field_value, inclusive):
if field_value is not None:
field_origin = self.origin.just_line().with_offset(
self.field_offset_map[field_name])
matchers_gen = self.parse_matchers(field_value)
for lineno_offset, matcher_field, matcher, error in matchers_gen:
if error is not None:
raise error
offset = field_origin.with_offset(lineno_offset)
yield FieldQualifier(matcher_field, matcher, offset, inclusive)
def parse_matchers(self, text):
"""
Parse the specified text and create a list of matchers
:param text:
string of text, including newlines and comments, to parse
:returns:
A generator returning quads (lineno_offset, field, matcher, error)
where ``lineno_offset`` is the offset of a line number from the
start of the text, ``field`` is the name of the field in a job
definition unit that the matcher should be applied to,
``matcher`` can be None (then ``error`` is relevant) or one of
the ``IMatcher`` subclasses discussed below.
Supported matcher objects include:
PatternMatcher:
This matcher is created for lines of text that **are** regular
expressions. The pattern is automatically expanded to include
^...$ (if missing) so that it cannot silently match a portion of
a job definition
OperatorMatcher:
This matcher is created for lines of text that **are not** regular
expressions. The matcher uses the operator.eq operator (equality)
and stores the expected job identifier as the right-hand-side value
"""
from plainbox.impl.xparsers import Error
from plainbox.impl.xparsers import ReErr, ReFixed, RePattern
from plainbox.impl.xparsers import IncludeStmt
from plainbox.impl.xparsers import IncludeStmtList
from plainbox.impl.xparsers import Visitor
outer_self = self
class IncludeStmtVisitor(Visitor):
def __init__(self):
self.results = [] # (lineno_offset, field, matcher, error)
def visit_IncludeStmt_node(self, node: IncludeStmt):
if isinstance(node.pattern, ReErr):
matcher = None
error = node.pattern.exc
elif isinstance(node.pattern, ReFixed):
target_id = outer_self.qualify_id(node.pattern.text)
matcher = OperatorMatcher(operator.eq, target_id)
error = None
                elif isinstance(node.pattern, RePattern):
                    # Ensure that the pattern is anchored with ^ and $
                    target_id_pattern = outer_self.qualify_pattern(
                        node.pattern.text)
                    matcher = PatternMatcher(target_id_pattern)
                    error = None
result = (node.lineno, 'id', matcher, error)
self.results.append(result)
def visit_Error_node(self, node: Error):
# we're just faking an exception object here
error = ValueError(node.msg)
result = (node.lineno, 'id', None, error)
self.results.append(result)
visitor = IncludeStmtVisitor()
visitor.visit(IncludeStmtList.parse(text, 0, 0))
return visitor.results
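    # Example (illustrative sketch, assuming a provider namespace of 'ns'):
    # parse_matchers("foo") returns
    # [(0, 'id', OperatorMatcher(operator.eq, 'ns::foo'), None)], while
    # parse_matchers("sd[a-z]") returns a quad whose matcher is equivalent
    # to PatternMatcher("^ns::sd[a-z]$"); bare identifiers get
    # namespace-qualified and patterns get anchored.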
def parse_category_overrides(self, text):
"""
Parse the specified text as a list of category overrides.
:param text:
string of text, including newlines and comments, to parse
:returns:
A list of tuples (lineno_offset, category_id, pattern) where
lineno_offset is the line number offset from the start of the text,
category_id is the desired category identifier and pattern is the
actual regular expression text (which may be invalid).
:raises ValueError:
if there are any issues with the override declarations
"""
from plainbox.impl.xparsers import Error
from plainbox.impl.xparsers import FieldOverride
from plainbox.impl.xparsers import OverrideFieldList
from plainbox.impl.xparsers import Visitor
outer_self = self
class OverrideListVisitor(Visitor):
def __init__(self):
self.override_list = []
def visit_FieldOverride_node(self, node: FieldOverride):
category_id = outer_self.qualify_id(node.value.text)
regexp_pattern = r"^{}$".format(
outer_self.qualify_id(node.pattern.text))
self.override_list.append(
(node.lineno, category_id, regexp_pattern))
def visit_Error_node(self, node: Error):
raise ValueError(node.msg)
visitor = OverrideListVisitor()
visitor.visit(OverrideFieldList.parse(text, 0, 0))
return visitor.override_list
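    # Example (illustrative sketch, assuming a provider namespace of 'ns'):
    # parse_category_overrides('apply "wireless" to "wireless/.*"')
    # returns [(0, "ns::wireless", "^ns::wireless/.*$")]; both the category
    # identifier and the job pattern get namespace-qualified.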
def get_effective_category_map(self, job_list):
"""
Compute the effective category association for the given list of jobs
:param job_list:
a list of JobDefinition units
:returns:
A dictionary mapping job.id to the effective category_id. Note that
category_id may be None or may not refer to a valid, known
category. The caller is responsible for validating that.
"""
effective_map = {job.id: job.category_id for job in job_list}
if self.category_overrides is not None:
overrides_gen = self.parse_category_overrides(
self.category_overrides)
for lineno_offset, category_id, pattern in overrides_gen:
for job in job_list:
if re.match(pattern, job.id):
effective_map[job.id] = category_id
return effective_map
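    # Example (illustrative sketch, empty namespace): with jobs job-a,
    # job-b and job-c and a category-overrides field of
    # 'apply other-example to job-[bc]', the effective map keeps job-a's
    # own category_id and maps job-b and job-c to 'other-example'.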
def get_effective_category(self, job):
"""
Compute the effective category association for a single job
:param job:
            a JobDefinition unit
:returns:
The effective category_id
"""
if self.category_overrides is not None:
overrides_gen = self.parse_category_overrides(
self.category_overrides)
for lineno_offset, category_id, pattern in overrides_gen:
if re.match(pattern, job.id):
return category_id
return job.category_id
def qualify_pattern(self, pattern):
""" qualify bare pattern (without ^ and $) """
if pattern.startswith('^') and pattern.endswith('$'):
return '^{}$'.format(self.qualify_id(pattern[1:-1]))
elif pattern.startswith('^'):
return '^{}$'.format(self.qualify_id(pattern[1:]))
elif pattern.endswith('$'):
return '^{}$'.format(self.qualify_id(pattern[:-1]))
else:
return '^{}$'.format(self.qualify_id(pattern))
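    # Example (illustrative sketch, assuming a provider namespace of 'ns'):
    # qualify_pattern('sd[a-z]'), qualify_pattern('^sd[a-z]'),
    # qualify_pattern('sd[a-z]$') and qualify_pattern('^sd[a-z]$') all
    # return '^ns::sd[a-z]$'.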
class Meta:
name = 'test plan'
class fields(SymbolDef):
"""
Symbols for each field that a TestPlan can have
"""
name = 'name'
description = 'description'
include = 'include'
mandatory_include = 'mandatory_include'
bootstrap_include = 'bootstrap_include'
exclude = 'exclude'
estimated_duration = 'estimated_duration'
icon = 'icon'
category_overrides = 'category-overrides'
field_validators = {
fields.name: [
TranslatableFieldValidator,
TemplateVariantFieldValidator,
PresentFieldValidator,
                # We want the name to be a single line
CorrectFieldValueValidator(
lambda name: name.count("\n") == 0,
Problem.wrong, Severity.warning,
message=_("please use only one line"),
onlyif=lambda unit: unit.name is not None),
                # We want the name to be relatively short
CorrectFieldValueValidator(
lambda name: len(name) <= 80,
Problem.wrong, Severity.warning,
message=_("please stay under 80 characters"),
onlyif=lambda unit: unit.name is not None),
],
fields.description: [
TranslatableFieldValidator,
TemplateVariantFieldValidator,
PresentFieldValidator(
severity=Severity.advice,
onlyif=lambda unit: unit.virtual is False),
],
fields.include: [
NonEmptyPatternIntersectionValidator,
PresentFieldValidator(),
],
fields.mandatory_include: [
NonEmptyPatternIntersectionValidator,
NoBaseIncludeValidator,
],
fields.bootstrap_include: [
UntranslatableFieldValidator,
NoBaseIncludeValidator,
UnitReferenceValidator(
lambda unit: unit.get_bootstrap_job_ids(),
constraints=[
ReferenceConstraint(
lambda referrer, referee: referee.unit == 'job',
message=_("the referenced unit is not a job")),
ReferenceConstraint(
lambda referrer, referee: (
referee.plugin in ['local', 'resource']),
message=_("only local and resource jobs are "
"allowed in bootstrapping_include"))])
],
fields.exclude: [
NonEmptyPatternIntersectionValidator,
],
fields.estimated_duration: [
UntranslatableFieldValidator,
TemplateInvariantFieldValidator,
PresentFieldValidator(
severity=Severity.advice,
onlyif=lambda unit: unit.virtual is False),
CorrectFieldValueValidator(
lambda duration, unit: unit.estimated_duration > 0,
message="value must be a positive number",
onlyif=lambda unit: (
unit.virtual is False
and unit.get_record_value('estimated_duration'))),
],
fields.icon: [
UntranslatableFieldValidator,
],
fields.category_overrides: [
# optional
# valid
# referring to jobs correctly
# referring to categories correctly
],
}
class TestPlanUnitSupport:
"""
Helper class that distills test plan data into more usable form
    This class serves to offload some of the code from :class:`TestPlanUnit`.
    It takes a single test plan unit and extracts all the interesting
    information out of it. Subsequently it exposes that data so that some
    methods on the test plan unit class itself can be implemented in an
    easier way.
The key data to handle are obviously the ``include`` and ``exclude``
fields. Those are used to come up with a qualifier object suitable for
selecting jobs.
The second key piece of data is obtained from the ``include`` field and
from the ``category-overrides`` and ``certification-status-overrides``
fields. From those fields we come up with a data structure that can be
applied to a list of jobs to compute their override values.
Some examples of how that works, given this test plan:
>>> testplan = TestPlanUnit({
... 'include': '''
... job-a certification-status=blocker, category-id=example
... job-b certification-status=non-blocker
... job-c
... ''',
... 'exclude': '''
... job-[x-z]
... ''',
... 'category-overrides': '''
... apply other-example to job-[bc]
... ''',
... 'certification-status-overrides': '''
... apply not-part-of-certification to job-c
... ''',
... })
>>> support = TestPlanUnitSupport(testplan)
We can look at the override list:
>>> support.override_list
... # doctest: +NORMALIZE_WHITESPACE
[('^job-[bc]$', [('category_id', 'other-example')]),
('^job-a$', [('certification_status', 'blocker'),
('category_id', 'example')]),
('^job-b$', [('certification_status', 'non-blocker')]),
('^job-c$', [('certification_status', 'not-part-of-certification')])]
And the qualifiers:
>>> support.qualifier # doctest: +NORMALIZE_WHITESPACE
    CompositeQualifier(qualifier_list=[FieldQualifier('id', OperatorMatcher(<built-in function eq>, 'job-a'), inclusive=True),
                       FieldQualifier('id', OperatorMatcher(<built-in function eq>, 'job-b'), inclusive=True),
                       FieldQualifier('id', OperatorMatcher(<built-in function eq>, 'job-c'), inclusive=True),
FieldQualifier('id', PatternMatcher('^job-[x-z]$'), inclusive=False)])
"""
def __init__(self, testplan):
self.override_list = self._get_override_list(testplan)
self.qualifier = self._get_qualifier(testplan)
def _get_qualifier(self, testplan):
qual_list = []
qual_list.extend(
self._get_qualifier_for(testplan, 'include', True))
qual_list.extend(
self._get_qualifier_for(testplan, 'exclude', False))
return CompositeQualifier(qual_list)
def _get_qualifier_for(self, testplan, field_name, inclusive):
field_value = getattr(testplan, field_name)
if field_value is None:
return []
field_origin = testplan.origin.just_line().with_offset(
testplan.field_offset_map[field_name])
matchers_gen = self._get_matchers(testplan, field_value)
results = []
for lineno_offset, matcher_field, matcher in matchers_gen:
offset = field_origin.with_offset(lineno_offset)
results.append(
FieldQualifier(matcher_field, matcher, offset, inclusive))
return results
def _get_matchers(self, testplan, text):
"""
Parse the specified text and create a list of matchers
:param text:
string of text, including newlines and comments, to parse
        :returns:
            A list of triples ``(lineno_offset, field, matcher)`` where
            ``lineno_offset`` is the offset of a line number from the
            start of the text, ``field`` is the name of the field in a job
            definition unit to which the matcher should be applied, and
            ``matcher`` is one of the ``IMatcher`` subclasses discussed
            below.
Supported matcher objects include:
PatternMatcher:
This matcher is created for lines of text that **are** regular
expressions. The pattern is automatically expanded to include
^...$ (if missing) so that it cannot silently match a portion of
a job definition
OperatorMatcher:
This matcher is created for lines of text that **are not** regular
expressions. The matcher uses the operator.eq operator (equality)
and stores the expected job identifier as the right-hand-side value
"""
results = []
class V(Visitor):
def visit_IncludeStmt_node(self, node: IncludeStmt):
if isinstance(node.pattern, ReFixed):
target_id = testplan.qualify_id(node.pattern.text)
matcher = OperatorMatcher(operator.eq, target_id)
elif isinstance(node.pattern, RePattern):
pattern = testplan.qualify_pattern(node.pattern.text)
matcher = PatternMatcher(pattern)
result = (node.lineno, 'id', matcher)
results.append(result)
V().visit(IncludeStmtList.parse(text, 0))
return results
def _get_override_list(
self, testplan: TestPlanUnit
) -> "List[Tuple[str, List[Tuple[str, str]]]]":
"""
        Look at a test plan and compute the full (overall) override list. The
        list maps each job selection pattern (a fully qualified pattern) to a
        list of pairs ``(field, value)`` that ought to be applied to a
        :class:`JobState` object.
The code below ensures that each ``field`` is an existing attribute of
the job state object.
        .. note::
            The code below is *not* resilient to errors so make sure to
            validate the unit before using this helper.
"""
override_map = collections.defaultdict(list)
        # ^^ Dict[str, List[Tuple[str, str]]]
for pattern, field_value_list in self._get_inline_overrides(testplan):
override_map[pattern].extend(field_value_list)
for pattern, field, value in self._get_category_overrides(testplan):
override_map[pattern].append((field, value))
for pattern, field, value in self._get_blocker_status_overrides(
testplan):
override_map[pattern].append((field, value))
return sorted((key, field_value_list)
for key, field_value_list in override_map.items())
def _get_category_overrides(
self, testplan: TestPlanUnit
) -> "List[Tuple[str, str, str]]]":
"""
Look at the category overrides and collect refined data about what
overrides to apply. The result is represented as a list of tuples
``(pattern, field, value)`` where ``pattern`` is the string that
describes the pattern, ``field`` is the field to which an override must
be applied (but without the ``effective_`` prefix) and ``value`` is the
overridden value.
"""
override_list = []
if testplan.category_overrides is None:
return override_list
class V(Visitor):
def visit_FieldOverride_node(self, node: FieldOverride):
category_id = testplan.qualify_id(node.value.text)
pattern = r"^{}$".format(
testplan.qualify_id(node.pattern.text))
override_list.append((pattern, 'category_id', category_id))
V().visit(OverrideFieldList.parse(testplan.category_overrides))
return override_list
def _get_blocker_status_overrides(
self, testplan: TestPlanUnit
) -> "List[Tuple[str, str, str]]]":
"""
Look at the certification blocker status overrides and collect refined
data about what overrides to apply. The result is represented as a list
of tuples ``(pattern, field, value)`` where ``pattern`` is the string
that describes the pattern, ``field`` is the field to which an override
must be applied (but without the ``effective_`` prefix) and ``value``
is the overridden value.
"""
override_list = []
if testplan.certification_status_overrides is None:
return override_list
class V(Visitor):
def visit_FieldOverride_node(self, node: FieldOverride):
blocker_status = node.value.text
pattern = r"^{}$".format(
testplan.qualify_id(node.pattern.text))
override_list.append(
(pattern, 'certification_status', blocker_status))
V().visit(OverrideFieldList.parse(
testplan.certification_status_overrides))
return override_list
def _get_inline_overrides(
self, testplan: TestPlanUnit
) -> "List[Tuple[str, List[Tuple[str, str]]]]":
"""
Look at the include field of a test plan and collect all of the in-line
overrides. For an include statement that has any overrides they are
collected into a list of tuples ``(field, value)`` and this list is
subsequently packed into a tuple ``(pattern, field_value_list)``.
"""
override_list = []
if testplan.include is None:
return override_list
class V(Visitor):
def visit_IncludeStmt_node(self, node: IncludeStmt):
if not node.overrides:
return
pattern = r"^{}$".format(
testplan.qualify_id(node.pattern.text))
field_value_list = [
(override_exp.field.text.replace('-', '_'),
override_exp.value.text)
for override_exp in node.overrides]
override_list.append((pattern, field_value_list))
V().visit(IncludeStmtList.parse(testplan.include))
return override_list
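    # Example (illustrative sketch): an include line such as
    #   job-a certification-status=blocker, category-id=example
    # contributes ('^job-a$', [('certification_status', 'blocker'),
    # ('category_id', 'example')]) to the override list; '-' in override
    # field names is mapped to '_'.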
# plainbox-0.25/plainbox/impl/unit/test_testplan.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.unit.test_testplan
================================
Test definitions for plainbox.impl.unit.testplan module
"""
from unittest import TestCase
import doctest
import operator
from plainbox.abc import IProvider1
from plainbox.abc import ITextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.qualifiers import OperatorMatcher
from plainbox.impl.secure.qualifiers import PatternMatcher
from plainbox.impl.unit.testplan import TestPlanUnit
from plainbox.vendor import mock
def load_tests(loader, tests, ignore):
tests.addTests(
doctest.DocTestSuite('plainbox.impl.unit.testplan',
optionflags=doctest.REPORT_NDIFF))
return tests
class TestTestPlan(TestCase):
def setUp(self):
self.provider = mock.Mock(name='provider', spec_set=IProvider1)
self.provider.namespace = 'ns'
def test_name__default(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(unit.name, None)
def test_name__normal(self):
unit = TestPlanUnit({
'name': 'name'
}, provider=self.provider)
self.assertEqual(unit.name, "name")
def test_description__default(self):
name = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(name.description, None)
def test_description__normal(self):
name = TestPlanUnit({
'description': 'description'
}, provider=self.provider)
self.assertEqual(name.description, "description")
def test_icon__default(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(unit.icon, None)
def test_icon__normal(self):
unit = TestPlanUnit({
'icon': 'icon'
}, provider=self.provider)
self.assertEqual(unit.icon, "icon")
def test_include__default(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(unit.include, None)
def test_include__normal(self):
unit = TestPlanUnit({
'include': 'include'
}, provider=self.provider)
self.assertEqual(unit.include, "include")
def test_mandatory_include__default(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(unit.mandatory_include, None)
def test_mandatory_include__normal(self):
unit = TestPlanUnit({
'mandatory_include': 'mandatory_include'
}, provider=self.provider)
self.assertEqual(unit.mandatory_include, "mandatory_include")
def test_bootstrap_include__default(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(unit.bootstrap_include, None)
def test_bootstrap_include__normal(self):
unit = TestPlanUnit({
'bootstrap_include': 'bootstrap_include'
}, provider=self.provider)
self.assertEqual(unit.bootstrap_include, 'bootstrap_include')
def test_exclude__default(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(unit.exclude, None)
def test_exclude__normal(self):
unit = TestPlanUnit({
'exclude': 'exclude'
}, provider=self.provider)
self.assertEqual(unit.exclude, "exclude")
def test_category_override__default(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(unit.category_overrides, None)
def test_category_override__normal(self):
unit = TestPlanUnit({
'category-overrides': 'value',
}, provider=self.provider)
self.assertEqual(unit.category_overrides, 'value')
def test_str(self):
unit = TestPlanUnit({
'name': 'name'
}, provider=self.provider)
self.assertEqual(str(unit), "name")
def test_repr(self):
unit = TestPlanUnit({
'name': 'name',
'id': 'id',
}, provider=self.provider)
self.assertEqual(repr(unit), "")
def test_tr_unit(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(unit.tr_unit(), 'test plan')
def test_estimated_duration__default(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(unit.estimated_duration, None)
def test_estimated_duration__normal(self):
self.assertEqual(TestPlanUnit(
{}, provider=self.provider).estimated_duration, None)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '5'}, provider=self.provider
).estimated_duration, 5)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '123.5'}, provider=self.provider
).estimated_duration, 123.5)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '5s'}, provider=self.provider
).estimated_duration, 5)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '1m 5s'}, provider=self.provider
).estimated_duration, 65)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '1h 1m 5s'}, provider=self.provider
).estimated_duration, 3665)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '1h'}, provider=self.provider
).estimated_duration, 3600)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '2m'}, provider=self.provider
).estimated_duration, 120)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '1h 1s'}, provider=self.provider
).estimated_duration, 3601)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '1m:5s'}, provider=self.provider
).estimated_duration, 65)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '1h:1m:5s'}, provider=self.provider
).estimated_duration, 3665)
self.assertEqual(TestPlanUnit(
{'estimated_duration': '1h:1s'}, provider=self.provider
).estimated_duration, 3601)
def test_estimated_duration__broken(self):
unit = TestPlanUnit({
'estimated_duration': 'foo'
}, provider=self.provider)
with self.assertRaises(ValueError):
unit.estimated_duration
def test_tr_name(self):
unit = TestPlanUnit({
}, provider=self.provider)
with mock.patch.object(unit, "get_translated_record_value") as mgtrv:
retval = unit.tr_name()
# Ensure that get_translated_record_value() was called
mgtrv.assert_called_once_with('name')
        # Ensure that tr_name() returned its return value
self.assertEqual(retval, mgtrv())
def test_tr_description(self):
unit = TestPlanUnit({
}, provider=self.provider)
with mock.patch.object(unit, "get_translated_record_value") as mgtrv:
retval = unit.tr_description()
# Ensure that get_translated_record_value() was called
mgtrv.assert_called_once_with('description')
        # Ensure that tr_description() returned its return value
self.assertEqual(retval, mgtrv())
def test_parse_matchers__with_provider(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(
list(unit.parse_matchers("foo")),
[(0, 'id', OperatorMatcher(operator.eq, 'ns::foo'), None)])
self.assertEqual(
list(unit.parse_matchers("other::bar")),
[(0, 'id', OperatorMatcher(operator.eq, "other::bar"), None)])
self.assertEqual(
list(unit.parse_matchers("sd[a-z]")),
[(0, 'id', PatternMatcher("^ns::sd[a-z]$"), None)])
self.assertEqual(
list(unit.parse_matchers("sd[a-z]$")),
[(0, 'id', PatternMatcher("^ns::sd[a-z]$"), None)])
self.assertEqual(
list(unit.parse_matchers("^sd[a-z]")),
[(0, 'id', PatternMatcher("^ns::sd[a-z]$"), None)])
self.assertEqual(
list(unit.parse_matchers("^sd[a-z]$")),
[(0, 'id', PatternMatcher("^ns::sd[a-z]$"), None)])
def test_parse_matchers__without_provider(self):
unit = TestPlanUnit({
}, provider=None)
self.assertEqual(
list(unit.parse_matchers("foo")),
[(0, 'id', OperatorMatcher(operator.eq, 'foo'), None)])
self.assertEqual(
list(unit.parse_matchers("other::bar")),
[(0, 'id', OperatorMatcher(operator.eq, "other::bar"), None)])
self.assertEqual(
list(unit.parse_matchers("sd[a-z]")),
[(0, 'id', PatternMatcher("^sd[a-z]$"), None)])
self.assertEqual(
list(unit.parse_matchers("sd[a-z]$")),
[(0, 'id', PatternMatcher("^sd[a-z]$"), None)])
self.assertEqual(
list(unit.parse_matchers("^sd[a-z]")),
[(0, 'id', PatternMatcher("^sd[a-z]$"), None)])
self.assertEqual(
list(unit.parse_matchers("^sd[a-z]$")),
[(0, 'id', PatternMatcher("^sd[a-z]$"), None)])
def test_get_qualifier__full(self):
# Let's pretend the unit looks like this:
# +0 unit: test-plan
# +1 name: An example test plan
# +2 include:
# +3 foo
# +4 # nothing
# +5 b.*
# +6 exclude: bar
        # Let's also assume that it is at a +10 offset in the file it comes
        # from, so that the first line (+0) is actually line 10.
src = mock.Mock(name='source', spec_set=ITextSource)
origin = Origin(src, 10, 16)
field_offset_map = {
'unit': 0,
'name': 1,
'include': 3,
'exclude': 6
}
unit = TestPlanUnit({
'unit': 'test-plan',
'name': 'An example test plan',
'include': (
'foo\n'
'# nothing\n'
'b.*\n'
),
'exclude': 'bar\n'
}, provider=self.provider, origin=origin,
field_offset_map=field_offset_map)
qual_list = unit.get_qualifier().get_primitive_qualifiers()
self.assertEqual(qual_list[0].field, 'id')
self.assertIsInstance(qual_list[0].matcher, OperatorMatcher)
self.assertEqual(qual_list[0].matcher.value, 'ns::foo')
self.assertEqual(qual_list[0].origin, Origin(src, 13, 13))
self.assertEqual(qual_list[0].inclusive, True)
self.assertEqual(qual_list[1].field, 'id')
self.assertIsInstance(qual_list[1].matcher, PatternMatcher)
self.assertEqual(qual_list[1].matcher.pattern_text, '^ns::b.*$')
self.assertEqual(qual_list[1].origin, Origin(src, 15, 15))
self.assertEqual(qual_list[1].inclusive, True)
self.assertEqual(qual_list[2].field, 'id')
self.assertIsInstance(qual_list[2].matcher, OperatorMatcher)
self.assertEqual(qual_list[2].matcher.value, 'ns::bar')
self.assertEqual(qual_list[2].origin, Origin(src, 16, 16))
self.assertEqual(qual_list[2].inclusive, False)
def test_get_qualifier__only_comments(self):
unit = TestPlanUnit({
'include': '# nothing\n'
}, provider=self.provider)
self.assertEqual(unit.get_qualifier().get_primitive_qualifiers(), [])
def test_get_qualifier__empty(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(unit.get_qualifier().get_primitive_qualifiers(), [])
def test_parse_category_overrides__with_provider(self):
unit = TestPlanUnit({
}, provider=self.provider)
self.assertEqual(
unit.parse_category_overrides('apply "wireless" to "wireless/.*"'),
[(0, "ns::wireless", "^ns::wireless/.*$")])
self.assertEqual(
unit.parse_category_overrides(
'apply "other::wireless" to "wireless/.*"'),
[(0, "other::wireless", "^ns::wireless/.*$")])
self.assertEqual(
unit.parse_category_overrides(
'apply "wireless" to "other::wireless/.*"'),
[(0, "ns::wireless", "^other::wireless/.*$")])
self.assertEqual(
unit.parse_category_overrides(
'apply "first::wireless" to "second::wireless/.*"'),
[(0, "first::wireless", "^second::wireless/.*$")])
def test_parse_category_overrides__without_provider(self):
unit = TestPlanUnit({
}, provider=None)
self.assertEqual(
unit.parse_category_overrides('apply "wireless" to "wireless/.*"'),
[(0, "wireless", "^wireless/.*$")])
self.assertEqual(
unit.parse_category_overrides(
'apply "other::wireless" to "wireless/.*"'),
[(0, "other::wireless", "^wireless/.*$")])
self.assertEqual(
unit.parse_category_overrides(
'apply "wireless" to "other::wireless/.*"'),
[(0, "wireless", "^other::wireless/.*$")])
self.assertEqual(
unit.parse_category_overrides(
'apply "first::wireless" to "second::wireless/.*"'),
[(0, "first::wireless", "^second::wireless/.*$")])
def test_parse_category_overrides__errors(self):
unit = TestPlanUnit({}, provider=self.provider)
with self.assertRaisesRegex(ValueError, "expected override value"):
unit.parse_category_overrides('apply')
def test_get_bootstrap_job_ids__empty(self):
unit = TestPlanUnit({}, provider=None)
self.assertEqual(unit.get_bootstrap_job_ids(), set())
def test_get_bootstrap_job_ids__normal(self):
unit = TestPlanUnit({
'bootstrap_include': 'Foo\nBar'
}, provider=None)
self.assertEqual(unit.get_bootstrap_job_ids(), set(['Foo', 'Bar']))
def test_get_bootstrap_job_ids__qualified_ids(self):
unit = TestPlanUnit({
'bootstrap_include': 'Foo\nBar'
}, provider=self.provider)
self.assertEqual(unit.get_bootstrap_job_ids(),
set(['ns::Foo', 'ns::Bar']))
# plainbox-0.25/plainbox/impl/unit/unit_with_id.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.unit.unit_with_id` -- unit with identifier definition
=========================================================================
"""
import logging
from plainbox.i18n import gettext as _
from plainbox.i18n import gettext_noop as N_
from plainbox.impl.symbol import SymbolDef
from plainbox.impl.unit._legacy import UnitWithIdLegacyAPI
from plainbox.impl.unit._legacy import UnitWithIdValidatorLegacyAPI
from plainbox.impl.unit.unit import Unit
from plainbox.impl.unit.unit import UnitValidator
from plainbox.impl.unit.validators import CorrectFieldValueValidator
from plainbox.impl.unit.validators import PresentFieldValidator
from plainbox.impl.unit.validators import TemplateVariantFieldValidator
from plainbox.impl.unit.validators import UniqueValueValidator
from plainbox.impl.unit.validators import UntranslatableFieldValidator
__all__ = ['UnitWithId']
logger = logging.getLogger("plainbox.unit.unit_with_id")
class UnitWithIdValidator(UnitValidator, UnitWithIdValidatorLegacyAPI):
"""
Validator for :class:`UnitWithId`
"""
def explain(self, unit, field, kind, message):
"""
Lookup an explanatory string for a given issue kind
:returns:
A string (explanation) or None if the issue kind
is not known to this method.
This version overrides the base implementation to use the unit id, if
it is available, when reporting issues. This makes the error message
easier to read for the vast majority of current units (jobs) that have
an identifier and are commonly addressed with one by developers.
"""
if unit.partial_id is None:
return super().explain(unit, field, kind, message)
stock_msg = self._explain_map.get(kind)
if stock_msg is None:
return None
return _("{unit} {id!a}, field {field!a}, {message}").format(
unit=unit.tr_unit(), id=unit.partial_id, field=str(field),
message=message or stock_msg)
class UnitWithId(Unit, UnitWithIdLegacyAPI):
"""
Base class for Units that have unique identifiers
    Unlike the JobDefinition class, the partial_id property has no fallback
and is simply tied directly to the "id" field. The id property works
in conjunction with a provider associated with the unit and simply adds
the namespace part.
"""
@property
def partial_id(self):
"""
Identifier of this unit, without the provider namespace
"""
return self.get_record_value('id')
@property
def id(self):
"""
Identifier of this unit, with the provider namespace.
.. note::
            In a rare (unit tests only?) edge case a Unit can be separated
from the parent provider. In that case the value of ``id`` is
always equal to ``partial_id``.
"""
if self.provider and self.partial_id:
return "{}::{}".format(self.provider.namespace, self.partial_id)
else:
return self.partial_id
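    # Example (illustrative sketch): a unit with partial_id 'foo' attached
    # to a provider whose namespace is 'ns' has id 'ns::foo'; the same unit
    # detached from any provider has id 'foo'.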
class Meta:
name = N_('unit-with-id')
class fields(SymbolDef):
id = 'id'
validator_cls = UnitWithIdValidator
field_validators = {
fields.id: [
# We don't want anyone marking id up for translation
UntranslatableFieldValidator,
# We want this field to be present at all times
PresentFieldValidator,
# We want each instance to have a different identifier
TemplateVariantFieldValidator,
                # When checked globally, all units need a unique value
UniqueValueValidator,
# We want to have bare, namespace-less identifiers
CorrectFieldValueValidator(
lambda value, unit: (
"::" not in unit.get_record_value('id')),
message=_("identifier cannot define a custom namespace"),
onlyif=lambda unit: unit.get_record_value('id')),
]
}
# plainbox-0.25/plainbox/impl/unit/validators.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.unit.validators` -- unit validators
=======================================================
"""
import abc
import inspect
import itertools
import logging
import os
import shlex
import sys
from plainbox.i18n import gettext as _
from plainbox.i18n import ngettext
from plainbox.impl import pod
from plainbox.abc import IProvider1
from plainbox.impl.unit import get_accessed_parameters
from plainbox.impl.validation import Issue
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
__all__ = [
'CorrectFieldValueValidator',
'DeprecatedFieldValidator',
'FieldValidatorBase',
'IFieldValidator',
'PresentFieldValidator',
'TemplateInvariantFieldValidator',
'TemplateVariantFieldValidator',
'TranslatableFieldValidator',
'UniqueValueValidator',
'UnitReferenceValidator',
'UntranslatableFieldValidator',
]
logger = logging.getLogger("plainbox.unit")
def field2prop(field):
"""
Convert a field to the property that is used to access that field
:param field:
A string or Symbol that represents the field
:returns:
Name of the property to access on the unit.
"""
return str(field).replace('-', '_')
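The mapping above can be exercised on its own (standalone sketch that duplicates the helper so the snippet is self-contained):

```python
# Standalone copy of the helper above: dashes in record field names
# become underscores in the Python property used to access them.
def field2prop(field):
    return str(field).replace('-', '_')

print(field2prop('category-id'))  # category_id
print(field2prop('id'))           # id
```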
class UnitValidationContext(pod.POD):
"""
Helper class for validating units in a bigger context
This class has two purposes:
1) to allow the validated object to see "everything" (other units)
2) to allow validators to share temporary data structures
and to prevent O(N**2) complexity of some checks.
"""
provider_list = pod.Field(
"list of all the providers", list, pod.MANDATORY,
assign_filter_list=[pod.typed, pod.typed.sequence(IProvider1)])
shared_cache = pod.Field(
"cached computations", dict, initial_fn=dict,
assign_filter_list=[pod.typed])
def compute_shared(self, cache_key, func, *args, **kwargs):
"""
Compute a shared helper.
:param cache_key:
Key to use to lookup the helper value
:param func:
Function that computes the helper value. It is called as
``func(*args, **kwargs)``.
:returns:
Return value of ``func(*args, **kwargs)`` (possibly computed
earlier and cached).
Compute something that can be shared by all the validation classes
and units within one context. This allows certain validators to
only compute expensive 'global' transformations of the context at most
once.
.. note::
The caller is responsible for ensuring that ``args`` and ``kwargs``
match the `cache_key` each time this function is called.
"""
if cache_key not in self.shared_cache:
self.shared_cache[cache_key] = func(*args, **kwargs)
return self.shared_cache[cache_key]
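The caching contract of ``compute_shared()`` can be illustrated with a minimal sketch (``FakeContext`` and ``expensive_map`` are hypothetical stand-ins, not the real POD-based context):

```python
# Minimal sketch of the shared-cache pattern: the first call with a given
# key computes the value, subsequent calls reuse the cached result.
class FakeContext:
    def __init__(self):
        self.shared_cache = {}

    def compute_shared(self, cache_key, func, *args, **kwargs):
        if cache_key not in self.shared_cache:
            self.shared_cache[cache_key] = func(*args, **kwargs)
        return self.shared_cache[cache_key]

calls = []

def expensive_map(field):
    calls.append(field)  # record how many times we actually computed
    return {}

ctx = FakeContext()
ctx.compute_shared('field_value_map[id]', expensive_map, 'id')
ctx.compute_shared('field_value_map[id]', expensive_map, 'id')
print(len(calls))  # 1 -- the expensive computation ran only once
```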
class UnitFieldIssue(Issue):
"""
Issue specific to a field of a Unit
:attr unit:
The unit that the issue relates to
:attr field:
Name of the field within the unit
"""
def __init__(self, message, severity, kind, origin, unit, field):
super().__init__(message, severity, kind, origin)
self.unit = unit
self.field = field
def __repr__(self):
return (
"{}(message={!r}, severity={!r}, kind={!r}, origin={!r}"
" unit={!r}, field={!r})"
).format(
self.__class__.__name__,
self.message, self.severity, self.kind, self.origin,
self.unit, self.field)
class MultiUnitFieldIssue(Issue):
"""
Issue involving multiple units.
:attr unit_list:
List of units that the issue relates to
:attr field:
Name of the field within the unit
"""
def __init__(self, message, severity, kind, origin, unit_list, field):
super().__init__(message, severity, kind, origin)
self.unit_list = unit_list
self.field = field
def __repr__(self):
return (
"{}(message={!r}, severity={!r}, kind={!r}, origin={!r}"
" unit_list={!r}, field={!r})"
).format(
self.__class__.__name__,
self.message, self.severity, self.kind, self.origin,
self.unit_list, self.field)
class IFieldValidator(metaclass=abc.ABCMeta):
"""
Interface for all :class:`Unit` field validators.
Instances of this class participate in the validation process.
"""
@abc.abstractmethod
def __init__(self, **kwargs):
"""
Initialize the validator to check the specified field.
:param kwargs:
Any additional arguments associated with the validator
that were defined on the UnitValidator
"""
def check(self, parent, unit, field):
"""
Perform the check associated with a specific field
:param parent:
The :class:`UnitValidator` that this validator cooperates with
:param unit:
The :class:`Unit` to validate
:param field:
The field to check, this may be a Symbol
:returns:
None
This method doesn't raise any exceptions or return error values.
Instead it is expected to use the :meth:`UnitValidator.report_issue()`
family of methods (including error, warning and advice) to report
detected problems.
"""
def check_in_context(self, parent, unit, field, context):
"""
Perform the check associated with a specific field in a known context
:param parent:
The :class:`UnitValidator` that this validator cooperates with
:param unit:
The :class:`Unit` to validate
:param field:
The field to check, this may be a Symbol
:param context:
The :class:`UnitValidationContext` to use
:returns:
None
This method doesn't raise any exceptions or return error values.
Instead it is expected to use the :meth:`UnitValidator.report_issue()`
family of methods (including error, warning and advice) to report
detected problems.
"""
class FieldValidatorBase(IFieldValidator):
"""
Base validator that implements no checks of any kind
"""
def __init__(self, message=None):
self.message = message
def check(self, parent, unit, field):
return ()
def check_in_context(self, parent, unit, field, context):
return ()
class CorrectFieldValueValidator(FieldValidatorBase):
"""
Validator ensuring that a field value is correct according to some criteria
This validator simply ensures that a value of a field (as accessed through
a field-property) matches a predefined criteria. The criteria can
be specified externally which makes this validator very flexible.
"""
default_severity = Severity.error
default_kind = Problem.wrong
def __init__(self, correct_fn, kind=None, severity=None, message=None,
onlyif=None):
"""
correct_fn:
A function that checks if the value is correct or not. If it
returns False then an issue is reported in accordance with other
arguments. It is called either as ``correct_fn(value)`` or
``correct_fn(value, unit)`` based on the number of accepted
arguments.
kind:
Kind of issue to report. By default this is Problem.wrong
severity:
Severity of the issue to report. By default this is Severity.error
message:
Customized error message. This message will be used to report the
issue if the validation fails. By default it is derived from the
specified issue ``kind`` by :meth:`UnitValidator.explain()`.
onlyif:
An optional function that checks if this validator should be
applied or not. The function is called with the `unit` as the only
argument. If it returns True then the validator proceeds to
perform its check.
"""
super().__init__(message)
if sys.version_info[:2] >= (3, 5):
has_two_args = len(inspect.signature(correct_fn).parameters) == 2
else:
has_two_args = len(inspect.getargspec(correct_fn).args) == 2
self.correct_fn = correct_fn
self.correct_fn_needs_unit = has_two_args
self.kind = kind or self.default_kind
self.severity = severity or self.default_severity
self.onlyif = onlyif
def check(self, parent, unit, field):
# Skip this validator if onlyif says we should do so
if self.onlyif is not None and not self.onlyif(unit):
return
# Look up the value
value = getattr(unit, field2prop(field))
try:
if self.correct_fn_needs_unit:
is_correct = self.correct_fn(value, unit)
else:
is_correct = self.correct_fn(value)
except Exception as exc:
yield parent.report_issue(
unit, field, self.kind, self.severity,
self.message or str(exc))
else:
# Report an issue if the correctness check failed
if not is_correct:
yield parent.report_issue(
unit, field, self.kind, self.severity, self.message)
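The arity detection performed in the constructor can be sketched in isolation (``call_predicate`` is a hypothetical helper mirroring the ``correct_fn_needs_unit`` logic):

```python
import inspect

# Predicates may accept (value) or (value, unit); pick the call form by
# counting the parameters in the function signature.
def call_predicate(correct_fn, value, unit):
    if len(inspect.signature(correct_fn).parameters) == 2:
        return correct_fn(value, unit)
    return correct_fn(value)

print(call_predicate(lambda v: v is not None, 'x', None))          # True
print(call_predicate(lambda v, u: '::' not in v, 'ns::id', None))  # False
```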
class PresentFieldValidator(CorrectFieldValueValidator):
"""
Validator ensuring that a field has a value
This validator simply ensures that a value of a field (as accessed through
a field-property) is not None. It is useful for simple checks for required
fields.
"""
default_kind = Problem.missing
def __init__(self, kind=None, severity=None, message=None, onlyif=None):
"""
kind:
Kind of issue to report. By default this is Problem.missing
severity:
Severity of the issue to report. By default this is Severity.error
message:
Customized error message. This message will be used to report the
issue if the validation fails. By default it is derived from the
specified issue ``kind`` by :meth:`UnitValidator.explain()`.
"""
correct_fn = lambda value: value is not None
super().__init__(correct_fn, kind, severity, message, onlyif)
class UselessFieldValidator(CorrectFieldValueValidator):
"""
Validator ensuring that no value is specified for a field in a certain context
The context should be encoded by passing the onlyif argument which can
inspect the unit and determine if a field is useless or not.
"""
default_kind = Problem.useless
default_severity = Severity.warning
def __init__(self, kind=None, severity=None, message=None, onlyif=None):
"""
kind:
Kind of issue to report. By default this is Problem.useless
severity:
Severity of the issue to report. By default this is
Severity.warning
message:
Customized error message. This message will be used to report the
issue if the validation fails. By default it is derived from the
specified issue ``kind`` by :meth:`UnitValidator.explain()`.
"""
correct_fn = lambda value: value is None
super().__init__(correct_fn, kind, severity, message, onlyif)
class DeprecatedFieldValidator(FieldValidatorBase):
"""
Validator ensuring that a deprecated field is not used (passed a value)
"""
def check(self, parent, unit, field):
# This intentionally doesn't use a property so that the property can be
# removed while we still check that the field is not being used.
if unit.get_record_value(field) is not None:
yield parent.report_issue(
unit, field, Problem.deprecated, Severity.advice, self.message)
class TranslatableFieldValidator(FieldValidatorBase):
"""
Validator ensuring that a field is marked as translatable
The validator can be customized by passing the following keyword arguments:
message:
Customized error message. This message will be used to report the
issue if the validation fails. By default it is derived from
``Problem.expected_i18n`` by :meth:`UnitValidator.explain()`.
"""
def check(self, parent, unit, field):
if (unit.virtual is False
and unit.get_record_value(field) is not None
and not unit.is_translatable_field(field)):
yield parent.warning(unit, field, Problem.expected_i18n)
class UntranslatableFieldValidator(FieldValidatorBase):
"""
Validator ensuring that a field is not marked as translatable
The validator can be customized by passing the following keyword arguments:
message:
Customized error message. This message will be used to report the
issue if the validation fails. By default it is derived from
``Problem.unexpected_i18n`` by :meth:`UnitValidator.explain()`.
"""
def check(self, parent, unit, field):
if (unit.get_record_value(field)
and unit.is_translatable_field(field)):
yield parent.warning(unit, field, Problem.unexpected_i18n)
class TemplateInvariantFieldValidator(FieldValidatorBase):
"""
Validator ensuring that a field value doesn't depend on a template resource
"""
def check(self, parent, unit, field):
# Non-parametric units are always valid
if unit.is_parametric:
value = unit._data.get(field)
# No value? No problem!
if value is None:
return
param_set = get_accessed_parameters(value)
# Invariant fields cannot depend on any parameters
if len(param_set) != 0:
yield parent.error(unit, field, Problem.variable, self.message)
class TemplateVariantFieldValidator(FieldValidatorBase):
"""
Validator ensuring that a field value does depend on a template resource
In addition, the actual value template is checked to ensure that each
parameter it references is defined in the particular unit being validated.
"""
def check(self, parent, unit, field):
# Non-parametric units are always valid
if unit.is_parametric:
value = unit._data.get(field)
# No value? No problem!
if value is not None:
param_set = get_accessed_parameters(value)
# Variant fields must depend on some parameters
if len(param_set) == 0:
yield parent.error(
unit, field, Problem.constant, self.message)
# Each parameter must be present in the unit
for param_name in param_set:
if param_name not in unit.parameters:
message = _(
"reference to unknown parameter {!r}"
).format(param_name)
yield parent.error(
unit, field, Problem.unknown_param, message)
class ShellProgramValidator(FieldValidatorBase):
"""
Validator ensuring that a field value looks like a valid shell program
This validator can help catch simple mistakes detected by a
shell-compatible lexer. It doesn't support the heredoc syntax and it
silently ignores fields that have '<<' anywhere in the value.
"""
def check(self, parent, unit, field):
# Look up the value
value = getattr(unit, field2prop(field))
if value is not None:
if '<<' in value:
# TODO: implement heredoc-aware shlex parser
# and use it to validate the input
pass
else:
lex = shlex.shlex(value, posix=True)
token = None
try:
for token in lex:
pass
except ValueError as exc:
if token is not None:
yield parent.error(
unit, field, Problem.syntax_error,
"{}, near {!r}".format(exc, token),
offset=lex.lineno - 1)
else:
yield parent.error(
unit, field, Problem.syntax_error, str(exc),
offset=lex.lineno - 1)
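The shlex-based check can be tried standalone (a sketch of the same idea; the real validator also reports the offending token and line offset):

```python
import shlex

def find_shell_syntax_error(program):
    """Return an error message for obvious lexical problems, else None."""
    if '<<' in program:  # heredocs are not supported by shlex; skip them
        return None
    lex = shlex.shlex(program, posix=True)
    try:
        for _token in lex:
            pass
    except ValueError as exc:
        return str(exc)
    return None

print(find_shell_syntax_error('echo "ok"'))           # None
print(find_shell_syntax_error('echo "unterminated'))  # No closing quotation
```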
def compute_value_map(context, field):
"""
Compute support data structure
:param context:
The :class:`UnitValidationContext` instance that this data is computed
for. It is used to discover the list of providers.
:param field:
The field for which to compute the value map.
:returns:
A dictionary mapping from all the existing values of a specific field
(that is being validated) to a list of units that have that value in
that field.
"""
value_map = {}
all_units = itertools.chain(
*(provider.unit_list for provider in context.provider_list))
for unit in all_units:
try:
value = getattr(unit, field2prop(field))
except AttributeError:
continue
if value not in value_map:
value_map[value] = [unit]
else:
value_map[value].append(unit)
return value_map
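How this map supports uniqueness checking can be shown with a minimal sketch (hypothetical dict-based units instead of real Unit objects):

```python
# Group "units" by the value of a field; any bucket with more than one
# entry represents a clash that UniqueValueValidator would report.
units = [{'id': 'a'}, {'id': 'b'}, {'id': 'a'}]
value_map = {}
for unit in units:
    value_map.setdefault(unit['id'], []).append(unit)

clashes = sorted(v for v, us in value_map.items() if len(us) > 1)
print(clashes)  # ['a']
```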
class UniqueValueValidator(FieldValidatorBase):
"""
Validator that checks if a value of a specific field is unique
This validator only works in context mode where it ensures that all the
units in all providers present in the context have a unique value for a
specific field.
This is mostly applicable to the 'id' field but other fields may be used.
The algorithm has O(1) cost per unit (using a shared value map), which
translates to O(N) cost for the whole context, where N is the number of
units.
"""
def check_in_context(self, parent, unit, field, context):
value_map = context.compute_shared(
"field_value_map[{}]".format(field),
compute_value_map, context, field)
value = getattr(unit, field2prop(field))
units_with_this_value = value_map[value]
n = len(units_with_this_value)
if n > 1:
# come up with unit_list where this unit is always at the front
unit_list = list(units_with_this_value)
unit_list = sorted(
unit_list,
key=lambda a_unit: 0 if a_unit is unit
else unit_list.index(a_unit) + 1)
yield parent.error(
unit_list, field, Problem.not_unique, ngettext(
"clashes with {0} other unit",
"clashes with {0} other units", n - 1
).format(n - 1) + ', look at: ' + ', '.join(
# XXX: the relative_to is a hack, ideally we would
# allow the UI to see the fine structure of the error
# message and pass appropriate path to relative_to()
str(other_unit.origin.relative_to(os.getcwd()))
for other_unit in units_with_this_value
if other_unit is not unit))
class ReferenceConstraint:
"""
Description of a constraint on a unit reference
:attr constraint_fn:
A function fn(referrer, referee) that describes the constraint.
The function must return True in order for the constraint to hold.
:attr message:
Message that should be reported when the constraint fails to hold
:attr onlyif:
An (optional) function fn(referrer, referee) that checks if the
constraint should be checked or not. It must return True for the
``constraint_fn`` to make sense.
"""
def __init__(self, constraint_fn, message, *, onlyif=None):
self.constraint_fn = constraint_fn
self.onlyif = onlyif
self.message = message
class UnitReferenceValidator(FieldValidatorBase):
"""
Validator that checks if a field references another unit
This validator only works in context mode, where it checks that each
value of the validated field refers to an existing unit and that every
applicable :class:`ReferenceConstraint` holds for the (referrer,
referee) pair.
The algorithm has O(1) cost per unit (using a shared identifier map),
which translates to O(N) cost for the whole context, where N is the
number of units.
"""
def __init__(self, get_references_fn, constraints=None, message=None):
super().__init__(message)
self.get_references_fn = get_references_fn
if constraints is None:
constraints = ()
self.constraints = constraints
def check_in_context(self, parent, unit, field, context):
id_map = context.compute_shared(
"field_value_map[id]", compute_value_map, context, 'id')
try:
value_list = self.get_references_fn(unit)
except Exception as exc:
yield parent.error(unit, field, Problem.wrong, str(exc))
value_list = None
if value_list is None:
value_list = []
elif not isinstance(value_list, (list, tuple, set)):
value_list = [value_list]
for unit_id in value_list:
try:
units_with_this_id = id_map[unit_id]
except KeyError:
# zero is wrong, broken reference
yield parent.error(
unit, field, Problem.bad_reference,
self.message or _(
"unit {!a} is not available"
).format(unit_id))
continue
n = len(units_with_this_id)
if n == 1:
# one is exactly right, let's see if it's good
referrer = unit
referee = units_with_this_id[0]
for constraint in self.constraints:
if constraint.onlyif is not None and not constraint.onlyif(
referrer, referee):
continue
if not constraint.constraint_fn(referrer, referee):
yield parent.error(
unit, field, Problem.bad_reference,
self.message or constraint.message
or _("referee constraint failed"))
elif n > 1:
# more than one is also wrong, which one are we targeting?
yield parent.error(
unit, field, Problem.bad_reference,
self.message or _(
"multiple units with id {!a}: {}"
).format(
unit_id, ', '.join(
# XXX: the relative_to is a hack, ideally we would
# allow the UI to see the fine structure of the
# error message and pass appropriate path to
# relative_to()
str(other_unit.origin.relative_to(os.getcwd()))
for other_unit in units_with_this_id)))
plainbox-0.25/plainbox/impl/unit/category.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.unit.category` -- category unit
===================================================
Categories are a way of associating tests with a human-readable "group".
Particular job definitions can say that they belong to a specific group
(using the category_id field). The display value of that group is loaded
from a particular category unit. This way any provider can extend the list
of categories and we can reliably fix typos and translate the actual names
in a compatible way.
"""
import logging
from plainbox.i18n import gettext as _
from plainbox.i18n import gettext_noop as N_
from plainbox.impl.symbol import SymbolDef
from plainbox.impl.unit._legacy import CategoryUnitLegacyAPI
from plainbox.impl.unit.unit_with_id import UnitWithId
from plainbox.impl.unit.validators import CorrectFieldValueValidator
from plainbox.impl.unit.validators import PresentFieldValidator
from plainbox.impl.unit.validators import TemplateVariantFieldValidator
from plainbox.impl.unit.validators import TranslatableFieldValidator
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
__all__ = ['CategoryUnit']
logger = logging.getLogger("plainbox.unit.category")
class CategoryUnit(UnitWithId, CategoryUnitLegacyAPI):
"""
Test Category Unit
This unit defines testing categories. Job definitions can be associated
with at most one category.
"""
@classmethod
def instantiate_template(cls, data, raw_data, origin, provider,
parameters, field_offset_map):
"""
Instantiate this unit from a template.
The point of this method is to have a fixed API, regardless of what the
API of a particular unit class ``__init__`` method actually looks like.
It is easier to standardize on a new method than to patch all of the
initializers, the code using them, and the tests, to have a uniform
initializer.
"""
# This assertion is a low-cost trick to ensure that we override this
# method in all of the subclasses to ensure that the initializer is
# called with correctly-ordered arguments.
assert cls is CategoryUnit, \
"{}.instantiate_template() not customized".format(cls.__name__)
return cls(data, raw_data, origin, provider, parameters,
field_offset_map)
def __str__(self):
"""
same as .name
"""
return self.name
def __repr__(self):
return "<CategoryUnit id:{!r} name:{!r}>".format(self.id, self.name)
@property
def name(self):
"""
Name of the category
"""
return self.get_record_value('name')
def tr_name(self):
"""
Translated name of the category
"""
return self.get_translated_record_value("name")
class Meta:
name = N_('category')
class fields(SymbolDef):
"""
Symbols for each field that a CategoryUnit can have
"""
name = 'name'
field_validators = {
fields.name: [
TranslatableFieldValidator,
TemplateVariantFieldValidator,
PresentFieldValidator,
# We want the name to be a single line
CorrectFieldValueValidator(
lambda name: name.count("\n") == 0,
Problem.wrong, Severity.warning,
message=_("please use only one line"),
onlyif=lambda unit: unit.name is not None),
# We want the name to be relatively short
CorrectFieldValueValidator(
lambda name: len(name) <= 80,
Problem.wrong, Severity.warning,
message=_("please stay under 80 characters"),
onlyif=lambda unit: unit.name is not None),
]
}
plainbox-0.25/plainbox/impl/unit/test_file.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Sylvain Pineau
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.unit.test_file
============================
Test definitions for plainbox.impl.unit.file module
"""
from plainbox.impl.unit.file import FileUnit
from plainbox.impl.unit.file import FileRole
from plainbox.impl.unit.test_unit import UnitFieldValidationTests
from plainbox.impl.validation import Problem
from plainbox.impl.validation import Severity
class FileUnitFieldValidationTests(UnitFieldValidationTests):
unit_cls = FileUnit
def test_path__recommends_pxu(self):
issue_list = self.unit_cls({
'unit': self.unit_cls.Meta.name,
'path': 'foo.txt',
'role': FileRole.unit_source,
}, provider=self.provider).check()
message = ("please use .pxu as an extension for all files with "
"plainbox units, see: http://plainbox.readthedocs.org"
"/en/latest/author/faq.html#faq-1")
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.path,
Problem.deprecated, Severity.advice, message)
def test_unit__present(self):
"""
overridden version of UnitFieldValidationTests.test_unit__present()
This version has a different message and the same semantics as before
"""
issue_list = self.unit_cls({
}, provider=self.provider).check()
message = "unit should explicitly define its type"
self.assertIssueFound(issue_list, self.unit_cls.Meta.fields.unit,
Problem.missing, Severity.advice, message)
plainbox-0.25/plainbox/impl/test_init.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.test_init
=======================
Test definitions for plainbox.impl module
"""
from unittest import TestCase
import warnings
from plainbox.impl import _get_doc_margin
from plainbox.impl import deprecated
class MiscTests(TestCase):
def test_get_doc_margin(self):
self.assertEqual(
_get_doc_margin(
"the first line is ignored\n"
" subsequent lines"
" get counted"
" though"),
2)
self.assertEqual(
_get_doc_margin("what if there is no margin?"), 0)
class DeprecatedDecoratorTests(TestCase):
"""
Tests for the @deprecated function decorator
"""
def assertWarns(self, warning, callable, *args, **kwds):
with warnings.catch_warnings(record=True) as warning_list:
warnings.simplefilter('always')
result = callable(*args, **kwds)
self.assertTrue(any(item.category == warning for item in warning_list))
return result, warning_list
def test_func_deprecation_warning(self):
"""
Ensure that @deprecated decorator makes functions emit deprecation
warnings on call.
"""
@deprecated("0.6")
def func():
return "value"
result, warning_list = self.assertWarns(
DeprecationWarning,
func,
)
self.assertEqual(result, "value")
# NOTE: we need to use str() as warnings API is a bit silly there
self.assertEqual(str(warning_list[0].message),
'func is deprecated since version 0.6')
def test_func_docstring(self):
"""
Ensure that we set or modify the docstring to indicate the fact that
the function is now deprecated. The original docstring should be
preserved.
"""
@deprecated("0.6")
def func1():
pass
@deprecated("0.6")
def func2():
""" blah """
self.assertIn(".. deprecated:: 0.6", func1.__doc__)
self.assertIn(".. deprecated:: 0.6", func2.__doc__)
self.assertIn("blah", func2.__doc__)
def test_common_mistake(self):
"""
Ensure that we provide a helpful message when a common mistake is made
"""
with self.assertRaises(SyntaxError) as boom:
@deprecated
def func():
pass
self.assertEqual(
str(boom.exception),
"@deprecated() must be called with a parameter")
plainbox-0.25/plainbox/impl/result.py
# encoding: utf-8
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
Implementation of job result (test result) classes.
:mod:`plainbox.impl.result` -- job result
=========================================
This module has two basic implementation of :class:`IJobResult`:
:class:`MemoryJobResult` and :class:`DiskJobResult`.
"""
import base64
import codecs
import gzip
import inspect
import io
import json
import logging
import re
from collections import namedtuple
from plainbox.abc import IJobResult
from plainbox.i18n import gettext as _
from plainbox.i18n import pgettext as C_
from plainbox.impl import pod
from plainbox.impl.decorators import raises
logger = logging.getLogger("plainbox.result")
# Regular expression that matches control characters, EXCEPT for the newline,
# carriage return, tab and vertical tab
#
# According to http://unicode.org/glossary/#control_codes
# control codes are "The 65 characters in the ranges U+0000..U+001F and
# U+007F..U+009F. Also known as control characters."
#
# NOTE: we don't want to match certain control characters (newlines, carriage
# returns, tabs or vertical tabs) as those are allowed by lxml and it would be
# silly to strip them.
CONTROL_CODE_RE_STR = re.compile(
"(?![\n\r\t\v])[\u0000-\u001F]|[\u007F-\u009F]")
# Tuple representing entries in the JobResult.io_log
# Each entry has three fields:
#
# delay - time elapsed since the previous record was created (in seconds,
# floating point numbers represent fractional parts)
#
# stream_name - name of the stream the IO was observed on, currently
# 'stdout' and 'stderr' are supported.
#
# data - the actual IO seen (bytes)
IOLogRecord = namedtuple("IOLogRecord", "delay stream_name data".split())
# Tuple representing meta-data associated with each possible value of "outcome"
#
# This tuple replaces various ad-hoc mapping that keyed off the outcome field
# to compute something. Currently the following fields are supported:
#
# value - the actual constant like IJobResult.OUTCOME_NONE (for completeness)
#
# unicode_sigil - a short string that renders to one character cell, useful
# for representing this outcome in tabular renderings.
#
# tr_outcome - a translatable, short string that describes the outcome. Those
# strings are looked up with the context of "textual outcome" so that
# translations can be more easily tuned without also affecting random parts
# of the stack.
#
# tr_label - a label suitable for displaying the type of the outcome. This is
# mostly identical to tr_outcome but may differ in translations.
#
# color_ansi - a string containing the ANSI escape sequence for colorizing
# this outcome (or for representing it in general). This sequence is suitable
# for various terminals.
#
# color_hex - a string containing 7 character string like #RRGGBB that
# encodes the hexadecimal representation of the color. This value is suitable
# for graphical applications in the same way as color_ansi is useful for
# console applications.
#
# hexr_xml_mapping - a string that needs to be used in the XML report for the
# Canonical HEXR application (also for the Canonical Certification web
# application). Those values must be in sync with a piece of code in
# checkbox_support that handles parsing of the XML report, for as long as the
# report is to be maintained.
#
# hexr_xml_allowed - a boolean indicating that this outcome may appear
# in the XML document generated for the Canonical HEXR application. In
# theory it can go away as we can now easily control both "sides"
# (client and server) but it does exist today.
#
# hexr_xml_order - an (optional) integer used for ordering allowed values.
# This is used so that the XML output can have a fixed ordering regardless of
# the actual order of entries in the dictionary.
OutcomeMetadata = namedtuple(
"OutcomeMetadata", ("value unicode_sigil tr_outcome tr_label color_ansi"
" color_hex hexr_xml_mapping hexr_xml_allowed"
" hexr_xml_order"))
OUTCOME_METADATA_MAP = {
IJobResult.OUTCOME_NONE: OutcomeMetadata(
value=IJobResult.OUTCOME_NONE,
unicode_sigil=' ',
tr_outcome=C_("textual outcome", "job didn't run"),
tr_label=C_("chart label", "not started"),
color_ansi="",
color_hex="#000000",
hexr_xml_mapping="none",
hexr_xml_allowed=True,
hexr_xml_order=0,
),
IJobResult.OUTCOME_PASS: OutcomeMetadata(
value=IJobResult.OUTCOME_PASS,
unicode_sigil='☑ ',
tr_outcome=C_("textual outcome", "job passed"),
tr_label=C_("chart label", "passed"),
color_ansi="\033[32;1m",
color_hex="#6AA84F",
hexr_xml_mapping="pass",
hexr_xml_allowed=True,
hexr_xml_order=1,
),
IJobResult.OUTCOME_FAIL: OutcomeMetadata(
value=IJobResult.OUTCOME_FAIL,
unicode_sigil='☒ ',
tr_outcome=C_("textual outcome", "job failed"),
tr_label=C_("chart label", "failed"),
color_ansi="\033[31;1m",
color_hex="#DC3912",
hexr_xml_mapping="fail",
hexr_xml_allowed=True,
hexr_xml_order=2,
),
IJobResult.OUTCOME_SKIP: OutcomeMetadata(
value=IJobResult.OUTCOME_SKIP,
unicode_sigil='☠',
tr_outcome=C_("textual outcome", "job skipped"),
tr_label=C_("chart label", "skipped"),
color_ansi="\033[33;1m",
color_hex="#FF9900",
hexr_xml_mapping="skip",
hexr_xml_allowed=True,
hexr_xml_order=3,
),
IJobResult.OUTCOME_NOT_SUPPORTED: OutcomeMetadata(
value=IJobResult.OUTCOME_NOT_SUPPORTED,
unicode_sigil='☠',
tr_outcome=C_("textual outcome", "job cannot be started"),
tr_label=C_("chart label", "not supported"),
color_ansi="\033[33;1m",
color_hex="#FF9900",
hexr_xml_mapping="skip",
hexr_xml_allowed=False,
hexr_xml_order=None,
),
IJobResult.OUTCOME_NOT_IMPLEMENTED: OutcomeMetadata(
value=IJobResult.OUTCOME_NOT_IMPLEMENTED,
unicode_sigil='-',
tr_outcome=C_("textual outcome", "job is not implemented"),
tr_label=C_("chart label", "not implemented"),
color_ansi="\033[31;1m",
color_hex="#DC3912",
hexr_xml_mapping="skip",
hexr_xml_allowed=False,
hexr_xml_order=None,
),
IJobResult.OUTCOME_UNDECIDED: OutcomeMetadata(
value=IJobResult.OUTCOME_UNDECIDED,
unicode_sigil='⇠',
tr_outcome=C_("textual outcome", "job needs verification"),
tr_label=C_("chart label", "undecided"),
color_ansi="\033[35;1m",
color_hex="#FF00FF",
hexr_xml_mapping="skip",
hexr_xml_allowed=False,
hexr_xml_order=None,
),
IJobResult.OUTCOME_CRASH: OutcomeMetadata(
value=IJobResult.OUTCOME_CRASH,
        unicode_sigil='⚠',
tr_outcome=C_("textual outcome", "job crashed"),
tr_label=C_("chart label", "crashed"),
color_ansi="\033[41;37;1m",
color_hex="#FF0000",
hexr_xml_mapping="fail",
hexr_xml_allowed=False,
hexr_xml_order=None,
),
}
def tr_outcome(outcome):
"""Get the translated value of ``OUTCOME_`` constant."""
return OUTCOME_METADATA_MAP[outcome].tr_outcome
def outcome_color_hex(outcome):
"""Get the hexadecimal "#RRGGBB" color that represents this outcome."""
return OUTCOME_METADATA_MAP[outcome].color_hex
def outcome_color_ansi(outcome):
"""Get an ANSI escape sequence that represents this outcome."""
return OUTCOME_METADATA_MAP[outcome].color_ansi
def outcome_meta(outcome):
"""Get the OutcomeMetadata object associated with this outcome."""
return OUTCOME_METADATA_MAP[outcome]
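As a standalone illustration of the lookup pattern above (trimmed fields and stand-in outcome keys, not the real IJobResult constants):

```python
from collections import namedtuple

# Minimal sketch of the metadata-lookup pattern used above; the field set is
# trimmed and the outcome names are illustrative stand-ins.
Meta = namedtuple("Meta", "value unicode_sigil color_ansi color_hex")

META_MAP = {
    "pass": Meta("pass", '\u2611', "\033[32;1m", "#6AA84F"),
    "fail": Meta("fail", '\u2612', "\033[31;1m", "#DC3912"),
}


def color_hex(outcome):
    """Look up the #RRGGBB color associated with an outcome."""
    return META_MAP[outcome].color_hex


print(color_hex("pass"))  # -> #6AA84F
```

Keeping all per-outcome display attributes in one table means console and graphical frontends stay in sync with a single lookup.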
class JobResultBuilder(pod.POD):
"""A builder for job result objects."""
outcome = pod.Field(
'outcome of a test',
str, pod.UNSET, assign_filter_list=[pod.unset_or_typed])
execution_duration = pod.Field(
'time of test execution',
float, pod.UNSET, assign_filter_list=[pod.unset_or_typed])
comments = pod.Field(
'comments from the test operator',
str, pod.UNSET, assign_filter_list=[pod.unset_or_typed])
return_code = pod.Field(
'return code from the (optional) test process',
int, pod.UNSET, assign_filter_list=[pod.unset_or_typed])
io_log = pod.Field(
'history of the I/O log of the (optional) test process',
list, pod.UNSET, assign_filter_list=[
pod.unset_or_typed, pod.unset_or_typed.sequence(tuple)])
io_log_filename = pod.Field(
'path to a structured I/O log file of the (optional) test process',
str, pod.UNSET, assign_filter_list=[pod.unset_or_typed])
def add_comment(self, comment):
"""
Add a new comment.
The comment is safely combined with any prior comments.
"""
if self.comments is pod.UNSET:
self.comments = comment
else:
self.comments += '\n' + comment
@raises(ValueError)
def get_result(self):
"""
Use the current state of the builder to create a new result.
:returns:
A new MemoryJobResult or DiskJobResult with all the data
:raises ValueError:
If both io_log and io_log_filename were used.
"""
if not (self.io_log_filename is pod.UNSET or self.io_log is pod.UNSET):
raise ValueError(
"you can use only io_log or io_log_filename at a time")
if self.io_log_filename is not pod.UNSET:
cls = DiskJobResult
else:
cls = MemoryJobResult
return cls(self.as_dict())
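The mutual-exclusion rule enforced by get_result() can be sketched in isolation; UNSET here is a stand-in sentinel for pod.UNSET and the returned class names are just labels:

```python
# Standalone sketch of the either/or rule in get_result() above: io_log
# (in-memory data) and io_log_filename (on-disk data) cannot both be set.
UNSET = object()  # stand-in for pod.UNSET


def pick_result_class(io_log=UNSET, io_log_filename=UNSET):
    """Pick the result class the builder would instantiate."""
    if io_log is not UNSET and io_log_filename is not UNSET:
        raise ValueError(
            "you can use only io_log or io_log_filename at a time")
    if io_log_filename is not UNSET:
        return "DiskJobResult"
    return "MemoryJobResult"


print(pick_result_class(io_log_filename="log.gz"))  # -> DiskJobResult
```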
class _JobResultBase(IJobResult):
"""
    Base class for :class:`IJobResult` implementations.
This class defines base properties common to all variants of `IJobResult`
"""
def __init__(self, data):
"""
Initialize a new result with the specified data.
Data is a dictionary that can hold arbitrary values. At least some
values are explicitly used, such as 'outcome', 'comments' and
'return_code' but all of those are optional.
"""
        # Filter out boring items so that stuff that is really identical,
# behaves as if it was identical. This is especially important for
# __eq__() below as various types of IJobResult are constructed and
# compared with default entries that should not compare differently.
self._data = {
key: value for key, value in data.items()
if value is not None and value != []}
def get_builder(self, **kwargs):
"""Create a new job result builder from the data in this result."""
builder = JobResultBuilder(**self._data)
for key, value in kwargs.items():
setattr(builder, key, value)
return builder
def __eq__(self, other):
if not isinstance(other, _JobResultBase):
return NotImplemented
return self._data == other._data
def __str__(self):
return str(self.outcome)
def __repr__(self):
return "<{}>".format(
' '.join([self.__class__.__name__] + [
"{}:{!r}".format(key, self._data[key])
for key in sorted(self._data.keys())]))
@property
def outcome(self):
"""
outcome of running this job.
The outcome ultimately classifies jobs (tests) as failures or
successes. There are several other types of outcome that all basically
mean that the job did not run for some particular reason.
"""
return self._data.get('outcome', self.OUTCOME_NONE)
def tr_outcome(self):
"""Get the translated value of the outcome."""
return tr_outcome(self.outcome)
def outcome_color_hex(self):
"""Get the hexadecimal "#RRGGBB" color that represents this outcome."""
return outcome_color_hex(self.outcome)
def outcome_color_rgb(self):
h = outcome_meta(self.outcome).color_hex
assert len(h) == 7, "expected format #RRGGBB"
return (int(h[1:3], 16), int(h[3:5], 16), int(h[5:7], 16))
def outcome_color_ansi(self):
"""Get an ANSI escape sequence that represents this outcome."""
return outcome_color_ansi(self.outcome)
def outcome_meta(self):
"""Get the OutcomeMetadata object associated with this outcome."""
return outcome_meta(self.outcome)
@property
def execution_duration(self):
"""The amount of time in seconds it took to run this job."""
return self._data.get('execution_duration')
@property
def comments(self):
"""Get the comments of the test operator."""
return self._data.get('comments')
@property
def return_code(self):
"""return code of the command associated with the job, if any."""
return self._data.get('return_code')
@property
def io_log(self):
return tuple(self.get_io_log())
@property
def io_log_as_flat_text(self):
"""
Perform a lossy conversion from the binary I/O log to text.
        Convert the I/O log to a text string, replacing bytes that cannot be
        decoded with U+FFFD, the REPLACEMENT CHARACTER.
        Both stdout and stderr streams are merged together into a single
        string. I/O log records are first decoded as UTF-8 and all control
        characters (EXCEPT for the newline, carriage return, tab and
        vertical space) are removed:
>>> result = MemoryJobResult({'io_log': [
... (0, 'stdout', b'foo\\n'),
        ... (1, 'stderr', b'\\x1ebar\\n')]})
>>> result.io_log_as_flat_text
'foo\\nbar\\n'
When the input bytes can’t be converted they are replaced by U+FFFD:
>>> special_char = bytes([255,])
>>> result = MemoryJobResult({'io_log': [(0, 'stdout', special_char)]})
>>> result.io_log_as_flat_text
'�'
"""
return ''.join(
CONTROL_CODE_RE_STR.sub('', text_chunk)
for text_chunk in codecs.iterdecode(
(record.data for record in self.get_io_log()),
'UTF-8', 'replace'))
@property
def io_log_as_text_attachment(self):
"""
Perform a conversion of the binary I/O log to text, if possible.
Convert the I/O log to text attachment, if possible, otherwise return
an empty string.
This method is similar to
        :meth:`_JobResultBase.io_log_as_flat_text()` but only merges stdout
records to recreate the original attachment file.
:returns:
stdout of the given job, converted to text (assuming UTF-8
encoding) with Unicode control characters removed, if possible, or
an empty string otherwise.
"""
try:
return ''.join(
CONTROL_CODE_RE_STR.sub('', text_chunk)
for text_chunk in codecs.iterdecode(
(record.data for record in self.get_io_log()
if record[1] == 'stdout'), 'UTF-8'))
except UnicodeDecodeError:
return ''
@property
def is_hollow(self):
"""
flag that indicates if the result is hollow.
Hollow results may have been created but hold no data at all.
        Hollow results are also tentatively deprecated; once we have some
time to re-factor SessionState and specifically the job_state_map
code we will remove the need to have hollow results.
Hollow results are not saved, beginning with
:class:`plainbox.impl.session.suspend.SessionSuspendHelper4`.
"""
return not bool(self._data)
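The #RRGGBB decoding performed by outcome_color_rgb() above is simple enough to check standalone:

```python
def hex_to_rgb(h):
    """Convert a '#RRGGBB' string to an (r, g, b) tuple of ints,
    exactly as outcome_color_rgb() does above."""
    assert len(h) == 7 and h.startswith('#'), "expected format #RRGGBB"
    return (int(h[1:3], 16), int(h[3:5], 16), int(h[5:7], 16))


print(hex_to_rgb("#6AA84F"))  # -> (106, 168, 79)
```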
class MemoryJobResult(_JobResultBase):
"""
A :class:`IJobResult` that keeps IO logs in memory.
    This type of JobResult is intended for writing unit tests where the hassle
of going through the filesystem would make them needlessly complicated.
"""
def get_io_log(self):
io_log_data = self._data.get('io_log', ())
for entry in io_log_data:
if isinstance(entry, IOLogRecord):
yield entry
elif isinstance(entry, tuple):
yield IOLogRecord(*entry)
else:
raise TypeError(
"each item in io_log must be either a tuple"
" or special the IOLogRecord tuple")
class GzipFile(gzip.GzipFile):
"""
Subclass of GzipFile that works around missing read1() on python3.2.
See: http://bugs.python.org/issue10791
"""
def _read_gzip_header(self):
"""
Ignore the non-compressed garbage at the end of the file
See: https://bugs.python.org/issue24301
"""
try:
return super()._read_gzip_header()
except OSError:
return False
def read1(self, n):
return self.read(n)
class DiskJobResult(_JobResultBase):
"""
A :class:`IJobResult` that keeps IO logs on disk.
This type of JobResult is intended for working with most results. It does
not store IO logs in memory so it is scalable to arbitrary IO log sizes.
Each instance just knows where the log file is located (using the
'io_log_filename' attribute for that) and offers streaming API for
accessing particular parts of the log.
"""
@property
def io_log_filename(self):
"""pathname of the file containing serialized IO log records."""
return self._data.get("io_log_filename")
def get_io_log(self):
record_path = self.io_log_filename
if record_path:
with GzipFile(record_path, mode='rb') as gzip_stream, \
io.TextIOWrapper(gzip_stream, encoding='UTF-8') as stream:
for record in IOLogRecordReader(stream):
yield record
@property
def io_log(self):
caller_frame, filename, lineno = inspect.stack(0)[1][:3]
logger.warning(
# TRANSLATORS: please keep DiskJobResult.io_log untranslated
_("Expensive DiskJobResult.io_log property access from %s:%d"),
filename, lineno)
return super(DiskJobResult, self).io_log
class IOLogRecordWriter:
"""Class for writing :class:`IOLogRecord` instances to a text stream."""
def __init__(self, stream):
self.stream = stream
def close(self):
self.stream.close()
def write_record(self, record):
"""Write an :class:`IOLogRecord` to the stream."""
text = json.dumps([
record[0], record[1],
base64.standard_b64encode(record[2]).decode("ASCII")],
check_circular=False, ensure_ascii=True, indent=None,
separators=(',', ':'))
logger.debug(_("Encoded %r into string %r"), record, text)
assert "\n" not in text
self.stream.write(text)
self.stream.write('\n')
class IOLogRecordReader:
"""Class for streaming :class`IOLogRecord` instances from a text stream."""
def __init__(self, stream):
self.stream = stream
def close(self):
self.stream.close()
def read_record(self):
"""
Read the next record from the stream.
:returns: None if the stream is empty
:returns: next :class:`IOLogRecord` as found in the stream.
"""
text = self.stream.readline()
if len(text) == 0:
return
data = json.loads(text)
return IOLogRecord(
data[0], data[1],
base64.standard_b64decode(data[2].encode("ASCII")))
def __iter__(self):
"""
Iterate over the entire stream generating subsequent records.
This method generates subsequent :class:`IOLogRecord` entries.
"""
while True:
record = self.read_record()
if record is None:
break
yield record
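The line-oriented format handled by IOLogRecordWriter and IOLogRecordReader (compact JSON with a base64-encoded payload, one record per line) round-trips as shown below; the sketch re-declares a minimal IOLogRecord named tuple so it runs standalone:

```python
import base64
import io
import json
from collections import namedtuple

# Minimal stand-in for plainbox's IOLogRecord: (delay, stream_name, data).
IOLogRecord = namedtuple("IOLogRecord", "delay stream_name data")


def write_record(stream, record):
    """Serialize one record as a compact JSON line, as IOLogRecordWriter does."""
    text = json.dumps(
        [record[0], record[1],
         base64.standard_b64encode(record[2]).decode("ASCII")],
        check_circular=False, ensure_ascii=True, separators=(',', ':'))
    stream.write(text + '\n')


def read_record(stream):
    """Read one record back, as IOLogRecordReader does; None at end of stream."""
    text = stream.readline()
    if not text:
        return None
    data = json.loads(text)
    return IOLogRecord(data[0], data[1],
                       base64.standard_b64decode(data[2].encode("ASCII")))


buf = io.StringIO()
write_record(buf, IOLogRecord(0.1, 'stdout', b'hello\n'))
buf.seek(0)
print(read_record(buf))
# -> IOLogRecord(delay=0.1, stream_name='stdout', data=b'hello\n')
```

Base64 keeps arbitrary binary I/O data safe inside an ASCII JSON line, which is why the assert in write_record() above can rely on the text containing no newline.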
# plainbox-0.25/plainbox/impl/censoREd.py
# This file is part of Checkbox.
#
# Copyright 2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.censoREd` -- working around frustrating stuff
=================================================================
This module is the result of an evening of frustration caused by the need to
support Python 3.2 and a failing doctest that exercises, unintentionally, the
behavior of the compiled regular expression object's __repr__() method. That
should be something we can fix, right? Let's not get crazy here:
>>> import re
>>> sre_cls = type(re.compile(""))
>>> sre_cls
<class '_sre.SRE_Pattern'>
Aha, we have a nice type. It's only got a broken __repr__ method that sucks.
But this is Python, we can fix that? Right?
>>> sre_cls.__repr__ = (
... lambda self: "re.compile({!r})".format(self.pattern))
... # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
TypeError: can't set attributes of built-in/extension
type '_sre.SRE_Pattern'
Hmm, okay, so let's try something else:
>>> class Pattern(sre_cls):
... def __repr__(self):
... return "re.compile({!r})".format(self.pattern)
Traceback (most recent call last):
...
TypeError: type '_sre.SRE_Pattern' is not an acceptable base type
*Sigh*, denial, anger, bargaining, depression, acceptance
https://twitter.com/zygoon/status/560088469192843264
The last resort, aka, the proxy approach. Let's use a bit of magic to work
around the problem. This way we won't have to subclass or override anything.
"""
from padme import proxy
__all__ = ["PatternProxy"]
class PatternProxy(proxy):
"""
    A proxy that overrides the __repr__() to match what Python 3.3+ provides
on the internal object representing a compiled regular expression.
>>> import re
>>> sre_cls = type(re.compile(""))
>>> pattern = PatternProxy(re.compile("profanity"))
Can we have a repr() like in Python3.4 please?
>>> pattern
re.compile('profanity')
Does it still work like a normal pattern object?
>>> pattern.match("profanity") is not None
True
>>> pattern.match("love") is not None
False
**Yes** (gets another drink).
"""
@proxy.direct
def __repr__(self):
return "re.compile({!r})".format(self.pattern)
# plainbox-0.25/plainbox/impl/ctrl.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.ctrl` -- Controller Classes
===============================================
Session controller classes implement the glue between models (jobs, whitelists,
session state) and the rest of the application. They encapsulate knowledge that
used to be special-cased and sprinkled around various parts of both plainbox
and particular plainbox-using applications.
Execution controllers are used by the :class:`~plainbox.impl.runner.JobRunner`
class to select the best method to execute a command of a particular job. This
is mostly applicable to jobs that need to run as another user, typically as
root, as the method that is used to effectively gain root differs depending on
circumstances.
"""
import abc
import contextlib
import errno
try:
import grp
except ImportError:
grp = None
import itertools
import json
import logging
import os
try:
import posix
except ImportError:
posix = None
import tempfile
import sys
from subprocess import check_output, CalledProcessError, STDOUT
from plainbox.abc import IExecutionController
from plainbox.abc import IJobResult
from plainbox.abc import ISessionStateController
from plainbox.i18n import gettext as _
from plainbox.impl import get_plainbox_dir
from plainbox.impl.depmgr import DependencyDuplicateError
from plainbox.impl.depmgr import DependencyMissingError
from plainbox.impl.resource import ExpressionCannotEvaluateError
from plainbox.impl.resource import ExpressionFailedError
from plainbox.impl.resource import ResourceProgramError
from plainbox.impl.resource import Resource
from plainbox.impl.secure.config import Unset
from plainbox.impl.secure.origin import JobOutputTextSource
from plainbox.impl.secure.providers.v1 import Provider1
from plainbox.impl.secure.rfc822 import RFC822SyntaxError
from plainbox.impl.secure.rfc822 import gen_rfc822_records
from plainbox.impl.session.jobs import InhibitionCause
from plainbox.impl.session.jobs import JobReadinessInhibitor
from plainbox.impl.unit.job import JobDefinition
from plainbox.impl.unit.template import TemplateUnit
from plainbox.impl.validation import ValidationError
from plainbox.vendor import morris
from plainbox.vendor import extcmd
__all__ = [
'CheckBoxSessionStateController',
'RootViaPTL1ExecutionController',
'RootViaPkexecExecutionController',
'RootViaSudoExecutionController',
'UserJobExecutionController',
'checkbox_session_state_ctrl',
]
logger = logging.getLogger("plainbox.ctrl")
class CheckBoxSessionStateController(ISessionStateController):
"""
A combo controller for CheckBox-like jobs.
This controller implements the following features:
* A job may depend on another job, this is expressed via the 'depends'
attribute. Cyclic dependencies are not allowed. A job will become
inhibited if any of its dependencies have outcome other than
OUTCOME_PASS
* A job may require that a particular resource expression evaluates to
true. This is expressed via the 'requires' attribute. A job will
become inhibited if any of the requirement programs evaluates to
value other than True.
* A job may have the attribute 'plugin' equal to "local" which will
cause the controller to interpret the stdout of the command as a set
of job definitions.
* A job may have the attribute 'plugin' equal to "resource" which will
cause the controller to interpret the stdout of the command as a set
of resource definitions.
"""
def get_dependency_set(self, job):
"""
Get the set of direct dependencies of a particular job.
:param job:
A IJobDefinition instance that is to be visited
:returns:
set of pairs (dep_type, job_id)
Returns a set of pairs (dep_type, job_id) that describe all
dependencies of the specified job. The first element in the pair,
dep_type, is either DEP_TYPE_DIRECT, DEP_TYPE_ORDERING or
DEP_TYPE_RESOURCE. The second element is the id of the job.
"""
direct = DependencyMissingError.DEP_TYPE_DIRECT
ordering = DependencyMissingError.DEP_TYPE_ORDERING
resource = DependencyMissingError.DEP_TYPE_RESOURCE
direct_deps = job.get_direct_dependencies()
after_deps = job.get_after_dependencies()
try:
resource_deps = job.get_resource_dependencies()
except ResourceProgramError:
resource_deps = ()
result = set(itertools.chain(
zip(itertools.repeat(direct), direct_deps),
zip(itertools.repeat(resource), resource_deps),
zip(itertools.repeat(ordering), after_deps)))
return result
def get_inhibitor_list(self, session_state, job):
"""
Get a list of readiness inhibitors that inhibit a particular job.
:param session_state:
A SessionState instance that is used to interrogate the
state of the session where it matters for a particular
job. Currently this is used to access resources and job
results.
:param job:
A JobDefinition instance
:returns:
List of JobReadinessInhibitor
"""
inhibitors = []
# Check if all job resource requirements are met
prog = job.get_resource_program()
if prog is not None:
try:
prog.evaluate_or_raise(session_state.resource_map)
except ExpressionCannotEvaluateError as exc:
for resource_id in exc.expression.resource_id_list:
if session_state.job_state_map[resource_id].result.outcome == 'pass':
continue
# Lookup the related job (the job that provides the
# resources needed by the expression that cannot be
# evaluated)
related_job = session_state.job_state_map[resource_id].job
                    # Add a PENDING_RESOURCE inhibitor as we are unable to
                    # determine if the resource requirement is met or not. This
                    # can happen if the resource job did not run for any reason
# (it can either be prevented from running by normal means
# or simply be on the run_list but just was not executed
# yet).
inhibitor = JobReadinessInhibitor(
cause=InhibitionCause.PENDING_RESOURCE,
related_job=related_job,
related_expression=exc.expression)
inhibitors.append(inhibitor)
except ExpressionFailedError as exc:
# When expressions fail then all the associated resources are
# marked as failed since we don't want to get into the analysis
# of logic expressions to know any "better".
for resource_id in exc.expression.resource_id_list:
# Lookup the related job (the job that provides the
# resources needed by the expression that failed)
related_job = session_state.job_state_map[resource_id].job
# Add a FAILED_RESOURCE inhibitor as we have all the data
# to run the requirement program but it simply returns a
# non-True value. This typically indicates a missing
# software package or necessary hardware.
inhibitor = JobReadinessInhibitor(
cause=InhibitionCause.FAILED_RESOURCE,
related_job=related_job,
related_expression=exc.expression)
inhibitors.append(inhibitor)
# Check if all job dependencies ran successfully
for dep_id in sorted(job.get_direct_dependencies()):
dep_job_state = session_state.job_state_map[dep_id]
# If the dependency did not have a chance to run yet add the
# PENDING_DEP inhibitor.
if dep_job_state.result.outcome == IJobResult.OUTCOME_NONE:
inhibitor = JobReadinessInhibitor(
cause=InhibitionCause.PENDING_DEP,
related_job=dep_job_state.job)
inhibitors.append(inhibitor)
# If the dependency is anything but successful add the
# FAILED_DEP inhibitor. In theory the PENDING_DEP code above
            # could be discarded but this would lose context and would
# prevent the operator from actually understanding why a job
# cannot run.
elif dep_job_state.result.outcome != IJobResult.OUTCOME_PASS:
inhibitor = JobReadinessInhibitor(
cause=InhibitionCause.FAILED_DEP,
related_job=dep_job_state.job)
inhibitors.append(inhibitor)
# Check if all "after" dependencies ran yet
for dep_id in sorted(job.get_after_dependencies()):
dep_job_state = session_state.job_state_map[dep_id]
# If the dependency did not have a chance to run yet add the
# PENDING_DEP inhibitor.
if dep_job_state.result.outcome == IJobResult.OUTCOME_NONE:
inhibitor = JobReadinessInhibitor(
cause=InhibitionCause.PENDING_DEP,
related_job=dep_job_state.job)
inhibitors.append(inhibitor)
return inhibitors
def observe_result(self, session_state, job, result):
"""
Notice the specified test result and update readiness state.
:param session_state:
A SessionState object
:param job:
A JobDefinition object
:param result:
A IJobResult object
This function updates the internal result collection with the data from
the specified test result. Results can safely override older results.
Results also change the ready map (jobs that can run) because of
dependency relations.
Some results have deeper meaning, those are results for local and
resource jobs. They are discussed in detail below:
Resource jobs produce resource records which are used as data to run
requirement expressions against. Each time a result for a resource job
is presented to the session it will be parsed as a collection of RFC822
records. A new entry is created in the resource map (entirely replacing
any old entries), with a list of the resources that were parsed from
the IO log.
Local jobs produce more jobs. Like with resource jobs, their IO log is
parsed and interpreted as additional jobs. Unlike in resource jobs
local jobs don't replace anything. They cannot replace an existing job
with the same id.
"""
# Store the result in job_state_map
session_state.job_state_map[job.id].result = result
session_state.on_job_state_map_changed()
session_state.on_job_result_changed(job, result)
# Treat some jobs specially and interpret their output
if job.plugin == "resource":
self._process_resource_result(session_state, job, result)
elif job.plugin == "local":
self._process_local_result(session_state, job, result)
def _process_resource_result(self, session_state, job, result):
"""
Analyze a result of a CheckBox "resource" job and generate
or replace resource records.
"""
self._parse_and_store_resource(session_state, job, result)
self._instantiate_templates(session_state, job, result)
def _parse_and_store_resource(self, session_state, job, result):
# NOTE: https://bugs.launchpad.net/checkbox/+bug/1297928
# If we are resuming from a session that had a resource job that
# never ran, we will see an empty MemoryJobResult object.
# Processing empty I/O log would create an empty resource list
# and that state is different from the state the session started
        # before it was suspended, so don't process it.
if result.outcome is IJobResult.OUTCOME_NONE:
return
new_resource_list = []
for record in gen_rfc822_records_from_io_log(job, result):
# XXX: Consider forwarding the origin object here. I guess we
            # should have from_rfc822_record as with JobDefinition
resource = Resource(record.data)
logger.info(
_("Storing resource record %r: %s"), job.id, resource)
new_resource_list.append(resource)
# Replace any old resources with the new resource list
session_state.set_resource_list(job.id, new_resource_list)
def _instantiate_templates(self, session_state, job, result):
# NOTE: https://bugs.launchpad.net/checkbox/+bug/1297928
# If we are resuming from a session that had a resource job that
# never ran, we will see an empty MemoryJobResult object.
# Processing empty I/O log would create an empty resource list
# and that state is different from the state the session started
        # before it was suspended, so don't process it.
if result.outcome is IJobResult.OUTCOME_NONE:
return
for unit in session_state.unit_list:
if isinstance(unit, TemplateUnit) and unit.resource_id == job.id:
logger.info(_("Instantiating unit: %s"), unit)
for new_unit in unit.instantiate_all(
session_state.resource_map[job.id]):
try:
new_unit.validate()
except ValidationError as exc:
logger.error(
_("Ignoring invalid instantiated unit %s: %s"),
new_unit, exc)
else:
session_state.add_unit(new_unit)
if new_unit.Meta.name == 'job':
job_state = session_state.job_state_map[
new_unit.id]
job_state.via_job = job
def _process_local_result(self, session_state, job, result):
"""
Analyze a result of a CheckBox "local" job and generate
additional job definitions
"""
# First parse all records and create a list of new jobs (confusing
# name, not a new list of jobs)
new_job_list = []
for record in gen_rfc822_records_from_io_log(job, result):
# Skip non-job units as the code below is wired to work with jobs
# Fixes: https://bugs.launchpad.net/plainbox/+bug/1443228
if record.data.get('unit', 'job') != 'job':
continue
new_job = job.create_child_job_from_record(record)
try:
new_job.validate()
except ValidationError as exc:
logger.error(_("Ignoring invalid generated job %s: %s"),
new_job.id, exc)
else:
new_job_list.append(new_job)
# Then for each new job, add it to the job_list, unless it collides
# with another job with the same id.
for new_job in new_job_list:
try:
added_job = session_state.add_job(new_job, recompute=False)
except DependencyDuplicateError as exc:
# XXX: there should be a channel where such errors could be
# reported back to the UI layer. Perhaps update_job_result()
# could simply return a list of problems in a similar manner
# how update_desired_job_list() does.
logger.warning(
# TRANSLATORS: keep the word "local" untranslated. It is a
# special type of job that needs to be distinguished.
_("Local job %s produced job %s that collides with"
" an existing job %s (from %s), the new job was"
" discarded"),
job.id, exc.duplicate_job.id, exc.job.id, exc.job.origin)
else:
# Set the via_job attribute of the newly added job to point to
# the generator job. This way it can be traced back to the old
# __category__-style local jobs or to their corresponding
# generator job in general.
#
# NOTE: this is the only place where we assign via_job so as
# long as that holds true, we can detect and break via cycles.
#
# Via cycles occur whenever a job can reach itself again
# through via associations. Note that the chain may be longer
# than one link (A->A) and can include other jobs in the list
# (A->B->C->A)
#
# To detect a cycle we must iterate back the via chain (and we
# must do it here because we have access to job_state_map that
# allows this iteration to happen) and break the cycle if we
# see the job being added.
job_state_map = session_state.job_state_map
job_state_map[added_job.id].via_job = job
via_cycle = get_via_cycle(job_state_map, added_job)
if via_cycle:
logger.warning(_("Automatically breaking via-cycle: %s"),
' -> '.join(str(cycle_job)
for cycle_job in via_cycle))
job_state_map[added_job.id].via_job = None
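The pair-tagging done by get_dependency_set() earlier in this class can be sketched standalone; the type labels below are illustrative strings, not the real DependencyMissingError constants:

```python
import itertools


def dependency_set(direct_deps, resource_deps, after_deps):
    """Tag each dependency id with its type, mirroring the
    itertools.chain/zip/repeat construction in get_dependency_set()."""
    return set(itertools.chain(
        zip(itertools.repeat('direct'), direct_deps),
        zip(itertools.repeat('resource'), resource_deps),
        zip(itertools.repeat('ordering'), after_deps)))


print(sorted(dependency_set(['a'], ['res'], ['b'])))
# -> [('direct', 'a'), ('ordering', 'b'), ('resource', 'res')]
```

itertools.repeat pairs the same tag with every id from one source list, so the three dependency kinds collapse into a single flat set of (dep_type, job_id) pairs.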
def get_via_cycle(job_state_map, job):
"""
Find a possible cycle including via_job.
:param job_state_map:
A dictionary mapping job.id to a JobState object.
:param via_job:
Any job, start of a hypothetical via job cycle.
:raises KeyError:
If any of the encountered jobs are not present in job_state_map.
:return:
A list of jobs that represent the cycle or an empty tuple if no cycle
is present. The list has the property that item[0] is item[-1]
A via cycle occurs if *job* is reachable through the *via_job* by
recursively following via_job connection until via_job becomes None.
"""
cycle = []
seen = set()
while job is not None:
cycle.append(job)
seen.add(job)
next_job = job_state_map[job.id].via_job
if next_job in seen:
break
job = next_job
else:
return ()
    # Discard all the jobs leading to the cycle.
    cycle = cycle[cycle.index(next_job):]
    # Close the loop so that item[0] is item[-1]; this holds the promise of
    # the return value so that processing is easier for the caller.
    cycle.append(next_job)
    assert cycle[0] is cycle[-1]
    return cycle
return cycle
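The walk performed by get_via_cycle() can be exercised standalone on a plain dict mapping job-id to its "via" parent id, instead of JobState objects; this sketch honors the documented contract that the returned list starts and ends with the same item:

```python
def find_via_cycle(via_map, start):
    """Walk via_map from start; return the closed cycle, or () if none."""
    cycle, seen = [], set()
    job = start
    while job is not None:
        cycle.append(job)
        seen.add(job)
        nxt = via_map.get(job)
        if nxt in seen:
            # Drop the prefix leading into the cycle, then close the loop
            # so that cycle[0] == cycle[-1], per the documented contract.
            cycle = cycle[cycle.index(nxt):]
            cycle.append(nxt)
            return cycle
        job = nxt
    return ()


print(find_via_cycle({'a': 'b', 'b': 'c', 'c': 'b'}, 'a'))  # -> ['b', 'c', 'b']
```

Note that the start job itself need not be part of the cycle ('a' above only leads into it), which is why the prefix discard matters.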
def gen_rfc822_records_from_io_log(job, result):
"""
Convert io_log from a job result to a sequence of rfc822 records
"""
logger.debug(_("processing output from a job: %r"), job)
# Select all stdout lines from the io log
line_gen = (record[2].decode('UTF-8', errors='replace')
for record in result.get_io_log()
if record[1] == 'stdout')
# Allow the generated records to be traced back to the job that defined
# the command which produced (printed) them.
source = JobOutputTextSource(job)
try:
# Parse rfc822 records from the subsequent lines
for record in gen_rfc822_records(line_gen, source=source):
yield record
except RFC822SyntaxError as exc:
# When this exception happens we will _still_ store all the
# preceding records. This is worth testing
logger.warning(
# TRANSLATORS: keep the word "local" untranslated. It is a
# special type of job that needs to be distinguished.
_("local script %s returned invalid RFC822 data: %s"),
job.id, exc)
checkbox_session_state_ctrl = CheckBoxSessionStateController()
class SymLinkNest:
"""
A class for setting up a control directory with symlinked executables
"""
def __init__(self, dirname):
self._dirname = dirname
def add_provider(self, provider):
"""
Add all of the executables associated with a particular provider
:param provider:
A Provider1 instance
"""
for filename in provider.executable_list:
self.add_executable(filename)
def add_executable(self, filename):
"""
Add an executable to the control directory
"""
logger.debug(
_("Adding executable %s to nest %s"),
filename, self._dirname)
dest = os.path.join(self._dirname, os.path.basename(filename))
try:
os.symlink(filename, dest)
except OSError as exc:
# Allow symlinks to fail on Windows where it requires some
# untold voodoo magic to work (aka running as root)
logger.error(
_("Unable to create symlink %s -> %s: %r"),
filename, dest, exc)
if sys.platform != 'win32':
raise
class CheckBoxExecutionController(IExecutionController):
"""
Base class for checkbox-like execution controllers.
This abstract class provides common features for all checkbox execution
controllers.
"""
def __init__(self, provider_list):
"""
Initialize a new CheckBoxExecutionController
:param provider_list:
A list of Provider1 objects that will be available for script
dependency resolutions. Currently all of the scripts are available
but this will be refined to the minimal set later.
"""
self._provider_list = provider_list
def execute_job(self, job, job_state, config, session_dir, extcmd_popen):
"""
Execute the specified job using the specified subprocess-like object
:param job:
The JobDefinition to execute
:param job_state:
The JobState associated to the job to execute.
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. It is used to
provide values for missing environment variables that are required
by the job (as expressed by the environ key in the job definition
file).
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
:param extcmd_popen:
A subprocess.Popen like object
:returns:
The return code of the command, as returned by subprocess.call()
"""
# CHECKBOX_DATA is where jobs can share output.
# It has to be a directory that scripts can assume exists.
if not os.path.isdir(self.get_CHECKBOX_DATA(session_dir)):
os.makedirs(self.get_CHECKBOX_DATA(session_dir))
# Setup the executable nest directory
with self.configured_filesystem(job, config) as nest_dir:
# Get the command and the environment of this execution controller.
cmd = self.get_execution_command(
job, job_state, config, session_dir, nest_dir)
env = self.get_execution_environment(
job, job_state, config, session_dir, nest_dir)
with self.temporary_cwd(job, config) as cwd_dir:
# run the command
logger.debug(_("job[%s] executing %r with env %r in cwd %r"),
job.id, cmd, env, cwd_dir)
return_code = extcmd_popen.call(cmd, env=env, cwd=cwd_dir)
if 'noreturn' in job.get_flag_set():
self._halt()
return return_code
@contextlib.contextmanager
def configured_filesystem(self, job, config):
"""
Context manager for handling filesystem aspects of job execution.
:param job:
The JobDefinition to execute
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. It is used to
provide values for missing environment variables that are required
by the job (as expressed by the environ key in the job definition
file).
:returns:
Pathname of the executable symlink nest directory.
"""
# Create a nest for all the private executables needed for execution
prefix = 'nest-'
suffix = '.{}'.format(job.checksum)
with tempfile.TemporaryDirectory(suffix, prefix) as nest_dir:
logger.debug(_("Symlink nest for executables: %s"), nest_dir)
nest = SymLinkNest(nest_dir)
# Add all providers sharing namespace with the current job to PATH
for provider in self._provider_list:
if job.provider.namespace == provider.namespace:
nest.add_provider(provider)
yield nest_dir
@contextlib.contextmanager
def temporary_cwd(self, job, config):
"""
Context manager for handling temporary current working directory
for a particular execution of a job definition command.
:param job:
The JobDefinition to execute
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. It is used to
provide values for missing environment variables that are required
by the job (as expressed by the environ key in the job definition
file).
:returns:
Pathname of the new temporary directory
"""
# Create a temporary directory to use as the current working directory
prefix = 'cwd-'
suffix = '.{}'.format(job.checksum)
with tempfile.TemporaryDirectory(suffix, prefix) as cwd_dir:
logger.debug(
_("Job temporary current working directory: %s"), cwd_dir)
try:
yield cwd_dir
finally:
leftovers = self._find_leftovers(cwd_dir)
if leftovers:
self.on_leftover_files(job, config, cwd_dir, leftovers)
def _find_leftovers(self, cwd_dir):
"""
Find left-over files and directories
:param cwd_dir:
Directory to inspect for leftover files
:returns:
A list of discovered files and directories (except for the cwd_dir
itself)
"""
leftovers = []
for dirpath, dirnames, filenames in os.walk(cwd_dir):
if dirpath != cwd_dir:
leftovers.append(dirpath)
leftovers.extend(
os.path.join(dirpath, filename)
for filename in filenames)
return leftovers
@morris.signal
def on_leftover_files(self, job, config, cwd_dir, leftovers):
"""
Handle any files left over by the execution of a job definition.
:param job:
job definition with the command and environment definitions
:param config:
configuration object (a PlainBoxConfig instance)
:param cwd_dir:
Temporary directory set as current working directory during job
definition command execution. During the time this signal is
emitted that directory still exists.
:param leftovers:
List of absolute pathnames of files and directories that were
created in the current working directory (cwd_dir).
.. note::
Anyone listening to this signal does not need to remove any of the
files. They are removed automatically after this method returns.
"""
def get_score(self, job):
"""
Compute how applicable this controller is for the specified job.
:returns:
A numeric score; the higher the value, the more applicable this
controller is. A negative score means the controller cannot run
this job.
"""
if isinstance(job, JobDefinition):
return self.get_checkbox_score(job)
else:
return -1
@abc.abstractmethod
def get_checkbox_score(self, job):
"""
Compute how applicable this controller is for the specified job.
The twist is that it is always a checkbox job definition so we can be
more precise.
:returns:
A number that specifies how applicable this controller is for the
specified job (the higher the better) or a negative value if it
cannot be used at all.
"""
@abc.abstractmethod
def get_execution_command(self, job, job_state, config, session_dir,
nest_dir):
"""
Get the command to execute the specified job
:param job:
job definition with the command and environment definitions
:param job_state:
The JobState associated to the job to execute.
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. It is used to
provide values for missing environment variables that are required
by the job (as expressed by the environ key in the job definition
file).
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
:param nest_dir:
A directory with a nest of symlinks to all executables required to
execute the specified job. This argument may or may not be used,
depending on how PATH is passed to the command (via environment or
via the command line)
:returns:
List of command arguments
"""
def get_execution_environment(self, job, job_state, config, session_dir,
nest_dir):
"""
Get the environment required to execute the specified job:
:param job:
job definition with the command and environment definitions
:param job_state:
The JobState associated to the job to execute.
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. It is used to
provide values for missing environment variables that are required
by the job (as expressed by the environ key in the job definition
file).
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
:param nest_dir:
A directory with a nest of symlinks to all executables required to
execute the specified job. This argument may or may not be used,
depending on how PATH is passed to the command (via environment or
via the command line)
:return:
dictionary with the environment to use.
This returned environment has additional PATH, PYTHONPATH entries. It
also uses fixed LANG so that scripts behave as expected. Lastly it
sets CHECKBOX_SHARE and CHECKBOX_DATA that may be required by some
scripts.
"""
# Get a proper environment
env = dict(os.environ)
# Neuter locale unless 'preserve-locale' flag is set
if 'preserve-locale' not in job.get_flag_set():
# Use non-internationalized environment
env['LANG'] = 'C.UTF-8'
if 'LANGUAGE' in env:
del env['LANGUAGE']
for name in list(env.keys()):
if name.startswith("LC_"):
del env[name]
else:
# Set the per-provider gettext domain and locale directory
if job.provider.gettext_domain is not None:
env['TEXTDOMAIN'] = env['PLAINBOX_PROVIDER_GETTEXT_DOMAIN'] = \
job.provider.gettext_domain
if job.provider.locale_dir is not None:
env['TEXTDOMAINDIR'] = env['PLAINBOX_PROVIDER_LOCALE_DIR'] = \
job.provider.locale_dir
# Use PATH that can lookup checkbox scripts
if job.provider.extra_PYTHONPATH:
env['PYTHONPATH'] = os.pathsep.join(
[job.provider.extra_PYTHONPATH]
+ env.get("PYTHONPATH", "").split(os.pathsep))
# Inject nest_dir into PATH
env['PATH'] = os.pathsep.join(
[nest_dir]
+ env.get("PATH", "").split(os.pathsep))
# Add per-session shared state directory
env['PLAINBOX_SESSION_SHARE'] = env['CHECKBOX_DATA'] = \
self.get_CHECKBOX_DATA(session_dir)
# Add a path to the per-provider data directory
if job.provider.data_dir is not None:
env['PLAINBOX_PROVIDER_DATA'] = job.provider.data_dir
# Add a path to the per-provider units directory
if job.provider.units_dir is not None:
env['PLAINBOX_PROVIDER_UNITS'] = job.provider.units_dir
# Add a path to the base provider directory (legacy)
if job.provider.CHECKBOX_SHARE is not None:
env['CHECKBOX_SHARE'] = job.provider.CHECKBOX_SHARE
# Inject additional variables that are requested in the config
if config is not None and config.environment is not Unset:
for env_var in config.environment:
# Don't override anything that is already present in the
# current environment. This will allow users to customize
# variables without editing any config files.
if env_var in env:
continue
# If the environment section of the configuration file has a
# particular variable then copy it over.
env[env_var] = config.environment[env_var]
return env
def get_CHECKBOX_DATA(self, session_dir):
"""
Value of the CHECKBOX_DATA environment variable.
This variable names a sub-directory of the session directory
where jobs can share data between invocations.
"""
# TODO, rename this, it's about time now
return os.path.join(session_dir, "CHECKBOX_DATA")
def get_warm_up_for_job(self, job):
"""
Get a warm-up function that should be called before running this job.
:returns:
None
"""
def _halt(self):
"""
Suspend operation until signal is received
This function is useful when plainbox should stop execution and wait
for external process to kill it.
"""
import signal
signal.pause()
class UserJobExecutionController(CheckBoxExecutionController):
"""
An execution controller that works for jobs invoked as the current user.
"""
def get_execution_command(self, job, job_state, config, session_dir,
nest_dir):
"""
Get the command to execute the specified job
:param job:
job definition with the command and environment definitions
:param job_state:
The JobState associated to the job to execute.
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. Ignored.
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
Ignored.
:param nest_dir:
A directory with a nest of symlinks to all executables required to
execute the specified job. Ignored.
:returns:
List of command arguments
The return value depends on the flags that a job carries. Since
plainbox originated in a Linux environment where the default shell is
a POSIX-y shell (bash or dash), and that is what all existing jobs
assume, this method returns the following unless running on Windows::
[job.shell, '-c', job.command]
When the system is running Windows, the job must have the 'win32' flag
set (otherwise it won't be possible to run it, as get_checkbox_score()
will return -1). In that case a Windows-specific command is used::
['cmd.exe', '/C', job.command]
"""
if 'win32' in job.get_flag_set():
return ['cmd.exe', '/C', job.command]
else:
return [job.shell, '-c', job.command]
def get_checkbox_score(self, job):
"""
Compute how applicable this controller is for the specified job.
:returns:
1 for jobs without a user override, 4 for jobs with a user override
when the invoking uid is 0 (root), -1 otherwise
"""
if sys.platform == 'win32':
# Switching user credentials is not supported on Windows
if job.user is not None:
return -1
# Ordinary jobs cannot run on Windows
if 'win32' not in job.get_flag_set():
return -1
return 1
else:
# Windows jobs won't run on other platforms
if 'win32' in job.get_flag_set():
return -1
if job.user is not None:
if os.getuid() == 0:
return 4
else:
return -1
return 1
class QmlJobExecutionController(CheckBoxExecutionController):
"""
An execution controller that is able to run jobs in QML shell.
"""
QML_SHELL_PATH = os.path.join(get_plainbox_dir(), 'data', 'qml-shell',
'plainbox_qml_shell.qml')
QML_MODULES_PATH = os.path.join(get_plainbox_dir(), 'data',
'plainbox-qml-modules')
def get_execution_command(self, job, job_state, config, session_dir,
nest_dir, shell_out_fd, shell_in_fd):
"""
Get the command to execute the specified job
:param job:
job definition with the command and environment definitions
:param job_state:
The JobState associated to the job to execute.
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. Ignored.
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
Ignored.
:param nest_dir:
A directory with a nest of symlinks to all executables required to
execute the specified job. Ignored.
:param shell_out_fd:
File descriptor number which is used to pipe through result object
from the qml shell to plainbox.
:param shell_in_fd:
File descriptor number which is used to pipe through test meta
information from plainbox to qml shell.
:returns:
List of command arguments
"""
cmd = ['qmlscene', '-I', self.QML_MODULES_PATH, '--job', job.qml_file,
'--fd-out', shell_out_fd, '--fd-in', shell_in_fd,
self.QML_SHELL_PATH]
return cmd
def get_checkbox_score(self, job):
"""
Compute how applicable this controller is for the specified job.
:returns:
4 if the job is a qml job or -1 otherwise
"""
if job.plugin == 'qml':
return 4
else:
return -1
def gen_job_repr(self, job):
"""
Generate simplified job representation for use in qml shell
:returns:
dictionary with simplified job representation
"""
logger.debug(_("Generating job repr for job: %r"), job)
return {
"id": job.id,
"summary": job.tr_summary(),
"description": job.tr_description(),
}
def execute_job(self, job, job_state, config, session_dir, extcmd_popen):
"""
Execute the specified job using the specified subprocess-like object,
passing file descriptors of opened pipes for qml-shell/plainbox
communication.
:param job:
The JobDefinition to execute
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. It is used to
provide values for missing environment variables that are required
by the job (as expressed by the environ key in the job definition
file).
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
:param extcmd_popen:
A subprocess.Popen like object
:returns:
The return code of the command, as returned by subprocess.call()
"""
class DuplexPipe:
"""
Context manager creating two pipes and ensuring they are closed
properly
def __enter__(self):
self.a_read, self.b_write = os.pipe()
self.b_read, self.a_write = os.pipe()
return self.a_read, self.b_write, self.b_read, self.a_write
def __exit__(self, *args):
for pipe in (self.a_read, self.b_write,
self.b_read, self.a_write):
# typically those pipes are already closed; trying to
# re-close them causes OSError (errno == 9) to be raised
try:
os.close(pipe)
except OSError as exc:
if exc.errno != errno.EBADF:
raise
# CHECKBOX_DATA is where jobs can share output.
# It has to be a directory that scripts can assume exists.
if not os.path.isdir(self.get_CHECKBOX_DATA(session_dir)):
os.makedirs(self.get_CHECKBOX_DATA(session_dir))
# Setup the executable nest directory
with self.configured_filesystem(job, config) as nest_dir:
with DuplexPipe() as (plainbox_read, shell_write,
shell_read, plainbox_write):
# Get the command and the environment of this execution controller.
cmd = self.get_execution_command(
job, job_state, config, session_dir, nest_dir,
str(shell_write), str(shell_read))
env = self.get_execution_environment(
job, job_state, config, session_dir, nest_dir)
with self.temporary_cwd(job, config) as cwd_dir:
testing_shell_data = json.dumps({
"job_repr": self.gen_job_repr(job),
"session_dir": self.get_CHECKBOX_DATA(session_dir)
})
pipe_out = os.fdopen(plainbox_write, 'wt')
pipe_out.write(testing_shell_data)
pipe_out.close()
# run the command
logger.debug(_("job[%s] executing %r with "
"env %r in cwd %r"),
job.id, cmd, env, cwd_dir)
ret = extcmd_popen.call(cmd, env=env, cwd=cwd_dir,
pass_fds=[shell_write, shell_read])
os.close(shell_read)
os.close(shell_write)
pipe_in = os.fdopen(plainbox_read)
res_object_json_string = pipe_in.read()
pipe_in.close()
if 'noreturn' in job.get_flag_set():
self._halt()
if ret != 0:
return ret
try:
result = json.loads(res_object_json_string)
if result['outcome'] == "pass":
return 0
else:
return 1
except ValueError:
# qml-job did not print a proper JSON object
return 1
class CheckBoxDifferentialExecutionController(CheckBoxExecutionController):
"""
A CheckBoxExecutionController subclass that uses differential environment.
This special subclass has a special :meth:`get_execution_environment()`
method that always returns None. Instead the new method
:meth:`get_differential_execution_environment()` that returns the
difference between the target environment and the current environment.
"""
def get_differential_execution_environment(
self, job, job_state, config, session_dir, nest_dir):
"""
Get the environment required to execute the specified job:
:param job:
job definition with the command and environment definitions
:param job_state:
The JobState associated to the job to execute.
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. It is used to
provide values for missing environment variables that are required
by the job (as expressed by the environ key in the job definition
file).
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
:param nest_dir:
A directory with a nest of symlinks to all executables required to
execute the specified job. This is simply passed to
:meth:`get_execution_environment()` directly.
:returns:
Differential environment (see below).
This implementation computes the desired environment (as it was
computed in the base class) and then discards all of the environment
variables that are identical in both sets. The exception are variables
that are mentioned in
:meth:`plainbox.impl.job.JobDefinition.get_environ_settings()` which
are always retained.
"""
base_env = os.environ
target_env = super().get_execution_environment(
job, job_state, config, session_dir, nest_dir)
delta_env = {
key: value
for key, value in target_env.items()
if key not in base_env or base_env[key] != value
or key in job.get_environ_settings()
}
# Neutral locale in the differential environment unless the
# 'preserve-locale' flag is set.
if 'preserve-locale' not in job.get_flag_set():
delta_env['LANG'] = 'C.UTF-8'
delta_env['LANGUAGE'] = ''
delta_env['LC_ALL'] = 'C.UTF-8'
return delta_env
def get_execution_environment(self, job, job_state, config, session_dir,
nest_dir):
"""
Get the environment required to execute the specified job:
:param job:
job definition with the command and environment definitions.
Ignored.
:param job_state:
The JobState associated to the job to execute. Ignored.
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. Ignored.
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
Ignored.
:param nest_dir:
A directory with a nest of symlinks to all executables required to
execute the specified job. Ignored.
:returns:
None
This implementation always returns None since the environment is always
passed in via :meth:`get_execution_command()`
"""
return None
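The differential computation itself boils down to a small dict comprehension. This standalone sketch uses the same logic as :meth:`get_differential_execution_environment()` above, with hypothetical names (``differential_environment``, ``always_keep``) standing in for the method and the job's ``get_environ_settings()``:

```python
def differential_environment(base_env, target_env, always_keep=()):
    """Return only the keys of target_env that are new or changed
    relative to base_env, plus any keys listed in always_keep."""
    return {
        key: value
        for key, value in target_env.items()
        if key not in base_env
        or base_env[key] != value
        or key in always_keep
    }
```

Keys with identical values in both environments are dropped, which keeps the ``pkexec``/``sudo`` command lines short: only modified variables (such as an augmented PATH), brand-new variables, and explicitly requested ones get forwarded.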
class RootViaPTL1ExecutionController(CheckBoxDifferentialExecutionController):
"""
Execution controller that gains root using plainbox-trusted-launcher-1
"""
def __init__(self, provider_list):
"""
Initialize a new RootViaPTL1ExecutionController
"""
super().__init__(provider_list)
# Ask pkaction(1) if the "run-plainbox-job" policykit action is
# registered on this machine.
action_id = b"org.freedesktop.policykit.pkexec.run-plainbox-job"
# Catch CalledProcessError because pkaction (polkit < 0.110) always
# exits with status 1, see:
# https://bugs.freedesktop.org/show_bug.cgi?id=29936#attach_78263
try:
result = check_output(["pkaction", "--action-id", action_id],
stderr=STDOUT)
except OSError as exc:
logger.warning(
_("Cannot check if plainbox-trusted-launcher-1 is"
" available: %s"), str(exc))
result = b""
except CalledProcessError as exc:
result = exc.output
self.is_supported = (result.strip() == action_id)
def get_execution_command(self, job, job_state, config, session_dir,
nest_dir):
"""
Get the command to invoke.
:param job:
job definition with the command and environment definitions
:param job_state:
The JobState associated to the job to execute.
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. Passed to
:meth:`get_differential_execution_environment()`.
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
:param nest_dir:
A directory with a nest of symlinks to all executables required to
execute the specified job. Passed to
:meth:`get_differential_execution_environment()`.
This overridden implementation returns especially crafted command that
uses pkexec to run the plainbox-trusted-launcher-1 as the desired user
(typically root). It passes the checksum of the job definition as
argument, along with all of the required environment key-value pairs.
If a job is generated it also passes the special via attribute to let
the trusted launcher discover the generated job. Currently it supports
at most one-level of generated jobs.
"""
# Run plainbox-trusted-launcher-1 as the required user
cmd = ['pkexec', '--user', job.user, 'plainbox-trusted-launcher-1']
# Run the specified generator job in the specified environment
if job_state.via_job is not None:
cmd += ['--generator', job_state.via_job.checksum]
parent_env = self.get_differential_execution_environment(
# FIXME: job_state is from an unrelated job :/
job_state.via_job, job_state, config, session_dir,
nest_dir)
for key, value in sorted(parent_env.items()):
cmd += ['-G', '{}={}'.format(key, value)]
# Run the specified target job in the specified environment
cmd += ['--target', job.checksum]
env = self.get_differential_execution_environment(
job, job_state, config, session_dir, nest_dir)
for key, value in sorted(env.items()):
cmd += ['-T', '{}={}'.format(key, value)]
return cmd
def get_checkbox_score(self, job):
"""
Compute how applicable this controller is for the specified job.
:returns:
three for jobs with a user override when the trusted launcher is
authorized to run jobs as another user, zero for other jobs that
the trusted launcher can invoke, -1 otherwise
"""
# Only works with jobs coming from the Provider1 instance
if not isinstance(job.provider, Provider1):
return -1
# Only works with jobs loaded from the secure PROVIDERPATH
if not job.provider.secure:
return -1
# Doesn't work when connected over SSH (LP: #1299201)
if os.environ.get("SSH_CONNECTION"):
return -1
# Doesn't work for windows jobs
if 'win32' in job.get_flag_set():
return -1
# Only makes sense with jobs that need to run as another user
# Promote this controller only if the trusted launcher is authorized to
# run jobs as another user
if job.user is not None and self.is_supported:
return 3
else:
return 0
def get_warm_up_for_job(self, job):
"""
Get a warm-up function that should be called before running this job.
:returns:
a warm-up function for jobs that need to run as another
user or None if the job can run as the current user.
"""
if job.user is None:
return
else:
return plainbox_trusted_launcher_warm_up
def plainbox_trusted_launcher_warm_up():
"""
Warm-up function for plainbox-trusted-launcher-1, returned by
:meth:`RootViaPTL1ExecutionController.get_warm_up_for_job()`.
"""
warmup_popen = extcmd.ExternalCommand()
return warmup_popen.call(
['pkexec', 'plainbox-trusted-launcher-1', '--warmup'])
class RootViaPkexecExecutionController(
CheckBoxDifferentialExecutionController):
"""
Execution controller that gains root by using pkexec.
This controller should be used for jobs that need root but cannot be
executed by the plainbox-trusted-launcher-1. This happens whenever the job
is not in the system-wide provider location.
In practice it is used when working with the special
'checkbox-in-source-tree' provider as well as for jobs that need to run as
root from the non-system-wide location.
"""
def get_execution_command(self, job, job_state, config, session_dir,
nest_dir):
"""
Get the command to invoke.
:param job:
job definition with the command and environment definitions
:param job_state:
The JobState associated to the job to execute.
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. Passed to
:meth:`get_differential_execution_environment()`.
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
:param nest_dir:
A directory with a nest of symlinks to all executables required to
execute the specified job. Passed to
:meth:`get_differential_execution_environment()`.
Since we cannot pass environment in the ordinary way while using
pkexec(1) (pkexec starts new processes in a sanitized, pristine,
environment) we're relying on env(1) to pass some of the environment
variables that we require.
"""
# Run env(1) as the required user
cmd = ['pkexec', '--user', job.user, 'env']
# Append all environment data
env = self.get_differential_execution_environment(
job, job_state, config, session_dir, nest_dir)
cmd += ["{key}={value}".format(key=key, value=value)
for key, value in sorted(env.items())]
# Lastly use job.shell -c, to run our command
cmd += [job.shell, '-c', job.command]
return cmd
def get_checkbox_score(self, job):
"""
Compute how applicable this controller is for the specified job.
:returns:
one for jobs with a user override, zero otherwise
"""
# Doesn't work for windows jobs
if 'win32' in job.get_flag_set():
return -1
if job.user is not None:
return 1
else:
return 0
class RootViaSudoExecutionController(
CheckBoxDifferentialExecutionController):
"""
Execution controller that gains root by using sudo.
This controller should be used for jobs that need root but cannot be
executed by the plainbox-trusted-launcher-1.
This happens whenever the job is not in the system-wide provider location.
In practice it is used when working with the special
'checkbox-in-source-tree' provider as well as for jobs that need to run as
root from the non-system-wide location.
Using this controller is preferable to pkexec when running on the
command line since, unlike pkexec, sudo retains credentials and doesn't
ask for the password over and over again.
"""
def __init__(self, provider_list):
"""
Initialize a new RootViaSudoExecutionController
"""
super().__init__(provider_list)
# Check if the user can use 'sudo' on this machine. This check is a bit
# Ubuntu specific and can be wrong due to local configuration but
# without a better API all we can do is guess.
#
# Shamelessly stolen from command-not-found
try:
in_sudo_group = grp.getgrnam("sudo").gr_gid in posix.getgroups()
except KeyError:
in_sudo_group = False
try:
in_admin_group = grp.getgrnam("admin").gr_gid in posix.getgroups()
except KeyError:
in_admin_group = False
self.user_can_sudo = in_sudo_group or in_admin_group
def get_execution_command(self, job, job_state, config, session_dir,
nest_dir):
"""
Get the command to invoke.
:param job:
job definition with the command and environment definitions
:param job_state:
The JobState associated to the job to execute.
:param config:
A PlainBoxConfig instance which can be used to load missing
environment definitions that apply to all jobs. Ignored.
:param session_dir:
Base directory of the session this job will execute in.
This directory is used to co-locate some data that is unique to
this execution as well as data that is shared by all executions.
:param nest_dir:
A directory with a nest of symlinks to all executables required to
execute the specified job. Ignored.
Since we cannot pass environment in the ordinary way while using
sudo(8) (even passing -E doesn't get us everything due to security
features built into sudo itself) we're relying on env(1) to pass some
of the environment variables that we require.
"""
# Run env(1) as the required user
cmd = ['sudo', '-u', job.user, 'env']
# Append all environment data
env = self.get_differential_execution_environment(
job, job_state, config, session_dir, nest_dir)
cmd += ["{key}={value}".format(key=key, value=value)
for key, value in sorted(env.items())]
# Lastly use job.shell -c, to run our command
cmd += [job.shell, '-c', job.command]
return cmd
def get_checkbox_score(self, job):
"""
Compute how applicable this controller is for the specified job.
:returns:
-1 if the job does not have a user override or the user cannot use
sudo, and 2 otherwise
"""
# Doesn't work for windows jobs
if 'win32' in job.get_flag_set():
return -1
# Only makes sense with jobs that need to run as another user
if job.user is not None and self.user_can_sudo:
return 2
else:
return -1
plainbox-0.25/plainbox/impl/job.py
# This file is part of Checkbox.
#
# Copyright 2012, 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.job` -- job definition
==========================================
.. warning::
THIS MODULE DOES NOT HAVE STABLE PUBLIC API
"""
from plainbox.impl.unit.job import JobDefinition
__all__ = ('JobDefinition', )
plainbox-0.25/plainbox/impl/logging.py
# This file is part of Checkbox.
#
# Copyright 2012, 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.logging` -- configuration for logging
=========================================================
.. warning::
THIS MODULE DOES NOT HAVE STABLE PUBLIC API
"""
__all__ = ['setup_logging', 'adjust_logging']
import logging
import logging.config
import os
import sys
from plainbox.i18n import gettext as _
from plainbox.impl.color import ansi_on, ansi_off
logger = logging.getLogger("plainbox.logging")
# XXX: enable ansi escape sequences if sys.std{out,err} are both TTYs
#
# This is a bad place to take this decision (ideally we'd do that per log
# handler) but it's rather hard to do correctly (handlers know where stuff
# goes, formatters decide how stuff looks) so this half-solution is
# better than nothing.
if sys.stdout.isatty() and sys.stderr.isatty():
ansi = ansi_on
else:
ansi = ansi_off
class ANSIFormatter(logging.Formatter):
"""
Formatter that allows expanding '{ansi}' (using new-style
python formatting syntax) inside format descriptions.
"""
def __init__(self, fmt=None, datefmt=None, style='%'):
if fmt is not None:
fmt = fmt.format(ansi=ansi)
super(ANSIFormatter, self).__init__(fmt, datefmt, style)
class LevelFilter:
"""
Log filter that accepts records in a certain level range
"""
def __init__(self, min_level="NOTSET", max_level="CRITICAL"):
self.min_level = logging._checkLevel(min_level)
self.max_level = logging._checkLevel(max_level)
def filter(self, record):
if self.min_level <= record.levelno <= self.max_level:
return 1
else:
return 0
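The same range filter can be written against public `logging` APIs only, since `logging._checkLevel()` is private and has changed across Python versions; a sketch:

```python
import logging

class RangeFilter(logging.Filter):
    """Accept only records whose level is within [min_level, max_level]."""

    def __init__(self, min_level=logging.NOTSET, max_level=logging.CRITICAL):
        super().__init__()
        self.min_level = min_level
        self.max_level = max_level

    def filter(self, record):
        return self.min_level <= record.levelno <= self.max_level

# Attach to a handler so that it only ever emits INFO records:
handler = logging.StreamHandler()
handler.addFilter(RangeFilter(logging.INFO, logging.INFO))
```

This is how the `only_debug`, `only_info` and `only_warnings` filters below route each level band to its own console handler.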
class LoggingHelper:
"""
Helper class that manages logging subsystem
"""
def setup_logging(self):
config_dict = self.DEFAULT_CONFIG
# Ensure that the logging directory exists. This is important
# because we're about to open some files there. If it can't be created
# we fall back to a console-only config.
if not os.path.exists(self.log_dir):
# It seems that exist_ok is flaky
try:
os.makedirs(self.log_dir, exist_ok=True)
except OSError as error:
logger.warning(
_("Unable to create log directory: %s"), self.log_dir)
logger.warning(_("Reason: %s. All logs will go to "
"console instead."), error)
config_dict = self.DEFAULT_CONSOLE_ONLY_CONFIG
# Apply the selected configuration. This overrides anything currently
# defined for all of the logging subsystem in this python runtime
logging.config.dictConfig(config_dict)
def adjust_logging(self, level=None, trace_list=None, debug_console=False):
# Bump logging on the root logger if requested
if level is not None:
logging.getLogger(None).setLevel(level)
logger.debug(_("Enabled %r on root logger"), level)
logging.getLogger("plainbox").setLevel(level)
logging.getLogger("checkbox").setLevel(level)
# Enable tracing on specified loggers
if trace_list is not None:
for name in trace_list:
logging.getLogger(name).setLevel(logging.DEBUG)
logger.debug(_("Enabled debugging on logger %r"), name)
if debug_console and (level == 'DEBUG' or trace_list):
# Enable DEBUG logging to console if explicitly requested
logging.config.dictConfig(self.DEBUG_CONSOLE_CONFIG)
@property
def log_dir(self):
"""
directory with all of the log files
"""
xdg_cache_home = os.environ.get('XDG_CACHE_HOME') or \
os.path.join(os.path.expanduser('~'), '.cache')
return os.path.join(xdg_cache_home, 'plainbox', 'logs')
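The XDG fallback used by `log_dir` can be isolated into a sketch; per the XDG Base Directory spec, `$XDG_CACHE_HOME` defaults to `~/.cache` (function names are illustrative):

```python
import os

def xdg_cache_home():
    """Return $XDG_CACHE_HOME, falling back to ~/.cache per the XDG spec."""
    return os.environ.get('XDG_CACHE_HOME') or \
        os.path.join(os.path.expanduser('~'), '.cache')

def plainbox_log_dir():
    """Directory where plainbox keeps its rotated log files."""
    return os.path.join(xdg_cache_home(), 'plainbox', 'logs')
```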
@property
def DEFAULT_FORMATTERS(self):
"""
Reusable dictionary with the formatter configuration plainbox uses
"""
return {
"console_debug": {
"()": "plainbox.impl.logging.ANSIFormatter",
"format": (
"{ansi.f.BLACK}{ansi.s.BRIGHT}"
"%(levelname)s"
"{ansi.s.NORMAL}{ansi.f.RESET}"
" "
"{ansi.f.CYAN}{ansi.s.DIM}"
"%(name)s"
"{ansi.f.RESET}{ansi.s.NORMAL}"
": "
"{ansi.s.DIM}"
"%(message)s"
"{ansi.s.NORMAL}"
),
},
"console_info": {
"()": "plainbox.impl.logging.ANSIFormatter",
"format": (
"{ansi.f.WHITE}{ansi.s.BRIGHT}"
"%(levelname)s"
"{ansi.s.NORMAL}{ansi.f.RESET}"
" "
"{ansi.f.CYAN}{ansi.s.BRIGHT}"
"%(name)s"
"{ansi.f.RESET}{ansi.s.NORMAL}"
": "
"%(message)s"
),
},
"console_warning": {
"()": "plainbox.impl.logging.ANSIFormatter",
"format": (
"{ansi.f.YELLOW}{ansi.s.BRIGHT}"
"%(levelname)s"
"{ansi.f.RESET}{ansi.s.NORMAL}"
" "
"{ansi.f.CYAN}%(name)s{ansi.f.RESET}"
": "
"{ansi.f.WHITE}%(message)s{ansi.f.RESET}"
),
},
"console_error": {
"()": "plainbox.impl.logging.ANSIFormatter",
"format": (
"{ansi.f.RED}{ansi.s.BRIGHT}"
"%(levelname)s"
"{ansi.f.RESET}{ansi.s.NORMAL}"
" "
"{ansi.f.CYAN}%(name)s{ansi.f.RESET}"
": "
"{ansi.f.WHITE}%(message)s{ansi.f.RESET}"
),
},
"log_precise": {
"format": (
"%(asctime)s "
"[pid:%(process)s, thread:%(threadName)s, "
"reltime:%(relativeCreated)dms] "
"%(levelname)s %(name)s: %(message)s"
),
"datefmt": "%Y-%m-%d %H:%M:%S",
},
}
@property
def DEFAULT_FILTERS(self):
"""
Reusable dictionary with the filter configuration plainbox uses
"""
return {
"only_debug": {
"()": "plainbox.impl.logging.LevelFilter",
"max_level": "DEBUG",
},
"only_info": {
"()": "plainbox.impl.logging.LevelFilter",
"min_level": "INFO",
"max_level": "INFO",
},
"only_warnings": {
"()": "plainbox.impl.logging.LevelFilter",
"min_level": "WARNING",
"max_level": "WARNING",
},
}
@property
def DEFAULT_HANDLERS(self):
"""
Reusable dictionary with the handler configuration plainbox uses.
This configuration assumes the log file locations exist and are
writable.
"""
return {
"console_debug": {
"class": "logging.StreamHandler",
"stream": "ext://sys.stdout",
"formatter": "console_debug",
"filters": ["only_debug"],
"level": 150,
},
"console_info": {
"class": "logging.StreamHandler",
"stream": "ext://sys.stdout",
"formatter": "console_info",
"filters": ["only_info"],
},
"console_warning": {
"class": "logging.StreamHandler",
"stream": "ext://sys.stderr",
"formatter": "console_warning",
"filters": ["only_warnings"],
},
"console_error": {
"class": "logging.StreamHandler",
"stream": "ext://sys.stderr",
"formatter": "console_error",
"level": "ERROR",
},
"logfile_debug": {
"class": "logging.handlers.RotatingFileHandler",
"filename": os.path.join(self.log_dir, "debug.log"),
"maxBytes": 32 << 20,
"backupCount": 3,
"mode": "a",
"formatter": "log_precise",
"delay": True,
"filters": ["only_debug"],
},
"logfile_error": {
"class": "logging.handlers.RotatingFileHandler",
"filename": os.path.join(self.log_dir, "problem.log"),
"backupCount": 3,
"level": "WARNING",
"mode": "a",
"formatter": "log_precise",
"delay": True,
},
"logfile_crash": {
"class": "logging.handlers.RotatingFileHandler",
"filename": os.path.join(self.log_dir, "crash.log"),
"backupCount": 3,
"level": "ERROR",
"mode": "a",
"formatter": "log_precise",
"delay": True,
},
"logfile_bug": {
"class": "logging.handlers.RotatingFileHandler",
"filename": os.path.join(self.log_dir, "bug.log"),
"backupCount": 3,
"mode": "a",
"formatter": "log_precise",
"delay": True,
},
}
@property
def DEFAULT_CONSOLE_ONLY_HANDLERS(self):
"""
Reusable dictionary with a handler configuration using only the
console for output.
"""
return {
"console_debug": {
"class": "logging.StreamHandler",
"stream": "ext://sys.stdout",
"formatter": "console_debug",
"filters": ["only_debug"],
"level": 150,
},
"console_info": {
"class": "logging.StreamHandler",
"stream": "ext://sys.stdout",
"formatter": "console_info",
"filters": ["only_info"],
},
"console_warning": {
"class": "logging.StreamHandler",
"stream": "ext://sys.stderr",
"formatter": "console_warning",
"filters": ["only_warnings"],
},
"console_error": {
"class": "logging.StreamHandler",
"stream": "ext://sys.stderr",
"formatter": "console_error",
"level": "ERROR",
},
}
@property
def DEFAULT_LOGGERS(self):
"""
Reusable dictionary with the logger configuration plainbox uses.
This configuration assumes the log file locations exist and are
writable.
"""
return {
"checkbox": {
"level": "WARNING",
"handlers": [
"console_debug",
"console_info",
"console_warning",
"console_error",
"logfile_error",
"logfile_debug",
],
},
"plainbox": {
"level": "WARNING",
"handlers": [
"console_debug",
"console_info",
"console_warning",
"console_error",
"logfile_error",
"logfile_debug",
],
},
"plainbox.crashes": {
"level": "ERROR",
"handlers": ["logfile_crash"],
},
"plainbox.bug": {
"handlers": ["logfile_bug"],
},
}
@property
def DEFAULT_CONSOLE_ONLY_LOGGERS(self):
"""
Reusable dictionary with a logger configuration using only the
console for output.
"""
return {
"plainbox": {
"level": "WARNING",
"handlers": [
"console_debug",
"console_info",
"console_warning",
"console_error",
],
},
"plainbox.crashes": {
"level": "ERROR",
"handlers": ["console_error"],
},
}
@property
def DEFAULT_CONFIG(self):
"""
Plainbox logging configuration with logfiles and console.
"""
return {
"version": 1,
"formatters": self.DEFAULT_FORMATTERS,
"filters": self.DEFAULT_FILTERS,
"handlers": self.DEFAULT_HANDLERS,
"loggers": self.DEFAULT_LOGGERS,
"root": {
"level": "WARNING",
},
"incremental": False,
"disable_existing_loggers": True,
}
@property
def DEFAULT_CONSOLE_ONLY_CONFIG(self):
"""
Plainbox logging configuration with console output only.
"""
return {
"version": 1,
"formatters": self.DEFAULT_FORMATTERS,
"filters": self.DEFAULT_FILTERS,
"handlers": self.DEFAULT_CONSOLE_ONLY_HANDLERS,
"loggers": self.DEFAULT_CONSOLE_ONLY_LOGGERS,
"root": {
"level": "WARNING",
},
"incremental": False,
"disable_existing_loggers": True,
}
@property
def DEBUG_CONSOLE_CONFIG(self):
return {
"version": 1,
"handlers": {
"console_debug": {
"level": "DEBUG",
},
},
"incremental": True,
}
# Instantiate the helper
_LoggingHelper = LoggingHelper()
# And expose two methods from it
setup_logging = _LoggingHelper.setup_logging
adjust_logging = _LoggingHelper.adjust_logging
plainbox-0.25/plainbox/impl/test_applogic.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.test_applogic
===========================
Test definitions for plainbox.impl.applogic module
"""
from unittest import TestCase
from plainbox.impl.applogic import get_matching_job_list
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.qualifiers import RegExpJobQualifier
from plainbox.impl.testing_utils import make_job
from plainbox.vendor import mock
class FunctionTests(TestCase):
def test_get_matching_job_list(self):
origin = mock.Mock(name='origin', spec_set=Origin)
job_list = [make_job('foo'), make_job('froz'), make_job('barg')]
self.assertEqual(
get_matching_job_list(job_list, RegExpJobQualifier('f.*', origin)),
[make_job('foo'), make_job('froz')])
plainbox-0.25/plainbox/impl/applogic.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.applogic` -- application logic
==================================================
.. warning::
THIS MODULE DOES NOT HAVE STABLE PUBLIC API
"""
import os
from plainbox.abc import IJobResult
from plainbox.i18n import gettext as _
from plainbox.impl.result import MemoryJobResult
from plainbox.impl.secure import config
from plainbox.impl.secure.qualifiers import select_jobs
from plainbox.impl.session import SessionManager
from plainbox.impl.session.jobs import InhibitionCause
# Deprecated, use plainbox.impl.secure.qualifiers.select_jobs() instead
def get_matching_job_list(job_list, qualifier):
"""
Get a list of jobs that are designated by the specified qualifier.
This is intended to be used with :class:`CompositeQualifier`
but works with any :class:`IJobQualifier` subclass.
"""
return select_jobs(job_list, [qualifier])
def get_whitelist_by_name(provider_list, desired_whitelist):
"""
Get the first whitelist matching desired_whitelist from the loaded
providers
"""
for provider in provider_list:
for whitelist in provider.whitelist_list:
if whitelist.name == desired_whitelist:
return whitelist
else:
raise LookupError(
_("None of the providers had a whitelist "
"named '{}'").format(desired_whitelist))
def run_job_if_possible(session, runner, config, job, update=True, ui=None):
"""
Coupling point for session, runner, config and job
:returns: (job_state, job_result)
"""
job_state = session.job_state_map[job.id]
if job_state.can_start():
job_result = runner.run_job(job, job_state, config, ui)
else:
# Set the outcome of jobs that cannot start to
# OUTCOME_NOT_SUPPORTED _except_ if any of the inhibitors point to
# a job with an OUTCOME_SKIP outcome, if that is the case mirror
# that outcome. This makes 'skip' stronger than 'not-supported'
outcome = IJobResult.OUTCOME_NOT_SUPPORTED
for inhibitor in job_state.readiness_inhibitor_list:
if inhibitor.cause != InhibitionCause.FAILED_DEP:
continue
related_job_state = session.job_state_map[
inhibitor.related_job.id]
if related_job_state.result.outcome == IJobResult.OUTCOME_SKIP:
outcome = IJobResult.OUTCOME_SKIP
job_result = MemoryJobResult({
'outcome': outcome,
'comments': job_state.get_readiness_description()
})
assert job_result is not None
if update:
session.update_job_result(job, job_result)
return job_state, job_result
class PlainBoxConfig(config.Config):
"""
Configuration for PlainBox itself
"""
environment = config.Section(
help_text=_("Environment variables for scripts and jobs"))
extcmd = config.Variable(
section='FEATURE-FLAGS', kind=str, default="legacy",
validator_list=[config.ChoiceValidator(["legacy", "glibc"])],
help_text=_("Which implementation of extcmd to use"))
class Meta:
# TODO: properly depend on xdg and use real code that also handles
# XDG_CONFIG_HOME.
filename_list = [
'/etc/xdg/plainbox.conf',
os.path.expanduser('~/.config/plainbox.conf')]
def get_all_exporter_names():
"""
Get the identifiers (names) of all the supported session state exporters.
:returns:
A list of session exporter names (identifiers) available from all the
providers.
This function creates a temporary session associated with the local
device and adds all of the available providers to it. Finally, it returns
the list of exporter names. The session is transparently destroyed.
"""
with SessionManager.get_throwaway_manager() as manager:
return list(manager.exporter_map.keys())
plainbox-0.25/plainbox/impl/clitools.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.clitools` -- support code for command line utilities
========================================================================
.. warning::
THIS MODULE DOES NOT HAVE STABLE PUBLIC API
"""
import abc
import argparse
import errno
import inspect
import logging
import os
import pdb
import sys
from plainbox.i18n import bindtextdomain
from plainbox.i18n import dgettext
from plainbox.i18n import gettext as _
from plainbox.i18n import textdomain
from plainbox.impl._argparse import LegacyHelpFormatter
from plainbox.impl.logging import adjust_logging
from plainbox.impl.secure.plugins import IPlugInCollection
from plainbox.impl.secure.plugins import now
logger = logging.getLogger("plainbox.clitools")
class CommandBase(metaclass=abc.ABCMeta):
"""
Simple interface class for sub-commands of :class:`ToolBase`.
Command objects like this are consumed by `ToolBase` subclasses to
implement hierarchical command system. The API supports arbitrary many sub
commands in arbitrary nesting arrangement.
Subcommands need to be registered inside the :meth:`register_parser()`,
either manually by calling add_parser() on the passed subparsers instance,
or by calling the helper :meth:`add_subcommand()` method. By common
convention each subclass of CommandBase adds exactly one subcommand to the
parser.
"""
@abc.abstractmethod
def invoked(self, ns):
"""
Implement what should happen when the command gets invoked
The ns is the namespace produced by argument parser
"""
@abc.abstractmethod
def register_parser(self, subparsers):
"""
Implement what should happen to register the additional parser for this
command. The subparsers argument is the return value of
ArgumentParser.add_subparsers()
"""
# This method is optional
def register_arguments(self, parser):
"""
Implement to customize which arguments need to be added to a parser.
This method differs from register_parser() in that it allows commands
which implement it to be invoked directly from a tool class (without
being a subcommand that needs to be selected). If implemented it should
be used from within :meth:`register_parser()` to ensure identical
behavior in both cases (subcommand and tool-level command)
"""
raise NotImplementedError("register_arguments() not customized")
def autopager(self):
"""
Enable automatic pager.
This invokes :func:`autopager()` which wraps execution in a pager
program so that long output is not a problem to read. Do not call this
in interactive commands.
"""
autopager()
def get_command_name(self):
"""
Get the name of the command, as seen on command line.
:returns:
self.name, if defined
:returns:
lower-cased class name, with the string "command" stripped out
"""
try:
return self.name
except AttributeError:
name = self.__class__.__name__.lower()
if name.endswith("command"):
name = name.replace("command", "")
return name
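The naming convention implemented by `get_command_name()` can be demonstrated standalone (`command_name` and the sample classes are illustrative):

```python
def command_name(cls):
    """Derive a command name from a class, mirroring get_command_name().

    An explicit `name` attribute wins; otherwise the lower-cased class
    name with the substring "command" removed is used.
    """
    explicit = getattr(cls, 'name', None)
    if explicit is not None:
        return explicit
    name = cls.__name__.lower()
    if name.endswith("command"):
        name = name.replace("command", "")
    return name

class RunCommand: pass
class ListJobsCommand: name = "list-jobs"
```

With these samples, `RunCommand` becomes the subcommand `run` while `ListJobsCommand` keeps its explicit `list-jobs` name.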
def get_localized_docstring(self):
"""
Get a cleaned-up, localized copy of docstring of this class.
"""
if self.__class__.__doc__ is not None:
return inspect.cleandoc(
dgettext(self.get_gettext_domain(), self.__class__.__doc__))
def get_command_help(self):
"""
Get a single-line help string associated with this command, as seen on
command line.
:returns:
self.help, if defined
:returns:
The first line of the docstring of this class, if any
:returns:
None, otherwise
"""
try:
return self.help
except AttributeError:
pass
try:
return self.get_localized_docstring().splitlines()[0]
except (AttributeError, ValueError, IndexError):
pass
def get_command_description(self):
"""
Get a multi-line description string associated with this command, as
seen on command line.
The description is printed after command usage but before argument and
option definitions.
:returns:
self.description, if defined
:returns:
A substring of the class docstring between the first line (which
goes to :meth:`get_command_help()`) and the string ``@EPILOG@``, if
present, or the end of the docstring, if any.
:returns:
None, otherwise
"""
try:
return self.description
except AttributeError:
pass
try:
return '\n'.join(
self.get_localized_docstring().splitlines()[1:]
).split('@EPILOG@', 1)[0].strip()
except (AttributeError, IndexError, ValueError):
pass
def get_command_epilog(self):
"""
Get a multi-line description string associated with this command, as
seen on command line.
The epilog is printed after the definitions of arguments and options
:returns:
self.epilog, if defined
:returns:
A substring of the class docstring between the string ``@EPILOG``
and the end of the docstring, if defined
:returns:
None, otherwise
"""
try:
return self.epilog
except AttributeError:
pass
try:
return '\n'.join(
self.get_localized_docstring().splitlines()[1:]
).split('@EPILOG@', 1)[1].strip()
except (AttributeError, IndexError, ValueError):
pass
def get_gettext_domain(self):
"""
Get the gettext translation domain associated with this command.
The domain will be used to translate the description, epilog and help
string, as obtained by their respective methods.
:returns:
self.gettext_domain, if defined
:returns:
None, otherwise. Note that it will cause the string to be
translated with the globally configured domain.
"""
try:
return self.gettext_domain
except AttributeError:
pass
def add_subcommand(self, subparsers):
"""
Add a parser to the specified subparsers instance.
:returns:
The new parser for the added subcommand
This command works by convention, depending on
:meth:`get_command_name()`, :meth:`get_command_help()`,
:meth:`get_command_description()` and :meth:`get_command_epilog()`.
"""
help = self.get_command_help()
description = self.get_command_description()
epilog = self.get_command_epilog()
name = self.get_command_name()
parser = subparsers.add_parser(
name, help=help, description=description, epilog=epilog,
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.set_defaults(command=self)
return parser
class ToolBase(metaclass=abc.ABCMeta):
"""
Base class for implementing programs with hierarchical subcommands
The tools support a variety of sub-commands, logging and debugging support.
If argcomplete module is available and used properly in the shell then
advanced tab-completion is also available.
There are three methods to implement for a basic tool. Those are:
1. :meth:`get_exec_name()` -- to know how the tool will be called
2. :meth:`get_exec_version()` -- to know the version of the tool
3. :meth:`add_subcommands()` -- to add some actual commands to execute
This class has some complex control flow to support important and
interesting use cases. It is important to know that input is parsed with
two parsers, the early parser and the full parser. The early parser
quickly checks for a fraction of supported arguments and uses that data to
initialize the environment before construction of the full parser is possible.
The full parser sees the remainder of the input and does not re-parse things
that were already handled.
"""
_RELEASELEVEL_TO_TOKEN = {
"alpha": "a",
"beta": "b",
"candidate": "c",
}
def __init__(self):
"""
Initialize all the variables, real stuff happens in main()
"""
self._setup_logging_from_environment()
self._early_parser = None # set in _early_init()
self._parser = None # set in main()
logger.debug(_("Constructed %r"), self)
def _setup_logging_from_environment(self):
if not os.getenv("PLAINBOX_DEBUG", ""):
return
adjust_logging(
level=os.getenv("PLAINBOX_LOG_LEVEL", "DEBUG"),
trace_list=os.getenv("PLAINBOX_TRACE", "").split(","),
debug_console=os.getenv("PLAINBOX_DEBUG", "") == "console")
logger.debug(_("Activated early logging via environment variables"))
def main(self, argv=None):
"""
Run as if invoked from command line directly
"""
# Another try/catch block for catching KeyboardInterrupt
# This one is really only meant for the early init abort
# (when someone runs main but bails out before we really
# get to the point when we do something useful and setup
# all the exception handlers).
try:
logger.debug(_("Tool initialization (early mode)"))
self.early_init()
logger.debug(_("Parsing command line arguments (early mode)"))
early_ns = self._early_parser.parse_args(argv)
logger.debug(
_("Command line parsed to (early mode): %r"), early_ns)
logger.debug(_("Tool initialization (late mode)"))
self.late_init(early_ns)
# Construct the full command line argument parser
logger.debug(_("Parser construction"))
self._parser = self.construct_parser(early_ns)
# parse the full command line arguments, this is also where we
# do argcomplete-dictated exit if bash shell completion
# is requested
logger.debug(_("Parsing command line arguments"))
ns = self._parser.parse_args(argv)
logger.debug(_("Command line parsed to: %r"), ns)
logger.debug(_("Tool initialization (final steps)"))
self.final_init(ns)
logger.debug(_("Tool initialization complete"))
except KeyboardInterrupt:
pass
else:
logger.debug(_("Dispatching command..."))
return self.dispatch_and_catch_exceptions(ns)
@classmethod
def format_version_tuple(cls, version_tuple):
major, minor, micro, releaselevel, serial = version_tuple
version = "%s.%s" % (major, minor)
if micro != 0:
version += ".%s" % micro
token = cls._RELEASELEVEL_TO_TOKEN.get(releaselevel)
if token:
version += "%s%d" % (token, serial)
if releaselevel == "dev":
version += ".dev"
return version
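A standalone sketch reproducing the logic of `format_version_tuple()`:

```python
_RELEASELEVEL_TO_TOKEN = {"alpha": "a", "beta": "b", "candidate": "c"}

def format_version(version_tuple):
    """Render a 5-tuple like (1, 5, 0, 'beta', 2) as '1.5b2'."""
    major, minor, micro, releaselevel, serial = version_tuple
    version = "%s.%s" % (major, minor)
    if micro != 0:
        # A zero micro version is simply omitted.
        version += ".%s" % micro
    token = _RELEASELEVEL_TO_TOKEN.get(releaselevel)
    if token:
        version += "%s%d" % (token, serial)
    if releaselevel == "dev":
        version += ".dev"
    return version

# format_version((0, 25, 0, 'final', 0)) → "0.25"
# format_version((1, 5, 0, 'beta', 2))   → "1.5b2"
```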
@classmethod
@abc.abstractmethod
def get_exec_name(cls):
"""
Get the name of this executable
"""
@classmethod
@abc.abstractmethod
def get_exec_version(cls):
"""
Get the version reported by this executable
"""
@abc.abstractmethod
def add_subcommands(self, subparsers, early_ns):
"""
Add top-level subcommands to the argument parser.
:param subparsers:
The argparse subparsers object. Use it to register additional
command line syntax parsers and to add your commands there.
:param early_ns:
A namespace from parsing by the special early parser. The early
parser may be used to quickly guess the command that needs to be
loaded, despite not really being able to parse everything the full
parser can. Using this as a hint one can optimize the command
loading process to skip loading commands that would not be
executed.
This can be overridden by subclasses to use a different set of
top-level subcommands.
"""
def early_init(self):
"""
Do very early initialization. This is where we initialize stuff even
without seeing a shred of command line data or anything else.
"""
self.setup_i18n()
self._early_parser = self.construct_early_parser()
def setup_i18n(self):
"""
Setup i18n and l10n system.
"""
domain = self.get_gettext_domain()
if domain is not None:
textdomain(domain)
bindtextdomain(domain, self.get_locale_dir())
def get_gettext_domain(self):
"""
Get the name of the gettext domain that should be used by this tool.
The value returned will be used to select translations to
global calls to gettext() and ngettext() everywhere in
python.
"""
return None
def get_locale_dir(self):
"""
Get the path of the gettext translation catalogs for this tool.
This value is used to bind the domain returned by
:meth:`get_gettext_domain()` to a specific directory. By default None
is returned, which means that standard, system-wide locations are used.
"""
return None
def late_init(self, early_ns):
"""
Initialize with early command line arguments being already parsed
"""
adjust_logging(
level=early_ns.log_level, trace_list=early_ns.trace,
debug_console=early_ns.debug_console)
def final_init(self, ns):
"""
Do some final initialization just before the command gets
dispatched. This is empty here but maybe useful for subclasses.
"""
def construct_early_parser(self):
"""
Create a parser that captures some of the early data we need to
be able to have a real parser and initialize the rest.
"""
parser = argparse.ArgumentParser(add_help=False)
# Fake --help and --version
parser.add_argument("-h", "--help", action="store_const", const=None)
parser.add_argument("--version", action="store_const", const=None)
self.add_early_parser_arguments(parser)
# A catch-all net for everything else
parser.add_argument("rest", nargs="...")
return parser
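The catch-all relies on argparse's `"..."` (`argparse.REMAINDER`) to swallow whatever the early parser does not understand; a minimal sketch of the same two-phase setup:

```python
import argparse

# A minimal early parser: it knows a couple of flags and collects the rest.
early = argparse.ArgumentParser(add_help=False)
early.add_argument("--version", action="store_const", const=None)
early.add_argument("-v", "--verbose", action="store_true")
early.add_argument("rest", nargs="...")

# Known options are consumed; everything else lands in `rest`
# for the full parser to handle later.
ns = early.parse_args(["-v", "run", "--some-flag", "job-id"])
```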
def create_parser_object(self):
"""
Construct a bare parser object.
This method is responsible for creating the main parser object and
adding --version and other basic top-level properties to it (but not
any of the commands).
It exists as a separate method in case some special customization is
required, so that subclasses can still use standard version of
:meth:`construct_parser()`.
:returns:
argparse.ArgumentParser instance.
"""
parser = argparse.ArgumentParser(
prog=self.get_exec_name(),
formatter_class=LegacyHelpFormatter)
# NOTE: help= is provided explicitly as argparse doesn't wrap
# everything with _() correctly (depending on version)
parser.add_argument(
"--version", action="version", version=self.get_exec_version(),
help=_("show program's version number and exit"))
return parser
def construct_parser(self, early_ns=None):
parser = self.create_parser_object()
# Add all the things really parsed by the early parser so that it
# shows up in --help and bash tab completion.
self.add_early_parser_arguments(parser)
subparsers = parser.add_subparsers()
self.add_subcommands(subparsers, early_ns)
self.enable_argcomplete_if_possible(parser)
return parser
def enable_argcomplete_if_possible(self, parser):
# Enable argcomplete if it is available.
try:
import argcomplete
except ImportError:
pass
else:
argcomplete.autocomplete(parser)
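The optional-dependency pattern used for argcomplete is broadly reusable; a sketch against a hypothetical module name:

```python
def enable_feature_if_possible():
    """Enable an optional feature only when its module is installed.

    Returns True when the (hypothetical) optional module was found.
    """
    try:
        import hypothetical_optional_module  # illustrative name
    except ImportError:
        # The dependency is absent; degrade gracefully.
        return False
    else:
        # hypothetical_optional_module would be activated here.
        return True
```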
def add_early_parser_arguments(self, parser):
group = parser.add_argument_group(
title=_("logging and debugging"))
# Add the --log-level argument
group.add_argument(
"-l", "--log-level",
action="store",
choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'),
default=None,
help=argparse.SUPPRESS)
# Add the --verbose argument
group.add_argument(
"-v", "--verbose",
dest="log_level",
action="store_const",
const="INFO",
# TRANSLATORS: please keep --log-level=INFO untranslated
help=_("be more verbose (same as --log-level=INFO)"))
# Add the --debug flag
group.add_argument(
"-D", "--debug",
dest="log_level",
action="store_const",
const="DEBUG",
# TRANSLATORS: please keep DEBUG untranslated
help=_("enable DEBUG messages on the root logger"))
# Add the --debug flag
group.add_argument(
"-C", "--debug-console",
action="store_true",
# TRANSLATORS: please keep DEBUG untranslated
help=_("display DEBUG messages in the console"))
# Add the --trace flag
group.add_argument(
"-T", "--trace",
metavar=_("LOGGER"),
action="append",
default=[],
# TRANSLATORS: please keep DEBUG untranslated
help=_("enable DEBUG messages on the specified logger "
"(can be used multiple times)"))
# Add the --pdb flag
group.add_argument(
"-P", "--pdb",
action="store_true",
default=False,
# TRANSLATORS: please keep pdb untranslated
help=_("jump into pdb (python debugger) when a command crashes"))
# Add the --debug-interrupt flag
group.add_argument(
"-I", "--debug-interrupt",
action="store_true",
default=False,
# TRANSLATORS: please keep SIGINT/KeyboardInterrupt and --pdb
# untranslated
help=_("crash on SIGINT/KeyboardInterrupt, useful with --pdb"))
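The group above makes ``-v`` and ``-D`` mere shorthands that write into the shared ``log_level`` destination. A minimal standalone sketch of the same ``store_const`` pattern (names are illustrative, not the plainbox API):

```python
import argparse

# Recreate the store_const pattern used above: -v and -D are just
# shorthands that store INFO/DEBUG into the shared log_level slot.
parser = argparse.ArgumentParser(prog="tool")
group = parser.add_argument_group(title="logging and debugging")
group.add_argument(
    "-l", "--log-level", action="store",
    choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'),
    default=None)
group.add_argument(
    "-v", "--verbose", dest="log_level",
    action="store_const", const="INFO")
group.add_argument(
    "-D", "--debug", dest="log_level",
    action="store_const", const="DEBUG")

print(parser.parse_args(["-v"]).log_level)        # INFO
print(parser.parse_args(["-v", "-D"]).log_level)  # DEBUG: last flag wins
print(parser.parse_args([]).log_level)            # None
```

Because all three options share one ``dest``, the last flag on the command line wins, which is exactly what lets ``-D`` override ``-v``.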
def dispatch_command(self, ns):
# Argh the horror!
#
# Since CPython revision cab204a79e09 (landed for python3.3)
# http://hg.python.org/cpython/diff/cab204a79e09/Lib/argparse.py
# the argparse module behaves differently than it did in python3.2
#
# In practical terms subparsers are now optional in 3.3, so the
# sub-command is no longer a required parameter.
#
# To compensate, on python3.3 and beyond, when the user just runs
# plainbox without specifying the command, we manually, explicitly do
# what python3.2 did: call parser.error(_('too few arguments'))
if (sys.version_info[:2] >= (3, 3)
and getattr(ns, "command", None) is None):
self._parser.error(argparse._("too few arguments"))
else:
return ns.command.invoked(ns)
def dispatch_and_catch_exceptions(self, ns):
try:
return self.dispatch_command(ns)
except SystemExit:
# Don't let SystemExit be caught in the logic below, we really
# just want to exit when that gets thrown.
# TRANSLATORS: please keep SystemExit untranslated
logger.debug(_("caught SystemExit, exiting"))
# We may want to raise SystemExit as it can carry a status code
# along and we cannot just consume that.
raise
except BaseException as exc:
logger.debug(_("caught %r, deciding on what to do next"), exc)
# For all other exceptions (and I mean all), do a few checks
# and perform actions depending on the command line arguments
# By default we want to re-raise the exception
action = 'raise'
# We want to ignore IOErrors that are really EPIPE
if isinstance(exc, IOError):
if exc.errno == errno.EPIPE:
action = 'ignore'
# We want to ignore KeyboardInterrupt unless --debug-interrupt
# was passed on command line
elif isinstance(exc, KeyboardInterrupt):
if ns.debug_interrupt:
action = 'debug'
else:
action = 'ignore'
else:
# For all other exceptions, debug if requested
if ns.pdb:
action = 'debug'
logger.debug(_("action for exception %r is %s"), exc, action)
if action == 'ignore':
return 0
elif action == 'raise':
logging.getLogger("plainbox.crashes").fatal(
_("Executable %r invoked with %r has crashed"),
self.get_exec_name(), ns, exc_info=1)
raise
elif action == 'debug':
logger.error(_("caught runaway exception: %r"), exc)
logger.error(_("starting debugger..."))
pdb.post_mortem()
return 1
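The decision logic above reduces to a small table: ignore EPIPE, ignore KeyboardInterrupt unless --debug-interrupt was given, and otherwise raise or debug depending on --pdb. A standalone sketch of that table (the helper name is hypothetical, not part of plainbox):

```python
import errno

def action_for_exception(exc, debug_interrupt=False, use_pdb=False):
    # Mirror the raise/ignore/debug decision made above.
    if isinstance(exc, IOError):
        # A broken pipe means the reader went away; not a crash.
        return 'ignore' if exc.errno == errno.EPIPE else 'raise'
    if isinstance(exc, KeyboardInterrupt):
        return 'debug' if debug_interrupt else 'ignore'
    return 'debug' if use_pdb else 'raise'

print(action_for_exception(IOError(errno.EPIPE, "Broken pipe")))  # ignore
print(action_for_exception(KeyboardInterrupt()))                  # ignore
print(action_for_exception(ValueError("boom")))                   # raise
print(action_for_exception(ValueError("boom"), use_pdb=True))     # debug
```

Note that, as in the method above, a non-EPIPE IOError is re-raised even when --pdb was requested.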
class LazyLoadingToolMixIn(metaclass=abc.ABCMeta):
"""
Mix-in class for ToolBase that improves responsiveness by loading
subcommands lazily on demand and using some heuristic that works well in
the common case of running one command.
Unlike the original, this implementation uses a custom version of
add_subcommands() which uses the ``early_ns`` argument as a hint to not
load or register commands that are not going to be needed.
In practice ``tool --help`` doesn't benefit much but ``tool <cmd>`` can now
be much, much faster (and more responsive) as it only loads that one
command.
Concrete subclasses must implement the :meth:`get_command_collection()`
method which must return an IPlugInCollection (ideally the
LazyPlugInCollection that contains extra optimizations for low-cost key
enumeration and one-at-a-time value loading).
"""
@abc.abstractmethod
def get_command_collection(self) -> IPlugInCollection:
"""
Get a (lazy) collection of all subcommands.
This method returns an IPlugInCollection that maps command name to
CommandBase subclass, such as :class:`PlainBoxCommand`.
The name of each plug in object **must** match the command name.
"""
def add_subcommands(
self,
subparsers: argparse._SubParsersAction,
early_ns: "Maybe[argparse.Namespace]"=None,
) -> None:
"""
Add top-level subcommands to the argument parser.
:param subparsers:
A part of argparse that can be used to create additional parsers
for specific subcommands.
:param early_ns:
(optional) An argparse namespace from earlier parsing. If it is not
None, it must have the ``rest`` attribute which is used as a list
of hints.
.. note::
This method is customized by LazyLoadingToolMixIn and should not be
overridden directly. To register your commands use
:meth:`get_command_collection()`.
"""
if early_ns is not None:
self.add_subcommands_with_hints(subparsers, early_ns.rest)
else:
self.add_subcommands_without_hints(
subparsers, self.get_command_collection())
def add_subcommands_with_hints(
self, subparsers: argparse._SubParsersAction,
hint_list: "List[str]"
) -> None:
"""
Add top-level subcommands to the argument parser, using a list of
hints.
:param subparsers:
A part of argparse that can be used to create additional parsers
for specific subcommands.
:param hint_list:
A list of strings that should be used as hints.
This method tries to optimize the time needed to register and setup all
of the subcommands by looking at a list of hints in search for the
(likely) command that will be executed.
Things that look like options are ignored. The first element of
``hint_list`` that matches a known command name, as provided by
:meth:`get_command_collection()`, is used as a sign that that command
will be executed and all other commands don't have to be loaded or
initialized. If no hints are found (e.g. when running ``tool --help``)
the slower fallback mode is used and all subcommands are added.
.. note::
This method is customized by LazyLoadingToolMixIn and should not be
overridden directly. To register your commands use
:meth:`get_command_collection()`.
"""
logger.debug(
_("Trying to load exactly the right command: %r"), hint_list)
command_collection = self.get_command_collection()
for hint in hint_list:
# Skip all the things that look like additional options
if hint.startswith('-'):
continue
# Break on the first hint that we can load
try:
plugin = command_collection.get_by_name(hint)
except KeyError:
continue
else:
command = plugin.plugin_object
logger.debug("Registering single command %r", command)
start = now()
command.register_parser(subparsers)
logger.debug(_("Cost of registering guessed command: %f"),
now() - start)
break
else:
logger.debug("Falling back to loading all commands")
self.add_subcommands_without_hints(subparsers, command_collection)
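The hint scan above boils down to: skip anything that looks like an option, return the first hint that names a known command, fall back to loading everything when nothing matches. A standalone sketch (the helper name is illustrative):

```python
def guess_command(hint_list, known_commands):
    """Return the first non-option hint naming a known command, else None."""
    for hint in hint_list:
        if hint.startswith('-'):
            continue  # options are never command names
        if hint in known_commands:
            return hint  # first match wins, like get_by_name() above
    return None  # caller falls back to loading all commands

known = {'run', 'list', 'self-test'}
print(guess_command(['--log-level', 'DEBUG', 'run', 'job'], known))  # run
print(guess_command(['--help'], known))  # None
```

A ``None`` result corresponds to the slower fallback path where every subcommand is registered.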
def add_subcommands_without_hints(
self, subparsers: argparse._SubParsersAction,
command_collection: IPlugInCollection,
) -> None:
"""
Add top-level subcommands to the argument parser (fallback mode)
:param subparsers:
A part of argparse that can be used to create additional parsers
for specific subcommands.
:param command_collection:
A collection of commands that was obtained from
:meth:`get_command_collection()` earlier.
This method is called when hint-based optimization cannot be used and
all commands need to be loaded and initialized.
.. note::
This method is customized by LazyLoadingToolMixIn and should not be
overridden directly. To register your commands use
:meth:`get_command_collection()`.
"""
command_collection.load()
logger.debug(
_("Cost of loading all top-level commands: %f"),
command_collection.get_total_time())
start = now()
for command in command_collection.get_all_plugin_objects():
logger.debug("Registering command %r", command)
command.register_parser(subparsers)
logger.debug(
_("Cost of registering all top-level commands: %f"),
now() - start)
class SingleCommandToolMixIn:
"""
Mix-in class for ToolBase to implement single-command dispatch.
This effectively turns the tool into a single-command tool. The only method
that needs to be implemented is the get_command() method.
"""
@abc.abstractmethod
def get_command(self):
"""
Get the command to register
The return value must be a CommandBase instance that implements the
:meth:`CommandBase.register_arguments()` method.
"""
def add_subcommands(self, subparsers, early_ns):
"""
Overridden version of add_subcommands()
This method does nothing. It is here because ToolBase requires it.
"""
def construct_parser(self, early_ns=None):
"""
Overridden version of construct_parser()
This method sets the single subcommand as default. This allows the
whole tool to be started without arguments and do the right thing while
still supporting optional sub-commands and true (and rich) built-in
help.
"""
parser = self.create_parser_object()
# Add all the things really parsed by the early parser so that they
# show up in --help and bash tab completion.
self.add_early_parser_arguments(parser)
# Customize parser with command details
self.customize_parser(parser)
# Enable argcomplete if it is available.
self.enable_argcomplete_if_possible(parser)
return parser
def customize_parser(self, parser):
# Instantiate the command to use
cmd = self.get_command()
# Set top-level parser description and epilog
parser.epilog = cmd.get_command_epilog()
parser.description = cmd.get_command_description()
# Directly register the command
cmd.register_arguments(parser)
def autopager(pager_list=['sensible-pager', 'less', 'more']):
"""
Enable automatic pager
:param pager_list:
List of pager programs to try.
:returns:
Nothing immediately if auto-pagerification cannot be turned on.
This is true when running on windows or when sys.stdout is not
a tty.
This function executes the following steps:
* A pager is selected
* A pipe is created
* The current process forks
* The parent uses execl() and becomes the pager
* The child/python carries on the execution of python code.
* The parent/pager stdin is connected to the child's stdout.
* The child/python stderr is connected to parent/pager stdin only when
sys.stderr is connected to a tty
.. note::
Pager selection is influenced by the ``PAGER`` environment variable. If
set, it will be prepended to ``pager_list`` so that the user's preferred
pager is tried first.
.. warning::
This function must not be used for interactive commands. Doing so
will prevent users from feeding any input to plainbox as all input
will be "stolen" by the pager process.
"""
# If stdout is not connected to a tty or when running on win32, just return
if not sys.stdout.isatty() or sys.platform == "win32":
return
# Check if the user has a PAGER set, if so, consider that the prime
# candidate for the effective pager.
pager = os.getenv('PAGER')
if pager is not None:
pager_list = [pager] + pager_list
# Find the best pager based on user preferences and built-in knowledge
try:
pager_name, pager_pathname = find_exec(pager_list)
except LookupError:
# If none of the pagers are installed, just return
return
# Flush any pending output
sys.stdout.flush()
sys.stderr.flush()
# Create a pipe that we'll use to glue ourselves to the pager
read_end, write_end = os.pipe()
# Fork so that we can have a pager process
if os.fork() == 0:
# NOTE: this is where plainbox will run
# Rewire stdout and stderr (if a tty) to the pipe
os.dup2(write_end, sys.stdout.fileno())
if sys.stderr.isatty():
os.dup2(write_end, sys.stderr.fileno())
# Close the unused end of the pipe
os.close(read_end)
else:
# NOTE: this is where the pager will run
# Rewire stdin to the pipe
os.dup2(read_end, sys.stdin.fileno())
# Close the unused end of the pipe
os.close(write_end)
# Execute the pager
os.execl(pager_pathname, pager_name)
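The glue in autopager() is an anonymous pipe whose ends get rewired with dup2(). The pipe plumbing itself can be exercised without forking; a minimal sketch of just the data flow (the real function additionally forks and exec()s the pager):

```python
import os

# Create the pipe that autopager() uses to glue plainbox to the pager.
read_end, write_end = os.pipe()

# In autopager() the plainbox side dup2()s write_end over stdout; here we
# simply write to the pipe directly to show where the bytes travel.
os.write(write_end, b"output meant for the pager\n")
os.close(write_end)  # closing signals EOF to the reader

# The pager side dup2()s read_end over stdin; here we just read it back.
data = os.read(read_end, 1024)
os.close(read_end)
print(data)  # b'output meant for the pager\n'
```

Everything written to the write end becomes readable, in order, on the read end; closing the write end is what eventually lets the pager see end-of-input and exit.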
def find_exec(name_list):
"""
Find the first executable from name_list in PATH
:param name_list:
List of names of executable programs to look for, in the order
of preference. Only basenames should be passed here (not absolute
pathnames)
:returns:
Tuple (name, pathname), if the executable can be found
:raises:
LookupError if none of the names in name_list are executable
programs in PATH
"""
path_list = os.get_exec_path()
for name in name_list:
for path in path_list:
pathname = os.path.join(path, name)
if os.access(pathname, os.X_OK):
return (name, pathname)
raise LookupError(
_("Unable to find any of the executables {}").format(
", ".join(name_list)))
plainbox-0.25/plainbox/impl/runner.py 0000664 0001750 0001750 00000117324 12627266441 020536 0 ustar pierre pierre 0000000 0000000 # encoding: utf-8
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
Definition of JobRunner class.
:mod:`plainbox.impl.runner` -- job runner
=========================================
.. warning::
THIS MODULE DOES NOT HAVE STABLE PUBLIC API
"""
import collections
import datetime
import gzip
import io
import logging
import os
import string
import sys
import time
from plainbox.abc import IJobResult, IJobRunner
from plainbox.i18n import gettext as _
from plainbox.impl.result import IOLogRecord
from plainbox.impl.result import IOLogRecordWriter
from plainbox.impl.result import JobResultBuilder
from plainbox.vendor import extcmd
from plainbox.vendor import morris
logger = logging.getLogger("plainbox.runner")
def slugify(_string):
"""Transform any string to onet that can be used in filenames."""
valid_chars = frozenset(
"-_.{}{}".format(string.ascii_letters, string.digits))
return ''.join(c if c in valid_chars else '_' for c in _string)
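For reference, the transform behaves like this (a standalone copy of the function above, so the example is self-contained):

```python
import string

def slugify(_string):
    """Keep [A-Za-z0-9._-]; replace every other character with '_'."""
    valid_chars = frozenset(
        "-_.{}{}".format(string.ascii_letters, string.digits))
    return ''.join(c if c in valid_chars else '_' for c in _string)

print(slugify("2015-12-01 10:30"))     # 2015-12-01_10_30
print(slugify("usb/storage test #1"))  # usb_storage_test__1
```

Slashes, spaces, and punctuation all collapse to underscores, which is what makes the result safe to use as an IO-log filename.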
class IOLogRecordGenerator(extcmd.DelegateBase):
"""Delegate for extcmd that generates io_log entries."""
def on_begin(self, args, kwargs):
"""
Internal method of extcmd.DelegateBase.
Called when a command is being invoked.
Begins tracking time (relative time entries)
"""
self.last_msg = datetime.datetime.utcnow()
def on_line(self, stream_name, line):
"""
Internal method of extcmd.DelegateBase.
Creates a new IOLogRecord and passes it to :meth:`on_new_record()`.
Maintains a timestamp of the last message so that approximate delay
between each piece of output can be recorded as well.
"""
now = datetime.datetime.utcnow()
delay = now - self.last_msg
self.last_msg = now
record = IOLogRecord(delay.total_seconds(), stream_name, line)
self.on_new_record(record)
@morris.signal
def on_new_record(self, record):
"""
Internal signal method of :class:`IOLogRecordGenerator`.
Called when a new record is generated and needs to be processed.
"""
# TRANSLATORS: io means input-output
logger.debug(_("io log generated %r"), record)
class CommandOutputWriter(extcmd.DelegateBase):
"""
Delegate for extcmd that writes output to a file on disk.
The file itself is only opened once on_begin() gets called by extcmd. This
makes it safe to instantiate this without worrying about dangling
resources.
"""
def __init__(self, stdout_path, stderr_path):
"""
Initialize new writer.
Just records output paths.
"""
self.stdout_path = stdout_path
self.stderr_path = stderr_path
def on_begin(self, args, kwargs):
"""
Internal method of extcmd.DelegateBase.
Called when a command is being invoked
"""
self.stdout = open(self.stdout_path, "wb")
self.stderr = open(self.stderr_path, "wb")
def on_end(self, returncode):
"""
Internal method of extcmd.DelegateBase.
Called when a command finishes running
"""
self.stdout.close()
self.stderr.close()
def on_abnormal_end(self, signal_num):
"""
Internal method of extcmd.DelegateBase.
Called when a command abnormally finishes running
"""
self.stdout.close()
self.stderr.close()
def on_line(self, stream_name, line):
"""
Internal method of extcmd.DelegateBase.
Called for each line of output.
"""
if stream_name == 'stdout':
self.stdout.write(line)
elif stream_name == 'stderr':
self.stderr.write(line)
class FallbackCommandOutputPrinter(extcmd.DelegateBase):
"""
Delegate for extcmd that prints all output to stdout.
This delegate is only used as a fallback when no delegate was explicitly
provided to a JobRunner instance.
"""
def __init__(self, prompt):
"""Initialize a new fallback command output printer."""
self._prompt = prompt
self._lineno = collections.defaultdict(int)
self._abort = False
def on_line(self, stream_name, line):
"""
Internal method of extcmd.DelegateBase.
Called for each line of output. Normally each line is just printed
(assuming UTF-8 encoding). If decoding fails for any reason, that and
all subsequent lines are ignored.
"""
if self._abort:
return
self._lineno[stream_name] += 1
try:
print("(job {}, <{}:{:05}>) {}".format(
self._prompt, stream_name, self._lineno[stream_name],
line.decode('UTF-8').rstrip()))
except UnicodeDecodeError:
self._abort = True
class JobRunnerUIDelegate(extcmd.DelegateBase):
"""
Delegate for extcmd that delegates extcmd events to IJobRunnerUI.
The instance attribute 'ui' can be changed at any time. It can also be set
to None to silence all notifications from execution progress of external
programs.
"""
def __init__(self, ui=None):
"""
Initialize the JobRunnerUIDelegate.
:param ui:
(optional) an instance of IJobRunnerUI to delegate events to
"""
self.ui = ui
def on_begin(self, args, kwargs):
"""
Internal method of extcmd.DelegateBase.
Called when a command is being invoked
"""
if self.ui is not None:
self.ui.about_to_execute_program(args, kwargs)
def on_end(self, returncode):
"""
Internal method of extcmd.DelegateBase.
Called when a command finishes running
"""
if self.ui is not None:
self.ui.finished_executing_program(returncode)
def on_abnormal_end(self, signal_num):
"""
Internal method of extcmd.DelegateBase.
Called when a command abnormally finishes running
The negated signal number is used as the exit code of the program and
fed into the UI (if any)
"""
if self.ui is not None:
self.ui.finished_executing_program(-signal_num)
def on_line(self, stream_name, line):
"""
Internal method of extcmd.DelegateBase.
Called for each line of output.
"""
if self.ui is not None:
self.ui.got_program_output(stream_name, line)
def on_chunk(self, stream_name, chunk):
"""
Internal method of extcmd.DelegateBase.
Called for each chunk of output.
"""
if self.ui is not None:
self.ui.got_program_output(stream_name, chunk)
class JobRunner(IJobRunner):
"""
Runner for jobs - executes jobs and produces results.
The runner is somewhat de-coupled from jobs and session. It still carries
all checkbox-specific logic about the various types of plugins.
The runner consumes jobs and configuration objects and produces job result
objects. The runner can operate in dry-run mode; when enabled, most
jobs are never started and only jobs listed in ``_DRY_RUN_PLUGINS`` are
executed.
"""
# List of plugins that are still executed
_DRY_RUN_PLUGINS = ('local', 'resource', 'attachment')
def __init__(self, session_dir, provider_list, jobs_io_log_dir,
command_io_delegate=None, dry_run=False,
execution_ctrl_list=None):
"""
Initialize a new job runner.
:param session_dir:
Base directory of the session. This is currently used to initialize
execution controllers. Later on it will go away and callers will be
responsible for passing a list of execution controllers explicitly.
:param jobs_io_log_dir:
Base directory where IO log files are created.
:param command_io_delegate:
(deprecated) Application specific extcmd IO delegate applicable for
extcmd.ExternalCommandWithDelegate. Can be left out, in which case
:class:`FallbackCommandOutputPrinter` is used instead.
This argument is deprecated. Use the ``ui`` argument on
:meth:`run_job()` instead. Note that it has a different (but
equivalent) API.
:param dry_run:
Flag indicating that the runner is in "dry run mode". When True
most normal commands won't execute. Useful for testing.
:param execution_ctrl_list:
(optional) a list of execution controllers that may be used by this
runner. By default this should be left blank. This will cause all
execution controllers to be instantiated and used. In special cases
it may be required to override this.
"""
self._session_dir = session_dir
if execution_ctrl_list is None:
logger.debug("execution_ctrl_list not passed to JobRunner")
if sys.platform == 'linux' or sys.platform == 'linux2':
from plainbox.impl.ctrl import RootViaPkexecExecutionController
from plainbox.impl.ctrl import RootViaPTL1ExecutionController
from plainbox.impl.ctrl import RootViaSudoExecutionController
from plainbox.impl.ctrl import UserJobExecutionController
from plainbox.impl.ctrl import QmlJobExecutionController
execution_ctrl_list = [
RootViaPTL1ExecutionController(provider_list),
RootViaPkexecExecutionController(provider_list),
# XXX: maybe this one should be only used on command line
RootViaSudoExecutionController(provider_list),
UserJobExecutionController(provider_list),
QmlJobExecutionController(provider_list),
]
elif sys.platform == 'win32':
from plainbox.impl.ctrl import UserJobExecutionController
execution_ctrl_list = [
UserJobExecutionController(provider_list)
]
else:
logger.warning("Unsupported platform: %s", sys.platform)
execution_ctrl_list = []
self._jobs_io_log_dir = jobs_io_log_dir
# NOTE: deprecated
self._command_io_delegate = command_io_delegate
self._job_runner_ui_delegate = JobRunnerUIDelegate()
self._dry_run = dry_run
self._execution_ctrl_list = execution_ctrl_list
self._log_leftovers = True
@property
def log_leftovers(self):
"""
Flag controlling whether leftover files should be logged.
If you wish to connect a custom handler to :meth:`on_leftover_files()`
then it is advisable to set this property to False so that leftover
files are not handled twice.
By default, this property is True and a detailed warning is logged.
"""
return self._log_leftovers
@log_leftovers.setter
def log_leftovers(self, value):
"""setter for log_leftovers property."""
self._log_leftovers = value
def get_warm_up_sequence(self, job_list):
"""
Determine if authentication warm-up may be needed.
:param job_list:
A list of jobs that may be executed
:returns:
A list of methods to call to complete the warm-up step.
Authentication warm-up is related to the plainbox-trusted-launcher-1
program that can be 'warmed-up' to perhaps cache the security
credentials. This is usually done early in the testing process so that
we can prompt for passwords before doing anything that takes an
extended amount of time.
"""
warm_up_list = []
for job in job_list:
try:
ctrl = self._get_ctrl_for_job(job)
except LookupError:
continue
warm_up_func = ctrl.get_warm_up_for_job(job)
if warm_up_func is not None and warm_up_func not in warm_up_list:
warm_up_list.append(warm_up_func)
return warm_up_list
def run_job(self, job, job_state, config=None, ui=None):
"""
Run the specified job and return the result.
:param job:
A JobDefinition to run
:param job_state:
The JobState associated to the job to execute.
:param config:
A PlainBoxConfig that may influence how this job is executed. This
is only used for the environment variables (which should be
specified in the environment but, for simplicity in certain setups,
can be pulled from a special section of the configuration file).
:param ui:
An IJobRunnerUI object (optional) which will be used to relay
external process interaction events during the execution of this
job.
:returns:
An IJobResult subclass that describes the result.
:raises ValueError:
In the future, this method will not run jobs that don't themselves
validate correctly. Right now this is not enforced.
This method is the entry point for running all kinds of jobs. Typically
execution blocks while a command, embedded in many jobs, is running in
another process. How a job is executed depends mostly on the value of
the :attr:`plainbox.abc.IJobDefinition.plugin` field.
The result of a job may in some cases be OUTCOME_UNDECIDED, in which
case the application should ask the user what the outcome is (and
present sufficient information to make that choice, typically this is
the job description and the output of the command)
"""
# TRANSLATORS: %r is the name of the job
logger.info(_("Running %r"), job)
func_name = "run_{}_job".format(job.plugin.replace('-', '_'))
try:
runner = getattr(self, func_name)
except AttributeError:
return JobResultBuilder(
outcome=IJobResult.OUTCOME_NOT_IMPLEMENTED,
comments=_('This type of job is not supported')
).get_result()
else:
if self._dry_run and job.plugin not in self._DRY_RUN_PLUGINS:
return self._get_dry_run_result(job)
else:
self._job_runner_ui_delegate.ui = ui
try:
return runner(job, job_state, config)
finally:
self._job_runner_ui_delegate.ui = None
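run_job() turns the plugin name into a method name and dispatches via getattr(), falling back to a "not implemented" result when no matching method exists. The same pattern in isolation (the class, methods, and return strings here are illustrative, not the real JobRunner API):

```python
class MiniRunner:
    def run_shell_job(self, job):
        return "shell result"

    def run_user_interact_job(self, job):
        return "user-interact result"

    def run_job(self, job):
        # 'user-interact' -> 'run_user_interact_job', as in run_job() above
        func_name = "run_{}_job".format(job["plugin"].replace('-', '_'))
        try:
            runner = getattr(self, func_name)
        except AttributeError:
            return "not implemented"
        return runner(job)

runner = MiniRunner()
print(runner.run_job({"plugin": "shell"}))          # shell result
print(runner.run_job({"plugin": "user-interact"}))  # user-interact result
print(runner.run_job({"plugin": "qml"}))            # not implemented
```

Adding support for a new plugin type is then just a matter of defining another ``run_<plugin>_job`` method; no dispatch table needs updating.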
def run_shell_job(self, job, job_state, config):
"""
Method called to run a job with plugin field equal to 'shell'.
The 'shell' job implements the following scenario:
* Maybe display the description to the user
* The API states that :meth:`JobRunner.run_job()` should only be
called at this time.
* Run the command and wait for it to finish
* Decide on the outcome based on the return code
* The method ends here
.. note::
Shell jobs are an example of perfectly automated tests. Everything
about them is encapsulated inside the test command and the return
code from that command is enough to let plainbox know if the test
passed or not.
"""
if job.plugin != "shell":
# TRANSLATORS: please keep 'plugin' untranslated
raise ValueError(_("bad job plugin value"))
return self._just_run_command(job, job_state, config).get_result()
def run_attachment_job(self, job, job_state, config):
"""
Method called to run a job with plugin field equal to 'attachment'.
The 'attachment' job implements the following scenario:
* Maybe display the description to the user
* The API states that :meth:`JobRunner.run_job()` should only be
called at this time.
* Run the command and wait for it to finish
* Decide on the outcome based on the return code
* The method ends here
.. note::
Attachment jobs play an important role in CheckBox. They are used
to convert stdout of the command into a file that is embedded
inside the final representation of a testing session. Attachment
jobs are used to gather all kinds of essential information (by
catting log files, sysfs or procfs files)
"""
if job.plugin != "attachment":
# TRANSLATORS: please keep 'plugin' untranslated
raise ValueError(_("bad job plugin value"))
return self._just_run_command(job, job_state, config).get_result()
def run_resource_job(self, job, job_state, config):
"""
Method called to run a job with plugin field equal to 'resource'.
The 'resource' job implements the following scenario:
* Maybe display the description to the user
* The API states that :meth:`JobRunner.run_job()` should only be
called at this time.
* Run the command and wait for it to finish
* Decide on the outcome based on the return code
* The method ends here
.. note::
Resource jobs are similar to attachment, in that their goal is to
produce some text on standard output. Unlike attachment jobs they
are typically not added to the final representation of a testing
session. Instead the output is parsed and added to the internal
state of a testing session. This state can be queried from special
resource programs which are embedded in many job definitions.
"""
if job.plugin != "resource":
# TRANSLATORS: please keep 'plugin' untranslated
raise ValueError(_("bad job plugin value"))
return self._just_run_command(job, job_state, config).get_result()
def run_local_job(self, job, job_state, config):
"""
Method called to run a job with plugin field equal to 'local'.
The 'local' job implements the following scenario:
* Maybe display the description to the user
* The API states that :meth:`JobRunner.run_job()` should only be
called at this time.
* Run the command and wait for it to finish
* Decide on the outcome based on the return code
* The method ends here
.. note::
Local jobs are similar to resource jobs, in that the output matters
more than the return code. Unlike resource jobs and attachment
jobs, the output is expected to be a job definition in the
canonical RFC822 format. Local jobs are discouraged (due to some
complexities they introduce) but are the only supported way of
generating additional jobs at runtime.
"""
if job.plugin != "local":
# TRANSLATORS: please keep 'plugin' untranslated
raise ValueError(_("bad job plugin value"))
return self._just_run_command(job, job_state, config).get_result()
def run_manual_job(self, job, job_state, config):
"""
Method called to run a job with plugin field equal to 'manual'.
The 'manual' job implements the following scenario:
* Display the description to the user
* Ask the user to perform some operation
* Ask the user to decide on the outcome
.. note::
Technically this method almost always returns a result with
OUTCOME_UNDECIDED to indicate that it could not determine if the
test passed or not. Manual jobs are basically fully human driven
and could totally ignore the job runner. This method is provided
for completeness.
.. warning::
Before the interaction callback is fully removed and deprecated it
may also return other values through that callback.
"""
if job.plugin != "manual":
# TRANSLATORS: please keep 'plugin' untranslated
raise ValueError(_("bad job plugin value"))
return JobResultBuilder(
outcome=IJobResult.OUTCOME_UNDECIDED).get_result()
def run_user_interact_job(self, job, job_state, config):
"""
Method called to run a job with plugin field equal to 'user-interact'.
The 'user-interact' job implements the following scenario:
* Display the description to the user
* Ask the user to perform some operation
* Wait for the user to confirm this is done
* The API states that :meth:`JobRunner.run_job()` should only be
called at this time.
* Run the command and wait for it to finish
* Decide on the outcome based on the return code
* The method ends here
.. note::
User interaction jobs are candidates for further automation, as the
outcome can already be determined automatically, but some of the
interaction cannot be automated yet.
.. note::
User interaction jobs are a hybrid between shell jobs and manual
jobs. They finish automatically, once triggered but still require a
human to understand and follow test instructions and prepare the
process. Instructions may range from preparing a particular hardware
setup, through physical manipulation (pressing a key, closing the lid,
plugging in a removable device), to talking to a microphone to get
some sound recorded.
.. note::
The user may want to re-run the test a number of times, perhaps
because there is some infrequent glitch or simply because he or she
was distracted the first time it ran. Users should be given that
option but it must always produce a separate result (simply re-run
the same API again).
"""
if job.plugin != "user-interact":
# TRANSLATORS: please keep 'plugin' untranslated
raise ValueError(_("bad job plugin value"))
return self._just_run_command(job, job_state, config).get_result()
def run_user_verify_job(self, job, job_state, config):
"""
Method called to run a job with plugin field equal to 'user-verify'.
The 'user-verify' job implements the following scenario:
* Maybe display the description to the user
* The API states that :meth:`JobRunner.run_job()` should only be
called at this time.
* Run the command and wait for it to finish
* Display the description to the user
* Display the output of the command to the user
* Ask the user to decide on the outcome
.. note::
User verify jobs are a hybrid between shell jobs and manual jobs.
They start automatically but require a human to inspect the output
and decide on the outcome. This may include looking if the screen
looks okay after a number of resolution changes, if the picture
quality is good, if the printed IP address matches some
expectations or if the sound played from the speakers was
distorted.
.. note::
The user may want to re-run the test a number of times, perhaps
because there is some infrequent glitch or simply because he or she
was distracted the first time it ran. Users should be given that
option but it must always produce a separate result (simply re-run
the same API again).
.. note::
Technically this method almost always returns a result with
OUTCOME_UNDECIDED to indicate that it could not determine if the
test passed or not.
.. warning::
Before the interaction callback is fully removed and deprecated it
may also return other values through that callback.
"""
if job.plugin != "user-verify":
# TRANSLATORS: please keep 'plugin' untranslated
raise ValueError(_("bad job plugin value"))
# Run the command
result_builder = self._just_run_command(job, job_state, config)
# Maybe ask the user
result_builder.outcome = IJobResult.OUTCOME_UNDECIDED
return result_builder.get_result()
def run_user_interact_verify_job(self, job, job_state, config):
"""
Method for running jobs with plugin equal to 'user-interact-verify'.
The 'user-interact-verify' job implements the following scenario:
* Ask the user to perform some operation
* Wait for the user to confirm this is done
* The API states that :meth:`JobRunner.run_job()` should only be
called at this time.
* Run the command and wait for it to finish
* Display the description to the user
* Display the output of the command to the user
* Ask the user to decide on the outcome
.. note::
User interact-verify jobs are a hybrid between shell jobs and
manual jobs. They are both triggered explicitly by the user and
require the user to decide on the outcome. The only function of the
command they embed is to give some feedback to the user and perhaps
partially automate certain instructions (instead of asking the user
to run some command we can run that for them).
.. note::
The user may want to re-run the test a number of times, perhaps
because there is some infrequent glitch or simply because he or she
was distracted the first time it ran. Users should be given that
option but it must always produce a separate result (simply re-run
the same API again).
.. note::
Technically this method almost always returns a result with
OUTCOME_UNDECIDED to indicate that it could not determine if the
test passed or not.
.. warning::
Before the interaction callback is fully removed and deprecated it
may also return other values through that callback.
"""
if job.plugin != "user-interact-verify":
# TRANSLATORS: please keep 'plugin' untranslated
raise ValueError(_("bad job plugin value"))
# Run the command
result_builder = self._just_run_command(job, job_state, config)
# Maybe ask the user
result_builder.outcome = IJobResult.OUTCOME_UNDECIDED
return result_builder.get_result()
def run_qml_job(self, job, job_state, config):
"""
Method called to run a job with plugin field equal to 'qml'.
The 'qml' job implements the following scenario:
* Maybe display the description to the user
* Run qmlscene with provided test and wait for it to finish
* Decide on the outcome based on the result object returned by qml
shell
* The method ends here
.. note::
QML jobs are fully manual jobs with a graphical user interface
implemented in QML. They implement the proposal described in CEP-5.
"""
if job.plugin != "qml":
# TRANSLATORS: please keep 'plugin' untranslated
raise ValueError(_("bad job plugin value"))
try:
ctrl = self._get_ctrl_for_job(job)
except LookupError:
return JobResultBuilder(
outcome=IJobResult.OUTCOME_NOT_SUPPORTED,
comments=_('No suitable execution controller is available')
).get_result()
# Run the embedded command
start_time = time.time()
delegate, io_log_gen = self._prepare_io_handling(job, config)
# Create a subprocess.Popen() like object that uses the delegate
# system to observe all IO as it occurs in real time.
delegate_cls = self._get_delegate_cls(config)
extcmd_popen = delegate_cls(delegate)
# Stream all IOLogRecord entries to disk
record_path = self.get_record_path_for_job(job)
with gzip.open(record_path, mode='wb') as gzip_stream, \
io.TextIOWrapper(
gzip_stream, encoding='UTF-8') as record_stream:
writer = IOLogRecordWriter(record_stream)
io_log_gen.on_new_record.connect(writer.write_record)
try:
# Start the process and wait for it to finish getting the
# result code. This will actually call a number of callbacks
# while the process is running. It will also spawn a few
# threads although all callbacks will be fired from a single
# thread (which is _not_ the main thread)
logger.debug(
_("job[%s] starting qml shell: %s"), job.id, job.qml_file)
# Run the job command using extcmd
return_code = self._run_extcmd(job, job_state, config,
extcmd_popen, ctrl)
logger.debug(
_("job[%s] shell return code: %r"), job.id, return_code)
finally:
io_log_gen.on_new_record.disconnect(writer.write_record)
execution_duration = time.time() - start_time
# Convert the return code of the command to the outcome of the job
if return_code == 0:
outcome = IJobResult.OUTCOME_PASS
else:
outcome = IJobResult.OUTCOME_FAIL
# Create a result object and return it
return JobResultBuilder(
outcome=outcome,
return_code=return_code,
io_log_filename=record_path,
execution_duration=execution_duration
).get_result()
def get_record_path_for_job(self, job):
return os.path.join(self._jobs_io_log_dir,
"{}.record.gz".format(slugify(job.id)))
def _get_dry_run_result(self, job):
"""
Internal method of JobRunner.
Returns a result that is used when running in dry-run mode (where we
don't really test anything)
"""
return JobResultBuilder(
outcome=IJobResult.OUTCOME_SKIP,
comments=_("Job skipped in dry-run mode")
).get_result()
def _just_run_command(self, job, job_state, config):
"""
Internal method of JobRunner.
Runs the command embedded in the job and returns a JobResultBuilder
that describes the result.
"""
try:
ctrl = self._get_ctrl_for_job(job)
except LookupError:
return JobResultBuilder(
outcome=IJobResult.OUTCOME_NOT_SUPPORTED,
comments=_('No suitable execution controller is available'))
# Run the embedded command
start_time = time.time()
return_code, record_path = self._run_command(
job, job_state, config, ctrl)
execution_duration = time.time() - start_time
# Convert the return code of the command to the outcome of the job
if return_code == 0:
outcome = IJobResult.OUTCOME_PASS
elif return_code < 0:
outcome = IJobResult.OUTCOME_CRASH
else:
outcome = IJobResult.OUTCOME_FAIL
# Create a result object and return it
return JobResultBuilder(
outcome=outcome,
return_code=return_code,
io_log_filename=record_path,
execution_duration=execution_duration)
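The return-code to outcome mapping at the end of `_just_run_command()` can be sketched as a standalone function. The string values below stand in for the `IJobResult.OUTCOME_*` constants (an assumption made for illustration only).

```python
def outcome_from_return_code(return_code):
    """Map a child process exit status to a job outcome."""
    if return_code == 0:
        return "pass"
    elif return_code < 0:
        # On POSIX a negative value means the child was killed by a signal
        return "crash"
    else:
        return "fail"
```

Distinguishing signal-terminated processes (negative return codes) from ordinary failures is what lets crashed test commands be reported separately from failing ones.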
def _prepare_io_handling(self, job, config):
ui_io_delegate = self._command_io_delegate
# NOTE: deprecated
# If there is no UI delegate specified create a simple
# delegate that logs all output to the console
if ui_io_delegate is None:
ui_io_delegate = FallbackCommandOutputPrinter(job.id)
# Compute a shared base filename for all logging activity associated
# with this job (aka: the slug)
slug = slugify(job.id)
# Create a delegate that writes all IO to disk
output_writer = CommandOutputWriter(
stdout_path=os.path.join(
self._jobs_io_log_dir, "{}.stdout".format(slug)),
stderr_path=os.path.join(
self._jobs_io_log_dir, "{}.stderr".format(slug)))
# Create a delegate for converting regular IO to IOLogRecords.
# It takes no arguments as all the interesting stuff is added as a
# signal listener.
io_log_gen = IOLogRecordGenerator()
# FIXME: this description is probably inaccurate and definitely doesn't
# take self._job_runner_ui_delegate into account.
#
# Create the delegate for routing IO
#
# Split the stream of data into three parts (each part is expressed as
# an element of extcmd.Chain()).
#
# Send the first copy of the data through bytes->text decoder and
# then to the UI delegate. This could be something provided by the
# higher level caller or the default FallbackCommandOutputPrinter.
#
# Send the second copy of the data to the IOLogRecordGenerator instance
# that converts raw bytes into neat IOLogRecord objects. This generator
# has a on_new_record signal that can be used to do stuff when a new
# record is generated.
#
# Send the third copy to the output writer that writes everything to
# disk.
delegate = extcmd.Chain([self._job_runner_ui_delegate, ui_io_delegate,
io_log_gen, output_writer])
logger.debug(_("job[%s] extcmd delegate: %r"), job.id, delegate)
# Attach listeners to io_log_gen (the IOLogRecordGenerator instance)
# One listener appends each record to an array
return delegate, io_log_gen
def _run_command(self, job, job_state, config, ctrl):
"""
Run the shell command associated with the specified job.
:returns: (return_code, record_path) where return_code is the number
returned by the exiting child process while record_path is the
pathname of a gzipped file readable with :class:`IOLogRecordReader`
"""
# Bail early if there is nothing to do
if job.command is None:
raise ValueError(_("job {0} has no command to run").format(job.id))
# Get an extcmd delegate for observing all the IO the way we need
delegate, io_log_gen = self._prepare_io_handling(job, config)
# Create a subprocess.Popen() like object that uses the delegate
# system to observe all IO as it occurs in real time.
delegate_cls = self._get_delegate_cls(config)
flags = 0
# Use chunked IO for jobs that explicitly request this
if 'use-chunked-io' in job.get_flag_set():
flags |= extcmd.CHUNKED_IO
extcmd_popen = delegate_cls(delegate, flags=flags)
# Stream all IOLogRecord entries to disk
record_path = os.path.join(
self._jobs_io_log_dir, "{}.record.gz".format(
slugify(job.id)))
with gzip.open(record_path, mode='wb') as gzip_stream, \
io.TextIOWrapper(
gzip_stream, encoding='UTF-8') as record_stream:
writer = IOLogRecordWriter(record_stream)
io_log_gen.on_new_record.connect(writer.write_record)
try:
# Start the process and wait for it to finish getting the
# result code. This will actually call a number of callbacks
# while the process is running. It will also spawn a few
# threads although all callbacks will be fired from a single
# thread (which is _not_ the main thread)
logger.debug(
_("job[%s] starting command: %s"), job.id, job.command)
# Run the job command using extcmd
return_code = self._run_extcmd(
job, job_state, config, extcmd_popen, ctrl)
logger.debug(
_("job[%s] command return code: %r"), job.id, return_code)
finally:
io_log_gen.on_new_record.disconnect(writer.write_record)
return return_code, record_path
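The record-streaming pattern used by `_run_command()` — a gzip byte stream wrapped in a UTF-8 `TextIOWrapper`, one record per line — can be sketched minimally. JSON lines stand in for the `IOLogRecordWriter` format here; this is an assumption for illustration, as the real on-disk format is defined by plainbox.

```python
import gzip
import io
import json
import os
import tempfile


def write_records(path, records):
    """Stream records to a gzipped UTF-8 text file, one per line."""
    with gzip.open(path, mode='wb') as gzip_stream, \
            io.TextIOWrapper(gzip_stream, encoding='UTF-8') as stream:
        for record in records:
            stream.write(json.dumps(record) + '\n')


def read_records(path):
    """Read back every record written by write_records()."""
    with gzip.open(path, mode='rb') as gzip_stream, \
            io.TextIOWrapper(gzip_stream, encoding='UTF-8') as stream:
        return [json.loads(line) for line in stream]
```

Wrapping the binary gzip stream in a text wrapper keeps the writer code dealing purely in strings while the file on disk stays compressed.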
def _run_extcmd(self, job, job_state, config, extcmd_popen, ctrl):
ctrl.on_leftover_files.connect(self.on_leftover_files)
try:
return ctrl.execute_job(job, job_state, config, self._session_dir,
extcmd_popen)
finally:
ctrl.on_leftover_files.disconnect(self.on_leftover_files)
def _get_ctrl_for_job(self, job):
"""
Get the execution controller most applicable to run this job.
:param job:
A job definition to run
:returns:
An execution controller instance
:raises LookupError:
if no execution controller capable of running the specified job can
be found
"""
# Compute the score of each controller
ctrl_score = [
(ctrl, ctrl.get_score(job))
for ctrl in self._execution_ctrl_list]
# Sort scores
ctrl_score.sort(key=lambda pair: pair[1])
# Get the best score
ctrl, score = ctrl_score[-1]
# Ensure that the controller is viable
if score < 0:
raise LookupError(
_("No exec controller supports job {}").format(job))
logger.debug(
_("Selected execution controller %s (score %d) for job %r"),
ctrl.__class__.__name__, score, job.id)
return ctrl
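The score-based selection in `_get_ctrl_for_job()` can be illustrated with a small sketch: every controller reports a score for the job, the highest score wins, and a negative best score means no controller supports the job at all. The `(name, score)` pairs below are illustrative stand-ins for `(controller, controller.get_score(job))`.

```python
def select_best(ctrl_score_pairs):
    """Pick the highest-scoring controller, or raise LookupError."""
    # Sort ascending by score so the best candidate ends up last
    ctrl_score_pairs = sorted(ctrl_score_pairs, key=lambda pair: pair[1])
    ctrl, score = ctrl_score_pairs[-1]
    if score < 0:
        raise LookupError("no execution controller supports this job")
    return ctrl
```

Scoring (rather than a first-match scan) lets several controllers coexist, with more specialized ones outbidding the generic fallback when they apply.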
@morris.signal
def on_leftover_files(self, job, config, cwd_dir, leftovers):
"""
Handle any files left over by the execution of a job definition.
:param job:
job definition with the command and environment definitions
:param config:
configuration object (a PlainBoxConfig instance)
:param cwd_dir:
Temporary directory set as current working directory during job
definition command execution. During the time this signal is
emitted that directory still exists.
:param leftovers:
List of absolute pathnames of files and directories that were
created in the current working directory (cwd_dir).
.. note::
Anyone listening to this signal does not need to remove any of the
files. They are removed automatically after this method returns.
"""
if (self._log_leftovers and
'has-leftovers' not in job.get_flag_set()):
logger.warning(
_("Job {0} created leftover filesystem artefacts"
" in its working directory").format(job.id))
for item in leftovers:
logger.warning(_("Leftover file/directory: %r"),
os.path.relpath(item, cwd_dir))
logger.warning(
_("Please store desired files in $PLAINBOX_SESSION_SHARE and"
" use regular temporary files for everything else"))
def _get_delegate_cls(self, config):
if (sys.version_info[0:2] >= (3, 4) and sys.platform == 'linux'
and config.extcmd == "glibc"):
logger.debug("Using glibc-based command runner")
from plainbox.vendor.extcmd.glibc import (
GlibcExternalCommandWithDelegate)
return GlibcExternalCommandWithDelegate
else:
logger.debug("Using classic thread-based command runner")
return extcmd.ExternalCommandWithDelegate
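The gate in `_get_delegate_cls()` can be factored out for clarity: the glibc-based runner is only chosen on Linux with Python 3.4+ and an explicit `extcmd = "glibc"` configuration setting; anything else falls back to the classic thread-based runner. The function below is a sketch with the environment passed in as parameters rather than read from `sys` and `config`.

```python
def pick_runner(version_info, platform, extcmd_setting):
    """Decide which external-command runner variant to use."""
    if (version_info[0:2] >= (3, 4) and platform == 'linux'
            and extcmd_setting == 'glibc'):
        return 'glibc'
    return 'classic'
```

All three conditions must hold at once; failing any one of them silently selects the portable thread-based implementation.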
plainbox-0.25/plainbox/impl/_argparse.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
#
# Parts copied from Python3.3.1:
# Steven J. Bethard .
#
# PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
# --------------------------------------------
#
# 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"),
# and the Individual or Organization ("Licensee") accessing and otherwise
# using this software ("Python") in source or binary form and its associated
# documentation.
#
# 2. Subject to the terms and conditions of this License Agreement, PSF hereby
# grants Licensee a nonexclusive, royalty-free, world-wide license to
# reproduce, analyze, test, perform and/or display publicly, prepare
# derivative works, distribute, and otherwise use Python alone or in any
# derivative version, provided, however, that PSF's License Agreement and
# PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004,
# 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014 Python Software
# Foundation; All Rights Reserved" are retained in Python alone or in any
# derivative version prepared by Licensee.
#
# 3. In the event Licensee prepares a derivative work that is based on or
# incorporates Python or any part thereof, and wants to make the derivative
# work available to others as provided herein, then Licensee hereby agrees
# to include in any such work a brief summary of the changes made to Python.
#
# 4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES
# NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE,
# BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR
# WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT
# THE USE OF PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
#
# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY
# INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
# MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE
# THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
#
# 6. This License Agreement will automatically terminate upon a material breach
# of its terms and conditions.
#
# 7. Nothing in this License Agreement shall be deemed to create any
# relationship of agency, partnership, or joint venture between PSF and
# Licensee. This License Agreement does not grant permission to use PSF
# trademarks or trade name in a trademark sense to endorse or promote
# products or services of Licensee, or any third party.
#
# 8. By copying, installing or otherwise using Python, Licensee agrees to be
# bound by the terms and conditions of this License Agreement.
"""
:mod:`plainbox.impl._argparse` -- support code for argparse compatibility
=========================================================================
This module contains a copy of argparse source code from python3.3.1. It is
required for compatibility as argparse keeps having subtle changes in behavior
across releases.
"""
import argparse
class LegacyHelpFormatter(argparse.HelpFormatter):
"""
Vanilla copy of argparse.HelpFormatter from python 3.3.1
This class retains the behavior of argparse as seen on that version of
python. This is done for compatibility and for perfectly identical output
of PlainBox on various versions of python 3.x.
Investigation after a rather odd test failure led to this diff::
--- raring/argparse.py 2014-01-28 18:52:35.789316074 +0100
+++ trusty/argparse.py 2014-01-28 19:11:19.121282883 +0100
@@ -174,6 +174,8 @@
self._prog = prog
self._indent_increment = indent_increment
self._max_help_position = max_help_position
+ self._max_help_position = min(max_help_position,
+ max(width - 20, indent_increment * 2))
self._width = width
self._current_indent = 0
@@ -345,7 +347,7 @@
else:
line_len = len(indent) - 1
for part in parts:
- if line_len + 1 + len(part) > text_width:
+ if line_len + 1 + len(part) > text_width and line:
lines.append(indent + ' '.join(line))
line = []
line_len = len(indent) - 1
@@ -485,7 +487,7 @@
def _format_text(self, text):
if '%(prog)' in text:
text = text % dict(prog=self._prog)
- text_width = self._width - self._current_indent
+ text_width = max(self._width - self._current_indent, 11)
indent = ' ' * self._current_indent
return self._fill_text(text, text_width, indent) + '\n\n'
@@ -493,7 +495,7 @@
# determine the required width and the entry label
help_position = min(self._action_max_length + 2,
self._max_help_position)
- help_width = self._width - help_position
+ help_width = max(self._width - help_position, 11)
action_width = help_position - self._current_indent - 2
action_header = self._format_action_invocation(action)
The relevant part is the second change, involving the addition of ``and line``.
It causes a line not to be printed where it otherwise would be. Since this is
a minor visual change we chose to retain the current behavior.
In the future, especially when python3.4 is the base version and older
versions are not supported, a reverse patch might be applied and held here,
to provide the non-legacy behavior.
"""
def _format_usage(self, usage, actions, groups, prefix):
if prefix is None:
prefix = argparse._('usage: ')
# if usage is specified, use that
if usage is not None:
usage = usage % dict(prog=self._prog)
# if no optionals or positionals are available, usage is just prog
elif usage is None and not actions:
usage = '%(prog)s' % dict(prog=self._prog)
# if optionals and positionals are available, calculate usage
elif usage is None:
prog = '%(prog)s' % dict(prog=self._prog)
# split optionals from positionals
optionals = []
positionals = []
for action in actions:
if action.option_strings:
optionals.append(action)
else:
positionals.append(action)
# build full usage string
format = self._format_actions_usage
action_usage = format(optionals + positionals, groups)
usage = ' '.join([s for s in [prog, action_usage] if s])
# wrap the usage parts if it's too long
text_width = self._width - self._current_indent
if len(prefix) + len(usage) > text_width:
# break usage into wrappable parts
part_regexp = r'\(.*?\)+|\[.*?\]+|\S+'
opt_usage = format(optionals, groups)
pos_usage = format(positionals, groups)
opt_parts = argparse._re.findall(part_regexp, opt_usage)
pos_parts = argparse._re.findall(part_regexp, pos_usage)
assert ' '.join(opt_parts) == opt_usage
assert ' '.join(pos_parts) == pos_usage
# helper for wrapping lines
def get_lines(parts, indent, prefix=None):
lines = []
line = []
if prefix is not None:
line_len = len(prefix) - 1
else:
line_len = len(indent) - 1
for part in parts:
if line_len + 1 + len(part) > text_width:
lines.append(indent + ' '.join(line))
line = []
line_len = len(indent) - 1
line.append(part)
line_len += len(part) + 1
if line:
lines.append(indent + ' '.join(line))
if prefix is not None:
lines[0] = lines[0][len(indent):]
return lines
# if prog is short, follow it with optionals or positionals
if len(prefix) + len(prog) <= 0.75 * text_width:
indent = ' ' * (len(prefix) + len(prog) + 1)
if opt_parts:
lines = get_lines([prog] + opt_parts, indent, prefix)
lines.extend(get_lines(pos_parts, indent))
elif pos_parts:
lines = get_lines([prog] + pos_parts, indent, prefix)
else:
lines = [prog]
# if prog is long, put it on its own line
else:
indent = ' ' * len(prefix)
parts = opt_parts + pos_parts
lines = get_lines(parts, indent)
if len(lines) > 1:
lines = []
lines.extend(get_lines(opt_parts, indent))
lines.extend(get_lines(pos_parts, indent))
lines = [prog] + lines
# join lines into usage
usage = '\n'.join(lines)
# prefix with 'usage:'
return '%s%s\n\n' % (prefix, usage)
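The wrapping behavior of the nested `get_lines()` helper can be exercised on its own. Below is a standalone, runnable copy with `text_width` passed in explicitly instead of captured from the enclosing scope. This is the pre-3.4 variant: it lacks the `and line` guard discussed in the class docstring, so a part longer than `text_width` still flushes the (possibly empty) current line first.

```python
def get_lines(parts, indent, text_width, prefix=None):
    """Wrap usage parts into lines no wider than text_width."""
    lines = []
    line = []
    if prefix is not None:
        line_len = len(prefix) - 1
    else:
        line_len = len(indent) - 1
    for part in parts:
        # Legacy behavior: no "and line" guard, so an oversized first
        # part still emits an (indent-only) leading line
        if line_len + 1 + len(part) > text_width:
            lines.append(indent + ' '.join(line))
            line = []
            line_len = len(indent) - 1
        line.append(part)
        line_len += len(part) + 1
    if line:
        lines.append(indent + ' '.join(line))
    if prefix is not None:
        lines[0] = lines[0][len(indent):]
    return lines
```

The whitespace-only leading line is exactly the "minor visual change" the class docstring chose to retain for output stability across Python versions.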
plainbox-0.25/plainbox/impl/secure/test_qualifiers.py
# This file is part of Checkbox.
#
# Copyright 2013, 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.secure.test_qualifiers
====================================
Test definitions for plainbox.impl.secure.qualifiers module
"""
from contextlib import contextmanager
from io import TextIOWrapper
from itertools import permutations
from unittest import TestCase
import operator
from plainbox.abc import IJobQualifier
from plainbox.impl.job import JobDefinition
from plainbox.impl.secure.origin import FileTextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.origin import UnknownTextSource
from plainbox.impl.secure.qualifiers import CompositeQualifier
from plainbox.impl.secure.qualifiers import FieldQualifier
from plainbox.impl.secure.qualifiers import IMatcher
from plainbox.impl.secure.qualifiers import JobIdQualifier
from plainbox.impl.secure.qualifiers import NonLocalJobQualifier
from plainbox.impl.secure.qualifiers import NonPrimitiveQualifierOrigin
from plainbox.impl.secure.qualifiers import OperatorMatcher
from plainbox.impl.secure.qualifiers import PatternMatcher
from plainbox.impl.secure.qualifiers import RegExpJobQualifier
from plainbox.impl.secure.qualifiers import select_jobs
from plainbox.impl.secure.qualifiers import SimpleQualifier
from plainbox.impl.secure.qualifiers import WhiteList
from plainbox.impl.testing_utils import make_job
from plainbox.vendor import mock
class IJobQualifierTests(TestCase):
"""
Test cases for IJobQualifier interface
"""
def test_IJobQualifier_is_abstract(self):
"""
Verify that IJobQualifier is an interface and cannot be
instantiated
"""
self.assertRaises(TypeError, IJobQualifier)
class DummySimpleQualifier(SimpleQualifier):
"""
Dummy concrete subclass of SimpleQualifier
"""
def get_simple_match(self, job):
raise NotImplementedError() # pragma: no cover
class SimpleQualifierTests(TestCase):
"""
Test cases for SimpleQualifier class
"""
def setUp(self):
self.origin = mock.Mock(name='origin', spec_set=Origin)
self.obj = DummySimpleQualifier(self.origin)
self.job = JobDefinition({'id': "dummy"})
def test_init(self):
"""
verify that SimpleQualifier has a working initializer that sets the
inclusive flag
"""
obj1 = DummySimpleQualifier(self.origin)
self.assertEqual(obj1.origin, self.origin)
self.assertEqual(obj1.inclusive, True)
obj2 = DummySimpleQualifier(self.origin, False)
self.assertEqual(obj2.origin, self.origin)
self.assertEqual(obj2.inclusive, False)
obj3 = DummySimpleQualifier(self.origin, inclusive=False)
self.assertEqual(obj3.origin, self.origin)
self.assertEqual(obj3.inclusive, False)
def test_is_primitive(self):
"""
verify that SimpleQualifier.is_primitive is True
"""
self.assertTrue(self.obj.is_primitive)
def test_designates(self):
"""
verify that SimpleQualifier.designates returns True iff get_vote() for
the same job returns VOTE_INCLUDE.
"""
with mock.patch.object(self.obj, 'get_vote') as mock_get_vote:
mock_get_vote.return_value = IJobQualifier.VOTE_INCLUDE
self.assertTrue(self.obj.designates(self.job))
mock_get_vote.return_value = IJobQualifier.VOTE_EXCLUDE
self.assertFalse(self.obj.designates(self.job))
mock_get_vote.return_value = IJobQualifier.VOTE_IGNORE
self.assertFalse(self.obj.designates(self.job))
def test_get_vote__inclusive_matching(self):
"""
verify that SimpleQualifier.get_vote() returns VOTE_INCLUDE for
inclusive qualifier that matches a job
"""
obj = DummySimpleQualifier(self.origin, inclusive=True)
with mock.patch.object(obj, 'get_simple_match') as mock_gsm:
mock_gsm.return_value = True
self.assertEqual(obj.get_vote(self.job),
IJobQualifier.VOTE_INCLUDE)
def test_get_vote__not_inclusive_matching(self):
"""
verify that SimpleQualifier.get_vote() returns VOTE_EXCLUDE for
non-inclusive qualifier that matches a job
"""
obj = DummySimpleQualifier(self.origin, inclusive=False)
with mock.patch.object(obj, 'get_simple_match') as mock_gsm:
mock_gsm.return_value = True
self.assertEqual(obj.get_vote(self.job),
IJobQualifier.VOTE_EXCLUDE)
def test_get_vote__inclusive_nonmatching(self):
"""
verify that SimpleQualifier.get_vote() returns VOTE_IGNORE for
inclusive qualifier that does not match a job
"""
obj = DummySimpleQualifier(self.origin, inclusive=True)
with mock.patch.object(obj, 'get_simple_match') as mock_gsm:
mock_gsm.return_value = False
self.assertEqual(obj.get_vote(self.job), IJobQualifier.VOTE_IGNORE)
def test_get_vote__not_inclusive_nonmatching(self):
"""
verify that SimpleQualifier.get_vote() returns VOTE_IGNORE for
non-inclusive qualifier that does not match a job
"""
obj = DummySimpleQualifier(self.origin, inclusive=False)
with mock.patch.object(obj, 'get_simple_match') as mock_gsm:
mock_gsm.return_value = False
self.assertEqual(obj.get_vote(self.job), IJobQualifier.VOTE_IGNORE)
def test_get_primitive_qualifiers(self):
"""
verify that SimpleQualifier.get_primitive_qualifiers() returns a list
with itself
"""
return self.assertEqual(
self.obj.get_primitive_qualifiers(), [self.obj])
class OperatorMatcherTests(TestCase):
"""
Test cases for OperatorMatcher class
"""
def test_match(self):
matcher = OperatorMatcher(operator.eq, "foo")
self.assertTrue(matcher.match("foo"))
self.assertFalse(matcher.match("bar"))
def test_repr(self):
self.assertEqual(
repr(OperatorMatcher(operator.eq, "foo")),
"OperatorMatcher(<built-in function eq>, 'foo')")
class PatternMatcherTests(TestCase):
"""
Test cases for PatternMatcher class
"""
def test_match(self):
matcher = PatternMatcher("foo.*")
self.assertTrue(matcher.match("foobar"))
self.assertFalse(matcher.match("fo"))
def test_repr(self):
self.assertEqual(
repr(PatternMatcher("text")), "PatternMatcher('text')")
class FieldQualifierTests(TestCase):
"""
Test cases for FieldQualifier class
"""
_FIELD = "field"
def setUp(self):
self.matcher = mock.Mock(name='matcher', spec_set=IMatcher)
self.origin = mock.Mock(name='origin', spec_set=Origin)
self.qualifier_i = FieldQualifier(
self._FIELD, self.matcher, self.origin, True)
self.qualifier_e = FieldQualifier(
self._FIELD, self.matcher, self.origin, False)
def test_init(self):
"""
verify that FieldQualifier sets all of the properties correctly
"""
self.assertEqual(self.qualifier_i.field, self._FIELD)
self.assertEqual(self.qualifier_i.matcher, self.matcher)
self.assertEqual(self.qualifier_i.origin, self.origin)
self.assertEqual(self.qualifier_i.inclusive, True)
def test_is_primitive(self):
"""
verify that FieldQualifier.is_primitive is True
"""
self.assertTrue(self.qualifier_i.is_primitive)
self.assertTrue(self.qualifier_e.is_primitive)
def test_repr(self):
"""
verify that FieldQualifier.__repr__() works as expected
"""
self.assertEqual(
repr(self.qualifier_i),
"FieldQualifier({!r}, {!r}, inclusive=True)".format(
self._FIELD, self.matcher))
self.assertEqual(
repr(self.qualifier_e),
"FieldQualifier({!r}, {!r}, inclusive=False)".format(
self._FIELD, self.matcher))
def test_get_simple_match(self):
"""
verify that FieldQualifier.get_simple_match() works as expected
"""
job = mock.Mock()
for qualifier in (self.qualifier_i, self.qualifier_e):
self.matcher.reset_mock()
result = qualifier.get_simple_match(job)
self.matcher.match.assert_called_once_with(
getattr(job, self._FIELD))
self.assertEqual(result, self.matcher.match())
class RegExpJobQualifierTests(TestCase):
"""
Test cases for RegExpJobQualifier class
"""
def setUp(self):
self.origin = mock.Mock(name='origin', spec_set=Origin)
self.qualifier = RegExpJobQualifier("f.*", self.origin)
def test_init(self):
"""
verify that init assigns stuff to properties correctly
"""
self.assertEqual(self.qualifier.pattern_text, "f.*")
self.assertEqual(self.qualifier.origin, self.origin)
def test_is_primitive(self):
"""
verify that RegExpJobQualifier.is_primitive is True
"""
self.assertTrue(self.qualifier.is_primitive)
def test_pattern_text(self):
"""
verify that RegExpJobQualifier.pattern_text returns
the full text of the pattern
"""
self.assertEqual(self.qualifier.pattern_text, "f.*")
def test_repr(self):
"""
verify that RegExpJobQualifier.__repr__() works as expected
"""
self.assertEqual(
repr(self.qualifier), "RegExpJobQualifier('f.*', inclusive=True)")
def test_get_vote(self):
"""
verify that RegExpJobQualifier.get_vote() works as expected
"""
self.assertEqual(
RegExpJobQualifier("foo", self.origin).get_vote(
JobDefinition({'id': 'foo'})),
IJobQualifier.VOTE_INCLUDE)
self.assertEqual(
RegExpJobQualifier("foo", self.origin, inclusive=False).get_vote(
JobDefinition({'id': 'foo'})),
IJobQualifier.VOTE_EXCLUDE)
self.assertEqual(
RegExpJobQualifier("foo", self.origin).get_vote(
JobDefinition({'id': 'bar'})),
IJobQualifier.VOTE_IGNORE)
self.assertEqual(
RegExpJobQualifier("foo", self.origin, inclusive=False).get_vote(
JobDefinition({'id': 'bar'})),
IJobQualifier.VOTE_IGNORE)
class JobIdQualifierTests(TestCase):
"""
Test cases for JobIdQualifier class
"""
def setUp(self):
self.origin = mock.Mock(name='origin', spec_set=Origin)
self.qualifier = JobIdQualifier("foo", self.origin)
def test_init(self):
"""
verify that init assigns stuff to properties correctly
"""
self.assertEqual(self.qualifier.id, "foo")
self.assertEqual(self.qualifier.origin, self.origin)
def test_is_primitive(self):
"""
verify that JobIdQualifier.is_primitive is True
"""
self.assertTrue(self.qualifier.is_primitive)
def test_repr(self):
"""
verify that JobIdQualifier.__repr__() works as expected
"""
self.assertEqual(
repr(self.qualifier), "JobIdQualifier('foo', inclusive=True)")
def test_get_vote(self):
"""
verify that JobIdQualifier.get_vote() works as expected
"""
self.assertEqual(
JobIdQualifier("foo", self.origin).get_vote(
JobDefinition({'id': 'foo'})),
IJobQualifier.VOTE_INCLUDE)
self.assertEqual(
JobIdQualifier("foo", self.origin, inclusive=False).get_vote(
JobDefinition({'id': 'foo'})),
IJobQualifier.VOTE_EXCLUDE)
self.assertEqual(
JobIdQualifier("foo", self.origin).get_vote(
JobDefinition({'id': 'bar'})),
IJobQualifier.VOTE_IGNORE)
self.assertEqual(
JobIdQualifier("foo", self.origin, inclusive=False).get_vote(
JobDefinition({'id': 'bar'})),
IJobQualifier.VOTE_IGNORE)
def test_smoke(self):
"""
various smoke tests that check if JobIdQualifier.designates() works
"""
self.assertTrue(
JobIdQualifier('name', self.origin).designates(make_job('name')))
self.assertFalse(
JobIdQualifier('nam', self.origin).designates(make_job('name')))
self.assertFalse(
JobIdQualifier('.*', self.origin).designates(make_job('name')))
self.assertFalse(
JobIdQualifier('*', self.origin).designates(make_job('name')))
class NonLocalJobQualifierTests(TestCase):
"""
Test cases for NonLocalJobQualifier class
"""
def setUp(self):
self.origin = mock.Mock(name='origin', spec_set=Origin)
self.qualifier = NonLocalJobQualifier(self.origin)
def test_init(self):
"""
verify that init assigns stuff to properties correctly
"""
self.assertEqual(self.qualifier.origin, self.origin)
def test_is_primitive(self):
"""
verify that NonLocalJobQualifier.is_primitive is True
"""
self.assertTrue(self.qualifier.is_primitive)
def test_repr(self):
"""
verify that NonLocalJobQualifier.__repr__() works as expected
"""
self.assertEqual(
repr(self.qualifier), "NonLocalJobQualifier(inclusive=True)")
def test_get_vote(self):
"""
verify that NonLocalJobQualifier.get_vote() works as expected
"""
self.assertEqual(
NonLocalJobQualifier(self.origin).get_vote(
JobDefinition({'name': 'foo', 'plugin': 'shell'})),
IJobQualifier.VOTE_INCLUDE)
self.assertEqual(
NonLocalJobQualifier(self.origin, inclusive=False).get_vote(
JobDefinition({'name': 'foo', 'plugin': 'shell'})),
IJobQualifier.VOTE_EXCLUDE)
self.assertEqual(
NonLocalJobQualifier(self.origin).get_vote(
JobDefinition({'name': 'bar', 'plugin': 'local'})),
IJobQualifier.VOTE_IGNORE)
self.assertEqual(
NonLocalJobQualifier(self.origin, inclusive=False).get_vote(
JobDefinition({'name': 'bar', 'plugin': 'local'})),
IJobQualifier.VOTE_IGNORE)
class CompositeQualifierTests(TestCase):
"""
Test cases for CompositeQualifier class
"""
def setUp(self):
self.origin = mock.Mock(name='origin', spec_set=Origin)
def test_empty(self):
"""
verify that an empty CompositeQualifier does not designate a random job
"""
obj = CompositeQualifier([])
self.assertFalse(obj.designates(make_job("foo")))
def test_get_vote(self):
"""
verify how CompositeQualifier.get_vote() behaves in various situations
"""
# Default is IGNORE
self.assertEqual(
CompositeQualifier([]).get_vote(make_job("foo")),
IJobQualifier.VOTE_IGNORE)
# Any match is INCLUDE
self.assertEqual(
CompositeQualifier([
RegExpJobQualifier("foo", self.origin),
]).get_vote(make_job("foo")),
IJobQualifier.VOTE_INCLUDE)
# Any negative match is EXCLUDE
self.assertEqual(
CompositeQualifier([
RegExpJobQualifier("foo", self.origin, inclusive=False),
]).get_vote(make_job("foo")),
IJobQualifier.VOTE_EXCLUDE)
# Negative matches take precedence over positive matches
self.assertEqual(
CompositeQualifier([
RegExpJobQualifier("foo", self.origin),
RegExpJobQualifier("foo", self.origin, inclusive=False),
]).get_vote(make_job("foo")),
IJobQualifier.VOTE_EXCLUDE)
# Unrelated patterns do not affect the result
self.assertEqual(
CompositeQualifier([
RegExpJobQualifier("foo", self.origin),
RegExpJobQualifier("bar", self.origin),
]).get_vote(make_job("foo")),
IJobQualifier.VOTE_INCLUDE)
def test_inclusive(self):
"""
verify that inclusive selection works
"""
self.assertTrue(
CompositeQualifier([
RegExpJobQualifier('foo', self.origin),
]).designates(make_job("foo")))
self.assertFalse(
CompositeQualifier([
RegExpJobQualifier('foo', self.origin),
]).designates(make_job("bar")))
def test_exclusive(self):
"""
verify that non-inclusive selection works
"""
self.assertFalse(
CompositeQualifier([
RegExpJobQualifier('foo', self.origin, inclusive=False)
]).designates(make_job("foo")))
self.assertFalse(
CompositeQualifier([
RegExpJobQualifier(".*", self.origin),
RegExpJobQualifier('foo', self.origin, inclusive=False)
]).designates(make_job("foo")))
self.assertTrue(
CompositeQualifier([
RegExpJobQualifier(".*", self.origin),
RegExpJobQualifier('foo', self.origin, inclusive=False)
]).designates(make_job("bar")))
def test_is_primitive(self):
"""
verify that CompositeQualifier.is_primitive is False
"""
self.assertFalse(CompositeQualifier([]).is_primitive)
def test_get_primitive_qualifiers(self):
"""
verify that CompositeQualifier.get_primitive_qualifiers() works
"""
# given three qualifiers
q1 = JobIdQualifier("q1", self.origin)
q2 = JobIdQualifier("q2", self.origin)
q3 = JobIdQualifier("q3", self.origin)
# we expect to see them flattened
expected = [q1, q2, q3]
# from a nested structure like this
measured = CompositeQualifier([
CompositeQualifier([q1, q2]), q3]
).get_primitive_qualifiers()
self.assertEqual(expected, measured)
def test_origin(self):
with self.assertRaises(NonPrimitiveQualifierOrigin):
CompositeQualifier([]).origin
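The precedence rules exercised by ``test_get_vote()`` above can be restated in a tiny standalone sketch. The function name and string vote values below are hypothetical stand-ins, not the real ``IJobQualifier`` constants:

```python
def composite_vote(votes):
    # Hypothetical re-statement of CompositeQualifier voting precedence:
    # any EXCLUDE vote wins, otherwise any INCLUDE vote wins, and an
    # empty (or all-IGNORE) vote list yields IGNORE.
    if "exclude" in votes:
        return "exclude"
    if "include" in votes:
        return "include"
    return "ignore"

print(composite_vote(["include", "exclude"]))  # exclude
```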
class WhiteListTests(TestCase):
"""
Test cases for WhiteList class
"""
_name = 'whitelist.txt'
_content = [
"# this is a comment",
"foo # this is another comment",
"bar",
""
]
@contextmanager
def mocked_file(self, name, content):
m_open = mock.MagicMock(name='open', spec=open)
m_stream = mock.MagicMock(spec=TextIOWrapper)
m_stream.__enter__.return_value = m_stream
# The next two lines are complementary, either will suffice but the
# test may need changes if the code that reads stuff changes.
m_stream.__iter__.side_effect = lambda: iter(content)
m_stream.read.return_value = "\n".join(content)
m_open.return_value = m_stream
with mock.patch('plainbox.impl.secure.qualifiers.open', m_open,
create=True):
yield
m_open.assert_called_once_with(name, "rt", encoding="UTF-8")
def test_load_patterns(self):
with self.mocked_file(self._name, self._content):
pattern_list, max_lineno = WhiteList._load_patterns(self._name)
self.assertEqual(pattern_list, ['^foo$', '^bar$'])
self.assertEqual(max_lineno, 3)
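The comment and blank-line handling checked above can be sketched standalone. ``load_patterns`` below is a hypothetical re-implementation for illustration, not the actual ``WhiteList._load_patterns()``:

```python
def load_patterns(lines):
    # Strip '#' comments and surrounding whitespace, skip blank lines,
    # and anchor each surviving entry as an exact-match regex; also track
    # the last line number that contributed a pattern.
    pattern_list = []
    max_lineno = 0
    for lineno, line in enumerate(lines, 1):
        text = line.split("#", 1)[0].strip()
        if text:
            pattern_list.append("^{}$".format(text))
            max_lineno = lineno
    return pattern_list, max_lineno

content = ["# this is a comment", "foo # this is another comment", "bar", ""]
```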
def test_designates(self):
"""
verify that WhiteList.designates() works
"""
self.assertTrue(
WhiteList.from_string("foo").designates(make_job('foo')))
self.assertTrue(
WhiteList.from_string("foo\nbar\n").designates(make_job('foo')))
self.assertTrue(
WhiteList.from_string("foo\nbar\n").designates(make_job('bar')))
# Note: patterns are anchored, so a substring match doesn't count either
self.assertFalse(
WhiteList.from_string("foo").designates(make_job('foobar')))
self.assertFalse(
WhiteList.from_string("bar").designates(make_job('foobar')))
def test_from_file(self):
"""
verify that WhiteList.from_file() works
"""
with self.mocked_file(self._name, self._content):
whitelist = WhiteList.from_file(self._name)
# verify that the patterns are okay
self.assertEqual(
repr(whitelist.qualifier_list[0]),
"RegExpJobQualifier('^foo$', inclusive=True)")
# verify that whitelist name got set
self.assertEqual(whitelist.name, "whitelist")
# verify that the origin got set
self.assertEqual(
whitelist.origin,
Origin(FileTextSource("whitelist.txt"), 1, 3))
def test_from_string(self):
"""
verify that WhiteList.from_string() works
"""
whitelist = WhiteList.from_string("\n".join(self._content))
# verify that the patterns are okay
self.assertEqual(
repr(whitelist.qualifier_list[0]),
"RegExpJobQualifier('^foo$', inclusive=True)")
# verify that the whitelist name defaults to None
self.assertEqual(whitelist.name, None)
# verify that the origin got set to the default constructed value
self.assertEqual(whitelist.origin, Origin(UnknownTextSource(), 1, 3))
def test_from_empty_string(self):
"""
verify that WhiteList.from_string("") works
"""
WhiteList.from_string("")
def test_from_string__with_name_and_origin(self):
"""
verify that WhiteList.from_string() works when passing name and origin
"""
# construct a whitelist with some dummy data, the names, pathnames and
# line ranges are arbitrary
whitelist = WhiteList.from_string(
"\n".join(self._content), name="somefile",
origin=Origin(FileTextSource("somefile.txt"), 1, 3))
# verify that the patterns are okay
self.assertEqual(
repr(whitelist.qualifier_list[0]),
"RegExpJobQualifier('^foo$', inclusive=True)")
# verify that whitelist name is copied
self.assertEqual(whitelist.name, "somefile")
# verify that the origin is copied
self.assertEqual(
whitelist.origin, Origin(FileTextSource("somefile.txt"), 1, 3))
def test_from_string__with_filename(self):
"""
verify that WhiteList.from_string() works when passing filename
"""
# construct a whitelist with some dummy data, the names, pathnames and
# line ranges are arbitrary
whitelist = WhiteList.from_string(
"\n".join(self._content), filename="somefile.txt")
# verify that the patterns are okay
self.assertEqual(
repr(whitelist.qualifier_list[0]),
"RegExpJobQualifier('^foo$', inclusive=True)")
# verify that whitelist name is derived from the filename
self.assertEqual(whitelist.name, "somefile")
# verify that the origin is properly derived from the filename
self.assertEqual(
whitelist.origin, Origin(FileTextSource("somefile.txt"), 1, 3))
def test_repr(self):
"""
verify that custom repr works
"""
whitelist = WhiteList([], name="test")
self.assertEqual(repr(whitelist), "<WhiteList name:'test'>")
def test_name_getter(self):
"""
verify that WhiteList.name getter works
"""
self.assertEqual(WhiteList([], "foo").name, "foo")
def test_name_setter(self):
"""
verify that WhiteList.name setter works
"""
whitelist = WhiteList([], "foo")
whitelist.name = "bar"
self.assertEqual(whitelist.name, "bar")
def test_name_from_filename(self):
"""
verify how name_from_filename() works
"""
self.assertEqual(
WhiteList.name_from_filename("some/path/foo.whitelist"), "foo")
self.assertEqual(WhiteList.name_from_filename("foo.whitelist"), "foo")
self.assertEqual(WhiteList.name_from_filename("foo."), "foo")
self.assertEqual(WhiteList.name_from_filename("foo"), "foo")
self.assertEqual(
WhiteList.name_from_filename("foo.notawhitelist"), "foo")
def test_namespace_behavior(self):
"""
verify that WhiteList() correctly respects namespace declarations
and uses implicit_namespace to fully qualify all patterns
"""
whitelist = WhiteList.from_string(
"foo\n"
"2014\\.example\\.org::bar\n",
implicit_namespace="2014.other.example.org")
# verify that the implicit namespace was recorded
self.assertEqual(
whitelist.implicit_namespace, "2014.other.example.org")
# verify that the patterns are okay
self.assertEqual(
whitelist.qualifier_list[0].pattern_text,
"^2014\\.other\\.example\\.org::foo$")
self.assertEqual(
whitelist.qualifier_list[1].pattern_text,
"^2014\\.example\\.org::bar$")
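The namespace qualification asserted above can be illustrated with a self-contained sketch. ``qualify_pattern`` is a hypothetical re-implementation of the observed behavior, not the real WhiteList code:

```python
def qualify_pattern(pattern, implicit_namespace):
    # Patterns that already carry a 'namespace::' prefix are only
    # anchored; bare patterns get the implicit namespace (with its dots
    # escaped) prepended first.
    if "::" in pattern:
        return "^{}$".format(pattern)
    escaped_ns = implicit_namespace.replace(".", "\\.")
    return "^{}::{}$".format(escaped_ns, pattern)

print(qualify_pattern("foo", "2014.other.example.org"))
```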
class FunctionTests(TestCase):
def setUp(self):
self.origin = mock.Mock(name='origin', spec_set=Origin)
def test_select_jobs__inclusion(self):
"""
verify that select_jobs() honors qualifier ordering
"""
job_a = JobDefinition({'id': 'a'})
job_b = JobDefinition({'id': 'b'})
job_c = JobDefinition({'id': 'c'})
qual_a = JobIdQualifier("a", self.origin)
qual_c = JobIdQualifier("c", self.origin)
for job_list in permutations([job_a, job_b, job_c], 3):
# Regardless of how the list of jobs is ordered the result
# should be the same, depending on the qualifier list
self.assertEqual(
select_jobs(job_list, [qual_a, qual_c]),
[job_a, job_c])
def test_select_jobs__exclusion(self):
"""
verify that select_jobs() honors qualifier ordering
"""
job_a = JobDefinition({'id': 'a'})
job_b = JobDefinition({'id': 'b'})
job_c = JobDefinition({'id': 'c'})
qual_all = CompositeQualifier([
JobIdQualifier("a", self.origin),
JobIdQualifier("b", self.origin),
JobIdQualifier("c", self.origin),
])
qual_not_c = JobIdQualifier("c", self.origin, inclusive=False)
for job_list in permutations([job_a, job_b, job_c], 3):
# Regardless of how the list of jobs is ordered the result
# should be the same, depending on the qualifier list
self.assertEqual(
select_jobs(job_list, [qual_all, qual_not_c]),
[job_a, job_b])
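The ordering property both tests assert can be restated in a minimal standalone sketch. ``select_ids`` and its parameters are hypothetical names, not the real ``select_jobs()`` signature:

```python
def select_ids(candidate_ids, wanted_ids, excluded_ids=()):
    # The output follows the order of the selection list, not the order
    # of the candidates; excluded ids are removed afterwards.
    selected = [job_id for job_id in wanted_ids if job_id in candidate_ids]
    return [job_id for job_id in selected if job_id not in excluded_ids]

print(select_ids(["c", "b", "a"], ["a", "c"]))  # ['a', 'c']
```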
plainbox-0.25/plainbox/impl/secure/test_plugins.py

# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.secure.test_plugins
=================================
Test definitions for plainbox.impl.secure.plugins module
"""
from unittest import TestCase
import collections
import os
from plainbox.impl.secure.plugins import FsPlugInCollection
from plainbox.impl.secure.plugins import IPlugIn, PlugIn
from plainbox.impl.secure.plugins import PkgResourcesPlugInCollection
from plainbox.impl.secure.plugins import PlugInCollectionBase
from plainbox.impl.secure.plugins import PlugInError
from plainbox.vendor import mock
class PlugInTests(TestCase):
"""
Tests for PlugIn class
"""
NAME = "name"
OBJ = mock.Mock(name="obj")
LOAD_TIME = 42
def setUp(self):
self.plugin = PlugIn(self.NAME, self.OBJ)
def test_property_name(self):
"""
verify that PlugIn.plugin_name getter works
"""
self.assertEqual(self.plugin.plugin_name, self.NAME)
def test_property_object(self):
"""
verify that PlugIn.plugin_object getter works
"""
self.assertEqual(self.plugin.plugin_object, self.OBJ)
def test_plugin_load_time(self):
"""
verify that PlugIn.plugin_load_time getter works
"""
self.assertEqual(PlugIn(self.NAME, self.OBJ).plugin_load_time, 0)
self.assertEqual(
PlugIn(self.NAME, self.OBJ, self.LOAD_TIME).plugin_load_time,
self.LOAD_TIME)
def test_plugin_wrap_time(self):
"""
verify that PlugIn.plugin_wrap_time getter works
"""
self.assertEqual(self.plugin.plugin_wrap_time, 0)
def test_repr(self):
"""
verify that repr for PlugIn works
"""
self.assertEqual(repr(self.plugin), "<PlugIn plugin_name:'name'>")
def test_base_cls(self):
"""
verify that PlugIn inherits IPlugIn
"""
self.assertTrue(issubclass(PlugIn, IPlugIn))
class DummyPlugInCollection(PlugInCollectionBase):
"""
A dummy, concrete subclass of PlugInCollectionBase
"""
def load(self):
"""
dummy implementation of load()
:raises NotImplementedError:
always raised
"""
raise NotImplementedError("this is a dummy method")
class PlugInCollectionBaseTests(TestCase):
"""
Tests for PlugInCollectionBase class.
Since this is an abstract class we're creating a concrete subclass with
dummy implementation of the load() method.
"""
LOAD_TIME = 42
def setUp(self):
self.col = DummyPlugInCollection()
self.plug1 = PlugIn("name1", "obj1")
self.plug2 = PlugIn("name2", "obj2")
@mock.patch.object(DummyPlugInCollection, "load")
def test_auto_loading(self, mock_col):
"""
verify that PlugInCollectionBase.load() is called when load=True is
passed to the initializer.
"""
col = DummyPlugInCollection(load=True)
col.load.assert_called_once_with()
def test_defaults(self):
"""
verify what defaults are passed to the initializer or set internally
"""
self.assertEqual(self.col._wrapper, PlugIn)
self.assertEqual(self.col._plugins, collections.OrderedDict())
self.assertEqual(self.col._loaded, False)
self.assertEqual(self.col._problem_list, [])
def test_get_by_name__typical(self):
"""
verify that PlugInCollectionBase.get_by_name() works
"""
with self.col.fake_plugins([self.plug1]):
self.assertEqual(
self.col.get_by_name(self.plug1.plugin_name), self.plug1)
def test_get_by_name__missing(self):
"""
check how PlugInCollectionBase.get_by_name() behaves when there is no
match for the given name.
"""
with self.assertRaises(KeyError), self.col.fake_plugins([]):
self.col.get_by_name(self.plug1.plugin_name)
def test_get_all_names(self):
"""
verify that PlugInCollectionBase.get_all_names() works
"""
with self.col.fake_plugins([self.plug1, self.plug2]):
self.assertEqual(
self.col.get_all_names(),
[self.plug1.plugin_name, self.plug2.plugin_name])
def test_get_all_plugins(self):
"""
verify that PlugInCollectionBase.get_all_plugins() works
"""
with self.col.fake_plugins([self.plug1, self.plug2]):
self.assertEqual(
self.col.get_all_plugins(), [self.plug1, self.plug2])
def test_get_all_plugin_objects(self):
"""
verify that PlugInCollectionBase.get_all_plugin_objects() works
"""
with self.col.fake_plugins([self.plug1, self.plug2]):
self.assertEqual(
self.col.get_all_plugin_objects(),
[self.plug1.plugin_object, self.plug2.plugin_object])
def test_get_items(self):
"""
verify that PlugInCollectionBase.get_all_items() works
"""
with self.col.fake_plugins([self.plug1, self.plug2]):
self.assertEqual(
self.col.get_all_items(),
[(self.plug1.plugin_name, self.plug1),
(self.plug2.plugin_name, self.plug2)])
def test_problem_list(self):
"""
verify that PlugInCollectionBase.problem_list works
"""
self.assertIs(self.col.problem_list, self.col._problem_list)
def test_fake_plugins(self):
"""
verify that PlugInCollectionBase.fake_plugins() works
"""
# create a canary object we'll check for below
canary = object()
# store it to all the attributes we expect to see changed by
# fake_plugins()
self.col._loaded = canary
self.col._plugins = canary
self.col._problems = canary
# use fake_plugins() with some plugins we have
fake_plugins = [self.plug1, self.plug2]
with self.col.fake_plugins(fake_plugins):
# ensure that we don't have canaries here
self.assertEqual(self.col._loaded, True)
self.assertEqual(self.col._plugins, collections.OrderedDict([
(self.plug1.plugin_name, self.plug1),
(self.plug2.plugin_name, self.plug2)]))
self.assertEqual(self.col._problem_list, [])
# ensure that we see canaries outside of the context manager
self.assertEqual(self.col._loaded, canary)
self.assertEqual(self.col._plugins, canary)
self.assertEqual(self.col._problems, canary)
def test_fake_plugins__with_problem_list(self):
"""
verify that PlugInCollectionBase.fake_plugins() works when called with
the optional problem list.
"""
# create a canary object we'll check for below
canary = object()
# store it to all the attributes we expect to see changed by
# fake_plugins()
self.col._loaded = canary
self.col._plugins = canary
self.col._problems = canary
# use fake_plugins() with some plugins we have
fake_plugins = [self.plug1, self.plug2]
fake_problems = [PlugInError("just testing")]
with self.col.fake_plugins(fake_plugins, fake_problems):
# ensure that we don't have canaries here
self.assertEqual(self.col._loaded, True)
self.assertEqual(self.col._plugins, collections.OrderedDict([
(self.plug1.plugin_name, self.plug1),
(self.plug2.plugin_name, self.plug2)]))
self.assertEqual(self.col._problem_list, fake_problems)
# ensure that we see canaries outside of the context manager
self.assertEqual(self.col._loaded, canary)
self.assertEqual(self.col._plugins, canary)
self.assertEqual(self.col._problems, canary)
def test_wrap_and_add_plugin__normal(self):
"""
verify that PlugInCollectionBase.wrap_and_add_plugin() works
"""
self.col.wrap_and_add_plugin("new-name", "new-obj", self.LOAD_TIME)
self.assertIn("new-name", self.col._plugins)
self.assertEqual(
self.col._plugins["new-name"].plugin_name, "new-name")
self.assertEqual(
self.col._plugins["new-name"].plugin_object, "new-obj")
self.assertEqual(
self.col._plugins["new-name"].plugin_load_time, self.LOAD_TIME)
def test_wrap_and_add_plugin__problem(self):
"""
verify that PlugInCollectionBase.wrap_and_add_plugin() works when a
problem occurs.
"""
with mock.patch.object(self.col, "_wrapper") as mock_wrapper:
mock_wrapper.side_effect = PlugInError
self.col.wrap_and_add_plugin("new-name", "new-obj", self.LOAD_TIME)
mock_wrapper.assert_called_with("new-name", "new-obj",
self.LOAD_TIME)
self.assertIsInstance(self.col.problem_list[0], PlugInError)
self.assertNotIn("new-name", self.col._plugins)
def test_extra_wrapper_args(self):
"""
verify that PlugInCollectionBase passes extra arguments to the wrapper
"""
class TestPlugIn(PlugIn):
def __init__(self, name, obj, load_time, *args, **kwargs):
super().__init__(name, obj, load_time)
self.args = args
self.kwargs = kwargs
col = DummyPlugInCollection(
False, TestPlugIn, 1, 2, 3, some="argument")
col.wrap_and_add_plugin("name", "obj", self.LOAD_TIME)
self.assertEqual(col._plugins["name"].args, (1, 2, 3))
self.assertEqual(col._plugins["name"].kwargs, {"some": "argument"})
class PkgResourcesPlugInCollectionTests(TestCase):
"""
Tests for PkgResourcesPlugInCollection class
"""
_NAMESPACE = "namespace"
def setUp(self):
# Create a collection
self.col = PkgResourcesPlugInCollection(self._NAMESPACE)
def test_namespace_is_set(self):
# Ensure that namespace was saved
self.assertEqual(self.col._namespace, self._NAMESPACE)
def test_plugins_are_empty(self):
# Ensure that plugins start out empty
self.assertEqual(len(self.col._plugins), 0)
def test_initial_loaded_flag(self):
# Ensure that 'loaded' flag is false
self.assertFalse(self.col._loaded)
def test_default_wrapper(self):
# Ensure that the wrapper is :class:`PlugIn`
self.assertEqual(self.col._wrapper, PlugIn)
@mock.patch('pkg_resources.iter_entry_points')
def test_load(self, mock_iter):
# Create a mocked entry point
mock_ep1 = mock.Mock()
mock_ep1.name = "zzz"
mock_ep1.load.return_value = "two"
# Create another mocked entry point
mock_ep2 = mock.Mock()
mock_ep2.name = "aaa"
mock_ep2.load.return_value = "one"
# Make the collection load both mocked entry points
mock_iter.return_value = [mock_ep1, mock_ep2]
# Load plugins
self.col.load()
# Ensure that pkg_resources were interrogated
mock_iter.assert_called_with(self._NAMESPACE)
# Ensure that both entry points were loaded
mock_ep1.load.assert_called_with()
mock_ep2.load.assert_called_with()
@mock.patch('plainbox.impl.secure.plugins.logger')
@mock.patch('pkg_resources.iter_entry_points')
def test_load_failing(self, mock_iter, mock_logger):
# Create a mocked entry point
mock_ep1 = mock.Mock()
mock_ep1.name = "zzz"
mock_ep1.load.return_value = "two"
# Create another mocked entry point
mock_ep2 = mock.Mock()
mock_ep2.name = "aaa"
mock_ep2.load.side_effect = ImportError("boom")
# Make the collection load both mocked entry points
mock_iter.return_value = [mock_ep1, mock_ep2]
# Load plugins
self.col.load()
# Ensure that pkg_resources were interrogated
mock_iter.assert_called_with(self._NAMESPACE)
# Ensure that both entry points were loaded
mock_ep1.load.assert_called_with()
mock_ep2.load.assert_called_with()
# Ensure that an exception was logged
mock_logger.exception.assert_called_with(
"Unable to import %s", mock_ep2)
# Ensure that the error was collected
self.assertIsInstance(self.col.problem_list[0], ImportError)
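The load loop the two tests above mock out can be sketched without ``pkg_resources``. ``FakeEntryPoint`` and ``load_entry_points`` are hypothetical stand-ins mirroring the behavior exercised by ``test_load_failing``:

```python
class FakeEntryPoint:
    # Hypothetical stand-in for a pkg_resources entry point.
    def __init__(self, name, value=None, error=None):
        self.name = name
        self.value = value
        self.error = error

    def load(self):
        if self.error is not None:
            raise self.error
        return self.value


def load_entry_points(entry_points):
    # Load each entry point in turn; import failures are collected in a
    # problem list instead of aborting the whole load.
    plugins = {}
    problem_list = []
    for ep in entry_points:
        try:
            plugins[ep.name] = ep.load()
        except ImportError as exc:
            problem_list.append(exc)
    return plugins, problem_list


plugins, problems = load_entry_points(
    [FakeEntryPoint("zzz", "two"),
     FakeEntryPoint("aaa", error=ImportError("boom"))])
```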
class FsPlugInCollectionTests(TestCase):
_P1 = "/system/providers"
_P2 = "home/user/.providers"
_DIR_LIST = [_P1, _P2]
_EXT = ".plugin"
def setUp(self):
# Create a collection
self.col = FsPlugInCollection(self._DIR_LIST, self._EXT)
def test_path_is_set(self):
# Ensure that path was saved
self.assertEqual(self.col._dir_list, self._DIR_LIST)
def test_ext_is_set(self):
# Ensure that ext was saved
self.assertEqual(self.col._ext, self._EXT)
def test_plugins_are_empty(self):
# Ensure that plugins start out empty
self.assertEqual(len(self.col._plugins), 0)
def test_initial_loaded_flag(self):
# Ensure that 'loaded' flag is false
self.assertFalse(self.col._loaded)
def test_default_wrapper(self):
# Ensure that the wrapper is :class:`PlugIn`
self.assertEqual(self.col._wrapper, PlugIn)
@mock.patch('plainbox.impl.secure.plugins.logger')
@mock.patch('builtins.open')
@mock.patch('os.path.isfile')
@mock.patch('os.listdir')
def test_load(self, mock_listdir, mock_isfile, mock_open, mock_logger):
# Mock a bit of filesystem access methods to make some plugins show up
def fake_listdir(path):
if path == self._P1:
return [
# A regular plugin
'foo.plugin',
# Another regular plugin
'bar.plugin',
# Unrelated file, not a plugin
'unrelated.txt',
# A directory that looks like a plugin
'dir.bad.plugin',
# A plugin without read permissions
'noperm.plugin']
else:
raise OSError("There is nothing in {}".format(path))
def fake_isfile(path):
return not os.path.basename(path).startswith('dir.')
def fake_open(path, encoding=None, mode=None):
m = mock.MagicMock(name='opened file {!r}'.format(path))
m.__enter__.return_value = m
if path == os.path.join(self._P1, 'foo.plugin'):
m.read.return_value = "foo"
return m
elif path == os.path.join(self._P1, 'bar.plugin'):
m.read.return_value = "bar"
return m
elif path == os.path.join(self._P1, 'noperm.plugin'):
raise OSError("You cannot open this file")
else:
raise IOError("Unexpected file: {}".format(path))
mock_listdir.side_effect = fake_listdir
mock_isfile.side_effect = fake_isfile
mock_open.side_effect = fake_open
# Load all plugins now
self.col.load()
# And 'again', just to ensure we're doing the IO only once
self.col.load()
# Ensure that we actually tried to look at the filesystem
self.assertEqual(
mock_listdir.call_args_list, [
((self._P1, ), {}),
((self._P2, ), {})
])
# Ensure that we actually tried to check if things are files
self.assertEqual(
mock_isfile.call_args_list, [
((os.path.join(self._P1, 'foo.plugin'),), {}),
((os.path.join(self._P1, 'bar.plugin'),), {}),
((os.path.join(self._P1, 'dir.bad.plugin'),), {}),
((os.path.join(self._P1, 'noperm.plugin'),), {}),
])
# Ensure that we actually tried to open some files
self.assertEqual(
mock_open.call_args_list, [
((os.path.join(self._P1, 'bar.plugin'),),
{'encoding': 'UTF-8'}),
((os.path.join(self._P1, 'foo.plugin'),),
{'encoding': 'UTF-8'}),
((os.path.join(self._P1, 'noperm.plugin'),),
{'encoding': 'UTF-8'}),
])
# Ensure that an exception was logged
mock_logger.error.assert_called_with(
'Unable to load %r: %s',
'/system/providers/noperm.plugin',
'You cannot open this file')
# Ensure that all of the errors are collected
# Using repr() since OSError seems hard to compare correctly
self.assertEqual(
repr(self.col.problem_list[0]),
repr(OSError('You cannot open this file')))
@mock.patch('plainbox.impl.secure.plugins.logger')
@mock.patch('builtins.open')
@mock.patch('os.path.isfile')
@mock.patch('os.listdir')
def test_load__two_extensions(self, mock_listdir, mock_isfile, mock_open,
mock_logger):
"""
verify that FsPlugInCollection works with multiple extensions
"""
mock_listdir.return_value = ["foo.txt", "bar.txt.in"]
mock_isfile.return_value = True
def fake_open(path, encoding=None, mode=None):
m = mock.MagicMock(name='opened file {!r}'.format(path))
m.read.return_value = "text"
m.__enter__.return_value = m
return m
mock_open.side_effect = fake_open
# Create a collection that looks for both extensions
col = FsPlugInCollection([self._P1], (".txt", ".txt.in"))
# Load everything
col.load()
# Ensure that we actually tried to look at the filesystem
self.assertEqual(
mock_listdir.call_args_list, [
((self._P1, ), {}),
])
# Ensure that we actually tried to check if things are files
self.assertEqual(
mock_isfile.call_args_list, [
((os.path.join(self._P1, 'foo.txt'),), {}),
((os.path.join(self._P1, 'bar.txt.in'),), {}),
])
# Ensure that we actually tried to open some files
self.assertEqual(
mock_open.call_args_list, [
((os.path.join(self._P1, 'bar.txt.in'),),
{'encoding': 'UTF-8'}),
((os.path.join(self._P1, 'foo.txt'),),
{'encoding': 'UTF-8'}),
])
# Ensure that no exception was logged
self.assertEqual(mock_logger.error.mock_calls, [])
# Ensure that everything was okay
self.assertEqual(col.problem_list, [])
# Ensure that both files got added
self.assertEqual(
col.get_by_name(
os.path.join(self._P1, "foo.txt")
).plugin_object, "text")
self.assertEqual(
col.get_by_name(
os.path.join(self._P1, "bar.txt.in")
).plugin_object, "text")
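The directory scan that the tests above mock out can be sketched standalone; ``scan_plugin_files`` is a hypothetical simplification, not the real FsPlugInCollection implementation:

```python
import os

def scan_plugin_files(dir_list, ext):
    # List each directory, keep regular files with a matching extension,
    # and silently skip directories that cannot be read (mirroring the
    # OSError path exercised in test_load above).
    found = []
    for dirname in dir_list:
        try:
            names = os.listdir(dirname)
        except OSError:
            continue
        for name in names:
            path = os.path.join(dirname, name)
            if name.endswith(ext) and os.path.isfile(path):
                found.append(path)
    return sorted(found)
```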
plainbox-0.25/plainbox/impl/secure/rfc822.py

# This file is part of Checkbox.
#
# Copyright 2012, 2013 Canonical Ltd.
# Written by:
# Sylvain Pineau
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.secure.rfc822` -- RFC822 parser
===================================================
Implementation of rfc822 serializer and deserializer.
.. warning::
THIS MODULE DOES NOT HAVE STABLE PUBLIC API
"""
import logging
import re
import textwrap
from plainbox.i18n import gettext as _
from plainbox.impl.secure.origin import FileTextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.origin import UnknownTextSource
logger = logging.getLogger("plainbox.secure.rfc822")
def normalize_rfc822_value(value):
# Remove the multi-line dot marker
value = re.sub(r'^(\s*)\.$', r'\1', value, flags=re.M)
# Remove consistent indentation
value = textwrap.dedent(value)
# Strip the remaining whitespace
value = value.strip()
return value
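The three steps above (dot-marker removal, dedent, strip) can be illustrated with a self-contained sketch using only the stdlib; ``normalize`` is a hypothetical local copy for demonstration:

```python
import re
import textwrap

def normalize(value):
    # A lone '.' on a line marks an intentionally empty line; drop it.
    value = re.sub(r'^(\s*)\.$', r'\1', value, flags=re.M)
    # Remove the indentation shared by all lines, then trim the edges.
    return textwrap.dedent(value).strip()

raw = " first paragraph\n .\n second paragraph"
print(repr(normalize(raw)))
```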
class RFC822Record:
"""
Class for tracking RFC822 records.
This is a simple container for the dictionary of data. The data is
represented by two copies, one original and one after value normalization.
Value normalization strips out excess whitespace and processes the magic
leading dot syntax that is essential for empty newlines.
Comparison is performed on the normalized data only, raw data is stored for
reference but does not differentiate records.
Each instance also holds the origin of the data (location of the
file/stream where it was parsed from).
"""
def __init__(self, data, origin=None, raw_data=None,
field_offset_map=None):
"""
Initialize a new record.
:param data:
A dictionary with normalized record data
:param origin:
A :class:`Origin` instance that describes where the data came from
:param raw_data:
An optional dictionary with raw record data. If omitted then it
will default to normalized data (as the same object, without making
a copy)
:param field_offset_map:
An optional dictionary with offsets (in line numbers) of each field
"""
self._data = data
if raw_data is None:
raw_data = data
self._raw_data = raw_data
if origin is None:
origin = Origin.get_caller_origin()
self._origin = origin
self._field_offset_map = field_offset_map
def __repr__(self):
return "<{} data:{!r} origin:{!r}>".format(
self.__class__.__name__, self._data, self._origin)
def __eq__(self, other):
if isinstance(other, RFC822Record):
return (self._data, self._origin) == (other._data, other._origin)
return NotImplemented
def __ne__(self, other):
if isinstance(other, RFC822Record):
return (self._data, self._origin) != (other._data, other._origin)
return NotImplemented
@property
def data(self):
"""
The normalized version of the data set (dictionary)
This property exposes the normalized version of the data encapsulated
in this record. Normalization is performed with
:func:`normalize_rfc822_value()`. Only values are normalized, keys are
left intact.
"""
return self._data
@property
def raw_data(self):
"""
The raw version of data set (dictionary)
This property exposes the raw (original) version of the data
encapsulated by this record. This data is as it was originally parsed,
including all the whitespace layout.
In some records this may be 'normal' data object itself (same object).
"""
return self._raw_data
@property
def origin(self):
"""
The origin of the record.
"""
return self._origin
@property
def field_offset_map(self):
"""
The field-to-line-number-offset mapping.
A dictionary mapping field name to offset (in lines) relative to the
origin where that field definition commences.
Note: the return value may be None
"""
return self._field_offset_map
def dump(self, stream):
"""
Dump this record to a stream
"""
def _dump_part(stream, key, values):
stream.write("{}:\n".format(key))
for value in values:
if not value:
stream.write(" .\n")
elif value == ".":
stream.write(" ..\n")
else:
stream.write(" {}\n".format(value))
for key, value in self.data.items():
if isinstance(value, (list, tuple)):
_dump_part(stream, key, value)
elif isinstance(value, str) and "\n" in value:
values = value.split("\n")
if not values[-1]:
values = values[:-1]
_dump_part(stream, key, values)
else:
stream.write("{}: {}\n".format(key, value))
stream.write("\n")
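The dot-escaping used by ``dump()`` for multi-line values (an empty line becomes ``" ."``, a literal ``"."`` becomes ``" .."``) can be illustrated with a small standalone sketch; the helper name here is ours, not part of the module:

```python
import io

def dump_multiline(stream, key, values):
    # Sketch of the escaping performed by RFC822Record.dump() for
    # multi-line values: an empty line becomes " .", a literal "."
    # becomes " ..", everything else is indented by one space.
    stream.write("{}:\n".format(key))
    for value in values:
        if not value:
            stream.write(" .\n")
        elif value == ".":
            stream.write(" ..\n")
        else:
            stream.write(" {}\n".format(value))

buf = io.StringIO()
dump_multiline(buf, "description", ["first line", "", ".", "last line"])
```

The escaping keeps a blank line inside a value distinguishable from the blank line that separates records.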
class RFC822SyntaxError(SyntaxError):
"""
SyntaxError subclass for RFC822 parsing functions
"""
def __init__(self, filename, lineno, msg):
self.filename = filename
self.lineno = lineno
self.msg = msg
def __repr__(self):
return "{}({!r}, {!r}, {!r})".format(
self.__class__.__name__, self.filename, self.lineno, self.msg)
def __eq__(self, other):
if isinstance(other, RFC822SyntaxError):
return ((self.filename, self.lineno, self.msg)
== (other.filename, other.lineno, other.msg))
return NotImplemented
def __ne__(self, other):
if isinstance(other, RFC822SyntaxError):
return ((self.filename, self.lineno, self.msg)
!= (other.filename, other.lineno, other.msg))
return NotImplemented
def __hash__(self):
return hash((self.filename, self.lineno, self.msg))
def load_rfc822_records(stream, data_cls=dict, source=None):
"""
Load a sequence of rfc822-like records from a text stream.
:param stream:
A file-like object from which to load the rfc822 data
:param data_cls:
The class of the dictionary-like type to hold the results. This is
mainly there so that callers may pass collections.OrderedDict.
:param source:
A :class:`plainbox.abc.ITextSource` subclass instance that describes
where stream data is coming from. If None, it will be inferred from the
stream (if possible). Specialized callers should provide a custom
source object to allow developers to accurately keep track of where
(possibly problematic) RFC822 data is coming from. If this is None and
inferring fails then all of the loaded records will have a None origin.
Each record consists of any number of key-value pairs. Subsequent records
are separated by one blank line. A record key may have a multi-line value
if the line starts with a whitespace character.
Returns a list of subsequent records as instances of the RFC822Record class. If
the optional data_cls argument is collections.OrderedDict then the values
retain their original ordering.
"""
return list(gen_rfc822_records(stream, data_cls, source))
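The record syntax accepted here can be demonstrated with a deliberately minimal stand-in parser (illustrative only; the real parser also tracks origins, raw data and field offsets):

```python
def parse_simple_rfc822(text):
    # Minimal sketch of the syntax handled by load_rfc822_records():
    # '#' starts a comment, a blank line separates records, a line that
    # starts with a space continues the previous value, and "key: value"
    # starts a new pair.
    records, current, key = [], {}, None
    for line in text.splitlines():
        if line.startswith("#"):
            continue
        elif line.strip() == "":
            if current:
                records.append(current)
            current, key = {}, None
        elif line.startswith(" ") and key is not None:
            current[key] += "\n" + line[1:]
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip()
            current[key] = value.strip()
    if current:
        records.append(current)
    return records

text = "# comment\nid: test-1\nplugin: shell\n\nid: test-2\n"
records = parse_simple_rfc822(text)
```

Two blank-line-separated records yield two dictionaries, mirroring what the real loader produces as ``RFC822Record`` instances.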
def gen_rfc822_records(stream, data_cls=dict, source=None):
"""
Load a sequence of rfc822-like records from a text stream.
:param stream:
A file-like object from which to load the rfc822 data
:param data_cls:
The class of the dictionary-like type to hold the results. This is
mainly there so that callers may pass collections.OrderedDict.
:param source:
A :class:`plainbox.abc.ITextSource` subclass instance that describes
where stream data is coming from. If None, it will be inferred from the
stream (if possible). Specialized callers should provide a custom
source object to allow developers to accurately keep track of where
(possibly problematic) RFC822 data is coming from. If this is None and
inferring fails then all of the loaded records will have a None origin.
Each record consists of any number of key-value pairs. Subsequent records
are separated by one blank line. A record key may have a multi-line value
if the line starts with a whitespace character.
Yields subsequent records as instances of the RFC822Record class. If
the optional data_cls argument is collections.OrderedDict then the values
retain their original ordering.
"""
record = None
key = None
value_list = None
origin = None
field_offset_map = None
# If the source was not provided then try constructing a FileTextSource
# from the name of the stream. If that fails, keep using None.
if source is None:
try:
source = FileTextSource(stream.name)
except AttributeError:
source = UnknownTextSource()
def _syntax_error(msg):
"""
Report a syntax error in the current line
"""
try:
filename = stream.name
except AttributeError:
filename = None
return RFC822SyntaxError(filename, lineno, msg)
def _new_record():
"""
Reset local state to track new record
"""
nonlocal key
nonlocal value_list
nonlocal record
nonlocal origin
nonlocal field_offset_map
key = None
value_list = None
if source is not None:
origin = Origin(source, None, None)
field_offset_map = {}
record = RFC822Record(data_cls(), origin, data_cls(), field_offset_map)
def _commit_key_value_if_needed():
"""
Finalize the most recently seen key: value pair
"""
nonlocal key
if key is not None:
raw_value = ''.join(value_list)
normalized_value = normalize_rfc822_value(raw_value)
record.raw_data[key] = raw_value
record.data[key] = normalized_value
logger.debug(_("Committed key/value %r=%r"), key, normalized_value)
key = None
def _set_start_lineno_if_needed():
"""
Remember the line number of the record start unless already set
"""
if origin and record.origin.line_start is None:
record.origin.line_start = lineno
def _update_end_lineno():
"""
Update the line number of the record tail
"""
if origin:
record.origin.line_end = lineno
# Start with an empty record
_new_record()
# Support simple text strings
if isinstance(stream, str):
# keepends=True (python3.2 has no keyword for this)
stream = iter(stream.splitlines(True))
# Iterate over subsequent lines of the stream
for lineno, line in enumerate(stream, start=1):
logger.debug(_("Looking at line %d:%r"), lineno, line)
# Treat # as comments
if line.startswith("#"):
pass
# Treat empty lines as record separators
elif line.strip() == "":
# Commit the current record so that the multi-line value of the
# last key, if any, is saved as a string
_commit_key_value_if_needed()
# If data is non-empty, yield the record, this allows us to safely
# use newlines for formatting
if record.data:
logger.debug(_("yielding record: %r"), record)
yield record
# Reset local state so that we can build a new record
_new_record()
# Treat lines starting with whitespace as multi-line continuation of the
# most recently seen key-value
elif line.startswith(" "):
if key is None:
# If we have not seen any keys yet then this is a syntax error
raise _syntax_error(_("Unexpected multi-line value"))
# Strip the initial space. This matches the behavior of xgettext
# scanning our job definitions with multi-line values.
line = line[1:]
# Append the current line to the list of values of the most recent
# key. This prevents quadratic complexity of string concatenation
value_list.append(line)
# Update the end line location of this record
_update_end_lineno()
# Treat lines with a colon as new key-value pairs
elif ":" in line:
# Since this is actual data let's try to remember where it came
# from. This may be a no-operation if there were any preceding
# key-value pairs.
_set_start_lineno_if_needed()
# Since we have a new, key-value pair we need to commit any
# previous key that we may have (regardless of multi-line or
# single-line values).
_commit_key_value_if_needed()
# Parse the line by splitting on the colon, getting rid of
# all surrounding whitespace from the key and getting rid of the
# leading whitespace from the value.
key, value = line.split(":", 1)
key = key.strip()
value = value.lstrip()
# Check if the key already exists in this record
if key in record.data:
raise _syntax_error(_(
"Job has a duplicate key {!r} "
"with old value {!r} and new value {!r}"
).format(key, record.raw_data[key], value))
if value.strip() != "":
# Construct initial value list out of the (only) value that we
# have so far. Additional multi-line values will just append to
# value_list
value_list = [value]
# Store the offset of the field in the offset map
field_offset_map[key] = lineno - origin.line_start
else:
# The initial line may be empty, in that case the spaces and
# newlines there are discarded
value_list = []
# Store the offset of the field in the offset map
# The +1 is for the fact that value is empty (or just
# whitespace) and that is stripped away in the normalized data
# part of the RFC822 record. To keep line tracking accurate
# we just assume that the field actually starts on
# the following line.
field_offset_map[key] = lineno - origin.line_start + 1
# Update the end-line location
_update_end_lineno()
# Treat all other lines as syntax errors
else:
raise _syntax_error(
_("Unexpected non-empty line: {!r}").format(line))
# Make sure to commit the last key from the record
_commit_key_value_if_needed()
# Once we've seen the whole file return the last record, if any
if record.data:
logger.debug(_("yielding record: %r"), record)
yield record
plainbox-0.25/plainbox/impl/secure/plugins.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.secure.plugins` -- interface for accessing extension points
===============================================================================
This module contains plugin interface for plainbox. Plugins are based on
pkg_resources entry points feature. Any python package can advertise the
existence of entry points associated with a given namespace. Any other
package can query a given namespace and enumerate a sequence of entry points.
Each entry point has a name and importable identifier. The identifier can
be imported using the load() method. A loaded entry point is exposed as an
instance of :class:`PlugIn`. A high-level collection of plugins (that may
eventually also query alternate backends) is offered by
:class:`PlugInCollection`.
Using :meth:`PlugInCollection.load()` one can load all plugins from
a particular namespace and work with them using provided utility methods
such as :meth:`PlugInCollection.get_by_name()` or
:meth:`PlugInCollection.get_all_names()`
The set of loaded plugins can be overridden by mock/patching
:meth:`PlugInCollection._get_entry_points()`. This is especially useful for
testing in isolation from whatever entry points may exist in the system.
"""
import abc
import collections
import contextlib
import logging
import os
import time
import pkg_resources
from plainbox.i18n import gettext as _
logger = logging.getLogger("plainbox.secure.plugins")
def now() -> float:
"""
Get the current "time".
:returns:
A fractional number of seconds since some undefined base event.
This method returns the current "time" that is useful for measuring
plug-in loading time. The return value is meaningless but delta between
two values is a fractional number of seconds between the two
corresponding events.
"""
try:
# time.perf_counter is only available since python 3.3
return time.perf_counter()
except AttributeError:
return time.clock()
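The try/except fallback above is a general pattern for probing an API at call time; a sketch under the same assumptions (the helper name is ours):

```python
import time

def monotonic_now():
    # Same fallback strategy as now(): prefer time.perf_counter()
    # (Python 3.3+), fall back to time.clock() on older interpreters.
    # Only the delta between two calls is meaningful.
    try:
        return time.perf_counter()
    except AttributeError:
        return time.clock()

start = monotonic_now()
time.sleep(0.01)
elapsed = monotonic_now() - start
```

Callers should never compare a single value against wall-clock time; only subtract two readings taken by the same function.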
class IPlugIn(metaclass=abc.ABCMeta):
"""
Piece of code loaded at runtime, one of many for a given extension point
"""
@abc.abstractproperty
def plugin_name(self) -> str:
"""
name of the plugin, may not be unique
"""
@abc.abstractproperty
def plugin_object(self) -> object:
"""
external object
"""
@abc.abstractproperty
def plugin_load_time(self) -> float:
"""
time, in fractional seconds, that was needed to load the plugin
"""
@abc.abstractproperty
def plugin_wrap_time(self) -> float:
"""
time, in fractional seconds, that was needed to wrap the plugin
.. note::
The difference between ``plugin_wrap_time`` and
``plugin_load_time`` depends on context. In practical terms the sum
of the two is interesting for analysis but in some cases having
access to both may be important.
"""
class PlugInError(Exception):
"""
Exception that may be raised by PlugIn.__init__() to signal it cannot
be fully loaded and should not be added to any collection.
"""
class PlugIn(IPlugIn):
"""
Simple plug-in that does not offer any guarantees beyond knowing its name
and some arbitrary external object.
"""
def __init__(self, name: str, obj: object, load_time: float=0, wrap_time: float=0):
"""
Initialize the plug-in with the specified name and external object
:param name:
Name of the plug-in object, semantics is application-defined
:param obj:
The plugged in object itself
:param load_time:
Time it took to load the object (in fractional seconds)
:param wrap_time:
Time it took to wrap the object (in fractional seconds)
"""
self._name = name
self._obj = obj
self._load_time = load_time
self._wrap_time = wrap_time
def __repr__(self):
return "<{!s} plugin_name:{!r}>".format(
type(self).__name__, self.plugin_name)
@property
def plugin_name(self) -> str:
"""
plugin name, arbitrary string
"""
return self._name
@property
def plugin_object(self) -> object:
"""
plugin object, arbitrary object
"""
return self._obj
@property
def plugin_load_time(self) -> float:
"""
time, in fractional seconds, that was needed to load the plugin
"""
return self._load_time
@property
def plugin_wrap_time(self) -> float:
"""
time, in fractional seconds, that was needed to wrap the plugin
"""
return self._wrap_time
class IPlugInCollection(metaclass=abc.ABCMeta):
"""
A collection of IPlugIn objects.
"""
@abc.abstractmethod
def get_by_name(self, name):
"""
Get the specified plug-in (by name)
"""
@abc.abstractmethod
def get_all_names(self):
"""
Get an iterator to a sequence of plug-in names
"""
@abc.abstractmethod
def get_all_plugins(self):
"""
Get an iterator to a sequence of plug-ins
"""
@abc.abstractmethod
def get_all_plugin_objects(self):
"""
Get a list of plug-in objects
This is a shortcut that gives the fastest access to a list of
:attr:`IPlugIn.plugin_object` of each loaded plugin.
"""
@abc.abstractmethod
def get_all_items(self):
"""
Get an iterator to a sequence of (name, plug-in)
"""
@abc.abstractproperty
def problem_list(self):
"""
List of problems encountered while loading plugins
"""
@abc.abstractmethod
def load(self):
"""
Load all plug-ins.
This method loads all plug-ins from the specified name-space. It may
perform a lot of IO so it's somewhat slow / expensive on a cold disk
cache.
"""
@abc.abstractmethod
@contextlib.contextmanager
def fake_plugins(self, plugins, problem_list=None):
"""
Context manager for using fake list of plugins
:param plugins:
list of PlugIn-alike objects
:param problem_list:
list of problems (exceptions)
The provided list of plugins and exceptions overrides any previously
loaded plugins and prevents loading any other, real, plugins. After the
context manager exits the previous state is restored.
"""
@abc.abstractproperty
def discovery_time(self) -> float:
"""
Time, in fractional seconds, that was used to discover all objects.
This time is separate from the load and wrap time of each
individual plug-in. Typically this is either a fixed cost or a
predictable cost related to traversing the file system.
"""
@abc.abstractmethod
def get_total_time(self) -> float:
"""
Get the cost to prepare everything required by this collection
:returns:
The total number of fractional seconds of wall-clock time spent on
discovering, loading and wrapping each object now contained in this
collection.
"""
class PlugInCollectionBase(IPlugInCollection):
"""
Base class that shares some of the implementation with the other
PlugInCollection implementations.
"""
def __init__(self, load=False, wrapper=PlugIn, *wrapper_args,
**wrapper_kwargs):
"""
Initialize a collection of plug-ins
:param load:
if true, load all of the plug-ins now
:param wrapper:
wrapper class for all loaded objects, defaults to :class:`PlugIn`
:param wrapper_args:
additional arguments passed to each instantiated wrapper
:param wrapper_kwargs:
additional keyword arguments passed to each instantiated wrapper
"""
self._wrapper = wrapper
self._wrapper_args = wrapper_args
self._wrapper_kwargs = wrapper_kwargs
self._plugins = collections.OrderedDict() # str -> IPlugIn instance
self._loaded = False
self._problem_list = []
self._discovery_time = 0
if load:
self.load()
def get_by_name(self, name):
"""
Get the specified plug-in (by name)
:param name:
name of the plugin to locate
:returns:
:class:`PlugIn` like object associated with the name
:raises KeyError:
if the specified name cannot be found
"""
return self._plugins[name]
def get_all_names(self):
"""
Get a list of all the plug-in names
:returns:
a list of plugin names
"""
return list(self._plugins.keys())
def get_all_plugins(self):
"""
Get a list of all the plug-ins
:returns:
a list of plugin objects
"""
return list(self._plugins.values())
def get_all_plugin_objects(self):
"""
Get a list of plug-in objects
"""
return [plugin.plugin_object for plugin in self._plugins.values()]
def get_all_items(self):
"""
Get a list of all the pairs of plugin name and plugin
:returns:
a list of tuples (plugin.plugin_name, plugin)
"""
return list(self._plugins.items())
@property
def problem_list(self):
"""
List of problems encountered while loading plugins
"""
return self._problem_list
@contextlib.contextmanager
def fake_plugins(self, plugins, problem_list=None):
"""
Context manager for using fake list of plugins
:param plugins:
list of PlugIn-alike objects
:param problem_list:
list of problems (exceptions)
The provided list of plugins overrides any previously loaded
plugins and prevents loading any other, real, plugins. After
the context manager exits the previous state is restored.
"""
old_loaded = self._loaded
old_problem_list = self._problem_list
old_plugins = self._plugins
self._loaded = True
self._plugins = collections.OrderedDict([
(plugin.plugin_name, plugin)
for plugin in plugins
])
if problem_list is None:
problem_list = []
self._problem_list = problem_list
try:
yield
finally:
self._loaded = old_loaded
self._plugins = old_plugins
self._problem_list = old_problem_list
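The save/swap/restore discipline used by ``fake_plugins()`` is a reusable context-manager pattern; a toy stand-in (class and attribute names are ours) shows it in isolation:

```python
import contextlib

class Registry:
    # Toy analogue of PlugInCollectionBase state, showing the
    # save/swap/restore pattern used by fake_plugins().
    def __init__(self):
        self.plugins = {"real": object()}

    @contextlib.contextmanager
    def fake(self, plugins):
        old = self.plugins
        self.plugins = dict(plugins)  # override with the fakes
        try:
            yield
        finally:
            self.plugins = old  # previous state restored on exit

reg = Registry()
with reg.fake({"fake": object()}):
    inside = sorted(reg.plugins)
after = sorted(reg.plugins)
```

The ``try``/``finally`` guarantees the original state comes back even if the body of the ``with`` block raises.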
def wrap_and_add_plugin(self, plugin_name, plugin_obj, plugin_load_time):
"""
Internal method of PlugInCollectionBase.
:param plugin_name:
plugin name, some arbitrary string
:param plugin_obj:
plugin object, some arbitrary object.
:param plugin_load_time:
number of seconds it took to load this plugin
This method prepares a wrapper (PlugIn subclass instance) for the
specified plugin name/object by attempting to instantiate the wrapper
class. If a PlugInError exception is raised then it is added to the
problem_list and the corresponding plugin is not added to the
collection of plugins.
"""
try:
wrapper = self._wrapper(
plugin_name, plugin_obj, plugin_load_time,
*self._wrapper_args, **self._wrapper_kwargs)
except PlugInError as exc:
logger.warning(
_("Unable to prepare plugin %s: %s"), plugin_name, exc)
self._problem_list.append(exc)
else:
self._plugins[plugin_name] = wrapper
@property
def discovery_time(self) -> float:
"""
Time, in fractional seconds, that was required to discover all objects.
This time is separate from the load and wrap time of each
individual plug-in. Typically this is either a fixed cost or a
predictable cost related to traversing the file system.
"""
if self._loaded is False:
raise AttributeError(
_("discovery_time is meaningful after calling load()"))
return self._discovery_time
def get_total_time(self) -> float:
"""
Get the sum of load and wrap time of each plugin object
:returns:
The total number of fractional seconds of wall-clock time spent by
loading this collection. This value doesn't include some small
overhead of this class but is representative of the load times of
pluggable code.
"""
return sum(
plugin.plugin_load_time + plugin.plugin_wrap_time
for plugin in self._plugins.values()) + self.discovery_time
class PkgResourcesPlugInCollection(PlugInCollectionBase):
"""
Collection of plug-ins based on pkg_resources
Instantiate with :attr:`namespace`, call :meth:`load()` and then access any
of the loaded plug-ins using the API offered. All loaded objects are
wrapped by a plug-in container. By default that is :class:`PlugIn` but it
may be adjusted if required.
"""
def __init__(self, namespace, load=False, wrapper=PlugIn, *wrapper_args,
**wrapper_kwargs):
"""
Initialize a collection of plug-ins from the specified name-space.
:param namespace:
pkg_resources entry-point name-space of the plug-in collection
:param load:
if true, load all of the plug-ins now
:param wrapper:
wrapper class for all loaded objects, defaults to :class:`PlugIn`
:param wrapper_args:
additional arguments passed to each instantiated wrapper
:param wrapper_kwargs:
additional keyword arguments passed to each instantiated wrapper
"""
self._namespace = namespace
super().__init__(load, wrapper, *wrapper_args, **wrapper_kwargs)
def load(self):
"""
Load all plug-ins.
This method loads all plug-ins from the specified name-space. It may
perform a lot of IO so it's somewhat slow / expensive on a cold disk
cache.
.. note::
this method queries pkg-resources only once.
"""
if self._loaded:
return
self._loaded = True
start_time = now()
entry_point_list = list(self._get_entry_points())
entry_point_list.sort(key=lambda ep: ep.name)
self._discovery_time = now() - start_time
for entry_point in entry_point_list:
start_time = now()
try:
obj = entry_point.load()
except ImportError as exc:
logger.exception(_("Unable to import %s"), entry_point)
self._problem_list.append(exc)
else:
self.wrap_and_add_plugin(
entry_point.name, obj, now() - start_time)
def _get_entry_points(self):
"""
Get entry points from pkg_resources.
This is the method you want to mock if you are writing unit tests
"""
return pkg_resources.iter_entry_points(self._namespace)
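The docstring above points at ``_get_entry_points()`` as the seam to mock in unit tests; a sketch with ``unittest.mock`` and a toy class of our own invention shows the idea:

```python
from unittest import mock

class Loader:
    # Toy analogue of PkgResourcesPlugInCollection: tests replace the
    # discovery method instead of touching real pkg_resources metadata.
    def _get_entry_points(self):
        raise RuntimeError("would hit pkg_resources")

    def names(self):
        return [ep for ep in self._get_entry_points()]

loader = Loader()
with mock.patch.object(Loader, "_get_entry_points",
                       return_value=["a", "b"]):
    names = loader.names()
```

Patching the single discovery method keeps the rest of the loading pipeline exercised by the test.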
class FsPlugInCollection(PlugInCollectionBase):
"""
Collection of plug-ins based on filesystem entries
Instantiate with :attr:`dir_list` and :attr:`ext`, call :meth:`load()` and
then access any of the loaded plug-ins using the API offered. All loaded
plugin information files are wrapped by a plug-in container. By default
that is :class:`PlugIn` but it may be adjusted if required.
The name of each plugin is the base name of the plugin file, the object of
each plugin is the text read from the plugin file.
"""
def __init__(self, dir_list, ext, recursive=False, load=False,
wrapper=PlugIn, *wrapper_args, **wrapper_kwargs):
"""
Initialize a collection of plug-ins from the specified name-space.
:param dir_list:
a list of directories to search
:param ext:
extension of each plugin definition file (or a list of those)
:param recursive:
a flag that indicates if we should perform recursive search
(default False)
:param load:
if true, load all of the plug-ins now
:param wrapper:
wrapper class for all loaded objects, defaults to :class:`PlugIn`
:param wrapper_args:
additional arguments passed to each instantiated wrapper
:param wrapper_kwargs:
additional keyword arguments passed to each instantiated wrapper
"""
if (not isinstance(dir_list, list)
or not all(isinstance(item, str) for item in dir_list)):
raise TypeError("dir_list needs to be List[str]")
self._dir_list = dir_list
self._ext = ext
self._recursive = recursive
super().__init__(load, wrapper, *wrapper_args, **wrapper_kwargs)
def load(self):
"""
Load all plug-ins.
This method loads all plug-ins from the search directories (as defined
by the path attribute). Missing directories are silently ignored.
"""
if self._loaded:
return
self._loaded = True
start_time = now()
filename_list = list(self._get_plugin_files())
filename_list.sort()
self._discovery_time = now() - start_time
for filename in filename_list:
start_time = now()
try:
text = self._get_file_text(filename)
except (OSError, IOError) as exc:
logger.error(_("Unable to load %r: %s"), filename, str(exc))
self._problem_list.append(exc)
else:
self.wrap_and_add_plugin(filename, text, now() - start_time)
def _get_file_text(self, filename):
with open(filename, encoding='UTF-8') as stream:
return stream.read()
def _get_plugin_files(self):
"""
Enumerate (generate) all plugin files according to 'path' and 'ext'
"""
# Look in all parts of 'path' separated by standard system path
# separator.
for dirname in self._dir_list:
if self._recursive:
entries = []
for base_dir, dirs, files in os.walk(dirname):
entries.extend([
os.path.relpath(
os.path.join(base_dir, filename), dirname)
for filename in files])
else:
# List all files in each path component
try:
entries = os.listdir(dirname)
except OSError:
# Silently ignore anything we cannot access
continue
# Look at each file there
for entry in entries:
# Skip files with other extensions
if isinstance(self._ext, str):
if not entry.endswith(self._ext):
continue
elif isinstance(self._ext, collections.Sequence):
for ext in self._ext:
if entry.endswith(ext):
break
else:
continue
info_file = os.path.join(dirname, entry)
# Skip all non-files
if not os.path.isfile(info_file):
continue
yield info_file
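The extension check in ``_get_plugin_files()`` accepts either one string or a sequence of strings, using a ``for``/``else`` to skip non-matching entries; the same test expressed as a boolean helper (our name, not part of the module):

```python
def matches_ext(entry, ext):
    # Mirrors the extension filter in _get_plugin_files(): ext may be a
    # single string or a sequence of strings; the for/else "continue" in
    # the original is expressed here as a boolean result.
    if isinstance(ext, str):
        return entry.endswith(ext)
    return any(entry.endswith(e) for e in ext)

kept = [f for f in ["a.txt", "b.provider", "c.conf"]
        if matches_ext(f, [".provider", ".conf"])]
```

Passing a plain string matches exactly one suffix; passing a list matches any of them.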
class LazyFileContent:
"""
Support class for FsPlugInCollection's subclasses that behaves like a
string of text loaded from a file. The actual text is loaded on demand, the
first time it is needed.
The actual methods implemented here are just enough to work for loading a
provider. Since __getattr__() is implemented the class should be pretty
versatile but your mileage may vary.
"""
def __init__(self, name):
self.name = name
self._text = None
def __repr__(self):
return "<{} name:{!r}{}>".format(
self.__class__.__name__, self.name,
' (pending)' if self._text is None else ' (loaded)')
def __str__(self):
self._ensure_loaded()
return self._text
def __iter__(self):
self._ensure_loaded()
return iter(self._text.splitlines(True))
def __getattr__(self, attr):
self._ensure_loaded()
return getattr(self._text, attr)
def _ensure_loaded(self):
if self._text is None:
with open(self.name, encoding='UTF-8') as stream:
self._text = stream.read()
class LazyFsPlugInCollection(FsPlugInCollection):
"""
Collection of plug-ins based on filesystem entries
Instantiate with :attr:`dir_list` and :attr:`ext`, call :meth:`load()` and
then access any of the loaded plug-ins using the API offered. All loaded
plugin information files are wrapped by a plug-in container. By default
that is :class:`PlugIn` but it may be adjusted if required.
The name of each plugin is the base name of the plugin file, the object of
each plugin is a handle that can be used to optionally load the content of
the file.
"""
def _get_file_text(self, filename):
return LazyFileContent(filename)
class LazyPlugInCollection(PlugInCollectionBase):
"""
Collection of plug-ins based on a mapping of imported objects
All loaded plugin information files are wrapped by a plug-in container. By
default that is :class:`PlugIn` but it may be adjusted if required.
"""
def __init__(self, mapping, load=False, wrapper=PlugIn,
*wrapper_args, **wrapper_kwargs):
"""
Initialize a collection of plug-ins from the specified mapping of
callbacks.
:param mapping:
any mapping from any string (the plugin name) to a tuple
("module:obj", *args) such that, once "module:obj" is imported, calling
``obj(*args)`` produces the plugin object; alternatively, a mapping from
the same string to a string that is imported but *not* called.
:param load:
if true, load all of the plug-ins now
:param wrapper:
wrapper class for all loaded objects, defaults to :class:`PlugIn`
:param wrapper_args:
additional arguments passed to each instantiated wrapper
:param wrapper_kwargs:
additional keyword arguments passed to each instantiated wrapper
"""
self._mapping = mapping
super().__init__(load, wrapper, *wrapper_args, **wrapper_kwargs)
def load(self):
if self._loaded:
return
logger.debug(_("Loading everything in %r"), self)
self._loaded = True
name_discovery_data_list = self.discover()
for name, discovery_data in name_discovery_data_list:
if name in self._plugins:
continue
self.load_one(name, discovery_data)
def discover(self):
start = now()
result = self.do_discover()
self._discovery_time = now() - start
return result
def load_one(self, name, discovery_data):
start_time = now()
try:
logger.debug(_("Loading %r"), name)
obj = self.do_load_one(name, discovery_data)
except (ImportError, AttributeError, ValueError) as exc:
logger.exception(_("Unable to load: %r"), name)
self._problem_list.append(exc)
else:
logger.debug(_("Wrapping %r"), name)
self.wrap_and_add_plugin(name, obj, now() - start_time)
def do_discover(self):
return self._mapping.items()
def do_load_one(self, name, discovery_data):
if isinstance(discovery_data, tuple):
callable_obj = discovery_data[0]
args = discovery_data[1:]
else:
callable_obj = discovery_data
args = None
if isinstance(callable_obj, str):
logger.debug(_("Importing %s"), callable_obj)
callable_obj = getattr(
__import__(
callable_obj.split(':', 1)[0], fromlist=[1]),
callable_obj.split(':', 1)[1])
logger.debug(_("Calling %r with %r"), callable_obj, args)
if args is None:
return callable_obj
else:
return callable_obj(*args)
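The ``"module:obj"`` resolution in ``do_load_one()`` is import-then-getattr; the original spells it with ``__import__(..., fromlist=[1])``, while this sketch (helper name ours) uses the equivalent modern ``importlib`` call:

```python
import importlib

def resolve(spec):
    # Sketch of do_load_one()'s "module:obj" resolution: import the
    # module part, then fetch the attribute named after the colon.
    module_name, _, attr = spec.partition(":")
    return getattr(importlib.import_module(module_name), attr)

dumps = resolve("json:dumps")
out = dumps([1, 2])
```

With a tuple value ``("json:dumps", [1, 2])`` the collection would additionally call the resolved object with the remaining arguments.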
def get_all_names(self):
"""
Get a list of all the plug-in names
:returns:
a list of plugin names
"""
if self._loaded:
return super().get_all_names()
else:
return list(self._mapping.keys())
def get_by_name(self, name):
"""
Get the specified plug-in (by name)
:param name:
name of the plugin to locate
:returns:
:class:`PlugIn` like object associated with the name
:raises KeyError:
if the specified name cannot be found
"""
if self._loaded:
return super().get_by_name(name)
if name not in self._plugins:
discovery_data = self._mapping[name]
self.load_one(name, discovery_data)
return self._plugins[name]
@property
def discovery_time(self) -> float:
"""
Time, in fractional seconds, that was required to discover all objects.
This time is separate from the load and wrap time of each
individual plug-in. Typically this is either a fixed cost or a
predictable cost related to traversing the file system.
.. note::
This overridden version can be called at any time, unlike the base
class implementation. Before all discovery is done, it simply
returns zero.
"""
return self._discovery_time
@contextlib.contextmanager
def fake_plugins(self, plugins, problem_list=None):
old_mapping = self._mapping
self._mapping = {} # fake the mapping
try:
with super().fake_plugins(plugins, problem_list):
yield
finally:
self._mapping = old_mapping
plainbox-0.25/plainbox/impl/secure/config.py
# This file is part of Checkbox.
#
# Copyright 2013, 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.config` -- configuration
============================================
.. warning::
THIS MODULE DOES NOT HAVE A STABLE PUBLIC API
"""
from abc import ABCMeta, abstractmethod
import collections
import configparser
import logging
import re
from plainbox.i18n import gettext as _
logger = logging.getLogger("plainbox.config")
class INameTracking(metaclass=ABCMeta):
"""
Interface for classes that are instantiated as a part of definition of
another class. The purpose of this interface is to allow instances to learn
about the name (python identifier) that was assigned to the instance at
class definition time.
Subclasses must define the _set_tracked_name() method.
"""
@abstractmethod
def _set_tracked_name(self, name):
"""
Set the name that corresponds to the symbol used in the class definition. This
can be a no-op if the name was already set by other means
"""
class ConfigMetaData:
"""
Class containing meta-data about a Config class
Sub-classes of this class are automatically added to each Config subclass
as a Meta class-level attribute.
This class typically has the following attributes:
:attr variable_list:
A list of all Variable objects defined in the class
:attr section_list:
A list of all Section objects defined in the class
:attr filename_list:
A list of config files (pathnames) to read on call to
:meth:`Config.read`
"""
variable_list = []
section_list = []
filename_list = []
class UnsetType:
"""
Class of the Unset object
"""
def __str__(self):
return _("unset")
def __repr__(self):
return "Unset"
def __bool__(self):
return False
Unset = UnsetType()
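The ``Unset`` singleton above is a falsy sentinel that stays distinct from ``None``, so code can tell "never assigned" apart from "explicitly set to None". A standalone re-creation of the pattern (not the plainbox object itself):

```python
# Standalone sketch of the Unset sentinel pattern: a falsy singleton
# that is distinct from None, so "value is Unset" can distinguish
# "never assigned" from "explicitly set to None".
class UnsetType:
    def __str__(self):
        return "unset"

    def __repr__(self):
        return "Unset"

    def __bool__(self):
        return False


Unset = UnsetType()

assert not Unset            # falsy, so "if value:" treats it like a default
assert Unset is not None    # but still distinguishable from None
```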
def understands_Unset(cls_or_func):
"""
Decorator for marking validators as supporting the special Unset value.
This decorator should be applied to every validator that natively supports
Unset values. Without it, Unset is never validated.
This decorator works by setting the ``understands_Unset`` attribute on the
decorated object and returning it intact.
"""
cls_or_func.understands_Unset = True
return cls_or_func
class Variable(INameTracking):
"""
Variable that can be used in configuration systems
"""
_KIND_CHOICE = (bool, int, float, str)
def __init__(self, name=None, *, section='DEFAULT', kind=str,
default=Unset, validator_list=None, help_text=None):
# Ensure kind is correct
if kind not in self._KIND_CHOICE:
raise ValueError(_("unsupported kind"))
# Ensure that we have a validator_list, even if empty
if validator_list is None:
validator_list = []
if validator_list and isinstance(validator_list[0], NotUnsetValidator):
# XXX: Kludge ahead, beware!
# Insert a KindValidator as the second validator to run
# just after the NotUnsetValidator
# TODO: To properly handle this without any special-casing we
# should drop the implicit insertion of the KindValidator and
# convert all users to properly order KindValidator and
# NotUnsetValidator instances so that the error message is helpful
# to the user. The whole idea is to validate Unset before we try to
# validate the type.
validator_list.insert(1, KindValidator)
else:
# Insert a KindValidator as the first validator to run
validator_list.insert(0, KindValidator)
# Assign all the attributes
self._name = name
self._section = section
self._kind = kind
self._default = default
self._validator_list = validator_list
self._help_text = help_text
# Workaround for Sphinx breaking if __doc__ is a property
self.__doc__ = self.help_text or self.__class__.__doc__
def validate(self, value):
"""
Check if the supplied value is valid for this variable.
:param value:
The proposed value
:raises ValidationError:
If the value was not valid in any way
"""
for validator in self.validator_list:
# Most validators don't want to deal with the unset type so let's
# special case that. Anything that is decorated with
# @understands_Unset will have that attribute set to True.
#
# If the value _is_ unset and the validator doesn't claim to
# support it then just skip it.
if value is Unset and not getattr(validator, 'understands_Unset',
False):
continue
message = validator(self, value)
if message is not None:
raise ValidationError(self, value, message)
def _set_tracked_name(self, name):
"""
Internal method used by :meth:`ConfigMeta.__new__`
"""
if self._name is None:
self._name = name
@property
def name(self):
"""
name of this variable
"""
return self._name
@property
def section(self):
"""
name of the section this variable belongs to (in a configuration file)
"""
return self._section
@property
def kind(self):
"""
the "poor man's type", can be only str (default), bool, float or int
"""
return self._kind
@property
def default(self):
"""
a default value, if any
"""
return self._default
@property
def validator_list(self):
"""
an optional list of :class:`Validator` instances that are enforced on
the value
"""
return self._validator_list
@property
def help_text(self):
"""
an optional help text associated with this variable
"""
return self._help_text
def __repr__(self):
return "<Variable name:{!r}>".format(self.name)
def __get__(self, instance, owner):
"""
Get the value of a variable
Missing variables return the default value
"""
if instance is None:
return self
try:
return instance._get_variable(self._name)
except KeyError:
return self.default
def __set__(self, instance, new_value):
"""
Set the value of a variable
:raises ValidationError: if the new value is incorrect
"""
# Check it against all validators
self.validate(new_value)
# Assign it to the backing store of the instance
instance._set_variable(self.name, new_value)
def __delete__(self, instance):
# NOTE: this is quite confusing, this method is a companion to __get__
# and __set__ but __del__ is entirely unrelated (object garbage
# collected, do final cleanup) so don't think this is a mistake
instance._del_variable(self._name)
class Section(INameTracking):
"""
A section of a configuration file.
"""
def __init__(self, name=None, *, help_text=None):
self._name = name
self._help_text = help_text
# Workaround for Sphinx breaking if __doc__ is a property
self.__doc__ = self.help_text or self.__class__.__doc__
def _set_tracked_name(self, name):
"""
Internal method used by :meth:`ConfigMeta.__new__()`
"""
if self._name is None:
self._name = name
@property
def name(self):
"""
name of this section
"""
return self._name
@property
def help_text(self):
"""
an optional help text associated with this section
"""
return self._help_text
def __get__(self, instance, owner):
if instance is None:
return self
try:
return instance._get_section(self._name)
except KeyError:
return Unset
def __set__(self, instance, new_value):
instance._set_section(self.name, new_value)
def __delete__(self, instance):
instance._del_section(self.name)
class ConfigMeta(type):
"""
Meta class for all configuration classes.
This meta class handles assignment of '_name' attribute to each
:class:`Variable` instance created in the class body.
It also accumulates such instances and assigns them to variable_list in a
helper Meta class which is assigned back to the namespace.
"""
def __new__(mcls, name, bases, namespace, **kwargs):
# Keep track of variables and sections from base class
variable_list = []
section_list = []
if 'Meta' in namespace:
if hasattr(namespace['Meta'], 'variable_list'):
variable_list = namespace['Meta'].variable_list[:]
if hasattr(namespace['Meta'], 'section_list'):
section_list = namespace['Meta'].section_list[:]
# Discover all Variable and Section instances
# defined in the class namespace
for attr_name, attr_value in namespace.items():
if isinstance(attr_value, INameTracking):
attr_value._set_tracked_name(attr_name)
if isinstance(attr_value, Variable):
variable_list.append(attr_value)
elif isinstance(attr_value, Section):
section_list.append(attr_value)
# Get or create the class of the 'Meta' object.
#
# This class should always inherit from ConfigMetaData and whatever the
# user may have defined as Meta.
Meta_name = "Meta"
Meta_bases = (ConfigMetaData,)
Meta_ns = {
'variable_list': variable_list,
'section_list': section_list
}
if 'Meta' in namespace:
user_Meta_cls = namespace['Meta']
if not isinstance(user_Meta_cls, type):
raise TypeError("Meta must be a class")
Meta_bases = (user_Meta_cls, ConfigMetaData)
# Create a new type for the Meta class
namespace['Meta'] = type.__new__(
type(ConfigMetaData), Meta_name, Meta_bases, Meta_ns)
# Create a new type for the Config subclass
return type.__new__(mcls, name, bases, namespace)
@classmethod
def __prepare__(mcls, name, bases, **kwargs):
return collections.OrderedDict()
class PlainBoxConfigParser(configparser.ConfigParser):
"""
A subclass of ConfigParser with the following changes:
- option names are not lower-cased
- write() has deterministic ordering (sorted by name)
"""
def optionxform(self, option):
"""
Overridden method from :class:`configparser.ConfigParser`.
Returns `option` without any transformations
"""
return option
def write(self, fp, space_around_delimiters=True):
"""
Write an .ini-format representation of the configuration state.
If `space_around_delimiters` is True (the default), delimiters between
keys and values are surrounded by spaces. The ordering of section and
values within is deterministic.
"""
if space_around_delimiters:
d = " {} ".format(self._delimiters[0])
else:
d = self._delimiters[0]
if self._defaults:
self._write_section(
fp, self.default_section, sorted(self._defaults.items()), d)
for section in self._sections:
self._write_section(
fp, section, sorted(self._sections[section].items()), d)
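The effect of overriding ``optionxform`` can be seen with a minimal standalone subclass: the stock ``ConfigParser`` lower-cases option names, while returning the option unchanged keeps them verbatim, just as ``PlainBoxConfigParser`` does.

```python
import configparser

# Minimal standalone demo of the optionxform override used by
# PlainBoxConfigParser: returning the option unchanged preserves case.
class CasePreservingParser(configparser.ConfigParser):
    def optionxform(self, option):
        return option


ini_text = "[DEFAULT]\nSecureID = abc\n"

preserving = CasePreservingParser()
preserving.read_string(ini_text)
assert list(preserving["DEFAULT"]) == ["SecureID"]  # case kept as written

stock = configparser.ConfigParser()
stock.read_string(ini_text)
assert list(stock["DEFAULT"]) == ["secureid"]       # default lower-casing
```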
class Config(metaclass=ConfigMeta):
"""
Base class for configuration systems
:attr _var:
storage backend for Variable definitions
:attr _section:
storage backend for Section definitions
:attr _filename_list:
list of pathnames to files that were loaded by the last call to
:meth:`read()`
:attr _problem_list:
list of :class:`ValidationError` that were detected by the last call to
:meth:`read()`
"""
def __init__(self):
"""
Initialize an empty Config object
"""
self._var = {}
self._section = {}
self._filename_list = []
self._problem_list = []
@property
def problem_list(self):
"""
list of :class:`ValidationError` that were detected by the last call to
:meth:`read()`
"""
return self._problem_list
@property
def filename_list(self):
"""
list of pathnames to files that were loaded by the last call to
:meth:`read()`
"""
return self._filename_list
@classmethod
def get(cls):
"""
Get an instance of this Config class with all the configuration loaded
from default locations. The locations are determined by
Meta.filename_list attribute.
:returns: fresh :class:`Config` instance
"""
self = cls()
self.read(cls.Meta.filename_list)
return self
def get_parser_obj(self):
"""
Get a ConfigParser-like object with the same data.
:returns:
A :class:`PlainBoxConfigParser` object with all of the data copied
from this :class:`Config` object.
Since :class:`PlainBoxConfigParser` is a subclass of
:class:`configparser.ConfigParser` it has a number of useful utility
methods. By using this function one can obtain a ConfigParser-like
object and work with it directly.
"""
parser = PlainBoxConfigParser(allow_no_value=True, delimiters=('=',))
# Write all variables that we know about
for variable in self.Meta.variable_list:
if (not parser.has_section(variable.section)
and variable.section != "DEFAULT"):
parser.add_section(variable.section)
value = variable.__get__(self, self.__class__)
# Skip Unset values; we don't want them serialized as the string 'unset'
if value is not Unset:
parser.set(variable.section, variable.name, str(value))
# Write all sections that we know about
for section in self.Meta.section_list:
if not parser.has_section(section.name):
parser.add_section(section.name)
for name, value in section.__get__(self, self.__class__).items():
parser.set(section.name, name, str(value))
return parser
def read_string(self, string):
"""
Load settings from a string.
:param string:
The full text of INI-like configuration to parse and apply
This method parses the string as an INI file using
:class:`PlainBoxConfigParser` (a simple ConfigParser subclass that
respects the case of key names).
If any problem is detected during parsing (e.g. syntax errors) those
are captured and added to the :attr:`Config.problem_list`.
After parsing the string each :class:`Variable` and :class:`Section`
defined in the :class:`Config` class is assigned with the data from the
configuration data.
Any variables that cannot be assigned and raise
:class:`ValidationError` are ignored but the list of problems is saved.
All unused configuration (extra variables that are not defined as
either Variable or Section class) is silently ignored.
.. note::
This method resets :attr:`_problem_list`
and :attr:`_filename_list`.
"""
parser = PlainBoxConfigParser(allow_no_value=True, delimiters=('=',))
# Reset filename list and problem list
self._filename_list = []
self._problem_list = []
# Try loading all of the config files
try:
parser.read_string(string)
except configparser.Error as exc:
self._problem_list.append(exc)
# Try to validate everything
try:
self._read_commit(parser)
except ValidationError as exc:
self._problem_list.append(exc)
def write(self, stream):
"""
Write configuration data to a stream.
:param stream:
a file-like object that can be written to.
This method recreates the content of all the configuration variables in
a manner that can be subsequently read back.
"""
self.get_parser_obj().write(stream)
def read(self, filename_list):
"""
Load and merge settings from many files.
This method tries to open each file from the list of filenames, parse
it as an INI file using :class:`PlainBoxConfigParser` (a simple
ConfigParser subclass that respects the case of key names). The list of
files actually accessed is saved and made available as
:attr:`Config.filename_list`.
If any problem is detected during parsing (e.g. syntax errors) those
are captured and added to the :attr:`Config.problem_list`.
After all files are loaded each :class:`Variable` and :class:`Section`
defined in the :class:`Config` class is assigned with the data from the
merged configuration data.
Any variables that cannot be assigned and raise
:class:`ValidationError` are ignored but the list of problems is saved.
All unused configuration (extra variables that are not defined as
either Variable or Section class) is silently ignored.
.. note::
This method resets :attr:`_problem_list`
and :attr:`_filename_list`.
"""
parser = PlainBoxConfigParser(allow_no_value=True, delimiters=('=',))
# Reset filename list and problem list
self._filename_list = []
self._problem_list = []
# Try loading all of the config files
try:
logger.info(_("Loading configuration from %s"), filename_list)
self._filename_list = parser.read(filename_list)
except configparser.Error as exc:
self._problem_list.append(exc)
# Try to validate everything
try:
self._read_commit(parser)
except ValidationError as exc:
self._problem_list.append(exc)
def _read_commit(self, parser):
# Pick a reader function appropriate for the kind of variable
reader_fn = {
str: parser.get,
bool: parser.getboolean,
int: parser.getint,
float: parser.getfloat
}
# Load all variables that we know about
for variable in self.Meta.variable_list:
# Access the variable in the configuration file
try:
value = reader_fn[variable.kind](
variable.section, variable.name)
except (configparser.NoSectionError, configparser.NoOptionError):
value = variable.default
# Try to assign it
try:
variable.__set__(self, value)
except ValidationError as exc:
self.problem_list.append(exc)
# Load all sections that we know about
for section in self.Meta.section_list:
try:
# Access the section in the configuration file
value = dict(parser.items(section.name))
except configparser.NoSectionError:
continue
# Assign it
section.__set__(self, value)
# Validate the whole configuration object
self.validate_whole()
def _get_variable(self, name):
"""
Internal method called by :meth:`Variable.__get__`
"""
return self._var[name]
def _set_variable(self, name, value):
"""
Internal method called by :meth:`Variable.__set__`
"""
self._var[name] = value
def _del_variable(self, name):
"""
Internal method called by :meth:`Variable.__delete__`
"""
del self._var[name]
def _get_section(self, name):
"""
Internal method called by :meth:`Section.__get__`
"""
return self._section[name]
def _set_section(self, name, value):
"""
Internal method called by :meth:`Section.__set__`
"""
self._section[name] = value
def _del_section(self, name):
"""
Internal method called by :meth:`Section.__delete__`
"""
del self._section[name]
def validate_whole(self):
"""
Validate the whole configuration object.
This method may be overridden to provide whole-configuration
validation. It is especially useful in cases when a pair or more of
variables need to be validated together to be meaningful.
The default implementation does nothing. Other implementations may
raise :class:`ValidationError`.
"""
class ValidationError(ValueError):
"""
Exception raised when configuration variables fail to validate
"""
def __init__(self, variable, new_value, message):
self.variable = variable
self.new_value = new_value
self.message = message
def __str__(self):
return self.message
class IValidator(metaclass=ABCMeta):
"""
An interface for variable value validators
"""
@abstractmethod
def __call__(self, variable, new_value):
"""
Check if a value is appropriate for the variable.
:returns: None if everything is okay
:returns: string that describes the problem if the value cannot be used
"""
def KindValidator(variable, new_value):
"""
A validator ensuring that values match the "kind" of the variable.
"""
if not isinstance(new_value, variable.kind):
return {
bool: _("expected a boolean"),
int: _("expected an integer"),
float: _("expected a floating point number"),
str: _("expected a string"),
}[variable.kind]
class PatternValidator(IValidator):
"""
A validator ensuring that values match a given pattern
"""
def __init__(self, pattern_text):
self.pattern_text = pattern_text
self.pattern = re.compile(pattern_text)
def __call__(self, variable, new_value):
if not self.pattern.match(new_value):
return _("does not match pattern: {!r}").format(self.pattern_text)
def __eq__(self, other):
if isinstance(other, PatternValidator):
return self.pattern_text == other.pattern_text
else:
return False
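A validator's contract is to return ``None`` on success and a message string describing the problem on failure. A standalone sketch of the pattern check (function form rather than the class above):

```python
import re

# Standalone sketch of the PatternValidator contract: return None when
# the value matches, or a message describing the problem when it does not.
def pattern_validator(pattern_text, value):
    if re.compile(pattern_text).match(value) is None:
        return "does not match pattern: {!r}".format(pattern_text)
    return None


assert pattern_validator(r"[0-9]+", "123") is None       # match -> no error
assert pattern_validator(r"[0-9]+", "abc") is not None   # mismatch -> message
```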
class ChoiceValidator(IValidator):
"""
A validator ensuring that values are in a given set
"""
def __init__(self, choice_list):
self.choice_list = choice_list
def __call__(self, variable, new_value):
if new_value not in self.choice_list:
return _("must be one of {}").format(", ".join(self.choice_list))
def __eq__(self, other):
if isinstance(other, ChoiceValidator):
return self.choice_list == other.choice_list
else:
return False
@understands_Unset
class NotUnsetValidator(IValidator):
"""
A validator ensuring that values are set
.. note::
Due to the way validators work this validator *must* be the first
one in any validator list in order to work. Otherwise the implicit
:func:`KindValidator` will take precedence and the check will most
likely fail as None or Unset are not of the expected type of the
configuration variable being worked with.
"""
def __init__(self, msg=None):
if msg is None:
msg = _("must be set to something")
self.msg = msg
def __call__(self, variable, new_value):
if new_value is Unset:
return self.msg
def __eq__(self, other):
if isinstance(other, NotUnsetValidator):
return self.msg == other.msg
else:
return False
class NotEmptyValidator(IValidator):
"""
A validator ensuring that values aren't empty
"""
def __init__(self, msg=None):
if msg is None:
msg = _("cannot be empty")
self.msg = msg
def __call__(self, variable, new_value):
if new_value == "":
return self.msg
def __eq__(self, other):
if isinstance(other, NotEmptyValidator):
return self.msg == other.msg
else:
return False
plainbox-0.25/plainbox/impl/secure/launcher1.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.secure.launcher1` -- plainbox-trusted-launcher-1
====================================================================
"""
import argparse
import copy
import logging
import os
import subprocess
from plainbox.i18n import gettext as _
from plainbox.impl.job import JobDefinition
from plainbox.impl.resource import Resource
from plainbox.impl.unit.template import TemplateUnit
from plainbox.impl.secure.origin import JobOutputTextSource
from plainbox.impl.secure.providers.v1 import all_providers
from plainbox.impl.secure.rfc822 import load_rfc822_records, RFC822SyntaxError
class TrustedLauncher:
"""
Trusted Launcher for v1 jobs.
"""
def __init__(self):
"""
Initialize a new instance of the trusted launcher
"""
self._job_list = []
def add_job_list(self, job_list):
"""
Add jobs to the trusted launcher
"""
self._job_list.extend(job_list)
def find_job(self, checksum):
for job in self._job_list:
if job.checksum == checksum:
return job
else:
raise LookupError(
_("Cannot find job with checksum {}").format(checksum))
def modify_execution_environment(self, target_env):
"""
Modify the job execution environment with a new set of values.
It's mandatory to do it this way to keep the variables automatically set by
pkexec(1) when the org.freedesktop.policykit.exec.allow_gui annotation
is set.
It will allow the trusted launcher to run X11 applications as
another user since the $DISPLAY and $XAUTHORITY environment
variables will be retained.
"""
ptl_env = dict(os.environ)
if target_env:
ptl_env.update(target_env)
return ptl_env
def run_shell_from_job(self, checksum, env):
"""
Run a job with the given checksum.
:param checksum:
The checksum of the job to execute.
:param env:
Environment to execute the job in.
:returns:
The return code of the command
:raises LookupError:
If the checksum does not match any known job
"""
job = self.find_job(checksum)
cmd = [job.shell, '-c', job.command]
return subprocess.call(cmd, env=self.modify_execution_environment(env))
def run_generator_job(self, checksum, env):
"""
Run a job and process its stdout to get job definitions.
:param checksum:
The checksum of the job to execute
:param env:
Environment to execute the job in.
:returns:
A list of job definitions that were processed from the output.
:raises LookupError:
If the checksum does not match any known job
"""
job = self.find_job(checksum)
cmd = [job.shell, '-c', job.command]
output = subprocess.check_output(
cmd, universal_newlines=True,
env=self.modify_execution_environment(env))
job_list = []
source = JobOutputTextSource(job)
try:
record_list = load_rfc822_records(output, source=source)
except RFC822SyntaxError as exc:
logging.error(
_("Syntax error in record generated from %s: %s"), job, exc)
else:
if job.plugin == 'local':
for record in record_list:
job = JobDefinition.from_rfc822_record(record)
job_list.append(job)
elif job.plugin == 'resource':
resource_list = []
for record in record_list:
resource = Resource(record.data)
resource_list.append(resource)
for plugin in all_providers.get_all_plugins():
for u in plugin.plugin_object.unit_list:
if (
isinstance(u, TemplateUnit) and
u.resource_id == job.id
):
logging.info(_("Instantiating unit: %s"), u)
for new_unit in u.instantiate_all(resource_list):
job_list.append(new_unit)
return job_list
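``find_job`` above relies on the ``for``/``else`` idiom: the ``else`` suite runs only when the loop finishes without ``break`` (here, without returning a match). A standalone sketch with hypothetical dict-shaped jobs:

```python
# Standalone sketch of the for/else lookup used by find_job: the else
# branch raises only when no element matched (the loop never returned).
def find_by_checksum(job_list, checksum):
    for job in job_list:
        if job["checksum"] == checksum:
            return job
    else:
        raise LookupError(
            "Cannot find job with checksum {}".format(checksum))


jobs = [{"checksum": "abc", "id": "job-1"}]
assert find_by_checksum(jobs, "abc")["id"] == "job-1"
try:
    find_by_checksum(jobs, "zzz")
except LookupError:
    pass                      # unknown checksum raises, as in find_job
```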
class UpdateAction(argparse.Action):
"""
Argparse action that builds up a dictionary.
This action is similar to the built-in append action but it constructs
a dictionary instead of a list.
"""
def __init__(self, option_strings, dest, nargs=None, const=None,
default=None, type=None, choices=None, required=False,
help=None, metavar=None):
if nargs == 0:
raise ValueError('nargs for append actions must be > 0; if arg '
'strings are not supplying the value to append, '
'the append const action may be more appropriate')
if const is not None and nargs != argparse.OPTIONAL:
raise ValueError(
'nargs must be {!r} to supply const'.format(argparse.OPTIONAL))
super().__init__(
option_strings=option_strings, dest=dest, nargs=nargs, const=const,
default=default, type=type, choices=choices, required=required,
help=help, metavar=metavar)
def __call__(self, parser, namespace, values, option_string=None):
"""
Internal method of argparse.Action
This method is invoked to "apply" the action after seeing all the
values for a given argument. Please refer to argparse source code for
information on how it is used.
"""
items = copy.copy(argparse._ensure_value(namespace, self.dest, {}))
for value in values:
try:
k, v = value.split('=', 1)
except ValueError:
raise argparse.ArgumentError(self, "expected NAME=VALUE")
else:
items[k] = v
setattr(namespace, self.dest, items)
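A standalone sketch of the same dict-building behaviour, substituting ``getattr(...) or {}`` for the private ``argparse._ensure_value`` helper used above (which newer Python versions no longer ship). The option names here are illustrative:

```python
import argparse
import copy


# Standalone sketch of UpdateAction: accumulate NAME=VALUE arguments
# into a dict. getattr(...) or {} replaces the private _ensure_value().
class DictAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        items = copy.copy(getattr(namespace, self.dest, None) or {})
        for value in values:
            try:
                k, v = value.split('=', 1)
            except ValueError:
                raise argparse.ArgumentError(self, "expected NAME=VALUE")
            items[k] = v
        setattr(namespace, self.dest, items)


parser = argparse.ArgumentParser()
parser.add_argument('-T', dest='target_env', nargs='+', action=DictAction)
ns = parser.parse_args(['-T', 'DISPLAY=:0', 'LANG=C'])
assert ns.target_env == {'DISPLAY': ':0', 'LANG': 'C'}
```

Repeated ``-T`` options merge into the same dictionary, mirroring how the built-in append action extends a list.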
def get_parser_for_sphinx():
parser = argparse.ArgumentParser(
prog="plainbox-trusted-launcher-1",
description=_("Security elevation mechanism for plainbox"))
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument(
'-w', '--warmup',
action='store_true',
# TRANSLATORS: don't translate pkexec(1)
help=_('return immediately, only useful when used with pkexec(1)'))
group.add_argument(
'-t', '--target',
metavar=_('CHECKSUM'),
help=_('run a job with this checksum'))
group = parser.add_argument_group(_("target job specification"))
group.add_argument(
'-T', '--target-environment', metavar=_('NAME=VALUE'),
dest='target_env',
nargs='+',
action=UpdateAction,
help=_('environment passed to the target job'))
group = parser.add_argument_group(title=_("generator job specification"))
group.add_argument(
'-g', '--generator',
metavar=_('CHECKSUM'),
# TRANSLATORS: don't translate 'local' in the sentence below. It
# denotes a special type of job, not its location.
help=_('also run a job with this checksum (assuming it is a local'
' job)'))
group.add_argument(
'-G', '--generator-environment',
dest='generator_env',
nargs='+',
metavar=_('NAME=VALUE'),
action=UpdateAction,
help=_('environment passed to the generator job'))
return parser
def main(argv=None):
"""
Entry point for the plainbox-trusted-launcher-1
:param argv:
Command line arguments to parse. If None (default) then sys.argv is
used instead.
:returns:
The return code of the job that was selected with the --target argument
or zero if the --warmup argument was specified.
:raises:
SystemExit if --target or --generator point to unknown jobs.
The trusted launcher is a sudo-like program that can grant unprivileged
users permission to run something as root, restricted to executing shell
snippets embedded inside job definitions offered by v1 plainbox
providers.
As a security measure the trusted launcher only considers job providers
listed in the system-wide directory since one needs to be root to add
additional definitions there anyway.
Unlike the rest of plainbox, the trusted launcher does not produce job
results, instead it just literally executes the shell snippet and returns
stdout/stderr unaffected to the invoking process. The exception to this
rule is the way --via argument is handled, where the trusted launcher needs
to capture stdout to interpret that as job definitions.
Unlike sudo, the trusted launcher is not a setuid program and cannot grant
root access by itself. Instead it relies on PolicyKit, specifically on
pkexec(1) along with an appropriate policy file, to grant users a way
to run the trusted launcher as root (or another user).
"""
parser = get_parser_for_sphinx()
ns = parser.parse_args(argv)
# Just quit if warming up
if ns.warmup:
return 0
launcher = TrustedLauncher()
# Siphon all jobs from all secure providers otherwise
all_providers.load()
for plugin in all_providers.get_all_plugins():
launcher.add_job_list(plugin.plugin_object.job_list)
# Run the local job and feed the result back to the launcher
if ns.generator:
try:
generated_job_list = launcher.run_generator_job(
ns.generator, ns.generator_env)
launcher.add_job_list(generated_job_list)
except LookupError as exc:
raise SystemExit(str(exc))
# Run the target job and return the result code
try:
return launcher.run_shell_from_job(ns.target, ns.target_env)
except LookupError as exc:
raise SystemExit(str(exc))
if __name__ == "__main__":
main()
plainbox-0.25/plainbox/impl/secure/qualifiers.py
# This file is part of Checkbox.
#
# Copyright 2013, 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.secure.qualifiers` -- Job Qualifiers
========================================================
Qualifiers are callable objects that can be used to 'match' a job definition to
some set of rules.
"""
import abc
import functools
import itertools
import logging
import operator
import os
import re
import sre_constants
from plainbox.abc import IJobQualifier
from plainbox.i18n import gettext as _
from plainbox.impl import pod
from plainbox.impl.secure.origin import FileTextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.origin import UnknownTextSource
_logger = logging.getLogger("plainbox.secure.qualifiers")
class SimpleQualifier(IJobQualifier):
"""
Abstract base class that implements common features of simple (non
composite) qualifiers. This allows the concrete subclasses below to
share some code.
"""
def __init__(self, origin, inclusive=True):
if origin is not None and not isinstance(origin, Origin):
raise TypeError(_('argument {!a}, expected {}, got {}').format(
'origin', Origin, type(origin)))
if not isinstance(inclusive, bool):
raise TypeError(_('argument {!a}, expected {}, got {}').format(
'inclusive', bool, type(inclusive)))
self._inclusive = inclusive
self._origin = origin
@property
def inclusive(self):
return self._inclusive
@property
def is_primitive(self):
return True
def designates(self, job):
return self.get_vote(job) == self.VOTE_INCLUDE
@abc.abstractmethod
def get_simple_match(self, job):
"""
Get a simple yes-or-no boolean answer if the given job matches the
simple aspect of this qualifier. This method should be overridden by
concrete subclasses.
"""
def get_vote(self, job):
"""
Get one of the ``VOTE_IGNORE``, ``VOTE_INCLUDE``, ``VOTE_EXCLUDE``
votes that this qualifier associates with the specified job.
:param job:
A IJobDefinition instance that is to be visited
:returns:
* ``VOTE_INCLUDE`` if the job matches the simple qualifier concept
embedded into this qualifier and this qualifier is **inclusive**.
* ``VOTE_EXCLUDE`` if the job matches the simple qualifier concept
embedded into this qualifier and this qualifier is **not
inclusive**.
* ``VOTE_IGNORE`` otherwise.
.. versionadded: 0.5
"""
if self.get_simple_match(job):
if self.inclusive:
return self.VOTE_INCLUDE
else:
return self.VOTE_EXCLUDE
else:
return self.VOTE_IGNORE
def get_primitive_qualifiers(self):
"""
Return a list of primitives that constitute this qualifier.
:returns:
A list of IJobQualifier objects that each is the smallest,
indivisible entity. Here it just returns a list of one element,
itself.
.. versionadded: 0.5
"""
return [self]
@property
def origin(self):
"""
Origin of this qualifier
This property can be used to trace the origin of a qualifier back to
its definition point.
"""
return self._origin
class RegExpJobQualifier(SimpleQualifier):
"""
A JobQualifier that designates jobs by matching their id to a regular
expression
"""
def __init__(self, pattern, origin, inclusive=True):
"""
Initialize a new RegExpJobQualifier with the specified pattern.
"""
super().__init__(origin, inclusive)
try:
self._pattern = re.compile(pattern)
except sre_constants.error as exc:
assert len(exc.args) == 1
# XXX: This is a bit crazy but this lets us have identical error
# messages across python3.2 all the way to 3.5. I really really
# wish there was a better way of fixing this.
exc.args = (re.sub(r" at position \d+", "", exc.args[0]), )
raise exc
self._pattern_text = pattern
def get_simple_match(self, job):
"""
Check if the given job matches this qualifier.
This method should not be called directly, it is an implementation
detail of SimpleQualifier class.
"""
return self._pattern.match(job.id) is not None
@property
def pattern_text(self):
"""
text of the regular expression embedded in this qualifier
"""
return self._pattern_text
def __repr__(self):
return "{0}({1!r}, inclusive={2})".format(
self.__class__.__name__, self._pattern_text, self._inclusive)
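The match logic reduces to ``re.match`` against the job id, which anchors the pattern at the start of the string. A standalone sketch with a hypothetical minimal job object:

```python
import re


# Standalone sketch of RegExpJobQualifier's matching rule: a job is
# designated when re.match on its id succeeds (anchored at the start).
class FakeJob:
    def __init__(self, id):
        self.id = id


def regexp_designates(pattern, job):
    return re.compile(pattern).match(job.id) is not None


assert regexp_designates(r"usb/.*", FakeJob("usb/storage"))
assert not regexp_designates(r"usb/.*", FakeJob("cpu/scaling"))
```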
class JobIdQualifier(SimpleQualifier):
"""
A JobQualifier that designates a single job with a particular id
"""
def __init__(self, id, origin, inclusive=True):
super().__init__(origin, inclusive)
self._id = id
@property
def id(self):
"""
identifier to match
"""
return self._id
def get_simple_match(self, job):
"""
Check if the given job matches this qualifier.
This method should not be called directly, it is an implementation
detail of SimpleQualifier class.
"""
return self._id == job.id
def __repr__(self):
return "{0}({1!r}, inclusive={2})".format(
self.__class__.__name__, self._id, self._inclusive)
class NonLocalJobQualifier(SimpleQualifier):
"""
A JobQualifier that designates only non local jobs
"""
def __init__(self, origin, inclusive=True):
super().__init__(origin, inclusive)
def get_simple_match(self, job):
"""
Check if the given job matches this qualifier.
This method should not be called directly, it is an implementation
detail of SimpleQualifier class.
"""
return job.plugin != 'local'
def __repr__(self):
return "{0}(inclusive={1})".format(
self.__class__.__name__, self._inclusive)
class IMatcher(metaclass=abc.ABCMeta):
"""
Interface for objects that perform some kind of comparison on a value
"""
@abc.abstractmethod
def match(self, value):
"""
Match (or not) specified value
:param value:
value to match
:returns:
True if it matched, False otherwise
"""
@functools.total_ordering
class OperatorMatcher(IMatcher):
"""
A matcher that applies a binary operator to the value
"""
def __init__(self, op, value):
self._op = op
self._value = value
@property
def op(self):
"""
the operator to use
The operator is typically one of the functions from the ``operator``
module. For example, ``operator.eq`` corresponds to the ``==`` Python operator.
"""
return self._op
@property
def value(self):
"""
The right-hand-side value to apply to the operator
The left-hand-side is the value that is passed to :meth:`match()`
"""
return self._value
def match(self, value):
return self._op(self._value, value)
def __repr__(self):
return "{0}({1!r}, {2!r})".format(
self.__class__.__name__, self._op, self._value)
def __eq__(self, other):
if isinstance(other, OperatorMatcher):
return self.op == other.op and self.value == other.value
else:
return NotImplemented
def __lt__(self, other):
if isinstance(other, OperatorMatcher):
if self.op < other.op:
return True
if self.value < other.value:
return True
return False
else:
return NotImplemented
class PatternMatcher(IMatcher):
"""
A matcher that compares values by regular expression pattern
"""
def __init__(self, pattern):
self._pattern_text = pattern
self._pattern = re.compile(pattern)
@property
def pattern_text(self):
return self._pattern_text
def match(self, value):
return self._pattern.match(value) is not None
def __repr__(self):
return "{0}({1!r})".format(
self.__class__.__name__, self._pattern_text)
def __eq__(self, other):
if isinstance(other, PatternMatcher):
return self.pattern_text == other.pattern_text
else:
return NotImplemented
def __lt__(self, other):
if isinstance(other, PatternMatcher):
return self.pattern_text < other.pattern_text
else:
return NotImplemented
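The two matcher classes above can be illustrated with tiny self-contained stand-ins. These mirror the matching behaviour only; they are not the plainbox classes themselves:

```python
import operator
import re

class MiniOperatorMatcher:
    """Apply a binary operator with the stored value on the left-hand side."""

    def __init__(self, op, value):
        self._op = op
        self._value = value

    def match(self, value):
        # Note the argument order: the stored value is the left-hand side.
        return self._op(self._value, value)

class MiniPatternMatcher:
    """Compare values against a regular expression pattern."""

    def __init__(self, pattern):
        self._pattern = re.compile(pattern)

    def match(self, value):
        return self._pattern.match(value) is not None

eq_foo = MiniOperatorMatcher(operator.eq, "foo")
print(eq_foo.match("foo"))   # True
print(eq_foo.match("bar"))   # False

starts_net = MiniPatternMatcher("^net")
print(starts_net.match("networking/ping"))  # True
```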
class FieldQualifier(SimpleQualifier):
"""
A SimpleQualifier that uses matchers to compare particular fields
"""
def __init__(self, field, matcher, origin, inclusive=True):
"""
Initialize a new FieldQualifier with the specified field, matcher and
inclusive flag
:param field:
Name of the JobDefinition field to use
:param matcher:
A IMatcher object
:param inclusive:
Inclusive selection flag (default: True)
"""
super().__init__(origin, inclusive)
self._field = field
self._matcher = matcher
@property
def field(self):
"""
Name of the field to match
"""
return self._field
@property
def matcher(self):
"""
The IMatcher-implementing object to use to check for the match
"""
return self._matcher
def get_simple_match(self, job):
"""
Check if the given job matches this qualifier.
This method should not be called directly, it is an implementation
detail of SimpleQualifier class.
"""
field_value = getattr(job, str(self._field))
return self._matcher.match(field_value)
def __repr__(self):
return "{0}({1!r}, {2!r}, inclusive={3})".format(
self.__class__.__name__, self._field, self._matcher,
self._inclusive)
class CompositeQualifier(pod.POD):
"""
A JobQualifier that qualifies jobs matching any of the inclusive qualifiers
while not matching any of the exclusive qualifiers
"""
qualifier_list = pod.Field("qualifier_list", list, pod.MANDATORY)
@property
def is_primitive(self):
return False
def designates(self, job):
return self.get_vote(job) == IJobQualifier.VOTE_INCLUDE
def get_vote(self, job):
"""
Get one of the ``VOTE_IGNORE``, ``VOTE_INCLUDE``, ``VOTE_EXCLUDE``
votes that this qualifier associated with the specified job.
:param job:
A IJobDefinition instance that is to be visited
:returns:
* ``VOTE_INCLUDE`` if the job matches at least one qualifier voted
to select it and no qualifiers voted to deselect it.
* ``VOTE_EXCLUDE`` if at least one qualifier voted to deselect it
* ``VOTE_IGNORE`` otherwise or if the list of qualifiers is empty.
.. versionadded:: 0.5
"""
if self.qualifier_list:
return min([
qualifier.get_vote(job)
for qualifier in self.qualifier_list])
else:
return IJobQualifier.VOTE_IGNORE
def get_primitive_qualifiers(self):
return get_flat_primitive_qualifier_list(self.qualifier_list)
@property
def origin(self):
raise NonPrimitiveQualifierOrigin
IJobQualifier.register(CompositeQualifier)
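The ``min()`` trick in CompositeQualifier.get_vote() works because the vote constants are ordered so that exclusion beats inclusion, which beats indifference. A sketch with illustrative constants (the real IJobQualifier values may differ; only the ordering matters):

```python
# Assumed ordering: EXCLUDE < INCLUDE < IGNORE, so that min() over all
# cast votes implements the documented precedence.
VOTE_EXCLUDE, VOTE_INCLUDE, VOTE_IGNORE = -1, 0, 1

def composite_vote(votes):
    # An empty qualifier list casts no opinion at all.
    return min(votes) if votes else VOTE_IGNORE

print(composite_vote([VOTE_IGNORE, VOTE_INCLUDE]) == VOTE_INCLUDE)   # True
print(composite_vote([VOTE_INCLUDE, VOTE_EXCLUDE]) == VOTE_EXCLUDE)  # True
print(composite_vote([]) == VOTE_IGNORE)                             # True
```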
class NonPrimitiveQualifierOrigin(Exception):
"""
Exception raised when IJobQualifier.origin is meaningless as it is being
requested on a non-primitive qualifier such as the CompositeQualifier
"""
# NOTE: using CompositeQualifier seems strange but it's a tested proven
# component so all we have to ensure is that we read the whitelist files
# correctly.
class WhiteList(CompositeQualifier):
"""
A qualifier that understands checkbox whitelist files.
A whitelist file is a plain-text, line-oriented file. Each line represents
a regular expression pattern that can be matched against the id of a job.
The file can contain simple shell-style comments that begin with the pound
or hash sign (#). Those are ignored. Comments can span both a fraction of a
line as well as the whole line.
For historical reasons each pattern has an implicit '^' and '$' prepended
and appended (respectively) to the actual pattern specified in the file.
"""
def __init__(self, pattern_list, name=None, origin=None,
implicit_namespace=None):
"""
Initialize a WhiteList object with the specified list of patterns.
The patterns must already be mangled with '^' and '$'.
"""
self._name = name
self._origin = origin
self._implicit_namespace = implicit_namespace
if implicit_namespace is not None:
# If we have an implicit namespace then transform all the patterns
# without the namespace operator ('::')
namespace_pattern = implicit_namespace.replace('.', '\\.')
def transform_pattern(maybe_partial_id_pattern):
if "::" not in maybe_partial_id_pattern:
return "^{}::{}$".format(
namespace_pattern, maybe_partial_id_pattern[1:-1])
else:
return maybe_partial_id_pattern
qualifier_list = [
RegExpJobQualifier(
transform_pattern(pattern), origin, inclusive=True)
for pattern in pattern_list]
else:
# Otherwise just use the patterns directly
qualifier_list = [
RegExpJobQualifier(pattern, origin, inclusive=True)
for pattern in pattern_list]
super().__init__(qualifier_list)
def __repr__(self):
return "<{} name:{!r}>".format(self.__class__.__name__, self.name)
@property
def name(self):
"""
name of this WhiteList (might be None)
"""
return self._name
@name.setter
def name(self, value):
"""
set a new name for a WhiteList
"""
self._name = value
@property
def origin(self):
"""
origin object associated with this WhiteList (might be None)
"""
return self._origin
@property
def implicit_namespace(self):
"""
namespace used to qualify patterns without explicit namespace
"""
return self._implicit_namespace
@classmethod
def from_file(cls, pathname, implicit_namespace=None):
"""
Load and initialize the WhiteList object from the specified file.
:param pathname:
file to load
:param implicit_namespace:
(optional) implicit namespace for jobs that are using partial
identifiers (all jobs)
:returns:
a fresh WhiteList object
"""
pattern_list, max_lineno = cls._load_patterns(pathname)
name = os.path.splitext(os.path.basename(pathname))[0]
origin = Origin(FileTextSource(pathname), 1, max_lineno)
return cls(pattern_list, name, origin, implicit_namespace)
@classmethod
def from_string(cls, text, *, filename=None, name=None, origin=None,
implicit_namespace=None):
"""
Load and initialize the WhiteList object from the specified string.
:param text:
full text of the whitelist
:param filename:
(optional, keyword-only) filename from which text was read from.
This simulates a call to :meth:`from_file()` which properly
computes the name and origin of the whitelist.
:param name:
(optional) name of the whitelist, only used if filename is not
specified.
:param origin:
(optional) origin of the whitelist, only used if a filename is not
specified. If omitted a default origin value will be constructed
out of UnknownTextSource instance
:param implicit_namespace:
(optional) implicit namespace for jobs that are using partial
identifiers (all jobs)
:returns:
a fresh WhiteList object
The optional filename or a pair of name and origin arguments may be
provided in order to have additional meta-data. This is typically
needed when the :meth:`from_file()` method cannot be used as the caller
already has the full text of the intended file available.
"""
_logger.debug("Loaded whitelist from %r", filename)
pattern_list, max_lineno = cls._parse_patterns(text)
# generate name and origin if filename is provided
if filename is not None:
name = WhiteList.name_from_filename(filename)
origin = Origin(FileTextSource(filename), 1, max_lineno)
else:
# otherwise generate origin if it's not specified
if origin is None:
origin = Origin(UnknownTextSource(), 1, max_lineno)
return cls(pattern_list, name, origin, implicit_namespace)
@classmethod
def name_from_filename(cls, filename):
"""
Compute the name of a whitelist based on the name
of the file it is stored in.
"""
return os.path.splitext(os.path.basename(filename))[0]
@classmethod
def _parse_patterns(cls, text):
"""
Load whitelist patterns from the specified text
:param text:
string of text, including newlines, to parse
:returns:
(pattern_list, lineno) where lineno is the final line number
(1-based) and pattern_list is a list of regular expression strings
parsed from the whitelist.
"""
from plainbox.impl.xparsers import Re
from plainbox.impl.xparsers import Visitor
from plainbox.impl.xparsers import WhiteList
class WhiteListVisitor(Visitor):
def __init__(self):
self.pattern_list = []
self.lineno = 0
def visit_Re_node(self, node: Re):
self.pattern_list.append(r"^{}$".format(node.text.strip()))
self.lineno = max(node.lineno, self.lineno)
return super().generic_visit(node)
visit_ReFixed_node = visit_Re_node
visit_RePattern_node = visit_Re_node
visit_ReErr_node = visit_Re_node
visitor = WhiteListVisitor()
visitor.visit(WhiteList.parse(text))
return visitor.pattern_list, visitor.lineno
@classmethod
def _load_patterns(cls, pathname):
"""
Load whitelist patterns from the specified file
:param pathname:
pathname of the file to load and parse
:returns:
(pattern_list, lineno) where lineno is the final line number
(1-based) and pattern_list is a list of regular expression strings
parsed from the whitelist.
"""
with open(pathname, "rt", encoding="UTF-8") as stream:
return cls._parse_patterns(stream.read())
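The whitelist format described in the class docstring (comments stripped, each pattern mangled with '^' and '$') can be sketched without the xparsers machinery. This is an illustrative re-implementation of the format, not the actual parser:

```python
def parse_whitelist(text):
    """Parse a whitelist-style text into anchored regex pattern strings."""
    patterns = []
    for line in text.splitlines():
        # Strip shell-style comments, whether partial-line or whole-line.
        line = line.split("#", 1)[0].strip()
        if line:
            # Apply the implicit '^' and '$' anchors.
            patterns.append("^{}$".format(line))
    return patterns

text = """
# networking tests
net/ping
net/dns  # name resolution
"""
print(parse_whitelist(text))  # ['^net/ping$', '^net/dns$']
```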
def get_flat_primitive_qualifier_list(qualifier_list):
return list(itertools.chain(*[
qual.get_primitive_qualifiers()
for qual in qualifier_list]))
def select_jobs(job_list, qualifier_list):
"""
Select desired jobs.
:param job_list:
A list of JobDefinition objects
:param qualifier_list:
A list of IJobQualifier objects.
:returns:
A sub-list of JobDefinition objects, selected from job_list.
"""
# Flatten the qualifier list, so that we can see the fine structure of
# composite objects, such as whitelists.
flat_qualifier_list = get_flat_primitive_qualifier_list(qualifier_list)
# Short-circuit if there are no jobs to select. Min is used later and this
# will allow us to assume that the matrix is not empty.
if not flat_qualifier_list:
return []
# Vote matrix, encodes the vote cast by a particular qualifier for a
# particular job. Visually it's a two-dimensional array like this:
#
# ^
# q |
# u | X
# a |
# l | ........
# i |
# f | .
# i | .
# e | .
# r |
# ------------------->
# job
#
# The vertical axis represents qualifiers from the flattened qualifier
# list. The horizontal axis represents jobs from job list. Dots represent
# inclusion, X represents exclusion.
#
# The result of the select_jobs() function is a list of jobs that have at
# least one inclusion and no exclusions. The resulting list is ordered by
# increasing qualifier index.
#
# The algorithm implemented below is composed of two steps.
#
# The first step iterates over the vote matrix (row-major, meaning that we
# visit all columns for each visit of one row) and constructs two
# structures: a set of jobs that got VOTE_INCLUDE and a list of those jobs,
# in the order of discovery. All VOTE_EXCLUDE votes are collected in
# another set.
#
# The second step filters-out all items from the excluded job set from the
# selected job list. For extra efficiency the algorithm operates on
# integers representing the index of a particular job in job_list.
#
# The final complexity is O(N x M) + O(M), where N is the number of
# qualifiers (flattened) and M is the number of jobs. The algorithm assumes
# that set lookup is a O(1) operation which is true enough for python.
#
# A possible optimization would differentiate qualifiers that may select
# more than one job and fall-back to the current implementation while
# short-circuiting qualifiers that may select at most one job with a
# separate set lookup. That would make the algorithm "mostly" linear in the
# common case.
#
# As a separate feature, we might return a list of qualifiers that never
# matched anything. That may be helpful for debugging.
included_list = []
id_to_index_map = {job.id: index for index, job in enumerate(job_list)}
included_set = set()
excluded_set = set()
for qualifier in flat_qualifier_list:
if (isinstance(qualifier, FieldQualifier)
and qualifier.field == 'id'
and isinstance(qualifier.matcher, OperatorMatcher)
and qualifier.matcher.op == operator.eq):
# optimize the super-common case where a qualifier refers to
# a specific job by using the id_to_index_map to instantly
# perform the requested operation on a single job
try:
j_index = id_to_index_map[qualifier.matcher.value]
except KeyError:
# The lookup can fail if the pattern is a constant reference to
# a generated job that doesn't exist yet. To maintain correctness
# we should just ignore it, as it would not match anything yet.
continue
job = job_list[j_index]
vote = qualifier.get_vote(job)
if vote == IJobQualifier.VOTE_INCLUDE:
if j_index in included_set:
continue
included_set.add(j_index)
included_list.append(j_index)
elif vote == IJobQualifier.VOTE_EXCLUDE:
excluded_set.add(j_index)
elif vote == IJobQualifier.VOTE_IGNORE:
pass
else:
for j_index, job in enumerate(job_list):
vote = qualifier.get_vote(job)
if vote == IJobQualifier.VOTE_INCLUDE:
if j_index in included_set:
continue
included_set.add(j_index)
included_list.append(j_index)
elif vote == IJobQualifier.VOTE_EXCLUDE:
excluded_set.add(j_index)
elif vote == IJobQualifier.VOTE_IGNORE:
pass
return [job_list[index] for index in included_list
if index not in excluded_set]
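The two-step vote-matrix algorithm described in the comments above can be sketched standalone with minimal job and qualifier stand-ins (illustrative only; the vote constants are assumptions chosen so that the precedence works out):

```python
import re
from collections import namedtuple

Job = namedtuple("Job", "id")

# Illustrative vote constants: exclusion beats inclusion beats indifference.
VOTE_EXCLUDE, VOTE_INCLUDE, VOTE_IGNORE = -1, 0, 1

def vote(pattern, inclusive, job):
    if re.match(pattern, job.id):
        return VOTE_INCLUDE if inclusive else VOTE_EXCLUDE
    return VOTE_IGNORE

def mini_select(jobs, qualifiers):
    # Step 1: walk the vote matrix row-major, recording inclusions in
    # discovery order and exclusions in a set.
    included, included_set, excluded = [], set(), set()
    for pattern, inclusive in qualifiers:
        for index, job in enumerate(jobs):
            v = vote(pattern, inclusive, job)
            if v == VOTE_INCLUDE and index not in included_set:
                included_set.add(index)
                included.append(index)
            elif v == VOTE_EXCLUDE:
                excluded.add(index)
    # Step 2: filter out everything that got at least one exclusion.
    return [jobs[i] for i in included if i not in excluded]

jobs = [Job("net/ping"), Job("net/dns"), Job("disk/smart")]
picked = mini_select(jobs, [("^net/", True), ("^net/dns$", False)])
print([job.id for job in picked])  # ['net/ping']
```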
# plainbox-0.25/plainbox/impl/secure/test_origin.py
# This file is part of Checkbox.
#
# Copyright 2013-2014 Canonical Ltd.
# Written by:
# Sylvain Pineau
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.secure.test_origin
================================
Test definitions for plainbox.impl.secure.origin module
"""
from unittest import TestCase
import os
from plainbox.impl.secure.origin import CommandLineTextSource
from plainbox.impl.secure.origin import FileTextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.origin import PythonFileTextSource
from plainbox.impl.secure.origin import UnknownTextSource
class UnknownTextSourceTests(TestCase):
"""
Tests for UnknownTextSource class
"""
def setUp(self):
self.src = UnknownTextSource()
def test_str(self):
"""
verify how UnknownTextSource.__str__() works
"""
self.assertEqual(str(self.src), "???")
def test_repr(self):
"""
verify how UnknownTextSource.__repr__() works
"""
self.assertEqual(repr(self.src), "UnknownTextSource()")
def test_eq(self):
"""
verify instances of UnknownTextSource are all equal to each other
but not equal to any other object
"""
other_src = UnknownTextSource()
self.assertTrue(self.src == other_src)
self.assertFalse(self.src == "???")
def test_eq_others(self):
"""
verify instances of UnknownTextSource are unequal to instances of other
classes
"""
self.assertTrue(self.src != object())
self.assertFalse(self.src == object())
def test_gt(self):
"""
verify that instances of UnknownTextSource are not ordered
"""
other_src = UnknownTextSource()
self.assertFalse(self.src < other_src)
self.assertFalse(other_src < self.src)
def test_gt_others(self):
"""
verify that instances of UnknownTextSource are not comparable to other
objects
"""
with self.assertRaises(TypeError):
self.src < object()
with self.assertRaises(TypeError):
object() < self.src
class FileTextSourceTests(TestCase):
"""
Tests for FileTextSource class
"""
_FILENAME = "filename"
_CLS = FileTextSource
def setUp(self):
self.src = self._CLS(self._FILENAME)
def test_filename(self):
"""
verify that FileTextSource.filename works
"""
self.assertEqual(self._FILENAME, self.src.filename)
def test_str(self):
"""
verify that FileTextSource.__str__() works
"""
self.assertEqual(str(self.src), self._FILENAME)
def test_repr(self):
"""
verify that FileTextSource.__repr__() works
"""
self.assertEqual(
repr(self.src),
"{}({!r})".format(self._CLS.__name__, self._FILENAME))
def test_eq(self):
"""
verify that FileTextSource compares equal to other instances with the
same filename and unequal to instances with different filenames.
"""
self.assertTrue(self._CLS('foo') == self._CLS('foo'))
self.assertTrue(self._CLS('foo') != self._CLS('bar'))
def test_eq_others(self):
"""
verify instances of FileTextSource are unequal to instances of other
classes
"""
self.assertTrue(self._CLS('foo') != object())
self.assertFalse(self._CLS('foo') == object())
def test_gt(self):
"""
verify that FileTextSource is ordered by filename
"""
self.assertTrue(self._CLS("a") < self._CLS("b") < self._CLS("c"))
self.assertTrue(self._CLS("c") > self._CLS("b") > self._CLS("a"))
def test_gt_others(self):
"""
verify that instances of FileTextSource are not comparable to other
objects
"""
with self.assertRaises(TypeError):
self.src < object()
with self.assertRaises(TypeError):
object() < self.src
def test_relative_to(self):
"""
verify that FileTextSource.relative_to() works
"""
self.assertEqual(
self._CLS("/path/to/file.txt").relative_to("/path/to"),
self._CLS("file.txt"))
class PythonFileTextSourceTests(FileTextSourceTests):
"""
Tests for PythonFileTextSource class
"""
_FILENAME = "filename.py"
_CLS = PythonFileTextSource
class OriginTests(TestCase):
"""
Tests for Origin class
"""
def setUp(self):
self.origin = Origin(FileTextSource("file.txt"), 10, 12)
def test_smoke(self):
"""
verify that all three instance attributes actually work
"""
self.assertEqual(self.origin.source.filename, "file.txt")
self.assertEqual(self.origin.line_start, 10)
self.assertEqual(self.origin.line_end, 12)
def test_repr(self):
"""
verify that Origin.__repr__() works
"""
expected = ("<Origin source:FileTextSource('file.txt') line_start:10 line_end:12>")
observed = repr(self.origin)
self.assertEqual(expected, observed)
def test_str(self):
"""
verify that Origin.__str__() works
"""
expected = "file.txt:10-12"
observed = str(self.origin)
self.assertEqual(expected, observed)
def test_str__single_line(self):
"""
verify that Origin.__str__() behaves differently when the range
describes a single line
"""
expected = "file.txt:15"
observed = str(Origin(FileTextSource("file.txt"), 15, 15))
self.assertEqual(expected, observed)
def test_str__whole_file(self):
"""
verify that Origin.__str__() behaves differently when the range
is empty
"""
expected = "file.txt"
observed = str(Origin(FileTextSource("file.txt")))
self.assertEqual(expected, observed)
def test_eq(self):
"""
verify instances of Origin are all equal to other instances with the
same instance attributes but not equal to instances with different
attributes
"""
origin1 = Origin(
self.origin.source, self.origin.line_start, self.origin.line_end)
origin2 = Origin(
self.origin.source, self.origin.line_start, self.origin.line_end)
self.assertTrue(origin1 == origin2)
origin_other1 = Origin(
self.origin.source, self.origin.line_start + 1,
self.origin.line_end)
self.assertTrue(origin1 != origin_other1)
self.assertFalse(origin1 == origin_other1)
origin_other2 = Origin(
self.origin.source, self.origin.line_start,
self.origin.line_end + 1)
self.assertTrue(origin1 != origin_other2)
self.assertFalse(origin1 == origin_other2)
origin_other3 = Origin(
FileTextSource("unrelated"), self.origin.line_start,
self.origin.line_end)
self.assertTrue(origin1 != origin_other3)
self.assertFalse(origin1 == origin_other3)
def test_eq_other(self):
"""
verify instances of UnknownTextSource are unequal to instances of other
classes
"""
self.assertTrue(self.origin != object())
self.assertFalse(self.origin == object())
def test_gt(self):
"""
verify that Origin instances are ordered by their constituting
components
"""
self.assertTrue(
Origin(FileTextSource('file.txt'), 1, 1) <
Origin(FileTextSource('file.txt'), 1, 2) <
Origin(FileTextSource('file.txt'), 1, 3))
self.assertTrue(
Origin(FileTextSource('file.txt'), 1, 10) <
Origin(FileTextSource('file.txt'), 2, 10) <
Origin(FileTextSource('file.txt'), 3, 10))
self.assertTrue(
Origin(FileTextSource('file1.txt'), 1, 10) <
Origin(FileTextSource('file2.txt'), 1, 10) <
Origin(FileTextSource('file3.txt'), 1, 10))
def test_gt_other(self):
"""
verify that Origin instances are not comparable to other objects
"""
with self.assertRaises(TypeError):
self.origin < object()
with self.assertRaises(TypeError):
object() < self.origin
def test_origin_caller(self):
"""
verify that Origin.get_caller_origin() uses PythonFileTextSource as the
origin.source attribute.
"""
self.assertIsInstance(
Origin.get_caller_origin().source, PythonFileTextSource)
def test_origin_source_filename_is_correct(self):
"""
verify that make_job() can properly trace the filename of the python
module that called make_job()
"""
# Pass -1 to get_caller_origin() to have filename point at this file
# instead of at whatever ends up calling the test method
self.assertEqual(
os.path.basename(Origin.get_caller_origin(-1).source.filename),
"test_origin.py")
def test_relative_to(self):
"""
verify how Origin.relative_to() works in various situations
"""
# if the source does not have relative_to method, nothing is changed
origin = Origin(UnknownTextSource(), 1, 2)
self.assertIs(origin.relative_to("/some/path"), origin)
# otherwise the source is replaced and a new origin is returned
self.assertEqual(
Origin(
FileTextSource("/some/path/file.txt"), 1, 2
).relative_to("/some/path"),
Origin(FileTextSource("file.txt"), 1, 2))
def test_with_offset(self):
"""
verify how Origin.with_offset() works as expected
"""
origin1 = Origin(UnknownTextSource(), 1, 2)
origin2 = origin1.with_offset(10)
self.assertEqual(origin2.line_start, 11)
self.assertEqual(origin2.line_end, 12)
self.assertIs(origin2.source, origin1.source)
def test_just_line(self):
"""
verify how Origin.just_line() works as expected
"""
origin1 = Origin(UnknownTextSource(), 1, 2)
origin2 = origin1.just_line()
self.assertEqual(origin2.line_start, origin1.line_start)
self.assertEqual(origin2.line_end, origin1.line_start)
self.assertIs(origin2.source, origin1.source)
def test_just_file(self):
"""
verify how Origin.just_file() works as expected
"""
origin1 = Origin(UnknownTextSource(), 1, 2)
origin2 = origin1.just_file()
self.assertEqual(origin2.line_start, None)
self.assertEqual(origin2.line_end, None)
self.assertIs(origin2.source, origin1.source)
class CommandLineTextSourceTests(TestCase):
def test_str(self):
self.assertEqual(
str(CommandLineTextSource("--foo", "value")),
"command line argument --foo='value'")
self.assertEqual(
str(CommandLineTextSource(None, "value")),
"command line argument 'value'")
def test_repr(self):
self.assertEqual(
repr(CommandLineTextSource("--foo", "value")),
"")
def test_relative_to(self):
src = CommandLineTextSource("--foo", "value")
self.assertIs(src.relative_to('path'), src)
def test_eq(self):
src1 = CommandLineTextSource("--foo", "value")
src2 = CommandLineTextSource("--foo", "value")
self.assertEqual(src1, src2)
def test_gt(self):
src1 = CommandLineTextSource("--arg2", "value")
src2 = CommandLineTextSource("--arg1", "value")
self.assertGreater(src1, src2)
# plainbox-0.25/plainbox/impl/secure/__init__.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Sylvain Pineau
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.secure` -- code for external (trusted) launchers
====================================================================
This package keeps all of the plainbox code that is executed as root. It should
be carefully reviewed to ensure that we don't introduce security issues that
could allow unprivileged users to exploit plainbox to run arbitrary commands as
root.
None of the modules in the secure package may import code that is not coming
from either the plainbox secure package or from the standard python library.
"""
# plainbox-0.25/plainbox/impl/secure/origin.py
# This file is part of Checkbox.
#
# Copyright 2012-2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.secure.origin` -- origin objects
====================================================
"""
import functools
import inspect
import os
from plainbox.abc import ITextSource
from plainbox.i18n import gettext as _
from plainbox.impl.symbol import SymbolDef
class OriginMode(SymbolDef):
"""
A symbol definition (which will become an enumeration in the near future)
that describes all the possible "modes" an :class:`Origin` can operate in.
"""
# NOTE: this should be an enumeration
whole_file = 'whole-file'
single_line = 'single-line'
line_range = 'line-range'
@functools.total_ordering
class Origin:
"""
Simple class for tracking where something came from
This class supports "pinpointing" something in a block of text. The block
is described by the source attribute. The actual range is described by
line_start (inclusive) and line_end (exclusive).
:ivar source:
Something that describes where the text came from. Technically it
should implement the :class:`~plainbox.abc.ITextSource` interface.
:ivar line_start:
The number of the line where the record begins. This can be None
when the intent is to cover the whole file. This can also be equal
to line_end (when not None) if the intent is to show a single line.
:ivar line_end:
The number of the line where the record ends
"""
__slots__ = ['source', 'line_start', 'line_end']
def __init__(self, source, line_start=None, line_end=None):
self.source = source
self.line_start = line_start
self.line_end = line_end
def mode(self):
"""
Compute the "mode" of this origin instance.
:returns:
:attr:`OriginMode.whole_file`, :attr:`OriginMode.single_line`
or :attr:`OriginMode.line_range`.
The mode tells if this instance is describing the whole file,
a range of lines or just a single line. It is mostly used internally
by the implementation.
"""
if self.line_start is None and self.line_end is None:
return OriginMode.whole_file
elif self.line_start == self.line_end:
return OriginMode.single_line
else:
return OriginMode.line_range
def __repr__(self):
return "<{} source:{!r} line_start:{} line_end:{}>".format(
self.__class__.__name__,
self.source, self.line_start, self.line_end)
def __str__(self):
mode = self.mode()
if mode is OriginMode.whole_file:
return str(self.source)
elif mode is OriginMode.single_line:
return "{}:{}".format(self.source, self.line_start)
elif mode is OriginMode.line_range:
return "{}:{}-{}".format(
self.source, self.line_start, self.line_end)
else:
raise NotImplementedError
def relative_to(self, base_dir):
"""
Create an Origin with source relative to the specified base directory.
:param base_dir:
A base directory name
:returns:
A new Origin with source replaced by the result of calling
relative_to(base_dir) on the current source *iff* the current
source has that method, self otherwise.
This method is useful for obtaining user friendly Origin objects that
have short, understandable filenames.
"""
relative_source = self.source.relative_to(base_dir)
if relative_source is not self.source:
return Origin(relative_source, self.line_start, self.line_end)
else:
return self
def with_offset(self, offset):
"""
Create a new Origin by adding an offset of a specific number of lines
:param offset:
Number of lines to add (or subtract)
:returns:
A new Origin object
"""
mode = self.mode()
if mode is OriginMode.whole_file:
return self
elif mode is OriginMode.single_line or mode is OriginMode.line_range:
return Origin(self.source,
self.line_start + offset, self.line_end + offset)
else:
raise NotImplementedError
def just_line(self):
"""
Create a new Origin that points to the start line
:returns:
A new Origin with the end_line equal to start_line.
This effectively makes the origin describe a single line.
"""
return Origin(self.source, self.line_start, self.line_start)
def just_file(self):
"""
Create a new Origin that points to the whole file
:returns:
A new Origin with line_end and line_start both set to None.
"""
return Origin(self.source)
def __eq__(self, other):
if isinstance(other, Origin):
return ((self.source, self.line_start, self.line_end) ==
(other.source, other.line_start, other.line_end))
else:
return NotImplemented
def __gt__(self, other):
if isinstance(other, Origin):
return ((self.source, self.line_start, self.line_end) >
(other.source, other.line_start, other.line_end))
else:
return NotImplemented
@classmethod
def get_caller_origin(cls, back=0):
"""
Create an Origin instance pointing at the call site of this method.
"""
# Create an Origin instance that pinpoints the place that called
# get_caller_origin().
caller_frame, filename, lineno = inspect.stack(0)[2 + back][:3]
try:
source = PythonFileTextSource(filename)
origin = Origin(source, lineno, lineno)
finally:
# Explicitly delete the frame object, this breaks the
# reference cycle and makes this part of the code deterministic
# with regards to the CPython garbage collector.
#
# As recommended by the python documentation:
# http://docs.python.org/3/library/inspect.html#the-interpreter-stack
del caller_frame
return origin
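A standalone sketch of the stack-walking trick used by ``get_caller_origin()`` above; for a plain function the frame of interest is one entry up rather than two, since there is no classmethod wrapper. The return value here is a simple tuple, not a real Origin:

```python
import inspect


def get_caller_origin(back=0):
    """Minimal sketch of the classmethod above, for a plain function:
    stack entry 0 is this function itself, entry 1 is its caller."""
    caller_frame, filename, lineno = inspect.stack(0)[1 + back][:3]
    try:
        return filename, lineno
    finally:
        # Explicitly break the frame reference cycle, as recommended by
        # the inspect documentation.
        del caller_frame


filename, lineno = get_caller_origin()
print(lineno > 0)  # True
```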
@functools.total_ordering
class UnknownTextSource(ITextSource):
"""
A :class:`ITextSource` subclass indicating that the source of text is
unknown.
Instances of this class are constructed by gen_rfc822_records() when no
explicit source is provided and the stream has no name. They serve as
non-None values to prevent constructing :class:`PythonFileTextSource`
with an origin computed from :meth:`Origin.get_caller_origin()`.
"""
def __str__(self):
return _("???")
def __repr__(self):
return "{}()".format(self.__class__.__name__)
def __eq__(self, other):
if isinstance(other, UnknownTextSource):
return True
else:
return False
def __gt__(self, other):
if isinstance(other, UnknownTextSource):
return False
else:
return NotImplemented
def relative_to(self, path):
return self
@functools.total_ordering
class FileTextSource(ITextSource):
"""
A :class:`ITextSource` subclass indicating that text came from a file.
:ivar filename:
name of the file the text came from
"""
def __init__(self, filename):
self.filename = filename
def __str__(self):
return self.filename
def __repr__(self):
return "{}({!r})".format(
self.__class__.__name__, self.filename)
def __eq__(self, other):
if isinstance(other, FileTextSource):
return self.filename == other.filename
else:
return False
def __gt__(self, other):
if isinstance(other, FileTextSource):
return self.filename > other.filename
else:
return NotImplemented
def relative_to(self, base_dir):
"""
Compute a FileTextSource with the filename being a relative path from
the specified base directory.
:param base_dir:
A base directory name
:returns:
A new FileTextSource with filename relative to that base_dir
"""
return self.__class__(os.path.relpath(self.filename, base_dir))
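The ``relative_to()`` computation above is pure string manipulation. A minimal standalone sketch (mirroring the class, without the ITextSource base) shows that the file does not need to exist:

```python
import functools
import os


@functools.total_ordering
class FileTextSource:
    """Standalone sketch of the FileTextSource class above."""

    def __init__(self, filename):
        self.filename = filename

    def __eq__(self, other):
        if isinstance(other, FileTextSource):
            return self.filename == other.filename
        return False

    def __gt__(self, other):
        if isinstance(other, FileTextSource):
            return self.filename > other.filename
        return NotImplemented

    def relative_to(self, base_dir):
        # os.path.relpath() works purely on path strings; no filesystem
        # access happens and the path may point at a nonexistent file.
        return self.__class__(os.path.relpath(self.filename, base_dir))


src = FileTextSource("/home/user/provider/jobs/misc.txt")
print(src.relative_to("/home/user/provider").filename)  # jobs/misc.txt
```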
class PythonFileTextSource(FileTextSource):
"""
A :class:`FileTextSource` subclass indicating the file was a python file.
It adds no behaviour of its own, but in some contexts it is helpful to
differentiate on the type of the source field in the origin of a job
definition record.
:ivar filename:
name of the python file the text came from
"""
@functools.total_ordering
class JobOutputTextSource(ITextSource):
"""
A :class:`ITextSource` subclass indicating that text came from job output.
This class is used by
:meth:`SessionState._gen_rfc822_records_from_io_log()` to allow such
(generated) jobs to be traced back to the job that generated them.
:ivar job:
:class:`plainbox.impl.job.JobDefinition` instance that generated the
text
"""
def __init__(self, job):
self.job = job
def __str__(self):
return str(self.job.id)
def __repr__(self):
return "<{} job:{!r}>".format(self.__class__.__name__, self.job)
def __eq__(self, other):
if isinstance(other, JobOutputTextSource):
return self.job == other.job
return NotImplemented
def __gt__(self, other):
if isinstance(other, JobOutputTextSource):
return self.job > other.job
return NotImplemented
def relative_to(self, base_path):
return self
@functools.total_ordering
class CommandLineTextSource(ITextSource):
"""
A :class:`ITextSource` describing text that originated from arguments to main()
:attr arg_name:
The optional name of the argument that describes the arg_value
:attr arg_value:
The argument that was passed on command line (the actual text)
"""
def __init__(self, arg_name, arg_value):
self.arg_value = arg_value
self.arg_name = arg_name
def __str__(self):
if self.arg_name is not None:
return _("command line argument {}={!a}").format(
self.arg_name, self.arg_value)
else:
return _("command line argument {!a}").format(self.arg_value)
def __repr__(self):
return "<{} arg_name:{!r} arg_value:{!r}>".format(
self.__class__.__name__, self.arg_name, self.arg_value)
def __eq__(self, other):
if isinstance(other, CommandLineTextSource):
return (self.arg_name == other.arg_name
and self.arg_value == other.arg_value)
return NotImplemented
def __gt__(self, other):
if isinstance(other, CommandLineTextSource):
# Compare as a (name, value) pair so that the ordering derived
# by @functools.total_ordering stays consistent.
return ((self.arg_name, self.arg_value) >
(other.arg_name, other.arg_value))
return NotImplemented
def relative_to(self, base_path):
return self
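A standalone sketch of the ordering contract this class relies on: with ``@functools.total_ordering`` only ``__eq__`` and ``__gt__`` are defined and the rest are derived, so ``__gt__`` must compare the ``(arg_name, arg_value)`` pair lexicographically for the derived operators to stay consistent:

```python
import functools


@functools.total_ordering
class CommandLineTextSource:
    """Standalone sketch of the class above."""

    def __init__(self, arg_name, arg_value):
        self.arg_name = arg_name
        self.arg_value = arg_value

    def __eq__(self, other):
        if isinstance(other, CommandLineTextSource):
            return ((self.arg_name, self.arg_value) ==
                    (other.arg_name, other.arg_value))
        return NotImplemented

    def __gt__(self, other):
        if isinstance(other, CommandLineTextSource):
            # Lexicographic pair comparison: arg_name decides first,
            # arg_value is only a tie-breaker.
            return ((self.arg_name, self.arg_value) >
                    (other.arg_name, other.arg_value))
        return NotImplemented


a = CommandLineTextSource("alpha", "zz")
b = CommandLineTextSource("beta", "aa")
print(a < b)  # True: "alpha" sorts before "beta" regardless of values
```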
# plainbox-0.25/plainbox/impl/secure/test_launcher1.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Sylvain Pineau
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see .
"""
plainbox.impl.secure.test_launcher1
===================================
Test definitions for plainbox.impl.secure.launcher1 module
"""
from inspect import cleandoc
from unittest import TestCase
import os
from plainbox.impl.job import JobDefinition
from plainbox.impl.secure.launcher1 import TrustedLauncher
from plainbox.impl.secure.launcher1 import main
from plainbox.impl.secure.origin import JobOutputTextSource
from plainbox.impl.secure.providers.v1 import Provider1
from plainbox.impl.secure.providers.v1 import Provider1PlugIn
from plainbox.impl.secure.providers.v1 import all_providers
from plainbox.impl.secure.providers.v1 import get_secure_PROVIDERPATH_list
from plainbox.impl.secure.rfc822 import RFC822Record
from plainbox.testing_utils.io import TestIO
from plainbox.vendor import mock
class TrustedLauncherTests(TestCase):
"""
Unit tests for the TrustedLauncher class that implements much of
plainbox-trusted-launcher-1
"""
def setUp(self):
self.launcher = TrustedLauncher()
def test_init(self):
self.assertEqual(self.launcher._job_list, [])
def test_add_job_list(self):
job = mock.Mock(spec=JobDefinition, name='job')
self.launcher.add_job_list([job])
# Ensure that the job was added correctly
self.assertEqual(self.launcher._job_list, [job])
def test_find_job_when_it_doesnt_work(self):
job = mock.Mock(spec=JobDefinition, name='job')
self.launcher.add_job_list([job])
with self.assertRaises(LookupError) as boom:
self.launcher.find_job('foo')
# Ensure that LookupError is raised if a job cannot be found
self.assertIsInstance(boom.exception, LookupError)
self.assertEqual(boom.exception.args, (
'Cannot find job with checksum foo',))
def test_find_job_when_it_works(self):
job = mock.Mock(spec=JobDefinition, name='job')
self.launcher.add_job_list([job])
# Ensure that the job was found correctly
self.assertIs(self.launcher.find_job(job.checksum), job)
@mock.patch.dict('os.environ', clear=True)
@mock.patch('subprocess.call')
def test_run_shell_from_job(self, mock_call):
# Create a mock job and add it to the launcher
job = mock.Mock(spec=JobDefinition, name='job')
self.launcher.add_job_list([job])
# Create an environment to pass to the job
env = {'key': 'value'}
# Run the tested method
retval = self.launcher.run_shell_from_job(job.checksum, env)
# Ensure that we run the job command via job.shell
mock_call.assert_called_once_with(
[job.shell, '-c', job.command], env=env)
# Ensure that the return value of subprocess.call() is returned
self.assertEqual(retval, mock_call())
@mock.patch.dict('os.environ', clear=True, DISPLAY='foo')
@mock.patch('subprocess.call')
def test_run_shell_from_job_with_env_preserved(self, mock_call):
# Create a mock job and add it to the launcher
job = mock.Mock(spec=JobDefinition, name='job')
self.launcher.add_job_list([job])
# Create an environment to pass to the job
env = {'key': 'value'}
# Run the tested method
retval = self.launcher.run_shell_from_job(job.checksum, env)
# Ensure that we run the job command via job.shell with a preserved env
expected_env = dict(os.environ)
expected_env.update(env)
mock_call.assert_called_once_with(
[job.shell, '-c', job.command], env=expected_env)
# Ensure that the return value of subprocess.call() is returned
self.assertEqual(retval, mock_call())
@mock.patch.dict('os.environ', clear=True)
@mock.patch('plainbox.impl.job.JobDefinition.from_rfc822_record')
@mock.patch('plainbox.impl.secure.launcher1.load_rfc822_records')
@mock.patch('subprocess.check_output')
def test_run_local_job(self, mock_check_output, mock_load_rfc822_records,
mock_from_rfc822_record):
# Create a mock job and add it to the launcher
job = mock.Mock(spec=JobDefinition, name='job', plugin='local')
self.launcher.add_job_list([job])
# Create two mock rfc822 records
record1 = mock.Mock(spec=RFC822Record, name='record')
record2 = mock.Mock(spec=RFC822Record, name='record')
# Ensure that load_rfc822_records() returns some mocked records
mock_load_rfc822_records.return_value = [record1, record2]
# Run the tested method
job_list = self.launcher.run_generator_job(job.checksum, None)
# Ensure that we run the job command via job.shell
mock_check_output.assert_called_with(
[job.shell, '-c', job.command], env={}, universal_newlines=True)
# Ensure that we parse all of the output
mock_load_rfc822_records.assert_called_with(
mock_check_output(), source=JobOutputTextSource(job))
# Ensure that we return the jobs back
self.assertEqual(len(job_list), 2)
self.assertEqual(job_list[0], mock_from_rfc822_record(record1))
self.assertEqual(job_list[1], mock_from_rfc822_record(record2))
class MainTests(TestCase):
"""
Unit tests for the main() function that implements
plainbox-trusted-launcher-1
"""
def setUp(self):
self.provider = mock.Mock(name='provider', spec=Provider1)
all_providers.fake_plugins([
mock.Mock(
name='plugin',
spec=Provider1PlugIn,
plugin_name='{}/fake.provider'.format(
get_secure_PROVIDERPATH_list()[0]),
plugin_object=self.provider)
])
def test_help(self):
"""
verify what `plainbox-trusted-launcher-1 --help` output looks like
"""
# Run the program with io intercept
with TestIO(combined=True) as io:
with self.assertRaises(SystemExit) as call:
main(['--help'])
self.assertEqual(call.exception.args, (0,))
self.maxDiff = None
expected = """
usage: plainbox-trusted-launcher-1 [-h] (-w | -t CHECKSUM)
[-T NAME=VALUE [NAME=VALUE ...]]
[-g CHECKSUM]
[-G NAME=VALUE [NAME=VALUE ...]]
Security elevation mechanism for plainbox
optional arguments:
-h, --help show this help message and exit
-w, --warmup return immediately, only useful when used with
pkexec(1)
-t CHECKSUM, --target CHECKSUM
run a job with this checksum
target job specification:
-T NAME=VALUE [NAME=VALUE ...], --target-environment NAME=VALUE [NAME=VALUE ...]
environment passed to the target job
generator job specification:
-g CHECKSUM, --generator CHECKSUM
also run a job with this checksum (assuming it is a
local job)
-G NAME=VALUE [NAME=VALUE ...], --generator-environment NAME=VALUE [NAME=VALUE ...]
environment passed to the generator job
"""
self.assertEqual(io.combined, cleandoc(expected) + "\n")
def test_warmup(self):
"""
verify what `plainbox-trusted-launcher-1 --warmup` does
"""
# Run the program with io intercept
with TestIO(combined=True) as io:
retval = main(['--warmup'])
# Ensure that it just returns 0
self.assertEqual(retval, 0)
# Without printing anything
self.assertEqual(io.combined, '')
def test_run_without_args(self):
"""
verify what `plainbox-trusted-launcher-1` does
"""
# Run the program with io intercept
with TestIO(combined=True) as io:
with self.assertRaises(SystemExit) as call:
main([])
self.assertEqual(call.exception.args, (2,))
expected = """
usage: plainbox-trusted-launcher-1 [-h] (-w | -t CHECKSUM)
[-T NAME=VALUE [NAME=VALUE ...]]
[-g CHECKSUM]
[-G NAME=VALUE [NAME=VALUE ...]]
plainbox-trusted-launcher-1: error: one of the arguments -w/--warmup -t/--target is required
"""
self.assertEqual(io.combined, cleandoc(expected) + "\n")
@mock.patch('plainbox.impl.secure.launcher1.TrustedLauncher')
def test_run_valid_hash(self, mock_launcher):
"""
verify what happens when `plainbox-trusted-launcher-1` is called with
--hash that designates an existing job.
"""
# Create a mock job, give it a predictable checksum
job = mock.Mock(name='job', spec=JobDefinition, checksum='1234')
# Ensure this job is enumerated by the provider
self.provider.job_list = [job]
# Run the program with io intercept
with TestIO(combined=True) as io:
retval = main([
'--target=1234', '-T', 'key=value', '-T', 'other=value'])
# Ensure that the job command was invoked
# and that environment was properly parsed and provided
mock_launcher().run_shell_from_job.assert_called_with(
job.checksum, {'key': 'value', 'other': 'value'})
# Ensure that the return code is propagated
self.assertEqual(retval, mock_launcher().run_shell_from_job())
# Ensure that we didn't print anything (we normally do but this is not
# tested here since we mock that part away)
self.assertEqual(io.combined, '')
@mock.patch('plainbox.impl.secure.launcher1.TrustedLauncher')
def test_run_valid_hash_and_via(self, mock_launcher):
"""
verify what happens when `plainbox-trusted-launcher-1` is called with
both --hash and --via that both are okay and designate existing jobs.
"""
# Create a mock (local) job, give it a predictable checksum
local_job = mock.Mock(
name='local_job',
spec=JobDefinition,
checksum='5678')
# Create a mock (target) job, give it a predictable checksum
target_job = mock.Mock(
name='target_job',
spec=JobDefinition,
checksum='1234')
# Ensure this local job is enumerated by the provider
self.provider.job_list = [local_job]
# Ensure that the target job is generated by the local job
mock_launcher.run_local_job.return_value = [target_job]
# Run the program with io intercept
with TestIO(combined=True) as io:
retval = main(['--target=1234', '--generator=5678'])
# Ensure that the local job command was invoked
mock_launcher().run_generator_job.assert_called_with(local_job.checksum, None)
# Ensure that the target job command was invoked
mock_launcher().run_shell_from_job.assert_called_with(
target_job.checksum, None)
# Ensure that the return code is propagated
self.assertEqual(retval, mock_launcher().run_shell_from_job())
# Ensure that we didn't print anything (we normally do but this is not
# tested here since we mock that part away)
self.assertEqual(io.combined, '')
def test_run_invalid_target_checksum(self):
"""
verify what happens when `plainbox-trusted-launcher-1` is called with a
target job checksum that cannot be found in any of the providers.
"""
# Ensure that there are no jobs that the launcher knows about
self.provider.job_list = []
# Run the program with io intercept
with TestIO(combined=True) as io:
with self.assertRaises(SystemExit) as call:
main(['--target=1234'])
# Ensure that the error message contains the checksum of the target job
self.assertEqual(call.exception.args, (
'Cannot find job with checksum 1234',))
self.assertEqual(io.combined, '')
def test_run_invalid_generator_checksum(self):
"""
verify what happens when `plainbox-trusted-launcher-1` is called with a
generator job checksum that cannot be found in any of the providers.
"""
# Ensure that there are no jobs that the launcher knows about
self.provider.job_list = []
# Run the program with io intercept
with TestIO(combined=True) as io:
with self.assertRaises(SystemExit) as call:
main(['--target=1234', '--generator=4567'])
# Ensure that the error message contains the checksum of the via job
self.assertEqual(call.exception.args, (
'Cannot find job with checksum 4567',))
# Ensure that we didn't print anything (we normally do but this is not
# tested here since we mock that part away)
self.assertEqual(io.combined, '')
def test_run_invalid_env(self):
"""
verify what happens when `plainbox-trusted-launcher-1` is called with a
checksum that cannot be found in any of the providers.
"""
# Run the program with io intercept
with TestIO(combined=True) as io:
with self.assertRaises(SystemExit) as call:
main(['--target=1234', '-T', 'blarg'])
# Ensure that we exit with an error code
self.assertEqual(call.exception.args, (2,))
# Ensure that we print a meaningful error message
expected = """
usage: plainbox-trusted-launcher-1 [-h] (-w | -t CHECKSUM)
[-T NAME=VALUE [NAME=VALUE ...]]
[-g CHECKSUM]
[-G NAME=VALUE [NAME=VALUE ...]]
plainbox-trusted-launcher-1: error: argument -T/--target-environment: expected NAME=VALUE
"""
self.assertEqual(io.combined, cleandoc(expected) + "\n")
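The NAME=VALUE parsing exercised by these tests can be sketched with a hypothetical argparse setup (the option names follow the help text above; the `pair` type-callback name is an assumption, not the real implementation):

```python
import argparse


def pair(text):
    """Split a NAME=VALUE token, rejecting tokens without '='."""
    if "=" not in text:
        raise argparse.ArgumentTypeError("expected NAME=VALUE")
    name, _, value = text.partition("=")
    return name, value


parser = argparse.ArgumentParser(prog="launcher-sketch")
parser.add_argument("-T", "--target-environment", metavar="NAME=VALUE",
                    nargs="+", type=pair, default=[],
                    help="environment passed to the target job")

# A single -T flag can carry several NAME=VALUE tokens thanks to nargs="+".
ns = parser.parse_args(["-T", "key=value", "other=value"])
print(dict(ns.target_environment))  # {'key': 'value', 'other': 'value'}
```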
# plainbox-0.25/plainbox/impl/secure/providers/__init__.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see .
"""
:mod:`plainbox.impl.secure.providers` -- providers package
==========================================================
Providers are a mechanism by which PlainBox can enumerate jobs and whitelists.
Currently there are only v1 (as in version one) providers that basically have
to behave like CheckBox itself (mini CheckBox forks, for example).
There is ongoing work and discussion on V2 providers that would have a
lower-level interface and would be able to define new job types, new whitelist
types and generally all the next-gen semantics.
PlainBox does not come with any real provider by default. PlainBox sometimes
creates special dummy providers that have particular data in them for testing.
V1 providers
------------
The first (current) version of PlainBox providers has the following
properties; this is also described by :class:`plainbox.abc.IProvider1`::
* there is a directory with '.txt' or '.txt.in' files with RFC822-encoded
job definitions. The definitions need a particular set of keys to work.
* there is a directory with '.whitelist' files that contain a list (one per
line) of job definitions to execute.
* there is a directory with additional executables (added to PATH)
* there is a directory with additional python3 libraries (added to
PYTHONPATH)
"""
class ProviderNotFound(LookupError):
"""
Exception used to report that a provider cannot be located
"""
# plainbox-0.25/plainbox/impl/secure/providers/v1.py
# This file is part of Checkbox.
#
# Copyright 2013-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see .
"""
:mod:`plainbox.impl.secure.providers.v1` -- Implementation of V1 provider
=========================================================================
"""
import collections
import gettext
import logging
import os
from plainbox.abc import IProvider1
from plainbox.i18n import gettext as _
from plainbox.impl.secure.config import Config, Variable
from plainbox.impl.secure.config import (
ValidationError as ConfigValidationError)
from plainbox.impl.secure.config import IValidator
from plainbox.impl.secure.config import NotEmptyValidator
from plainbox.impl.secure.config import NotUnsetValidator
from plainbox.impl.secure.config import PatternValidator
from plainbox.impl.secure.config import Unset
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.plugins import FsPlugInCollection
from plainbox.impl.secure.plugins import LazyFsPlugInCollection
from plainbox.impl.secure.plugins import PlugIn
from plainbox.impl.secure.plugins import PlugInError
from plainbox.impl.secure.plugins import now
from plainbox.impl.secure.qualifiers import WhiteList
from plainbox.impl.secure.rfc822 import FileTextSource
from plainbox.impl.secure.rfc822 import RFC822SyntaxError
from plainbox.impl.secure.rfc822 import load_rfc822_records
from plainbox.impl.unit import all_units
from plainbox.impl.unit.file import FileRole
from plainbox.impl.unit.file import FileUnit
from plainbox.impl.unit.testplan import TestPlanUnit
from plainbox.impl.validation import Severity
from plainbox.impl.validation import ValidationError
logger = logging.getLogger("plainbox.secure.providers.v1")
class ProviderContentPlugIn(PlugIn):
"""
PlugIn class for loading provider content.
Provider content comes in two shapes and sizes:
- units (of any kind)
- whitelists
The actual logic on how to load everything is encapsulated in
:meth:`inspect()`, though its return value is not so useful.
:attr unit_list:
The list of loaded units
:attr whitelist_list:
The list of loaded whitelists
"""
def __init__(self, filename, text, load_time, provider, *,
validate=False, validation_kwargs=None,
check=True, context=None):
start_time = now()
try:
# Inspect the file
inspect_result = self.inspect(
filename, text, provider,
validate, validation_kwargs or {}, # legacy validation
check, context # modern validation
)
except PlugInError:
raise
except Exception as exc:
raise PlugInError(_("Cannot load {!r}: {}").format(filename, exc))
wrap_time = now() - start_time
super().__init__(filename, inspect_result, load_time, wrap_time)
self.unit_list = []
self.whitelist_list = []
# And load all of the content from that file
self.unit_list.extend(self.discover_units(
inspect_result, filename, text, provider))
self.whitelist_list.extend(self.discover_whitelists(
inspect_result, filename, text, provider))
def inspect(self, filename: str, text: str, provider: "Provider1",
validate: bool, validation_kwargs: "Dict[str, Any]", check:
bool, context: "???") -> "Any":
"""
Interpret and wrap the content of the filename as whatever is
appropriate. The return value of this class becomes the
:meth:`plugin_object`
.. note::
This method must *not* access either :attr:`unit_list` or
:attr:`whitelist_list`. If needed, it can collect its own state in
private instance attributes.
"""
def discover_units(
self, inspect_result: "Any", filename: str, text: str,
provider: "Provider1"
) -> "Iterable[Unit]":
"""
Discover all units that were loaded by this plug-in
:param inspect_result:
whatever was returned by the call to :meth:`inspect()`.
:returns:
an iterable of units.
.. note::
this method is always called *after* :meth:`inspect()`.
"""
yield self.make_file_unit(filename, provider)
def discover_whitelists(
self, inspect_result: "Any", filename: str, text: str,
provider: "Provider1"
) -> "Iterable[WhiteList]":
"""
Discover all whitelists that were loaded by this plug-in
:param inspect_result:
whatever was returned by the call to :meth:`inspect()`.
:returns:
an iterable of whitelists.
.. note::
this method is always called *after* :meth:`inspect()`.
"""
return ()
def make_file_unit(self, filename, provider, role=None, base=None):
if role is None or base is None:
role, base, plugin_cls = provider.classify(filename)
return FileUnit({
'unit': FileUnit.Meta.name,
'path': filename,
'base': base,
'role': role,
}, origin=Origin(FileTextSource(filename)), provider=provider,
virtual=True)
class WhiteListPlugIn(ProviderContentPlugIn):
"""
A specialized :class:`plainbox.impl.secure.plugins.IPlugIn` that loads
:class:`plainbox.impl.secure.qualifiers.WhiteList` instances from a file.
"""
def inspect(self, filename: str, text: str, provider: "Provider1",
validate: bool, validation_kwargs: "Dict[str, Any]", check:
bool, context: "???") -> "WhiteList":
if provider is not None:
implicit_namespace = provider.namespace
else:
implicit_namespace = None
origin = Origin(FileTextSource(filename), 1, text.count('\n'))
return WhiteList.from_string(
text, filename=filename, origin=origin,
implicit_namespace=implicit_namespace)
def discover_units(
self, inspect_result: "WhiteList", filename: str, text: str,
provider: "Provider1"
) -> "Iterable[Unit]":
if provider is not None:
yield self.make_file_unit(
filename, provider,
# NOTE: don't guess what this file is for
role=FileRole.legacy_whitelist, base=provider.whitelists_dir)
yield self.make_test_plan_unit(filename, text, provider)
def discover_whitelists(
self, inspect_result: "WhiteList", filename: str, text: str,
provider: "Provider1"
) -> "Iterable[WhiteList]":
yield inspect_result
def make_test_plan_unit(self, filename, text, provider):
name = os.path.basename(os.path.splitext(filename)[0])
origin = Origin(FileTextSource(filename), 1, text.count('\n'))
field_offset_map = {'include': 0}
return TestPlanUnit({
'unit': TestPlanUnit.Meta.name,
'id': name,
'name': name,
'include': str(text), # delazify content
}, origin=origin, provider=provider, field_offset_map=field_offset_map,
virtual=True)
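The test-plan id generated by ``make_test_plan_unit()`` above is derived purely from the whitelist filename (the example path is hypothetical):

```python
import os

# The id/name of the generated TestPlanUnit is the whitelist filename
# with its directory and ".whitelist" extension stripped, exactly as in
# make_test_plan_unit() above.
filename = "/providers/acme/whitelists/default.whitelist"
name = os.path.basename(os.path.splitext(filename)[0])
print(name)  # default
```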
# NOTE: This version of __init__() exists solely so that provider can
# default to None. This is still used in some places and must be supported.
def __init__(self, filename, text, load_time, provider=None, *,
validate=False, validation_kwargs=None,
check=True, context=None):
super().__init__(
filename, text, load_time, provider, validate=validate,
validation_kwargs=validation_kwargs, check=check, context=context)
# NOTE: this version of plugin_name() is just for legacy code support
@property
def plugin_name(self):
"""
plugin name, the name of the WhiteList
"""
return self.plugin_object.name
class UnitPlugIn(ProviderContentPlugIn):
"""
A specialized :class:`plainbox.impl.secure.plugins.IPlugIn` that loads a
list of :class:`plainbox.impl.unit.Unit` instances from a file.
"""
def inspect(
self, filename: str, text: str, provider: "Provider1", validate: bool,
validation_kwargs: "Dict[str, Any]", check: bool, context: "???"
) -> "Any":
"""
Load all units from their PXU representation.
:param filename:
Name of the file with unit definitions
:param text:
Full text of the file with unit definitions (lazy)
:param provider:
A provider object to which those units belong to
:param validate:
Enable unit validation. Incorrect unit definitions will not be
loaded and will abort the process of loading of the remainder of
the jobs. This is ON by default to prevent broken units from being
used. This is a keyword-only argument.
:param validation_kwargs:
Keyword arguments to pass to the Unit.validate(). Note, this is a
single argument. This is a keyword-only argument.
:param check:
Enable unit checking. Incorrect unit definitions will not be loaded
and will abort the process of loading of the remainder of the jobs.
This is OFF by default to prevent broken units from being used.
This is a keyword-only argument.
:param context:
If checking, use this validation context.
"""
logger.debug(_("Loading units from %r..."), filename)
try:
records = load_rfc822_records(
text, source=FileTextSource(filename))
except RFC822SyntaxError as exc:
raise PlugInError(
_("Cannot load job definitions from {!r}: {}").format(
filename, exc))
unit_list = []
for record in records:
unit_name = record.data.get('unit', 'job')
try:
unit_cls = self._get_unit_cls(unit_name)
except KeyError:
raise PlugInError(
_("Unknown unit type: {!r}").format(unit_name))
try:
unit = unit_cls.from_rfc822_record(record, provider)
except ValueError as exc:
raise PlugInError(
_("Cannot define unit from record {!r}: {}").format(
record, exc))
if check:
for issue in unit.check(context=context, live=True):
if issue.severity is Severity.error:
raise PlugInError(
_("Problem in unit definition, {}").format(issue))
if validate:
try:
unit.validate(**validation_kwargs)
except ValidationError as exc:
raise PlugInError(
_("Problem in unit definition, field {}: {}").format(
exc.field, exc.problem))
unit_list.append(unit)
logger.debug(_("Loaded %r"), unit)
return unit_list
def discover_units(
self, inspect_result: "List[Unit]", filename: str, text: str,
provider: "Provider1"
) -> "Iterable[Unit]":
for unit in inspect_result:
yield unit
yield self.make_file_unit(filename, provider)
def discover_whitelists(
self, inspect_result: "List[Unit]", filename: str, text: str,
provider: "Provider1"
) -> "Iterable[WhiteList]":
for unit in (unit for unit in inspect_result
if unit.Meta.name == 'test plan'):
if unit.include is not None:
yield WhiteList(
unit.include, name=unit.partial_id, origin=unit.origin,
implicit_namespace=unit.provider.namespace)
# NOTE: this version of plugin_object() is just for legacy code support
@property
def plugin_object(self):
return self.unit_list
@staticmethod
def _get_unit_cls(unit_name):
"""
Get a class that implements the specified unit
"""
# TODO: transition to lazy plugin collection
all_units.load()
return all_units.get_by_name(unit_name).plugin_object
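A toy parser for the RFC822-style record format that UnitPlugIn consumes. The real ``load_rfc822_records()`` also tracks origins and supports multi-line values; this sketch handles only the flat ``key: value`` shape, with blank lines separating records:

```python
def parse_records(text):
    """Split RFC822-style text into a list of {key: value} dicts."""
    records, current = [], {}
    for line in text.splitlines():
        if not line.strip():
            # A blank line terminates the current record, if any.
            if current:
                records.append(current)
                current = {}
        elif ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:
        records.append(current)
    return records


text = "unit: job\nid: one\n\nunit: test plan\nid: plan\n"
print(parse_records(text))
# [{'unit': 'job', 'id': 'one'}, {'unit': 'test plan', 'id': 'plan'}]
```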
class ProviderContentEnumerator:
"""
Support class for enumerating provider content.
The only role of this class is to expose a plug-in collection that can
enumerate all of the files reachable from a provider. This collection
is consumed by other parts of provider loading machinery.
Since it is a stock plug in collection it can be easily "mocked" to provide
alternate content without involving modifications of the real file system.
.. note::
This class is automatically instantiated by :class:`Provider1`. The
:meth:`content_collection` property is exposed as
:meth:`Provider1.content_collection`.
"""
def __init__(self, provider: "Provider1"):
"""
Initialize a new provider content enumerator
:param provider:
The associated provider
"""
# NOTE: This code tries to account for two possible layouts. In one
# layout we don't have the base directory and everything is spread
# across the filesystem. This is what a packaged provider looks like.
# The second layout is the old flat layout that is not being used
# anymore. The only modern exception is when working with a provider
# from source. To take that into account, the src_dir and build_bin_dir
# are optional.
if provider.base_dir:
dir_list = [provider.base_dir]
if provider.src_dir:
dir_list.append(provider.src_dir)
# NOTE: in source layout we may also see virtual executables
# that are not loaded yet. Those are listed by
# "$src_dir/EXECUTABLES"
if provider.build_bin_dir:
dir_list.append(provider.build_bin_dir)
if provider.build_mo_dir:
dir_list.append(provider.build_mo_dir)
else:
dir_list = []
if provider.units_dir:
dir_list.append(provider.units_dir)
if provider.jobs_dir:
dir_list.append(provider.jobs_dir)
if provider.data_dir:
dir_list.append(provider.data_dir)
if provider.bin_dir:
dir_list.append(provider.bin_dir)
if provider.locale_dir:
dir_list.append(provider.locale_dir)
if provider.whitelists_dir:
dir_list.append(provider.whitelists_dir)
# Find all the files that belong to a provider
self._content_collection = LazyFsPlugInCollection(
dir_list, ext=None, recursive=True)
@property
def content_collection(self) -> "IPlugInCollection":
"""
A plug-in collection that enumerates all of the files in the provider.
This collection exposes all of the files in a provider. It can also be
mocked for easier testing. It is the only part of the provider codebase
that tries to discover data in a file system.
.. note::
By default the collection is **not** loaded. Make sure to call
``.load()`` to see the actual data. This is, again, a way to
simplify testing and to de-couple it from file-system activity.
"""
return self._content_collection
class ProviderContentClassifier:
"""
Support class for classifying content inside a provider.
The primary role of this class is to come up with the role of each file
inside the provider. That includes all files reachable from any of the
directories that constitute a provider definition. In addition, each file
is associated with a *base directory*. This directory can be used to
re-construct the same provider at a different location or in a different
layout.
The secondary role is to provide a hint on what PlugIn to use to load such
content (as units). In practice the majority of files are loaded with the
:class:`UnitPlugIn` class. Legacy ``.whitelist`` files are loaded with the
:class:`WhiteListPlugIn` class instead. All other files are handled by the
:class:`ProviderContentPlugIn`.
.. note::
This class is automatically instantiated by :class:`Provider1`. The
:meth:`classify` method is exposed as :meth:`Provider1.classify()`.
"""
LEGAL_SET = frozenset(['COPYING', 'COPYING.LESSER', 'LICENSE'])
DOC_SET = frozenset(['README', 'README.md', 'README.rst', 'README.txt'])
def __init__(self, provider: "Provider1"):
"""
Initialize a new provider content classifier
:param provider:
The associated provider
"""
self.provider = provider
self._classify_fn_list = None
self._EXECUTABLES = None
def classify(self, filename: str) -> "Tuple[Symbol, str, type]":
"""
Classify a file belonging to the provider
:param filename:
Full pathname of the file to classify
:returns:
A tuple of information about the file. The first element is the
:class:`FileRole` symbol that describes the role of the file. The
second element is the base path of the file. It can be subtracted
from the actual filename to obtain a relative directory where the
file needs to be located in case of provider re-location. The last,
third element is the plug-in class that can be used to load units
from that file.
:raises ValueError:
If the file cannot be classified. This can only happen if the file
is not in any way related to the provider. Any file (including random
junk) can be classified correctly, as long as it is inside one of the
well-known directories.
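Example (hypothetical paths; assumes a provider whose ``jobs_dir``
is ``/opt/pxu/jobs``)::
>>> provider.classify('/opt/pxu/jobs/foo.pxu')  # doctest: +SKIP
(FileRole.unit_source, '/opt/pxu/jobs', UnitPlugIn)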
"""
for fn in self.classify_fn_list:
result = fn(filename)
if result is not None:
return result
else:
raise ValueError("Unable to classify: {!r}".format(filename))
@property
def classify_fn_list(
self
) -> "List[Callable[[str], Tuple[Symbol, str, type]]]":
"""
List of functions that aid in the classification process.
"""
if self._classify_fn_list is None:
self._classify_fn_list = self._get_classify_fn_list()
return self._classify_fn_list
def _get_classify_fn_list(
self
) -> "List[Callable[[str], Tuple[Symbol, str, type]]]":
"""
Get a list of functions that can classify any file reachable from our
provider. The returned function list depends on which directories are
present.
:returns:
A list of functions ``fn(filename) -> (Symbol, str, plugin_cls)``
where the return value is a tuple (file_role, base_dir, type).
The plugin_cls can be used to find all of the units stored in that
file.
"""
classify_fn_list = []
if self.provider.jobs_dir:
classify_fn_list.append(self._classify_pxu_jobs)
if self.provider.units_dir:
classify_fn_list.append(self._classify_pxu_units)
if self.provider.whitelists_dir:
classify_fn_list.append(self._classify_whitelist)
if self.provider.data_dir:
classify_fn_list.append(self._classify_data)
if self.provider.bin_dir:
classify_fn_list.append(self._classify_exec)
if self.provider.build_bin_dir:
classify_fn_list.append(self._classify_built_exec)
if self.provider.build_mo_dir:
classify_fn_list.append(self._classify_built_i18n)
if self.provider.build_dir:
classify_fn_list.append(self._classify_build)
if self.provider.po_dir:
classify_fn_list.append(self._classify_po)
if self.provider.src_dir:
classify_fn_list.append(self._classify_src)
if self.provider.base_dir:
classify_fn_list.append(self._classify_legal)
classify_fn_list.append(self._classify_docs)
classify_fn_list.append(self._classify_manage_py)
classify_fn_list.append(self._classify_vcs)
# NOTE: this one always has to be last
classify_fn_list.append(self._classify_unknown)
return classify_fn_list
def _get_EXECUTABLES(self):
assert self.provider.src_dir is not None
hint_file = os.path.join(self.provider.src_dir, 'EXECUTABLES')
if os.path.isfile(hint_file):
with open(hint_file, "rt", encoding='UTF-8') as stream:
return frozenset(line.strip() for line in stream)
else:
return frozenset()
@property
def EXECUTABLES(self) -> "Set[str]":
"""
A set of executables that are expected to be built from source.
"""
if self._EXECUTABLES is None:
self._EXECUTABLES = self._get_EXECUTABLES()
return self._EXECUTABLES
def _classify_pxu_jobs(self, filename: str):
""" classify certain files in jobs_dir as unit source"""
if filename.startswith(self.provider.jobs_dir):
ext = os.path.splitext(filename)[1]
if ext in (".txt", ".in", ".pxu"):
return (FileRole.unit_source, self.provider.jobs_dir,
UnitPlugIn)
def _classify_pxu_units(self, filename: str):
""" classify certain files in units_dir as unit source"""
if filename.startswith(self.provider.units_dir):
ext = os.path.splitext(filename)[1]
# TODO: later on just let .pxu files in the units_dir
if ext in (".txt", ".txt.in", ".pxu"):
return (FileRole.unit_source, self.provider.units_dir,
UnitPlugIn)
def _classify_whitelist(self, filename: str):
""" classify .whitelist files in whitelist_dir as whitelist """
if (filename.startswith(self.provider.whitelists_dir) and
filename.endswith(".whitelist")):
return (FileRole.legacy_whitelist, self.provider.whitelists_dir,
WhiteListPlugIn)
def _classify_data(self, filename: str):
""" classify files in data_dir as data """
if filename.startswith(self.provider.data_dir):
return (FileRole.data, self.provider.data_dir,
ProviderContentPlugIn)
def _classify_exec(self, filename: str):
""" classify files in bin_dir as scripts/executables """
if (filename.startswith(self.provider.bin_dir) and
os.access(filename, os.F_OK | os.X_OK)):
with open(filename, 'rb') as stream:
chunk = stream.read(2)
role = FileRole.script if chunk == b'#!' else FileRole.binary
return (role, self.provider.bin_dir, ProviderContentPlugIn)
def _classify_built_exec(self, filename: str):
""" classify files in build_bin_dir as scripts/executables """
if (filename.startswith(self.provider.build_bin_dir) and
os.access(filename, os.F_OK | os.X_OK) and
os.path.basename(filename) in self.EXECUTABLES):
with open(filename, 'rb') as stream:
chunk = stream.read(2)
role = FileRole.script if chunk == b'#!' else FileRole.binary
return (role, self.provider.build_bin_dir, ProviderContentPlugIn)
def _classify_built_i18n(self, filename: str):
""" classify files in build_mo_dir as i18n """
if (filename.startswith(self.provider.build_mo_dir) and
os.path.splitext(filename)[1] == '.mo'):
return (FileRole.i18n, self.provider.build_mo_dir,
ProviderContentPlugIn)
def _classify_build(self, filename: str):
""" classify anything in build_dir as a build artefact """
if filename.startswith(self.provider.build_dir):
return (FileRole.build, self.provider.build_dir, None)
def _classify_legal(self, filename: str):
""" classify file as a legal document """
if os.path.basename(filename) in self.LEGAL_SET:
return (FileRole.legal, self.provider.base_dir,
ProviderContentPlugIn)
def _classify_docs(self, filename: str):
""" classify certain files as documentation """
if os.path.basename(filename) in self.DOC_SET:
return (FileRole.docs, self.provider.base_dir,
ProviderContentPlugIn)
def _classify_manage_py(self, filename: str):
""" classify the manage.py file """
if os.path.join(self.provider.base_dir, 'manage.py') == filename:
return (FileRole.manage_py, self.provider.base_dir, None)
def _classify_po(self, filename: str):
if (os.path.dirname(filename) == self.provider.po_dir and
(os.path.splitext(filename)[1] in ('.po', '.pot') or
os.path.basename(filename) == 'POTFILES.in')):
return (FileRole.src, self.provider.base_dir, None)
def _classify_src(self, filename: str):
if filename.startswith(self.provider.src_dir):
return (FileRole.src, self.provider.base_dir, None)
def _classify_vcs(self, filename: str):
if os.path.basename(filename) in ('.gitignore', '.bzrignore'):
return (FileRole.vcs, self.provider.base_dir, None)
head = filename
# NOTE: first condition is for correct cases, the rest are for broken
# cases that may be caused if we get passed some garbage argument.
while head != self.provider.base_dir and head != '' and head != '/':
head, tail = os.path.split(head)
if tail in ('.git', '.bzr'):
return (FileRole.vcs, self.provider.base_dir, None)
def _classify_unknown(self, filename: str):
""" classify anything as an unknown file """
return (FileRole.unknown, self.provider.base_dir, None)
class ProviderContentLoader:
"""
Support class for loading provider content.
The role of this class is to load all of the content (units and
whitelists) of a provider from the files exposed by
:class:`ProviderContentEnumerator` and to record any problems
encountered along the way.
.. note::
This class is automatically instantiated by :class:`Provider1`. The
attributes of this class are directly exposed as properties on the
provider object.
:attr provider:
The provider back-reference
:attr is_loaded:
Flag indicating if the content loader has loaded all of the content
:attr unit_list:
A list of loaded unit objects
:attr problem_list:
A list of problems experienced while loading any of the content
:attr path_map:
A dictionary mapping from the path of each file to the list of units
stored there.
:attr id_map:
A dictionary mapping from the identifier of each unit to the list of
units that have that identifier.
"""
def __init__(self, provider):
self.provider = provider
self.is_loaded = False
self.unit_list = []
self.whitelist_list = []
self.problem_list = []
self.path_map = collections.defaultdict(list) # path -> list(unit)
self.id_map = collections.defaultdict(list) # id -> list(unit)
def load(self, plugin_kwargs):
logger.info("Loading content for provider %s", self.provider)
self.provider.content_collection.load()
for file_plugin in self.provider.content_collection.get_all_plugins():
filename = file_plugin.plugin_name
text = file_plugin.plugin_object
self._load_file(filename, text, plugin_kwargs)
self.problem_list.extend(self.provider.content_collection.problem_list)
self.is_loaded = True
def _load_file(self, filename, text, plugin_kwargs):
# NOTE: text is lazy, call str() or iter() to see the real content.
# This prevents us from trying to read binary blobs.
classification = self.provider.classify(filename)
role, base_dir, plugin_cls = classification
if plugin_cls is None:
return
try:
plugin = plugin_cls(
filename, text, 0, self.provider, **plugin_kwargs)
except PlugInError as exc:
self.problem_list.append(exc)
else:
self.unit_list.extend(plugin.unit_list)
self.whitelist_list.extend(plugin.whitelist_list)
for unit in plugin.unit_list:
if hasattr(unit.Meta.fields, 'id'):
self.id_map[unit.id].append(unit)
if hasattr(unit.Meta.fields, 'path'):
self.path_map[unit.path].append(unit)
class Provider1(IProvider1):
"""
A v1 provider implementation.
A provider is a container of jobs and whitelists. It provides additional
meta-data and knows about location of essential directories to both load
structured data and provide runtime information for job execution.
Providers are normally loaded with :class:`Provider1PlugIn`, due to the
number of fields involved in basic initialization.
"""
def __init__(self, name, namespace, version, description, secure,
gettext_domain, units_dir, jobs_dir, whitelists_dir, data_dir,
bin_dir, locale_dir, base_dir, *, validate=False,
validation_kwargs=None, check=True, context=None):
"""
Initialize a provider with a set of meta-data and directories.
:param name:
provider name / ID
:param namespace:
provider namespace
:param version:
provider version
:param description:
provider description
This is the untranslated version of this field. Implementations may
obtain the localized version based on the gettext_domain property.
:param secure:
secure bit
When True jobs from this provider should be available via the
trusted launcher mechanism. It should be set to True for
system-wide installed providers.
:param gettext_domain:
gettext domain that contains translations for this provider
:param units_dir:
path of the directory with unit definitions
:param jobs_dir:
path of the directory with job definitions
:param whitelists_dir:
path of the directory with whitelists definitions (aka test-plans)
:param data_dir:
path of the directory with files used by jobs at runtime
:param bin_dir:
path of the directory with additional executables
:param locale_dir:
path of the directory with locale database (translation catalogs)
:param base_dir:
path of the directory with (perhaps) all of jobs_dir,
whitelists_dir, data_dir, bin_dir, locale_dir. This may be None.
This is also the effective value of $CHECKBOX_SHARE
:param validate:
Enable job validation. Incorrect job definitions will not be loaded
and will abort the process of loading of the remainder of the jobs.
This is OFF by default; enable it to prevent broken job definitions
from being used. This is a keyword-only argument.
:param validation_kwargs:
Keyword arguments to pass to the JobDefinition.validate(). Note,
this is a single argument. This is a keyword-only argument.
"""
# Meta-data
if namespace is None:
namespace = name.split(':', 1)[0]
self._has_dedicated_namespace = False
else:
self._has_dedicated_namespace = True
self._name = name
self._namespace = namespace
self._version = version
self._description = description
self._secure = secure
self._gettext_domain = gettext_domain
# Directories
self._units_dir = units_dir
self._jobs_dir = jobs_dir
self._whitelists_dir = whitelists_dir
self._data_dir = data_dir
self._bin_dir = bin_dir
self._locale_dir = locale_dir
self._base_dir = base_dir
# Create support classes
self._enumerator = ProviderContentEnumerator(self)
self._classifier = ProviderContentClassifier(self)
self._loader = ProviderContentLoader(self)
self._load_kwargs = {
'validate': validate,
'validation_kwargs': validation_kwargs,
'check': check,
'context': context,
}
# Setup provider specific i18n
self._setup_translations()
logger.info("Provider initialized %s", self)
def _ensure_loaded(self):
if not self._loader.is_loaded:
self._loader.load(self._load_kwargs)
def _load_whitelists(self):
self._ensure_loaded()
def _load_units(self, validate, validation_kwargs, check, context):
self._ensure_loaded()
def _setup_translations(self):
if self._gettext_domain and self._locale_dir:
gettext.bindtextdomain(self._gettext_domain, self._locale_dir)
@classmethod
def from_definition(cls, definition, secure, *,
validate=False, validation_kwargs=None, check=True,
context=None):
"""
Initialize a provider from Provider1Definition object
:param definition:
A Provider1Definition object to use as reference
:param secure:
Value of the secure flag. This cannot be expressed by a definition
object.
:param validate:
Enable job validation. Incorrect job definitions will not be loaded
and will abort the process of loading of the remainder of the jobs.
This is OFF by default; enable it to prevent broken job definitions
from being used. This is a keyword-only argument.
:param validation_kwargs:
Keyword arguments to pass to the JobDefinition.validate(). Note,
this is a single argument. This is a keyword-only argument.
This method simplifies initialization of a Provider1 object where the
caller already has a Provider1Definition object. Depending on the value
of ``definition.location`` all of the directories are either None or
initialized to a *good* (typical) value relative to *location*
The only value that you may want to adjust, for working with source
providers, is *locale_dir*. By default it is ``location/locale`` but
``manage.py i18n`` creates ``location/build/mo``.
"""
logger.debug("Loading provider from definition %r", definition)
# Initialize the provider object
return cls(
definition.name, definition.namespace or None, definition.version,
definition.description, secure,
definition.effective_gettext_domain,
definition.effective_units_dir, definition.effective_jobs_dir,
definition.effective_whitelists_dir, definition.effective_data_dir,
definition.effective_bin_dir, definition.effective_locale_dir,
definition.location or None, validate=validate,
validation_kwargs=validation_kwargs, check=check, context=context)
def __repr__(self):
return "<{} name:{!r}>".format(self.__class__.__name__, self.name)
def __str__(self):
return "{}, version {}".format(self.name, self.version)
@property
def name(self):
"""
name of this provider
"""
return self._name
@property
def namespace(self):
"""
namespace component of the provider name
This property defines the namespace in which all provider jobs are
defined in. Jobs within one namespace do not need to be fully qualified
by prefixing their partial identifier with provider namespace (so all
stays 'as-is'). Jobs that need to interact with other provider
namespaces need to use the fully qualified job identifier instead.
The namespace is defined as the part of the provider name up to the
colon. This effectively gives organizations a flat namespace within one
year-domain pair and allows them to create private namespaces by using
sub-domains.
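For example, the namespace is everything before the first colon (a
sketch; ``2013.com.example:test`` is a made-up provider name)::
>>> "2013.com.example:test".split(':', 1)[0]
'2013.com.example'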
"""
return self._namespace
@property
def has_dedicated_namespace(self):
"""Flag set if namespace was defined by a dedicated field."""
return self._has_dedicated_namespace
@property
def version(self):
"""
version of this provider
"""
return self._version
@property
def description(self):
"""
description of this provider
"""
return self._description
def tr_description(self):
"""
Get the translated version of :meth:`description`
"""
return self.get_translated_data(self.description)
@property
def units_dir(self):
"""
absolute path of the units directory
"""
return self._units_dir
@property
def jobs_dir(self):
"""
absolute path of the jobs directory
"""
return self._jobs_dir
@property
def whitelists_dir(self):
"""
absolute path of the whitelist directory
"""
return self._whitelists_dir
@property
def data_dir(self):
"""
absolute path of the data directory
"""
return self._data_dir
@property
def bin_dir(self):
"""
absolute path of the bin directory
.. note::
The programs in that directory may not work without setting
PYTHONPATH and CHECKBOX_SHARE.
"""
return self._bin_dir
@property
def locale_dir(self):
"""
absolute path of the directory with locale data
The value is applicable as an argument to bindtextdomain()
"""
return self._locale_dir
@property
def base_dir(self):
"""
path of the directory with (perhaps) all of jobs_dir, whitelists_dir,
data_dir, bin_dir, locale_dir. This may be None
"""
return self._base_dir
@property
def build_dir(self):
"""
absolute path of the build directory
This value may be None. It depends on location/base_dir being set.
"""
if self.base_dir is not None:
return os.path.join(self.base_dir, 'build')
@property
def build_bin_dir(self):
"""
absolute path of the build/bin directory
This value may be None. It depends on location/base_dir being set.
"""
if self.base_dir is not None:
return os.path.join(self.base_dir, 'build', 'bin')
@property
def build_mo_dir(self):
"""
absolute path of the build/mo directory
This value may be None. It depends on location/base_dir being set.
"""
if self.base_dir is not None:
return os.path.join(self.base_dir, 'build', 'mo')
@property
def src_dir(self):
"""
absolute path of the src/ directory
This value may be None. It depends on location/base_dir being set.
"""
if self.base_dir is not None:
return os.path.join(self.base_dir, 'src')
@property
def po_dir(self):
"""
absolute path of the po/ directory
This value may be None. It depends on location/base_dir being set.
"""
if self.base_dir is not None:
return os.path.join(self.base_dir, 'po')
@property
def CHECKBOX_SHARE(self):
"""
required value of CHECKBOX_SHARE environment variable.
.. note::
This variable is only required by one script.
It would be nice to remove this later on.
"""
return self.base_dir
@property
def extra_PYTHONPATH(self):
"""
additional entry for PYTHONPATH, if needed.
This entry is required for CheckBox scripts to import the correct
CheckBox python libraries.
.. note::
The result may be None
"""
return None
@property
def secure(self):
"""
flag indicating that this provider was loaded from the secure portion
of PROVIDERPATH and thus can be used with the
plainbox-trusted-launcher-1.
"""
return self._secure
@property
def gettext_domain(self):
"""
the name of the gettext domain associated with this provider
This value may be empty, in such case provider data cannot be localized
for the user environment.
"""
return self._gettext_domain
@property
def unit_list(self):
"""
List of loaded units.
This list may contain units of various types. You should not assume all
of them are :class:`JobDefinition` instances. You may use filtering to
obtain units of a given type.
>>> [unit for unit in provider.unit_list
... if unit.Meta.name == 'job']
[...]
"""
self._ensure_loaded()
return self._loader.unit_list
@property
def job_list(self):
"""
A sorted list of loaded job definition units.
"""
return sorted(
(unit for unit in self.unit_list if unit.Meta.name == 'job'),
key=lambda unit: unit.id)
@property
def executable_list(self):
"""
List of all the executables
"""
return sorted(
unit.path for unit in self.unit_list
if unit.Meta.name == 'file' and
unit.role in (FileRole.script, FileRole.binary))
@property
def whitelist_list(self):
"""
List of loaded whitelists.
.. warning::
:class:`WhiteList` is currently deprecated. You should never need
to access them in any new code. They are entirely replaced by
:class:`TestPlan`. This property is provided for completeness and
it will be **removed** once whitelists classes are no longer used.
"""
self._ensure_loaded()
return self._loader.whitelist_list
@property
def problem_list(self):
"""
list of problems encountered by the loading process
"""
self._ensure_loaded()
return self._loader.problem_list
@property
def id_map(self):
"""
A mapping from unit identifier to list of units with that identifier.
.. note::
Typically the list will be one element long but invalid providers
may break that guarantee. Code defensively if you can.
"""
self._ensure_loaded()
return self._loader.id_map
@property
def path_map(self):
"""
A mapping from filename path to a list of units stored in that file.
.. note::
For ``.pxu`` files this will enumerate all units stored there. For
other things it will typically be just the FileUnit.
"""
self._ensure_loaded()
return self._loader.path_map
def get_translated_data(self, msgid):
"""
Get a localized piece of data
:param msgid:
data to translate
:returns:
translated data obtained from the provider if msgid is not falsy (both
the empty string and None are) and this provider has a gettext_domain
defined for it; msgid itself otherwise.
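For example (a sketch; ``provider_without_domain`` is a hypothetical
provider with no gettext domain, so msgid is returned as-is)::
>>> provider_without_domain.get_translated_data('foo')  # doctest: +SKIP
'foo'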
"""
if msgid and self._gettext_domain:
return gettext.dgettext(self._gettext_domain, msgid)
else:
return msgid
@property
def classify(self):
"""
Exposed :meth:`ProviderContentClassifier.classify()`
"""
return self._classifier.classify
@property
def content_collection(self) -> "IPlugInCollection":
"""
Exposed :meth:`ProviderContentEnumerator.content_collection`
"""
return self._enumerator.content_collection
@property
def fake(self):
"""
Bridge to ``.content_collection.fake_plugins`` that's shorter to type.
"""
return self._enumerator.content_collection.fake_plugins
class IQNValidator(PatternValidator):
"""
A validator for provider name.
Provider names use RFC3720 IQN-like identifiers composed of the following
parts:
* year
* (dot separating the next section)
* domain name
* (colon separating the next section)
* identifier
Each of the fields has an informal definition below:
year:
four digit number
domain name:
identifiers separated by dots, at least one dot has to be present
identifier:
`[a-z][a-z0-9-]*`
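A sketch of the pattern in action (checking with :mod:`re` directly,
using a made-up provider name; the real validation goes through
:class:`PatternValidator`)::
>>> import re
>>> iqn = r"^[0-9]{4}\.[a-z][a-z0-9-]*(\.[a-z][a-z0-9-]*)+:[a-z][a-z0-9-]*$"
>>> re.match(iqn, "2013.com.example:test") is not None
True
>>> re.match(iqn, "plain-name") is not None
False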
"""
def __init__(self):
super(IQNValidator, self).__init__(
r"^[0-9]{4}\.[a-z][a-z0-9-]*(\.[a-z][a-z0-9-]*)+:[a-z][a-z0-9-]*$")
def __call__(self, variable, new_value):
if super(IQNValidator, self).__call__(variable, new_value):
return _("must look like RFC3720 IQN")
class ProviderNameValidator(PatternValidator):
"""
Validator for the provider name.
Two forms are allowed:
- short form (requires a separate namespace definition)
- verbose form (based on RFC3720 IQN-like strings)
The short form is supposed to look like a Debian package name.
"""
_PATTERN = (
"^"
r"([0-9]{4}\.[a-z][a-z0-9-]*(\.[a-z][a-z0-9-]*)+:[a-z][a-z0-9-]*)"
"|"
"([a-z0-9-]+)"
"$"
)
def __init__(self):
super().__init__(self._PATTERN)
def __call__(self, variable, new_value):
if super().__call__(variable, new_value):
return _("must look like RFC3720 IQN")
class VersionValidator(PatternValidator):
"""
A validator for the provider version.
Provider version must be a sequence of non-negative numbers separated by
dots. At least one number must be present; it may be followed by any
number of additional dot-separated components.
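For instance (a sketch using :mod:`re` directly; the real validation
goes through :class:`PatternValidator`)::
>>> import re
>>> bool(re.match(r"^[0-9]+(\.[0-9]+)*$", "0.25.1"))
True
>>> bool(re.match(r"^[0-9]+(\.[0-9]+)*$", "v1"))
False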
"""
def __init__(self):
super().__init__(r"^[0-9]+(\.[0-9]+)*$")
def __call__(self, variable, new_value):
if super().__call__(variable, new_value):
return _("must be a sequence of digits separated by dots")
class ExistingDirectoryValidator(IValidator):
"""
A validator that checks that the value points to an existing directory
"""
def __call__(self, variable, new_value):
if not os.path.isdir(new_value):
return _("no such directory")
class AbsolutePathValidator(IValidator):
"""
A validator that checks that the value is an absolute path
"""
def __call__(self, variable, new_value):
if not os.path.isabs(new_value):
return _("cannot be relative")
class Provider1Definition(Config):
"""
A Config-like class for parsing plainbox provider definition files
.. note::
The location attribute is special, if set, it defines the base
directory of *all* the other directory attributes. If location is
unset, then all the directory attributes default to None (that is,
there is no directory of that type). This is actually a convention that
is implemented in :class:`Provider1PlugIn`. Here, all the attributes
can be Unset and their validators only check values other than Unset.
"""
# NOTE: See the implementation note in :class:`Provider1PlugIn` to
# understand the effect of this flag.
relocatable = Variable(
section='PlainBox Provider',
help_text=_("Flag indicating if the provider is relocatable"),
kind=bool,
)
location = Variable(
section='PlainBox Provider',
help_text=_("Base directory with provider data"),
validator_list=[
# NOTE: it *can* be unset!
NotEmptyValidator(),
AbsolutePathValidator(),
ExistingDirectoryValidator(),
])
name = Variable(
section='PlainBox Provider',
help_text=_("Name of the provider"),
validator_list=[
NotUnsetValidator(),
NotEmptyValidator(),
ProviderNameValidator(),
])
namespace = Variable(
section='PlainBox Provider',
help_text=_("Namespace of the provider"),
validator_list=[
# NOTE: it *can* be unset, then name must be IQN
NotEmptyValidator(),
])
@property
def name_without_colon(self):
if ':' in self.name:
return self.name.replace(':', '.')
else:
return self.name
version = Variable(
section='PlainBox Provider',
help_text=_("Version of the provider"),
validator_list=[
NotUnsetValidator(),
NotEmptyValidator(),
VersionValidator(),
])
description = Variable(
section='PlainBox Provider',
help_text=_("Description of the provider"))
gettext_domain = Variable(
section='PlainBox Provider',
help_text=_("Name of the gettext domain for translations"),
validator_list=[
# NOTE: it *can* be unset!
PatternValidator("[a-z0-9_-]+"),
])
@property
def effective_gettext_domain(self):
"""
effective value of gettext_domain
The effective value is :meth:`gettext_domain` itself, unless it is
Unset. If it is Unset the effective value is None.
"""
if self.gettext_domain is not Unset:
return self.gettext_domain
units_dir = Variable(
section='PlainBox Provider',
help_text=_("Pathname of the directory with unit definitions"),
validator_list=[
# NOTE: it *can* be unset
NotEmptyValidator(),
AbsolutePathValidator(),
ExistingDirectoryValidator(),
])
@property
def implicit_units_dir(self):
"""
implicit value of units_dir (if Unset)
The implicit value is only defined if location is not Unset. It is the
'units' subdirectory of the directory that location points to.
"""
if self.location is not Unset:
return os.path.join(self.location, "units")
@property
def effective_units_dir(self):
"""
effective value of units_dir
The effective value is :meth:`units_dir` itself, unless it is Unset. If
it is Unset the effective value is the :meth:`implicit_units_dir`, if
that value would be valid. The effective value may be None.
"""
if self.units_dir is not Unset:
return self.units_dir
implicit = self.implicit_units_dir
if implicit is not None and os.path.isdir(implicit):
return implicit
jobs_dir = Variable(
section='PlainBox Provider',
help_text=_("Pathname of the directory with job definitions"),
validator_list=[
# NOTE: it *can* be unset
NotEmptyValidator(),
AbsolutePathValidator(),
ExistingDirectoryValidator(),
])
@property
def implicit_jobs_dir(self):
"""
implicit value of jobs_dir (if Unset)
The implicit value is only defined if location is not Unset. It is the
'jobs' subdirectory of the directory that location points to.
"""
if self.location is not Unset:
return os.path.join(self.location, "jobs")
@property
def effective_jobs_dir(self):
"""
effective value of jobs_dir
The effective value is :meth:`jobs_dir` itself, unless it is Unset. If
it is Unset the effective value is the :meth:`implicit_jobs_dir`, if
that value would be valid. The effective value may be None.
"""
if self.jobs_dir is not Unset:
return self.jobs_dir
implicit = self.implicit_jobs_dir
if implicit is not None and os.path.isdir(implicit):
return implicit
whitelists_dir = Variable(
section='PlainBox Provider',
help_text=_("Pathname of the directory with whitelists definitions"),
validator_list=[
# NOTE: it *can* be unset
NotEmptyValidator(),
AbsolutePathValidator(),
ExistingDirectoryValidator(),
])
@property
def implicit_whitelists_dir(self):
"""
implicit value of whitelists_dir (if Unset)
The implicit value is only defined if location is not Unset. It is the
'whitelists' subdirectory of the directory that location points to.
"""
if self.location is not Unset:
return os.path.join(self.location, "whitelists")
@property
def effective_whitelists_dir(self):
"""
effective value of whitelists_dir
The effective value is :meth:`whitelists_dir` itself, unless it is
Unset. If it is Unset the effective value is the
:meth:`implicit_whitelists_dir`, if that value would be valid. The
effective value may be None.
"""
if self.whitelists_dir is not Unset:
return self.whitelists_dir
implicit = self.implicit_whitelists_dir
if implicit is not None and os.path.isdir(implicit):
return implicit
data_dir = Variable(
section='PlainBox Provider',
help_text=_("Pathname of the directory with provider data"),
validator_list=[
# NOTE: it *can* be unset
NotEmptyValidator(),
AbsolutePathValidator(),
ExistingDirectoryValidator(),
])
@property
def implicit_data_dir(self):
"""
implicit value of data_dir (if Unset)
The implicit value is only defined if location is not Unset. It is the
'data' subdirectory of the directory that location points to.
"""
if self.location is not Unset:
return os.path.join(self.location, "data")
@property
def effective_data_dir(self):
"""
effective value of data_dir
The effective value is :meth:`data_dir` itself, unless it is Unset. If
it is Unset the effective value is the :meth:`implicit_data_dir`, if
that value would be valid. The effective value may be None.
"""
if self.data_dir is not Unset:
return self.data_dir
implicit = self.implicit_data_dir
if implicit is not None and os.path.isdir(implicit):
return implicit
bin_dir = Variable(
section='PlainBox Provider',
help_text=_("Pathname of the directory with provider executables"),
validator_list=[
# NOTE: it *can* be unset
NotEmptyValidator(),
AbsolutePathValidator(),
ExistingDirectoryValidator(),
])
@property
def implicit_bin_dir(self):
"""
implicit value of bin_dir (if Unset)
The implicit value is only defined if location is not Unset. It is the
'bin' subdirectory of the directory that location points to.
"""
if self.location is not Unset:
return os.path.join(self.location, "bin")
@property
def effective_bin_dir(self):
"""
effective value of bin_dir
The effective value is :meth:`bin_dir` itself, unless it is Unset. If
it is Unset the effective value is the :meth:`implicit_bin_dir`, if
that value would be valid. The effective value may be None.
"""
if self.bin_dir is not Unset:
return self.bin_dir
implicit = self.implicit_bin_dir
if implicit is not None and os.path.isdir(implicit):
return implicit
locale_dir = Variable(
section='PlainBox Provider',
help_text=_("Pathname of the directory with locale data"),
validator_list=[
# NOTE: it *can* be unset
NotEmptyValidator(),
AbsolutePathValidator(),
ExistingDirectoryValidator(),
])
@property
def implicit_locale_dir(self):
"""
implicit value of locale_dir (if Unset)
The implicit value is only defined if location is not Unset. It is the
'locale' subdirectory of the directory that location points to.
"""
if self.location is not Unset:
return os.path.join(self.location, "locale")
@property
def implicit_build_locale_dir(self):
"""
implicit value of locale_dir (if Unset) as laid out in the source tree
This value is only applicable to source layouts, where the built
translation catalogs are in the build/mo directory.
"""
if self.location is not Unset:
return os.path.join(self.location, "build", "mo")
@property
def effective_locale_dir(self):
"""
effective value of locale_dir
The effective value is :meth:`locale_dir` itself, unless it is Unset.
If it is Unset the effective value is the :meth:`implicit_locale_dir`,
if that value would be valid. The effective value may be None.
"""
if self.locale_dir is not Unset:
return self.locale_dir
implicit1 = self.implicit_locale_dir
if implicit1 is not None and os.path.isdir(implicit1):
return implicit1
implicit2 = self.implicit_build_locale_dir
if implicit2 is not None and os.path.isdir(implicit2):
return implicit2
def validate_whole(self):
"""
Validate the provider definition object.
:raises ValidationError:
If the namespace is not defined and name is using a simplified
format that doesn't contain an embedded namespace part.
"""
super().validate_whole()
if not self.namespace:
variable = self.__class__.name
value = self.name
validator = IQNValidator()
message = validator(variable, value)
if message is not None:
raise ConfigValidationError(variable, value, message)
class Provider1PlugIn(PlugIn):
"""
A specialized IPlugIn that loads Provider1 instances from their definition
files
"""
def __init__(self, filename, definition_text, load_time, *, validate=None,
validation_kwargs=None, check=None, context=None):
"""
Initialize the plug-in with the specified name and external object
"""
start = now()
self._load_time = load_time
definition = Provider1Definition()
# Load the provider definition
definition.read_string(definition_text)
# If the relocatable flag is set, set location to the base directory of
# the filename and reset all the other directories (to Unset). This is
# to allow creation of .provider files that can be moved entirely, and
# as long as they follow the implicit source layout, they will work
# okay.
if definition.relocatable:
definition.location = os.path.dirname(filename)
definition.units_dir = Unset
definition.jobs_dir = Unset
definition.whitelists_dir = Unset
definition.data_dir = Unset
definition.bin_dir = Unset
definition.locale_dir = Unset
# any validation issues prevent plugin from being used
if definition.problem_list:
# take the earliest problem and report it
exc = definition.problem_list[0]
raise PlugInError(
_("Problem in provider definition, field {!a}: {}").format(
exc.variable.name, exc.message))
# Get the secure flag
secure = os.path.dirname(filename) in get_secure_PROVIDERPATH_list()
# Initialize the provider object
provider = Provider1.from_definition(
definition, secure, validate=validate,
validation_kwargs=validation_kwargs, check=check, context=context)
wrap_time = now() - start
super().__init__(provider.name, provider, load_time, wrap_time)
def __repr__(self):
return "<{!s} plugin_name:{!r}>".format(
type(self).__name__, self.plugin_name)
def get_secure_PROVIDERPATH_list():
"""
Computes the secure value of PROVIDERPATH
This value is used by the `plainbox-trusted-launcher-1` executable to discover
all secure providers.
:returns:
A list of two strings:
* `/usr/local/share/plainbox-providers-1`
* `/usr/share/plainbox-providers-1`
"""
return ["/usr/local/share/plainbox-providers-1",
"/usr/share/plainbox-providers-1"]
class SecureProvider1PlugInCollection(FsPlugInCollection):
"""
A collection of v1 provider plugins.
This FsPlugInCollection subclass carries proper, built-in defaults that
make loading providers easier.
This particular class loads providers from the system-wide managed
locations. This defines the security boundary, as if someone can compromise
those locations then they already own the corresponding system. In
consequence this plug in collection does not respect ``PROVIDERPATH``, it
cannot be customized to load provider definitions from any other location.
This feature is supported by the
:class:`plainbox.impl.providers.v1.InsecureProvider1PlugInCollection`
"""
def __init__(self, **kwargs):
dir_list = get_secure_PROVIDERPATH_list()
super().__init__(dir_list, '.provider', wrapper=Provider1PlugIn,
**kwargs)
# Collection of all providers
all_providers = SecureProvider1PlugInCollection()
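The `implicit_*_dir` / `effective_*_dir` property pairs above all follow one fallback pattern: an explicitly configured directory wins; otherwise the subdirectory under `location` is used, but only if it actually exists on disk; otherwise the effective value is None. A minimal standalone sketch of that pattern (not plainbox's actual API; `_Unset` and `effective_dir` are simplified stand-ins):

```python
import os
import tempfile


class _Unset:
    """Simplified stand-in for plainbox's Unset sentinel
    (the real one lives in plainbox.impl.secure.config)."""


Unset = _Unset()


def effective_dir(explicit, location, subdir):
    """Mirror the effective_*_dir pattern: an explicit value wins;
    otherwise fall back to location/subdir when that directory
    exists; otherwise return None."""
    if explicit is not Unset:
        return explicit
    if location is not Unset:
        implicit = os.path.join(location, subdir)
        if os.path.isdir(implicit):
            return implicit
    return None


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as base:
        os.mkdir(os.path.join(base, "data"))
        # Implicit fallback: base/data exists, so it is returned.
        print(effective_dir(Unset, base, "data"))
        # base/bin does not exist, so the effective value is None.
        print(effective_dir(Unset, base, "bin"))
        # An explicit setting always wins, existing or not.
        print(effective_dir("/explicit/bin", base, "bin"))
```

This is also why the `relocatable` handling in `Provider1PlugIn` only needs to set `location` and reset the per-directory fields to Unset: the implicit layout takes over automatically.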
# plainbox-0.25/plainbox/impl/secure/providers/test_v1.py
# This file is part of Checkbox.
#
# Copyright 2013-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.secure.providers.test_v1
======================================
Test definitions for plainbox.impl.secure.providers.v1 module
"""
from unittest import TestCase
from plainbox.impl.job import JobDefinition
from plainbox.impl.secure.config import Unset
from plainbox.impl.secure.config import ValidationError
from plainbox.impl.secure.plugins import PlugIn
from plainbox.impl.secure.plugins import PlugInError
from plainbox.impl.secure.providers.v1 import AbsolutePathValidator
from plainbox.impl.secure.providers.v1 import ExistingDirectoryValidator
from plainbox.impl.secure.providers.v1 import IQNValidator
from plainbox.impl.secure.providers.v1 import Provider1
from plainbox.impl.secure.providers.v1 import Provider1Definition
from plainbox.impl.secure.providers.v1 import Provider1PlugIn
from plainbox.impl.secure.providers.v1 import UnitPlugIn
from plainbox.impl.secure.providers.v1 import VersionValidator
from plainbox.impl.secure.providers.v1 import WhiteListPlugIn
from plainbox.impl.secure.qualifiers import WhiteList
from plainbox.impl.secure.rfc822 import FileTextSource
from plainbox.impl.secure.rfc822 import Origin
from plainbox.impl.unit.file import FileUnit
from plainbox.vendor import mock
class IQNValidatorTests(TestCase):
def setUp(self):
self.validator = IQNValidator()
self.variable = None
def test_good_values_work(self):
name = "2013.com.canonical:certification-resources-server"
self.assertEqual(self.validator(self.variable, name), None)
def test_must_match_whole_string(self):
name = "2013.com.canonical:certification-resources-server BOGUS"
self.assertNotEqual(self.validator(self.variable, name), None)
def test_bad_values_dont(self):
self.assertEqual(
self.validator(self.variable, ""),
"must look like RFC3720 IQN")
class VersionValidatorTests(TestCase):
def setUp(self):
self.validator = VersionValidator()
self.variable = None
def test_typical_versions_work(self):
version = "1.10.7"
self.assertEqual(self.validator(self.variable, version), None)
def test_single_digit_versions_work(self):
version = "5"
self.assertEqual(self.validator(self.variable, version), None)
def test_bad_values_dont(self):
version = "1.5a7"
self.assertEqual(
self.validator(self.variable, version),
"must be a sequence of digits separated by dots")
class ExistingDirectoryValidatorTests(TestCase):
_PATH = "/some/directory"
def setUp(self):
self.validator = ExistingDirectoryValidator()
self.variable = None
@mock.patch('os.path.isdir')
def test_existing_directories_work(self, mock_isdir):
mock_isdir.return_value = True
self.assertEqual(self.validator(self.variable, self._PATH), None)
mock_isdir.assert_called_with(self._PATH)
@mock.patch('os.path.isdir')
def test_missing_directories_dont(self, mock_isdir):
mock_isdir.return_value = False
self.assertEqual(
self.validator(self.variable, self._PATH),
"no such directory")
mock_isdir.assert_called_with(self._PATH)
class AbsolutePathValidatorTests(TestCase):
def setUp(self):
self.validator = AbsolutePathValidator()
self.variable = None
def test_absolute_values_work(self):
self.assertEqual(self.validator(self.variable, '/path'), None)
def test_relative_values_dont(self):
self.assertEqual(
self.validator(self.variable, 'path'),
"cannot be relative")
class Provider1DefinitionTests(TestCase):
def test_definition_without_location(self):
"""
Smoke test to ensure we can load a typical provider definition that is
not using the location field. Those are similar to what a packaged
provider would look like.
"""
def_ = Provider1Definition()
with mock.patch('os.path.isdir') as mock_isdir:
# Mock os.path.isdir so that we can validate all of the directory
# variables.
mock_isdir.return_value = True
def_.read_string(
"[PlainBox Provider]\n"
"name = 2013.org.example:smoke-test\n"
"version = 1.0\n"
"description = a description\n"
"gettext_domain = domain\n"
"units_dir = /some/directory/units\n"
"jobs_dir = /some/directory/jobs\n"
"whitelists_dir = /some/directory/whitelists\n"
"data_dir = /some/directory/data\n"
"bin_dir = /some/directory/bin\n"
"locale_dir = /some/directory/locale\n"
)
self.assertEqual(def_.name, "2013.org.example:smoke-test")
self.assertEqual(def_.version, "1.0")
self.assertEqual(def_.description, "a description")
self.assertEqual(def_.gettext_domain, "domain")
self.assertEqual(def_.location, Unset)
self.assertEqual(def_.units_dir, "/some/directory/units")
self.assertEqual(def_.jobs_dir, "/some/directory/jobs")
self.assertEqual(def_.whitelists_dir, "/some/directory/whitelists")
self.assertEqual(def_.data_dir, "/some/directory/data")
self.assertEqual(def_.bin_dir, "/some/directory/bin")
self.assertEqual(def_.locale_dir, "/some/directory/locale")
def test_name_without_colon(self):
"""
Verify that the property Provider1Definition.name_without_colon
is computed correctly
"""
def_ = Provider1Definition()
def_.name = "2013.org.example:smoke-test"
self.assertEqual(def_.name, "2013.org.example:smoke-test")
self.assertEqual(
def_.name_without_colon, "2013.org.example.smoke-test")
def test_definition_with_location(self):
"""
Smoke test to ensure we can load a typical provider definition that is
using the location field and is not using any other directory fields.
Those are similar to what an unpackaged, under-development provider
would look like.
"""
def_ = Provider1Definition()
with mock.patch('os.path.isdir') as mock_isdir:
# Mock os.path.isdir so that we can validate all of the directory
# variables.
mock_isdir.return_value = True
def_.read_string(
"[PlainBox Provider]\n"
"name = 2013.org.example:smoke-test\n"
"version = 1.0\n"
"description = a description\n"
"gettext_domain = domain\n"
"location = /some/directory"
)
self.assertEqual(def_.name, "2013.org.example:smoke-test")
self.assertEqual(def_.version, "1.0")
self.assertEqual(def_.description, "a description")
self.assertEqual(def_.gettext_domain, "domain")
self.assertEqual(def_.location, "/some/directory")
self.assertEqual(def_.units_dir, Unset)
self.assertEqual(def_.jobs_dir, Unset)
self.assertEqual(def_.whitelists_dir, Unset)
self.assertEqual(def_.data_dir, Unset)
self.assertEqual(def_.bin_dir, Unset)
self.assertEqual(def_.locale_dir, Unset)
def test_init_validation__location_unset(self):
"""
verify that Provider1Definition allows 'location' field to be unset
"""
def_ = Provider1Definition()
def_.location = Unset
self.assertEqual(def_.location, Unset)
def test_init_validation__location_is_empty(self):
"""
verify that Provider1Definition ensures that 'location' field is not
empty
"""
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
def_.location = ''
self.assertEqual(str(boom.exception), "cannot be empty")
def test_init_validation__location_relative(self):
"""
verify that Provider1Definition ensures that 'location' is not a
relative pathname
"""
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
def_.location = 'some/place'
self.assertEqual(str(boom.exception), "cannot be relative")
def test_init_validation__location_doesnt_exist(self):
"""
verify that Provider1Definition ensures that 'location' field is not
pointing to an non-existing directory
"""
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
with mock.patch('os.path.isdir') as mock_isdir:
mock_isdir.return_value = False
def_.location = '/some/place'
self.assertEqual(str(boom.exception), "no such directory")
def test_init_validation__no_name(self):
"""
verify that Provider1Definition ensures that 'name' field is not unset
"""
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
def_.name = Unset
self.assertEqual(str(boom.exception), "must be set to something")
def test_init_validation__empty_name(self):
"""
verify that Provider1Definition ensures that 'name' field is not empty
"""
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
def_.name = ""
self.assertEqual(str(boom.exception), "cannot be empty")
def test_init_validation__non_iqn_name(self):
"""
verify that Provider1Definition ensures that 'name' field rejects names
that don't look like RFC3720 IQN
"""
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
def_.name = "name = my pretty name\n"
self.assertEqual(str(boom.exception), "must look like RFC3720 IQN")
def test_init_validation__typical_name(self):
"""
verify that Provider1Definition allows typical values for 'name' field
"""
def_ = Provider1Definition()
for name in ('2013.org.example:tests',
'2013.com.canonical.certification:usb-testing'):
def_.name = name
self.assertEqual(def_.name, name)
def test_init_validation__no_version(self):
"""
verify that Provider1Definition ensures that 'version' field is not
unset
"""
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
def_.version = Unset
self.assertEqual(str(boom.exception), "must be set to something")
def test_init_validation__empty_version(self):
"""
verify that Provider1Definition ensures that 'version' field is not
empty
"""
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
def_.version = ''
self.assertEqual(str(boom.exception), "cannot be empty")
def test_init_validation__incorrect_looking_version(self):
"""
verify that Provider1Definition ensures that 'version' field rejects
values that don't look like a typical version
"""
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
def_.version = "2014.4+bzr46"
self.assertEqual(
str(boom.exception),
"must be a sequence of digits separated by dots")
def test_init_validation__typical_version(self):
"""
verify that Provider1Definition allows typical values for the 'version'
field
"""
for ver in ('0.7.1', '0.7', '0', '2014.4', '12.04.5'):
def_ = Provider1Definition()
def_.version = ver
self.assertEqual(def_.version, ver)
def test_init_validation__any_description(self):
"""
verify that Provider1Definition allows any value for the 'description'
field
"""
for desc in (Unset, "", "description"):
def_ = Provider1Definition()
def_.description = desc
self.assertEqual(def_.description, desc)
def test_init_validation__gettext_domain_can_be_unset(self):
"""
verify that Provider1Definition allows 'gettext_domain' field to be
unset
"""
def_ = Provider1Definition()
def_.gettext_domain = Unset
self.assertEqual(def_.gettext_domain, Unset)
def test_init_validation__typical_gettext_domain(self):
"""
verify that Provider1Definition allows 'gettext_domain' field to have
typical values
"""
for gettext_domain in ("plainbox", "checkbox",
"2014_com_canonical_provider_name",
"2014-com-canonical-provider-name"):
def_ = Provider1Definition()
def_.gettext_domain = gettext_domain
self.assertEqual(def_.gettext_domain, gettext_domain)
def test_init_validation__foo_dir_unset(self):
"""
verify that Provider1Definition allows 'units_dir', 'jobs_dir',
'whitelists_dir', 'data_dir', 'bin_dir' and 'locale_dir' fields to be unset
"""
for attr in ('units_dir', 'jobs_dir', 'whitelists_dir', 'data_dir',
'bin_dir', 'locale_dir'):
def_ = Provider1Definition()
setattr(def_, attr, Unset)
self.assertEqual(getattr(def_, attr), Unset)
def test_init_validation__foo_dir_is_empty(self):
"""
verify that Provider1Definition ensures that 'units_dir', 'jobs_dir',
'whitelists_dir', 'data_dir', 'bin_dir' and 'locale_dir' fields are not
empty
"""
for attr in ('units_dir', 'jobs_dir', 'whitelists_dir', 'data_dir',
'bin_dir', 'locale_dir'):
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
setattr(def_, attr, '')
self.assertEqual(str(boom.exception), "cannot be empty")
def test_init_validation__foo_dir_relative(self):
"""
verify that Provider1Definition ensures that 'units_dir', 'jobs_dir',
'whitelists_dir', 'data_dir', 'bin_dir' and 'locale_dir' fields are not
a relative pathname
"""
for attr in ('units_dir', 'jobs_dir', 'whitelists_dir', 'data_dir',
'bin_dir', 'locale_dir'):
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
setattr(def_, attr, 'some/place')
self.assertEqual(str(boom.exception), "cannot be relative")
def test_init_validation__foo_dir_doesnt_exist(self):
"""
verify that Provider1Definition ensures that 'units_dir', 'jobs_dir',
'whitelists_dir', 'data_dir', 'bin_dir' and 'locale_dir' fields are not
pointing to a non-existing directory
"""
for attr in ('units_dir', 'jobs_dir', 'whitelists_dir', 'data_dir',
'bin_dir', 'locale_dir'):
def_ = Provider1Definition()
with self.assertRaises(ValidationError) as boom:
with mock.patch('os.path.isdir') as mock_isdir:
mock_isdir.return_value = False
setattr(def_, attr, '/some/place')
self.assertEqual(str(boom.exception), "no such directory")
class Provider1PlugInTests(TestCase):
DEF_TEXT = (
"[PlainBox Provider]\n"
"name = 2013.org.example:smoke-test\n"
"version = 1.0\n"
"description = a description\n"
"gettext_domain = domain\n"
)
DEF_TEXT_w_location = DEF_TEXT + (
"location = /some/directory\n"
)
DEF_TEXT_w_dirs = DEF_TEXT + (
"units_dir = /some/directory/units\n"
"jobs_dir = /some/directory/jobs\n"
"whitelists_dir = /some/directory/whitelists\n"
"data_dir = /some/directory/data\n"
"bin_dir = /some/directory/bin\n"
"locale_dir = /some/directory/locale\n"
)
LOAD_TIME = 42
def setUp(self):
with mock.patch('os.path.isdir') as mock_isdir:
# Mock os.path.isdir so that we can validate location
mock_isdir.return_value = True
self.plugin = Provider1PlugIn(
"a.provider", self.DEF_TEXT, self.LOAD_TIME)
self.plugin_w_location = Provider1PlugIn(
"a.provider", self.DEF_TEXT_w_location, self.LOAD_TIME)
self.plugin_w_dirs = Provider1PlugIn(
"a.provider", self.DEF_TEXT_w_dirs, self.LOAD_TIME)
# Mock os.path.isdir so that none of the sub-directories of the
# location directory seem to exist. This is essential for
# Provider1.from_definition()'s special behavior.
mock_isdir.side_effect = lambda dn: dn == "/some/directory"
self.plugin_w_location_w_no_dirs = Provider1PlugIn(
"a.provider", self.DEF_TEXT_w_location, self.LOAD_TIME)
def test_plugin_name(self):
self.assertEqual(
self.plugin.plugin_name, "2013.org.example:smoke-test")
def test_plugin_object(self):
self.assertIsInstance(self.plugin.plugin_object, Provider1)
def test_plugin_load_time(self):
self.assertEqual(self.plugin.plugin_load_time, self.LOAD_TIME)
def test_provider_metadata(self):
provider = self.plugin.plugin_object
self.assertEqual(provider.name, "2013.org.example:smoke-test")
self.assertEqual(provider.version, "1.0")
self.assertEqual(provider.description, "a description")
self.assertEqual(provider.gettext_domain, "domain")
def test_provider_directories__no_location_no_dirs(self):
"""
verify that none of the provider directories are set when loading a
provider definition that has neither specific directory entries nor the
base location entry.
"""
provider = self.plugin.plugin_object
self.assertEqual(provider.units_dir, None)
self.assertEqual(provider.jobs_dir, None)
self.assertEqual(provider.whitelists_dir, None)
self.assertEqual(provider.data_dir, None)
self.assertEqual(provider.bin_dir, None)
self.assertEqual(provider.build_bin_dir, None)
self.assertEqual(provider.src_dir, None)
self.assertEqual(provider.locale_dir, None)
self.assertEqual(provider.base_dir, None)
def test_provider_directories__w_location(self):
"""
verify that all of the provider directories are set when loading a
provider definition that has no specific directory entries but does have
the base location entry.
"""
provider = self.plugin_w_location.plugin_object
self.assertEqual(provider.units_dir, "/some/directory/units")
self.assertEqual(provider.jobs_dir, "/some/directory/jobs")
self.assertEqual(provider.whitelists_dir, "/some/directory/whitelists")
self.assertEqual(provider.data_dir, "/some/directory/data")
self.assertEqual(provider.bin_dir, "/some/directory/bin")
self.assertEqual(provider.build_bin_dir, "/some/directory/build/bin")
self.assertEqual(provider.src_dir, "/some/directory/src")
self.assertEqual(provider.locale_dir, "/some/directory/locale")
self.assertEqual(provider.base_dir, "/some/directory")
def test_provider_directories__w_location_w_no_dirs(self):
"""
verify that all of the provider directories are set to None when
loading a provider definition that has no specific directory entries,
only the base location entry, *and* the filesystem reports that those
directories don't exist.
"""
provider = self.plugin_w_location_w_no_dirs.plugin_object
self.assertEqual(provider.units_dir, None)
self.assertEqual(provider.jobs_dir, None)
self.assertEqual(provider.whitelists_dir, None)
self.assertEqual(provider.data_dir, None)
self.assertEqual(provider.bin_dir, None)
self.assertEqual(provider.build_bin_dir, "/some/directory/build/bin")
self.assertEqual(provider.src_dir, "/some/directory/src")
self.assertEqual(provider.locale_dir, None)
self.assertEqual(provider.base_dir, "/some/directory")
def test_provider_directories__w_dirs(self):
"""
verify that all of the provider directories are set when loading a
provider definition with a specific entry for each directory
"""
provider = self.plugin_w_dirs.plugin_object
self.assertEqual(provider.units_dir, "/some/directory/units")
self.assertEqual(provider.jobs_dir, "/some/directory/jobs")
self.assertEqual(provider.whitelists_dir, "/some/directory/whitelists")
self.assertEqual(provider.data_dir, "/some/directory/data")
self.assertEqual(provider.bin_dir, "/some/directory/bin")
self.assertEqual(provider.build_bin_dir, None)
self.assertEqual(provider.src_dir, None)
self.assertEqual(provider.locale_dir, "/some/directory/locale")
self.assertEqual(provider.base_dir, None)
class WhiteListPlugInTests(TestCase):
"""
Tests for WhiteListPlugIn
"""
LOAD_TIME = 42
def setUp(self):
self.plugin = WhiteListPlugIn(
"/path/to/some.whitelist", "foo\nbar\n", self.LOAD_TIME)
def test_plugin_name(self):
"""
verify that the WhiteListPlugIn.plugin_name property returns
WhiteList.name
"""
self.assertEqual(self.plugin.plugin_name, "some")
def test_plugin_object(self):
"""
verify that the WhiteListPlugIn.plugin_object property returns a
WhiteList
"""
self.assertIsInstance(self.plugin.plugin_object, WhiteList)
def test_plugin_load_time(self):
self.assertEqual(self.plugin.plugin_load_time, self.LOAD_TIME)
def test_whitelist_data(self):
"""
verify the contents of the loaded whitelist object
"""
self.assertEqual(
self.plugin.plugin_object.qualifier_list[0].pattern_text, "^foo$")
self.assertEqual(
self.plugin.plugin_object.qualifier_list[1].pattern_text, "^bar$")
self.assertEqual(self.plugin.plugin_object.name, 'some')
self.assertEqual(
self.plugin.plugin_object.origin,
Origin(FileTextSource('/path/to/some.whitelist'), 1, 2))
def test_init_failing(self):
"""
verify how WhiteList() initializer works if something is wrong
"""
# The pattern is purposefully invalid
with self.assertRaises(PlugInError) as boom:
WhiteListPlugIn("/path/to/some.whitelist", "*", self.LOAD_TIME)
# NOTE: we should have a syntax error for whitelists that keeps track
# of the line we're at to help developers figure out where errors such as this
# are coming from.
self.assertEqual(
str(boom.exception),
("Cannot load '/path/to/some.whitelist': nothing to repeat"))
class UnitPlugInTests(TestCase):
"""
Tests for UnitPlugIn
"""
LOAD_TIME = 42
def setUp(self):
self.provider = mock.Mock(name="provider", spec=Provider1)
self.provider.classify.return_value = (
mock.Mock("role"), mock.Mock("base"), mock.Mock("plugin_cls"))
self.provider.namespace = "2013.com.canonical.plainbox"
self.plugin = UnitPlugIn(
"/path/to/jobs.txt", (
"id: test/job\n"
"plugin: shell\n"
"command: true\n"),
self.LOAD_TIME, self.provider)
def test_plugin_name(self):
"""
verify that the UnitPlugIn.plugin_name property returns
pathname of the job definition file
"""
self.assertEqual(self.plugin.plugin_name, "/path/to/jobs.txt")
def test_plugin_object(self):
"""
verify that the UnitPlugIn.plugin_object property returns a
list of loaded units (a JobDefinition and a FileUnit instance)
"""
self.assertEqual(len(self.plugin.plugin_object), 2)
self.assertIsInstance(self.plugin.plugin_object[0], JobDefinition)
self.assertIsInstance(self.plugin.plugin_object[1], FileUnit)
def test_plugin_load_time(self):
self.assertEqual(self.plugin.plugin_load_time, self.LOAD_TIME)
def test_job_data(self):
"""
verify the contents of the loaded JobDefinition object
"""
job = self.plugin.plugin_object[0]
self.assertEqual(job.partial_id, "test/job")
self.assertEqual(job.id, "2013.com.canonical.plainbox::test/job")
self.assertEqual(job.plugin, "shell")
self.assertEqual(job.command, "true")
self.assertEqual(
job.origin, Origin(FileTextSource("/path/to/jobs.txt"), 1, 3))
def test_job_provider(self):
"""
verify the loaded job got the provider from the plugin
"""
job = self.plugin.plugin_object[0]
self.assertIs(job.provider, self.provider)
def test_init_failing(self):
"""
verify how UnitPlugIn() initializer works if something is
wrong
"""
# The pattern is purposefully invalid
with self.assertRaises(PlugInError) as boom:
UnitPlugIn(
"/path/to/jobs.txt", "broken", self.LOAD_TIME, self.provider)
self.assertEqual(
str(boom.exception),
("Cannot load job definitions from '/path/to/jobs.txt': "
"Unexpected non-empty line: 'broken' (line 1)"))
class Provider1Tests(TestCase):
NAME = "name"
NAMESPACE = "2013.org.example"
VERSION = "1.0"
DESCRIPTION = "description"
SECURE = True
GETTEXT_DOMAIN = "domain"
UNITS_DIR = "units-dir"
JOBS_DIR = "jobs-dir"
WHITELISTS_DIR = "whitelists-dir"
DATA_DIR = "data-dir"
BIN_DIR = "bin-dir"
LOCALE_DIR = "locale-dir"
BASE_DIR = "base-dir"
LOAD_TIME = 42
def setUp(self):
self.provider = Provider1(
self.NAME, self.NAMESPACE, self.VERSION, self.DESCRIPTION, self.SECURE,
self.GETTEXT_DOMAIN, self.UNITS_DIR, self.JOBS_DIR,
self.WHITELISTS_DIR, self.DATA_DIR, self.BIN_DIR, self.LOCALE_DIR,
self.BASE_DIR,
# We are using dummy job definitions so let's not shout about those
# being invalid in each test
validate=False)
self.fake_context = self.provider.fake([])
self.fake_context.__enter__()
def tearDown(self):
self.fake_context.__exit__(None, None, None)
def test_repr(self):
self.assertEqual(
repr(self.provider),
"")
def test_name(self):
"""
Verify that Provider1.name attribute is set correctly
"""
self.assertEqual(self.provider.name, self.NAME)
def test_namespace(self):
"""
Verify that Provider1.namespace is computed correctly
"""
self.assertEqual(self.provider.namespace, self.NAMESPACE)
def test_version(self):
"""
Verify that Provider1.version attribute is set correctly
"""
self.assertEqual(self.provider.version, self.VERSION)
def test_description(self):
"""
Verify that Provider1.description attribute is set correctly
"""
self.assertEqual(self.provider.description, self.DESCRIPTION)
def test_secure(self):
"""
Verify that Provider1.secure attribute is set correctly
"""
self.assertEqual(self.provider.secure, self.SECURE)
def test_gettext_domain(self):
"""
Verify that Provider1.gettext_domain attribute is set correctly
"""
self.assertEqual(self.provider.gettext_domain, self.GETTEXT_DOMAIN)
def test_units_dir(self):
"""
Verify that Provider1.units_dir attribute is set correctly
"""
self.assertEqual(self.provider.units_dir, self.UNITS_DIR)
def test_jobs_dir(self):
"""
Verify that Provider1.jobs_dir attribute is set correctly
"""
self.assertEqual(self.provider.jobs_dir, self.JOBS_DIR)
def test_whitelists_dir(self):
"""
Verify that Provider1.whitelists_dir attribute is set correctly
"""
self.assertEqual(self.provider.whitelists_dir, self.WHITELISTS_DIR)
def test_data_dir(self):
"""
Verify that Provider1.data_dir attribute is set correctly
"""
self.assertEqual(self.provider.data_dir, self.DATA_DIR)
def test_bin_dir(self):
"""
Verify that Provider1.bin_dir attribute is set correctly
"""
self.assertEqual(self.provider.bin_dir, self.BIN_DIR)
def test_locale_dir(self):
"""
Verify that Provider1.locale_dir attribute is set correctly
"""
self.assertEqual(self.provider.locale_dir, self.LOCALE_DIR)
def test_base_dir(self):
"""
Verify that Provider1.base_dir attribute is set correctly
"""
self.assertEqual(self.provider.base_dir, self.BASE_DIR)
def test_CHECKBOX_SHARE(self):
"""
Verify that Provider1.CHECKBOX_SHARE is defined as the parent directory
of data_dir
"""
self.assertEqual(self.provider.CHECKBOX_SHARE, self.BASE_DIR)
def test_CHECKBOX_SHARE__without_base_dir(self):
"""
Verify that Provider1.CHECKBOX_SHARE is None without base_dir
"""
self.provider._base_dir = None
self.assertEqual(self.provider.CHECKBOX_SHARE, None)
def test_extra_PYTHONPATH(self):
"""
Verify that Provider1.extra_PYTHONPATH is always None
"""
self.assertIsNone(self.provider.extra_PYTHONPATH)
def test_fake(self):
"""
Verify that fake() redirects the provider to look for fake content.
"""
# Create unsorted job definitions that define a1, a2, a3 and a4
fake_content = [
PlugIn(self.JOBS_DIR + "/path/to/jobs1.txt", (
"id: a2\n"
"plugin: shell\n"
"command: true\n"
"\n"
"id: a1\n"
"plugin: shell\n"
"command: true\n"
)),
PlugIn(self.JOBS_DIR + "/path/to/jobs2.txt", (
"id: a3\n"
"plugin: shell\n"
"command: true\n"
"\n"
"id: a4\n"
"plugin: shell\n"
"command: true\n"
))]
fake_problems = [IOError("first problem"), OSError("second problem")]
with self.provider.fake(fake_content, fake_problems):
job_list = self.provider.job_list
problem_list = self.provider.problem_list
self.assertEqual(len(job_list), 4)
self.assertEqual(job_list[0].partial_id, "a1")
self.assertEqual(job_list[1].partial_id, "a2")
self.assertEqual(job_list[2].partial_id, "a3")
self.assertEqual(job_list[3].partial_id, "a4")
self.assertEqual(problem_list, fake_problems)
@mock.patch("plainbox.impl.secure.providers.v1.gettext")
def test_get_translated_data__typical(self, mock_gettext):
"""
Verify the runtime behavior of get_translated_data()
"""
self.provider._gettext_domain = "some-fake-domain"
retval = self.provider.get_translated_data("foo")
mock_gettext.dgettext.assert_called_with("some-fake-domain", "foo")
self.assertEqual(retval, mock_gettext.dgettext())
@mock.patch("plainbox.impl.secure.providers.v1.gettext")
def test_get_translated_data__empty_string(self, mock_gettext):
"""
Verify that get_translated_data('') returns '' without calling gettext
"""
self.provider._gettext_domain = "some-fake-domain"
retval = self.provider.get_translated_data("")
# This should never go through gettext
self.assertEqual(retval, "")
# And dgettext should never be called
self.assertEqual(mock_gettext.dgettext.call_args_list, [])
@mock.patch("plainbox.impl.secure.providers.v1.gettext")
def test_get_translated_data__None(self, mock_gettext):
"""
Verify that get_translated_data(None) returns None without calling gettext
"""
self.provider._gettext_domain = "some-fake-domain"
retval = self.provider.get_translated_data(None)
# This should never go through gettext
self.assertEqual(retval, None)
# And dgettext should never be called
self.assertEqual(mock_gettext.dgettext.call_args_list, [])
def test_tr_description(self):
"""
Verify that Provider1.tr_description() works as expected
"""
with mock.patch.object(self.provider, "get_translated_data") as mgtd:
retval = self.provider.tr_description()
# Ensure that get_translated_data() was called
mgtd.assert_called_once_with(self.provider.description)
# Ensure tr_description() returned its return value
self.assertEqual(retval, mgtd())
@mock.patch("plainbox.impl.secure.providers.v1.gettext")
def test_init_bindtextdomain__called(self, mock_gettext):
"""
Verify that Provider1() calls gettext.bindtextdomain() when a
locale_dir is provided
"""
Provider1(
self.NAME, self.NAMESPACE, self.VERSION, self.DESCRIPTION, self.SECURE,
self.GETTEXT_DOMAIN, self.UNITS_DIR, self.JOBS_DIR,
self.WHITELISTS_DIR, self.DATA_DIR, self.BIN_DIR, self.LOCALE_DIR,
self.BASE_DIR)
mock_gettext.bindtextdomain.assert_called_once_with(
self.GETTEXT_DOMAIN, self.LOCALE_DIR)
@mock.patch("plainbox.impl.secure.providers.v1.gettext")
def test_init_bindtextdomain__not_called(self, mock_gettext):
"""
Verify that Provider1() does not call gettext.bindtextdomain() when
locale_dir is None
"""
Provider1(
self.NAME, self.NAMESPACE, self.VERSION, self.DESCRIPTION, self.SECURE,
self.GETTEXT_DOMAIN, self.UNITS_DIR, self.JOBS_DIR,
self.WHITELISTS_DIR, self.DATA_DIR, self.BIN_DIR, locale_dir=None,
base_dir=self.BASE_DIR)
self.assertEqual(mock_gettext.bindtextdomain.call_args_list, [])
# plainbox-0.25/plainbox/impl/secure/test_rfc822.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Sylvain Pineau
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.secure.test_rfc822
================================
Test definitions for plainbox.impl.secure.rfc822 module
"""
from io import StringIO
from unittest import TestCase
from plainbox.impl.secure.origin import FileTextSource
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.origin import UnknownTextSource
from plainbox.impl.secure.rfc822 import RFC822Record
from plainbox.impl.secure.rfc822 import RFC822SyntaxError
from plainbox.impl.secure.rfc822 import load_rfc822_records
from plainbox.impl.secure.rfc822 import normalize_rfc822_value
class NormalizationTests(TestCase):
"""
Tests for normalize_rfc822_value()
"""
def test_smoke(self):
n = normalize_rfc822_value
self.assertEqual(n("foo"), "foo")
self.assertEqual(n(" foo"), "foo")
self.assertEqual(n("foo "), "foo")
self.assertEqual(n(" foo "), "foo")
self.assertEqual(n(" foo\n"
" bar\n"),
("foo\n"
"bar"))
def test_dot_handling(self):
n = normalize_rfc822_value
# single leading dot is stripped
self.assertEqual(n("foo\n"
".\n"
"bar\n"),
("foo\n"
"\n"
"bar"))
# the dot is stripped even if whitespace is present
self.assertEqual(n(" foo\n"
" .\n"
" bar\n"),
("foo\n"
"\n"
"bar"))
# Two dots don't invoke the special behaviour though
self.assertEqual(n(" foo\n"
" ..\n"
" bar\n"),
("foo\n"
"..\n"
"bar"))
# Regardless of whitespace
self.assertEqual(n("foo\n"
"..\n"
"bar\n"),
("foo\n"
"..\n"
"bar"))
class RFC822RecordTests(TestCase):
def setUp(self):
self.raw_data = {'key': ' value'}
self.data = {'key': 'value'}
self.origin = Origin(FileTextSource('file.txt'), 1, 1)
self.record = RFC822Record(self.data, self.origin, self.raw_data)
def test_raw_data(self):
self.assertEqual(self.record.raw_data, self.raw_data)
def test_data(self):
self.assertEqual(self.record.data, self.data)
def test_origin(self):
self.assertEqual(self.record.origin, self.origin)
def test_equality(self):
# Equality is compared by normalized data, the raw data doesn't count
other_raw_data = {'key': 'value '}
# This other raw data is actually different to the one we're going to
# test against
self.assertNotEqual(other_raw_data, self.raw_data)
# Let's make another record with different raw data
other_record = RFC822Record(self.data, self.origin, other_raw_data)
# The normalized data is identical
self.assertEqual(other_record.data, self.record.data)
# The raw data is not
self.assertNotEqual(other_record.raw_data, self.record.raw_data)
# The origin is the same (just a sanity check)
self.assertEqual(other_record.origin, self.record.origin)
# Let's look at the whole object, they should be equal
self.assertTrue(other_record == self.record)
self.assertFalse(other_record != self.record)
class RFC822ParserTests(TestCase):
loader = load_rfc822_records
def test_empty(self):
with StringIO("") as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 0)
def test_parsing_strings_preserves_newlines(self):
"""
Ensure the special string-input path: when a string is passed instead
of a stream, it is parsed the same way as a regular stream, i.e.
newlines are preserved.
"""
text = ("key:\n"
" line1\n"
" line2\n")
records_str = type(self).loader(text)
with StringIO(text) as stream:
records_stream = type(self).loader(stream)
self.assertEqual(records_str, records_stream)
def test_preserves_whitespace1(self):
with StringIO("key: value ") as stream:
records = type(self).loader(stream)
self.assertEqual(records[0].data, {'key': 'value'})
self.assertEqual(records[0].raw_data, {'key': 'value '})
def test_preserves_whitespace2(self):
with StringIO("key:\n value ") as stream:
records = type(self).loader(stream)
self.assertEqual(records[0].data, {'key': 'value'})
self.assertEqual(records[0].raw_data, {'key': 'value '})
def test_strips_newlines1(self):
with StringIO("key: value \n") as stream:
records = type(self).loader(stream)
self.assertEqual(records[0].data, {'key': 'value'})
self.assertEqual(records[0].raw_data, {'key': 'value \n'})
def test_strips_newlines2(self):
with StringIO("key:\n value \n") as stream:
records = type(self).loader(stream)
self.assertEqual(records[0].data, {'key': 'value'})
self.assertEqual(records[0].raw_data, {'key': 'value \n'})
def test_single_record(self):
with StringIO("key:value") as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': 'value'})
self.assertEqual(records[0].raw_data, {'key': 'value'})
def test_comments(self):
"""
Ensure that comments are stripped and don't break multi-line handling
"""
text = (
"# this is a comment\n"
"key:\n"
" multi-line value\n"
"# this is a comment\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': 'multi-line value'})
self.assertEqual(records[0].raw_data, {'key': 'multi-line value\n'})
def test_dot_escape(self):
"""
Ensure that the dot is not processed in any way
This part of the code is now handled by another layer.
"""
text = (
"key: something\n"
" .\n"
" .this\n"
" ..should\n"
" ...work\n"
)
expected_value = (
"something\n"
"\n"
".this\n"
"..should\n"
"...work"
)
expected_raw_value = (
"something\n"
".\n"
".this\n"
"..should\n"
"...work\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': expected_value})
self.assertEqual(records[0].raw_data, {'key': expected_raw_value})
def test_many_newlines(self):
text = (
"\n"
"\n"
"key1:value1\n"
"\n"
"\n"
"\n"
"key2:value2\n"
"\n"
"\n"
"key3:value3\n"
"\n"
"\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 3)
self.assertEqual(records[0].data, {'key1': 'value1'})
self.assertEqual(records[1].data, {'key2': 'value2'})
self.assertEqual(records[2].data, {'key3': 'value3'})
self.assertEqual(records[0].raw_data, {'key1': 'value1\n'})
self.assertEqual(records[1].raw_data, {'key2': 'value2\n'})
self.assertEqual(records[2].raw_data, {'key3': 'value3\n'})
def test_many_records(self):
text = (
"key1:value1\n"
"\n"
"key2:value2\n"
"\n"
"key3:value3\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 3)
self.assertEqual(records[0].data, {'key1': 'value1'})
self.assertEqual(records[1].data, {'key2': 'value2'})
self.assertEqual(records[2].data, {'key3': 'value3'})
self.assertEqual(records[0].raw_data, {'key1': 'value1\n'})
self.assertEqual(records[1].raw_data, {'key2': 'value2\n'})
self.assertEqual(records[2].raw_data, {'key3': 'value3\n'})
def test_multiline_value(self):
text = (
"key:\n"
" longer\n"
" value\n"
)
expected_value = (
"longer\n"
"value"
)
expected_raw_value = (
"longer\n"
"value\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': expected_value})
self.assertEqual(records[0].raw_data, {'key': expected_raw_value})
def test_multiline_value_with_space(self):
text = (
"key:\n"
" longer\n"
" .\n"
" value\n"
)
expected_value = (
"longer\n"
"\n"
"value"
)
expected_raw_value = (
"longer\n"
".\n"
"value\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': expected_value})
self.assertEqual(records[0].raw_data, {'key': expected_raw_value})
def test_multiline_value_with_space__deep_indent(self):
"""
Ensure that equally indented spaces are removed, even if multiple
spaces are used (more than one that is typically removed). The raw
value should have just the one space removed
"""
text = (
"key:\n"
" longer\n"
" .\n"
" value\n"
)
expected_value = (
"longer\n"
"\n"
"value"
)
# HINT: exactly as the original above but one space shorter
expected_raw_value = (
" longer\n"
" .\n"
" value\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': expected_value})
self.assertEqual(records[0].raw_data, {'key': expected_raw_value})
def test_multiline_value_with_period(self):
"""
Ensure that the dot is not processed in any way
This part of the code is now handled by another layer.
"""
text = (
"key:\n"
" longer\n"
" ..\n"
" value\n"
)
expected_value = (
"longer\n"
"..\n"
"value"
)
expected_raw_value = (
"longer\n"
"..\n"
"value\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': expected_value})
self.assertEqual(records[0].raw_data, {'key': expected_raw_value})
def test_many_multiline_values(self):
text = (
"key1:initial\n"
" longer\n"
" value 1\n"
"\n"
"key2:\n"
" longer\n"
" value 2\n"
)
expected_value1 = (
"initial\n"
"longer\n"
"value 1"
)
expected_value2 = (
"longer\n"
"value 2"
)
expected_raw_value1 = (
"initial\n"
"longer\n"
"value 1\n"
)
expected_raw_value2 = (
"longer\n"
"value 2\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 2)
self.assertEqual(records[0].data, {'key1': expected_value1})
self.assertEqual(records[1].data, {'key2': expected_value2})
self.assertEqual(records[0].raw_data, {'key1': expected_raw_value1})
self.assertEqual(records[1].raw_data, {'key2': expected_raw_value2})
def test_proper_parsing_nested_multiline(self):
text = (
"key:\n"
" nested: stuff\n"
" even:\n"
" more\n"
" text\n"
)
expected_value = (
"nested: stuff\n"
"even:\n"
" more\n"
" text"
)
expected_raw_value = (
"nested: stuff\n"
"even:\n"
" more\n"
" text\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': expected_value})
self.assertEqual(records[0].raw_data, {'key': expected_raw_value})
def test_proper_parsing_nested_multiline__deep_indent(self):
text = (
"key:\n"
" nested: stuff\n"
" even:\n"
" more\n"
" text\n"
)
expected_value = (
"nested: stuff\n"
"even:\n"
" more\n"
" text"
)
# HINT: exactly as the original above but one space shorter
expected_raw_value = (
" nested: stuff\n"
" even:\n"
" more\n"
" text\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': expected_value})
self.assertEqual(records[0].raw_data, {'key': expected_raw_value})
def test_irrelevant_whitespace(self):
text = "key : value "
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': 'value'})
self.assertEqual(records[0].raw_data, {'key': 'value '})
def test_relevant_whitespace(self):
text = (
"key:\n"
" value\n"
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': 'value'})
self.assertEqual(records[0].raw_data, {'key': 'value\n'})
def test_bad_multiline(self):
text = " extra value"
with StringIO(text) as stream:
with self.assertRaises(RFC822SyntaxError) as call:
type(self).loader(stream)
self.assertEqual(call.exception.msg, "Unexpected multi-line value")
def test_garbage(self):
text = "garbage"
with StringIO(text) as stream:
with self.assertRaises(RFC822SyntaxError) as call:
type(self).loader(stream)
self.assertEqual(
call.exception.msg,
"Unexpected non-empty line: 'garbage'")
def test_syntax_error(self):
text = "key1 = value1"
with StringIO(text) as stream:
with self.assertRaises(RFC822SyntaxError) as call:
type(self).loader(stream)
self.assertEqual(
call.exception.msg,
"Unexpected non-empty line: 'key1 = value1'")
def test_duplicate_error(self):
text = (
"key1: value1\n"
"key1: value2\n"
)
with StringIO(text) as stream:
with self.assertRaises(RFC822SyntaxError) as call:
type(self).loader(stream)
self.assertEqual(call.exception.msg, (
"Job has a duplicate key 'key1' with old value 'value1\\n'"
" and new value 'value2\\n'"))
def test_origin_from_stream_is_Unknown(self):
"""
verify that gen_rfc822_records() uses origin instances with source
equal to UnknownTextSource, when no explicit source is provided and the
stream has no name to infer a FileTextSource() from.
"""
expected_origin = Origin(UnknownTextSource(), 1, 1)
with StringIO("key:value") as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': 'value'})
self.assertEqual(records[0].origin, expected_origin)
def test_origin_from_filename_is_filename(self):
# If the test's origin has a filename, we need a valid origin
# with proper data.
# We're faking the name by using a StringIO subclass with a
# name property, which is how rfc822 gets that data.
expected_origin = Origin(FileTextSource("file.txt"), 1, 1)
with NamedStringIO("key:value",
fake_filename="file.txt") as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 1)
self.assertEqual(records[0].data, {'key': 'value'})
self.assertEqual(records[0].origin, expected_origin)
def test_field_offset_map_is_computed(self):
text = (
"a: value-a\n" # offset 0
"b: value-b\n" # offset 1
"# comment\n" # offset 2
"c:\n" # offset 3
" value-c.1\n" # offset 4
" value-c.2\n" # offset 5
"\n"
"d: value-d\n" # offset 0
)
with StringIO(text) as stream:
records = type(self).loader(stream)
self.assertEqual(len(records), 2)
self.assertEqual(records[0].data, {
'a': 'value-a',
'b': 'value-b',
'c': 'value-c.1\nvalue-c.2',
})
self.assertEqual(records[0].field_offset_map, {
'a': 0,
'b': 1,
'c': 4,
})
self.assertEqual(records[1].data, {
'd': 'value-d',
})
self.assertEqual(records[1].field_offset_map, {
'd': 0,
})
class NamedStringIO(StringIO):
"""
Subclass of StringIO with a name attribute.
Use only for testing purposes, it's not guaranteed to be 100%
compatible with StringIO.
"""
def __init__(self, string, fake_filename=None):
super(NamedStringIO, self).__init__(string)
self._fake_filename = fake_filename
@property
def name(self):
return self._fake_filename
class RFC822WriterTests(TestCase):
"""
Tests for the :meth:`RFC822Record.dump()` method.
"""
def test_single_record(self):
with StringIO() as stream:
RFC822Record({'key': 'value'}).dump(stream)
self.assertEqual(stream.getvalue(), "key: value\n\n")
def test_multiple_record(self):
with StringIO() as stream:
RFC822Record({'key1': 'value1', 'key2': 'value2'}).dump(stream)
self.assertIn(
stream.getvalue(), (
"key1: value1\nkey2: value2\n\n",
"key2: value2\nkey1: value1\n\n"))
def test_multiline_value(self):
text = (
"key:\n"
" longer\n"
" value\n\n"
)
with StringIO() as stream:
RFC822Record({'key': 'longer\nvalue'}).dump(stream)
self.assertEqual(stream.getvalue(), text)
def test_multiline_value_with_space(self):
text = (
"key:\n"
" longer\n"
" .\n"
" value\n\n"
)
with StringIO() as stream:
RFC822Record({'key': 'longer\n\nvalue'}).dump(stream)
self.assertEqual(stream.getvalue(), text)
def test_multiline_value_with_period(self):
text = (
"key:\n"
" longer\n"
" ..\n"
" value\n\n"
)
with StringIO() as stream:
RFC822Record({'key': 'longer\n.\nvalue'}).dump(stream)
self.assertEqual(stream.getvalue(), text)
def test_type_error(self):
with StringIO() as stream:
with self.assertRaises(AttributeError):
RFC822Record(['key', 'value']).dump(stream)
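# The dump-side escaping exercised above can be sketched like this (an
# illustrative helper only -- the real code is RFC822Record.dump() in
# plainbox.impl.secure.rfc822): blank lines inside a multi-line value are
# written as a single ' .', and lines starting with a dot gain an extra dot.

```python
def sketch_dump_value(key, value):
    """Render one key/value pair in the record format the tests expect."""
    if "\n" not in value:
        # single-line values go on the same line as the key
        return "{}: {}\n\n".format(key, value)
    lines = ["{}:".format(key)]
    for line in value.splitlines():
        if line == "":
            lines.append(" .")          # blank line -> escaped as '.'
        elif line.startswith("."):
            lines.append(" ." + line)   # literal dot -> doubled
        else:
            lines.append(" " + line)
    return "\n".join(lines) + "\n\n"
```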
class RFC822SyntaxErrorTests(TestCase):
"""
Tests for RFC822SyntaxError class
"""
def test_hash(self):
"""
verify that RFC822SyntaxError is hashable
"""
self.assertEqual(
hash(RFC822SyntaxError("file.txt", 10, "msg")),
hash(RFC822SyntaxError("file.txt", 10, "msg")))
# plainbox-0.25/plainbox/impl/secure/test_config.py
# This file is part of Checkbox.
#
# Copyright 2013, 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.secure.test_config
================================
Test definitions for plainbox.impl.secure.config module
"""
from io import StringIO
from unittest import TestCase
import configparser
from plainbox.impl.secure.config import ChoiceValidator
from plainbox.impl.secure.config import ConfigMetaData
from plainbox.impl.secure.config import KindValidator
from plainbox.impl.secure.config import NotEmptyValidator
from plainbox.impl.secure.config import NotUnsetValidator
from plainbox.impl.secure.config import PatternValidator
from plainbox.impl.secure.config import PlainBoxConfigParser, Config
from plainbox.impl.secure.config import ValidationError
from plainbox.impl.secure.config import Variable, Section, Unset
from plainbox.impl.secure.config import understands_Unset
from plainbox.vendor import mock
class UnsetTests(TestCase):
def test_str(self):
self.assertEqual(str(Unset), "unset")
def test_repr(self):
self.assertEqual(repr(Unset), "Unset")
def test_bool(self):
self.assertEqual(bool(Unset), False)
class understands_Unset_Tests(TestCase):
def test_func(self):
@understands_Unset
def func():
pass
self.assertTrue(hasattr(func, 'understands_Unset'))
self.assertTrue(getattr(func, 'understands_Unset'))
def test_cls(self):
@understands_Unset
class cls:
pass
self.assertTrue(hasattr(cls, 'understands_Unset'))
self.assertTrue(getattr(cls, 'understands_Unset'))
class VariableTests(TestCase):
def test_name(self):
v1 = Variable()
self.assertIsNone(v1.name)
v2 = Variable('var')
self.assertEqual(v2.name, 'var')
v3 = Variable(name='var')
self.assertEqual(v3.name, 'var')
def test_section(self):
v1 = Variable()
self.assertEqual(v1.section, 'DEFAULT')
v2 = Variable(section='foo')
self.assertEqual(v2.section, 'foo')
def test_kind(self):
v1 = Variable(kind=bool)
self.assertIs(v1.kind, bool)
v2 = Variable(kind=int)
self.assertIs(v2.kind, int)
v3 = Variable(kind=float)
self.assertIs(v3.kind, float)
v4 = Variable(kind=str)
self.assertIs(v4.kind, str)
v5 = Variable()
self.assertIs(v5.kind, str)
with self.assertRaises(ValueError):
Variable(kind=list)
def test_validator_list__default(self):
"""
verify that each Variable has a validator_list and that by default,
that list contains a KindValidator as the first element
"""
self.assertEqual(Variable().validator_list, [KindValidator])
def test_validator_list__explicit(self):
"""
verify that each Variable has a validator_list and that, if
customized, the list contains the custom validators, preceded by
the implicit KindValidator object
"""
def DummyValidator(variable, new_value):
""" Dummy validator for the test below"""
pass
var = Variable(validator_list=[DummyValidator])
self.assertEqual(var.validator_list, [KindValidator, DummyValidator])
def test_validator_list__with_NotUnsetValidator(self):
"""
verify that each Variable has a validator_list and that, if
customized, and if using NotUnsetValidator it will take precedence
over all other validators, including the implicit KindValidator
"""
var = Variable(validator_list=[NotUnsetValidator()])
self.assertEqual(
var.validator_list, [NotUnsetValidator(), KindValidator])
class SectionTests(TestCase):
def test_name(self):
s1 = Section()
self.assertIsNone(s1.name)
s2 = Section('sec')
self.assertEqual(s2.name, 'sec')
s3 = Section(name='sec')
self.assertEqual(s3.name, 'sec')
class ConfigTests(TestCase):
def test_Meta_present(self):
class TestConfig(Config):
pass
self.assertTrue(hasattr(TestConfig, 'Meta'))
def test_Meta_base_cls(self):
class TestConfig(Config):
pass
self.assertTrue(issubclass(TestConfig.Meta, ConfigMetaData))
class HelperMeta:
pass
class TestConfigWMeta(Config):
Meta = HelperMeta
self.assertTrue(issubclass(TestConfigWMeta.Meta, ConfigMetaData))
self.assertTrue(issubclass(TestConfigWMeta.Meta, HelperMeta))
def test_Meta_variable_list(self):
class TestConfig(Config):
v1 = Variable()
v2 = Variable()
self.assertEqual(
TestConfig.Meta.variable_list,
[TestConfig.v1, TestConfig.v2])
def test_variable_smoke(self):
class TestConfig(Config):
v = Variable()
conf = TestConfig()
self.assertIs(conf.v, Unset)
conf.v = "value"
self.assertEqual(conf.v, "value")
del conf.v
self.assertIs(conf.v, Unset)
def _get_featureful_config(self):
# define a featureful config class
class TestConfig(Config):
v1 = Variable()
v2 = Variable(section="v23_section")
v3 = Variable(section="v23_section")
v_unset = Variable()
v_bool = Variable(section="type_section", kind=bool)
v_int = Variable(section="type_section", kind=int)
v_float = Variable(section="type_section", kind=float)
v_str = Variable(section="type_section", kind=str)
s = Section()
conf = TestConfig()
# assign a value to each variable, except v_unset
conf.v1 = "v1 value"
conf.v2 = "v2 value"
conf.v3 = "v3 value"
conf.v_bool = True
conf.v_int = -7
conf.v_float = 1.5
conf.v_str = "hi"
# assign value to the section
conf.s = {"a": 1, "b": 2}
return conf
def test_get_parser_obj(self):
"""
verify that Config.get_parser_obj() properly writes all the data to the
ConfigParser object.
"""
conf = self._get_featureful_config()
parser = conf.get_parser_obj()
# verify that section and section-less variables work
self.assertEqual(parser.get("DEFAULT", "v1"), "v1 value")
self.assertEqual(parser.get("v23_section", "v2"), "v2 value")
self.assertEqual(parser.get("v23_section", "v3"), "v3 value")
# verify that unset variable is not getting set to anything
with self.assertRaises(configparser.Error):
parser.get("DEFAULT", "v_unset")
# verify that various types got converted correctly and still resolve
# to correct typed values
self.assertEqual(parser.get("type_section", "v_bool"), "True")
self.assertEqual(parser.getboolean("type_section", "v_bool"), True)
self.assertEqual(parser.get("type_section", "v_int"), "-7")
self.assertEqual(parser.getint("type_section", "v_int"), -7)
self.assertEqual(parser.get("type_section", "v_float"), "1.5")
self.assertEqual(parser.getfloat("type_section", "v_float"), 1.5)
self.assertEqual(parser.get("type_section", "v_str"), "hi")
# verify that section work okay
self.assertEqual(parser.get("s", "a"), "1")
self.assertEqual(parser.get("s", "b"), "2")
def test_write(self):
"""
verify that Config.write() works
"""
conf = self._get_featureful_config()
with StringIO() as stream:
conf.write(stream)
self.assertEqual(stream.getvalue(), (
"[DEFAULT]\n"
"v1 = v1 value\n"
"\n"
"[v23_section]\n"
"v2 = v2 value\n"
"v3 = v3 value\n"
"\n"
"[type_section]\n"
"v_bool = True\n"
"v_float = 1.5\n"
"v_int = -7\n"
"v_str = hi\n"
"\n"
"[s]\n"
"a = 1\n"
"b = 2\n"
"\n"))
def test_section_smoke(self):
class TestConfig(Config):
s = Section()
conf = TestConfig()
self.assertIs(conf.s, Unset)
with self.assertRaises(TypeError):
conf.s['key'] = "key-value"
conf.s = {}
self.assertEqual(conf.s, {})
conf.s['key'] = "key-value"
self.assertEqual(conf.s['key'], "key-value")
del conf.s
self.assertIs(conf.s, Unset)
def test_read_string(self):
class TestConfig(Config):
v = Variable()
conf = TestConfig()
conf.read_string(
"[DEFAULT]\n"
"v = 1")
self.assertEqual(conf.v, "1")
self.assertEqual(len(conf.problem_list), 0)
def test_read_string_calls_validate_whole(self):
"""
verify that Config.read_string() calls validate_whole()
"""
conf = Config()
with mock.patch.object(conf, 'validate_whole') as mocked_validate:
conf.read_string('')
mocked_validate.assert_called_once_with()
def test_read_calls_validate_whole(self):
"""
verify that Config.read() calls validate_whole()
"""
conf = Config()
with mock.patch.object(conf, 'validate_whole') as mocked_validate:
conf.read([])
mocked_validate.assert_called_once_with()
def test_read__handles_errors_from_validate_whole(self):
"""
verify that Config.read() collects errors from validate_whole().
"""
class TestConfig(Config):
v = Variable()
def validate_whole(self):
raise ValidationError(TestConfig.v, self.v, "v is evil")
conf = TestConfig()
conf.read([])
self.assertEqual(len(conf.problem_list), 1)
self.assertEqual(conf.problem_list[0].variable, TestConfig.v)
self.assertEqual(conf.problem_list[0].new_value, Unset)
self.assertEqual(conf.problem_list[0].message, "v is evil")
def test_read_string__does_not_ignore_nonmentioned_variables(self):
class TestConfig(Config):
v = Variable(validator_list=[NotUnsetValidator()])
conf = TestConfig()
conf.read_string("")
# Because Unset is the default, sadly
self.assertEqual(conf.v, Unset)
# But there was a problem noticed
self.assertEqual(len(conf.problem_list), 1)
self.assertEqual(conf.problem_list[0].variable, TestConfig.v)
self.assertEqual(conf.problem_list[0].new_value, Unset)
self.assertEqual(conf.problem_list[0].message,
"must be set to something")
def test_read_string__handles_errors_from_validate_whole(self):
"""
verify that Config.read_string() collects errors from validate_whole().
"""
class TestConfig(Config):
v = Variable()
def validate_whole(self):
raise ValidationError(TestConfig.v, self.v, "v is evil")
conf = TestConfig()
conf.read_string("")
self.assertEqual(len(conf.problem_list), 1)
self.assertEqual(conf.problem_list[0].variable, TestConfig.v)
self.assertEqual(conf.problem_list[0].new_value, Unset)
self.assertEqual(conf.problem_list[0].message, "v is evil")
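# The declarative pattern the tests above rely on -- Variable descriptors
# collected on a Config class -- can be sketched as follows. This is an
# illustrative reduction, not the real plainbox.impl.secure.config classes
# (which use a metaclass and predate __set_name__, available since
# Python 3.6):

```python
class SketchVariable:
    """A minimal descriptor standing in for config.Variable."""

    def __set_name__(self, owner, name):
        # record the attribute name this descriptor was bound to
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self  # class access yields the descriptor itself
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value


class SketchConfig:
    v1 = SketchVariable()
    v2 = SketchVariable()
```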
class ConfigMetaDataTests(TestCase):
def test_filename_list(self):
self.assertEqual(ConfigMetaData.filename_list, [])
def test_variable_list(self):
self.assertEqual(ConfigMetaData.variable_list, [])
def test_section_list(self):
self.assertEqual(ConfigMetaData.section_list, [])
class PlainBoxConfigParserTest(TestCase):
def test_parser(self):
conf_file = StringIO("[testsection]\nlower = low\nUPPER = up")
config = PlainBoxConfigParser()
config.read_file(conf_file)
self.assertEqual(['testsection'], config.sections())
all_keys = list(config['testsection'].keys())
self.assertTrue('lower' in all_keys)
self.assertTrue('UPPER' in all_keys)
self.assertFalse('upper' in all_keys)
class KindValidatorTests(TestCase):
class _Config(Config):
var_bool = Variable(kind=bool)
var_int = Variable(kind=int)
var_float = Variable(kind=float)
var_str = Variable(kind=str)
def test_error_msg(self):
"""
verify that KindValidator() has correct error message for each type
"""
bad_value = object()
self.assertEqual(
KindValidator(self._Config.var_bool, bad_value),
"expected a boolean")
self.assertEqual(
KindValidator(self._Config.var_int, bad_value),
"expected an integer")
self.assertEqual(
KindValidator(self._Config.var_float, bad_value),
"expected a floating point number")
self.assertEqual(
KindValidator(self._Config.var_str, bad_value),
"expected a string")
class PatternValidatorTests(TestCase):
class _Config(Config):
var = Variable()
def test_smoke(self):
"""
verify that PatternValidator works as intended
"""
validator = PatternValidator("foo.+")
self.assertEqual(validator(self._Config.var, "foobar"), None)
self.assertEqual(
validator(self._Config.var, "foo"),
"does not match pattern: 'foo.+'")
def test_comparison_works(self):
self.assertTrue(PatternValidator('foo') == PatternValidator('foo'))
self.assertTrue(PatternValidator('foo') != PatternValidator('bar'))
self.assertTrue(PatternValidator('foo') != object())
class ChoiceValidatorTests(TestCase):
class _Config(Config):
var = Variable()
def test_smoke(self):
"""
verify that ChoiceValidator works as intended
"""
validator = ChoiceValidator(["foo", "bar"])
self.assertEqual(validator(self._Config.var, "foo"), None)
self.assertEqual(
validator(self._Config.var, "omg"), "must be one of foo, bar")
def test_comparison_works(self):
self.assertTrue(ChoiceValidator(["a"]) == ChoiceValidator(["a"]))
self.assertTrue(ChoiceValidator(["a"]) != ChoiceValidator(["b"]))
self.assertTrue(ChoiceValidator(["a"]) != object())
class NotUnsetValidatorTests(TestCase):
"""
Tests for the NotUnsetValidator class
"""
class _Config(Config):
var = Variable()
def test_understands_Unset(self):
"""
verify that Unset can be handled at all
"""
self.assertTrue(getattr(NotUnsetValidator, "understands_Unset"))
def test_rejects_unset_values(self):
"""
verify that Unset variables are rejected
"""
validator = NotUnsetValidator()
self.assertEqual(
validator(self._Config.var, Unset), "must be set to something")
def test_accepts_other_values(self):
"""
verify that other values are accepted
"""
validator = NotUnsetValidator()
self.assertIsNone(validator(self._Config.var, None))
self.assertIsNone(validator(self._Config.var, "string"))
self.assertIsNone(validator(self._Config.var, 15))
def test_supports_custom_message(self):
"""
verify that custom message is used
"""
validator = NotUnsetValidator("value required!")
self.assertEqual(
validator(self._Config.var, Unset), "value required!")
def test_comparison_works(self):
"""
verify that comparison works as expected
"""
self.assertTrue(NotUnsetValidator() == NotUnsetValidator())
self.assertTrue(NotUnsetValidator("?") == NotUnsetValidator("?"))
self.assertTrue(NotUnsetValidator() != NotUnsetValidator("?"))
self.assertTrue(NotUnsetValidator() != object())
class NotEmptyValidatorTests(TestCase):
class _Config(Config):
var = Variable()
def test_rejects_empty_values(self):
validator = NotEmptyValidator()
self.assertEqual(validator(self._Config.var, ""), "cannot be empty")
def test_supports_custom_message(self):
validator = NotEmptyValidator("name required!")
self.assertEqual(validator(self._Config.var, ""), "name required!")
def test_isnt_broken(self):
validator = NotEmptyValidator()
self.assertEqual(validator(self._Config.var, "some value"), None)
def test_comparison_works(self):
self.assertTrue(NotEmptyValidator() == NotEmptyValidator())
self.assertTrue(NotEmptyValidator("?") == NotEmptyValidator("?"))
self.assertTrue(NotEmptyValidator() != NotEmptyValidator("?"))
self.assertTrue(NotEmptyValidator() != object())
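The tests above all exercise the same validator protocol: a validator is a callable invoked as ``validator(variable, value)`` that returns ``None`` when the value is acceptable and an error string otherwise. A minimal, self-contained sketch of that protocol (the class below is an illustrative stand-in written for this example, not the plainbox implementation the tests import):

```python
# Illustrative sketch of the validator protocol exercised by the tests
# above: call the validator with (variable, value); None means "valid",
# a string is the error message. Messages mirror the tests.

class ChoiceValidator:
    """Reject values outside a fixed set of choices."""

    def __init__(self, choice_list):
        self.choice_list = list(choice_list)

    def __call__(self, variable, new_value):
        # Returning None (implicitly) signals that the value is fine.
        if new_value not in self.choice_list:
            return "must be one of {}".format(", ".join(self.choice_list))

    def __eq__(self, other):
        # Value-based comparison, as the test_comparison_works tests expect.
        if isinstance(other, ChoiceValidator):
            return self.choice_list == other.choice_list
        return NotImplemented


validator = ChoiceValidator(["foo", "bar"])
assert validator(None, "foo") is None
assert validator(None, "omg") == "must be one of foo, bar"
assert ChoiceValidator(["a"]) == ChoiceValidator(["a"])
```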
plainbox-0.25/plainbox/impl/xscanners.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
import logging
from plainbox.vendor.enum import Enum, unique
__all__ = ['WordScanner']
_logger = logging.getLogger("plainbox.xscanners")
class ScannerBase:
def __init__(self, text):
self._text = text
self._text_len = len(text)
self._pos = 0
def __iter__(self):
return self
def __next__(self):
token, lexeme = self.get_token()
if token is self.TOKEN_EOF:
raise StopIteration
return token, lexeme
def get_token(self):
"""
Get the next pair (token, lexeme)
"""
_logger.debug("inner: get_token()")
state = self.STATE_START
lexeme = ""
stack = [self.STATE_BAD]
while state is not self.STATE_ERROR:
_logger.debug("inner: ------ (next loop)")
_logger.debug("inner: text: %r", self._text)
_logger.debug(" %s^ (pos: %d of %d)",
'-' * self._pos, self._pos, self._text_len)
char = self._next_char()
_logger.debug("inner: char: %r", char)
_logger.debug("inner: state: %s", state)
_logger.debug("inner: stack: %s", stack)
_logger.debug("inner: lexeme: %r", lexeme)
lexeme += char
if state.is_accepting:
stack[:] = ()
_logger.debug("inner: rollback stack cleared")
stack.append(state)
state = self._next_state_for(state, char)
_logger.debug("inner: state becomes %s", state)
if state is self.STATE_ERROR:
_logger.debug("inner/rollback: REACHED ERROR STATE, ROLLING BACK")
while (not state.is_accepting and state is not self.STATE_BAD):
state = stack.pop()
_logger.debug("inner/rollback: popped new state %s", state)
lexeme = lexeme[:-1]
_logger.debug("inner/rollback: lexeme trimmed to: %r", lexeme)
self._rollback()
_logger.debug("inner/rollback: DONE")
lexeme = lexeme.rstrip("\0")
lexeme = state.modify_lexeme(lexeme)
if state.is_accepting:
_logger.debug(
"inner: accepting/returning: %r, %r", state.token, lexeme)
return state.token, lexeme
else:
_logger.debug("inner: not accepting: %r", state)
return state.token, None
def _rollback(self):
if self._pos > 0:
self._pos -= 1
else:
assert False, "rolling back before start of input?"
def _next_char(self):
assert self._pos >= 0
if self._pos < self._text_len:
char = self._text[self._pos]
self._pos += 1
return char
else:
# NOTE: this solves a lot of problems
self._pos = self._text_len + 1
return '\0'
def _next_state_for(self, state, char):
raise NotImplementedError
@unique
class WordScannerToken(Enum):
""" Token kind produced by :class:`WordScanner` """
INVALID = -1
EOF = 0
WORD = 1
SPACE = 2
COMMENT = 3
COMMA = 4
EQUALS = 5
@property
def is_irrelevant(self):
return self in (WordScannerToken.SPACE, WordScannerToken.COMMENT)
@unique
class WordScannerState(Enum):
""" State of the :class:`WordScanner` """
BAD = -1 # the bad state, used only once as a canary
START = 0 # the initial state
EOF = 1 # state for end-of-input
ERROR = 2 # state for all kinds of bad input
BARE_WORD = 3 # state when we're seeing bare words
QUOTED_WORD_INNER = 4 # state when we're seeing "-quoted word
QUOTED_WORD_END = 5
SPACE = 6 # state when we're seeing spaces
COMMENT_INNER = 7 # state when we're seeing comments
COMMENT_END = 8 # state when we've seen '\n' or '\0'
COMMA = 9 # state where we saw a comma
EQUALS = 10 # state where we saw the equals sign
@property
def is_accepting(self):
return self in WordScannerState._ACCEPTING
def modify_lexeme(self, lexeme):
""" Get the value of a given lexeme """
if self is WordScannerState.QUOTED_WORD_END:
return lexeme[1:-1]
else:
return lexeme
@property
def token(self):
""" Get the token corresponding to this state """
return WordScannerState._TOKEN_MAP.get(self, WordScannerToken.INVALID)
# Inject some helper attributes into WordScannerState
WordScannerState._ACCEPTING = frozenset([
WordScannerState.EOF, WordScannerState.BARE_WORD,
WordScannerState.QUOTED_WORD_END, WordScannerState.SPACE,
WordScannerState.COMMENT_END, WordScannerState.COMMA,
WordScannerState.EQUALS
])
WordScannerState._TOKEN_MAP = {
WordScannerState.EOF: WordScannerToken.EOF,
WordScannerState.BARE_WORD: WordScannerToken.WORD,
WordScannerState.QUOTED_WORD_END: WordScannerToken.WORD,
WordScannerState.SPACE: WordScannerToken.SPACE,
WordScannerState.COMMENT_END: WordScannerToken.COMMENT,
WordScannerState.COMMA: WordScannerToken.COMMA,
WordScannerState.EQUALS: WordScannerToken.EQUALS,
}
class WordScanner(ScannerBase):
"""
Support class for tokenizing a stream of words with shell comments.
A word is anything that's not whitespace (of any kind). Since everything
other than whitespace is a word, there is no way to break the scanner and
end up in an error state. Comments are introduced with the ``#`` character
and run to the end of the line.
Iterating over the scanner will produce subsequent pairs of (token, lexeme)
where token is one of the constants from :class:`WordScannerToken` and
lexeme is the actual text (value) of the token.
>>> for token, lexeme in WordScanner('ala ma kota'):
... print(lexeme)
ala
ma
kota
Empty input produces an EOF token:
>>> WordScanner('').get_token()
(<WordScannerToken.EOF: 0>, '')
Words with white space can be quoted using double quotes:
>>> WordScanner('"quoted word"').get_token()
(<WordScannerToken.WORD: 1>, 'quoted word')
White space is ignored and is not returned in any way (normally):
>>> WordScanner('\\n\\t\\v\\rword').get_token()
(<WordScannerToken.WORD: 1>, 'word')
Though if you *really* want to, you can see everything by passing the
``ignore_irrelevant=False`` argument to :meth:`get_token()`:
>>> scanner = WordScanner('\\n\\t\\v\\rword')
>>> while True:
... token, lexeme = scanner.get_token(ignore_irrelevant=False)
... print('{:6} {!a}'.format(token.name, lexeme))
... if token == scanner.TOKEN_EOF:
... break
SPACE '\\n\\t\\x0b\\r'
WORD 'word'
EOF ''
The scanner has special provisions for recognizing some punctuation; this
includes the comma and the equals sign, as shown below:
>>> for token, lexeme in WordScanner("k1=v1, k2=v2"):
... print('{:6} {!a}'.format(token.name, lexeme))
WORD 'k1'
EQUALS '='
WORD 'v1'
COMMA ','
WORD 'k2'
EQUALS '='
WORD 'v2'
Since both can appear in regular expressions, they can be quoted to prevent
being recognized for their special meaning:
>>> for token, lexeme in WordScanner('k1="v1, k2=v2"'):
... print('{:6} {!a}'.format(token.name, lexeme))
WORD 'k1'
EQUALS '='
WORD 'v1, k2=v2'
"""
STATE_ERROR = WordScannerState.ERROR
STATE_START = WordScannerState.START
STATE_BAD = WordScannerState.BAD
TOKEN_EOF = WordScannerToken.EOF
TokenEnum = WordScannerToken
def get_token(self, ignore_irrelevant=True):
while True:
token, lexeme = super().get_token()
_logger.debug("outer: GOT %r %r", token, lexeme)
if ignore_irrelevant and token.is_irrelevant:
_logger.debug("outer: CONTINUING (irrelevant token found)")
continue
break
return token, lexeme
def _next_state_for(self, state, char):
if state is WordScannerState.START:
if char.isspace():
return WordScannerState.SPACE
elif char == '\0':
return WordScannerState.EOF
elif char == '#':
return WordScannerState.COMMENT_INNER
elif char == '"':
return WordScannerState.QUOTED_WORD_INNER
elif char == ',':
return WordScannerState.COMMA
elif char == '=':
return WordScannerState.EQUALS
else:
return WordScannerState.BARE_WORD
elif state is WordScannerState.SPACE:
if char.isspace():
return WordScannerState.SPACE
elif state is WordScannerState.BARE_WORD:
if char.isspace() or char in '\0#,=':
return WordScannerState.ERROR
else:
return WordScannerState.BARE_WORD
elif state is WordScannerState.COMMENT_INNER:
if char == '\n' or char == '\0':
return WordScannerState.COMMENT_END
else:
return WordScannerState.COMMENT_INNER
elif state is WordScannerState.QUOTED_WORD_INNER:
if char == '"':
return WordScannerState.QUOTED_WORD_END
if char == '\x00':
return WordScannerState.ERROR
else:
return WordScannerState.QUOTED_WORD_INNER
elif state is WordScannerState.QUOTED_WORD_END:
pass
elif state is WordScannerState.COMMENT_END:
pass
elif state is WordScannerState.COMMA:
pass
elif state is WordScannerState.EQUALS:
pass
return WordScannerState.ERROR
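``ScannerBase.get_token()`` above implements maximal-munch tokenization with rollback: it keeps consuming characters past the last accepting state and, on reaching ERROR, pops the state stack, trims the lexeme, and rewinds the input until it is back at an accepting state. A stripped-down sketch of the same loop, using plain strings as illustrative stand-ins for the plainbox state and token enums:

```python
# Stripped-down illustration of the maximal-munch + rollback loop used
# by ScannerBase above. States are plain strings, not the real enums.

ACCEPTING = {"WORD", "SPACE", "EOF"}


def next_state(state, char):
    # Transition table for a trivial word/space tokenizer.
    if state == "START":
        if char == "\0":
            return "EOF"
        if char.isspace():
            return "SPACE"
        return "WORD"
    if state == "SPACE" and char.isspace():
        return "SPACE"
    if state == "WORD" and not char.isspace() and char != "\0":
        return "WORD"
    return "ERROR"


def get_token(text, pos):
    """Return (token, lexeme, new_pos), rolling back on error."""
    state, lexeme, stack = "START", "", ["BAD"]
    while state != "ERROR":
        char = text[pos] if pos < len(text) else "\0"  # '\0' marks EOF
        pos += 1
        lexeme += char
        if state in ACCEPTING:
            stack.clear()  # never roll back past an accepting state
        stack.append(state)
        state = next_state(state, char)
    # Roll back to the most recent accepting state.
    while state not in ACCEPTING and state != "BAD":
        state = stack.pop()
        lexeme = lexeme[:-1]
        pos -= 1
    return state, lexeme.rstrip("\0"), pos


token, lexeme, pos = get_token("ala ma", 0)
assert (token, lexeme) == ("WORD", "ala")
```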
plainbox-0.25/plainbox/impl/test_box.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.test_box
======================
Test definitions for plainbox.impl.box module
"""
from collections import defaultdict
from inspect import cleandoc
from io import TextIOWrapper
from unittest import TestCase
import warnings
from plainbox import __version__ as version
from plainbox.abc import IProvider1
from plainbox.impl.box import main
from plainbox.impl.box import stubbox_main
from plainbox.impl.clitools import ToolBase
from plainbox.impl.commands.checkbox import CheckBoxInvocationMixIn
from plainbox.impl.testing_utils import MockJobDefinition, suppress_warnings
from plainbox.testing_utils.io import TestIO
from plainbox.vendor.mock import Mock
def setUpModule():
warnings.filterwarnings(
'ignore', 'validate is deprecated since version 0.11')
def tearDownModule():
warnings.resetwarnings()
def mock_whitelist(name, text, filename):
"""
Create a mocked whitelist for
CheckBoxInvocationMixIn._get_matching_job_list(). Specifically
for ``ns.whitelists`` as passed to that function.
:param name:
Name of the mocked object, helps in debugging
:param text:
Full text of the whitelist
:param filename:
Filename of the whitelist file
"""
whitelist = Mock(spec=TextIOWrapper, name=name)
whitelist.name = filename
whitelist.read.return_value = text
return whitelist
class MiscTests(TestCase):
def setUp(self):
self.provider1 = Mock(spec=IProvider1)
self.job_foo = MockJobDefinition(id='foo', provider=self.provider1)
self.job_bar = MockJobDefinition(id='bar', provider=self.provider1)
self.job_baz = MockJobDefinition(id='baz', provider=self.provider1)
self.provider1.whitelist_list = []
self.provider1.id_map = defaultdict(
list, foo=[self.job_foo], bar=[self.job_bar], baz=[self.job_baz])
self.provider1.unit_list = [self.job_foo, self.job_bar, self.job_baz]
self.config = Mock(name='config')
self.provider_loader = lambda: [self.provider1]
self.obj = CheckBoxInvocationMixIn(self.provider_loader, self.config)
def test_matching_job_list(self):
# Nothing gets selected automatically
ns = Mock(name="ns")
ns.whitelist = []
ns.include_pattern_list = []
ns.exclude_pattern_list = []
observed = self.obj._get_matching_job_list(ns, [
self.job_foo, self.job_bar])
self.assertEqual(observed, [])
def test_matching_job_list_including(self):
# Including jobs with glob pattern works
ns = Mock(name="ns")
ns.whitelist = []
ns.include_pattern_list = ['f.+']
ns.exclude_pattern_list = []
observed = self.obj._get_matching_job_list(ns, [
self.job_foo, self.job_bar])
self.assertEqual(observed, [self.job_foo])
def test_matching_job_list_excluding(self):
# Excluding jobs with glob pattern works
ns = Mock(name="ns")
ns.whitelist = []
ns.include_pattern_list = ['.+']
ns.exclude_pattern_list = ['f.+']
observed = self.obj._get_matching_job_list(ns, [
self.job_foo, self.job_bar])
self.assertEqual(observed, [self.job_bar])
def test_matching_job_list_whitelist(self):
# whitelists contain list of include patterns
# that are read and interpreted as usual
ns = Mock(name="ns")
ns.whitelist = [
mock_whitelist("foo_whitelist", "foo", "foo.whitelist")]
ns.include_pattern_list = []
ns.exclude_pattern_list = []
observed = self.obj._get_matching_job_list(ns, [
self.job_foo, self.job_bar])
self.assertEqual(observed, [self.job_foo])
def test_matching_job_list_multiple_whitelists(self):
ns = Mock(name="ns")
ns.whitelist = [
mock_whitelist("whitelist_a", "foo", "a.whitelist"),
mock_whitelist("whitelist_b", "baz", "b.whitelist"),
]
ns.include_pattern_list = []
ns.exclude_pattern_list = []
observed = self.obj._get_matching_job_list(ns, [
self.job_foo, self.job_bar, self.job_baz])
self.assertEqual(observed, [self.job_foo, self.job_baz])
def test_no_prefix_matching_including(self):
# Include patterns should only match whole job name
ns = Mock(name="ns")
ns.whitelist = [
mock_whitelist("whitelist_a", "fo", "a.whitelist"),
mock_whitelist("whitelist_b", "ba.+", "b.whitelist"),
]
ns.include_pattern_list = ['fo', 'ba.+']
ns.exclude_pattern_list = []
observed = self.obj._get_matching_job_list(ns, [self.job_foo,
self.job_bar])
self.assertEqual(observed, [self.job_bar])
def test_no_prefix_matching_excluding(self):
# Exclude patterns should only match whole job name
ns = Mock(name="ns")
ns.whitelist = []
ns.include_pattern_list = ['.+']
ns.exclude_pattern_list = ['fo', 'ba.+']
observed = self.obj._get_matching_job_list(
ns, [self.job_foo, self.job_bar])
self.assertEqual(observed, [self.job_foo])
def test_invalid_pattern_including(self):
ns = Mock(name="ns")
ns.whitelist = []
ns.include_pattern_list = ['?']
ns.exclude_pattern_list = []
observed = self.obj._get_matching_job_list(
ns, [self.job_foo, self.job_bar])
self.assertEqual(observed, [])
def test_invalid_pattern_excluding(self):
ns = Mock(name="ns")
ns.whitelist = []
ns.include_pattern_list = ['fo.*']
ns.exclude_pattern_list = ['[bar']
observed = self.obj._get_matching_job_list(
ns, [self.job_foo, self.job_bar])
self.assertEqual(observed, [self.job_foo])
class TestMain(TestCase):
def test_version(self):
with TestIO(combined=True) as io:
with self.assertRaises(SystemExit) as call:
stubbox_main(['--version'])
self.assertEqual(call.exception.args, (0,))
self.assertEqual(io.combined, "{}\n".format(
ToolBase.format_version_tuple(version)))
@suppress_warnings
# Temporarily suppress warnings (i.e. ResourceWarning) to work around
# Issue #341 in distribute (< 0.6.33).
# See: https://bitbucket.org/tarek/distribute/issue/341
def test_help(self):
with TestIO(combined=True) as io:
with self.assertRaises(SystemExit) as call:
main(['--help'])
self.assertEqual(call.exception.args, (0,))
self.maxDiff = None
expected = """
usage: plainbox [--help] [--version] | [options] ...
positional arguments:
{run,session,device,self-test,check-config,dev,startprovider}
run run a test job
session session management commands
device device management commands
self-test run unit and integration tests
check-config check and display plainbox configuration
dev development commands
startprovider create a new provider (directory)
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
logging and debugging:
-v, --verbose be more verbose (same as --log-level=INFO)
-D, --debug enable DEBUG messages on the root logger
-C, --debug-console display DEBUG messages in the console
-T LOGGER, --trace LOGGER
enable DEBUG messages on the specified logger (can be
used multiple times)
-P, --pdb jump into pdb (python debugger) when a command crashes
-I, --debug-interrupt
crash on SIGINT/KeyboardInterrupt, useful with --pdb
"""
self.assertEqual(io.combined, cleandoc(expected) + "\n")
def test_run_without_args(self):
with TestIO(combined=True) as io:
with self.assertRaises(SystemExit) as call:
main([])
self.assertEqual(call.exception.args, (2,))
expected = """
usage: plainbox [--help] [--version] | [options] ...
plainbox: error: too few arguments
"""
self.assertEqual(io.combined, cleandoc(expected) + "\n")
class TestSpecial(TestCase):
def test_help(self):
with TestIO(combined=True) as io:
with self.assertRaises(SystemExit) as call:
main(['dev', 'special', '--help'])
self.assertEqual(call.exception.args, (0,))
self.maxDiff = None
expected = """
usage: plainbox dev special [-h] (-j | -J | -e | -d) [--dot-resources]
[-T TEST-PLAN-ID] [-i PATTERN] [-x PATTERN]
[-w WHITELIST]
optional arguments:
-h, --help show this help message and exit
-j, --list-jobs list jobs instead of running them
-J, --list-job-hashes
list jobs with cheksums instead of running them
-e, --list-expressions
list all unique resource expressions
-d, --dot print a graph of jobs instead of running them
--dot-resources show resource relationships (for --dot)
test selection options:
-T TEST-PLAN-ID, --test-plan TEST-PLAN-ID
load the specified test plan
-i PATTERN, --include-pattern PATTERN
include jobs matching the given regular expression
-x PATTERN, --exclude-pattern PATTERN
exclude jobs matching the given regular expression
-w WHITELIST, --whitelist WHITELIST
load whitelist containing run patterns
"""
self.assertEqual(io.combined, cleandoc(expected) + "\n")
def test_run_without_args(self):
with TestIO(combined=True) as io:
with self.assertRaises(SystemExit) as call:
main(['dev', 'special'])
self.assertEqual(call.exception.args, (2,))
expected = """
usage: plainbox dev special [-h] (-j | -J | -e | -d) [--dot-resources]
[-T TEST-PLAN-ID] [-i PATTERN] [-x PATTERN]
[-w WHITELIST]
plainbox dev special: error: one of the arguments -j/--list-jobs -J/--list-job-hashes -e/--list-expressions -d/--dot is required
"""
self.assertEqual(io.combined, cleandoc(expected) + "\n")
def test_run_list_jobs(self):
with TestIO() as io:
with self.assertRaises(SystemExit) as call:
stubbox_main(['dev', 'special', '--list-jobs'])
self.assertEqual(call.exception.args, (0,))
self.assertIn(
"2013.com.canonical.plainbox::stub/false", io.stdout.splitlines())
self.assertIn(
"2013.com.canonical.plainbox::stub/true", io.stdout.splitlines())
def test_run_list_jobs_with_filtering(self):
with TestIO() as io:
with self.assertRaises(SystemExit) as call:
stubbox_main(['dev', 'special',
('--include-pattern='
'2013.com.canonical.plainbox::stub/false'),
'--list-jobs'])
self.assertEqual(call.exception.args, (0,))
self.assertIn(
"2013.com.canonical.plainbox::stub/false", io.stdout.splitlines())
self.assertNotIn(
"2013.com.canonical.plainbox::stub/true", io.stdout.splitlines())
def test_run_list_expressions(self):
with TestIO() as io:
with self.assertRaises(SystemExit) as call:
stubbox_main(['dev', 'special', '--list-expressions'])
self.assertEqual(call.exception.args, (0,))
self.assertIn(
'stub_package.name == "checkbox"', io.stdout.splitlines())
def test_run_dot(self):
with TestIO() as io:
with self.assertRaises(SystemExit) as call:
stubbox_main(['dev', 'special', '--dot'])
self.assertEqual(call.exception.args, (0,))
self.assertIn(
'\t"2013.com.canonical.plainbox::stub/true" [];',
io.stdout.splitlines())
# Do basic graph checks
self._check_digraph_sanity(io)
def test_run_dot_with_resources(self):
with TestIO() as io:
with self.assertRaises(SystemExit) as call:
stubbox_main(['dev', 'special', '--dot', '--dot-resources'])
self.assertEqual(call.exception.args, (0,))
self.assertIn(
'\t"2013.com.canonical.plainbox::stub/true" [];',
io.stdout.splitlines())
self.assertIn(
('\t"2013.com.canonical.plainbox::stub/requirement/good" -> '
'"2013.com.canonical.plainbox::stub_package" [style=dashed, label'
'="stub_package.name == \'checkbox\'"];'),
io.stdout.splitlines())
# Do basic graph checks
self._check_digraph_sanity(io)
def _check_digraph_sanity(self, io):
# Ensure that all lines inside the graph are terminated with a
# semicolon
for line in io.stdout.splitlines()[1:-2]:
self.assertTrue(line.endswith(';'))
# Ensure that graph header and footer are there
self.assertEqual("digraph dependency_graph {",
io.stdout.splitlines()[0])
self.assertEqual("}", io.stdout.splitlines()[-1])
plainbox-0.25/plainbox/impl/symbol.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.symbol` -- Symbol Type
==========================================
Symbols are special values that evaluate back to themselves. They are global,
unlike enumeration values, and are not bound to any container that defined
them. Symbols can be easily converted to strings and back and are a useful way
to store constants for use inside applications or libraries.
Applications can use Symbol class directly or use the SymbolDef helper to
quickly construct symbols without syntax overhead.
"""
__all__ = ['Symbol', 'SymbolDef']
import functools
import inspect
@functools.total_ordering
class Symbol:
"""
Symbol type.
Instances of this class behave as self-interning strings. All instances are
tracked and at most one instance with a given symbol name can be
constructed. The name is immutable.
"""
__symbols = {}
def __new__(cls, name):
"""
Create a new symbol instance.
If the name was already used in another symbol then that object is
returned directly. If the name was not used before then construct a new
Symbol instance and return it.
"""
try:
return cls.__symbols[name]
except KeyError:
symbol = object.__new__(cls)
cls.__symbols[name] = symbol
return symbol
def __init__(self, name):
"""
Initialize a symbol with the given name
"""
self._name = name
@property
def name(self):
"""
name of the symbol
"""
return self._name
def __str__(self):
"""
Convert the symbol object to its name
"""
return self._name
def __repr__(self):
"""
Convert the symbol object to its representation in python
"""
return "{}({!r})".format(self.__class__.__name__, self._name)
def __eq__(self, other):
"""
Compare two symbols or a string and a symbol for equality
"""
if isinstance(other, Symbol):
return self is other
elif isinstance(other, str):
return self._name == other
else:
return NotImplemented
def __lt__(self, other):
"""
Compare two symbols or a string and a symbol for inequality
"""
if isinstance(other, Symbol):
return self._name < other._name
elif isinstance(other, str):
return self._name < other
else:
return NotImplemented
def __hash__(self):
"""
Hash the name of the symbol
"""
return hash(self._name)
class SymbolDefNs:
"""
Internal implementation detail of the symbol module.
A special namespace used by :class:`SymbolDefMeta` to keep track of names
that were being accessed. Each accessed name is converted to a
:class:`Symbol` and added to the namespace.
"""
PASSTHRU = frozenset(('__name__', '__qualname__', '__doc__', '__module__'))
def __init__(self, allow_outer=None):
self.data = {}
self.allow_outer = allow_outer
def __setitem__(self, name, value):
if name in self.PASSTHRU:
self.data[name] = value
elif isinstance(value, Symbol):
self.data[name] = value
elif isinstance(value, str):
self.data[name] = Symbol(value)
else:
raise ValueError("Only Symbol() instances can be assigned here")
def __getitem__(self, name):
if name in self.PASSTHRU:
return self.data[name]
elif self.allow_outer is not None and name in self.allow_outer:
raise KeyError(name)
elif name in self.data:
return self.data[name]
elif name == 'Symbol':
return Symbol
else:
symbol = Symbol(name)
self.data[name] = symbol
return symbol
class SymbolDefMeta(type):
"""
Metaclass for :class:`SymbolDef` which helps to construct multiple symbol
objects easily. Uses :class:`SymbolDefNs` to keep track of all the symbol
definitions inside the class and convert them to a list of candidate
symbols to define.
"""
@classmethod
def __prepare__(mcls, name, bases, allow_outer=None, **kwargs):
return SymbolDefNs(allow_outer)
def __new__(mcls, name, bases, ns, allow_outer=None):
classdict = ns.data
classdict['get_all_symbols'] = classmethod(mcls.get_all_symbols)
return type.__new__(mcls, name, bases, classdict)
def __init__(mcls, name, bases, ns, allow_outer=None):
super().__init__(name, bases, ns)
# This is inserted via a simple trick because it's very hard to do any
# normal method definition inside SymbolDef blocks.
@staticmethod
def get_all_symbols(cls):
"""
Get all symbols defined by this symbol definition block
"""
# NOTE: This feels a bit like Enum and the extra property that it
# carries which holds all values. I don't know if we should have that
# as symbols are not 'bound' to any 'container' like enumeration values
# are.
return [value for name, kind, defcls, value
in inspect.classify_class_attrs(cls)
if name != '__locals__' and kind == 'data'
and isinstance(value, Symbol)]
class SymbolDef(metaclass=SymbolDefMeta):
"""
Helper class that makes it easy to define symbols.
All sub-classes of SymbolDef are evaluated specially. Each word used inside
the class definition becomes a Symbol() instance. In addition explicit
assignment can create new symbols. This can be used to create symbols with
value different from their identifiers.
"""
plainbox-0.25/plainbox/impl/_shlex.py
# Module and documentation by Eric S. Raymond, 21 Dec 1998
# Input stacking and error message cleanup added by ESR, March 2000
# push_source() and pop_source() made explicit by ESR, January 2001.
# Posix compliance, split(), string arguments, and
# iterator interface by Gustavo Niemeyer, April 2003.
import re
_find_unsafe = re.compile(r'[^\w@%+=:,./-]', re.ASCII).search
def quote(s):
"""Return a shell-escaped version of the string *s*."""
if not s:
return "''"
if _find_unsafe(s) is None:
return s
# use single quotes, and put single quotes into double quotes
# the string $'b is then quoted as '$'"'"'b'
return "'" + s.replace("'", "'\"'\"'") + "'"
plainbox-0.25/plainbox/impl/developer.py
# This file is part of Checkbox.
#
# Copyright 2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""Support code for enforcing usage expectations on public API."""
import inspect
import logging
import warnings
__all__ = ('UsageExpectation',)
_logger = logging.getLogger("plainbox.developer")
class OffByOneBackWarning(UserWarning):
"""Warning on incorrect use of UsageExpectations(self).enforce(back=2)."""
class DeveloperError(Exception):
"""
Exception raised when program flow is incorrect.
This exception is meant to gently educate the developer about a mistake in
his or her choices in the flow of calls. Some classes may use it to explain
that a precondition was not met. Applications are not intended to catch
this exception.
"""
pass # Eh, PEP-257 checkers...
# NOTE: This is not meant for internationalization. There is some string
# manipulation associated with this that would be a bit more cumbersome to do
# "correctly" for the small benefit.
_msg_template = """
Uh, oh...
You are not expected to call {cls_name}.{fn_name}() at this time.
If you see this message then there is a bug somewhere in your code. We are
sorry for this. Perhaps the documentation is flawed, incomplete or confusing.
Please reach out to us if this happens more often than you'd like.
The set of allowed calls, at this time, is:
{allowed_calls}
Refer to the documentation of {cls_name} for details.
TIP: python -m pydoc {cls_module}.{cls_name}
"""
class UnexpectedMethodCall(DeveloperError):
"""
Developer error reported when an unexpected method call is made.
This type of error is reported when some set of methods is expected to be
called in a given way but that expectation was not followed.
"""
def __init__(self, cls, fn_name, allowed_pairs):
"""
Initialize a new exception.
:param cls:
The class this exception refers to (the code user calls must be a
method on that class).
:param fn_name:
Name of the method that was unexpectedly called.
:param allowed_pairs:
A sequence of pairs ``(fn_name, why)`` that explain the set of
allowed function calls. There is a certain pattern on how the
``why`` strings are expected to be structured. They will be used as
a part of a string that looks like this: ``' - call {fn_name}() to
{why}.'``. Developers should use explanations that look natural in
this context. This text is not meant for internationalization.
"""
self.cls = cls
self.fn_name = fn_name
self.allowed_pairs = allowed_pairs
def __str__(self):
"""Get a developer-friendly message that describes the problem."""
return _msg_template.format(
cls_module=self.cls.__module__,
cls_name=self.cls.__name__,
fn_name=self.fn_name,
allowed_calls='\n'.join(
' - call {}.{}() to {}.'.format(
self.cls.__name__, allowed_fn_name, why)
for allowed_fn_name, why in self.allowed_pairs))
class UsageExpectation:
"""
Class representing API usage expectation at any given time.
Expectations help formalize the way developers are expected to use some set
of classes, methods and other instruments. Technically, they also encode
the expectations and can raise :class:`DeveloperError`.
:attr allowed_calls:
A dictionary mapping from bound methods / functions to the use case
explaining how that method can be used at the given moment. This works
best if the usage is mostly linear (call foo.a(), then foo.b(), then
foo.c()).
This attribute can be set directly for simplicity.
:attr cls:
The class of objects this expectation object applies to.
"""
@classmethod
def of(cls, obj):
"""
Get the usage expectation of a given object.
:param obj:
The object for which usage expectation is to be set
:returns:
Either a previously made object or a fresh instance of
:class:`UsageExpectation`.
"""
try:
return obj.__usage_expectation
except AttributeError:
ua = cls(type(obj))
obj.__usage_expectation = ua
return ua
def __init__(self, cls):
"""
Initialize a new, empty, usage expectations object.
:param cls:
The class of objects that this usage expectation applies to. This
is used only to inform the developer where to look for help when
something goes wrong.
"""
self.cls = cls
self.allowed_calls = {}
def enforce(self, back=1):
"""
Enforce that usage expectations of the caller are met.
:param back:
How many function call frames to climb to look for caller. By
default we always go one frame back (the immediate caller) but if
this is used in some decorator or other similar construct then you
may need to pass a bigger value.
Depending on this value, the error message displayed to the
developer will be either spot-on or downright wrong and confusing.
            Make sure the value you use is correct!
:raises DeveloperError:
If the expectations are not met.
"""
# XXX: Allowed calls is a dictionary that may be freely changed by the
# outside caller. We're unable to protect against it. Therefore the
# optimized values (for computing what is really allowed) must be
# obtained each time we are about to check, in enforce()
allowed_code = frozenset(
func.__wrapped__.__code__
if hasattr(func, '__wrapped__') else func.__code__
for func in self.allowed_calls
)
caller_frame = inspect.stack(0)[back][0]
if back > 1:
alt_caller_frame = inspect.stack(0)[back - 1][0]
else:
alt_caller_frame = None
_logger.debug("Caller code: %r", caller_frame.f_code)
_logger.debug("Alternate code: %r",
alt_caller_frame.f_code if alt_caller_frame else None)
_logger.debug("Allowed code: %r", allowed_code)
try:
if caller_frame.f_code in allowed_code:
return
            # This can be removed later, it allows the caller to make an
            # off-by-one mistake and get away with it.
if (alt_caller_frame is not None and
alt_caller_frame.f_code in allowed_code):
warnings.warn(
"Please back={}. Properly constructed decorators are"
" automatically handled and do not require the use of the"
" back argument.".format(back - 1), OffByOneBackWarning,
back)
return
fn_name = caller_frame.f_code.co_name
allowed_undecorated_calls = {
func.__wrapped__ if hasattr(func, '__wrapped__') else func: msg
for func, msg in self.allowed_calls.items()
}
allowed_pairs = tuple(
(fn.__code__.co_name, why)
for fn, why in sorted(
allowed_undecorated_calls.items(),
key=lambda fn_why: fn_why[0].__code__.co_name)
)
raise UnexpectedMethodCall(self.cls, fn_name, allowed_pairs)
finally:
del caller_frame
if alt_caller_frame is not None:
del alt_caller_frame
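
# The enforce() mechanism above boils down to: keep a registry of the methods
# that may legally be called right now, and inspect the caller's stack frame
# on each call. A stripped-down, self-contained sketch of that idea follows.
# ``ToyExpectation`` and ``Door`` are hypothetical names for illustration
# only; the real UsageExpectation compares caller *code objects* (and copes
# with decorators via ``__wrapped__``), not function names.

```python
import inspect


class ToyExpectation:
    """Stripped-down sketch of the allowed-calls check in enforce()."""

    def __init__(self):
        self.allowed_calls = set()  # names of methods callable right now

    def enforce(self):
        # The real implementation compares code objects; comparing the
        # immediate caller's function name is enough for this sketch.
        caller = inspect.stack()[1].function
        if caller not in self.allowed_calls:
            raise TypeError("unexpected call to {}()".format(caller))


class Door:
    """Usage protocol: open() and close() must strictly alternate."""

    def __init__(self):
        self._expect = ToyExpectation()
        self._expect.allowed_calls = {"open"}

    def open(self):
        self._expect.enforce()
        self._expect.allowed_calls = {"close"}

    def close(self):
        self._expect.enforce()
        self._expect.allowed_calls = {"open"}


door = Door()
door.open()
door.close()
try:
    door.close()  # violates the protocol: close() twice in a row
except TypeError as exc:
    print(exc)    # unexpected call to close()
```

# As in the real class, the registry is mutated by each method to describe
# what is allowed *next*, so the protocol is enforced step by step.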
# ---- plainbox-0.25/plainbox/impl/xparsers.py ----

# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.xparsers` -- parsers for various plainbox formats
=====================================================================
This module contains parsers for several formats that plainbox has to deal
with. They are not real parsers (as they can be handled with simple regular
expressions most of the time) but rather simple top-down parsing snippets
spread around some classes.
What is interesting though, is the set of classes and their relationships (and
attributes) as that helps to work with the code.
Node and Visitor
----------------
The basic class for everything parsed is :class:`Node`. It contains two
attributes, :attr:`Node.lineno` and :attr:`Node.col_offset` (mimicking the
python AST) and a similar, but not identical visitor mechanism. The precise way
in which the visitor class operates is documented on :class:`Visitor`. In
general application code can freely explore (but not modify as everything is
strictly read-only) the AST.
Regular expressions
-------------------
We have to deal with regular expressions in many places so there's a dedicated
AST node for handling them. The root class is :class:`Re` but it's just a base
for one of the three concrete sub-classes :class:`ReErr`, :class:`ReFixed` and
:class:`RePattern`. ``ReErr`` is an error wrapper (when the regular expression
is incorrect and doesn't work) and the other two (which also share a common
base class :class:`ReOk`) can be used to do text matching. Since other parts of
the code already contain optimizations for regular expressions that are just a
plain string comparison there is a special class to highlight that fact
(``ReFixed``).
White Lists
-----------
White lists are a poor man's test plan which describes a list of regular
expressions with optional comments. The root class is :class:`WhiteList` whose
:attr:`WhiteList.entries` attribute contains a sequence of either
:class:`Comment` or a subclass of :class:`Re`.
"""
import abc
import itertools
import re
import sre_constants
import sre_parse
import sys
from plainbox.i18n import gettext as _
from plainbox.impl import pod
from plainbox.impl.censoREd import PatternProxy
from plainbox.impl.xscanners import WordScanner
__all__ = [
'Comment',
'Node',
'Re',
'ReErr',
'ReFixed',
'ReOk',
'RePattern',
'Visitor',
'WhiteList',
]
Pattern = type(re.compile(""))
afn_typed_const = (pod.typed, pod.const)
def F(doc, type, initial_fn=None):
""" shortcut for creating fields """
if type is list:
return pod.Field(
doc, type, initial_fn=type,
assign_filter_list=afn_typed_const)
else:
return pod.Field(
doc, type, pod.MANDATORY,
assign_filter_list=afn_typed_const)
@pod.modify_field_docstring("not negative")
def not_negative(
instance: pod.POD, field: pod.Field, old: "Any", new: "Any"
) -> "Any":
if new < 0:
raise ValueError("{}.{} cannot be negative".format(
instance.__class__.__name__, field.name, field.type.__name__))
return new
class Node(pod.POD):
""" base node type """
lineno = pod.Field(
"Line number (1-based)", int, 0,
assign_filter_list=[pod.typed, not_negative, pod.const])
col_offset = pod.Field(
"Column offset (0-based)", int, 0,
assign_filter_list=[pod.typed, not_negative, pod.const])
def __repr__(self):
return "{}({})".format(
self.__class__.__name__,
', '.join([
'{}={!r}'.format(field.name, getattr(self, field.name))
for field in self.__class__.field_list
if field.name not in ('lineno', 'col_offset')]))
def visit(self, visitor: 'Visitor'):
"""
Visit all of the sub-nodes reachable from this node
:param visitor:
Visitor object that gets to explore this and all the other nodes
:returns:
The return value of the visitor's :meth:`Visitor.visit()` method,
if any. The default visitor doesn't return anything.
"""
return visitor.visit(self)
def enumerate_entries(self) -> "Generator[node]":
for field in self.__class__.field_list:
obj = field.__get__(self, self.__class__)
if isinstance(obj, Node):
yield obj
elif isinstance(obj, list):
for list_item in obj:
if isinstance(list_item, Node):
yield list_item
class Visitor:
"""
Class assisting in traversing :class:`Node` trees.
This class can be used to explore the AST of any of the plainbox-parsed
text formats. The way to use this method is to create a custom sub-class of
the :class:`Visitor` class and to define methods that correspond to the
class of node one is interested in.
Example:
>>> class Text(Node):
... text = F("text", str)
>>> class Group(Node):
... items = F("items", list)
>>> class demo_visitor(Visitor):
... def visit_Text_node(self, node: Text):
... print("visiting text node: {}".format(node.text))
... return self.generic_visit(node)
... def visit_Group_node(self, node: Group):
... print("visiting list node")
... return self.generic_visit(node)
>>> Group(items=[
... Text(text="foo"), Text(text="bar")
... ]).visit(demo_visitor())
visiting list node
visiting text node: foo
visiting text node: bar
"""
def generic_visit(self, node: Node) -> None:
""" visit method called on nodes without a dedicated visit method"""
# XXX: I don't love the way this works, perhaps we should be less smart
# and just require implicit hints as to where to go? Perhaps children
# should be something that any node can carry?
for child_node in node.enumerate_entries():
self.visit(child_node)
def visit(self, node: Node) -> "Any":
""" visit the specified node """
node_name = node.__class__.__name__
visit_meth_name = 'visit_{}_node'.format(node_name)
if hasattr(self, visit_meth_name):
visit_meth = getattr(self, visit_meth_name)
return visit_meth(node)
else:
return self.generic_visit(node)
class Re(Node):
""" node representing a regular expression """
text = F("Text of the regular expression (perhaps invalid)", str)
@staticmethod
def parse(text: str, lineno: int=0, col_offset: int=0) -> "Re":
"""
Parse a bit of text and return a concrete subclass of ``Re``
:param text:
The text to parse
:returns:
If ``text`` is a correct regular expression then an instance of
:class:`ReOk` is returned. In practice exactly one of
:class:`ReFixed` or :class:`RePattern` may be returned.
If ``text`` is incorrect then an instance of :class:`ReErr` is
returned.
Examples:
>>> Re.parse("text")
ReFixed(text='text')
>>> Re.parse("pa[tT]ern")
RePattern(text='pa[tT]ern', re=re.compile('pa[tT]ern'))
>>> from sre_constants import error
>>> Re.parse("+")
ReErr(text='+', exc=error('nothing to repeat',))
"""
try:
pyre_ast = sre_parse.parse(text)
except sre_constants.error as exc:
assert len(exc.args) == 1
# XXX: This is a bit crazy but this lets us have identical error
# messages across python3.2 all the way to 3.5. I really really
# wish there was a better way at fixing this.
            exc.args = (re.sub(r" at position \d+", "", exc.args[0]), )
return ReErr(lineno, col_offset, text, exc)
else:
# Check if the AST of this regular expression is composed
# of just a flat list of 'literal' nodes. In other words,
# check if it is a simple string match in disguise
if ((sys.version_info[:2] >= (3, 5) and
all(t == sre_constants.LITERAL for t, rest in pyre_ast)) or
all(t == 'literal' for t, rest in pyre_ast)):
return ReFixed(lineno, col_offset, text)
else:
# NOTE: we might save time by calling some internal function to
# convert pyre_ast to the pattern object.
#
# XXX: The actual compiled pattern is wrapped in PatternProxy
# to ensure that it can be repr()'ed sensibly on Python 3.2
return RePattern(
lineno, col_offset, text, PatternProxy(re.compile(text)))
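
# The "flat list of literal nodes" test that lets Re.parse() pick ReFixed
# over RePattern can be sketched standalone with the stdlib ``sre_parse``
# module. ``is_plain_string`` is a hypothetical helper written for this
# illustration, not part of plainbox.

```python
import sre_constants
import sre_parse  # stdlib regex parser; emits a DeprecationWarning on 3.11+


def is_plain_string(pattern: str) -> bool:
    """Return True when `pattern` matches exactly one fixed string.

    This mirrors the optimization in Re.parse(): if the parsed AST is a
    flat list of LITERAL nodes, plain string equality can replace a full
    regular expression match (the ReFixed case).
    """
    try:
        ast = sre_parse.parse(pattern)
    except sre_constants.error:
        return False  # the ReErr case: not a valid pattern at all
    return all(op == sre_constants.LITERAL for op, arg in ast)


print(is_plain_string("text"))       # fixed string: plain comparison suffices
print(is_plain_string("pa[tT]ern"))  # needs a compiled pattern
print(is_plain_string("+"))          # invalid expression
```

# Anything with character classes, repetition, anchors etc. introduces
# non-LITERAL nodes into the AST, so only truly fixed strings pass the test.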
class ReOk(Re):
""" node representing a correct regular expression """
@abc.abstractmethod
def match(self, text: str) -> bool:
"""
check if the given text matches the expression
This method is provided by all of the subclasses of
:class:`ReOk`, sometimes the implementation is faster than a
naive regular expression match.
>>> Re.parse("foo").match("foo")
True
>>> Re.parse("foo").match("f")
False
>>> Re.parse("[fF]oo").match("foo")
True
>>> Re.parse("[fF]oo").match("Foo")
True
"""
class ReFixed(ReOk):
""" node representing a trivial regular expression (fixed string)"""
def match(self, text: str) -> bool:
return text == self.text
class RePattern(ReOk):
""" node representing a regular expression pattern """
re = F("regular expression object", Pattern)
def match(self, text: str) -> bool:
return self.re.match(text) is not None
class ReErr(Re):
""" node representing an incorrect regular expression """
exc = F("exception describing the problem", Exception)
class Comment(Node):
""" node representing single comment """
comment = F("comment text, including any comment markers", str)
class WhiteList(Node):
""" node representing a whole plainbox whitelist """
entries = pod.Field("a list of comments and patterns", list,
initial_fn=list, assign_filter_list=[
pod.typed, pod.typed.sequence(Node), pod.const])
@staticmethod
def parse(text: str, lineno: int=1, col_offset: int=0) -> "WhiteList":
"""
Parse a plainbox *whitelist*
Empty string is still a valid (though empty) whitelist
>>> WhiteList.parse("")
WhiteList(entries=[])
White space is irrelevant and gets ignored if it's not of any
semantic value. Since whitespace was never a part of the de-facto
allowed pattern syntax one cannot create a job with " ".
>>> WhiteList.parse(" ")
WhiteList(entries=[])
As soon as there's something interesting though, it starts to have
        meaning. Note that we differentiate the raw text ' a ' from the
        pattern object it represents ('^namespace::a$'), but at parse time
        this contextual, semantic information is not available and is not a
        part of the AST.
>>> WhiteList.parse(" data ")
WhiteList(entries=[ReFixed(text=' data ')])
Data gets separated into line-based records. Any number of lines
may exist in a single whitelist.
>>> WhiteList.parse("line")
WhiteList(entries=[ReFixed(text='line')])
>>> WhiteList.parse("line 1\\nline 2\\n")
WhiteList(entries=[ReFixed(text='line 1'), ReFixed(text='line 2')])
Empty lines are just ignored. You can re-create them by observing lack
of continuity in the values of the ``lineno`` field.
>>> WhiteList.parse("line 1\\n\\nline 3\\n")
WhiteList(entries=[ReFixed(text='line 1'), ReFixed(text='line 3')])
        Data can be mixed with comments. Note that col_offset is finally
        non-zero here as the comment starts on the fourth character into the
        line:
>>> WhiteList.parse("foo # pick foo")
... # doctest: +NORMALIZE_WHITESPACE
WhiteList(entries=[ReFixed(text='foo '),
Comment(comment='# pick foo')])
Comments can also exist without any data:
>>> WhiteList.parse("# this is a comment")
WhiteList(entries=[Comment(comment='# this is a comment')])
Lastly, there are no *exceptions* at this stage, broken patterns are
represented as such but no exceptions are ever raised:
>>> WhiteList.parse("[]")
... # doctest: +ELLIPSIS
WhiteList(entries=[ReErr(text='[]', exc=error('un...',))])
"""
entries = []
initial_lineno = lineno
# NOTE: lineno is consciously shadowed below
for lineno, line in enumerate(text.splitlines(), lineno):
if '#' in line:
cindex = line.index('#')
comment = line[cindex:]
data = line[:cindex]
else:
cindex = None
comment = None
data = line
if not data.strip():
data = None
if data:
entries.append(Re.parse(data, lineno, col_offset))
if comment:
entries.append(Comment(lineno, col_offset + cindex, comment))
return WhiteList(initial_lineno, col_offset, entries)
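
# The per-line comment splitting done inside WhiteList.parse() is easy to
# isolate. ``split_line`` below is a hypothetical standalone helper that
# reproduces just that step, including the detail that pattern text keeps
# its trailing whitespace (hence ReFixed(text='foo ') in the doctest above).

```python
def split_line(line):
    """Split one whitelist line into (pattern_text, comment_text).

    Mirrors the '#' handling in WhiteList.parse(): everything from the
    first '#' onwards is the comment; whatever precedes it is pattern
    text, unless it is blank, in which case there is no pattern at all.
    """
    if '#' in line:
        cindex = line.index('#')
        data, comment = line[:cindex], line[cindex:]
    else:
        data, comment = line, None
    if not data.strip():
        data = None
    return data, comment


print(split_line("foo # pick foo"))    # ('foo ', '# pick foo')
print(split_line("# just a comment"))  # (None, '# just a comment')
print(split_line("   "))               # (None, None)
```

# The caller then turns the first element into a Re node and the second
# into a Comment node, skipping whichever half is None.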
class Error(Node):
""" node representing a syntax error """
msg = F("message", str)
class Text(Node):
""" node representing a bit of text """
text = F("text", str)
class FieldOverride(Node):
""" node representing a single override statement """
value = F("value to apply (override value)", Text)
pattern = F("pattern that selects things to override", Re)
@staticmethod
def parse(
text: str, lineno: int=1, col_offset: int=0
) -> "Union[FieldOverride, Error]":
"""
Parse a single test plan field override line
Using correct syntax will result in a FieldOverride node with
appropriate data in the ``value`` and ``pattern`` fields. Note that
``pattern`` may be either a :class:`RePattern` or a :class:`ReFixed` or
:class:`ReErr` which is not a valid pattern and cannot be used.
>>> FieldOverride.parse("apply new-value to pattern")
... # doctest: +NORMALIZE_WHITESPACE
FieldOverride(value=Text(text='new-value'),
pattern=ReFixed(text='pattern'))
>>> FieldOverride.parse("apply blocker to .*")
... # doctest: +NORMALIZE_WHITESPACE
FieldOverride(value=Text(text='blocker'),
pattern=RePattern(text='.*', re=re.compile('.*')))
Using incorrect syntax will result in a single Error node being
returned. The message (``msg``) field contains useful information on
the cause of the problem, as depicted below:
>>> FieldOverride.parse("")
Error(msg="expected 'apply' near ''")
>>> FieldOverride.parse("apply")
Error(msg='expected override value')
>>> FieldOverride.parse("apply value")
Error(msg="expected 'to' near ''")
>>> FieldOverride.parse("apply value to")
Error(msg='expected override pattern')
>>> FieldOverride.parse("apply value to pattern junk")
Error(msg="unexpected garbage: 'junk'")
Lastly, shell-style comments are supported. They are discarded by the
scanner code though.
>>> FieldOverride.parse("apply value to pattern # comment")
... # doctest: +NORMALIZE_WHITESPACE
FieldOverride(value=Text(text='value'),
pattern=ReFixed(text='pattern'))
"""
# XXX Until our home-grown scanner is ready col_offset values below
# are all dummy. This is not strictly critical but should be improved
# upon later.
scanner = WordScanner(text)
# 'APPLY' ...
token, lexeme = scanner.get_token()
if token != scanner.TokenEnum.WORD or lexeme != 'apply':
return Error(lineno, col_offset,
_("expected {!a} near {!r}").format('apply', lexeme))
# 'APPLY' VALUE ...
token, lexeme = scanner.get_token()
if token != scanner.TokenEnum.WORD:
return Error(lineno, col_offset, _("expected override value"))
value = Text(lineno, col_offset, lexeme)
# 'APPLY' VALUE 'TO' ...
token, lexeme = scanner.get_token()
if token != scanner.TokenEnum.WORD or lexeme != 'to':
return Error(lineno, col_offset,
_("expected {!a} near {!r}").format('to', lexeme))
# 'APPLY' VALUE 'TO' PATTERN...
token, lexeme = scanner.get_token()
if token != scanner.TokenEnum.WORD:
return Error(lineno, col_offset, _("expected override pattern"))
pattern = Re.parse(lexeme, lineno, col_offset)
# 'APPLY' VALUE 'TO' PATTERN
token, lexeme = scanner.get_token()
if token != scanner.TokenEnum.EOF:
return Error(lineno, col_offset,
_("unexpected garbage: {!r}").format(lexeme))
return FieldOverride(lineno, col_offset, value, pattern)
class OverrideFieldList(Node):
""" node representing a whole plainbox field override list"""
entries = pod.Field("a list of comments and patterns", list,
initial_fn=list, assign_filter_list=[
pod.typed, pod.typed.sequence(Node), pod.const])
@staticmethod
def parse(
text: str, lineno: int=1, col_offset: int=0
) -> "OverrideFieldList":
entries = []
initial_lineno = lineno
# NOTE: lineno is consciously shadowed below
for lineno, line in enumerate(text.splitlines(), lineno):
entries.append(FieldOverride.parse(line, lineno, col_offset))
return OverrideFieldList(initial_lineno, col_offset, entries)
class OverrideExpression(Node):
""" node representing a single override statement """
field = F("field to override", Text)
value = F("value to apply", Text)
class IncludeStmt(Node):
""" node representing a single include statement """
pattern = F("the pattern used for selecting jobs", Re)
overrides = pod.Field("list of overrides to apply", list, initial_fn=list,
assign_filter_list=[
pod.typed,
pod.typed.sequence(OverrideExpression),
pod.const])
@staticmethod
def parse(
text: str, lineno: int=1, col_offset: int=0
) -> "Union[IncludeStmt, Error]":
"""
Parse a single test plan include line
Using correct syntax will result in a IncludeStmt node with
appropriate data in the ``pattern`` and ``overrides`` fields. Note that
``pattern`` may be either a :class:`RePattern` or a :class:`ReFixed` or
:class:`ReErr` which is not a valid pattern and cannot be used.
Overrides are a list of :class:`OverrideExpression`. The list may
contain incorrect, or duplicate values but that's up to higher-level
analysis to check for.
The whole overrides section is optional so a single pattern is a good
include statement:
>>> IncludeStmt.parse("usb.*")
... # doctest: +NORMALIZE_WHITESPACE
IncludeStmt(pattern=RePattern(text='usb.*',
re=re.compile('usb.*')),
overrides=[])
Any number of key=value override pairs can be used using commas in
between each pair:
>>> IncludeStmt.parse("usb.* f1=o1")
... # doctest: +NORMALIZE_WHITESPACE
IncludeStmt(pattern=RePattern(text='usb.*',
re=re.compile('usb.*')),
overrides=[OverrideExpression(field=Text(text='f1'),
value=Text(text='o1'))])
>>> IncludeStmt.parse("usb.* f1=o1, f2=o2")
... # doctest: +NORMALIZE_WHITESPACE
IncludeStmt(pattern=RePattern(text='usb.*',
re=re.compile('usb.*')),
overrides=[OverrideExpression(field=Text(text='f1'),
value=Text(text='o1')),
OverrideExpression(field=Text(text='f2'),
value=Text(text='o2'))])
>>> IncludeStmt.parse("usb.* f1=o1, f2=o2, f3=o3")
... # doctest: +NORMALIZE_WHITESPACE
IncludeStmt(pattern=RePattern(text='usb.*',
re=re.compile('usb.*')),
overrides=[OverrideExpression(field=Text(text='f1'),
value=Text(text='o1')),
OverrideExpression(field=Text(text='f2'),
value=Text(text='o2')),
OverrideExpression(field=Text(text='f3'),
value=Text(text='o3'))])
Obviously some things can fail, the following examples show various
error states that are possible. In each state an Error node is returned
instead of the whole statement.
>>> IncludeStmt.parse("")
Error(msg='expected pattern')
>>> IncludeStmt.parse("pattern field")
Error(msg="expected '='")
>>> IncludeStmt.parse("pattern field=")
Error(msg='expected override value')
>>> IncludeStmt.parse("pattern field=override junk")
Error(msg="expected ','")
>>> IncludeStmt.parse("pattern field=override, ")
Error(msg='expected override field')
"""
scanner = WordScanner(text)
# PATTERN ...
token, lexeme = scanner.get_token()
if token != scanner.TokenEnum.WORD:
return Error(lineno, col_offset, _("expected pattern"))
pattern = Re.parse(lexeme, lineno, col_offset)
overrides = []
for i in itertools.count():
# PATTERN FIELD ...
token, lexeme = scanner.get_token()
if token == scanner.TokenEnum.EOF and i == 0:
# The whole override section is optional so the sequence may
# end with EOF on the first iteration of the loop.
break
elif token != scanner.TokenEnum.WORD:
return Error(lineno, col_offset, _("expected override field"))
field = Text(lineno, col_offset, lexeme)
# PATTERN FIELD = ...
token, lexeme = scanner.get_token()
if token != scanner.TokenEnum.EQUALS:
return Error(lineno, col_offset, _("expected '='"))
# PATTERN FIELD = VALUE ...
token, lexeme = scanner.get_token()
if token != scanner.TokenEnum.WORD:
return Error(lineno, col_offset, _("expected override value"))
value = Text(lineno, col_offset, lexeme)
expr = OverrideExpression(lineno, col_offset, field, value)
overrides.append(expr)
# is there any more?
# PATTERN FIELD = VALUE , ...
token, lexeme = scanner.get_token()
if token == scanner.TokenEnum.COMMA:
# (and again)
continue
elif token == scanner.TokenEnum.EOF:
break
else:
return Error(lineno, col_offset, _("expected ','"))
return IncludeStmt(lineno, col_offset, pattern, overrides)
class IncludeStmtList(Node):
""" node representing a list of include statements"""
entries = pod.Field("a list of include statements", list,
initial_fn=list, assign_filter_list=[
pod.typed, pod.typed.sequence(Node), pod.const])
@staticmethod
def parse(
text: str, lineno: int=1, col_offset: int=0
) -> "IncludeStmtList":
"""
Parse a multi-line ``include`` field.
This field is a simple list of :class:`IncludeStmt` with the added
twist that empty lines (including lines containing just irrelevant
white-space or comments) are silently ignored.
Example:
>>> IncludeStmtList.parse('''
... foo
... # comment
... bar''')
... # doctest: +NORMALIZE_WHITESPACE
IncludeStmtList(entries=[IncludeStmt(pattern=ReFixed(text='foo'),
overrides=[]),
IncludeStmt(pattern=ReFixed(text='bar'),
overrides=[])])
"""
entries = []
initial_lineno = lineno
# NOTE: lineno is consciously shadowed below
for lineno, line in enumerate(text.splitlines(), lineno):
if WordScanner(line).get_token()[0] == WordScanner.TOKEN_EOF:
# XXX: hack to work around the fact that each line is scanned
# separately so there is no way to naturally progress to the
# next line yet.
continue
entries.append(IncludeStmt.parse(line, lineno, col_offset))
return IncludeStmtList(initial_lineno, col_offset, entries)
class WordList(Node):
""" node representing a list of words"""
entries = pod.Field("a list of words", list, initial_fn=list,
assign_filter_list=[pod.typed,
pod.typed.sequence(Node),
pod.const])
@staticmethod
def parse(
text: str, lineno: int=1, col_offset: int=0
) -> "WordList":
"""
Parse a list of words.
Words are naturally separated by whitespace. Words can be quoted using
double quotes. Words can be optionally separated with commas although
those are discarded and entirely optional.
Some basic examples:
>>> WordList.parse("foo, bar")
WordList(entries=[Text(text='foo'), Text(text='bar')])
>>> WordList.parse("foo,bar")
WordList(entries=[Text(text='foo'), Text(text='bar')])
>>> WordList.parse("foo,,,,bar")
WordList(entries=[Text(text='foo'), Text(text='bar')])
>>> WordList.parse("foo,,,,bar,,")
WordList(entries=[Text(text='foo'), Text(text='bar')])
Words can be quoted, this allows us to include all kinds of characters
inside:
>>> WordList.parse('"foo bar"')
WordList(entries=[Text(text='foo bar')])
One word of caution, since we use one (and not a very smart one at
that) scanner, the equals sign is recognized and rejected as incorrect
input.
>>> WordList.parse("=")
WordList(entries=[Error(msg="Unexpected input: '='")])
"""
entries = []
scanner = WordScanner(text)
while True:
token, lexeme = scanner.get_token()
if token == scanner.TOKEN_EOF:
break
elif token == scanner.TokenEnum.COMMA:
continue
elif token == scanner.TokenEnum.WORD:
entries.append(Text(lineno, col_offset, lexeme))
else:
entries.append(
Error(lineno, col_offset,
"Unexpected input: {!r}".format(lexeme)))
return WordList(lineno, col_offset, entries)
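
# All the parsers above share the same pull-style loop: call get_token()
# repeatedly, dispatch on the token kind, stop at EOF. The sketch below
# reproduces the WordList.parse() loop against ``ToyScanner``, a hypothetical
# stand-in for the real plainbox WordScanner (which additionally handles
# '=' tokens and shell-style comments).

```python
import re


class ToyScanner:
    """Minimal stand-in for WordScanner: comma, quoted string, bare word."""

    _token_re = re.compile(r'\s*(?:(,)|("[^"]*")|([^\s,"]+))')

    def __init__(self, text):
        self._text = text
        self._pos = 0

    def get_token(self):
        match = self._token_re.match(self._text, self._pos)
        if match is None:
            return ("EOF", None)
        self._pos = match.end()
        comma, quoted, word = match.groups()
        if comma:
            return ("COMMA", ",")
        if quoted:
            return ("WORD", quoted[1:-1])  # strip the surrounding quotes
        return ("WORD", word)


def parse_words(text):
    """The WordList.parse() loop: words separated by optional commas."""
    words = []
    scanner = ToyScanner(text)
    while True:
        token, lexeme = scanner.get_token()
        if token == "EOF":
            break
        elif token == "COMMA":
            continue
        words.append(lexeme)
    return words


print(parse_words('foo,,bar "baz quux"'))  # ['foo', 'bar', 'baz quux']
```

# Commas are consumed and discarded, which is why any number of them
# (including none) can separate two words, exactly as the doctests show.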
# ---- plainbox-0.25/plainbox/impl/test_developer.py ----

# This file is part of Checkbox.
#
# Copyright 2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""Tests for the developer support module."""
import unittest
from plainbox.impl.developer import DeveloperError
from plainbox.impl.developer import UnexpectedMethodCall
from plainbox.impl.developer import UsageExpectation
class _Foo:
def m1(self):
UsageExpectation.of(self).enforce()
def m2(self):
UsageExpectation.of(self).enforce()
class UnexpectedMethodCallTests(unittest.TestCase):
"""Tests for the UnexpectedMethodCall class."""
def test_ancestry(self):
"""Check that UnexpectedMethodCall is a subclass of DeveloperError."""
self.assertTrue(issubclass(UnexpectedMethodCall, DeveloperError))
class UsageExpectationTests(unittest.TestCase):
"""Tests for the UsageExpectation class."""
def test_of(self):
"""Check that .of() returns the same object for each target."""
foo1 = _Foo()
foo2 = _Foo()
ue1 = UsageExpectation.of(foo1)
ue2 = UsageExpectation.of(foo2)
self.assertIsInstance(ue1, UsageExpectation)
self.assertIsInstance(ue2, UsageExpectation)
self.assertIs(ue1, UsageExpectation.of(foo1))
self.assertIs(ue2, UsageExpectation.of(foo2))
self.assertIsNot(ue1, ue2)
def test_enforce(self):
"""Check that .enforce() works and produces useful messages."""
foo = _Foo()
UsageExpectation.of(foo).allowed_calls = {
foo.m1: "call m1 now"
}
# Nothing should happen here
foo.m1()
# Exception should be raised here
with self.assertRaises(UnexpectedMethodCall) as boom:
foo.m2()
self.assertEqual(str(boom.exception), """
Uh, oh...
You are not expected to call _Foo.m2() at this time.
If you see this message then there is a bug somewhere in your code. We are
sorry for this. Perhaps the documentation is flawed, incomplete or confusing.
Please reach out to us if this happens more often than you'd like.
The set of allowed calls, at this time, is:
- call _Foo.m1() to call m1 now.
Refer to the documentation of _Foo for details.
TIP: python -m pydoc plainbox.impl.test_developer._Foo
""")
# ---- plainbox-0.25/plainbox/impl/pod.py ----

# encoding: utf-8
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
Plain Old Data.
:mod:`plainbox.impl.pod`
========================
This module contains the :class:`POD` and :class:`Field` classes that simplify
creation of declarative struct-like data holding classes. POD classes get a
useful repr() method, useful initializer and accessors for each of the fields
defined inside. POD classes can be inherited (properly detecting any field
clashes).
Defining POD classes:
>>> class Person(POD):
... name = Field("name of the person", str, MANDATORY)
... age = Field("age of the person", int)
Creating POD instances, positional arguments match field definition order:
>>> joe = Person("joe", age=42)
Full-blown comparison (not only equality):
>>> joe == Person("joe", 42)
True
Reading and writing attributes also works (obviously):
>>> joe.name
'joe'
>>> joe.age
42
>>> joe.age = 24
>>> joe.age
24
For a full description check out the documentation of the :class:`POD` and
:class:`Field`.
"""
from collections import OrderedDict
from collections import namedtuple
from functools import total_ordering
from logging import getLogger
from textwrap import dedent
from plainbox.i18n import gettext as _
from plainbox.vendor import morris
__all__ = ('POD', 'PODBase', 'podify', 'Field', 'MANDATORY', 'UNSET',
'read_only_assign_filter', 'type_convert_assign_filter',
'type_check_assign_filter', 'modify_field_docstring')
_logger = getLogger("plainbox.pod")
class _Singleton:
"""A simple object()-like singleton that has a more useful repr()."""
def __repr__(self):
return self.__class__.__name__
class MANDATORY(_Singleton):
"""
Class for the special MANDATORY object.
This object can be used as a value in :attr:`Field.initial`.
Using ``MANDATORY`` on a field like that makes the explicit initialization
of the field mandatory during POD initialization. Please use this value to
require that the caller supplies a given argument to the POD you are
working with.
"""
MANDATORY = MANDATORY()
class UNSET(_Singleton):
"""
Class of the special UNSET object.
Singleton that is implicitly assigned to the values of all fields during
POD initialization. This way all fields will have a value, even early at
the time a POD is initialized. This can be important if the POD is somehow
repr()-ed or inspected in other means.
This object is also used by the :func:`read_only_assign_filter` function.
"""
UNSET = UNSET()
class Field:
"""
A field in a plain-old-data class.
Each field declares one attribute that can be read and written to. Just
like a C structure. Attributes are readable _and_ writable but there is a
lot of flexibility in what happens.
:attr name:
Name of the field (this is how this field can be accessed on the class
or instance that contains it). This gets set by
:meth:`_FieldCollection.inspect_namespace()`
:attr instance_attr:
        Name of the POD dictionary entry used as backing store. This is set
        the same way as ``name`` above. By default that's just the name
        prepended with the ``'_'`` character.
:attr type:
        An optional type hint. This is not used by default but assign filters
can inspect and use this for type checking. It can also be used for
documenting the intent of the field.
:attr __doc__:
The docstring of the field, as initialized by the caller.
:attr initial:
Initial value of this field, can be changed by passing arguments to
:meth:`POD.__init__()`. May be set to ``MANDATORY`` for a special
meaning (see below).
:attr initial_fn:
If not None this is a callable that produces the ``initial`` value for
each new POD object.
:attr notify:
A flag controlling if notification events are sent for each
modification of POD data through the field. When enabled, the class
gains an ``on_{name}_changed`` signal.
:attr notify_fn:
An (optional) function to use as the first responder to the change
notification signal. This field is only used if the ``notify``
attribute is set to ``True``.
:attr assign_filter_list:
An (optional) list of assignment filter functions.
A field is initialized based on the arguments passed to the POD
initializer. If no argument is passed that would correspond to a given
field the *initial* value is used. The *initial* value is either a constant
(reference) stored in the ``initial`` property of the field or the return
value of the callable in ``initial_fn``. Please make sure to use
``initial_fn`` if the value is not immutable as otherwise the produced
value may be unintentionally shared by multiple objects.
If the ``initial`` value is the special constant ``MANDATORY`` then the
corresponding field must be explicitly initialized by the POD initializer
argument list or a TypeError is raised.
The ``notify`` flag controls the existence of the ``on_{name}_changed(old,
new)`` signal on the class that includes the field. Applications can
connect to that signal to observe changes. The signal is fired whenever the
newly-assigned value compares *unequal* to the value currently stored in
the POD.
The ``notify_fn`` is an optional function that is used instead of the
default (internal) :meth:`on_changed()` method of the Field class itself.
If specified it must have the same three-argument signature. It will be
called whenever the value of the field changes. Note that it will also be
called on the initial assignment, where the ``old`` argument it receives
is set to the special ``UNSET`` object.
Lastly a docstring and type hint can be provided for documentation. The
type check is not enforced.
Assignment filters are used to inspect and optionally modify a value during
assignment (including the assignment done on object initialization) and can
be used for various operations (including type conversions and validation).
Assignment filters are called whenever a field is used to write to a POD.
Since assignment filters are arranged in a list and executed in-order, they
can also be used to modify the value as it gets propagated through the list
of filters.
The signature of each filter is ``fn(pod, field, old_value, new_value)``.
The return value is the value shown to the subsequent filter or finally
assigned to the POD.
"""
_counter = 0
def __init__(self, doc=None, type=None, initial=None, initial_fn=None,
notify=False, notify_fn=None, assign_filter_list=None):
"""Initialize (define) a new POD field."""
self.__doc__ = dedent(doc) if doc is not None else None
self.type = type
self.initial = initial
self.initial_fn = initial_fn
self.notify = notify
self.notify_fn = notify_fn
self.assign_filter_list = assign_filter_list
self.name = None # Set via :meth:`gain_name()`
self.instance_attr = None # ditto
self.signal_name = None # ditto
doc_extra = []
for fn in self.assign_filter_list or ():
if hasattr(fn, 'field_docstring_ext'):
doc_extra.append(fn.field_docstring_ext.format(field=self))
if doc_extra:
self.__doc__ += (
'\n\nSide effects of assign filters:\n'
+ '\n'.join(' - {}'.format(extra) for extra in doc_extra))
self.counter = self.__class__._counter
self.__class__._counter += 1
@property
def change_notifier(self):
"""
Decorator for changing the change notification function.
This decorator can be used to define all the fields in one block and
all the notification functions in another block. It helps to make the
code easier to read.
Example::
>>> class Person(POD):
... name = Field()
...
... @name.change_notifier
... def _name_changed(self, old, new):
... print("changed from {!r} to {!r}".format(old, new))
>>> person = Person()
changed from UNSET to None
>>> person.name = "bob"
changed from None to 'bob'
.. note::
Keep in mind that the decorated function is converted to a signal
automatically. The name of the function is also irrelevant, the POD
core automatically creates signals that have consistent names of
``on_{field}_changed()``.
"""
def decorator(fn):
self.notify = True
self.notify_fn = fn
return fn
return decorator
def __repr__(self):
"""Get a debugging representation of a field."""
return "<{} name:{!r}>".format(self.__class__.__name__, self.name)
@property
def is_mandatory(self) -> bool:
"""Flag indicating if the field needs a mandatory initializer."""
return self.initial is MANDATORY
def gain_name(self, name: str) -> None:
"""
Set field name.
:param name:
Name of the field as it appears in a class definition
Method called at most once on each Field instance embedded in a
:class:`POD` subclass. This method informs the field of the name it was
assigned to in the class.
"""
self.name = name
self.instance_attr = "_{}".format(name)
self.signal_name = "on_{}_changed".format(name)
def alter_cls(self, cls: type) -> None:
"""
Modify class definition this field belongs to.
This method is called during class construction. It allows the field to
alter the class and add the on_{field.name}_changed signal. The signal
is only added if notification is enabled *and* if there is no such
signal in the first place (this allows inheritance not to create
separate but identically-named signals and allows signal handlers
connected via the base class to work on child classes).
"""
if not self.notify:
return
assert self.signal_name is not None
if not hasattr(cls, self.signal_name):
signal_def = morris.signal(
self.notify_fn if self.notify_fn is not None
else self.on_changed,
signal_name='{}.{}'.format(cls.__name__, self.signal_name))
setattr(cls, self.signal_name, signal_def)
def __get__(self, instance: object, owner: type) -> "Any":
"""
Get field value from an object or from a class.
This method is part of the Python descriptor protocol.
"""
if instance is None:
return self
else:
return getattr(instance, self.instance_attr)
def __set__(self, instance: object, new_value: "Any") -> None:
"""
Set field value on an object.
This method is part of the Python descriptor protocol.
Assignments respect the assign filter chain, that is, the new value is
being pushed through the chain of callbacks (each has a chance to alter
the value) until it is finally assigned. Any of the callbacks can raise
an exception and abort the setting process.
This can be used to implement simple type checking, value checking or
even type and value conversions.
"""
if self.assign_filter_list is not None or self.notify:
old_value = getattr(instance, self.instance_attr, UNSET)
# Run the value through assign filters
if self.assign_filter_list is not None:
for assign_filter in self.assign_filter_list:
new_value = assign_filter(instance, self, old_value, new_value)
# Do value modification check if change notification is enabled
if self.notify and hasattr(instance, self.instance_attr):
if new_value != old_value:
setattr(instance, self.instance_attr, new_value)
on_field_change = getattr(instance, self.signal_name)
on_field_change(old_value, new_value)
else:
# Or just fire away
setattr(instance, self.instance_attr, new_value)
def on_changed(self, pod: "POD", old: "Any", new: "Any") -> None:
"""
The first responder of the per-field modification signal.
:param pod:
The object that contains the modified values
:param old:
The old value of the field
:param new:
The new value of the field
"""
_logger.debug("<%s %s>.%s(%r, %r)", pod.__class__.__name__, id(pod),
self.signal_name, old, new)
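The descriptor logic of the ``Field`` class above can be sketched in a
minimal, self-contained form. Everything here (``MiniField``, the local
``UNSET`` sentinel, ``Person``) is an illustrative stand-in, not part of the
plainbox API:

```python
UNSET = object()  # illustrative stand-in for the module's UNSET sentinel


class MiniField:
    """Stripped-down sketch of the descriptor logic used by Field."""

    def __init__(self, name, assign_filter_list=None, notify_fn=None):
        self.name = name
        self.instance_attr = '_' + name
        self.assign_filter_list = assign_filter_list
        self.notify_fn = notify_fn

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return getattr(instance, self.instance_attr)

    def __set__(self, instance, new_value):
        old_value = getattr(instance, self.instance_attr, UNSET)
        # Push the value through the (optional) assign filter chain.
        for assign_filter in self.assign_filter_list or ():
            new_value = assign_filter(instance, self, old_value, new_value)
        setattr(instance, self.instance_attr, new_value)
        # Notify only when the stored value actually changed.
        if self.notify_fn is not None and new_value != old_value:
            self.notify_fn(instance, old_value, new_value)


changes = []


class Person:
    name = MiniField(
        'name', notify_fn=lambda pod, old, new: changes.append((old, new)))


person = Person()
person.name = 'bob'
person.name = 'bob'    # same value: no notification
person.name = 'alice'
```

After this runs, ``changes`` holds ``[(UNSET, 'bob'), ('bob', 'alice')]``:
the second assignment of ``'bob'`` is silently absorbed because the value
compared equal.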
@total_ordering
class PODBase:
"""Base class for POD-like classes."""
field_list = []
namedtuple_cls = namedtuple('PODBase', '')
def __init__(self, *args, **kwargs):
"""
Initialize a new POD object.
Positional arguments bind to fields in declaration order. Keyword
arguments bind to fields in any order but fields cannot be initialized
twice.
:raises TypeError:
If there are more positional arguments than fields to initialize
:raises TypeError:
If a keyword argument doesn't correspond to a field name.
:raises TypeError:
If a field is initialized twice (first with positional arguments,
then again with keyword arguments).
:raises TypeError:
If a ``MANDATORY`` field is not initialized.
"""
field_list = self.__class__.field_list
# Set all of the instance attributes to the special UNSET value, this
# is useful if something fails and the object is inspected somehow.
# Then all the attributes will be still UNSET.
for field in field_list:
setattr(self, field.instance_attr, UNSET)
# Check if the number of positional arguments is correct
if len(args) > len(field_list):
raise TypeError("too many arguments")
# Initialize mandatory fields using positional arguments
for field, field_value in zip(field_list, args):
setattr(self, field.name, field_value)
# Initialize fields using keyword arguments
for field_name, field_value in kwargs.items():
field = getattr(self.__class__, field_name, None)
if not isinstance(field, Field):
raise TypeError("no such field: {}".format(field_name))
if getattr(self, field.instance_attr) is not UNSET:
raise TypeError(
"field initialized twice: {}".format(field_name))
setattr(self, field_name, field_value)
# Initialize remaining fields using their default initializers
for field in field_list:
if getattr(self, field.instance_attr) is not UNSET:
continue
if field.is_mandatory:
raise TypeError(
"mandatory argument missing: {}".format(field.name))
if field.initial_fn is not None:
field_value = field.initial_fn()
else:
field_value = field.initial
setattr(self, field.name, field_value)
def __repr__(self):
"""Get a debugging representation of a POD object."""
return "{}({})".format(
self.__class__.__name__,
', '.join([
'{}={!r}'.format(field.name, getattr(self, field.name))
for field in self.__class__.field_list]))
def __eq__(self, other: "POD") -> bool:
"""
Check that this POD is equal to another POD.
POD comparison is implemented by converting them to tuples and
comparing the two tuples.
"""
if not isinstance(other, POD):
return NotImplemented
return self.as_tuple() == other.as_tuple()
def __lt__(self, other: "POD") -> bool:
"""
Check that this POD is "less than" another POD.
POD comparison is implemented by converting them to tuples and
comparing the two tuples.
"""
if not isinstance(other, POD):
return NotImplemented
return self.as_tuple() < other.as_tuple()
def as_tuple(self) -> tuple:
"""
Return the data in this POD as a tuple.
Order of elements in the tuple corresponds to the order of field
declarations.
"""
return self.__class__.namedtuple_cls(*[
getattr(self, field.name)
for field in self.__class__.field_list
])
def as_dict(self) -> dict:
"""
Return the data in this POD as a dictionary.
.. note::
UNSET values are not added to the dictionary.
"""
return {
field.name: getattr(self, field.name)
for field in self.__class__.field_list
if getattr(self, field.name) is not UNSET
}
class _FieldCollection:
"""
Support class for constructing POD meta-data information.
Helper class that simplifies :class:`PODMeta` code that harvests
:class:`Field` instances during class construction. Looking at the
namespace and the list of base classes, it comes up with a list of Field
objects that belong to the given POD.
:attr field_list:
A list of :class:`Field` instances
:attr field_origin_map:
A dictionary mapping from field name to the *name* of the class that
defines it.
"""
def __init__(self):
self.field_list = []
self.field_origin_map = {} # field name -> defining class name
def inspect_cls_for_decorator(self, cls: type) -> None:
"""Analyze a bare POD class."""
self.inspect_base_classes(cls.__bases__)
self.inspect_namespace(cls.__dict__, cls.__name__)
def inspect_base_classes(self, base_cls_list: "List[type]") -> None:
"""
Analyze base classes of a POD class.
Analyze a list of base classes and check if they have consistent
fields. All analyzed fields are added to the internal data structures.
:param base_cls_list:
A list of classes to inspect. Only subclasses of POD are inspected.
"""
for base_cls in base_cls_list:
if not issubclass(base_cls, PODBase):
continue
base_cls_name = base_cls.__name__
for field in base_cls.field_list:
self.add_field(field, base_cls_name)
def inspect_namespace(self, namespace: dict, cls_name: str) -> None:
"""
Analyze namespace of a POD class.
Analyze a namespace of a newly (being formed) class and check if it has
consistent fields. All analyzed fields are added to the internal data
structures.
.. note::
This method calls :meth:`Field.gain_name()` on all fields it finds.
"""
fields = []
for field_name, field in namespace.items():
if not isinstance(field, Field):
continue
field.gain_name(field_name)
fields.append(field)
fields.sort(key=lambda field: field.counter)
for field in fields:
self.add_field(field, cls_name)
def get_namedtuple_cls(self, name: str) -> type:
"""
Create a new namedtuple that corresponds to the fields seen so far.
:param name:
Name of the namedtuple class
:returns:
A new namedtuple class
"""
return namedtuple(name, [field.name for field in self.field_list])
def add_field(self, field: Field, base_cls_name: str) -> None:
"""
Add a field to the collection.
:param field:
A :class:`Field` instance
:param base_cls_name:
The name of the class that defines the field
:raises TypeError:
If any of the base classes have overlapping fields.
"""
assert field.name is not None
field_name = field.name
if field_name not in self.field_origin_map:
self.field_origin_map[field_name] = base_cls_name
self.field_list.append(field)
else:
raise TypeError("field {1}.{0} clashes with {2}.{0}".format(
field_name, base_cls_name, self.field_origin_map[field_name]))
class PODMeta(type):
"""
Meta-class for all POD classes.
This meta-class is responsible for correctly handling field inheritance.
This class sets up ``field_list`` and ``namedtuple_cls`` attributes on the
newly-created class.
"""
def __new__(mcls, name, bases, namespace):
fc = _FieldCollection()
fc.inspect_base_classes(bases)
fc.inspect_namespace(namespace, name)
namespace['field_list'] = fc.field_list
namespace['namedtuple_cls'] = fc.get_namedtuple_cls(name)
cls = super().__new__(mcls, name, bases, namespace)
for field in fc.field_list:
field.alter_cls(cls)
return cls
@classmethod
def __prepare__(mcls, name, bases, **kwargs):
"""
Get a namespace for defining new POD classes.
Prepare the namespace for the definition of a class using PODMeta as a
meta-class. Since we want to observe the order of fields, using an
OrderedDict makes that task trivial.
"""
return OrderedDict()
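The ``__prepare__`` trick used by ``PODMeta`` can be demonstrated in
isolation. ``OrderedMeta``, ``member_order`` and ``Example`` below are
illustrative names, not plainbox API:

```python
from collections import OrderedDict


class OrderedMeta(type):
    """Meta-class that records attribute definition order, as PODMeta does."""

    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        # The class body executes inside this mapping, so it records
        # the order in which attributes were defined.
        return OrderedDict()

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, dict(namespace))
        cls.member_order = [
            key for key in namespace if not key.startswith('__')]
        return cls


class Example(metaclass=OrderedMeta):
    first = 1
    second = 2
    third = 3
```

``Example.member_order`` is ``['first', 'second', 'third']``; this is the same
mechanism that lets POD keep ``field_list`` in declaration order.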
def podify(cls):
"""
Decorator for POD classes.
The decorator offers an alternative from using the POD class (with the
PODMeta meta-class). Instead of using that, one can use the ``@podify``
decorator on a PODBase-derived class.
"""
if not isinstance(cls, type) or not issubclass(cls, PODBase):
raise TypeError("cls must be a subclass of PODBase")
fc = _FieldCollection()
fc.inspect_cls_for_decorator(cls)
cls.field_list = fc.field_list
cls.namedtuple_cls = fc.get_namedtuple_cls(cls.__name__)
for field in fc.field_list:
field.alter_cls(cls)
return cls
@total_ordering
class POD(PODBase, metaclass=PODMeta):
"""
Base class that removes boilerplate from plain-old-data classes.
Use POD as your base class and define :class:`Field` objects inside. Don't
define any __init__() (unless you really, really have to have one) and
instead set appropriate attributes on the initializer of a particular field
object.
What you get for *free* is, all the properties (for each field),
documentation, initializer, comparison methods (PODs have total ordering)
and the __repr__() method.
There are some additional methods, such as :meth:`as_tuple()` and
:meth:`as_dict()` that may be of use in some circumstances.
All fields in a single POD subclass are collected (including all of the
fields in the parent classes) and arranged in a list. That list is
available as ``POD.field_list``.
In addition each POD class has a unique named tuple that corresponds to
the fields stored inside the POD. That named tuple is available as
``POD.namedtuple_cls`` and is the actual return type of
:meth:`as_tuple()`.
"""
def modify_field_docstring(field_docstring_ext: str):
"""
Decorator for altering field docstrings via assign filter functions.
A decorator for assign filter functions that allows them to declaratively
modify the docstring of the field they are used on.
:param field_docstring_ext:
A string compatible with python's str.format() method. The string
should be one line long (newlines will look odd) and may reference any
of the field attributes, as exposed by the {field} named format
attribute.
Example:
>>> @modify_field_docstring("not even")
... def not_even(instance, field, old, new):
... if new % 2 == 0:
... raise ValueError("value cannot be even")
... return new
"""
def decorator(fn):
fn.field_docstring_ext = field_docstring_ext
return fn
return decorator
@modify_field_docstring("constant (read-only after initialization)")
def read_only_assign_filter(
instance: POD, field: Field, old: "Any", new: "Any") -> "Any":
"""
An assign filter that makes a field read-only.
The field can be only assigned if the old value is ``UNSET``, that is,
during the initial construction of a POD object.
:param instance:
A subclass of :class:`POD` that contains ``field``
:param field:
The :class:`Field` being assigned to
:param old:
The current value of the field
:param new:
The proposed value of the field
:returns:
``new``, as-is
:raises AttributeError:
if ``old`` is anything but the special object ``UNSET``
"""
if old is UNSET:
return new
raise AttributeError(_(
"{}.{} is read-only"
).format(instance.__class__.__name__, field.name))
const = read_only_assign_filter
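The read-only behaviour can be exercised with a standalone sketch; the
local ``UNSET`` sentinel and ``read_only_filter`` names below are
illustrative stand-ins for the module's versions:

```python
UNSET = object()  # illustrative stand-in for the module's UNSET sentinel


def read_only_filter(instance, field_name, old, new):
    """Accept an assignment only while the stored value is still UNSET."""
    if old is UNSET:
        return new
    raise AttributeError("{} is read-only".format(field_name))


# The first assignment (old is UNSET) passes through unchanged:
value = read_only_filter(None, 'serial', UNSET, 'abc-123')
# Any later assignment is rejected:
try:
    read_only_filter(None, 'serial', value, 'xyz-789')
except AttributeError as exc:
    error = str(exc)
```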
@modify_field_docstring(
"type-converted (value must be convertible to {field.type.__name__})")
def type_convert_assign_filter(
instance: POD, field: Field, old: "Any", new: "Any") -> "Any":
"""
An assign filter that converts the value to the field type.
The field must have a valid python type object stored in the .type field.
:param instance:
A subclass of :class:`POD` that contains ``field``
:param field:
The :class:`Field` being assigned to
:param old:
The current value of the field
:param new:
The proposed value of the field
:returns:
``new`` type-converted to ``field.type``.
:raises ValueError:
if ``new`` cannot be converted to ``field.type``
"""
return field.type(new)
@modify_field_docstring(
"type-checked (value must be of type {field.type.__name__})")
def type_check_assign_filter(
instance: POD, field: Field, old: "Any", new: "Any") -> "Any":
"""
An assign filter that type-checks the value according to the field type.
The field must have a valid python type object stored in the .type field.
:param instance:
A subclass of :class:`POD` that contains ``field``
:param field:
The :class:`Field` being assigned to
:param old:
The current value of the field
:param new:
The proposed value of the field
:returns:
``new``, as-is
:raises TypeError:
if ``new`` is not an instance of ``field.type``
"""
if isinstance(new, field.type):
return new
raise TypeError("{}.{} requires objects of type {}".format(
instance.__class__.__name__, field.name, field.type.__name__))
typed = type_check_assign_filter
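Because assign filters run in list order and each one sees the previous
filter's output, conversion and validation compose naturally. A minimal
sketch of that chaining (all names here are illustrative):

```python
def run_filter_chain(filters, instance, field, old, new):
    # Each filter sees the value produced by the previous one,
    # mirroring how Field.__set__ walks assign_filter_list.
    for assign_filter in filters:
        new = assign_filter(instance, field, old, new)
    return new


def to_int(instance, field, old, new):
    """Convert the incoming value to int (like type_convert_assign_filter)."""
    return int(new)


def must_be_positive(instance, field, old, new):
    """Validate the already-converted value."""
    if new <= 0:
        raise ValueError("value must be positive")
    return new


result = run_filter_chain(
    [to_int, must_be_positive], None, 'count', None, '5')
```

Here the string ``'5'`` is first converted to the integer ``5`` and then
validated, so ``result`` is ``5``.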
@modify_field_docstring(
"unset or type-checked (value must be of type {field.type.__name__})")
def unset_or_type_check_assign_filter(
instance: POD, field: Field, old: "Any", new: "Any") -> "Any":
"""
An assign filter that type-checks the value according to the field type.
.. note::
This filter allows (passes through) the special ``UNSET`` value as-is.
The field must have a valid python type object stored in the .type field.
:param instance:
A subclass of :class:`POD` that contains ``field``
:param field:
The :class:`Field` being assigned to
:param old:
The current value of the field
:param new:
The proposed value of the field
:returns:
``new``, as-is
:raises TypeError:
if ``new`` is not an instance of ``field.type``
"""
if new is UNSET:
return new
return type_check_assign_filter(instance, field, old, new)
unset_or_typed = unset_or_type_check_assign_filter
class sequence_type_check_assign_filter:
"""
Assign filter for typed sequences.
An assign filter for typed sequences (lists or tuples) that must contain
objects of the given type.
"""
def __init__(self, item_type: type):
"""
Initialize the assign filter with the given sequence item type.
:param item_type:
Desired type of each sequence item.
"""
self.item_type = item_type
@property
def field_docstring_ext(self) -> str:
return "type-checked sequence (items must be of type {})".format(
self.item_type.__name__)
def __call__(
self, instance: POD, field: Field, old: "Any", new: "Any"
) -> "Any":
"""
An assign filter that type-checks the value of all sequence elements.
:param instance:
A subclass of :class:`POD` that contains ``field``
:param field:
The :class:`Field` being assigned to
:param old:
The current value of the field
:param new:
The proposed value of the field
:returns:
``new``, as-is
:raises TypeError:
if any element of ``new`` is not an instance of ``item_type``
"""
for item in new:
if not isinstance(item, self.item_type):
raise TypeError(
"{}.{} requires all sequence elements of type {}".format(
instance.__class__.__name__, field.name,
self.item_type.__name__))
return new
typed.sequence = sequence_type_check_assign_filter
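The per-element check can be sketched without the class machinery; the
names (``make_sequence_checker``, ``check_ints``) are illustrative:

```python
def make_sequence_checker(item_type):
    """Build an assign filter that validates every element of a sequence."""
    def check(instance, field_name, old, new):
        for item in new:
            if not isinstance(item, item_type):
                raise TypeError(
                    "{} requires all sequence elements of type {}".format(
                        field_name, item_type.__name__))
        return new
    return check


check_ints = make_sequence_checker(int)
ok = check_ints(None, 'scores', None, [1, 2, 3])
try:
    check_ints(None, 'scores', None, [1, 'two'])
except TypeError as exc:
    error = str(exc)
```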
class unset_or_sequence_type_check_assign_filter(typed.sequence):
"""
Assign filter for typed sequences.
.. note::
This filter allows (passes through) the special ``UNSET`` value as-is.
An assign filter for typed sequences (lists or tuples) that must contain
objects of the given type.
"""
@property
def field_docstring_ext(self) -> str:
return (
"unset or type-checked sequence (items must be of type {})"
).format(self.item_type.__name__)
def __call__(
self, instance: POD, field: Field, old: "Any", new: "Any"
) -> "Any":
"""
An assign filter that type-checks the value of all sequence elements.
.. note::
This filter allows (passes through) the special ``UNSET`` value
as-is.
:param instance:
A subclass of :class:`POD` that contains ``field``
:param field:
The :class:`Field` being assigned to
:param old:
The current value of the field
:param new:
The proposed value of the field
:returns:
``new``, as-is
:raises TypeError:
if any element of ``new`` is not an instance of ``item_type``
"""
if new is UNSET:
return new
return super().__call__(instance, field, old, new)
unset_or_typed.sequence = unset_or_sequence_type_check_assign_filter
@modify_field_docstring("unique elements (sequence elements cannot repeat)")
def unique_elements_assign_filter(
instance: POD, field: Field, old: "Any", new: "Any") -> "Any":
"""
An assign filter that ensures a sequence has non-repeating items.
:param instance:
A subclass of :class:`POD` that contains ``field``
:param field:
The :class:`Field` being assigned to
:param old:
The current value of the field
:param new:
The proposed value of the field
:returns:
``new``, as-is
:raises ValueError:
if ``new`` contains any duplicates
"""
seen = set()
for item in new:
if item in seen:
raise ValueError("Duplicate element: {!r}".format(item))
seen.add(item)
return new
unique = unique_elements_assign_filter
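A standalone sketch of the duplicate check (the comparison is done on each
``item``, and items must be hashable to go into the ``set``); all names here
are illustrative stand-ins:

```python
def unique_filter(instance, field_name, old, new):
    """Reject sequences containing repeated (hashable) items."""
    seen = set()
    for item in new:
        if item in seen:
            raise ValueError("Duplicate element: {!r}".format(item))
        seen.add(item)
    return new


ok = unique_filter(None, 'tags', None, ['a', 'b', 'c'])
try:
    unique_filter(None, 'tags', None, ['a', 'b', 'a'])
except ValueError as exc:
    error = str(exc)
```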
plainbox-0.25/plainbox/impl/test_runner.py
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Sylvain Pineau
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.test_runner
=========================
Test definitions for plainbox.impl.runner module
"""
from tempfile import TemporaryDirectory
from unittest import TestCase
import os
from plainbox.abc import IExecutionController
from plainbox.abc import IJobDefinition
from plainbox.impl.runner import CommandOutputWriter
from plainbox.impl.runner import FallbackCommandOutputPrinter
from plainbox.impl.runner import IOLogRecordGenerator
from plainbox.impl.runner import JobRunner
from plainbox.impl.runner import slugify
from plainbox.testing_utils.io import TestIO
from plainbox.vendor.mock import Mock
class SlugifyTests(TestCase):
def test_random_strings(self):
self.assertEqual(slugify("A "), "A_")
self.assertEqual(slugify("A-"), "A-")
self.assertEqual(slugify("A_"), "A_")
self.assertEqual(slugify(".b"), ".b")
self.assertEqual(slugify("\\z"), "_z")
self.assertEqual(slugify("/z"), "_z")
self.assertEqual(slugify("1k"), "1k")
class IOLogGeneratorTests(TestCase):
def test_smoke(self):
builder = IOLogRecordGenerator()
# Calling on_begin() resets internal state
builder.on_begin(None, None)
builder.on_new_record.connect(
lambda record: setattr(self, 'last_record', record))
# Calling on_line generates records
builder.on_line('stdout', b'text\n')
self.assertEqual(self.last_record.stream_name, 'stdout')
self.assertEqual(self.last_record.data, b'text\n')
builder.on_line('stdout', b'different text\n')
self.assertEqual(self.last_record.stream_name, 'stdout')
self.assertEqual(self.last_record.data, b'different text\n')
builder.on_line('stderr', b'error message\n')
self.assertEqual(self.last_record.stream_name, 'stderr')
self.assertEqual(self.last_record.data, b'error message\n')
class FallbackCommandOutputPrinterTests(TestCase):
def test_smoke(self):
with TestIO(combined=False) as io:
obj = FallbackCommandOutputPrinter("example")
# Whatever gets printed by the job...
obj.on_line('stdout', b'line 1\n')
obj.on_line('stderr', b'line 1\n')
obj.on_line('stdout', b'line 2\n')
obj.on_line('stdout', b'line 3\n')
obj.on_line('stderr', b'line 2\n')
# Gets printed to stdout _only_, stderr is combined with stdout here
self.assertEqual(io.stdout, (
"(job example, <stdout>) line 1\n"
"(job example, <stderr>) line 1\n"
"(job example, <stdout>) line 2\n"
"(job example, <stdout>) line 3\n"
"(job example, <stderr>) line 2\n"
))
class CommandOutputWriterTests(TestCase):
def assertFileContentsEqual(self, pathname, contents):
with open(pathname, 'rb') as stream:
self.assertEqual(stream.read(), contents)
def test_smoke(self):
with TemporaryDirectory() as scratch_dir:
stdout = os.path.join(scratch_dir, "stdout")
stderr = os.path.join(scratch_dir, "stderr")
writer = CommandOutputWriter(stdout, stderr)
# Initially nothing is created
self.assertFalse(os.path.exists(stdout))
self.assertFalse(os.path.exists(stderr))
# Logs are created when the command is first started
writer.on_begin(None, None)
self.assertTrue(os.path.exists(stdout))
self.assertTrue(os.path.exists(stderr))
# Each line simply gets saved
writer.on_line('stdout', b'text\n')
writer.on_line('stderr', b'error\n')
# (but it may not be on disk yet because of buffering)
# After the command is done the logs are left on disk
writer.on_end(None)
self.assertFileContentsEqual(stdout, b'text\n')
self.assertFileContentsEqual(stderr, b'error\n')
class RunnerTests(TestCase):
def test_get_warm_up_sequence(self):
# create a mocked execution controller
ctrl = Mock(spec_set=IExecutionController, name='ctrl')
# create a fake warm up function
warm_up_func = Mock(name='warm_up_func')
# make the execution controller accept any job
ctrl.get_score.return_value = 1
# make the execution controller return warm_up_func as warm-up
ctrl.get_warm_up_for_job.return_value = warm_up_func
# make a pair of mock jobs for our controller to see
job1 = Mock(spec_set=IJobDefinition, name='job1')
job2 = Mock(spec_set=IJobDefinition, name='job2')
with TemporaryDirectory() as session_dir:
# Create a real runner with a fake execution controller, empty list
# of providers and fake io-log directory.
runner = JobRunner(
session_dir, provider_list=[],
jobs_io_log_dir=os.path.join(session_dir, 'io-log'),
execution_ctrl_list=[ctrl])
# Ensure that we got the warm up function we expected
self.assertEqual(
runner.get_warm_up_sequence([job1, job2]), [warm_up_func])
plainbox-0.25/plainbox/impl/__init__.py
# This file is part of Checkbox.
#
# Copyright 2012 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl` -- implementation package
==============================================
.. warning::
THIS MODULE DOES NOT HAVE STABLE PUBLIC API
"""
from functools import wraps
from inspect import getabsfile
from warnings import warn
import os.path
import sys
import textwrap
import plainbox
from plainbox.impl._textwrap import _textwrap_indent
def _get_doc_margin(doc):
"""
Find minimum indentation of any non-blank lines after first line.
"""
lines = doc.expandtabs().split('\n')
margin = sys.maxsize
for line in lines[1:]:
content = len(line.lstrip())
if content:
indent = len(line) - content
margin = min(margin, indent)
return 0 if margin == sys.maxsize else margin
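To show what ``_get_doc_margin()`` computes, here is the same logic restated
in a self-contained form (``get_doc_margin`` is an illustrative copy, not a
new public name) together with a sample docstring:

```python
import sys


def get_doc_margin(doc):
    """Standalone restatement of _get_doc_margin() for demonstration."""
    lines = doc.expandtabs().split('\n')
    margin = sys.maxsize
    for line in lines[1:]:
        content = len(line.lstrip())
        if content:
            margin = min(margin, len(line) - content)
    return 0 if margin == sys.maxsize else margin


doc = "Summary line.\n\n    Indented body.\n      Deeper detail.\n"
```

The first line and blank lines are ignored, so the minimum indent here is 4;
a one-line docstring yields 0.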
def public(import_path, introduced=None, deprecated=None):
"""
Public API decorator generator.
This decorator serves multiple uses:
* It clearly documents all public APIs. This is visible to
both developers reading the source code directly and to people
reading code documentation (by adjusting __doc__)
* It provides a stable import location while allowing to move the
implementation around as the code evolves. This unbinds the name and
documentation of the symbol from the code.
* It documents when each function was introduced. This is also visible
in the generated documentation.
* It documents when each function will be decommissioned. This is
visible in the generated documentation and at runtime. Each initial
call to a deprecated function will cause a PendingDeprecationWarning
to be logged.
The actual implementation of the function must be in a module specified
by import_path. It can be a module name or a module name and a function
name, when separated by a colon.
"""
# Create a forwarding decorator for the shim function. The shim argument is
# the actual empty function from the public module that serves as
# documentation carrier.
def decorator(shim):
# Allow overriding the function name by specifying it in the import path
# after a colon. If missing it defaults to the name of the shim
try:
module_name, func_name = import_path.split(":", 1)
except ValueError:
module_name, func_name = import_path, shim.__name__
# Import the module with the implementation and extract the function
module = __import__(module_name, fromlist=[''])
try:
impl = getattr(module, func_name)
except AttributeError:
raise NotImplementedError(
"%s.%s does not exist" % (module_name, func_name))
@wraps(shim)
def call_impl(*args, **kwargs):
return impl(*args, **kwargs)
# Document the public nature of the function
call_impl.__doc__ += "\n".join([
"",
" This function is a part of the public API",
" The private implementation is in {}:{}".format(
import_path, shim.__name__)
])
if introduced is None:
call_impl.__doc__ += "\n".join([
"",
" This function was introduced in the initial version of"
" plainbox",
])
else:
call_impl.__doc__ += "\n".join([
"",
" This function was introduced in version: {}".format(
introduced)
])
# Document deprecation status, if any
if deprecated is not None:
            call_impl.__doc__ += "\n".join([
                "",
                "    .. warning::",
                "        This function is deprecated",
                "        It will be removed in version: {}".format(deprecated),
            ])
# Add implementation docs, if any
if impl.__doc__ is not None:
            call_impl.__doc__ += "\n".join([
                "",
                "    Additional documentation from the private"
                " implementation:"])
call_impl.__doc__ += impl.__doc__
return call_impl
return decorator
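The import mechanics used by ``public`` can be exercised on their own. This is a minimal, self-contained sketch (using ``"json:dumps"`` as a stand-in ``import_path``; it is not part of plainbox) showing how ``__import__`` with a non-empty ``fromlist`` returns the leaf module and ``getattr`` extracts the named implementation:

```python
# Resolve an import_path of the form "module:function" the same way the
# decorator above does. "json:dumps" is just an illustrative stand-in.
import_path = "json:dumps"
try:
    module_name, func_name = import_path.split(":", 1)
except ValueError:
    # No colon in the path: the name would default to the shim's __name__.
    module_name, func_name = import_path, "dumps"

# A non-empty fromlist makes __import__ return the leaf module itself
# rather than the top-level package.
module = __import__(module_name, fromlist=[''])
impl = getattr(module, func_name)
print(impl({"a": 1}))  # '{"a": 1}'
```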
def deprecated(version, explanation=None):
"""
Decorator for marking functions as deprecated
:param version:
Version in which a function is deprecated
:param explanation:
Explanation of the deprecation. Ideally this will include hints on how
to get a modern replacement.
Deprecated functions are candidates for removal. Existing code should be
adapted not to make any calls to the deprecated functions. New code should
not use such functions.
    .. note::
        Due to the way the Python warnings module works, to see deprecated
        function notices re-run your application with PYTHONWARNINGS=once
"""
if not isinstance(version, str):
# Due to a common mistake, 'version' is probably the decorated function
# and @deprecated was called without ()
raise SyntaxError("@deprecated() must be called with a parameter")
def decorator(func):
"""
The @deprecated decorator with deprecation information
"""
msg = "{0} is deprecated since version {1}".format(
func.__name__, version)
if func.__doc__ is None:
func.__doc__ = ''
indent = 4 * ' '
else:
indent = _get_doc_margin(func.__doc__) * ' '
func.__doc__ += indent + '\n'
func.__doc__ += indent + '.. deprecated:: {}'.format(version)
if explanation is not None:
func.__doc__ += _textwrap_indent(
textwrap.dedent(explanation), prefix=indent * 2)
@wraps(func)
def wrapper(*args, **kwargs):
warn(DeprecationWarning(msg), stacklevel=2)
return func(*args, **kwargs)
return wrapper
return decorator
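The warn-on-call pattern used above can be demonstrated with a minimal, self-contained sketch (the ``deprecated_sketch`` name and ``old_api`` function are illustrative, not part of the plainbox API):

```python
import warnings
from functools import wraps

def deprecated_sketch(version):
    """Illustrative decorator: emit DeprecationWarning on each invocation."""
    def decorator(func):
        msg = "{0} is deprecated since version {1}".format(
            func.__name__, version)
        @wraps(func)
        def wrapper(*args, **kwargs):
            # stacklevel=2 points the warning at the caller, not the wrapper.
            warnings.warn(DeprecationWarning(msg), stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated_sketch("0.5")
def old_api(x):
    return x * 2

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api(21)
print(result, caught[0].category.__name__)  # 42 DeprecationWarning
```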
def get_plainbox_dir():
"""
Return the root directory of the plainbox package.
"""
return os.path.dirname(getabsfile(plainbox))
plainbox-0.25/plainbox/impl/depmgr.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
Job Dependency Solver.
:mod:`plainbox.impl.depmgr`
===========================
.. warning::
THIS MODULE DOES NOT HAVE STABLE PUBLIC API
"""
from abc import ABCMeta
from abc import abstractproperty
from logging import getLogger
from plainbox.i18n import gettext as _
from plainbox.vendor import enum
logger = getLogger("plainbox.depmgr")
class DependencyError(Exception, metaclass=ABCMeta):
""" Exception raised when a dependency error is detected. """
@abstractproperty
def affected_job(self):
""" job that is affected by the dependency error. """
@abstractproperty
def affecting_job(self):
"""
job that is affecting the :attr:`affected_job`.
        This may be None in certain cases (e.g., when the job does not exist
        and is merely referred to by id). If this job exists, removing it
        SHOULD prevent this problem from occurring.
This may be the same as :attr:`affected_job`
"""
class DependencyUnknownError(DependencyError):
"""
Exception raised when an unknown job is mentioned.
.. note::
This class differs from :class:`DependencyMissingError` in that the
unknown job is not a dependency of anything. It can only happen when
the job is explicitly mentioned in the list of jobs to visit.
"""
def __init__(self, job):
""" Initialize a new DependencyUnknownError with a given job. """
self.job = job
@property
def affected_job(self):
"""
job that is affected by the dependency error.
        Here it's a job that is on the ``visit_list`` but not on the
        ``job_list``.
"""
return self.job
@property
def affecting_job(self):
"""
job that is affecting the :attr:`affected_job`.
Here, it is always None.
"""
def __str__(self):
""" Get a printable description of an error. """
return _("unknown job referenced: {!a}").format(self.job.id)
def __repr__(self):
""" Get a debugging representation of an error. """
return "<{} job:{!r}>".format(self.__class__.__name__, self.job)
def __eq__(self, other):
""" Check if one error is equal to another. """
if not isinstance(other, DependencyUnknownError):
return NotImplemented
return self.job == other.job
def __hash__(self):
""" Calculate the hash of an error. """
return hash((self.job,))
class DependencyCycleError(DependencyError):
""" Exception raised when a cyclic dependency is detected. """
def __init__(self, job_list):
"""
Initialize with a list of jobs that form a dependency loop.
The dependencies satisfy the given expression:
job_list[n - 1] depends-on job_list[n]
The error exists because job_list[0] is job_list[-1].
Each item is a JobDefinition instance.
"""
assert len(job_list) > 1
assert job_list[0] is job_list[-1]
self.job_list = job_list
@property
def affected_job(self):
"""
job that is affected by the dependency error.
Here it is the job that has a cyclic dependency on itself.
"""
return self.job_list[0]
@property
def affecting_job(self):
"""
job that is affecting the :attr:`affected_job`.
Here it's always the same as :attr:`~DependencyCycleError.affected_job`
"""
return self.affected_job
def __str__(self):
""" Get a printable description of an error. """
return _("dependency cycle detected: {}").format(
" -> ".join([job.id for job in self.job_list]))
def __repr__(self):
""" Get a debugging representation of an error. """
return "<{} job_list:{!r}>".format(
self.__class__.__name__, self.job_list)
class DependencyMissingError(DependencyError):
""" Exception raised when a job has an unsatisfied dependency. """
DEP_TYPE_RESOURCE = "resource"
DEP_TYPE_DIRECT = "direct"
DEP_TYPE_ORDERING = "ordering"
def __init__(self, job, missing_job_id, dep_type):
""" Initialize a new error with given data. """
self.job = job
self.missing_job_id = missing_job_id
self.dep_type = dep_type
@property
def affected_job(self):
"""
job that is affected by the dependency error.
Here it is the job that has a missing dependency.
"""
return self.job
@property
def affecting_job(self):
"""
job that is affecting the :attr:`affected_job`.
Here it is always None as we have not seen this job at all and that's
what's causing the problem in the first place.
"""
def __str__(self):
""" Get a printable description of an error. """
return _("missing dependency: {!r} ({})").format(
self.missing_job_id, self.dep_type)
def __repr__(self):
""" Get a debugging representation of an error. """
return "<{} job:{!r} missing_job_id:{!r} dep_type:{!r}>".format(
self.__class__.__name__,
self.job, self.missing_job_id, self.dep_type)
def __eq__(self, other):
""" Check if one error is equal to another. """
if not isinstance(other, DependencyMissingError):
return NotImplemented
return (self.job == other.job
and self.missing_job_id == other.missing_job_id
and self.dep_type == other.dep_type)
def __hash__(self):
""" Calculate the hash of an error. """
return hash((self.job, self.missing_job_id, self.dep_type))
class DependencyDuplicateError(DependencyError):
""" Exception raised when two jobs have the same id. """
def __init__(self, job, duplicate_job):
""" Initialize a new error with given data. """
assert job.id == duplicate_job.id
self.job = job
self.duplicate_job = duplicate_job
@property
def affected_job(self):
"""
job that is affected by the dependency error.
Here it is the job that is already known by the system.
"""
return self.job
@property
def affecting_job(self):
"""
job that is affecting the :attr:`affected_job`.
Here it is the job that is clashing with another job already present in
the system.
"""
return self.duplicate_job
def __str__(self):
""" Get a printable description of an error. """
return _("duplicate job id: {!r}").format(self.affected_job.id)
def __repr__(self):
""" Get a debugging representation of an error. """
return "<{} job:{!r} duplicate_job:{!r}>".format(
self.__class__.__name__, self.job, self.duplicate_job)
class Color(enum.Enum):
"""
Three classic colors for recursive graph visitor.
WHITE:
        For nodes that have not been visited yet.
GRAY:
For nodes that are currently being visited but the visit is not
complete.
BLACK:
For nodes that have been visited and are complete.
"""
WHITE = 'white'
GRAY = 'gray'
BLACK = 'black'
class DependencySolver:
"""
Dependency solver for Jobs.
Uses a simple depth-first search to discover the sequence of jobs that can
run. Use the resolve_dependencies() class method to get the solution.
"""
COLOR_WHITE = Color.WHITE
COLOR_GRAY = Color.GRAY
COLOR_BLACK = Color.BLACK
@classmethod
def resolve_dependencies(cls, job_list, visit_list=None):
"""
Solve the dependency graph expressed as a list of job definitions.
:param list job_list: list of known jobs
:param list visit_list: (optional) list of jobs to solve
        The visit_list, if specified, makes it possible to consider only a
        part of the
graph while still having access and knowledge of all jobs.
:returns list: the solution (a list of jobs to execute in order)
:raises DependencyDuplicateError:
if a duplicate job definition is present
:raises DependencyCycleError:
if a cyclic dependency is present.
        :raises DependencyMissingError:
if a required job does not exist.
"""
return cls(job_list)._solve(visit_list)
def __init__(self, job_list):
"""
Instantiate a new dependency solver with the specified list of jobs.
:raises DependencyDuplicateError:
if the initial job_list has any duplicate jobs
"""
# Remember the jobs that were passed
self._job_list = job_list
# Build a map of jobs (by id)
self._job_map = self._get_job_map(job_list)
# Job colors, maps from job.id to COLOR_xxx
self._job_color_map = {job.id: self.COLOR_WHITE for job in job_list}
# The computed solution, made out of job instances. This is not
# necessarily the only solution but the algorithm computes the same
# value each time, given the same input.
self._solution = []
def _solve(self, visit_list=None):
"""
Internal method of DependencySolver.
Solves the dependency graph and returns the solution.
Calls _visit() on each of the initial nodes/jobs
"""
# Visit the visit list
logger.debug(_("Starting solve"))
logger.debug(_("Solver job list: %r"), self._job_list)
logger.debug(_("Solver visit list: %r"), visit_list)
if visit_list is None:
visit_list = self._job_list
for job in visit_list:
self._visit(job)
logger.debug(_("Done solving"))
# Return the solution
return self._solution
def _visit(self, job, trail=None):
"""
Internal method of DependencySolver.
        Called each time a node is visited. Nodes that have already been
        visited are skipped. Attempts to enumerate all dependencies (both
        direct and resource) and resolve them. Missing jobs cause
        DependencyMissingError to be raised. Calls _visit recursively on all
        dependencies.
"""
try:
color = self._job_color_map[job.id]
except KeyError:
logger.debug(_("Visiting job that's not on the job_list: %r"), job)
raise DependencyUnknownError(job)
logger.debug(_("Visiting job %s (color %s)"), job.id, color)
if color == self.COLOR_WHITE:
# This node has not been visited yet. Let's mark it as GRAY (being
# visited) and iterate through the list of dependencies
self._job_color_map[job.id] = self.COLOR_GRAY
# If the trail was not specified start a trail for this node
if trail is None:
trail = [job]
for dep_type, job_id in job.controller.get_dependency_set(job):
# Dependency is just an id, we need to resolve it
# to a job instance. This can fail (missing dependencies)
# so let's guard against that.
try:
next_job = self._job_map[job_id]
except KeyError:
logger.debug(_("Found missing dependency: %r from %r"),
job_id, job)
raise DependencyMissingError(job, job_id, dep_type)
else:
# For each dependency that we visit let's reuse the trail
# to give proper error messages if a dependency loop exists
logger.debug(_("Visiting dependency: %r"), next_job)
# Update the trail as we visit that node
trail.append(next_job)
self._visit(next_job, trail)
trail.pop()
# We've visited (recursively) all dependencies of this node,
# let's color it black and append it to the solution list.
logger.debug(_("Appending %r to solution"), job)
self._job_color_map[job.id] = self.COLOR_BLACK
self._solution.append(job)
elif color == self.COLOR_GRAY:
# This node is not fully traced yet but has been visited already
# so we've found a dependency loop. We need to cut the initial
# part of the trail so that we only report the part that actually
# forms a loop
trail = trail[trail.index(job):]
logger.debug(_("Found dependency cycle: %r"), trail)
raise DependencyCycleError(trail)
else:
assert color == self.COLOR_BLACK
# This node has been visited and is fully traced.
# We can just skip it and go back
@staticmethod
def _get_job_map(job_list):
"""
Internal method of DependencySolver.
Computes a map of job.id => job
Raises DependencyDuplicateError if a collision is found
"""
job_map = {}
for job in job_list:
if job.id in job_map:
raise DependencyDuplicateError(job_map[job.id], job)
else:
job_map[job.id] = job
return job_map
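The three-color traversal implemented by ``DependencySolver._visit()`` can be illustrated with a minimal, self-contained sketch. The job graph below is made up; real jobs come from ``JobDefinition`` objects and their controllers:

```python
# WHITE = unvisited, GRAY = visit in progress, BLACK = fully traced.
# "a" depends on "b" and "c"; "b" depends on "c".
deps = {"a": ["b", "c"], "b": ["c"], "c": []}
WHITE, GRAY, BLACK = range(3)
color = {job: WHITE for job in deps}
solution = []

def visit(job, trail):
    if color[job] == GRAY:
        # Seen on the current path: cut the trail to just the loop.
        raise ValueError(
            "dependency cycle: %r" % (trail[trail.index(job):] + [job]))
    if color[job] == BLACK:
        return  # already fully traced, nothing to do
    color[job] = GRAY
    for dep in deps[job]:
        visit(dep, trail + [job])
    # All dependencies done: mark complete and append to the solution.
    color[job] = BLACK
    solution.append(job)

for job in deps:
    visit(job, [])
print(solution)  # ['c', 'b', 'a']
```

As in the real solver, each job appears in the solution only after everything it depends on, and re-entering a GRAY node signals a cycle.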
plainbox-0.25/plainbox/impl/session/resume.py
# This file is part of Checkbox.
#
# Copyright 2012, 2013, 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
Session resume handling.
:mod:`plainbox.impl.session.resume` -- session resume handling
==============================================================
This module contains classes that can resume a dormant session from
a binary representation. See docs for the suspend module for details.
The resume logic provides a compromise between usefulness and correctness
so two assumptions are made:
* We assume that a checksum of a job changes when its behavior changes.
This way we can detect when job definitions were updated after
suspending but before resuming.
* We assume that software and hardware *may* change while the session is
suspended but this is not something that framework (PlainBox) is
concerned with. Applications should provide job definitions that
are capable of detecting this and acting appropriately.
This is true since the user may install additional packages
or upgrade existing packages. The user can also add or remove pluggable
hardware. Lastly actual machine suspend (or hibernate) and resume *may*
cause alterations to the hardware as it is visible from within
the system. In any case the framework does not care about this.
"""
from collections import deque
import base64
import binascii
import gzip
import json
import logging
import os
import re
from plainbox.i18n import gettext as _
from plainbox.impl.result import DiskJobResult
from plainbox.impl.result import IOLogRecord
from plainbox.impl.result import MemoryJobResult
from plainbox.impl.result import OUTCOME_METADATA_MAP
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.qualifiers import SimpleQualifier
from plainbox.impl.session.state import SessionMetaData
from plainbox.impl.session.state import SessionState
logger = logging.getLogger("plainbox.session.resume")
class SessionResumeError(Exception):
"""
Base for all session resume exceptions.
Base class for exceptions that can be raised when attempting to
resume a dormant session.
"""
class CorruptedSessionError(SessionResumeError):
"""
Exception raised when suspended session is corrupted.
Exception raised when :class:`SessionResumeHelper` cannot decode
the session byte stream. This exception will be raised with additional
context that captures the actual underlying cause. Having this exception
class makes it easier to handle resume errors.
"""
class IncompatibleSessionError(SessionResumeError):
"""
Exception raised when suspended session is correct but incompatible.
Exception raised when :class:`SessionResumeHelper` comes across malformed
or unsupported data that was (presumably) produced by
:class:`SessionSuspendHelper`
"""
class IncompatibleJobError(SessionResumeError):
"""
Exception raised when suspended session needs a different version of a job.
Exception raised when :class:`SessionResumeHelper` detects that the set of
jobs it knows about is incompatible with what was saved before.
"""
class BrokenReferenceToExternalFile(SessionResumeError):
"""
Exception raised when suspended session needs an external file that's gone.
Exception raised when :class:`SessionResumeHelper` detects that a file
needed by the session to resume is not present. This is typically used to
signal inaccessible log files.
"""
class EnvelopeUnpackMixIn:
"""
A mix-in class capable of unpacking the envelope of the session storage.
This class assists in unpacking the "envelope" in which the session data is
actually stored. The envelope is simply gzip but other kinds of envelope
can be added later.
"""
def unpack_envelope(self, data):
"""
Unpack the binary envelope and get access to a JSON object.
:param data:
Bytes representing the dormant session
:returns:
the JSON representation of a session stored in the envelope
:raises CorruptedSessionError:
if the representation of the session is corrupted in any way
"""
try:
data = gzip.decompress(data)
except IOError:
raise CorruptedSessionError(_("Cannot decompress session data"))
try:
text = data.decode("UTF-8")
except UnicodeDecodeError:
raise CorruptedSessionError(_("Cannot decode session text"))
try:
return json.loads(text)
except ValueError:
raise CorruptedSessionError(_("Cannot interpret session JSON"))
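The envelope handled above is plain gzip around UTF-8 encoded JSON, so a round trip can be sketched without any plainbox code (the session payload here is made up):

```python
import gzip
import json

# Suspend side: JSON -> UTF-8 bytes -> gzip "envelope".
session = {"version": 1, "session": {"metadata": {"title": "demo"}}}
data = gzip.compress(json.dumps(session).encode("UTF-8"))

# Resume side, mirroring unpack_envelope() step by step:
# decompress, decode and parse.
restored = json.loads(gzip.decompress(data).decode("UTF-8"))
print(restored == session)  # True
```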
class SessionPeekHelper(EnvelopeUnpackMixIn):
"""A helper class to peek at session state meta-data quickly."""
def peek(self, data):
"""
Peek at the meta-data of a dormant session.
:param data:
Bytes representing the dormant session
:returns:
a SessionMetaData object
:raises CorruptedSessionError:
if the representation of the session is corrupted in any way
:raises IncompatibleSessionError:
if session serialization format is not supported
"""
json_repr = self.unpack_envelope(data)
return self._peek_json(json_repr)
def _peek_json(self, json_repr):
"""
Resume a SessionMetaData object from the JSON representation.
This method is called by :meth:`peek()` after the initial envelope
and parsing is done. The only error conditions that can happen
are related to semantic incompatibilities or corrupted internal state.
"""
logger.debug(_("Peeking at json... (see below)"))
logger.debug(json.dumps(json_repr, indent=4))
_validate(json_repr, value_type=dict)
        version = _validate(
            json_repr, key="version", choice=[1, 2, 3, 4, 5, 6])
if version == 1:
return SessionPeekHelper1().peek_json(json_repr)
elif version == 2:
return SessionPeekHelper2().peek_json(json_repr)
elif version == 3:
return SessionPeekHelper3().peek_json(json_repr)
elif version == 4:
return SessionPeekHelper4().peek_json(json_repr)
elif version == 5:
return SessionPeekHelper5().peek_json(json_repr)
elif version == 6:
return SessionPeekHelper6().peek_json(json_repr)
else:
raise IncompatibleSessionError(
_("Unsupported version {}").format(version))
class SessionResumeHelper(EnvelopeUnpackMixIn):
"""
Helper class for implementing session resume feature.
This class is a facade that does enough of the resume process to know which
version is being resumed and delegate the rest of the process to an
appropriate, format specific, resume class.
"""
def __init__(
self, job_list: 'List[JobDefinition]',
flags: 'Optional[Iterable[str]]', location: 'Optional[str]'
):
"""
Initialize the helper with a list of known jobs and support data.
:param job_list:
List of known jobs
:param flags:
Any iterable object with string versions of resume support flags.
This can be None, if the application doesn't wish to enable any of
the feature flags.
:param location:
Location of the session directory. This is the same as
``session_dir`` in the corresponding suspend API. It is also the
same as ``storage.location`` (where ``storage`` is a
            :class:`plainbox.impl.session.storage.SessionStorage` object).
Applicable flags are ``FLAG_FILE_REFERENCE_CHECKS_S``,
``FLAG_REWRITE_LOG_PATHNAMES_S`` and ``FLAG_IGNORE_JOB_CHECKSUMS_S``.
Their meaning is described below.
``FLAG_FILE_REFERENCE_CHECKS_S``:
Flag controlling reference checks from within the session file to
external files. If enabled such checks are performed and can cause
additional exceptions to be raised. Currently this only affects the
representation of the DiskJobResult instances.
``FLAG_REWRITE_LOG_PATHNAMES_S``:
            Flag controlling rewriting of log file pathnames. It requires the
            location to be non-None and rewrites the pathnames of all the
            missing log files to be relative to the session storage location.
It effectively depends on FLAG_FILE_REFERENCE_CHECKS_F being set at
the same time, otherwise it is ignored.
``FLAG_IGNORE_JOB_CHECKSUMS_S``:
Flag controlling integrity checks between jobs present at resume
time and jobs present at suspend time. Since providers cannot be
serialized (nor should they) this integrity check prevents anyone
from resuming a session if job definitions have changed. Using this
flag effectively disables that check.
"""
self.job_list = job_list
logger.debug("Session Resume Helper started with jobs: %r", job_list)
self.flags = flags
self.location = location
def resume(self, data, early_cb=None):
"""
Resume a dormant session.
:param data:
Bytes representing the dormant session
:param early_cb:
A callback that allows the caller to "see" the session object
early, before the bulk of resume operation happens. This method can
be used to register signal listeners on the new session before this
method call returns. The callback accepts one argument, session,
which is being resumed.
:returns:
resumed session instance
:rtype:
:class:`~plainbox.impl.session.state.SessionState`
This method validates the representation of a dormant session and
re-creates an identical SessionState instance. It can fail in multiple
ways, some of which are a part of normal operation and should always be
        handled (:class:`IncompatibleSessionError` and
        :class:`IncompatibleJobError`). Applications may wish to capture
:class:`SessionResumeError` as a generic base exception for all the
possible problems.
:raises CorruptedSessionError:
if the representation of the session is corrupted in any way
:raises IncompatibleSessionError:
if session serialization format is not supported
:raises IncompatibleJobError:
if serialized jobs are not the same as current jobs
"""
json_repr = self.unpack_envelope(data)
return self._resume_json(json_repr, early_cb)
def _resume_json(self, json_repr, early_cb=None):
"""
Resume a SessionState object from the JSON representation.
This method is called by :meth:`resume()` after the initial envelope
and parsing is done. The only error conditions that can happen
are related to semantic incompatibilities or corrupted internal state.
"""
logger.debug(_("Resuming from json... (see below)"))
logger.debug(json.dumps(json_repr, indent=4))
_validate(json_repr, value_type=dict)
        version = _validate(
            json_repr, key="version", choice=[1, 2, 3, 4, 5, 6])
if version == 1:
helper = SessionResumeHelper1(
self.job_list, self.flags, self.location)
elif version == 2:
helper = SessionResumeHelper2(
self.job_list, self.flags, self.location)
elif version == 3:
helper = SessionResumeHelper3(
self.job_list, self.flags, self.location)
elif version == 4:
helper = SessionResumeHelper4(
self.job_list, self.flags, self.location)
elif version == 5:
helper = SessionResumeHelper5(
self.job_list, self.flags, self.location)
elif version == 6:
helper = SessionResumeHelper6(
self.job_list, self.flags, self.location)
else:
raise IncompatibleSessionError(
_("Unsupported version {}").format(version))
return helper.resume_json(json_repr, early_cb)
class ResumeDiscardQualifier(SimpleQualifier):
"""
Qualifier for jobs that need to be discarded after resume.
A job qualifier that designates jobs that should be removed
after doing a session resume.
"""
def __init__(self, retain_id_set):
"""
Initialize the qualifier.
:param retain_id_set:
The set of job identifiers that should be retained on resume.
"""
super().__init__(Origin.get_caller_origin())
self._retain_id_set = frozenset(retain_id_set)
def get_simple_match(self, job):
"""Check if a job should be listed by this qualifier."""
return job.id not in self._retain_id_set
class MetaDataHelper1MixIn:
"""Mix-in class for working with v1 meta-data."""
@classmethod
def _restore_SessionState_metadata(cls, metadata, session_repr):
"""
Reconstruct the session state meta-data.
Extract meta-data information from the representation of the session
and set it in the given session object
"""
# Get the representation of the meta-data
metadata_repr = _validate(
session_repr, key='metadata', value_type=dict)
# Set each bit back to the session
metadata.title = _validate(
metadata_repr, key='title', value_type=str, value_none=True)
metadata.flags = set([
_validate(
flag, value_type=str,
value_type_msg=_("Each flag must be a string"))
for flag in _validate(
metadata_repr, key='flags', value_type=list)])
metadata.running_job_name = _validate(
metadata_repr, key='running_job_name', value_type=str,
value_none=True)
class MetaDataHelper2MixIn(MetaDataHelper1MixIn):
"""Mix-in class for working with v2 meta-data."""
@classmethod
def _restore_SessionState_metadata(cls, metadata, session_repr):
"""
Reconstruct the session state meta-data.
Extract meta-data information from the representation of the session
and set it in the given session object
"""
super()._restore_SessionState_metadata(metadata, session_repr)
# Get the representation of the meta-data
metadata_repr = _validate(
session_repr, key='metadata', value_type=dict)
app_blob = _validate(
metadata_repr, key='app_blob', value_type=str,
value_none=True)
if app_blob is not None:
try:
app_blob = app_blob.encode("ASCII")
except UnicodeEncodeError:
# TRANSLATORS: please don't translate app_blob
raise CorruptedSessionError(_("app_blob is not ASCII"))
try:
app_blob = base64.standard_b64decode(app_blob)
except binascii.Error:
# TRANSLATORS: please don't translate app_blob
raise CorruptedSessionError(_("Cannot base64 decode app_blob"))
metadata.app_blob = app_blob
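The ``app_blob`` decoding above reverses a simple transport encoding: arbitrary application bytes are stored as ASCII base64 text inside the JSON document. A self-contained round trip with a made-up payload:

```python
import base64

# Suspend side stores arbitrary application bytes as ASCII base64 text
# so they survive the JSON envelope.
app_blob = b'{"app": "demo", "step": 2}'
stored = base64.standard_b64encode(app_blob).decode("ASCII")

# Resume side, mirroring the decode steps in the method above.
restored = base64.standard_b64decode(stored.encode("ASCII"))
print(restored == app_blob)  # True
```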
class MetaDataHelper3MixIn(MetaDataHelper2MixIn):
"""Mix-in class for working with v3 meta-data."""
@classmethod
def _restore_SessionState_metadata(cls, metadata, session_repr):
"""
Reconstruct the session state meta-data.
Extract meta-data information from the representation of the session
and set it in the given session object
"""
super()._restore_SessionState_metadata(metadata, session_repr)
# Get the representation of the meta-data
metadata_repr = _validate(
session_repr, key='metadata', value_type=dict)
metadata.app_id = _validate(
metadata_repr, key='app_id', value_type=str,
value_none=True)
class SessionPeekHelper1(MetaDataHelper1MixIn):
"""
Helper class for implementing session peek feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper1` which has
been pre-processed by :class:`SessionPeekHelper` (to strip the initial
envelope).
The only goal of this class is to reconstruct session state meta-data.
"""
def peek_json(self, json_repr):
"""
Resume a SessionState object from the JSON representation.
This method is called by :meth:`peek()` after the initial envelope and
parsing is done. The only error conditions that can happen are related
to semantic incompatibilities or corrupted internal state.
"""
_validate(json_repr, key="version", choice=[1])
session_repr = _validate(json_repr, key='session', value_type=dict)
metadata = SessionMetaData()
self._restore_SessionState_metadata(metadata, session_repr)
return metadata
def _build_SessionState(self, session_repr, early_cb=None):
"""
Reconstruct the session state object.
This method creates a fresh SessionState instance and restores
jobs, results, meta-data and desired job list using helper methods.
"""
logger.debug(_("Starting to restore metadata..."))
metadata = SessionMetaData()
        self._restore_SessionState_metadata(metadata, session_repr)
return metadata
class SessionPeekHelper2(MetaDataHelper2MixIn, SessionPeekHelper1):
"""
Helper class for implementing session peek feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper1` which has
been pre-processed by :class:`SessionPeekHelper` (to strip the initial
envelope).
The only goal of this class is to reconstruct session state meta-data.
"""
class SessionPeekHelper3(MetaDataHelper3MixIn, SessionPeekHelper2):
"""
Helper class for implementing session peek feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper1` which has
been pre-processed by :class:`SessionPeekHelper` (to strip the initial
envelope).
The only goal of this class is to reconstruct session state meta-data.
"""
class SessionPeekHelper4(SessionPeekHelper3):
"""
Helper class for implementing session peek feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper1` which has
been pre-processed by :class:`SessionPeekHelper` (to strip the initial
envelope).
The only goal of this class is to reconstruct session state meta-data.
"""
class SessionPeekHelper5(SessionPeekHelper4):
"""
Helper class for implementing session peek feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper5` which has
been pre-processed by :class:`SessionPeekHelper` (to strip the initial
envelope).
The only goal of this class is to reconstruct session state meta-data.
"""
class SessionPeekHelper6(SessionPeekHelper5):
"""
    Helper class for implementing session peek feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper6` which has
been pre-processed by :class:`SessionPeekHelper` (to strip the initial
envelope).
The only goal of this class is to reconstruct session state meta-data.
"""
class SessionResumeHelper1(MetaDataHelper1MixIn):
"""
Helper class for implementing session resume feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper1` which has
been pre-processed by :class:`SessionResumeHelper` (to strip the initial
envelope).
Due to the constraints of what can be represented in a suspended session,
    this class cannot work in isolation. It must operate with a list of known
    jobs.
Since (most of the) jobs are being provided externally (as they represent
the non-serialized parts of checkbox or other job providers) several
failure modes are possible. Those are documented in :meth:`resume()`
"""
# Flag controlling reference checks from within the session file to
# external files. If enabled such checks are performed and can cause
# additional exceptions to be raised. Currently this only affects the
# representation of the DiskJobResult instances.
FLAG_FILE_REFERENCE_CHECKS_S = 'file-reference-checks'
FLAG_FILE_REFERENCE_CHECKS_F = 0x01
# Flag controlling rewriting of log file pathnames. It requires the
# location to be non-None and rewrites the pathnames of all missing
# log files to be relative to the session storage location. It effectively
# depends on FLAG_FILE_REFERENCE_CHECKS_F being set at the same time,
# otherwise it is ignored.
FLAG_REWRITE_LOG_PATHNAMES_S = 'rewrite-log-pathnames'
FLAG_REWRITE_LOG_PATHNAMES_F = 0x02
# Flag controlling integrity checks between jobs present at resume time and
# jobs present at suspend time. Since providers cannot be serialized (nor
# should they) this integrity check prevents anyone from resuming a session
# if job definitions have changed. Using this flag effectively disables
# that check.
FLAG_IGNORE_JOB_CHECKSUMS_S = 'ignore-job-checksums'
FLAG_IGNORE_JOB_CHECKSUMS_F = 0x04
def __init__(
self, job_list: 'List[JobDefinition]',
flags: 'Optional[Iterable[str]]', location: 'Optional[str]'
):
"""
Initialize the helper with a list of known jobs and support data.
:param job_list:
List of known jobs
:param flags:
Any iterable object with string versions of resume support flags.
This can be None, if the application doesn't wish to enable any of
the feature flags.
:param location:
Location of the session directory. This is the same as
``session_dir`` in the corresponding suspend API. It is also the
same as ``storage.location`` (where ``storage`` is a
:class:`plainbox.impl.session.storage.SessionStorage` object).
See :meth:`SessionResumeHelper.__init__()` for description and meaning
of each flag.
"""
self.job_list = job_list
self.flags = 0
self.location = location
# Convert flag string constants into numeric flags
if flags is not None:
if self.FLAG_FILE_REFERENCE_CHECKS_S in flags:
self.flags |= self.FLAG_FILE_REFERENCE_CHECKS_F
if self.FLAG_REWRITE_LOG_PATHNAMES_S in flags:
self.flags |= self.FLAG_REWRITE_LOG_PATHNAMES_F
if self.FLAG_IGNORE_JOB_CHECKSUMS_S in flags:
self.flags |= self.FLAG_IGNORE_JOB_CHECKSUMS_F
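The conversion above maps string flag constants onto a numeric bitmask. The following standalone sketch (not the plainbox API; the constant values are copied from the class attributes above) illustrates the same conversion in isolation:

```python
# Sketch of the string-flag to bitmask conversion performed in
# SessionResumeHelper1.__init__() (values mirror the class constants above).
FLAGS = {
    'file-reference-checks': 0x01,
    'rewrite-log-pathnames': 0x02,
    'ignore-job-checksums': 0x04,
}


def flags_to_bitmask(flags):
    """Convert an iterable of flag strings (or None) to a numeric mask."""
    mask = 0
    if flags is not None:
        for name in flags:
            mask |= FLAGS[name]
    return mask


assert flags_to_bitmask(None) == 0
assert flags_to_bitmask(['file-reference-checks']) == 0x01
assert flags_to_bitmask(
    ['file-reference-checks', 'rewrite-log-pathnames']) == 0x03
```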
def resume_json(self, json_repr, early_cb=None):
"""
Resume a SessionState object from the JSON representation.
This method is called by :meth:`resume()` after the initial envelope
and parsing is done. The only error conditions that can happen
are related to semantic incompatibilities or corrupted internal state.
"""
_validate(json_repr, key="version", choice=[1])
session_repr = _validate(json_repr, key='session', value_type=dict)
return self._build_SessionState(session_repr, early_cb)
def _build_SessionState(self, session_repr, early_cb=None):
"""
Reconstruct the session state object.
This method creates a fresh SessionState instance and restores
jobs, results, meta-data and desired job list using helper methods.
"""
# Construct a fresh session object.
session = SessionState(self.job_list)
logger.debug(_("Constructed new session for resume %r"), session)
# Give early_cb a chance to see the session before we start resuming.
# This way applications can see, among other things, generated jobs
# as they are added to the session, by registering appropriate signal
# handlers on the freshly-constructed session instance.
if early_cb is not None:
logger.debug(_("Invoking early callback %r"), early_cb)
new_session = early_cb(session)
if new_session is not None:
logger.debug(
_("Using different session for resume: %r"), new_session)
session = new_session
# Restore bits and pieces of state
logger.debug(
_("Starting to restore jobs and results to %r..."), session)
self._restore_SessionState_jobs_and_results(session, session_repr)
logger.debug(_("Starting to restore metadata..."))
self._restore_SessionState_metadata(session.metadata, session_repr)
logger.debug(_("restored metadata %r"), session.metadata)
logger.debug(_("Starting to restore desired job list..."))
self._restore_SessionState_desired_job_list(session, session_repr)
logger.debug(_("Starting to restore job list..."))
self._restore_SessionState_job_list(session, session_repr)
# Return whatever we've got
logger.debug(_("Resume complete!"))
return session
def _restore_SessionState_jobs_and_results(self, session, session_repr):
"""
Process representation of a session and restore jobs and results.
This method reconstructs all jobs and results in several stages.
The first pass just goes over all the jobs and results and restores
all of the non-generated jobs using :meth:`_process_job()` method.
Any job that cannot be processed (a generated job) is saved for further
processing.
"""
# Representation of all of the job definitions
jobs_repr = _validate(session_repr, key='jobs', value_type=dict)
# Representation of all of the job results
results_repr = _validate(session_repr, key='results', value_type=dict)
# List of jobs (ids) that could not be processed on the first pass
leftover_jobs = deque()
# Run a first pass through jobs and results. Anything that didn't
# work (generated jobs) gets added to leftover_jobs list.
# To make this bit deterministic (we like determinism) we're always
# going to process job results in alphabetical order.
first_pass_list = sorted(
set(jobs_repr.keys()) | set(results_repr.keys()))
for job_id in first_pass_list:
try:
self._process_job(session, jobs_repr, results_repr, job_id)
except KeyError:
leftover_jobs.append(job_id)
# Process leftovers. For each iteration the leftover_jobs list should
# shrink or we're not making any progress. If that happens we've got
# undefined jobs (in general the session is corrupted)
while leftover_jobs:
# Append a sentinel object so that we can know when we're
# done "iterating" over the collection once.
# Also: https://twitter.com/zygoon/status/370213046678872065
leftover_jobs.append(None)
leftover_shrunk = False
while leftover_jobs: # pragma: no branch
job_id = leftover_jobs.popleft()
# Treat the sentinel None object as the end of the iteration
if job_id is None:
break
try:
self._process_job(
session, jobs_repr, results_repr, job_id)
except KeyError as exc:
logger.debug("Seen KeyError for %r", exc)
leftover_jobs.append(job_id)
else:
leftover_shrunk = True
# Check if we're making any progress.
# We don't want to keep spinning on a list of some bogus jobs
# that nothing generated so we need an end condition for that case
if not leftover_shrunk:
raise CorruptedSessionError(
_("Unknown jobs remaining: {}").format(
", ".join(leftover_jobs)))
def _process_job(self, session, jobs_repr, results_repr, job_id):
"""
Process all representation details associated with a particular job.
This method takes a session object, representation of all the jobs
and all the results (and a job id) and tries to reconstruct the
state associated with that job in the session object.
Jobs are verified to match existing (known) jobs. Results are
rebuilt from their representation and presented back to the session
for processing (this restores resources and generated jobs).
This method can fail in normal operation, when the job that was
being processed is a generated job and has not been reintroduced into
the session. When that happens a KeyError is raised.
.. note::
Since the representation format for results can support storing
and restoring a list of results (per job) but the SessionState
cannot yet do that the implementation of this method restores
the state of the _last_ result object only.
"""
_validate(job_id, value_type=str)
# Get the checksum from the representation
checksum = _validate(
jobs_repr, key=job_id, value_type=str)
# Look up the actual job definition in the session.
# This can raise KeyError but it is okay, callers expect that
job = session.job_state_map[job_id].job
# Check if job definition has not changed
if job.checksum != checksum:
if self.flags & self.FLAG_IGNORE_JOB_CHECKSUMS_F:
logger.warning(_("Ignoring changes to job %r"), job_id)
else:
raise IncompatibleJobError(
_("Definition of job {!r} has changed").format(job_id))
# The result may not be there. This method is called for all the jobs
# we're supposed to check but not all such jobs need to have results
if job.id not in results_repr:
return
# Collect all of the result objects into result_list
result_list = []
result_list_repr = _validate(
results_repr, key=job_id, value_type=list, value_none=True)
for result_repr in result_list_repr:
_validate(result_repr, value_type=dict)
result = self._build_JobResult(
result_repr, self.flags, self.location)
result_list.append(result)
# Replay each result, one by one
for result in result_list:
logger.debug(_("calling update_job_result(%r, %r)"), job, result)
session.update_job_result(job, result)
@classmethod
def _restore_SessionState_desired_job_list(cls, session, session_repr):
"""
Reconstruct the list of desired jobs.
Extract the representation of desired_job_list from the session and
set it back to the session object. This method should be called after
all the jobs are discovered.
:raises CorruptedSessionError:
if desired_job_list refers to unknown job
"""
# List of all the _ids_ of the jobs that were selected
desired_job_list = [
_validate(
job_id, value_type=str,
value_type_msg=_("Each job id must be a string"))
for job_id in _validate(
session_repr, key='desired_job_list', value_type=list)]
# Restore job selection
logger.debug(
_("calling update_desired_job_list(%r)"), desired_job_list)
try:
session.update_desired_job_list([
session.job_state_map[job_id].job
for job_id in desired_job_list])
except KeyError as exc:
raise CorruptedSessionError(
_("'desired_job_list' refers to unknown job {!r}").format(
exc.args[0]))
@classmethod
def _restore_SessionState_mandatory_job_list(cls, session, session_repr):
"""
Extract the representation of mandatory_job_list from the session and
set it back to the session object. This method should be called after
all the jobs are discovered.
:raises CorruptedSessionError:
if mandatory_job_list refers to unknown job
"""
# List of all the _ids_ of the jobs that were selected
mandatory_job_list = [
_validate(
job_id, value_type=str,
value_type_msg=_("Each job id must be a string"))
for job_id in _validate(
session_repr, key='mandatory_job_list', value_type=list)]
# Restore job selection
logger.debug(
_("calling update_mandatory_job_list(%r)"), mandatory_job_list)
try:
session.update_mandatory_job_list([
session.job_state_map[job_id].job
for job_id in mandatory_job_list])
except KeyError as exc:
raise CorruptedSessionError(
_("'mandatory_job_list' refers to unknown job {!r}").format(
exc.args[0]))
@classmethod
def _restore_SessionState_job_list(cls, session, session_repr):
"""
Reconstruct the list of known jobs.
Trim job_list so that it has only those jobs that are mentioned by the
session representation. This should never fail as anything that might
go wrong must have gone wrong before.
"""
# Representation of all of the important job definitions
jobs_repr = _validate(session_repr, key='jobs', value_type=dict)
# Qualifier ready to select jobs to remove
qualifier = ResumeDiscardQualifier(
# This qualifier must select jobs that we want to KEEP:
# - All of the jobs that we need to run (aka, the desired jobs
# list). This is pretty obvious and it is exactly what must
# be preserved or trim_job_list() will complain
set([job.id for job in session.run_list])
# - All of the jobs that have representation (aka checksum).
# We want those jobs because they have results (or they would not
# end up in the list as of format v4). If they have results we
# just have to keep them. Perhaps the session had a different
# selection earlier, who knows.
| set(jobs_repr)
)
try:
# NOTE: this should never raise ValueError (which signals that we
# tried to remove a job which is in the run list) because it should
# only remove jobs that were not in the representation and any job
# in the run list must be in the representation already.
session.trim_job_list(qualifier)
except ValueError:
logger.error("BUG in session resume logic / assumptions")
raise
@classmethod
def _build_JobResult(cls, result_repr, flags, location):
"""
Reconstruct a single job result.
Convert the representation of MemoryJobResult or DiskJobResult
back into an actual instance.
"""
# Load all common attributes...
outcome = _validate(
result_repr, key='outcome', value_type=str,
value_choice=sorted(
OUTCOME_METADATA_MAP.keys(),
key=lambda outcome: outcome or "none"
), value_none=True)
comments = _validate(
result_repr, key='comments', value_type=str, value_none=True)
return_code = _validate(
result_repr, key='return_code', value_type=int, value_none=True)
execution_duration = _validate(
result_repr, key='execution_duration', value_type=float,
value_none=True)
# Construct either DiskJobResult or MemoryJobResult
if 'io_log_filename' in result_repr:
io_log_filename = cls._load_io_log_filename(
result_repr, flags, location)
if (flags & cls.FLAG_FILE_REFERENCE_CHECKS_F
and not os.path.isfile(io_log_filename)
and flags & cls.FLAG_REWRITE_LOG_PATHNAMES_F):
io_log_filename2 = cls._rewrite_pathname(io_log_filename,
location)
logger.warning(_("Rewrote file name from %r to %r"),
io_log_filename, io_log_filename2)
io_log_filename = io_log_filename2
if (flags & cls.FLAG_FILE_REFERENCE_CHECKS_F
and not os.path.isfile(io_log_filename)):
raise BrokenReferenceToExternalFile(
_("cannot access file: {!r}").format(io_log_filename))
return DiskJobResult({
'outcome': outcome,
'comments': comments,
'execution_duration': execution_duration,
'io_log_filename': io_log_filename,
'return_code': return_code
})
else:
io_log = [
cls._build_IOLogRecord(record_repr)
for record_repr in _validate(
result_repr, key='io_log', value_type=list)]
return MemoryJobResult({
'outcome': outcome,
'comments': comments,
'execution_duration': execution_duration,
'io_log': io_log,
'return_code': return_code
})
@classmethod
def _load_io_log_filename(cls, result_repr, flags, location):
return _validate(result_repr, key='io_log_filename', value_type=str)
@classmethod
def _rewrite_pathname(cls, pathname, location):
return re.sub(
r'.*/\.cache/plainbox/sessions/[^/]+', location, pathname)
@classmethod
def _build_IOLogRecord(cls, record_repr):
"""Convert the representation of IOLogRecord back into the object."""
_validate(record_repr, value_type=list)
delay = _validate(record_repr, key=0, value_type=float)
if delay < 0:
# TRANSLATORS: please keep delay untranslated
raise CorruptedSessionError(_("delay cannot be negative"))
stream_name = _validate(
record_repr, key=1, value_type=str,
value_choice=['stdout', 'stderr'])
data = _validate(record_repr, key=2, value_type=str)
# Each data item is a base64 string created by encoding the bytes and
# converting them to ASCII. To get the original we need to undo that
# operation.
try:
data = data.encode("ASCII")
except UnicodeEncodeError:
raise CorruptedSessionError(
_("record data {!r} is not ASCII").format(data))
try:
data = base64.standard_b64decode(data)
except binascii.Error:
raise CorruptedSessionError(
_("record data {!r} is not correct base64").format(data))
return IOLogRecord(delay, stream_name, data)
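The decode path of `_build_IOLogRecord` above can be exercised on its own. This is a minimal sketch, not the plainbox API: a plain tuple stands in for `IOLogRecord`, and `ValueError` stands in for `CorruptedSessionError`, but the ASCII-encode then base64-decode round trip is the same:

```python
import base64
import binascii


def decode_io_log_record(record_repr):
    """Decode one [delay, stream_name, base64-data] triple, mirroring
    _build_IOLogRecord() above (a plain tuple stands in for IOLogRecord)."""
    delay, stream_name, data = record_repr
    if delay < 0:
        raise ValueError("delay cannot be negative")
    if stream_name not in ('stdout', 'stderr'):
        raise ValueError("unknown stream name")
    # The data is a base64 string; undo the ASCII + base64 encoding.
    try:
        raw = base64.standard_b64decode(data.encode("ASCII"))
    except (UnicodeEncodeError, binascii.Error) as exc:
        raise ValueError("corrupted record data") from exc
    return (delay, stream_name, raw)


encoded = base64.standard_b64encode(b'hello\n').decode("ASCII")
assert decode_io_log_record(
    [0.5, 'stdout', encoded]) == (0.5, 'stdout', b'hello\n')
```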
class SessionResumeHelper2(MetaDataHelper2MixIn, SessionResumeHelper1):
"""
Helper class for implementing session resume feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper2` which has
been pre-processed by :class:`SessionResumeHelper` (to strip the initial
envelope).
Due to the constraints of what can be represented in a suspended session,
this class cannot work in isolation. It must operate with a list of known
jobs.
Since (most of the) jobs are being provided externally (as they represent
the non-serialized parts of checkbox or other job providers) several
failure modes are possible. Those are documented in :meth:`resume()`.
"""
class SessionResumeHelper3(MetaDataHelper3MixIn, SessionResumeHelper2):
"""
Helper class for implementing session resume feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper3` which has
been pre-processed by :class:`SessionResumeHelper` (to strip the initial
envelope).
Due to the constraints of what can be represented in a suspended session,
this class cannot work in isolation. It must operate with a list of known
jobs.
Since (most of the) jobs are being provided externally (as they represent
the non-serialized parts of checkbox or other job providers) several
failure modes are possible. Those are documented in :meth:`resume()`.
"""
class SessionResumeHelper4(SessionResumeHelper3):
"""
Helper class for implementing session resume feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper4` which has
been pre-processed by :class:`SessionResumeHelper` (to strip the initial
envelope).
Due to the constraints of what can be represented in a suspended session,
this class cannot work in isolation. It must operate with a list of known
jobs.
Since (most of the) jobs are being provided externally (as they represent
the non-serialized parts of checkbox or other job providers) several
failure modes are possible. Those are documented in :meth:`resume()`.
"""
class SessionResumeHelper5(SessionResumeHelper4):
"""
Helper class for implementing session resume feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper5` which has
been pre-processed by :class:`SessionResumeHelper` (to strip the initial
envelope).
Due to the constraints of what can be represented in a suspended session,
this class cannot work in isolation. It must operate with a list of known
jobs.
Since (most of the) jobs are being provided externally (as they represent
the non-serialized parts of checkbox or other job providers) several
failure modes are possible. Those are documented in :meth:`resume()`.
"""
@classmethod
def _load_io_log_filename(cls, result_repr, flags, location):
io_log_filename = super()._load_io_log_filename(
result_repr, flags, location)
if os.path.isabs(io_log_filename):
return io_log_filename
if location is None:
raise ValueError("Location must be a directory name")
return os.path.join(location, io_log_filename)
class SessionResumeHelper6(SessionResumeHelper5):
"""
Helper class for implementing session resume feature.
This class works with data constructed by
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper6` which has
been pre-processed by :class:`SessionResumeHelper` (to strip the initial
envelope).
Due to the constraints of what can be represented in a suspended session,
this class cannot work in isolation. It must operate with a list of known
jobs.
Since (most of the) jobs are being provided externally (as they represent
the non-serialized parts of checkbox or other job providers) several
failure modes are possible. Those are documented in :meth:`resume()`.
"""
def _build_SessionState(self, session_repr, early_cb=None):
"""
Reconstruct the session state object.
This method creates a fresh SessionState instance and restores
jobs, results, meta-data and desired job list using helper methods.
"""
# Construct a fresh session object.
session = SessionState(self.job_list)
logger.debug(_("Constructed new session for resume %r"), session)
# Give early_cb a chance to see the session before we start resuming.
# This way applications can see, among other things, generated jobs
# as they are added to the session, by registering appropriate signal
# handlers on the freshly-constructed session instance.
if early_cb is not None:
logger.debug(_("Invoking early callback %r"), early_cb)
new_session = early_cb(session)
if new_session is not None:
logger.debug(
_("Using different session for resume: %r"), new_session)
session = new_session
# Restore bits and pieces of state
logger.debug(
_("Starting to restore jobs and results to %r..."), session)
self._restore_SessionState_jobs_and_results(session, session_repr)
logger.debug(_("Starting to restore metadata..."))
self._restore_SessionState_metadata(session.metadata, session_repr)
logger.debug(_("restored metadata %r"), session.metadata)
logger.debug(_("Starting to restore mandatory job list..."))
self._restore_SessionState_mandatory_job_list(session, session_repr)
logger.debug(_("Starting to restore desired job list..."))
self._restore_SessionState_desired_job_list(session, session_repr)
logger.debug(_("Starting to restore job list..."))
self._restore_SessionState_job_list(session, session_repr)
# Return whatever we've got
logger.debug(_("Resume complete!"))
return session
def _validate(obj, **flags):
"""Multi-purpose extraction and validation function."""
# Fetch data from the container OR use json_repr directly
if 'key' in flags:
key = flags['key']
obj_name = _("key {!r}").format(key)
try:
value = obj[key]
except (TypeError, IndexError, KeyError):
error_msg = flags.get(
"missing_key_msg",
_("Missing value for key {!r}").format(key))
raise CorruptedSessionError(error_msg)
else:
value = obj
obj_name = _("object")
# Check if value can be None (defaulting to "no")
value_none = flags.get('value_none', False)
if value is None and value_none is False:
error_msg = flags.get(
"value_none_msg",
_("Value of {} cannot be None").format(obj_name))
raise CorruptedSessionError(error_msg)
# Check if value is of correct type
if value is not None and "value_type" in flags:
value_type = flags['value_type']
if not isinstance(value, value_type):
error_msg = flags.get(
"value_type_msg",
_("Value of {} is of incorrect type {}").format(
obj_name, type(value).__name__))
raise CorruptedSessionError(error_msg)
# Check if value is in the set of correct values
if "value_choice" in flags:
value_choice = flags['value_choice']
if value not in value_choice:
error_msg = flags.get(
"value_choice_msg",
_("Value for {} not in allowed set {!r}").format(
obj_name, value_choice))
raise CorruptedSessionError(error_msg)
return value
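The `_validate` helper above composes key lookup, None handling, type checks, and choice checks. The following standalone sketch (a simplified re-implementation for illustration, with a stand-in `CorruptedSessionError`; not the plainbox code itself) shows how those checks combine when pulling values out of a session representation:

```python
# Standalone sketch of the _validate() extraction/validation pattern above.
class CorruptedSessionError(Exception):
    """Stand-in for the real plainbox exception."""


def validate(obj, **flags):
    """Extract obj[flags['key']] (or obj itself) and validate it."""
    if 'key' in flags:
        try:
            value = obj[flags['key']]
        except (TypeError, IndexError, KeyError):
            raise CorruptedSessionError(
                "Missing value for key {!r}".format(flags['key']))
    else:
        value = obj
    if value is None and not flags.get('value_none', False):
        raise CorruptedSessionError("Value cannot be None")
    if value is not None and 'value_type' in flags:
        if not isinstance(value, flags['value_type']):
            raise CorruptedSessionError("Value has incorrect type")
    if 'value_choice' in flags and value not in flags['value_choice']:
        raise CorruptedSessionError("Value not in allowed set")
    return value


session_repr = {'version': 1, 'session': {'jobs': {}}}
assert validate(session_repr, key='version', value_choice=[1]) == 1
assert validate(session_repr, key='session', value_type=dict) == {'jobs': {}}
```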
# plainbox-0.25/plainbox/impl/session/test_assistant.py
# This file is part of Checkbox.
#
# Copyright 2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""Tests for the session assistant module class."""
import tempfile
from plainbox.impl.providers.special import get_stubbox
from plainbox.impl.secure.providers.v1 import Provider1
from plainbox.impl.session.assistant import SessionAssistant
from plainbox.impl.session.assistant import UsageExpectation
from plainbox.vendor import mock
from plainbox.vendor import morris
@mock.patch('plainbox.impl.session.assistant.get_providers')
class SessionAssistantTests(morris.SignalTestCase):
"""Tests for the SessionAssistant class."""
APP_ID = 'app-id'
APP_VERSION = '1.0'
API_VERSION = '0.99'
API_FLAGS = []
def setUp(self):
"""Common set-up code."""
self.sa = SessionAssistant(
self.APP_ID, self.APP_VERSION, self.API_VERSION, self.API_FLAGS)
# NOTE: setup a custom repository so that all tests are done in
# isolation from the user account. While we're doing that, let's check
# that this function is allowed just after setting up the session.
# We cannot really do that in tests later.
self.repo_dir = tempfile.TemporaryDirectory()
self.assertIn(
self.sa.use_alternate_repository,
UsageExpectation.of(self.sa).allowed_calls)
self.sa.use_alternate_repository(self.repo_dir.name)
self.assertNotIn(
self.sa.use_alternate_repository,
UsageExpectation.of(self.sa).allowed_calls)
# Monitor the provider_selected signal since some tests check it
self.watchSignal(self.sa.provider_selected)
# Create a few mocked providers that tests can use.
# The all-important plainbox provider
self.p1 = mock.Mock(spec_set=Provider1, name='p1')
self.p1.namespace = '2013.com.canonical.plainbox'
self.p1.name = '2013.com.canonical.plainbox:special'
# An example 3rd party provider
self.p2 = mock.Mock(spec_set=Provider1, name='p2')
self.p2.namespace = '2015.pl.zygoon'
self.p2.name = '2015.pl.zygoon:example'
# A Canonical certification provider
self.p3 = mock.Mock(spec_set=Provider1, name='p3')
self.p3.namespace = '2013.com.canonical.certification'
self.p3.name = '2013.com.canonical.certification:stuff'
# The stubbox provider, non-mocked, with lots of useful jobs
self.stubbox = get_stubbox()
def tearDown(self):
"""Common tear-down code."""
self.repo_dir.cleanup()
def _get_mock_providers(self):
"""Get some mocked providers for testing."""
return [self.p1, self.p2, self.p3]
def _get_test_providers(self):
"""Get the stubbox provider; it's fully functional."""
return [self.stubbox]
def test_select_providers__loads_plainbox(self, mock_get_providers):
"""Check that select_providers() loads special plainbox providers."""
mock_get_providers.return_value = self._get_mock_providers()
selected_providers = self.sa.select_providers()
# We're expecting to see just [p1]
self.assertEqual(selected_providers, [self.p1])
# p1 is always auto-loaded
self.assertSignalFired(self.sa.provider_selected, self.p1, auto=True)
# p2 is not loaded
self.assertSignalNotFired(
self.sa.provider_selected, self.p2, auto=True)
self.assertSignalNotFired(
self.sa.provider_selected, self.p2, auto=False)
# p3 is not loaded
self.assertSignalNotFired(
self.sa.provider_selected, self.p3, auto=True)
self.assertSignalNotFired(
self.sa.provider_selected, self.p3, auto=False)
def test_select_providers__loads_by_id(self, mock_get_providers):
"""Check that select_providers() loads providers with given name."""
mock_get_providers.return_value = self._get_mock_providers()
selected_providers = self.sa.select_providers(self.p2.name)
# We're expecting to see both providers [p1, p2]
self.assertEqual(selected_providers, [self.p1, self.p2])
# p1 is always auto-loaded
self.assertSignalFired(
self.sa.provider_selected, self.p1, auto=True)
# p2 is loaded on demand
self.assertSignalFired(
self.sa.provider_selected, self.p2, auto=False)
# p3 is not loaded
self.assertSignalNotFired(
self.sa.provider_selected, self.p3, auto=False)
self.assertSignalNotFired(
self.sa.provider_selected, self.p3, auto=True)
def test_select_providers__loads_by_pattern(self, mock_get_providers):
"""Check that select_providers() loads providers matching a pattern."""
mock_get_providers.return_value = self._get_mock_providers()
selected_providers = self.sa.select_providers("*canonical*")
# We're expecting to see both canonical providers [p1, p3]
self.assertEqual(selected_providers, [self.p1, self.p3])
# p1 is always auto-loaded
self.assertSignalFired(
self.sa.provider_selected, self.p1, auto=True)
# p2 is not loaded
self.assertSignalNotFired(
self.sa.provider_selected, self.p2, auto=False)
self.assertSignalNotFired(
self.sa.provider_selected, self.p2, auto=True)
# p3 is loaded on demand
self.assertSignalFired(
self.sa.provider_selected, self.p3, auto=False)
def test_select_providers__reports_bogus_names(self, mock_get_providers):
"""Check that select_providers() reports wrong names and patterns."""
mock_get_providers.return_value = self._get_mock_providers()
with self.assertRaises(ValueError) as boom:
self.sa.select_providers("*bimbo*")
self.assertEqual(str(boom.exception), "nothing selected with: *bimbo*")
def test_expected_call_sequence(self, mock_get_providers):
"""Track the sequence of allowed method calls."""
mock_get_providers.return_value = self._get_test_providers()
# SessionAssistant.select_providers() must be allowed
self.assertIn(self.sa.select_providers,
UsageExpectation.of(self.sa).allowed_calls)
# Call SessionAssistant.select_providers()
self.sa.select_providers()
# SessionAssistant.select_providers() must no longer be allowed
self.assertNotIn(self.sa.select_providers,
UsageExpectation.of(self.sa).allowed_calls)
# SessionAssistant.start_new_session() must now be allowed
self.assertIn(self.sa.start_new_session,
UsageExpectation.of(self.sa).allowed_calls)
# Call SessionAssistant.start_new_session()
self.sa.start_new_session("just for testing")
# SessionAssistant.start_new_session() must no longer be allowed
self.assertNotIn(self.sa.start_new_session,
UsageExpectation.of(self.sa).allowed_calls)
# SessionAssistant.select_test_plan() must now be allowed
self.assertIn(self.sa.select_test_plan,
UsageExpectation.of(self.sa).allowed_calls)
# plainbox-0.25/plainbox/impl/session/restart.py
# This file is part of Checkbox.
#
# Copyright 2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""Interfaces and implementation of application restart strategies."""
import abc
import errno
import os
from plainbox.impl.secure.config import PlainBoxConfigParser
class IRestartStrategy(metaclass=abc.ABCMeta):
"""Interface for managing application restarts."""
@abc.abstractmethod
def prime_application_restart(self, app_id: str, cmd: str) -> None:
"""
Configure the system to restart the testing application.
:param app_id:
Identifier of the testing application.
:param cmd:
The command to execute to resume the session.
"""
@abc.abstractmethod
def diffuse_application_restart(self, app_id: str) -> None:
"""
Configure the system not to restart the testing application.
:param app_id:
Identifier of the testing application.
"""
class XDGRestartStrategy(IRestartStrategy):
"""
Restart strategy implemented with the XDG auto-start mechanism.
See: https://developer.gnome.org/autostart-spec/
"""
def __init__(
self, *,
app_name: str=None,
app_generic_name: str=None,
app_comment: str=None,
app_icon: str=None,
app_terminal: bool=False,
app_categories: str=None,
app_startup_notify: bool=False
):
"""
Initialize the XDG resume strategy.
The optional keyword arguments customize the corresponding fields
(Name, GenericName, Comment, Icon, Terminal, Categories and
StartupNotify) of the generated desktop entry.
"""
self.config = config = PlainBoxConfigParser()
section = 'Desktop Entry'
config.add_section(section)
config.set(section, 'Type', 'Application')
config.set(section, 'Version', '1.0')
config.set(section, 'Name',
app_name or 'Resume Testing Session')
config.set(section, 'GenericName',
app_generic_name or 'Resume Testing Session')
config.set(section, 'Comment',
app_comment or 'Automatically resume the testing session')
config.set(section, 'Terminal', 'true' if app_terminal else 'false')
if app_icon:
config.set(section, 'Icon', app_icon)
config.set(section, 'Categories', app_categories or 'System')
config.set(section, 'StartupNotify',
'true' if app_startup_notify else 'false')
def get_desktop_filename(self, app_id: str) -> str:
# TODO: use correct xdg lookup mechanism
return os.path.expandvars(
"$HOME/.config/autostart/{}.desktop".format(app_id))
def prime_application_restart(self, app_id: str, cmd: str) -> None:
filename = self.get_desktop_filename(app_id)
self.config.set('Desktop Entry', 'Exec', cmd)
os.makedirs(os.path.dirname(filename), exist_ok=True)
with open(filename, 'wt') as stream:
self.config.write(stream, space_around_delimiters=False)
def diffuse_application_restart(self, app_id: str) -> None:
filename = self.get_desktop_filename(app_id)
try:
os.remove(filename)
except OSError as exc:
if exc.errno == errno.ENOENT:
pass
else:
raise
def detect_restart_strategy() -> IRestartStrategy:
"""
Detect the restart strategy for the current environment.
:returns:
A restart strategy object.
:raises LookupError:
When no such object can be found.
"""
desktop = os.getenv("XDG_CURRENT_DESKTOP")
# TODO: add support for other desktops after testing them
supported_desktops = {'Unity'}
if desktop in supported_desktops:
# NOTE: Assume this is a terminal application
return XDGRestartStrategy(app_terminal=True)
else:
        raise LookupError("Unable to find appropriate strategy.")
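# Illustrative sketch (not part of the original module): writing a minimal
# XDG autostart entry with the standard-library configparser, mirroring what
# XDGRestartStrategy does above.  The application id and command here are
# made-up example values.
def _demo_write_autostart_entry(directory):
    import configparser
    import os.path
    config = configparser.ConfigParser()
    config.optionxform = str  # .desktop keys are case-sensitive
    section = 'Desktop Entry'
    config.add_section(section)
    config.set(section, 'Type', 'Application')
    config.set(section, 'Name', 'Resume Testing Session')
    config.set(section, 'Exec', 'plainbox session resume')
    filename = os.path.join(directory, 'com.example.app.desktop')
    with open(filename, 'wt') as stream:
        config.write(stream, space_around_delimiters=False)
    return filename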
# This file is part of Checkbox.
#
# Copyright 2012, 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.session.test_suspend`
=========================================
Test definitions for :mod:`plainbox.impl.session.suspend` module
"""
from functools import partial
from unittest import TestCase
import gzip
from plainbox.abc import IJobResult
from plainbox.impl.job import JobDefinition
from plainbox.impl.result import DiskJobResult
from plainbox.impl.result import IOLogRecord
from plainbox.impl.result import MemoryJobResult
from plainbox.impl.session.state import SessionMetaData
from plainbox.impl.session.state import SessionState
from plainbox.impl.session.suspend import SessionSuspendHelper1
from plainbox.impl.session.suspend import SessionSuspendHelper2
from plainbox.impl.session.suspend import SessionSuspendHelper3
from plainbox.impl.session.suspend import SessionSuspendHelper4
from plainbox.impl.session.suspend import SessionSuspendHelper5
from plainbox.impl.session.suspend import SessionSuspendHelper6
from plainbox.impl.testing_utils import make_job
from plainbox.vendor import mock
class BaseJobResultTestsTestsMixIn:
"""
Mix-in that tests a number of shared aspects of DiskJobResult
    and MemoryJobResult. To use it, sub-class this mix-in along with TestCase
and set ``repr_method`` and ``TESTED_CLS`` to something sensible.
    :cvar:`repr_method` should be one of
:meth:`plainbox.impl.session.suspend.SessionSuspendHelper.
_repr_DiskJobResult()`, :meth:`plainbox.impl.session.suspend.
SessionSuspendHelper._repr_MemoryJobResult()`.
:cvar:`TESTED_CLS` should be one of
:class:`plainbox.impl.result.MemoryJobResult`
or :class:`plainbox.impl.result.DiskJobResult`
"""
def setUp(self):
self.helper = self.HELPER_CLS()
self.empty_result = self.TESTED_CLS({})
self.typical_result = self.TESTED_CLS({
"outcome": self.TESTED_CLS.OUTCOME_PASS,
"execution_duration": 42.5,
"comments": "the screen was corrupted",
"return_code": 1,
# NOTE: those are actually specific to TESTED_CLS but it is
# a simple hack that gets the job done
"io_log_filename": "/path/to/log.txt",
"io_log": [
(0, 'stdout', b'first part\n'),
(0.1, 'stdout', b'second part\n'),
]
})
self.session_dir = None
def test_repr_xxxJobResult_outcome(self):
"""
verify that DiskJobResult.outcome is serialized correctly
"""
data = self.repr_method(self.typical_result, self.session_dir)
self.assertEqual(data['outcome'], DiskJobResult.OUTCOME_PASS)
def test_repr_xxxJobResult_execution_duration(self):
"""
verify that DiskJobResult.execution_duration is serialized correctly
"""
data = self.repr_method(self.typical_result, self.session_dir)
self.assertAlmostEqual(data['execution_duration'], 42.5)
def test_repr_xxxJobResult_comments(self):
"""
verify that DiskJobResult.comments is serialized correctly
"""
data = self.repr_method(self.typical_result, self.session_dir)
self.assertEqual(data['comments'], "the screen was corrupted")
def test_repr_xxxJobResult_return_code(self):
"""
verify that DiskJobResult.return_code is serialized correctly
"""
data = self.repr_method(self.typical_result, self.session_dir)
self.assertEqual(data['return_code'], 1)
class SuspendMemoryJobResultTests(BaseJobResultTestsTestsMixIn, TestCase):
"""
Tests that check how MemoryJobResult is represented by SessionSuspendHelper
"""
TESTED_CLS = MemoryJobResult
HELPER_CLS = SessionSuspendHelper1
def setUp(self):
super(SuspendMemoryJobResultTests, self).setUp()
self.repr_method = self.helper._repr_MemoryJobResult
def test_repr_MemoryJobResult_empty(self):
"""
verify that the representation of an empty MemoryJobResult is okay
"""
data = self.repr_method(self.empty_result, self.session_dir)
self.assertEqual(data, {
"outcome": None,
"execution_duration": None,
"comments": None,
"return_code": None,
"io_log": [],
})
def test_repr_MemoryJobResult_io_log(self):
"""
verify that MemoryJobResult.io_log is serialized correctly
"""
data = self.helper._repr_MemoryJobResult(
self.typical_result, self.session_dir)
self.assertEqual(data['io_log'], [
[0, 'stdout', 'Zmlyc3QgcGFydAo='],
[0.1, 'stdout', 'c2Vjb25kIHBhcnQK'],
])
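# Illustrative helper (not part of the original suite): io_log payloads are
# serialized as base64 text, which is where expected values such as
# 'Zmlyc3QgcGFydAo=' in the assertions above come from.
def _demo_encode_io_log_payload(payload):
    import base64
    return base64.standard_b64encode(payload).decode('ASCII')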
class SuspendDiskJobResultTests(BaseJobResultTestsTestsMixIn, TestCase):
"""
Tests that check how DiskJobResult is represented by SessionSuspendHelper
"""
TESTED_CLS = DiskJobResult
HELPER_CLS = SessionSuspendHelper1
def setUp(self):
super(SuspendDiskJobResultTests, self).setUp()
self.repr_method = self.helper._repr_DiskJobResult
def test_repr_DiskJobResult_empty(self):
"""
verify that the representation of an empty DiskJobResult is okay
"""
data = self.repr_method(self.empty_result, self.session_dir)
self.assertEqual(data, {
"outcome": None,
"execution_duration": None,
"comments": None,
"return_code": None,
"io_log_filename": None,
})
def test_repr_DiskJobResult_io_log_filename(self):
"""
verify that DiskJobResult.io_log_filename is serialized correctly
"""
data = self.helper._repr_DiskJobResult(
self.typical_result, self.session_dir)
self.assertEqual(data['io_log_filename'], "/path/to/log.txt")
class Suspend5DiskJobResultTests(SuspendDiskJobResultTests):
"""
Tests that check how DiskJobResult is represented by SessionSuspendHelper5
"""
TESTED_CLS = DiskJobResult
HELPER_CLS = SessionSuspendHelper5
def test_repr_DiskJobResult_io_log_filename__no_session_dir(self):
""" io_log_filename is absolute in session_dir is not used. """
data = self.helper._repr_DiskJobResult(
self.typical_result, None)
self.assertEqual(data['io_log_filename'], "/path/to/log.txt")
def test_repr_DiskJobResult_io_log_filename__session_dir(self):
""" io_log_filename is relative if session_dir is used. """
data = self.helper._repr_DiskJobResult(
self.typical_result, "/path/to")
self.assertEqual(data['io_log_filename'], "log.txt")
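# Illustrative helper (not part of the original suite): with a session
# directory, the helper stores io_log_filename relative to it, which for
# files under that directory behaves like os.path.relpath; without one the
# absolute path is kept, as the two tests above demonstrate.
def _demo_repr_io_log_filename(io_log_filename, session_dir):
    import os.path
    if session_dir is None:
        return io_log_filename
    return os.path.relpath(io_log_filename, session_dir)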
class SessionSuspendHelper1Tests(TestCase):
"""
Tests for various methods of SessionSuspendHelper
"""
def setUp(self):
self.helper = SessionSuspendHelper1()
self.session_dir = None
def test_repr_IOLogRecord(self):
"""
verify that the representation of IOLogRecord is okay
"""
record = IOLogRecord(0.0, "stdout", b"binary data")
data = self.helper._repr_IOLogRecord(record)
self.assertEqual(data, [0.0, "stdout", "YmluYXJ5IGRhdGE="])
def test_repr_JobResult_with_MemoryJobResult(self):
"""
verify that _repr_JobResult() called with MemoryJobResult
calls _repr_MemoryJobResult
"""
mpo = mock.patch.object
with mpo(self.helper, '_repr_MemoryJobResult'):
result = MemoryJobResult({})
self.helper._repr_JobResult(result, self.session_dir)
self.helper._repr_MemoryJobResult.assert_called_once_with(
result, None)
def test_repr_JobResult_with_DiskJobResult(self):
"""
verify that _repr_JobResult() called with DiskJobResult
calls _repr_DiskJobResult
"""
mpo = mock.patch.object
with mpo(self.helper, '_repr_DiskJobResult'):
result = DiskJobResult({})
self.helper._repr_JobResult(result, self.session_dir)
self.helper._repr_DiskJobResult.assert_called_once_with(
result, None)
def test_repr_JobResult_with_junk(self):
"""
verify that _repr_JobResult() raises TypeError when
called with something other than JobResult instances
"""
with self.assertRaises(TypeError):
            self.helper._repr_JobResult(None, self.session_dir)
def test_repr_SessionMetaData_empty_metadata(self):
"""
verify that representation of empty SessionMetaData is okay
"""
# all defaults with empty values
data = self.helper._repr_SessionMetaData(
SessionMetaData(), self.session_dir)
self.assertEqual(data, {
'title': None,
'flags': [],
'running_job_name': None
})
def test_repr_SessionMetaData_typical_metadata(self):
"""
verify that representation of typical SessionMetaData is okay
"""
# no surprises here, just the same data copied over
data = self.helper._repr_SessionMetaData(SessionMetaData(
title='USB Testing session',
flags=['incomplete'],
running_job_name='usb/detect'
), self.session_dir)
self.assertEqual(data, {
'title': 'USB Testing session',
'flags': ['incomplete'],
'running_job_name': 'usb/detect',
})
def test_repr_SessionState_empty_session(self):
"""
verify that representation of empty SessionState is okay
"""
data = self.helper._repr_SessionState(
SessionState([]), self.session_dir)
self.assertEqual(data, {
'jobs': {},
'results': {},
'desired_job_list': [],
'mandatory_job_list': [],
'metadata': {
'title': None,
'flags': [],
'running_job_name': None,
},
})
def test_json_repr_has_version_field(self):
"""
verify that the json representation has the 'version' field
"""
data = self.helper._json_repr(SessionState([]), self.session_dir)
self.assertIn("version", data)
def test_json_repr_current_version(self):
"""
verify what the version field is
"""
data = self.helper._json_repr(SessionState([]), self.session_dir)
self.assertEqual(data['version'], 1)
def test_json_repr_stores_session_state(self):
"""
verify that the json representation has the 'session' field
"""
data = self.helper._json_repr(SessionState([]), self.session_dir)
self.assertIn("session", data)
def test_suspend(self):
"""
verify that the suspend() method returns gzipped JSON representation
"""
data = self.helper.suspend(SessionState([]), self.session_dir)
# XXX: we cannot really test what the compressed data looks like
# because apparently python3.2 gzip output is non-deterministic.
# It seems to be an instance of the gzip bug that was fixed a few
# years ago.
#
# I've filed a bug on python3.2 in Ubuntu and Python upstream project
# https://bugs.launchpad.net/ubuntu/+source/python3.2/+bug/871083
#
# In the meantime we can only test that we got bytes out
self.assertIsInstance(data, bytes)
# And that we can gzip uncompress them and get what we expected
self.assertEqual(gzip.decompress(data), (
b'{"session":{"desired_job_list":[],"jobs":{},'
b'"mandatory_job_list":[],"metadata":'
b'{"flags":[],"running_job_name":null,"title":null},"results":{}'
b'},"version":1}'))
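# Illustrative helper (not part of the original suite): because gzip output
# is not byte-for-byte stable, the suspend tests above only compare the
# decompressed payload; this shows the round-trip property they rely on.
def _demo_gzip_roundtrip(payload):
    import gzip
    return gzip.decompress(gzip.compress(payload)) == payload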
class GeneratedJobSuspendTests(TestCase):
"""
Tests that check how SessionSuspendHelper behaves when faced with
    generated jobs. This test sets up the following job hierarchy:
        __category__
            \-> generator
                \-> generated
The "__category__" job is a typical "catter" job that cats an existing
job from somewhere else in the filesystem. This type of generated job
is used often for category assignment.
The "generator" job is a typical non-catter job that actually creates
new jobs in some way. In this test it generates a job called "generated".
"""
def setUp(self):
self.session_dir = None
# Crete a "__category__" job
self.category_job = JobDefinition({
"plugin": "local",
"id": "__category__"
})
# Create a "generator" job
self.generator_job = JobDefinition({
"plugin": "local",
"id": "generator",
"command": "fake",
})
# Keep a variable for the (future) generated job
self.generated_job = None
# Create a result for the "__category__" job.
# It must define a verbatim copy of the "generator" job
self.category_result = MemoryJobResult({
"io_log": [
(0.0, "stdout", b'plugin:local\n'),
(0.1, "stdout", b'id:generator\n'),
(0.2, "stdout", b'command:fake\n'),
]
})
# Create a result for the "generator" job.
# It will define the "generated" job
self.generator_result = MemoryJobResult({
"io_log": [
(0.0, 'stdout', b'id:generated'),
(0.1, 'stdout', b'plugin:shell'),
(0.2, 'stdout', b'command:fake'),
]
})
# Create a session that knows about the two jobs that exist
# directly as files (__category__ and generator)
self.session_state = SessionState([
self.category_job, self.generator_job])
# Select both of them for execution.
self.session_state.update_desired_job_list([
self.category_job, self.generator_job])
# "execute" the "__category__" job by showing the session the result
self.session_state.update_job_result(
self.category_job, self.category_result)
# Ensure that the generator job gained the "via" attribute
# This is how we know the code above has no typos or anything.
self.assertIs(
self.session_state.job_state_map[self.generator_job.id].via_job,
self.category_job)
# "execute" the "generator" job by showing the session the result.
# Connect the 'on_job_added' signal to a helper function that
# extracts the "generated" job
def job_added(self, job):
self.generated_job = job
# Use partial to supply 'self' from the class into the function above
self.session_state.on_job_added.connect(partial(job_added, self))
# Show the result of the "generator" job to the session,
# this will define the "generated" job, fire the signal
# and call our callback
self.session_state.update_job_result(
self.generator_job, self.generator_result)
# Ensure that we got the generated_job variable assigned
        # (by the signal handler connected above)
self.assertIsNot(self.generated_job, None)
# Now the stage is set for testing. Let's create the suspend helper
# and use the data we've defined so far to create JSON-friendly
# description of the session state.
self.helper = SessionSuspendHelper1()
self.data = self.helper._repr_SessionState(
self.session_state, self.session_dir)
def test_state_tracked_for_all_jobs(self):
"""
verify that 'state' keeps track of all three jobs
"""
self.assertIn(self.category_job.id, self.data['jobs'])
self.assertIn(self.generator_job.id, self.data['jobs'])
self.assertIn(self.generated_job.id, self.data['jobs'])
def test_category_job_result_is_saved(self):
"""
verify that the 'category' job result was saved
"""
# This result is essential to re-create the association
# with the 'generator' job. In theory we could get it from
# the 'via' attribute but that is only true for category assignment
# where the child job already exists and is defined on the
# filesystem. This would not work in the case of truly generated jobs
# so for consistency it is done the same way.
self.assertEqual(
self.data['results']['__category__'], [{
'comments': None,
'execution_duration': None,
'outcome': None,
'return_code': None,
'io_log': [
[0.0, 'stdout', 'cGx1Z2luOmxvY2FsCg=='],
[0.1, 'stdout', 'aWQ6Z2VuZXJhdG9yCg=='],
[0.2, 'stdout', 'Y29tbWFuZDpmYWtlCg==']
]
}]
)
def test_generator_job_result_is_saved(self):
"""
verify that the 'generator' job result was saved
"""
self.assertEqual(
self.data['results']['generator'], [{
'comments': None,
'execution_duration': None,
'outcome': None,
'return_code': None,
'io_log': [
[0.0, 'stdout', 'aWQ6Z2VuZXJhdGVk'],
[0.1, 'stdout', 'cGx1Z2luOnNoZWxs'],
[0.2, 'stdout', 'Y29tbWFuZDpmYWtl'],
]
}]
)
def test_generated_job_result_is_saved(self):
"""
verify that the 'generated' job result was saved
"""
# This is the implicit "empty" result that all jobs have
self.assertEqual(
self.data['results']['generated'], [{
'comments': None,
'execution_duration': None,
'outcome': None,
'return_code': None,
'io_log': []
}]
)
def test_sanity_check(self):
"""
verify that the whole suspend data looks right
"""
# This test is pretty much a "eyeball" inspection test
# where we can see everything at a glance and not have to
# deduce how each part looks like from the tests above.
#
# All the data below is verbatim copy of the generated suspend data
# that was created when this test was written. The only modification
# was wrapping of the checksums in ( ) to make them wrap correctly
# so that the file can stay PEP-8 clean
self.maxDiff = None
self.assertEqual(self.data, {
'jobs': {
'__category__': (
'e2475434e4c0b2c825541430e526fe0565780dfeb67'
'050f3b7f3453aa3cc439b'),
'generator': (
'7015c949ce3ae91f37e10b304212022fdbc4b10acbc'
'cb78ac58ff10ef7a2c8c8'),
'generated': (
'47dd5e318ef99184e4dee8adf818a7f7548978a9470'
'8114c7b3dd2169b9a7a67')
},
'results': {
'__category__': [{
'comments': None,
'execution_duration': None,
'io_log': [
[0.0, 'stdout', 'cGx1Z2luOmxvY2FsCg=='],
[0.1, 'stdout', 'aWQ6Z2VuZXJhdG9yCg=='],
[0.2, 'stdout', 'Y29tbWFuZDpmYWtlCg==']],
'outcome': None,
'return_code': None,
}],
'generator': [{
'comments': None,
'execution_duration': None,
'io_log': [
[0.0, 'stdout', 'aWQ6Z2VuZXJhdGVk'],
[0.1, 'stdout', 'cGx1Z2luOnNoZWxs'],
[0.2, 'stdout', 'Y29tbWFuZDpmYWtl']],
'outcome': None,
'return_code': None,
}],
'generated': [{
'comments': None,
'execution_duration': None,
'io_log': [],
'outcome': None,
'return_code': None,
}]
},
'desired_job_list': ['__category__', 'generator'],
'mandatory_job_list': [],
'metadata': {
'flags': [],
'running_job_name': None,
'title': None
},
})
class SessionSuspendHelper2Tests(SessionSuspendHelper1Tests):
"""
Tests for various methods of SessionSuspendHelper2
"""
def setUp(self):
self.helper = SessionSuspendHelper2()
self.session_dir = None
def test_json_repr_current_version(self):
"""
verify what the version field is
"""
data = self.helper._json_repr(SessionState([]), self.session_dir)
self.assertEqual(data['version'], 2)
def test_repr_SessionMetaData_empty_metadata(self):
"""
verify that representation of empty SessionMetaData is okay
"""
# all defaults with empty values
data = self.helper._repr_SessionMetaData(
SessionMetaData(), self.session_dir)
self.assertEqual(data, {
'title': None,
'flags': [],
'running_job_name': None,
'app_blob': None
})
def test_repr_SessionMetaData_typical_metadata(self):
"""
verify that representation of typical SessionMetaData is okay
"""
# no surprises here, just the same data copied over
data = self.helper._repr_SessionMetaData(SessionMetaData(
title='USB Testing session',
flags=['incomplete'],
running_job_name='usb/detect',
app_blob=b'blob',
), self.session_dir)
self.assertEqual(data, {
'title': 'USB Testing session',
'flags': ['incomplete'],
'running_job_name': 'usb/detect',
'app_blob': 'YmxvYg==',
})
def test_repr_SessionState_empty_session(self):
"""
verify that representation of empty SessionState is okay
"""
data = self.helper._repr_SessionState(
SessionState([]), self.session_dir)
self.assertEqual(data, {
'jobs': {},
'results': {},
'desired_job_list': [],
'mandatory_job_list': [],
'metadata': {
'title': None,
'flags': [],
'running_job_name': None,
'app_blob': None,
},
})
def test_suspend(self):
"""
verify that the suspend() method returns gzipped JSON representation
"""
data = self.helper.suspend(
SessionState([]), self.session_dir)
# XXX: we cannot really test what the compressed data looks like
# because apparently python3.2 gzip output is non-deterministic.
# It seems to be an instance of the gzip bug that was fixed a few
# years ago.
#
# I've filed a bug on python3.2 in Ubuntu and Python upstream project
# https://bugs.launchpad.net/ubuntu/+source/python3.2/+bug/871083
#
# In the meantime we can only test that we got bytes out
self.assertIsInstance(data, bytes)
# And that we can gzip uncompress them and get what we expected
self.assertEqual(gzip.decompress(data), (
b'{"session":{"desired_job_list":[],"jobs":{},'
b'"mandatory_job_list":[],"metadata":'
b'{"app_blob":null,"flags":[],"running_job_name":null,"title":null'
b'},"results":{}},"version":2}'))
class SessionSuspendHelper3Tests(SessionSuspendHelper2Tests):
"""
Tests for various methods of SessionSuspendHelper3
"""
def setUp(self):
self.helper = SessionSuspendHelper3()
self.session_dir = None
def test_json_repr_current_version(self):
"""
verify what the version field is
"""
data = self.helper._json_repr(SessionState([]), self.session_dir)
self.assertEqual(data['version'], 3)
def test_repr_SessionMetaData_empty_metadata(self):
"""
verify that representation of empty SessionMetaData is okay
"""
# all defaults with empty values
data = self.helper._repr_SessionMetaData(
SessionMetaData(), self.session_dir)
self.assertEqual(data, {
'title': None,
'flags': [],
'running_job_name': None,
'app_blob': None,
'app_id': None
})
def test_repr_SessionMetaData_typical_metadata(self):
"""
verify that representation of typical SessionMetaData is okay
"""
# no surprises here, just the same data copied over
data = self.helper._repr_SessionMetaData(SessionMetaData(
title='USB Testing session',
flags=['incomplete'],
running_job_name='usb/detect',
app_blob=b'blob',
app_id='com.canonical.certification.plainbox',
), self.session_dir)
self.assertEqual(data, {
'title': 'USB Testing session',
'flags': ['incomplete'],
'running_job_name': 'usb/detect',
'app_blob': 'YmxvYg==',
'app_id': 'com.canonical.certification.plainbox'
})
def test_repr_SessionState_empty_session(self):
"""
verify that representation of empty SessionState is okay
"""
data = self.helper._repr_SessionState(
SessionState([]), self.session_dir)
self.assertEqual(data, {
'jobs': {},
'results': {},
'desired_job_list': [],
'mandatory_job_list': [],
'metadata': {
'title': None,
'flags': [],
'running_job_name': None,
'app_blob': None,
'app_id': None,
},
})
def test_suspend(self):
"""
verify that the suspend() method returns gzipped JSON representation
"""
data = self.helper.suspend(SessionState([]), self.session_dir)
# XXX: we cannot really test what the compressed data looks like
# because apparently python3.2 gzip output is non-deterministic.
# It seems to be an instance of the gzip bug that was fixed a few
# years ago.
#
# I've filed a bug on python3.2 in Ubuntu and Python upstream project
# https://bugs.launchpad.net/ubuntu/+source/python3.2/+bug/871083
#
# In the meantime we can only test that we got bytes out
self.assertIsInstance(data, bytes)
# And that we can gzip uncompress them and get what we expected
self.assertEqual(gzip.decompress(data), (
b'{"session":{"desired_job_list":[],"jobs":{},'
b'"mandatory_job_list":[],"metadata":'
b'{"app_blob":null,"app_id":null,"flags":[],'
b'"running_job_name":null,"title":null},"results":{}},'
b'"version":3}'))
class SessionSuspendHelper4Tests(SessionSuspendHelper3Tests):
"""
Tests for various methods of SessionSuspendHelper4
"""
def setUp(self):
self.helper = SessionSuspendHelper4()
self.session_dir = None
def test_json_repr_current_version(self):
"""
verify what the version field is
"""
data = self.helper._json_repr(SessionState([]), self.session_dir)
self.assertEqual(data['version'], 4)
def test_repr_SessionState_typical_session(self):
"""
verify the representation of a SessionState with some unused jobs
Unused jobs should just have no representation. Their checksum
should not be mentioned. Their results (empty results) should be
ignored.
"""
used_job = JobDefinition({
"plugin": "shell",
"id": "used",
"command": "echo 'hello world'",
})
unused_job = JobDefinition({
"plugin": "shell",
"id": "unused",
"command": "echo 'hello world'",
})
used_result = MemoryJobResult({
"io_log": [
(0.0, "stdout", b'hello world\n'),
],
'outcome': IJobResult.OUTCOME_PASS
})
session_state = SessionState([used_job, unused_job])
session_state.update_desired_job_list([used_job])
session_state.update_job_result(used_job, used_result)
data = self.helper._repr_SessionState(session_state, self.session_dir)
self.assertEqual(data, {
'jobs': {
'used': ('8c393c19fdfde1b6afc5b79d0a1617ecf7531cd832a16450dc'
'2f3f50d329d373')
},
'results': {
'used': [{
'comments': None,
'execution_duration': None,
'io_log': [[0.0, 'stdout', 'aGVsbG8gd29ybGQK']],
'outcome': 'pass',
'return_code': None
}]
},
'desired_job_list': ['used'],
'mandatory_job_list': [],
'metadata': {
'title': None,
'flags': [],
'running_job_name': None,
'app_blob': None,
'app_id': None,
},
})
def test_suspend(self):
"""
verify that the suspend() method returns gzipped JSON representation
"""
data = self.helper.suspend(SessionState([]), self.session_dir)
# XXX: we cannot really test what the compressed data looks like
# because apparently python3.2 gzip output is non-deterministic.
# It seems to be an instance of the gzip bug that was fixed a few
# years ago.
#
# I've filed a bug on python3.2 in Ubuntu and Python upstream project
# https://bugs.launchpad.net/ubuntu/+source/python3.2/+bug/871083
#
# In the meantime we can only test that we got bytes out
self.assertIsInstance(data, bytes)
# And that we can gzip uncompress them and get what we expected
self.assertEqual(gzip.decompress(data), (
b'{"session":{"desired_job_list":[],"jobs":{},'
b'"mandatory_job_list":[],"metadata":'
b'{"app_blob":null,"app_id":null,"flags":[],'
b'"running_job_name":null,"title":null},"results":{}},'
b'"version":4}'))
class SessionSuspendHelper5Tests(SessionSuspendHelper4Tests):
"""
Tests for various methods of SessionSuspendHelper5
"""
def setUp(self):
self.helper = SessionSuspendHelper5()
self.session_dir = None
def test_json_repr_current_version(self):
"""
verify what the version field is
"""
data = self.helper._json_repr(SessionState([]), self.session_dir)
self.assertEqual(data['version'], 5)
def test_suspend(self):
"""
verify that the suspend() method returns gzipped JSON representation
"""
data = self.helper.suspend(SessionState([]), self.session_dir)
# XXX: we cannot really test what the compressed data looks like
# because apparently python3.2 gzip output is non-deterministic.
# It seems to be an instance of the gzip bug that was fixed a few
# years ago.
#
# I've filed a bug on python3.2 in Ubuntu and Python upstream project
# https://bugs.launchpad.net/ubuntu/+source/python3.2/+bug/871083
#
# In the meantime we can only test that we got bytes out
self.assertIsInstance(data, bytes)
# And that we can gzip uncompress them and get what we expected
self.assertEqual(gzip.decompress(data), (
b'{"session":{"desired_job_list":[],"jobs":{},'
b'"mandatory_job_list":[],"metadata":'
b'{"app_blob":null,"app_id":null,"flags":[],'
b'"running_job_name":null,"title":null},"results":{}},'
b'"version":5}'))
class SessionSuspendHelper6Tests(SessionSuspendHelper5Tests):
"""
Tests for various methods of SessionSuspendHelper6
"""
def setUp(self):
self.helper = SessionSuspendHelper6()
self.session_dir = None
def test_json_repr_current_version(self):
"""
verify what the version field is
"""
data = self.helper._json_repr(SessionState([]), self.session_dir)
self.assertEqual(data['version'], 6)
def test_suspend(self):
"""
verify that the suspend() method returns gzipped JSON representation
"""
data = self.helper.suspend(SessionState([]), self.session_dir)
# XXX: we cannot really test what the compressed data looks like
# because apparently python3.2 gzip output is non-deterministic.
# It seems to be an instance of the gzip bug that was fixed a few
# years ago.
#
# I've filed a bug on python3.2 in Ubuntu and Python upstream project
# https://bugs.launchpad.net/ubuntu/+source/python3.2/+bug/871083
#
# In the meantime we can only test that we got bytes out
self.assertIsInstance(data, bytes)
# And that we can gzip uncompress them and get what we expected
self.assertEqual(gzip.decompress(data), (
b'{"session":{"desired_job_list":[],"jobs":{},'
b'"mandatory_job_list":[],"metadata":'
b'{"app_blob":null,"app_id":null,"flags":[],'
b'"running_job_name":null,"title":null},"results":{}},'
b'"version":6}'))
def test_repr_SessionState_typical_session(self):
"""
verify the representation of a SessionState with some unused jobs
Unused jobs should just have no representation. Their checksum
should not be mentioned. Their results (empty results) should be
ignored.
"""
used_job = JobDefinition({
"plugin": "shell",
"id": "used",
"command": "echo 'hello world'",
})
unused_job = JobDefinition({
"plugin": "shell",
"id": "unused",
"command": "echo 'hello world'",
})
used_result = MemoryJobResult({
"io_log": [
(0.0, "stdout", b'hello world\n'),
],
'outcome': IJobResult.OUTCOME_PASS
})
session_state = SessionState([used_job, unused_job])
session_state.update_desired_job_list([used_job])
session_state.update_job_result(used_job, used_result)
data = self.helper._repr_SessionState(session_state, self.session_dir)
self.assertEqual(data, {
'jobs': {
'used': ('8c393c19fdfde1b6afc5b79d0a1617ecf7531cd832a16450dc'
'2f3f50d329d373')
},
'results': {
'used': [{
'comments': None,
'execution_duration': None,
'io_log': [[0.0, 'stdout', 'aGVsbG8gd29ybGQK']],
'outcome': 'pass',
'return_code': None
}]
},
'desired_job_list': ['used'],
'mandatory_job_list': [],
'metadata': {
'title': None,
'flags': [],
'running_job_name': None,
'app_blob': None,
'app_id': None,
},
})
def test_repr_SessionState_empty_session(self):
"""
verify that representation of empty SessionState is okay
"""
data = self.helper._repr_SessionState(
SessionState([]), self.session_dir)
self.assertEqual(data, {
'jobs': {},
'results': {},
'desired_job_list': [],
'mandatory_job_list': [],
'metadata': {
'title': None,
'flags': [],
'running_job_name': None,
'app_blob': None,
'app_id': None,
},
})
class RegressionTests(TestCase):
def test_1388055(self):
"""
https://bugs.launchpad.net/plainbox/+bug/1388055
"""
# This bug is about being able to resume a session despite job database
# modification. Let's assume the following session first:
# - desired job list: [a]
# - run list [a_dep, a] (computed)
# - job_repr: {a_dep: checksum}
job_a = make_job(id='a', depends='a_dep')
job_a_dep = make_job(id='a_dep')
state = SessionState([job_a, job_a_dep])
state.update_desired_job_list([job_a])
self.assertEqual(state.run_list, [job_a_dep, job_a])
self.assertEqual(state.desired_job_list, [job_a])
helper = SessionSuspendHelper4()
session_dir = None
# Mock away the meta-data as we're not testing that
with mock.patch.object(helper, '_repr_SessionMetaData') as m:
m.return_value = 'mocked'
actual = helper._repr_SessionState(state, session_dir)
expected = {
'jobs': {
job_a_dep.id: job_a_dep.checksum,
job_a.id: job_a.checksum,
},
'desired_job_list': [job_a.id],
'mandatory_job_list': [],
'results': {},
'metadata': 'mocked'
}
self.assertEqual(expected, actual)
# This file is part of Checkbox.
#
# Copyright 2012, 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.session.storage` -- storage for sessions
============================================================
This module contains storage support code for handling sessions. Using the
:class:`SessionStorageRepository` one can enumerate sessions at a particular
location. Each location is wrapped by a :class:`SessionStorage` instance. The
latter class can be used to create (allocate) and remove all of the files
associated with a particular session.
"""
import errno
import logging
import os
import shutil
import stat
import sys
import tempfile
from plainbox.i18n import gettext as _, ngettext
logger = logging.getLogger("plainbox.session.storage")
class SessionStorageRepository:
"""
Helper class to enumerate filesystem artefacts of current or past Sessions
This class collaborates with :class:`SessionStorage`. The basic
use-case is to open a well-known location and enumerate all the sessions
    that are stored there. This allows one to create :class:`SessionStorage`
    instances to further manage each session (such as removing them by calling
    :meth:`SessionStorage.remove()`)
"""
_LAST_SESSION_SYMLINK = "last-session"
def __init__(self, location=None):
"""
Initialize new repository at the specified location.
The location does not have to be an existing directory. It will be
created on demand. Typically it should be instantiated with the default
location.
"""
if location is None:
location = self.get_default_location()
self._location = location
@property
def location(self):
"""
pathname of the repository
"""
return self._location
def get_last_storage(self):
"""
Find the last session storage object created in this repository.
:returns:
SessionStorage object associated with the last session created in
this repository using legacy mode.
.. note::
This will only return storage objects that were created using
legacy mode. Nonlegacy storage objects will not be returned this
way.
"""
pathname = os.path.join(self.location, self._LAST_SESSION_SYMLINK)
try:
last_storage = os.readlink(pathname)
except OSError:
# The symlink can be gone or not be a real symlink
# in that case just ignore it and return None
return None
else:
# The link may be relative so let's ensure we know the full
# pathname for the subsequent check (which may be performed
# from another directory)
last_storage = os.path.join(self._location, last_storage)
# If the link points to a directory, assume it's okay
if os.path.isdir(last_storage):
return SessionStorage(last_storage)
def get_storage_list(self):
"""
Enumerate stored sessions in the repository.
If the repository directory is not present then an empty list is
returned.
:returns:
list of :class:`SessionStorage` representing discovered sessions
sorted by their age (youngest first)
"""
logger.debug(_("Enumerating sessions in %s"), self._location)
try:
# Try to enumerate the directory
item_list = sorted(os.listdir(self._location),
key=lambda x: os.stat(os.path.join(
self._location, x)).st_mtime, reverse=True)
except OSError as exc:
# If the directory does not exist,
# silently return empty collection
if exc.errno == errno.ENOENT:
return []
# Don't silence any other errors
raise
session_list = []
# Check each item by looking for directories
for item in item_list:
pathname = os.path.join(self.location, item)
# Make sure not to follow any symlinks here
stat_result = os.lstat(pathname)
# Consider non-hidden directories that end with the word .session
if (not item.startswith(".") and item.endswith(".session")
and stat.S_ISDIR(stat_result.st_mode)):
logger.debug(_("Found possible session in %r"), pathname)
session = SessionStorage(pathname)
session_list.append(session)
# Return the full list
return session_list
def __iter__(self):
"""
Same as :meth:`get_storage_list()`
"""
return iter(self.get_storage_list())
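The filtering and mtime-based ordering performed by ``get_storage_list()`` can be sketched as a standalone helper. The function below (a hypothetical ``find_session_dirs``, not part of the API) returns plain pathnames instead of :class:`SessionStorage` objects:

```python
import errno
import os
import stat


def find_session_dirs(location):
    """Return non-hidden '*.session' directories, newest first."""
    try:
        names = sorted(
            os.listdir(location),
            key=lambda n: os.stat(os.path.join(location, n)).st_mtime,
            reverse=True)
    except OSError as exc:
        if exc.errno == errno.ENOENT:
            return []  # a missing repository is treated as empty
        raise
    result = []
    for name in names:
        path = os.path.join(location, name)
        # lstat() so that symlinks are never followed here
        if (not name.startswith(".") and name.endswith(".session")
                and stat.S_ISDIR(os.lstat(path).st_mode)):
            result.append(path)
    return result
```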
@classmethod
def get_default_location(cls):
"""
Get the default location of the session state repository
The default location is defined by ``$PLAINBOX_SESSION_REPOSITORY``
which must be a writable directory (created if needed) where plainbox
will keep its session data. The default location, if the environment
variable is not provided, is
``${XDG_CACHE_HOME:-$HOME/.cache}/plainbox/sessions``
"""
repo_dir = os.environ.get('PLAINBOX_SESSION_REPOSITORY')
if repo_dir is not None:
repo_dir = os.path.abspath(repo_dir)
else:
# Pick XDG_CACHE_HOME from environment
xdg_cache_home = os.environ.get('XDG_CACHE_HOME')
# If not set or empty use the default ~/.cache/
if not xdg_cache_home:
xdg_cache_home = os.path.join(
os.path.expanduser('~'), '.cache')
# Use a directory relative to XDG_CACHE_HOME
repo_dir = os.path.join(xdg_cache_home, 'plainbox', 'sessions')
if (repo_dir is not None and os.path.exists(repo_dir)
and not os.path.isdir(repo_dir)):
logger.warning(
                _("Session repository %s is not a directory"), repo_dir)
repo_dir = None
if (repo_dir is not None and os.path.exists(repo_dir)
and not os.access(repo_dir, os.W_OK)):
logger.warning(
_("Session repository %s is read-only"), repo_dir)
repo_dir = None
if repo_dir is None:
repo_dir = tempfile.mkdtemp()
logger.warning(
_("Using temporary directory %s as session repository"),
repo_dir)
return repo_dir
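The environment lookup order above can be condensed into a small pure function. This sketch (a hypothetical ``default_session_repo`` taking the environment as a mapping) omits the writability checks and the temporary-directory fallback:

```python
import os


def default_session_repo(environ):
    """Resolve the session repository path from an environment mapping."""
    repo_dir = environ.get('PLAINBOX_SESSION_REPOSITORY')
    if repo_dir is not None:
        return os.path.abspath(repo_dir)
    # Fall back to XDG_CACHE_HOME, then to ~/.cache
    xdg_cache_home = environ.get('XDG_CACHE_HOME')
    if not xdg_cache_home:
        xdg_cache_home = os.path.join(os.path.expanduser('~'), '.cache')
    return os.path.join(xdg_cache_home, 'plainbox', 'sessions')
```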
class LockedStorageError(IOError):
"""
Exception raised when SessionStorage.save_checkpoint() finds an existing
'next' file from a (presumably) previous call to save_checkpoint() that
got interrupted
"""
class SessionStorage:
"""
Abstraction for storage area that is used by :class:`SessionState` to
keep some persistent and volatile data.
This class implements functions performing input/output operations
on session checkpoint data. The location property can be used for keeping
any additional files or directories but keep in mind that they will
be removed by :meth:`SessionStorage.remove()`
This class indirectly collaborates with :class:`SessionSuspendHelper` and
:class:`SessionResumeHelper`.
"""
_SESSION_FILE = 'session'
_SESSION_FILE_NEXT = 'session.next'
def __init__(self, location):
"""
Initialize a :class:`SessionStorage` with the given location.
The location is not created. If you want to ensure that it exists
call :meth:`create()` instead.
"""
self._location = location
def __repr__(self):
return "<{} location:{!r}>".format(
self.__class__.__name__, self.location)
@property
def location(self):
"""
location of the session storage
"""
return self._location
@property
def id(self):
"""
identifier of the session storage (name of the random directory)
"""
return os.path.splitext(os.path.basename(self.location))[0]
@property
def session_file(self):
"""
pathname of the session state file
"""
return os.path.join(self._location, self._SESSION_FILE)
@classmethod
def create(cls, base_dir, legacy_mode=False):
"""
Create a new :class:`SessionStorage` in a random subdirectory
of the specified base directory. The base directory is also
created if necessary.
:param base_dir:
Directory in which a random session directory will be created.
Typically the base directory should be obtained from
:meth:`SessionStorageRepository.get_default_location()`
:param legacy_mode:
            If False (which is the default) then the caller is expected to
            handle multiple sessions by itself.
.. note::
Legacy mode is where applications using PlainBox API can only
handle one session. Creating another session replaces whatever was
stored before. In non-legacy mode applications can enumerate
sessions, create arbitrary number of sessions at the same time
and remove sessions once they are no longer necessary.
Legacy mode is implemented with a symbolic link called
'last-session' that keeps track of the last session created using
``legacy_mode=True``. When a new legacy-mode session is created
the target of that symlink is read and recursively removed.
"""
if not os.path.exists(base_dir):
os.makedirs(base_dir)
location = tempfile.mkdtemp(
prefix='pbox-', suffix='.session', dir=base_dir)
logger.debug(_("Created new storage in %r"), location)
self = cls(location)
if legacy_mode:
self._replace_legacy_session(base_dir)
return self
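The directory allocation done by ``create()`` boils down to ``os.makedirs()`` plus ``tempfile.mkdtemp()`` with a fixed prefix and suffix. A minimal sketch (a hypothetical ``allocate_session_dir``, with the legacy-mode handling omitted):

```python
import os
import tempfile


def allocate_session_dir(base_dir):
    """Create base_dir on demand and allocate a unique '*.session' dir."""
    if not os.path.exists(base_dir):
        os.makedirs(base_dir)
    # mkdtemp() guarantees a fresh, collision-free directory name
    return tempfile.mkdtemp(prefix='pbox-', suffix='.session', dir=base_dir)
```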
def _replace_legacy_session(self, base_dir):
"""
Remove the previous legacy session and update the 'last-session'
symlink so that it points to this session storage directory.
"""
symlink_pathname = os.path.join(
base_dir, SessionStorageRepository._LAST_SESSION_SYMLINK)
# Try to read and remove the storage referenced to by last-session
# symlink. This can fail if the link file is gone (which is harmless)
# or when it is not an actual symlink (which means that the
# repository is corrupted).
try:
symlink_target = os.readlink(symlink_pathname)
except OSError as exc:
if exc.errno == errno.ENOENT:
pass
elif exc.errno == errno.EINVAL:
logger.warning(
_("%r is not a symlink, repository %r must be corrupted"),
symlink_pathname, base_dir)
else:
logger.warning(
_("Unable to read symlink target from %r: %r"),
symlink_pathname, exc)
else:
logger.debug(
_("Removing storage associated with last session %r"),
symlink_target)
# Remove the old session, note that the symlink may be broken so
# let's ignore any errors here
shutil.rmtree(symlink_target, ignore_errors=True)
# Remove the last-session symlink itself
logger.debug(
_("Removing symlink associated with last session: %r"),
symlink_pathname)
os.unlink(symlink_pathname)
finally:
            # Finally, create the last-session symlink that points to this storage
logger.debug(
_("Linking storage %r to last session"), self.location)
try:
os.symlink(self.location, symlink_pathname)
except OSError as exc:
logger.error(
_("Cannot link %r as %r: %r"),
self.location, symlink_pathname, exc)
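The bookkeeping above can be condensed into a sketch (a hypothetical ``repoint_last_session`` with the corruption diagnostics dropped): read the old target, remove it, then re-create the symlink.

```python
import os
import shutil


def repoint_last_session(base_dir, new_target, link_name='last-session'):
    """Remove the storage referenced by the symlink and re-point it."""
    link = os.path.join(base_dir, link_name)
    try:
        old_target = os.readlink(link)
    except OSError:
        pass  # link missing or not a symlink; nothing to clean up
    else:
        # The target may be relative to the repository directory
        shutil.rmtree(os.path.join(base_dir, old_target),
                      ignore_errors=True)
        os.unlink(link)
    os.symlink(new_target, link)
```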
def remove(self):
"""
Remove all filesystem entries associated with this instance.
"""
logger.debug(_("Removing session storage from %r"), self._location)
shutil.rmtree(self._location)
def load_checkpoint(self):
"""
Load checkpoint data from the filesystem
:returns: data from the most recent checkpoint
:rtype: bytes
:raises IOError, OSError:
on various problems related to accessing the filesystem
:raises NotImplementedError:
when openat(2) is not available on this platform. Should never
happen on Linux or Windows where appropriate checks divert to a
correct implementation that is not using them.
"""
if sys.platform == 'linux' or sys.platform == 'linux2':
if sys.version_info[0:2] >= (3, 3):
return self._load_checkpoint_unix_py33()
else:
return self._load_checkpoint_unix_py32()
elif sys.platform == 'win32':
return self._load_checkpoint_win32_py33()
raise NotImplementedError(
"platform/python combination is not supported: {} + {}".format(
sys.version, sys.platform))
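All three platform variants implement the same contract. A portable sketch using plain ``open()`` (a hypothetical ``load_checkpoint_simple``, without the descriptor-level logging and ``dir_fd`` handling):

```python
import errno


def load_checkpoint_simple(session_pathname):
    """Read checkpoint bytes; a missing file reads as b''."""
    try:
        with open(session_pathname, 'rb') as stream:
            return stream.read()
    except IOError as exc:
        # Treat lack of the 'session' file as an empty checkpoint
        if exc.errno == errno.ENOENT:
            return b''
        raise
```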
def save_checkpoint(self, data):
"""
Save checkpoint data to the filesystem.
The directory associated with this :class:`SessionStorage` must already
exist. Typically the instance should be obtained by calling
:meth:`SessionStorage.create()` which will ensure that this is already
the case.
:raises TypeError:
if data is not a bytes object.
:raises LockedStorageError:
if leftovers from previous save_checkpoint() have been detected.
Normally those should never be here but in certain cases that is
possible. Callers might want to call :meth:`break_lock()`
to resolve the problem and try again.
:raises IOError, OSError:
on various problems related to accessing the filesystem.
Typically permission errors may be reported here.
:raises NotImplementedError:
when openat(2), renameat(2), unlinkat(2) are not available on this
platform. Should never happen on Linux or Windows where appropriate
checks divert to a correct implementation that is not using them.
"""
if sys.platform == 'linux' or sys.platform == 'linux2':
if sys.version_info[0:2] >= (3, 3):
return self._save_checkpoint_unix_py33(data)
else:
return self._save_checkpoint_unix_py32(data)
elif sys.platform == 'win32':
if sys.version_info[0:2] >= (3, 3):
return self._save_checkpoint_win32_py33(data)
raise NotImplementedError(
"platform/python combination is not supported: {} + {}".format(
sys.version, sys.platform))
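The platform-specific helpers below all follow the same write-to-``session.next``-then-rename pattern. A condensed sketch, with the ``fsync()`` calls and logging omitted; ``save_checkpoint_simple`` and ``StorageLocked`` are hypothetical names standing in for the real helpers and :class:`LockedStorageError`:

```python
import os


class StorageLocked(IOError):
    """Raised when a leftover 'session.next' file is detected."""


def save_checkpoint_simple(location, data,
                           name='session', next_name='session.next'):
    """Atomically replace 'session' by writing 'session.next' first."""
    if not isinstance(data, bytes):
        raise TypeError("data must be bytes")
    next_path = os.path.join(location, next_name)
    path = os.path.join(location, name)
    try:
        # O_EXCL makes a leftover 'next' file from a crashed save visible
        fd = os.open(next_path,
                     os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    except FileExistsError:
        raise StorageLocked()
    try:
        if os.write(fd, data) != len(data):
            raise IOError("partial write?")
    finally:
        os.close(fd)
    os.replace(next_path, path)  # atomic on both POSIX and Windows
```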
def break_lock(self):
"""
Forcibly unlock the storage by removing a file created during
atomic filesystem operations of save_checkpoint().
This method might be useful if save_checkpoint()
raises LockedStorageError. It removes the "next" file that is used
for atomic rename.
"""
_next_session_pathname = os.path.join(
self._location, self._SESSION_FILE_NEXT)
logger.debug(
# TRANSLATORS: unlinking as in deleting a file
# Please keep the 'next' string untranslated
_("Forcibly unlinking 'next' file %r"), _next_session_pathname)
os.unlink(_next_session_pathname)
def _load_checkpoint_win32_py33(self):
logger.debug(_("Loading checkpoint (%s)"), "Windows")
_session_pathname = os.path.join(self._location, self._SESSION_FILE)
try:
# Open the current session file in the location directory
session_fd = os.open(_session_pathname, os.O_RDONLY | os.O_BINARY)
logger.debug(
_("Opened session state file %r as descriptor %d"),
_session_pathname, session_fd)
# Stat the file to know how much to read
session_stat = os.fstat(session_fd)
logger.debug(
# TRANSLATORS: stat is a system call name, don't translate it
_("Stat'ed session state file: %s"), session_stat)
try:
# Read session data
logger.debug(ngettext(
"Reading %d byte of session state",
"Reading %d bytes of session state",
session_stat.st_size), session_stat.st_size)
data = os.read(session_fd, session_stat.st_size)
logger.debug(ngettext(
"Read %d byte of session state",
"Read %d bytes of session state", len(data)), len(data))
if len(data) != session_stat.st_size:
raise IOError(_("partial read?"))
finally:
# Close the session file
logger.debug(_("Closed descriptor %d"), session_fd)
os.close(session_fd)
except IOError as exc:
if exc.errno == errno.ENOENT:
# Treat lack of 'session' file as an empty file
return b''
raise
else:
return data
def _load_checkpoint_unix_py32(self):
_session_pathname = os.path.join(self._location, self._SESSION_FILE)
# Open the location directory
location_fd = os.open(self._location, os.O_DIRECTORY)
logger.debug(
_("Opened session directory %r as descriptor %d"),
self._location, location_fd)
try:
# Open the current session file in the location directory
session_fd = os.open(_session_pathname, os.O_RDONLY)
logger.debug(
_("Opened session state file %r as descriptor %d"),
_session_pathname, session_fd)
# Stat the file to know how much to read
session_stat = os.fstat(session_fd)
logger.debug(
# TRANSLATORS: stat is a system call name, don't translate it
_("Stat'ed session state file: %s"), session_stat)
try:
# Read session data
logger.debug(ngettext(
"Reading %d byte of session state",
"Reading %d bytes of session state",
session_stat.st_size), session_stat.st_size)
data = os.read(session_fd, session_stat.st_size)
logger.debug(ngettext(
"Read %d byte of session state",
"Read %d bytes of session state", len(data)), len(data))
if len(data) != session_stat.st_size:
raise IOError(_("partial read?"))
finally:
# Close the session file
logger.debug(_("Closed descriptor %d"), session_fd)
os.close(session_fd)
except IOError as exc:
if exc.errno == errno.ENOENT:
# Treat lack of 'session' file as an empty file
return b''
raise
else:
return data
finally:
# Close the location directory
logger.debug(_("Closed descriptor %d"), location_fd)
os.close(location_fd)
def _load_checkpoint_unix_py33(self):
# Open the location directory
location_fd = os.open(self._location, os.O_DIRECTORY)
try:
# Open the current session file in the location directory
session_fd = os.open(
self._SESSION_FILE, os.O_RDONLY, dir_fd=location_fd)
# Stat the file to know how much to read
session_stat = os.fstat(session_fd)
try:
# Read session data
data = os.read(session_fd, session_stat.st_size)
if len(data) != session_stat.st_size:
raise IOError(_("partial read?"))
finally:
# Close the session file
os.close(session_fd)
except IOError as exc:
if exc.errno == errno.ENOENT:
# Treat lack of 'session' file as an empty file
return b''
raise
else:
return data
finally:
# Close the location directory
os.close(location_fd)
def _save_checkpoint_win32_py33(self, data):
        # NOTE: this is like _save_checkpoint_unix_py32 but without
        # location_fd which cannot be opened on windows (no os.O_DIRECTORY)
#
# NOTE: The windows version is relatively new and under-tested
# but then again we don't expect to run tests *on* windows, only
# *from* windows so hard data retention requirements are of lesser
# importance.
if not isinstance(data, bytes):
raise TypeError("data must be bytes")
logger.debug(ngettext(
"Saving %d byte of data (%s)",
"Saving %d bytes of data (%s)",
len(data)), len(data), "Windows")
# Helper pathnames, needed because we don't have *at functions
_next_session_pathname = os.path.join(
self._location, self._SESSION_FILE_NEXT)
_session_pathname = os.path.join(self._location, self._SESSION_FILE)
# Open the "next" file in the location_directory
#
# Use "write" + "create" + "exclusive" flags so that no race
# condition is possible.
#
# This will never return -1, it throws IOError when anything is
# wrong. The caller has to catch this.
#
        # As a special exception, this code handles EEXIST and converts
# that to LockedStorageError that can be especially handled by
# some layer above.
try:
next_session_fd = os.open(
_next_session_pathname,
os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_BINARY, 0o644)
except IOError as exc:
            if exc.errno == errno.EEXIST:
raise LockedStorageError()
else:
raise
logger.debug(
_("Opened next session file %s as descriptor %d"),
_next_session_pathname, next_session_fd)
try:
# Write session data to disk
#
# I cannot find conclusive evidence but it seems that
# os.write() handles partial writes internally. In case we do
# get a partial write _or_ we run out of disk space, raise an
# explicit IOError.
num_written = os.write(next_session_fd, data)
logger.debug(ngettext(
"Wrote %d byte of data to descriptor %d",
"Wrote %d bytes of data to descriptor %d",
num_written), num_written, next_session_fd)
if num_written != len(data):
raise IOError(_("partial write?"))
except Exception as exc:
logger.warning(_("Unable to complete write: %s"), exc)
# If anything goes wrong we should unlink the next file.
# TRANSLATORS: unlinking as in deleting a file
logger.warning(_("Unlinking %r: %r"), _next_session_pathname, exc)
os.unlink(_next_session_pathname)
else:
# If the write was successful we must flush kernel buffers.
#
# We want to be sure this data is really on disk by now as we
# may crash the machine soon after this method exits.
logger.debug(
# TRANSLATORS: please don't translate fsync()
_("Calling fsync() on descriptor %d"), next_session_fd)
try:
os.fsync(next_session_fd)
except OSError as exc:
logger.warning(_("Cannot synchronize file %r: %s"),
_next_session_pathname, exc)
finally:
# Close the new session file
logger.debug(_("Closing descriptor %d"), next_session_fd)
os.close(next_session_fd)
# Rename FILE_NEXT over FILE.
logger.debug(_("Renaming %r to %r"),
_next_session_pathname, _session_pathname)
try:
os.replace(_next_session_pathname, _session_pathname)
except Exception as exc:
# Same as above, if we fail we need to unlink the next file
# otherwise any other attempts will not be able to open() it
# with O_EXCL flag.
logger.warning(
_("Unable to rename/overwrite %r to %r: %r"),
_next_session_pathname, _session_pathname, exc)
# TRANSLATORS: unlinking as in deleting a file
logger.warning(_("Unlinking %r"), _next_session_pathname)
os.unlink(_next_session_pathname)
def _save_checkpoint_unix_py32(self, data):
# NOTE: this is like _save_checkpoint_py33 but without all the
# *at() functions (openat, renameat)
#
# Since we cannot use those functions there is an implicit race
# condition on all open() calls with another process that renames
# any of the directories that are part of the opened path.
#
# I don't think we can really do anything about this in userspace
# so this, python 3.2 specific version, just does the best effort
# implementation. Some of the comments were redacted but
        # keep in mind that the rename race is always there.
if not isinstance(data, bytes):
raise TypeError("data must be bytes")
logger.debug(ngettext(
"Saving %d byte of data (%s)",
"Saving %d bytes of data (%s)",
len(data)), len(data), "UNIX, python 3.2 or older")
# Helper pathnames, needed because we don't have *at functions
_next_session_pathname = os.path.join(
self._location, self._SESSION_FILE_NEXT)
_session_pathname = os.path.join(self._location, self._SESSION_FILE)
# Open the location directory, we need to fsync that later
# XXX: this may fail, maybe we should keep the fd open all the time?
location_fd = os.open(self._location, os.O_DIRECTORY)
logger.debug(
_("Opened %r as descriptor %d"), self._location, location_fd)
try:
# Open the "next" file in the location_directory
#
# Use "write" + "create" + "exclusive" flags so that no race
# condition is possible.
#
# This will never return -1, it throws IOError when anything is
# wrong. The caller has to catch this.
#
            # As a special exception, this code handles EEXIST and converts
# that to LockedStorageError that can be especially handled by
# some layer above.
try:
next_session_fd = os.open(
_next_session_pathname,
os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
except IOError as exc:
                if exc.errno == errno.EEXIST:
raise LockedStorageError()
else:
raise
logger.debug(
_("Opened next session file %s as descriptor %d"),
_next_session_pathname, next_session_fd)
try:
# Write session data to disk
#
# I cannot find conclusive evidence but it seems that
# os.write() handles partial writes internally. In case we do
# get a partial write _or_ we run out of disk space, raise an
# explicit IOError.
num_written = os.write(next_session_fd, data)
logger.debug(ngettext(
"Wrote %d byte of data to descriptor %d",
"Wrote %d bytes of data to descriptor %d",
num_written), num_written, next_session_fd)
if num_written != len(data):
raise IOError(_("partial write?"))
except Exception as exc:
logger.warning(_("Unable to complete write: %r"), exc)
# If anything goes wrong we should unlink the next file.
# TRANSLATORS: unlinking as in deleting a file
logger.warning(_("Unlinking %r"), _next_session_pathname)
os.unlink(_next_session_pathname)
else:
# If the write was successful we must flush kernel buffers.
#
# We want to be sure this data is really on disk by now as we
# may crash the machine soon after this method exits.
logger.debug(
# TRANSLATORS: please don't translate fsync()
_("Calling fsync() on descriptor %d"), next_session_fd)
try:
os.fsync(next_session_fd)
except OSError as exc:
logger.warning(_("Cannot synchronize file %r: %s"),
_next_session_pathname, exc)
finally:
# Close the new session file
logger.debug(_("Closing descriptor %d"), next_session_fd)
os.close(next_session_fd)
# Rename FILE_NEXT over FILE.
logger.debug(_("Renaming %r to %r"),
_next_session_pathname, _session_pathname)
try:
os.rename(_next_session_pathname, _session_pathname)
except Exception as exc:
# Same as above, if we fail we need to unlink the next file
# otherwise any other attempts will not be able to open() it
# with O_EXCL flag.
logger.warning(
_("Unable to rename/overwrite %r to %r: %r"),
_next_session_pathname, _session_pathname, exc)
# Same as above, if we fail we need to unlink the next file
# otherwise any other attempts will not be able to open() it
# with O_EXCL flag.
# TRANSLATORS: unlinking as in deleting a file
logger.warning(_("Unlinking %r"), _next_session_pathname)
os.unlink(_next_session_pathname)
# Flush kernel buffers on the directory.
#
# This should ensure the rename operation is really on disk by now.
# As noted above, this is essential for being able to survive
# system crash immediately after exiting this method.
# TRANSLATORS: please don't translate fsync()
logger.debug(_("Calling fsync() on descriptor %d"), location_fd)
try:
os.fsync(location_fd)
except OSError as exc:
logger.warning(_("Cannot synchronize directory %r: %s"),
self._location, exc)
finally:
# Close the location directory
logger.debug(_("Closing descriptor %d"), location_fd)
os.close(location_fd)
def _save_checkpoint_unix_py33(self, data):
if not isinstance(data, bytes):
raise TypeError("data must be bytes")
logger.debug(ngettext(
"Saving %d byte of data (%s)",
"Saving %d bytes of data (%s)",
len(data)), len(data), "UNIX, python 3.3 or newer")
# Open the location directory, we need to fsync that later
# XXX: this may fail, maybe we should keep the fd open all the time?
location_fd = os.open(self._location, os.O_DIRECTORY)
logger.debug(
_("Opened %r as descriptor %d"), self._location, location_fd)
try:
# Open the "next" file in the location_directory
#
# Use openat(2) to ensure we always open a file relative to the
# directory we already opened above. This is essential for fsync(2)
# calls made below.
#
# Use "write" + "create" + "exclusive" flags so that no race
# condition is possible.
#
# This will never return -1, it throws IOError when anything is
# wrong. The caller has to catch this.
#
            # As a special exception, this code handles EEXIST
            # (FileExistsError) and converts that to LockedStorageError
# that can be especially handled by some layer above.
try:
next_session_fd = os.open(
self._SESSION_FILE_NEXT,
os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644,
dir_fd=location_fd)
except FileExistsError:
raise LockedStorageError()
logger.debug(
_("Opened next session file %s as descriptor %d"),
self._SESSION_FILE_NEXT, next_session_fd)
try:
# Write session data to disk
#
# I cannot find conclusive evidence but it seems that
# os.write() handles partial writes internally. In case we do
# get a partial write _or_ we run out of disk space, raise an
# explicit IOError.
num_written = os.write(next_session_fd, data)
logger.debug(ngettext(
"Wrote %d byte of data to descriptor %d",
"Wrote %d bytes of data to descriptor %d", num_written),
num_written, next_session_fd)
if num_written != len(data):
raise IOError(_("partial write?"))
except Exception as exc:
logger.warning(_("Unable to complete write: %r"), exc)
# If anything goes wrong we should unlink the next file. As
# with the open() call above we use unlinkat to prevent race
# conditions.
# TRANSLATORS: unlinking as in deleting a file
logger.warning(_("Unlinking %r"), self._SESSION_FILE_NEXT)
os.unlink(self._SESSION_FILE_NEXT, dir_fd=location_fd)
else:
# If the write was successful we must flush kernel buffers.
#
# We want to be sure this data is really on disk by now as we
# may crash the machine soon after this method exits.
logger.debug(
# TRANSLATORS: please don't translate fsync()
_("Calling fsync() on descriptor %d"), next_session_fd)
try:
os.fsync(next_session_fd)
except OSError as exc:
logger.warning(_("Cannot synchronize file %r: %s"),
self._SESSION_FILE_NEXT, exc)
finally:
# Close the new session file
logger.debug(_("Closing descriptor %d"), next_session_fd)
os.close(next_session_fd)
# Rename FILE_NEXT over FILE.
#
# Use renameat(2) to ensure that there is no race condition if the
# location (directory) is being moved
logger.debug(
_("Renaming %r to %r"),
self._SESSION_FILE_NEXT, self._SESSION_FILE)
try:
os.rename(self._SESSION_FILE_NEXT, self._SESSION_FILE,
src_dir_fd=location_fd, dst_dir_fd=location_fd)
except Exception as exc:
# Same as above, if we fail we need to unlink the next file
# otherwise any other attempts will not be able to open() it
# with O_EXCL flag.
logger.warning(
_("Unable to rename/overwrite %r to %r: %r"),
self._SESSION_FILE_NEXT, self._SESSION_FILE, exc)
# TRANSLATORS: unlinking as in deleting a file
logger.warning(_("Unlinking %r"), self._SESSION_FILE_NEXT)
os.unlink(self._SESSION_FILE_NEXT, dir_fd=location_fd)
# Flush kernel buffers on the directory.
#
# This should ensure the rename operation is really on disk by now.
# As noted above, this is essential for being able to survive
# system crash immediately after exiting this method.
# TRANSLATORS: please don't translate fsync()
logger.debug(_("Calling fsync() on descriptor %d"), location_fd)
try:
os.fsync(location_fd)
except OSError as exc:
logger.warning(_("Cannot synchronize directory %r: %s"),
self._location, exc)
finally:
# Close the location directory
logger.debug(_("Closing descriptor %d"), location_fd)
os.close(location_fd)
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.session.test_manager
==================================
Test definitions for plainbox.impl.session.manager module
"""
from unittest import expectedFailure
from plainbox.abc import IJobDefinition
from plainbox.impl.session import SessionManager
from plainbox.impl.session import SessionState
from plainbox.impl.session import SessionStorage
from plainbox.impl.session.state import SessionDeviceContext
from plainbox.impl.session.suspend import SessionSuspendHelper
from plainbox.vendor import mock
from plainbox.vendor.morris import SignalTestCase
class SessionManagerTests(SignalTestCase):
def setUp(self):
self.storage = mock.Mock(name="storage", spec=SessionStorage)
self.state = mock.Mock(name="state", spec=SessionState)
self.context = mock.Mock(name="context", spec=SessionDeviceContext)
self.context2 = mock.Mock(
name='context2', spec_set=SessionDeviceContext)
self.context_list = [self.context] # NOTE: just the first context
self.manager = SessionManager(self.context_list, self.storage)
def test_device_context_list(self):
"""
Verify that accessing SessionManager.device_context_list works okay
"""
self.assertEqual(self.manager.device_context_list, self.context_list)
def test_default_device_context__typical(self):
"""
Verify that accessing SessionManager.default_device_context returns
the first context from the context list
"""
self.assertEqual(self.manager.default_device_context, self.context)
def test_default_device_context__no_contexts(self):
"""
Verify that accessing SessionManager.default_device_context returns
None when the manager doesn't have any device context objects yet
"""
manager = SessionManager([], self.storage)
        self.assertIsNone(manager.default_device_context)
def test_state(self):
"""
verify that accessing SessionManager.state works okay
"""
self.assertIs(self.manager.state, self.context.state)
def test_storage(self):
"""
verify that accessing SessionManager.storage works okay
"""
self.assertIs(self.manager.storage, self.storage)
def test_checkpoint(self):
"""
verify that SessionManager.checkpoint() creates an image of the
suspended session and writes it using the storage system.
"""
# Mock the suspend helper, we don't want to suspend our mock objects
helper_name = "plainbox.impl.session.manager.SessionSuspendHelper"
with mock.patch(helper_name, spec=SessionSuspendHelper) as helper_cls:
# Call the tested method
self.manager.checkpoint()
# Ensure that a fresh instance of the suspend helper was used to
# call the suspend() method and that the session state parameter
# was passed to it.
helper_cls().suspend.assert_called_with(
self.context.state, self.storage.location)
# Ensure that save_checkpoint() was called on the storage object with
# the return value of what the suspend helper produced.
self.storage.save_checkpoint.assert_called_with(
helper_cls().suspend(self.context.state))
def test_load_session(self):
"""
verify that SessionManager.load_session() correctly delegates the task
to various other objects
"""
job = mock.Mock(name='job', spec_set=IJobDefinition)
unit_list = [job]
flags = None
helper_name = "plainbox.impl.session.manager.SessionResumeHelper"
with mock.patch(helper_name) as helper_cls:
resumed_state = mock.Mock(spec_set=SessionState)
resumed_state.unit_list = unit_list
helper_cls().resume.return_value = resumed_state
# NOTE: mock away _propagate_test_plans() so that we don't get
# unwanted side effects we're not testing here.
with mock.patch.object(SessionManager, '_propagate_test_plans'):
manager = SessionManager.load_session(unit_list, self.storage)
# Ensure that the storage object was used to load the session snapshot
self.storage.load_checkpoint.assert_called_with()
# Ensure that the helper was instantiated with the unit list, flags and
# location
helper_cls.assert_called_with(unit_list, flags, self.storage.location)
# Ensure that the helper instance was asked to recreate session state
helper_cls().resume.assert_called_with(
self.storage.load_checkpoint(), None)
# Ensure that the resulting manager has correct data inside
self.assertEqual(manager.state, helper_cls().resume())
self.assertEqual(manager.storage, self.storage)
@mock.patch.multiple(
"plainbox.impl.session.manager", spec_set=True,
SessionStorageRepository=mock.DEFAULT,
SessionStorage=mock.DEFAULT,
WellKnownDirsHelper=mock.DEFAULT)
def test_create(self, **mocks):
"""
verify that SessionManager.create() correctly sets up
storage repository and creates session directories
"""
mocks['SessionStorage'].create.return_value = mock.MagicMock(
spec_set=SessionStorage)
# Create the new manager
manager = SessionManager.create()
# Ensure that a default repository was created
mocks['SessionStorageRepository'].assert_called_with()
repo = mocks['SessionStorageRepository']()
# Ensure that a storage was created, with repository location and
# without legacy mode turned on
mocks['SessionStorage'].create.assert_called_with(repo.location, False)
storage = mocks['SessionStorage'].create()
# Ensure that the default directories were created
mocks['WellKnownDirsHelper'].assert_called_with(storage)
helper = mocks['WellKnownDirsHelper']()
helper.populate.assert_called_with()
# Ensure that the resulting manager has correct data inside
self.assertEqual(manager.device_context_list, [])
self.assertEqual(manager.storage, storage)
@mock.patch.multiple(
"plainbox.impl.session.manager", spec_set=True,
SessionStorageRepository=mock.DEFAULT,
SessionState=mock.DEFAULT,
SessionStorage=mock.DEFAULT,
WellKnownDirsHelper=mock.DEFAULT)
def test_create_with_unit_list(self, **mocks):
"""
verify that SessionManager.create_with_unit_list() correctly sets up
storage repository and creates session directories
"""
mocks['SessionStorage'].create.return_value = mock.MagicMock(
spec_set=SessionStorage)
# Mock unit list
unit_list = mock.Mock(name='unit_list')
# Create the new manager
manager = SessionManager.create_with_unit_list(unit_list)
# Ensure that a state object was created
mocks['SessionState'].assert_called_with(unit_list)
state = mocks['SessionState']()
# Ensure that a default repository was created
mocks['SessionStorageRepository'].assert_called_with()
repo = mocks['SessionStorageRepository']()
# Ensure that a storage was created, with repository location and
# without legacy mode turned on
mocks['SessionStorage'].create.assert_called_with(repo.location, False)
storage = mocks['SessionStorage'].create()
# Ensure that the default directories were created
mocks['WellKnownDirsHelper'].assert_called_with(storage)
helper = mocks['WellKnownDirsHelper']()
helper.populate.assert_called_with()
# Ensure that the resulting manager has correct data inside
self.assertEqual(manager.state, state)
self.assertEqual(manager.storage, storage)
@mock.patch.multiple(
"plainbox.impl.session.manager", spec_set=True,
SessionStorageRepository=mock.DEFAULT,
SessionState=mock.DEFAULT,
SessionStorage=mock.DEFAULT,
SessionDeviceContext=mock.DEFAULT,
WellKnownDirsHelper=mock.DEFAULT)
def test_create_with_state(self, **mocks):
"""
verify that SessionManager.create_with_state() correctly sets up
storage repository and creates session directories
"""
mocks['SessionStorage'].create.return_value = mock.MagicMock(
spec_set=SessionStorage)
# Mock an empty list of units in the session state object
self.state.unit_list = []
# Create the new manager
manager = SessionManager.create_with_state(self.state)
# Ensure that a default repository was created
mocks['SessionStorageRepository'].assert_called_with()
repo = mocks['SessionStorageRepository']()
# Ensure that a storage was created, with repository location and
# without legacy mode turned on
mocks['SessionStorage'].create.assert_called_with(repo.location, False)
storage = mocks['SessionStorage'].create()
# Ensure that the default directories were created
mocks['WellKnownDirsHelper'].assert_called_with(storage)
helper = mocks['WellKnownDirsHelper']()
helper.populate.assert_called_with()
# Ensure that the device context was created with the right state
# object
mocks['SessionDeviceContext'].assert_called_with(self.state)
# Ensure that the resulting manager has correct data inside
self.assertEqual(
manager.device_context_list, [mocks['SessionDeviceContext']()])
# self.assertEqual(manager.state, self.state)
self.assertEqual(manager.storage, storage)
def test_add_device_context(self):
"""
Ensure that adding a device context works
"""
manager = SessionManager([], self.storage)
manager.add_device_context(self.context)
self.assertIn(self.context, manager.device_context_list)
@expectedFailure
def test_add_device_context__add_another(self):
"""
Ensure that adding a second context also works
"""
manager = SessionManager([], self.storage)
manager.add_device_context(self.context)
manager.add_device_context(self.context2)
self.assertIn(self.context, manager.device_context_list)
self.assertIn(self.context2, manager.device_context_list)
def test_add_device_context__twice(self):
"""
Ensure that you cannot add the same device context twice
"""
manager = SessionManager([], self.storage)
manager.add_device_context(self.context)
with self.assertRaises(ValueError):
manager.add_device_context(self.context)
def test_remove_context(self):
"""
Ensure that removing a device context works
"""
manager = SessionManager([], self.storage)
manager.add_device_context(self.context)
manager.remove_device_context(self.context)
self.assertNotIn(self.context, manager.device_context_list)
def test_remove_context__missing(self):
"""
Ensure that you cannot remove a device context that is not added first
"""
with self.assertRaises(ValueError):
self.manager.remove_device_context(self.context2)
def test_on_device_context_added(self):
"""
Ensure that adding a device context sends the appropriate signal
"""
manager = SessionManager([], self.storage)
self.watchSignal(manager.on_device_context_added)
manager.add_device_context(self.context)
self.assertSignalFired(manager.on_device_context_added, self.context)
def test_on_device_context_removed(self):
"""
Ensure that removing a device context sends the appropriate signal
"""
manager = SessionManager([self.context], self.storage)
self.watchSignal(manager.on_device_context_removed)
manager.remove_device_context(self.context)
self.assertSignalFired(manager.on_device_context_removed, self.context)
def test_add_local_device_context(self):
"""
Ensure that using add_local_device_context() adds a context with
a special 'local' device and fires the appropriate signal
"""
manager = SessionManager([], self.storage)
self.watchSignal(manager.on_device_context_added)
cls_name = "plainbox.impl.session.manager.SessionDeviceContext"
with mock.patch(cls_name) as sdc:
manager.add_local_device_context()
self.assertSignalFired(manager.on_device_context_added, sdc())
self.assertIn(sdc(), manager.device_context_list)
plainbox-0.25/plainbox/impl/session/jobs.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
Job State.
:mod:`plainbox.impl.session.jobs` -- jobs state handling
========================================================
This module contains a helper class for associating job state within a
particular session. The :class:`JobState` class holds references to a
:class:`JobDefinition` and :class:`JobResult` as well as a list of inhibitors
that prevent the job from being runnable in a particular session.
"""
import logging
from plainbox.abc import IJobResult
from plainbox.i18n import gettext as _
from plainbox.impl import pod
from plainbox.impl.resource import ResourceExpression
from plainbox.impl.result import MemoryJobResult
from plainbox.impl.unit.job import JobDefinition
from plainbox.vendor.enum import IntEnum
logger = logging.getLogger("plainbox.session.jobs")
class InhibitionCause(IntEnum):
"""
There are five possible not-ready causes.
UNDESIRED:
This job was not selected to run in this session
PENDING_DEP:
This job depends on another job which was not started yet
FAILED_DEP:
This job depends on another job which was started and failed
PENDING_RESOURCE:
This job has a resource requirement expression that uses a resource
produced by another job which was not started yet
FAILED_RESOURCE:
This job has a resource requirement that evaluated to a false value
"""
UNDESIRED = 0
PENDING_DEP = 1
FAILED_DEP = 2
PENDING_RESOURCE = 3
FAILED_RESOURCE = 4
def cause_convert_assign_filter(
instance: pod.POD, field: pod.Field, old: "Any", new: "Any") -> "Any":
"""
Assign filter for JobReadinessInhibitor.cause.
Custom assign filter for the JobReadinessInhibitor.cause field that
produces a very specific error message.
"""
try:
return pod.type_convert_assign_filter(instance, field, old, new)
except ValueError:
raise ValueError(_("unsupported value for cause"))
class JobReadinessInhibitor(pod.POD):
"""
Class representing the cause of a job not being ready to execute.
It is intended to be consumed by UI layers and to provide them with enough
information to render informative error messages or other visual feedback
that will aid the user in understanding why a job cannot be started.
There are five possible not-ready causes:
UNDESIRED:
This job was not selected to run in this session
PENDING_DEP:
This job depends on another job which was not started yet
FAILED_DEP:
This job depends on another job which was started and failed
PENDING_RESOURCE:
This job has a resource requirement expression that uses a resource
produced by another job which was not started yet
FAILED_RESOURCE:
This job has a resource requirement that evaluated to a false value
All causes apart from UNDESIRED use the related_job property to encode a
job that is related to the problem. The PENDING_RESOURCE and
FAILED_RESOURCE causes also store related_expression that describes the
relevant requirement expression.
There are three attributes that can be accessed:
cause:
Encodes the reason why a job is not ready, see
:class:`InhibitionCause`.
related_job:
Provides additional context for the problem. This is not the job
that is affected, rather, the job that is causing the problem.
related_expression:
Provides additional context for the problem caused by a failing
resource expression.
"""
# XXX: PENDING_RESOURCE is not strict, there are multiple states that are
# clumped here which is something I don't like. A resource may be still
# "pending" as in PENDING_DEP (it has not ran yet) or it could have ran but
# failed to produce any data, it could also be prevented from running
# because it has unmet dependencies. In essence it tells us nothing about
# whether related_job.can_start() is true or not.
#
# XXX: FAILED_RESOURCE is "correct" but somehow misleading, FAILED_RESOURCE
# is used to represent a resource expression that evaluated to a non-True
# value
cause = pod.Field(
doc="cause (constant) of the inhibitor",
type=InhibitionCause,
initial=pod.MANDATORY,
assign_filter_list=[cause_convert_assign_filter,
pod.read_only_assign_filter])
related_job = pod.Field(
doc="an (optional) job reference",
type=JobDefinition,
assign_filter_list=[pod.read_only_assign_filter])
related_expression = pod.Field(
doc="an (optional) resource expression reference",
type=ResourceExpression,
assign_filter_list=[pod.read_only_assign_filter])
def __init__(self, cause, related_job=None, related_expression=None):
"""
Initialize a new inhibitor with the specified cause.
If cause is other than UNDESIRED a related_job is necessary. If cause
is either PENDING_RESOURCE or FAILED_RESOURCE related_expression is
necessary as well. A ValueError is raised when this is violated.
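Example (an illustrative sketch using a mocked job definition):
>>> from plainbox.vendor.mock import Mock
>>> job = Mock(spec=JobDefinition)
>>> inhibitor = JobReadinessInhibitor(
...     InhibitionCause.PENDING_DEP, related_job=job)
>>> inhibitor.cause.name
'PENDING_DEP'
>>> JobReadinessInhibitor(InhibitionCause.PENDING_DEP)
Traceback (most recent call last):
...
ValueError: related_job must not be None when cause is PENDING_DEP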
"""
super().__init__(cause, related_job, related_expression)
if (self.cause != InhibitionCause.UNDESIRED and
self.related_job is None):
raise ValueError(
# TRANSLATORS: please don't translate related_job, None and
# cause
_("related_job must not be None when cause is {}").format(
self.cause.name))
if (self.cause in (InhibitionCause.PENDING_RESOURCE,
InhibitionCause.FAILED_RESOURCE) and
self.related_expression is None):
raise ValueError(_(
# TRANSLATORS: please don't translate related_expression, None
# and cause.
"related_expression must not be None when cause is {}"
).format(self.cause.name))
def __repr__(self):
"""Get a custom debugging representation of an inhibitor."""
return "<{} cause:{} related_job:{!r} related_expression:{!r}>".format(
self.__class__.__name__, self.cause.name, self.related_job,
self.related_expression)
def __str__(self):
"""Get a human-readable text representation of an inhibitor."""
if self.cause == InhibitionCause.UNDESIRED:
# TRANSLATORS: as in undesired job
return _("undesired")
elif self.cause == InhibitionCause.PENDING_DEP:
return _("required dependency {!r} did not run yet").format(
self.related_job.id)
elif self.cause == InhibitionCause.FAILED_DEP:
return _("required dependency {!r} has failed").format(
self.related_job.id)
elif self.cause == InhibitionCause.PENDING_RESOURCE:
return _(
"resource expression {!r} could not be evaluated because"
" the resource it depends on did not run yet"
).format(self.related_expression.text)
else:
assert self.cause == InhibitionCause.FAILED_RESOURCE
return _("resource expression {!r} evaluates to false").format(
self.related_expression.text)
# A global instance of :class:`JobReadinessInhibitor` with the UNDESIRED cause.
# This is used a lot and it makes no sense to instantiate it all the time.
UndesiredJobReadinessInhibitor = JobReadinessInhibitor(
InhibitionCause.UNDESIRED)
JOB_VALUE = object()
class OverridableJobField(pod.Field):
"""
A custom Field for modeling job values that can be overridden.
A readable-writable field that has a special initial value ``JOB_VALUE``
which is interpreted as "load this value from the corresponding job
definition".
This field class facilitates implementation of fields that have some
per-job value but can be also overridden in a session state context.
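Example (an illustrative sketch; the mocked job provides the fallback
value until an override is assigned):
>>> from plainbox.vendor.mock import Mock
>>> job = Mock(spec=JobDefinition)
>>> job.category_id = 'some-category'
>>> job_state = JobState(job)
>>> job_state.effective_category_id
'some-category'
>>> job_state.effective_category_id = 'overridden-category'
>>> job_state.effective_category_id
'overridden-category'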
"""
def __init__(self, job_field, doc=None, type=None, notify=False,
assign_filter_list=None):
"""Initialize a new overridable job field."""
super().__init__(
doc, type, JOB_VALUE, None, notify, assign_filter_list)
self.job_field = job_field
def __get__(self, instance, owner):
"""Get an overriden (if any) value of an overridable job field."""
value = super().__get__(instance, owner)
if value is JOB_VALUE:
return getattr(instance.job, self.job_field)
else:
return value
def job_assign_filter(instance, field, old_value, new_value):
"""
A custom setter for the JobState.job.
.. warning::
This setter should not exist. job attribute should be read-only. This
is a temporary kludge to get session restoring over DBus working. Once
a solution that doesn't involve setting a JobState's job attribute is
implemented, please remove this awful method.
"""
return new_value
def job_via_assign_filter(instance, field, old_value, new_value):
"""A custom setter for JobState.via_job."""
if (old_value is not pod.UNSET and
not isinstance(new_value, JobDefinition) and
new_value is not None):
raise TypeError("via_job must be the actual job, not the checksum")
return new_value
class JobState(pod.POD):
"""
Class representing the state of a job in a session.
Contains the following basic properties of each job:
* the readiness_inhibitor_list that prevent the job from starting
* the result (outcome) of the run (IJobResult)
* the effective category identifier
* the effective certification status
* the job that was used to create it (via_job)
For convenience (to SessionState implementation) it also has a reference to
the job itself. This class is a pure state holder and will typically
collaborate with the SessionState class and the UI layer.
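Example (an illustrative sketch; a freshly created state is inhibited as
undesired until the job is selected to run):
>>> from plainbox.vendor.mock import Mock
>>> job = Mock(spec=JobDefinition)
>>> job_state = JobState(job)
>>> job_state.can_start()
False
>>> job_state.get_readiness_description()
'job cannot be started: undesired'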
"""
job = pod.Field(
doc="the job associated with this state",
type=JobDefinition,
initial=pod.MANDATORY,
assign_filter_list=[job_assign_filter])
readiness_inhibitor_list = pod.Field(
doc="the list of readiness inhibitors of the associated job",
type="List[JobReadinessInhibitor]",
initial_fn=lambda: [UndesiredJobReadinessInhibitor])
result = pod.Field(
doc="the result of running the associated job",
type=IJobResult,
initial_fn=lambda: MemoryJobResult({}),
notify=True)
result_history = pod.Field(
doc="a tuple of result_history of the associated job",
type=tuple, initial=(), notify=True,
assign_filter_list=[pod.typed, pod.typed.sequence(IJobResult)])
via_job = pod.Field(
doc="the parent job definition",
type=JobDefinition,
assign_filter_list=[job_via_assign_filter])
effective_category_id = OverridableJobField(
job_field="category_id",
doc="the effective categorization of this test in a session",
type=str)
effective_certification_status = OverridableJobField(
job_field="certification_status",
doc="the effective certification status of this job",
type=str)
# NOTE: the `result` property just exposes the last result from the
# `result_history` tuple above. The API is used everywhere so it should not
# be broken in any way but the way forward is the sequence stored in
# `result_history`.
#
# The one particularly annoying part of this implementation is that each
# job state always has at least one result. Even if there was no testing
# done yet. This OUTCOME_NONE result needs to be filtered out at various
# times. I think it would be better if we could not have it in the
# sequence-based API anymore. Otherwise each test will have two
entries in result_history (more if you count things like resuming a session).
@result.change_notifier
def _result_changed(self, old, new):
# Don't track the initial assignment over UNSET
if old is pod.UNSET:
return
assert new != old
assert isinstance(new, IJobResult)
if new.is_hollow:
return
logger.debug(
"Appending result %r to history: %r", new, self.result_history)
self.result_history += (new,)
def can_start(self):
"""Quickly check if the associated job can run right now."""
return len(self.readiness_inhibitor_list) == 0
def get_readiness_description(self):
"""Get a human readable description of the current readiness state."""
if self.readiness_inhibitor_list:
return _("job cannot be started: {}").format(
", ".join((str(inhibitor)
for inhibitor in self.readiness_inhibitor_list)))
else:
return _("job can be started")
def apply_overrides(self, override_list: "List[Tuple[str, Any]]"):
"""
Apply overrides to effective job values.
This method is automatically called by :class:`SessionDeviceContext`
to implement effective overrides originating from test plan data.
:param override_list:
A list, as exposed by values of
:attr:`TestPlanUnitSupport.override_list`, composed of a sequence
of pairs ``(field, value)``, where ``field`` is the name of the
field to override (without the prefix ``effective_``) and value is
any valid value of that field.
:raises AttributeError:
If any of the ``field``s refer to an unknown field.
:raises ValueError:
If any of the ``field``s refer to fields that are not designated
as overridable.
:raises ValueError:
If the ``value`` supplied is incorrect for the given field.
:raises TypeError:
If the type of the ``value`` supplied is incorrect for the given
field.
.. note::
Consult field specification for details on what types and values
are valid for that field.
Example:
>>> from plainbox.vendor.mock import Mock
>>> job = Mock(spec=JobDefinition)
>>> job_state = JobState(job)
>>> job_state.apply_overrides([
... ('category_id', 'new-category-id'),
... ('certification_status', 'blocker')])
>>> job_state.effective_category_id
'new-category-id'
>>> job_state.effective_certification_status
'blocker'
"""
for field, value in override_list:
effective_field = 'effective_{}'.format(field)
effective_field_obj = getattr(self.__class__, effective_field)
if not isinstance(effective_field_obj, OverridableJobField):
raise ValueError(_('{!r} is not overridable').format(field))
setattr(self, effective_field, value)
logger.debug("Applied overrides %r to job %r", override_list, self.job)
plainbox-0.25/plainbox/impl/session/manager.py
# This file is part of Checkbox.
#
# Copyright 2012, 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.session.manager` -- manager for sessions
============================================================
This module contains glue code that allows one to create and manage sessions
and their filesystem presence. It allows
:class:`~plainbox.impl.session.state.SessionState` to be de-coupled
from :class:`~plainbox.impl.session.storage.SessionStorageRepository`,
:class:`~plainbox.impl.session.storage.SessionStorage`,
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper`
and :class:`~plainbox.impl.session.suspend.SessionResumeHelper`.
"""
from collections import OrderedDict
import contextlib
import errno
import logging
import os
from plainbox.i18n import gettext as _, ngettext
from plainbox.impl import pod
from plainbox.impl.session.resume import SessionResumeHelper
from plainbox.impl.session.state import SessionDeviceContext
from plainbox.impl.session.state import SessionState
from plainbox.impl.session.storage import LockedStorageError
from plainbox.impl.session.storage import SessionStorage
from plainbox.impl.session.storage import SessionStorageRepository
from plainbox.impl.session.suspend import SessionSuspendHelper
from plainbox.impl.unit.testplan import TestPlanUnit
from plainbox.public import get_providers
from plainbox.vendor import morris
logger = logging.getLogger("plainbox.session.manager")
class WellKnownDirsHelper(pod.POD):
"""
Helper class that knows about well known directories for SessionStorage.
This class simply gets rid of various magic directory names that we
associate with session storage. It also provides a convenience utility
method :meth:`populate()` to create all of those directories, if needed.
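Example (an illustrative sketch using a mocked storage object):
>>> from plainbox.vendor.mock import Mock
>>> storage = Mock(spec=SessionStorage, location='/tmp/some-session')
>>> WellKnownDirsHelper(storage).io_log_pathname
'/tmp/some-session/io-logs'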
"""
storage = pod.Field(
doc="SessionStorage associated with this helper",
type=SessionStorage,
initial=pod.MANDATORY,
assign_filter_list=[pod.const, pod.typed])
def populate(self):
"""
Create all of the well known directories that are expected to exist
inside a freshly created session storage directory
"""
for dirname in self.all_directories:
if not os.path.exists(dirname):
os.makedirs(dirname)
@property
def all_directories(self):
"""
a list of all well-known directories
"""
return [self.io_log_pathname]
@property
def io_log_pathname(self):
"""
full path of the directory where per-job IO logs are stored
"""
return os.path.join(self.storage.location, "io-logs")
def at_most_one_context_filter(
instance: pod.POD, field: pod.Field, old: "Any", new: "Any"
):
if len(new) > 1:
raise ValueError(_(
"session manager currently doesn't support sessions"
" involving multiple devices (a.k.a multi-node testing)"
))
return new
class SessionManager(pod.POD):
"""
Manager class for coupling SessionStorage with SessionState.
This class allows application code to manage disk state of sessions. Using
the :meth:`checkpoint()` method applications can create persistent
snapshots of the :class:`~plainbox.impl.session.state.SessionState`
associated with each :class:`SessionManager`.
"""
device_context_list = pod.Field(
doc="""
A list of session device context objects
.. note::
You must not modify this field directly.
This is not enforced but please use the
:meth:`add_device_context()` or :meth:`remove_device_context()` if
you want to manipulate the list. Currently you cannot reorder the
list of context objects.
""",
type=list,
initial=pod.MANDATORY,
assign_filter_list=[
pod.typed, pod.typed.sequence(SessionDeviceContext),
pod.const, at_most_one_context_filter])
storage = pod.Field(
doc="A SesssionStorage instance",
type=SessionStorage,
initial=pod.MANDATORY,
assign_filter_list=[pod.typed, pod.const])
def _on_test_plans_changed(self, old: "Any", new: "Any") -> None:
self._propagate_test_plans()
test_plans = pod.Field(
doc="""
Test plans that this session is processing.
This field contains a tuple of test plans that are active in the
session. Any changes here are propagated to each device context
participating in the session. This in turn makes all of the overrides
defined by those test plans effective.
.. note::
Currently there is no facility that would allow one to use this field to
drive test execution. Such facility is likely to be added later.
""",
type=tuple,
initial=(),
notify=True,
notify_fn=_on_test_plans_changed,
assign_filter_list=[
pod.typed, pod.typed.sequence(TestPlanUnit), pod.unique])
@property
def default_device_context(self):
"""
The default (first) session device context if available
In single-device sessions this is the context that is used to execute
every single job definition. Applications that use multiple devices
must access and use the context list directly.
.. note::
The default context may be None if there are no context objects
present in the session. This is never the case for applications
using the single-device APIs.
"""
return (self.device_context_list[0]
if len(self.device_context_list) > 0 else None)
@property
def state(self):
"""
:class:`~plainbox.impl.session.state.SessionState` associated with this
manager
"""
if self.default_device_context is not None:
return self.default_device_context.state
@classmethod
def create(cls, repo=None, legacy_mode=False):
"""
Create an empty session manager.
This method creates an empty session manager. This is the most generic
API that allows applications to freely work with any set of devices.
Typically applications will use the :meth:`add_device_context()` method
to add additional context objects at a later time. This method creates
and populates the session storage with all of the well known
directories (using :meth:`WellKnownDirsHelper.populate()`).
:param repo:
If specified then this particular repository will be used to create
the storage for this session. If left out, a new repository is
constructed with the default location.
:type repo:
:class:`~plainbox.impl.session.storage.SessionStorageRepository`.
:param legacy_mode:
Propagated to
:meth:`~plainbox.impl.session.storage.SessionStorage.create()` to
ensure that legacy (single session) mode is used.
:type legacy_mode:
bool
:return:
fresh :class:`SessionManager` instance
"""
logger.debug("SessionManager.create()")
if repo is None:
repo = SessionStorageRepository()
storage = SessionStorage.create(repo.location, legacy_mode)
WellKnownDirsHelper(storage).populate()
return cls([], storage)
@classmethod
def create_with_state(cls, state, repo=None, legacy_mode=False):
"""
Create a session manager by wrapping existing session state.
This method populates the session storage with all of the well known
directories (using :meth:`WellKnownDirsHelper.populate()`)
:param state:
A pre-existing SessionState object.
:param repo:
If specified then this particular repository will be used to create
the storage for this session. If left out, a new repository is
constructed with the default location.
:type repo:
:class:`~plainbox.impl.session.storage.SessionStorageRepository`.
:param legacy_mode:
Propagated to
:meth:`~plainbox.impl.session.storage.SessionStorage.create()`
to ensure that legacy (single session) mode is used.
:type legacy_mode:
bool
:return:
fresh :class:`SessionManager` instance
"""
logger.debug("SessionManager.create_with_state()")
if repo is None:
repo = SessionStorageRepository()
storage = SessionStorage.create(repo.location, legacy_mode)
WellKnownDirsHelper(storage).populate()
context = SessionDeviceContext(state)
return cls([context], storage)
@classmethod
def create_with_unit_list(cls, unit_list=None, repo=None,
legacy_mode=False):
"""
Create a session manager with a fresh session.
This method populates the session storage with all of the well known
directories (using :meth:`WellKnownDirsHelper.populate()`)
:param unit_list:
If specified then this will be the initial list of units known by
the session state object.
:param repo:
If specified then this particular repository will be used to create
the storage for this session. If left out, a new repository is
constructed with the default location.
:type repo:
:class:`~plainbox.impl.session.storage.SessionStorageRepository`.
:param legacy_mode:
Propagated to
:meth:`~plainbox.impl.session.storage.SessionStorage.create()`
to ensure that legacy (single session) mode is used.
:type legacy_mode:
bool
:return:
fresh :class:`SessionManager` instance
"""
logger.debug("SessionManager.create_with_unit_list()")
if unit_list is None:
unit_list = []
state = SessionState(unit_list)
if repo is None:
repo = SessionStorageRepository()
storage = SessionStorage.create(repo.location, legacy_mode)
context = SessionDeviceContext(state)
WellKnownDirsHelper(storage).populate()
return cls([context], storage)
@classmethod
def load_session(cls, unit_list, storage, early_cb=None, flags=None):
"""
Load a previously checkpointed session.
This method allows one to re-open a session that was previously
created by :meth:`SessionManager.checkpoint()`
:param unit_list:
List of all known units. This argument is used to reconstruct the
session from a dormant state. Since the suspended data cannot
capture implementation details of each unit reliably, actual units
need to be provided externally. Unlike in :meth:`create_with_unit_list()`
this list really needs to be complete, it must also include any
generated units.
:param storage:
The storage that should be used for this particular session.
The storage object holds references to existing directories
in the file system. When restoring an existing dormant session
it is important to use the correct storage object, the one that
corresponds to the file system location used by the session
before it was saved.
:type storage:
:class:`~plainbox.impl.session.storage.SessionStorage`
:param early_cb:
A callback that allows the caller to "see" the session object
early, before the bulk of resume operation happens. This method can
be used to register callbacks on the new session before this method
call returns. The callback accepts one argument, session, which is
being resumed. This is being passed directly to
:meth:`plainbox.impl.session.resume.SessionResumeHelper.resume()`
:param flags:
An optional set of flags that may influence the resume process.
Currently this is an internal implementation detail and no "public"
flags are provided. Passing None here is a safe equivalent of using
this API before it was introduced.
:raises:
Anything that can be raised by
:meth:`~plainbox.impl.session.storage.SessionStorage.
load_checkpoint()` and :meth:`~plainbox.impl.session.suspend.
SessionResumeHelper.resume()`
:returns:
Fresh instance of :class:`SessionManager`
"""
logger.debug("SessionManager.load_session()")
try:
data = storage.load_checkpoint()
except IOError as exc:
if exc.errno == errno.ENOENT:
state = SessionState(unit_list)
else:
raise
else:
state = SessionResumeHelper(
unit_list, flags, storage.location
).resume(data, early_cb)
context = SessionDeviceContext(state)
return cls([context], storage)
def checkpoint(self):
"""
Create a checkpoint of the session.
After calling this method you can later reopen the same session with
:meth:`SessionManager.load_session()`.
"""
logger.debug("SessionManager.checkpoint()")
data = SessionSuspendHelper().suspend(
self.state, self.storage.location)
logger.debug(
ngettext(
"Saving %d byte of checkpoint data to %r",
"Saving %d bytes of checkpoint data to %r", len(data)
), len(data), self.storage.location)
try:
self.storage.save_checkpoint(data)
except LockedStorageError:
self.storage.break_lock()
self.storage.save_checkpoint(data)
def destroy(self):
"""
Destroy all of the filesystem artifacts of the session.
This basically calls
:meth:`~plainbox.impl.session.storage.SessionStorage.remove()`
"""
logger.debug("SessionManager.destroy()")
self.storage.remove()
def add_device_context(self, context):
"""
Add a device context to the session manager
:param context:
The :class:`SessionDeviceContext` to add.
:raises ValueError:
If the context is already in the session manager or the device
represented by that context is already present in the session
manager.
This method fires the :meth:`on_device_context_added()` signal
"""
if any(other_context.device == context.device
for other_context in self.device_context_list):
raise ValueError(
_("attmpting to add a context for device {} which is"
" already represented in this session"
" manager").format(context.device))
if len(self.device_context_list) > 0:
self._too_many_device_context_objects()
self.device_context_list.append(context)
self.on_device_context_added(context)
return context
def add_local_device_context(self):
"""
Create and add a SessionDeviceContext that describes the local device.
The local device is always the device executing plainbox. Other devices
may execute jobs or parts of plainbox but they don't need to store or
run the full plainbox code.
"""
return self.add_device_context(SessionDeviceContext())
def remove_device_context(self, context):
"""
Remove a device context from the session manager
:param context:
The :class:`SessionDeviceContext` to remove.
This method fires the :meth:`on_device_context_removed()` signal
"""
if context not in self.device_context_list:
raise ValueError(_(
"attempting to remove a device context not present in this"
" session manager"))
self.device_context_list.remove(context)
self.on_device_context_removed(context)
@morris.signal
def on_device_context_added(self, context):
"""
Signal fired when a session device context object is added
"""
logger.debug(
_("Device context %s added to session manager %s"),
context, self)
self._propagate_test_plans()
@morris.signal
def on_device_context_removed(self, context):
"""
Signal fired when a session device context object is removed
"""
logger.debug(
_("Device context %s removed from session manager %s"),
context, self)
self._propagate_test_plans()
def _too_many_device_context_objects(self):
raise ValueError(_(
"session manager currently doesn't support sessions"
" involving multiple devices (a.k.a multi-node testing)"
))
def _propagate_test_plans(self):
logger.debug(_("Propagating test plans to all devices"))
test_plans = self.test_plans
for context in self.device_context_list:
context.set_test_plan_list(test_plans)
@property
def exporter_map(self):
""" Map from exporter id to the corresponding exporter unit. """
exporter_map = OrderedDict()
for unit in self.state.unit_list:
if unit.Meta.name == 'exporter':
support = unit.support
if support:
exporter_map[unit.id] = support
# Patch exporter map to expose short names
legacy_mapping = {
'2013.com.canonical.plainbox::hexr': 'xml',
'2013.com.canonical.plainbox::html': 'html',
'2013.com.canonical.plainbox::json': 'json',
'2013.com.canonical.plainbox::rfc822': 'rfc822',
'2013.com.canonical.plainbox::text': 'text',
'2013.com.canonical.plainbox::xlsx': 'xlsx'
}
for new_id, legacy_id in legacy_mapping.items():
if new_id in exporter_map:
exporter_map[legacy_id] = exporter_map[new_id]
return exporter_map
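The legacy-aliasing step above only adds short keys that point at the same exporter objects already in the map; loading never happens twice. A minimal sketch of that mapping trick (the `object()` placeholders stand in for exporter support objects):

```python
from collections import OrderedDict

# Hypothetical exporter objects keyed by their namespaced ids.
exporter_map = OrderedDict([
    ('2013.com.canonical.plainbox::json', object()),
    ('2013.com.canonical.plainbox::text', object()),
])
legacy_mapping = {
    '2013.com.canonical.plainbox::json': 'json',
    '2013.com.canonical.plainbox::text': 'text',
    '2013.com.canonical.plainbox::xlsx': 'xlsx',  # not loaded, no alias added
}
# Patch the map to expose short names for exporters that are present.
for new_id, legacy_id in legacy_mapping.items():
    if new_id in exporter_map:
        exporter_map[legacy_id] = exporter_map[new_id]
```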
def create_exporter(self, exporter_id, option_list=(), strict=True):
"""
Create an exporter object with the specified name and options.
:param exporter_id:
Identifier of the exporter unit (which must have been loaded
into the session device context of the first device). For
backwards compatibility this can also be any of the legacy
identifiers ``xml``, ``html``, ``json``, ``rfc822``, ``text`` or
``xlsx``.
:param option_list:
(optional) A list of options to pass to the exporter. Each option
is a string. Some strings may be of form 'key=value' but those are
handled by each exporter separately. By default an empty tuple is
used so no special options are enabled.
:param strict:
(optional) Strict mode, in this mode ``option_list`` must not
contain any options that are unrecognized by the exporter. Since
many options (but not all) are shared among various exporters,
using non-strict mode might make it easier to use a single superset
of options to all exporters and let them silently ignore those that
they don't understand.
:raises LookupError:
If the exporter identifier cannot be found. Note that this might
indicate that appropriate provider has not been loaded yet.
:returns:
A ISessionStateExporter instance with appropriate configuration.
"""
exporter_support = self.exporter_map[exporter_id]
if not strict:
# In non-strict mode silently discard unsupported options.
supported_options = frozenset(
exporter_support.exporter_cls.supported_option_list)
option_list = [
item for item in option_list if item in supported_options
]
return exporter_support.exporter_cls(
option_list, exporter_unit=exporter_support)
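The non-strict filtering in ``create_exporter()`` boils down to an intersection with the exporter's supported option set. Sketched with made-up option names:

```python
# Hypothetical supported options advertised by some exporter class.
supported_options = frozenset(['with-io-log', 'squash-io-log'])

option_list = ['with-io-log', 'no-such-option']
# In non-strict mode silently discard unsupported options.
option_list = [item for item in option_list if item in supported_options]
```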
@classmethod
@contextlib.contextmanager
def get_throwaway_manager(cls, provider_list=None):
"""
Create a temporary session manager.
:param provider_list:
(optional) A list of providers to put into the session manager. By
default all known providers are added. You can use this argument to
customize the behaviour beyond defaults.
:returns:
A new SessionManager object that will be destroyed when the context
manager is left.
This method can be used to create a throw-away session manager which is
not really meant for running jobs but can be useful to access exporters
and other objects stored in providers.
"""
if provider_list is None:
provider_list = get_providers()
manager = cls.create()
try:
manager.add_local_device_context()
device_context = manager.default_device_context
for provider in provider_list:
device_context.add_provider(provider)
yield manager
finally:
manager.destroy()
plainbox-0.25/plainbox/impl/session/__init__.py
# This file is part of Checkbox.
#
# Copyright 2012, 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.session` -- session handling
================================================
Sessions are central state holders and one of the most important classes in
PlainBox. Since they are all named alike it's a bit hard to find what the
actual responsibilities are. Here's a small shortcut; do read the description
of each module and class for additional details, though.
:class:`SessionState`
This is a class that holds all of the state and program logic.
:class:`SessionManager`
A class that couples :class:`SessionState` and
:class:`SessionStorage`. It has the methods required to alter the state by
introducing additional jobs or results. Its main responsibility is to keep
track of all of the jobs, their results, if they are runnable or not
(technically what is preventing them from being runnable) and to compute
the order of execution that can satisfy all of the dependencies.
It holds a number of references to other pieces of PlainBox (jobs,
resources and other things) but one thing stands out. This class holds
references to a number of :class:`JobState` objects that couple a
:class:`JobResult` and :class:`JobDefinition` together.
:class:`JobState`
A coupling class between :class:`JobDefinition` and :class:`JobResult`.
This class also knows why a job cannot be started in a particular session,
by maintaining a set of "inhibitors" that prevent it from being runnable.
The actual inhibitors are managed by :class:`SessionState`.
:class:`SessionStorage`
This class knows how to properly save and load bytes and manages a
directory for all the filesystem entries associated with a particular
session. It holds no references to a session though. Typically the class
is not instantiated directly but instead comes from helper methods of
:class:`SessionStorageRepository`.
:class:`SessionStorageRepository`
This class knows how to enumerate possible instances of
:class:`SessionStorage` from a given location in the filesystem. It also
knows how to obtain a default location using XDG standards.
"""
from plainbox.impl.session.jobs import InhibitionCause
from plainbox.impl.session.jobs import JobReadinessInhibitor
from plainbox.impl.session.jobs import JobState
from plainbox.impl.session.jobs import UndesiredJobReadinessInhibitor
from plainbox.impl.session.manager import SessionManager
from plainbox.impl.session.resume import SessionPeekHelper
from plainbox.impl.session.resume import SessionResumeError
from plainbox.impl.session.state import SessionMetaData
from plainbox.impl.session.state import SessionState
from plainbox.impl.session.storage import SessionStorage
from plainbox.impl.session.storage import SessionStorageRepository
__all__ = (
'JobReadinessInhibitor',
'JobState',
'SessionManager',
'SessionMetaData',
'SessionPeekHelper',
'SessionResumeError',
'SessionState',
'SessionStorage',
'SessionStorageRepository',
'UndesiredJobReadinessInhibitor',
'InhibitionCause',
)
plainbox-0.25/plainbox/impl/session/test_storage.py
# This file is part of Checkbox.
#
# Copyright 2012, 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.session.test_storage`
=========================================
Test definitions for :mod:`plainbox.impl.session.storage`
"""
from tempfile import TemporaryDirectory
from unittest import TestCase
import os
from plainbox.impl.session.storage import SessionStorage
from plainbox.impl.session.storage import SessionStorageRepository
from plainbox.vendor import mock
class SessionStorageRepositoryTests(TestCase):
def _populate_dummy_repo(self, repo,
session_list=['s1.session', 's2.session'],
last_session='s1.session'):
# Add session directories
for session_name in session_list:
os.mkdir(os.path.join(repo.location, session_name))
# And a symlink to the last session
if last_session is not None:
os.symlink(last_session, os.path.join(
repo.location, repo._LAST_SESSION_SYMLINK))
def test_smoke(self):
# Empty directory looks like an empty repository
with TemporaryDirectory() as tmp:
repo = SessionStorageRepository(tmp)
self.assertEqual(repo.location, tmp)
self.assertEqual(repo.get_storage_list(), [])
self.assertEqual(list(iter(repo)), [])
self.assertEqual(repo.get_last_storage(), None)
def test_get_storage_list(self):
# Directory with some sub-directories looks like a repository
# with a bunch of sessions inside.
with TemporaryDirectory() as tmp:
# Create a repository and some dummy data
repo = SessionStorageRepository(tmp)
self._populate_dummy_repo(repo)
# Get a list of storage objects
storage_list = repo.get_storage_list()
# Check if we got our data right.
# The results are not sorted so we sort them for testing
storage_name_list = [
os.path.basename(storage.location)
for storage in storage_list]
self.assertEqual(
sorted(storage_name_list), ["s1.session", "s2.session"])
def test_get_last_storage(self):
# Directory with some sub-directories looks like a repository
# with a bunch of sessions inside.
with TemporaryDirectory() as tmp:
# Create a repository and some dummy data
repo = SessionStorageRepository(tmp)
self._populate_dummy_repo(repo)
# Get the last storage object
storage = repo.get_last_storage()
# Check that we got session1
self.assertEqual(
os.path.basename(storage.location), 's1.session')
def test_get_last_storage__broken_symlink(self):
# Directory with some sub-directories looks like a repository
# with a bunch of sessions inside.
with TemporaryDirectory() as tmp:
# Create a repository without any sessions and one broken symlink
repo = SessionStorageRepository(tmp)
self._populate_dummy_repo(repo, [], "b0rken.session")
# Get the last storage object
storage = repo.get_last_storage()
# Make sure it's not valid
self.assertEqual(storage, None)
def test_get_default_location_with_XDG_CACHE_HOME(self):
"""
verify return value of get_default_location() when XDG_CACHE_HOME is
set and HOME has any value.
"""
env_patch = {'XDG_CACHE_HOME': 'XDG_CACHE_HOME'}
with mock.patch.dict('os.environ', values=env_patch):
measured = SessionStorageRepository.get_default_location()
expected = "XDG_CACHE_HOME/plainbox/sessions"
self.assertEqual(measured, expected)
def test_get_default_location_with_HOME(self):
"""
verify return value of get_default_location() when XDG_CACHE_HOME is
not set but HOME is set
"""
env_patch = {'HOME': 'HOME'}
with mock.patch.dict('os.environ', values=env_patch, clear=True):
measured = SessionStorageRepository.get_default_location()
expected = "HOME/.cache/plainbox/sessions"
self.assertEqual(measured, expected)
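The fallback chain those two tests exercise follows the XDG Base Directory convention: prefer ``$XDG_CACHE_HOME``, otherwise fall back to ``$HOME/.cache``. The real logic lives in ``SessionStorageRepository.get_default_location()``; this is an assumed equivalent, parameterized over an environment dict so it can be exercised without patching ``os.environ``:

```python
import os


def get_default_location(environ):
    # Prefer $XDG_CACHE_HOME; fall back to $HOME/.cache per the XDG
    # Base Directory specification.
    xdg_cache_home = environ.get('XDG_CACHE_HOME') or os.path.join(
        environ['HOME'], '.cache')
    return os.path.join(xdg_cache_home, 'plainbox', 'sessions')
```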
class SessionStorageTests(TestCase):
def test_smoke(self):
storage = SessionStorage('foo')
self.assertEqual(storage.location, 'foo')
def test_create_remove__modern(self):
with TemporaryDirectory() as tmp:
# Create a new storage in the specified directory
storage = SessionStorage.create(tmp, legacy_mode=False)
# The location should have been created
self.assertTrue(os.path.exists(storage.location))
# And it should be in the directory we indicated
self.assertEqual(os.path.dirname(storage.location), tmp)
# There should not be any symlink now, pointing to this storage
self.assertFalse(
os.path.exists(os.path.join(
tmp, SessionStorageRepository._LAST_SESSION_SYMLINK)))
# Remove the storage now
storage.remove()
# And make sure the storage is gone
self.assertFalse(os.path.exists(storage.location))
def test_create_remove__legacy(self):
with TemporaryDirectory() as tmp:
# Create a new storage in the specified directory
storage = SessionStorage.create(tmp, legacy_mode=True)
# The location should have been created
self.assertTrue(os.path.exists(storage.location))
# And it should be in the directory we indicated
self.assertEqual(os.path.dirname(storage.location), tmp)
# There should be a symlink now, pointing to this storage
self.assertEqual(
os.readlink(
os.path.join(
tmp, SessionStorageRepository._LAST_SESSION_SYMLINK)),
storage.location)
# Remove the storage now
storage.remove()
# And make sure the storage is gone
self.assertFalse(os.path.exists(storage.location))
# NOTE: this does not check if the symlink is gone but we don't
# touch it, it's just left as a dangling link there
def test_load_save_checkpoint__legacy(self):
with TemporaryDirectory() as tmp:
# Create a new storage in the specified directory
storage = SessionStorage.create(tmp, legacy_mode=True)
# Save some checkpoint data
data_out = b'some data'
storage.save_checkpoint(data_out)
# Load it back
data_in = storage.load_checkpoint()
# Check if it's right
self.assertEqual(data_out, data_in)
def test_load_save_checkpoint__modern(self):
with TemporaryDirectory() as tmp:
# Create a new storage in the specified directory
storage = SessionStorage.create(tmp, legacy_mode=False)
# Save some checkpoint data
data_out = b'some data'
storage.save_checkpoint(data_out)
# Load it back
data_in = storage.load_checkpoint()
# Check if it's right
self.assertEqual(data_out, data_in)
plainbox-0.25/plainbox/impl/session/suspend.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
Implementation of session suspend feature.
:mod:`plainbox.impl.session.suspend` -- session suspend support
===============================================================
This module contains classes that can suspend an instance of
:class:`~plainbox.impl.session.state.SessionState`. The general idea is that
:class:`~plainbox.impl.session.suspend.SessionSuspendHelper` knows how to
describe the session and
:class:`~plainbox.impl.session.resume.SessionResumeHelper` knows how to
recreate the session from that description.
Both of the helper classes are only used by
:class:`~plainbox.impl.session.manager.SessionManager` and in the
legacy suspend/resume code paths of
:class:`~plainbox.impl.session.state._LegacySessionState`.
Applications should use one of those APIs to work with session snapshots.
The design of the on-disk format is not like typical pickle or raw dump of all
of the objects. Instead it is designed to create a smart representation of a
subset of the data and explicitly support migrations, so that some future
version of PlainBox can change the format and still read old sessions (to the
extent that it makes sense) or at least reject them with an intelligent
message.
One important consideration of the format is that we suspend very often and
resume very infrequently so everything is optimized around saving big
chunks of data incrementally (all the big job results and their log files)
and to keep most of the data we save over and over small.
The key limitation in how the suspend code works is that we cannot really
serialize jobs at all. There are two reasons for that, one very obvious
and one which is more of a design decision.
The basic reason for why we cannot serialize jobs is that we cannot really,
meaningfully serialize the code that runs inside a job. That may be the shell
command or a call into a python module. Without this limitation we would
be basically pretending that we are running the same job as before while the
job definition has transparently changed and the results would not be
sensible anymore.
The design decision is to allow abstract, opaque Providers to offer various
types of JobDefinitions (that may be radically different to what current
CheckBox jobs look like). This is why the resume interface requires one to
provide a full list of job definitions to resume. This is also why the checksum
attribute can be implemented differently in non-CheckBox jobs.
As an exception to this rule we _do_ serialize generated jobs. Those are a
compromise between ease-of-use of the framework and the external
considerations mentioned above. Generated jobs are re-created from whatever
results that created them. The framework has special support code for knowing
how to resume in light of the fact that some jobs might be generated during
the resume process itself.
Serialization format versions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1) The initial version
2) Same as '1' but suspends
:attr:`plainbox.impl.session.state.SessionMetaData.app_blob`
3) Same as '2' but suspends
:attr:`plainbox.impl.session.state.SessionMetaData.app_id`
4) Same as '3' but hollow results are not saved and jobs that only
have hollow results are not mentioned in the job -> checksum map.
5) Same as '4' but DiskJobResult is stored with a relative pathname to the log
file if session_dir is provided.
6) Same as '5' plus store the list of mandatory jobs.
"""
import base64
import gzip
import json
import logging
import os
from plainbox.impl.result import DiskJobResult
from plainbox.impl.result import MemoryJobResult
logger = logging.getLogger("plainbox.session.suspend")
class SessionSuspendHelper1:
"""
Helper class for computing binary representation of a session.
The helper only creates a bytes object to save. Actual saving should
be performed using some other means, preferably using
:class:`~plainbox.impl.session.storage.SessionStorage`.
This class creates version '1' snapshots.
"""
VERSION = 1
def suspend(self, session, session_dir=None):
"""
Compute suspend representation.
Compute the data that is saved by :class:`SessionStorage` as a
part of :meth:`SessionStorage.save_checkpoint()`.
:param session:
The SessionState object to represent.
:param session_dir:
(optional) The base directory of the session. If this argument is
used then it can alter the representation of some objects related
to filesystem artefacts. It is recommended to always pass the
session directory.
:returns bytes: the serialized data
"""
json_repr = self._json_repr(session, session_dir)
data = json.dumps(
json_repr,
ensure_ascii=False,
sort_keys=True,
indent=None,
separators=(',', ':')
).encode("UTF-8")
# NOTE: gzip.compress is not deterministic on python3.2
return gzip.compress(data)
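The serialization pipeline in ``suspend()`` (compact, key-sorted JSON, encoded as UTF-8, then gzipped) round-trips cleanly, and the ``sort_keys``/``separators`` options keep the byte output deterministic for a given session representation:

```python
import gzip
import json

json_repr = {"version": 1, "session": {"jobs": {}, "results": {}}}
# Same options as SessionSuspendHelper1.suspend().
data = json.dumps(
    json_repr,
    ensure_ascii=False,
    sort_keys=True,
    indent=None,
    separators=(',', ':'),
).encode("UTF-8")
blob = gzip.compress(data)
# Resuming reverses both steps.
restored = json.loads(gzip.decompress(blob).decode("UTF-8"))
```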
def _json_repr(self, session, session_dir):
"""
Compute the representation of all of the data that needs to be saved.
:returns:
JSON-friendly representation
:rtype:
dict
The dictionary has the following keys:
``version``
An integral number describing the version of the representation.
See the version table for details.
``session``
Representation of the session as computed by
:meth:`_repr_SessionState()`
"""
return {
"version": self.VERSION,
"session": self._repr_SessionState(session, session_dir),
}
def _repr_SessionState(self, obj, session_dir):
"""
Compute the representation of SessionState.
:returns:
JSON-friendly representation
:rtype:
dict
The result is a dictionary with the following items:
``jobs``:
Dictionary mapping job id to job checksum.
The checksum is computed with
:attr:`~plainbox.impl.job.JobDefinition.checksum`
``results``
Dictionary mapping job id to a list of results.
Each result is represented by data computed by
:meth:`_repr_JobResult()`
``desired_job_list``:
List of (ids) of jobs that are desired (to be executed)
``mandatory_job_list``:
List of (ids) of jobs that must be executed
``metadata``:
The representation of meta-data associated with the session
state object.
"""
return {
"jobs": {
state.job.id: state.job.checksum
for state in obj.job_state_map.values()
},
"results": {
# Currently we store only one result but we may store
# more than that in a later version.
state.job.id: [self._repr_JobResult(state.result, session_dir)]
for state in obj.job_state_map.values()
},
"desired_job_list": [
job.id for job in obj.desired_job_list
],
"mandatory_job_list": [
job.id for job in obj.mandatory_job_list
],
"metadata": self._repr_SessionMetaData(obj.metadata, session_dir),
}
def _repr_SessionMetaData(self, obj, session_dir):
"""
Compute the representation of SessionMetaData.
:returns:
JSON-friendly representation.
:rtype:
dict
The result is a dictionary with the following items:
``title``:
Title of the session. Arbitrary text provided by the
application.
``flags``:
List of strings that enumerate the flags the session is in.
There are some well-known flags but this list can have any
items in it.
``running_job_name``:
Id of the job that was about to be executed before
snapshotting took place. Can be None.
"""
return {
"title": obj.title,
"flags": list(sorted(obj.flags)),
"running_job_name": obj.running_job_name
}
def _repr_JobResult(self, obj, session_dir):
"""Compute the representation of one of IJobResult subclasses."""
if isinstance(obj, DiskJobResult):
return self._repr_DiskJobResult(obj, session_dir)
elif isinstance(obj, MemoryJobResult):
return self._repr_MemoryJobResult(obj, session_dir)
else:
raise TypeError(
"_repr_JobResult() supports DiskJobResult or MemoryJobResult")
def _repr_JobResultBase(self, obj, session_dir):
"""
Compute the representation of _JobResultBase.
:returns:
JSON-friendly representation
:rtype:
dict
The dictionary has the following keys:
``outcome``
The outcome of the test
``execution_duration``
Time it took to execute the test command in seconds
``comments``
Tester-supplied comments
``return_code``
The exit code of the application.
.. note::
return_code can have unexpected values when the process was killed
by a signal
"""
return {
"outcome": obj.outcome,
"execution_duration": obj.execution_duration,
"comments": obj.comments,
"return_code": obj.return_code,
}
def _repr_MemoryJobResult(self, obj, session_dir):
"""
Compute the representation of MemoryJobResult.
:returns:
JSON-friendly representation
:rtype:
dict
The dictionary has the following keys *in addition to* what is
produced by :meth:`_repr_JobResultBase()`:
``io_log``
Representation of the list of IO Log records
"""
assert isinstance(obj, MemoryJobResult)
result = self._repr_JobResultBase(obj, session_dir)
result.update({
"io_log": [self._repr_IOLogRecord(record)
for record in obj.io_log],
})
return result
def _repr_DiskJobResult(self, obj, session_dir):
"""
Compute the representation of DiskJobResult.
:returns:
JSON-friendly representation
:rtype:
dict
The dictionary has the following keys *in addition to* what is
produced by :meth:`_repr_JobResultBase()`:
``io_log_filename``
The name of the file that keeps the serialized IO log
"""
assert isinstance(obj, DiskJobResult)
result = self._repr_JobResultBase(obj, session_dir)
result.update({
"io_log_filename": obj.io_log_filename,
})
return result
def _repr_IOLogRecord(self, obj):
"""
Compute the representation of IOLogRecord.
:returns:
JSON-friendly representation
:rtype:
list
The list has three elements:
* delay, copied from :attr:`~plainbox.impl.result.IOLogRecord.delay`
* stream name, copied from
:attr:`~plainbox.impl.result.IOLogRecord.stream_name`
* data, base64 encoded ASCII string, computed from
:attr:`~plainbox.impl.result.IOLogRecord.data`
"""
return [obj[0], obj[1],
base64.standard_b64encode(obj[2]).decode("ASCII")]
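An ``IOLogRecord`` is a ``(delay, stream_name, data)`` tuple; the representation keeps the first two fields as-is and base64-encodes the raw bytes so the record is JSON-friendly. A small round-trip sketch with a made-up record:

```python
import base64

record = (0.5, 'stdout', b'hello\n')
# Same shape as _repr_IOLogRecord(): delay, stream name, base64 payload.
repr_record = [record[0], record[1],
               base64.standard_b64encode(record[2]).decode("ASCII")]
# Decoding recovers the original payload on resume.
payload = base64.standard_b64decode(repr_record[2])
```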
class SessionSuspendHelper2(SessionSuspendHelper1):
"""
Helper class for computing binary representation of a session.
The helper only creates a bytes object to save. Actual saving should
be performed using some other means, preferably using
:class:`~plainbox.impl.session.storage.SessionStorage`.
This class creates version '2' snapshots.
"""
VERSION = 2
def _repr_SessionMetaData(self, obj, session_dir):
"""
Compute the representation of :class:`SessionMetaData`.
:returns:
JSON-friendly representation.
:rtype:
dict
The result is a dictionary with the following items:
``title``:
Title of the session. Arbitrary text provided by the
application.
``flags``:
List of strings that enumerate the flags the session is in.
There are some well-known flags but this list can have any
items in it.
``running_job_name``:
Id of the job that was about to be executed before
snapshotting took place. Can be None.
``app_blob``:
Arbitrary application specific binary blob encoded with base64.
This field may be null.
"""
data = super(SessionSuspendHelper2, self)._repr_SessionMetaData(
obj, session_dir)
if obj.app_blob is None:
data['app_blob'] = None
else:
data['app_blob'] = base64.standard_b64encode(
obj.app_blob
).decode("ASCII")
return data
class SessionSuspendHelper3(SessionSuspendHelper2):
"""
Helper class for computing binary representation of a session.
The helper only creates a bytes object to save. Actual saving should
be performed using some other means, preferably using
:class:`~plainbox.impl.session.storage.SessionStorage`.
This class creates version '3' snapshots.
"""
VERSION = 3
def _repr_SessionMetaData(self, obj, session_dir):
"""
Compute the representation of :class:`SessionMetaData`.
:returns:
JSON-friendly representation.
:rtype:
dict
The result is a dictionary with the following items:
``title``:
Title of the session. Arbitrary text provided by the
application.
``flags``:
List of strings that enumerate the flags the session is in.
There are some well-known flags but this list can have any
items in it.
``running_job_name``:
Id of the job that was about to be executed before
snapshotting took place. Can be None.
``app_blob``:
Arbitrary application specific binary blob encoded with base64.
This field may be null.
``app_id``:
A string identifying the application that stored app_blob.
This field may be null.
"""
data = super(SessionSuspendHelper3, self)._repr_SessionMetaData(
obj, session_dir)
data['app_id'] = obj.app_id
return data
class SessionSuspendHelper4(SessionSuspendHelper3):
"""
Helper class for computing binary representation of a session.
The helper only creates a bytes object to save. Actual saving should
be performed using some other means, preferably using
:class:`~plainbox.impl.session.storage.SessionStorage`.
This class creates version '4' snapshots.
"""
VERSION = 4
def _repr_SessionState(self, obj, session_dir):
"""
Compute the representation of :class:`SessionState`.
:returns:
JSON-friendly representation
:rtype:
dict
The result is a dictionary with the following items:
``jobs``:
Dictionary mapping job id to job checksum.
The checksum is computed with
:attr:`~plainbox.impl.job.JobDefinition.checksum`.
Two kinds of jobs are mentioned here:
- jobs that ever ran and have a result
- jobs that may run (are on the run list now)
The idea is to capture the "state" of the jobs that are
"important" to this session, that should be checked for
modifications when the session resumes later.
``results``
Dictionary mapping job id to a list of results.
Each result is represented by data computed by
:meth:`_repr_JobResult()`. Only jobs that actually have
a result are mentioned here. The automatically generated
"None" result that is always present for every job is skipped.
``desired_job_list``:
List of (ids) of jobs that are desired (to be executed)
``mandatory_job_list``:
List of (ids) of jobs that must be executed
``metadata``:
The representation of meta-data associated with the session
state object.
"""
id_run_list = frozenset([job.id for job in obj.run_list])
return {
"jobs": {
state.job.id: state.job.checksum
for state in obj.job_state_map.values()
if not state.result.is_hollow or state.job.id in id_run_list
},
"results": {
state.job.id: [self._repr_JobResult(result, session_dir)
for result in state.result_history]
for state in obj.job_state_map.values()
if len(state.result_history) > 0
},
"desired_job_list": [
job.id for job in obj.desired_job_list
],
"mandatory_job_list": [
job.id for job in obj.mandatory_job_list
],
"metadata": self._repr_SessionMetaData(obj.metadata, session_dir),
}
class SessionSuspendHelper5(SessionSuspendHelper4):
"""
Helper class for computing binary representation of a session.
The helper only creates a bytes object to save. Actual saving should
be performed using some other means, preferably using
:class:`~plainbox.impl.session.storage.SessionStorage`.
This class creates version '5' snapshots.
"""
VERSION = 5
def _repr_DiskJobResult(self, obj, session_dir):
"""
Compute the representation of DiskJobResult.
:returns:
JSON-friendly representation
:rtype:
dict
The dictionary has the following keys *in addition to* what is
produced by :meth:`_repr_JobResultBase()`:
``io_log_filename``
The path of the file that keeps the serialized IO log relative
to the session directory.
"""
result = super()._repr_DiskJobResult(obj, session_dir)
if session_dir is not None:
result["io_log_filename"] = os.path.relpath(
obj.io_log_filename, session_dir)
return result
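Version 5 stores the log path relative to the session directory, which keeps checkpoints valid when the whole session directory is moved or archived. The transformation is plain ``os.path.relpath`` (the paths below are made up for illustration):

```python
import os

session_dir = '/var/cache/plainbox/sessions/s1.session'
io_log_filename = os.path.join(session_dir, 'io-logs', 'job-1.record.gz')
# Store a session-relative path instead of an absolute one.
relative = os.path.relpath(io_log_filename, session_dir)
# On resume, joining with the (possibly new) session_dir restores it.
restored = os.path.join(session_dir, relative)
```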
class SessionSuspendHelper6(SessionSuspendHelper5):
"""
Helper class for computing binary representation of a session.
The helper only creates a bytes object to save. Actual saving should
be performed using some other means, preferably using
:class:`~plainbox.impl.session.storage.SessionStorage`.
This class creates version '6' snapshots.
"""
VERSION = 6
def _repr_SessionState(self, obj, session_dir):
"""
Compute the representation of :class:`SessionState`.
:returns:
JSON-friendly representation
:rtype:
dict
The result is a dictionary with the following items:
``jobs``:
Dictionary mapping job id to job checksum.
The checksum is computed with
:attr:`~plainbox.impl.job.JobDefinition.checksum`.
Two kinds of jobs are mentioned here:
- jobs that ever ran and have a result
- jobs that may run (are on the run list now)
The idea is to capture the "state" of the jobs that are
"important" to this session, that should be checked for
modifications when the session resumes later.
``results``
Dictionary mapping job id to a list of results.
Each result is represented by data computed by
:meth:`_repr_JobResult()`. Only jobs that actually have
a result are mentioned here. The automatically generated
"None" result that is always present for every job is skipped.
``desired_job_list``:
List of (ids) of jobs that are desired (to be executed)
``mandatory_job_list``:
List of (ids) of jobs that must be executed
``metadata``:
The representation of meta-data associated with the session
state object.
"""
id_run_list = frozenset([job.id for job in obj.run_list])
return {
"jobs": {
state.job.id: state.job.checksum
for state in obj.job_state_map.values()
if not state.result.is_hollow or state.job.id in id_run_list
},
"results": {
state.job.id: [self._repr_JobResult(result, session_dir)
for result in state.result_history]
for state in obj.job_state_map.values()
if len(state.result_history) > 0
},
"desired_job_list": [
job.id for job in obj.desired_job_list
],
"mandatory_job_list": [
job.id for job in obj.mandatory_job_list
],
"metadata": self._repr_SessionMetaData(obj.metadata, session_dir),
}
# Alias for the most recent version
SessionSuspendHelper = SessionSuspendHelper6
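# The _repr_SessionState() docstring above spells out which jobs end up in a
# snapshot: jobs that ever produced a real result, plus jobs still on the run
# list.  The sketch below reproduces that filtering with hypothetical stand-in
# classes (Job, FakeResult, FakeJobState, repr_session are illustration only,
# not plainbox API; metadata handling is omitted).

```python
from collections import namedtuple

Job = namedtuple('Job', ['id', 'checksum'])


class FakeResult:
    def __init__(self, outcome):
        self.outcome = outcome

    @property
    def is_hollow(self):
        # A "hollow" result stands in for the automatic OUTCOME_NONE
        # placeholder that every job always has.
        return self.outcome is None


class FakeJobState:
    def __init__(self, job, result, history=()):
        self.job = job
        self.result = result
        self.result_history = tuple(history)


def repr_session(job_state_map, run_list, desired, mandatory):
    """Sketch of the version-6 snapshot layout (without metadata)."""
    id_run_list = frozenset(job.id for job in run_list)
    return {
        # Jobs that ever ran, plus jobs still on the run list.
        "jobs": {
            state.job.id: state.job.checksum
            for state in job_state_map.values()
            if not state.result.is_hollow or state.job.id in id_run_list
        },
        # Only jobs with at least one real result appear here.
        "results": {
            state.job.id: [r.outcome for r in state.result_history]
            for state in job_state_map.values()
            if state.result_history
        },
        "desired_job_list": [job.id for job in desired],
        "mandatory_job_list": [job.id for job in mandatory],
    }
```

A job with only the hollow placeholder result and no place on the run list is dropped entirely, which keeps resumed sessions from checksum-checking jobs that never mattered.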
plainbox-0.25/plainbox/impl/session/test_jobs.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.session.test_jobs
===============================
Test definitions for plainbox.impl.session.jobs module
"""
from doctest import DocTestSuite
from doctest import REPORT_NDIFF
from unittest import TestCase, expectedFailure
from plainbox.abc import IJobResult
from plainbox.impl.session import InhibitionCause
from plainbox.impl.session import JobReadinessInhibitor
from plainbox.impl.session import JobState
from plainbox.impl.session import UndesiredJobReadinessInhibitor
from plainbox.impl.testing_utils import make_job, make_job_result
def load_tests(loader, tests, ignore):
tests.addTests(DocTestSuite(
'plainbox.impl.session.jobs', optionflags=REPORT_NDIFF))
return tests
class JobReadinessInhibitorTests(TestCase):
def test_bad_initialization(self):
self.assertRaises(ValueError, JobReadinessInhibitor,
InhibitionCause.UNDESIRED - 1)
self.assertRaises(ValueError, JobReadinessInhibitor,
InhibitionCause.FAILED_RESOURCE + 1)
self.assertRaises(ValueError, JobReadinessInhibitor,
InhibitionCause.PENDING_DEP)
self.assertRaises(ValueError, JobReadinessInhibitor,
InhibitionCause.FAILED_DEP)
self.assertRaises(ValueError, JobReadinessInhibitor,
InhibitionCause.PENDING_RESOURCE)
self.assertRaises(ValueError, JobReadinessInhibitor,
InhibitionCause.FAILED_RESOURCE)
job = make_job("A")
self.assertRaises(ValueError, JobReadinessInhibitor,
InhibitionCause.PENDING_RESOURCE, job)
self.assertRaises(ValueError, JobReadinessInhibitor,
InhibitionCause.FAILED_RESOURCE, job)
def test_unknown(self):
obj = JobReadinessInhibitor(InhibitionCause.UNDESIRED)
self.assertEqual(
repr(obj), (
"<JobReadinessInhibitor cause:UNDESIRED>"))
self.assertEqual(str(obj), "undesired")
def test_pending_dep(self):
job = make_job("A")
obj = JobReadinessInhibitor(
InhibitionCause.PENDING_DEP, related_job=job)
self.assertEqual(
repr(obj), (
"<JobReadinessInhibitor cause:PENDING_DEP"
" related_job:<JobDefinition id:'A' plugin:'dummy'>"
" related_expression:None>"))
self.assertEqual(str(obj), "required dependency 'A' did not run yet")
def test_failed_dep(self):
job = make_job("A")
obj = JobReadinessInhibitor(
InhibitionCause.FAILED_DEP, related_job=job)
self.assertEqual(
repr(obj), (
"<JobReadinessInhibitor cause:FAILED_DEP"
" related_job:<JobDefinition id:'A' plugin:'dummy'>"
" related_expression:None>"))
self.assertEqual(str(obj), "required dependency 'A' has failed")
def test_pending_resource(self):
job = make_job("A", requires="resource.attr == 'value'")
expr = job.get_resource_program().expression_list[0]
obj = JobReadinessInhibitor(
InhibitionCause.PENDING_RESOURCE, related_job=job,
related_expression=expr)
self.assertEqual(
repr(obj), (
"<JobReadinessInhibitor cause:PENDING_RESOURCE"
" related_job:<JobDefinition id:'A' plugin:'dummy'>"
" related_expression:"
"<ResourceExpression text:\"resource.attr == 'value'\">>"))
self.assertEqual(
str(obj), (
"resource expression \"resource.attr == 'value'\" could not be"
" evaluated because the resource it depends on did not run"
" yet"))
def test_failed_resource(self):
job = make_job("A", requires="resource.attr == 'value'")
expr = job.get_resource_program().expression_list[0]
obj = JobReadinessInhibitor(
InhibitionCause.FAILED_RESOURCE, related_job=job,
related_expression=expr)
self.assertEqual(
repr(obj), (
"<JobReadinessInhibitor cause:FAILED_RESOURCE"
" related_job:<JobDefinition id:'A' plugin:'dummy'>"
" related_expression:"
"<ResourceExpression text:\"resource.attr == 'value'\">>"))
self.assertEqual(
str(obj), (
"resource expression \"resource.attr == 'value'\""
" evaluates to false"))
def test_unknown_global(self):
self.assertEqual(UndesiredJobReadinessInhibitor.cause,
InhibitionCause.UNDESIRED)
class JobStateTests(TestCase):
def setUp(self):
self.job = make_job("A")
self.job_state = JobState(self.job)
def test_smoke(self):
self.assertIsNotNone(self.job_state.result)
self.assertIs(self.job_state.result.outcome, IJobResult.OUTCOME_NONE)
self.assertEqual(self.job_state.result_history, ())
self.assertEqual(self.job_state.readiness_inhibitor_list, [
UndesiredJobReadinessInhibitor])
self.assertEqual(self.job_state.effective_category_id,
self.job.category_id)
self.assertEqual(self.job_state.effective_certification_status,
self.job.certification_status)
self.assertIsNone(self.job_state.via_job)
def test_getting_job(self):
self.assertIs(self.job_state.job, self.job)
@expectedFailure
def test_setting_job_is_not_allowed(self):
# FIXME: We want this test to come back at some point so I didn't
# delete it, but at the moment we need it to always pass because
# a JobState's job attribute needs to be writable.
with self.assertRaises(AttributeError):
self.job_state.job = None
def test_setting_result(self):
result = make_job_result()
self.job_state.result = result
self.assertIs(self.job_state.result, result)
def test_result_history_keeps_track_of_result_changes(self):
# XXX: this example will fail if subsequent results are identical
self.assertEqual(self.job_state.result_history, ())
result1 = make_job_result(outcome='fail')
self.job_state.result = result1
self.assertEqual(self.job_state.result_history, (result1,))
result2 = make_job_result(outcome='pass')
self.job_state.result = result2
self.assertEqual(self.job_state.result_history, (result1, result2))
def test_setting_result_fires_signal(self):
"""
verify that assigning state.result fires the on_result_changed signal
"""
# Remember both new and old result for verification
new_result = make_job_result()
old_result = self.job_state.result
def changed_callback(old, new):
# Verify that new and old are correct and not swapped
self.assertIs(new, new_result)
self.assertIs(old, old_result)
# Set a flag that we verify below in case this never gets called
self.on_changed_fired = True
# Connect the signal handler
self.job_state.on_result_changed.connect(changed_callback)
# Assign the new result
self.job_state.result = new_result
# Ensure that the signal was fired and called our callback
self.assertTrue(self.on_changed_fired)
def test_setting_result_fires_signal_only_when_real_change_happens(self):
"""
verify that assigning state.result does NOT fire the signal when the
new result is the same
"""
# Assume we never get called and reset the flag
self.on_changed_fired = False
def changed_callback(old, new):
# Set the flag in case we do get called
self.on_changed_fired = True
# Connect the signal handler
self.job_state.on_result_changed.connect(changed_callback)
# Assign the same result again
self.job_state.result = self.job_state.result
# Ensure that the signal was NOT fired
self.assertFalse(self.on_changed_fired)
def test_setting_readiness_inhibitor_list(self):
inhibitor = JobReadinessInhibitor(InhibitionCause.UNDESIRED)
self.job_state.readiness_inhibitor_list = [inhibitor]
self.assertEqual(self.job_state.readiness_inhibitor_list, [inhibitor])
def test_can_start(self):
self.job_state.readiness_inhibitor_list = []
self.assertTrue(self.job_state.can_start())
self.job_state.readiness_inhibitor_list = [
UndesiredJobReadinessInhibitor]
self.assertFalse(self.job_state.can_start())
def test_readiness_description(self):
self.job_state.readiness_inhibitor_list = []
self.assertEqual(self.job_state.get_readiness_description(),
"job can be started")
self.job_state.readiness_inhibitor_list = [
UndesiredJobReadinessInhibitor]
self.assertTrue(
self.job_state.get_readiness_description().startswith(
"job cannot be started: "))
def test_setting_effective_category_id(self):
self.job_state.effective_category_id = 'value'
self.assertEqual(self.job_state.effective_category_id, 'value')
def test_setting_effective_cert_certification_status(self):
self.job_state.effective_certification_status = 'value'
self.assertEqual(self.job_state.effective_certification_status,
'value')
def test_setting_via_job__TypeError(self):
with self.assertRaises(TypeError):
self.job_state.via_job = 'value'
def test_setting_via_job(self):
parent = make_job("parent")
self.job_state.via_job = parent
self.assertIs(self.job_state.via_job, parent)
def test_resetting_via_job(self):
parent = make_job("parent")
self.job_state.via_job = parent
self.assertIs(self.job_state.via_job, parent)
self.job_state.via_job = None
self.assertIs(self.job_state.via_job, None)
plainbox-0.25/plainbox/impl/session/assistant.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
# Maciej Kisielewski
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""Session Assistant."""
import collections
import datetime
import fnmatch
import io
import itertools
import logging
import os
import shlex
import time
from plainbox.abc import IJobResult
from plainbox.abc import IJobRunnerUI
from plainbox.abc import ISessionStateTransport
from plainbox.impl.applogic import PlainBoxConfig
from plainbox.impl.decorators import raises
from plainbox.impl.developer import UnexpectedMethodCall
from plainbox.impl.developer import UsageExpectation
from plainbox.impl.result import JobResultBuilder
from plainbox.impl.runner import JobRunner
from plainbox.impl.runner import JobRunnerUIDelegate
from plainbox.impl.secure.qualifiers import select_jobs
from plainbox.impl.session import SessionMetaData
from plainbox.impl.session import SessionPeekHelper
from plainbox.impl.session import SessionResumeError
from plainbox.impl.session.jobs import InhibitionCause
from plainbox.impl.session.manager import SessionManager
from plainbox.impl.session.restart import IRestartStrategy
from plainbox.impl.session.restart import detect_restart_strategy
from plainbox.impl.session.storage import SessionStorageRepository
from plainbox.impl.transport import CertificationTransport
from plainbox.impl.transport import TransportError
from plainbox.public import get_providers
from plainbox.vendor import morris
_logger = logging.getLogger("plainbox.session.assistant")
__all__ = ('SessionAssistant', 'SA_RESTARTABLE')
# NOTE: There are two tuples related to resume candidates. The internal tuple
# uses the raw SessionStorage object. Since we don't wish to make that a public
# API yet it is not exposed in any of the public side of SessionAssistant APIs.
# The public variant uses the storage identifier (which is just a string) that
# applications are expected to handle as an opaque blob.
InternalResumeCandidate = collections.namedtuple(
'InternalResumeCandidate', ['storage', 'metadata'])
ResumeCandidate = collections.namedtuple(
'ResumeCandidate', ['id', 'metadata'])
SA_RESTARTABLE = "restartable"
class SessionAssistant:
"""
Assisting class to simplify common testing scenarios.
The assistant acts as a middle-man between the session manager and the
application. It handles all currently known stages of the testing
work-flow.
.. note::
The assistant class assumes single-threaded applications. Classic event
loop or threaded applications can be developed with a little bit of
care. The main problem is that plainbox doesn't support event loops
yet. Certain blocking operations (running jobs mostly) need to be done
from another thread. It is recommended to run all of plainbox in a
thread (either a python thread or a native thread embedding the python
runtime).
A typical application flow will look like this:
* The application calls :meth:`__init__()` to create a new session
assistant object with its own identifier as the only argument. This lets
multiple programs that use the plainbox APIs co-exist without clashes.
* (optionally) The application can call :meth:`use_alternate_repository()`
to change the location of the session storage repository. This is where
various files are created so if you don't want to use the default
location for any reason this is the only chance you have.
* The application selects a set of providers to load using
:meth:`select_providers()`. Typically applications will work with a
well-defined set of providers, either maintained by the same set of
developers or (sometimes) by reusing some third party test providers.
A small set of wild-cards is supported so that applications can load all
providers from a given name-space or even all available providers.
"""
# TODO: create a flowchart of possible states
def __init__(self, app_id, app_version=None, api_version='0.99',
api_flags=()):
"""
Initialize a new session assistant.
:param app_id:
Identifier of the testing application. The identifier should be
unique and constant throughout the support cycle of the
application.
:param app_version:
Version of the testing application.
:param api_version:
Expected API of SessionAssistant. Currently only "0.99" API is
defined.
:param api_flags:
Flags that describe optional API features that this application
wishes to support. Flags may change the usage expectation of
session assistant. Currently no flags are supported.
:raises ValueError:
When api_version is not recognized.
:raises ValueError:
When api_flags contains an unrecognized flag.
The application identifier is useful to implement session resume
functionality where the application can easily filter out sessions from
other programs.
"""
if api_version != '0.99':
raise ValueError("Unrecognized API version")
self._flags = set()
for flag in api_flags:
if flag == SA_RESTARTABLE:
self._flags.add(flag)
else:
raise ValueError("Unrecognized API flag: {!r}".format(flag))
self._app_id = app_id
self._app_version = app_version
self._api_version = api_version
self._api_flags = api_flags
self._repo = SessionStorageRepository()
self._config = PlainBoxConfig().get()
self._execution_ctrl_list = None # None is "default"
self._ctrl_setup_list = []
# List of providers that were selected. This is buffered until a
# session is created or resumed.
self._selected_providers = []
# All the key state for the active session. Technically just the
# manager matters, the context and metadata are just shortcuts to stuff
# available on the manager.
self._manager = None
self._context = None
self._metadata = None
self._runner = None
# Expect that select_providers() be called
UsageExpectation.of(self).allowed_calls = {
self.use_alternate_repository: (
"use an alternate storage repository"),
self.use_alternate_configuration: (
"use an alternate configuration system"),
self.use_alternate_execution_controllers: (
"use alternate execution controllers"),
self.select_providers: (
"select the providers to work with"),
self.get_canonical_certification_transport: (
"create a transport for the C3 system"),
self.get_canonical_hexr_transport: (
"create a transport for the HEXR system"),
}
# Restart support
self._restart_cmd_callback = None
self._restart_strategy = None # None implies auto-detection
if SA_RESTARTABLE in self._flags:
allowed_calls = UsageExpectation.of(self).allowed_calls
allowed_calls[self.configure_application_restart] = (
"configure automatic restart capability")
allowed_calls[self.use_alternate_restart_strategy] = (
"configure automatic restart capability")
@raises(UnexpectedMethodCall, LookupError)
def configure_application_restart(
self, cmd_callback: 'Callable[[str], List[str]]') -> None:
"""
Configure automatic restart capability.
:param cmd_callback:
A callable (function or lambda) that when called with a single
string argument, session_id, returns a list of strings describing
how to execute the tool in order to restart a particular session.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
:raises LookupError:
If no restart strategy was explicitly configured and no strategy
was found with the auto-detection process.
.. note:
This method is only available when the application has initialized
session assistant with the SA_RESTARTABLE API flag.
This method configures session assistant for automatic application
restart. When a job is expected to reboot or shut down the machine but
the intent is to somehow resume testing automatically after that event,
test designers can use the 'noreturn' and 'restartable' flags together
to indicate that the testing process should be automatically
resumed when the machine is turned on again.
The means of re-starting the testing process are unique to each
operating system environment. Plainbox knows about some restart
strategies internally. Applications can create additional strategies
using the :meth:`use_alternate_restart_strategy()` method.
"""
UsageExpectation.of(self).enforce()
if self._restart_strategy is None:
self._restart_strategy = detect_restart_strategy()
self._restart_cmd_callback = cmd_callback
# Prevent second call to this method and to the
# use_alternate_restart_strategy() method.
allowed_calls = UsageExpectation.of(self).allowed_calls
del allowed_calls[self.configure_application_restart]
if self.use_alternate_restart_strategy in allowed_calls:
del allowed_calls[self.use_alternate_restart_strategy]
@raises(UnexpectedMethodCall)
def use_alternate_restart_strategy(
self, strategy: IRestartStrategy
) -> None:
"""
Setup an alternate restart strategy object.
:param restart_strategy:
An object implementing the restart strategy interface. This object
is used to prepare the system for application restart.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
When this method is called all automatic environment auto-detection is
disabled and application restart is solely under the control of the
application.
The restart interface is very simple, it is comprised of a pair of
methods, :meth:`IRestartStrategy.prime_application_restart()` and
:meth:`IRestartStrategy.diffuse_application_restart()`. When the
application is in a state where it will soon terminate, plainbox will
call the former of the two methods to _prime_ the system so that
application will be re-started when the machine is started (or
rebooted). When the application successfully starts, the _diffuse_
method will undo what prime did so that the application restart is a
one-off action.
The primary use of this method is to let applications support
environments that are not automatically handled correctly by plainbox.
"""
UsageExpectation.of(self).enforce()
self._restart_strategy = strategy
del UsageExpectation.of(self).allowed_calls[
self.use_alternate_restart_strategy]
@raises(UnexpectedMethodCall)
def use_alternate_repository(self, pathname: str) -> None:
"""
Setup an alternate location for the session storage repository.
:param pathname:
Directory name (that is created on demand) where sessions are
supposed to be stored.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
This method can be used to use a non-standard repository location. This
is useful for testing, where it is good to separate test sessions from
any real data that the user may be using.
On some platforms, this can be also used to use a better default
location. If you have to call this in your application then please open
a bug. Plainbox should integrate with all the platforms correctly out
of the box.
"""
UsageExpectation.of(self).enforce()
self._repo = SessionStorageRepository(pathname)
_logger.debug("Using alternate repository: %r", pathname)
# NOTE: We expect applications to call this at most once.
del UsageExpectation.of(self).allowed_calls[
self.use_alternate_repository]
@raises(UnexpectedMethodCall)
def use_alternate_configuration(self, config):
"""
Use alternate configuration object.
:param config:
A configuration object that implements a superset of the plainbox
configuration.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
.. note::
Please check the source code to understand which values to pass
here. This method is currently experimental.
"""
UsageExpectation.of(self).enforce()
self._config = config
# NOTE: We expect applications to call this at most once.
del UsageExpectation.of(self).allowed_calls[
self.use_alternate_configuration]
@raises(UnexpectedMethodCall)
def use_alternate_execution_controllers(
self, ctrl_setup_list:
'Iterable[Tuple[IExecutionController, Tuple[Any], Dict[Any]]]'
) -> None:
"""
Use alternate execution controllers.
:param ctrl_setup_list:
An iterable with tuples, where each tuple represents a class of
controller to instantiate, together with *args and **kwargs to use
when calling its __init__.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
This method can be used to use any custom execution controllers to
execute jobs. Normally those should be offered by the
``SessionDeviceContext`` (which is a part of the implementation) and
they should be _good_ for any use but as we learned some applications
needed to offer alternate controllers.
.. note::
Please check the source code to understand which values to pass
here. This method is currently experimental.
"""
UsageExpectation.of(self).enforce()
self._ctrl_setup_list = ctrl_setup_list
# NOTE: We expect applications to call this at most once.
del UsageExpectation.of(self).allowed_calls[
self.use_alternate_execution_controllers]
@raises(ValueError, UnexpectedMethodCall)
def select_providers(
self, *patterns, additional_providers: 'Iterable[Provider1]'=()
) -> 'List[Provider1]':
"""
Load plainbox providers.
:param patterns:
The list of patterns (or just names) of providers to load.
Note that some special providers are always loaded, regardless of
whether the application wants that or not. Those providers are a part of
plainbox itself and are required for normal operation of the
framework.
The names may include the ``*`` character (asterisk) to indicate
"any". This includes both the namespace part and the provider name
part, e.g. ``2013.com.canonical.certification::*`` will load all of the
providers made by the Canonical certification team. To load
everything just pass ``*``.
:param additional_providers:
A list of providers that were loaded by other means (usually in
some app-custom way).
:returns:
The list of loaded providers (including plainbox providers)
:raises ValueError:
If any of the patterns didn't match any provider.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
Providers are loaded into a temporary area so that they are ready for a
session that you can either create from scratch or resume one you may
have created earlier. In either case, this is the first method you
should call.
A provider is used to supply tests (or in general, jobs) to execute.
Typically applications will have an associated, well-known provider
that they wish to load.
Providers can be broken and can, in fact, load in a partially or
entirely damaged state. Applications should inspect the problem list of
each loaded provider to see if they wish to abort.
.. todo::
Delegate correctness checking to a mediator class that also
implements some useful, default behavior for this.
"""
UsageExpectation.of(self).enforce()
# NOTE: providers are actually enumerated here, they are only loaded
# and validated on demand so this is not going to expose any
# problems from utterly broken providers we don't care about.
provider_list = get_providers()
# NOTE: copy the list as we don't want to mutate the object returned by
# get_providers(). This helps unit tests that actually return a fixed
# list here.
provider_list = provider_list[:] + list(additional_providers)
# Select all of the plainbox providers in a separate iteration. This
# way they get loaded unconditionally, regardless of what patterns are
# passed to the function (including not passing *any* patterns).
for provider in provider_list[:]:
if provider.namespace == "2013.com.canonical.plainbox":
provider_list.remove(provider)
self._selected_providers.append(provider)
self.provider_selected(provider, auto=True)
# Select all of the providers matched by any of the patterns.
for pat in patterns:
# Track useless patterns so that we can report them
useless = True
for provider in provider_list[:]:
if (provider.name == pat or
fnmatch.fnmatchcase(provider.name, pat)):
# Once a provider is selected, remove it from the list of
# candidates. This saves us from checking if we're adding
# something twice at each iteration.
provider_list.remove(provider)
self._selected_providers.append(provider)
self.provider_selected(provider, auto=False)
useless = False
if useless:
raise ValueError("nothing selected with: {}".format(pat))
# Set expectations for subsequent calls.
allowed_calls = UsageExpectation.of(self).allowed_calls
del allowed_calls[self.select_providers]
allowed_calls[self.start_new_session] = (
"create a new session from scratch")
allowed_calls[self.get_resumable_sessions] = (
"get resume candidates")
return self._selected_providers
@morris.signal
def provider_selected(self, provider, auto):
"""
Signal fired when a provider is loaded.
:param provider:
The provider object that was loaded.
:param auto:
Flag indicating if the provider was loaded automatically by the
framework or explicitly by the application.
This signal is fired after a provider is loaded and added to the
session. It can be safely ignored but applications may wish to use this
to show some UI element.
"""
_logger.debug("Provider selected: %r", provider)
@raises(UnexpectedMethodCall)
def start_new_session(self, title: str):
"""
Create a new testing session.
:param title:
Title of the session.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
This method can be used to create a new session. This will create some
filesystem entries related to the session.
The session title should be a human-readable string, as much as the
application can create one, that describes the goal of the session.
Some user interfaces will display this information.
Using this method always creates a _new_ session. If the application
intends to use session resuming functionality it should use other
methods to see if session should be resumed instead.
"""
UsageExpectation.of(self).enforce()
self._manager = SessionManager.create(self._repo)
self._context = self._manager.add_local_device_context()
for provider in self._selected_providers:
self._context.add_provider(provider)
self._metadata = self._context.state.metadata
self._metadata.app_id = self._app_id
self._metadata.title = title
self._metadata.flags = {'bootstrapping'}
self._manager.checkpoint()
self._command_io_delegate = JobRunnerUIDelegate(_SilentUI())
self._init_runner()
self.session_available(self._manager.storage.id)
_logger.debug("New session created: %s", title)
UsageExpectation.of(self).allowed_calls = {
self.get_test_plans: "to get the list of available test plans",
self.get_test_plan: "to get particular test plan object",
self.select_test_plan: "select the test plan to execute",
self.get_session_id: "to get the id of currently running session",
self.get_session_dir: ("to get the path where current session is"
" stored"),
}
@raises(KeyError, UnexpectedMethodCall)
def resume_session(self, session_id: str) -> 'SessionMetaData':
"""
Resume a session.
:param session_id:
The identifier of the session to resume.
:returns:
Resumed session metadata.
:raises KeyError:
If the session with a given session_id cannot be found.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
This method restores internal state of the plainbox runtime as it was
the last time session assistant did a checkpoint, i.e. session
assistant's clients committed any information (e.g. saved a job result,
ran bootstrapping, updated the app blob, etc.)
"""
UsageExpectation.of(self).enforce()
all_units = list(itertools.chain(
*[p.unit_list for p in self._selected_providers]))
self._manager = SessionManager.load_session(
all_units, self._resume_candidates[session_id][0])
self._context = self._manager.default_device_context
self._metadata = self._context.state.metadata
self._command_io_delegate = JobRunnerUIDelegate(_SilentUI())
self._init_runner()
if self._metadata.running_job_name:
job = self._context.get_unit(
self._metadata.running_job_name, 'job')
if 'autorestart' in job.get_flag_set():
result = JobResultBuilder(
outcome=(
IJobResult.OUTCOME_PASS
if 'noreturn' in job.get_flag_set() else
IJobResult.OUTCOME_FAIL),
return_code=0,
io_log_filename=self._runner.get_record_path_for_job(job),
).get_result()
self._context.state.update_job_result(job, result)
if self._restart_strategy is not None:
self._restart_strategy.diffuse_application_restart(self._app_id)
self.session_available(self._manager.storage.id)
_logger.debug("Session resumed: %s", session_id)
UsageExpectation.of(self).allowed_calls = {
self.select_test_plan: "to save test plan selection",
}
return self._resume_candidates[session_id].metadata
@raises(UnexpectedMethodCall)
    def get_resumable_sessions(self) -> 'Iterator[ResumeCandidate]':
"""
Check repository for sessions that could be resumed.
:returns:
A generator that yields namedtuples with (id, metadata) of
subsequent resumable sessions, starting from the youngest one.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
This method iterates through incomplete sessions saved in the storage
repository and looks for the ones that were created using the same
app_id as the one currently used.
Applications can use sessions' metadata (and the app_blob contained
in them) to decide which session is the best one to propose resuming.
"""
UsageExpectation.of(self).enforce()
# let's keep resume_candidates, so we don't have to load data again
self._resume_candidates = {}
for storage in self._repo.get_storage_list():
data = storage.load_checkpoint()
if len(data) == 0:
continue
try:
metadata = SessionPeekHelper().peek(data)
except SessionResumeError:
                _logger.info("Exception raised when trying to resume"
                             " session: %s", str(storage.id))
else:
if (metadata.app_id == self._app_id and
SessionMetaData.FLAG_INCOMPLETE in metadata.flags):
self._resume_candidates[storage.id] = (
InternalResumeCandidate(storage, metadata))
UsageExpectation.of(self).allowed_calls[
self.resume_session] = "resume session"
yield ResumeCandidate(storage.id, metadata)
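The filtering above keeps only sessions whose metadata carries the caller's app_id and the incomplete flag. A self-contained sketch of just that selection step, using hypothetical stub namedtuples in place of the real plainbox types:

```python
import collections

# Hypothetical stand-ins for SessionMetaData and ResumeCandidate.
StubMetadata = collections.namedtuple('StubMetadata', ['app_id', 'flags'])
Candidate = collections.namedtuple('Candidate', ['id', 'metadata'])

FLAG_INCOMPLETE = 'incomplete'


def find_resumable(checkpoints, app_id):
    """Yield Candidate tuples for incomplete sessions created by app_id."""
    for storage_id, metadata in checkpoints.items():
        if (metadata.app_id == app_id
                and FLAG_INCOMPLETE in metadata.flags):
            yield Candidate(storage_id, metadata)


checkpoints = {
    'pbox-1': StubMetadata('my-app', {'incomplete'}),
    'pbox-2': StubMetadata('other-app', {'incomplete'}),  # different app
    'pbox-3': StubMetadata('my-app', set()),  # already finalized
}
candidates = list(find_resumable(checkpoints, 'my-app'))
# Only 'pbox-1' satisfies both conditions.
```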
def update_app_blob(self, app_blob: bytes) -> None:
"""
Update custom app data and save the session in the session storage.
:param app_blob:
Bytes sequence containing JSON-ised app_blob object.
"""
self._context.state.metadata.app_blob = app_blob
self._manager.checkpoint()
@morris.signal
def session_available(self, session_id):
"""
Signal sent when a session is available.
:param session_id:
            Identifier of the session. This identifier is randomly generated
            and allocated by plainbox; you cannot influence it.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
The identifier is persistent. You can use it to resume the session
later. Certain tools will allow the user to operate on a session as
long as the identifier is known. You can use this signal to obtain this
identifier.
.. note::
            The identifier is unique within the storage repository. If you
            made use of :meth:`use_alternate_repository()` then please keep
            this in mind.
"""
_logger.debug("Session is now available: %s", session_id)
@raises(UnexpectedMethodCall)
def get_session_id(self):
"""
Get the identifier of the session.
:returns:
The string that identifies the session in the repository being
used. The identifier is a short, random directory name (without the
full path), relative to the session storage repository.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
Applications can use this method and some side-channel to remember the
session that was executed most recently. This can be useful in resuming
that session without the need to search and analyze all of the sessions
in the repository.
"""
UsageExpectation.of(self).enforce()
return self._manager.storage.id
@raises(UnexpectedMethodCall)
def get_session_dir(self):
"""
Get the pathname of the session directory.
:returns:
The string that represents the absolute pathname of the session
directory. All of the files and directories inside that directory
constitute session state.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
.. note::
The layout of the session is documented but is considered volatile
at this stage. The only thing that can be done reliably is a
complete archive (backup) of the directory. This is guaranteed to
work.
"""
UsageExpectation.of(self).enforce()
return self._manager.storage.location
@raises(UnexpectedMethodCall)
def get_test_plans(self) -> 'List[str]':
"""
Get a set of test plan identifiers.
:returns:
A list of test plan identifiers.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
        This method returns the identifiers of all test plan units currently
        loaded into the session. Applications can inspect them with
        :meth:`get_test_plan()` and pick one with :meth:`select_test_plan()`.
"""
UsageExpectation.of(self).enforce()
return [unit.id for unit in self._context.unit_list
if unit.Meta.name == 'test plan']
@raises(KeyError, UnexpectedMethodCall)
def select_test_plan(self, test_plan_id):
"""
Select a test plan for execution.
:param test_plan_id:
The identifier of the test plan to execute.
:raises KeyError:
If the test plan with that identifier cannot be found.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
        Test plans describe all of the essential details needed to execute a
        set of tests. Like other plainbox components, each test plan has a
        unique identifier.
Upon making the selection the application can inspect the execution
plan which is expressed as a list of jobs to execute.
"""
UsageExpectation.of(self).enforce()
test_plan = self._context.get_unit(test_plan_id, 'test plan')
self._manager.test_plans = (test_plan, )
self._manager.checkpoint()
UsageExpectation.of(self).allowed_calls = {
self.bootstrap: "to run the bootstrap process"
}
@raises(UnexpectedMethodCall)
def bootstrap(self):
"""
Perform session bootstrap process to discover all content.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
        The session assistant offers two mechanisms for generating additional
        content (primarily jobs). Understanding them is important for
applications that wish to display a list of jobs before the test
operator finally commits to running a subset of them.
During the bootstrap phase resource jobs that are associated with job
templates may generate new jobs according to the information specified
in the template. In addition, local jobs can generate arbitrary
        (unrestricted) units. Both of those mechanisms are subject to the
validation system (invalid units are discarded).
        When this method returns (which can take a while) the session is
        ready for running any jobs.
        .. warning::
            This method will not return until the bootstrap process is
            finished. This can take any amount of time (easily over one
            minute).
"""
UsageExpectation.of(self).enforce()
# NOTE: there is next-to-none UI here as bootstrap jobs are limited to
# just resource and local jobs (including their dependencies) so there
# should be very little UI required.
desired_job_list = select_jobs(
self._context.state.job_list,
[plan.get_bootstrap_qualifier() for plan in (
self._manager.test_plans)])
self._context.state.update_desired_job_list(desired_job_list)
for job in self._context.state.run_list:
UsageExpectation.of(self).allowed_calls[self.run_job] = (
"to run bootstrapping job")
rb = self.run_job(job.id, 'silent', False)
self.use_job_result(job.id, rb.get_result())
# Perform initial selection -- we want to run everything that is
# described by the test plan that was selected earlier.
desired_job_list = select_jobs(
self._context.state.job_list,
[plan.get_qualifier() for plan in self._manager.test_plans])
self._context.state.update_desired_job_list(desired_job_list)
# Set subsequent usage expectations i.e. all of the runtime parts are
# available now.
UsageExpectation.of(self).allowed_calls = (
self._get_allowed_calls_in_normal_state())
self._metadata.flags = {'incomplete'}
self._manager.checkpoint()
@raises(KeyError, UnexpectedMethodCall)
def use_alternate_selection(self, selection: 'Iterable[str]'):
"""
Setup an alternate set of jobs to run.
:param selection:
A sequence of identifiers of jobs that the user would like to run.
:raises KeyError:
If the selection refers to unknown jobs.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
        This method can be called at any time to change the *selection* of
        jobs that the user wishes to run. Any job present in the session can
        be used.
By default, after selecting a test plan, the job selection includes all
of the jobs described by that test plan.
.. note::
Calling this method will alter the result of
:meth:`get_static_todo_list()` and :meth:`get_dynamic_todo_list()`.
"""
UsageExpectation.of(self).enforce()
desired_job_list = [
self._context.get_unit(job_id, 'job') for job_id in
self.get_static_todo_list() if job_id in selection]
self._context.state.update_desired_job_list(desired_job_list)
@raises(UnexpectedMethodCall)
def filter_jobs_by_categories(self, categories: 'Iterable[str]'):
"""
Filter out jobs with categories that don't match given ones.
:param categories:
A sequence of category identifiers of jobs that should stay in the
todo list.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
This method can be called at any time to unselect jobs that belong to
a category not present in `categories`.
.. note::
Calling this method will alter the result of
:meth:`get_static_todo_list()` and :meth:`get_dynamic_todo_list()`.
"""
UsageExpectation.of(self).enforce()
selection = [job.id for job in [
self.get_job(job_id) for job_id in self.get_static_todo_list()] if
job.category_id in categories]
self.use_alternate_selection(selection)
@raises(UnexpectedMethodCall)
def remove_all_filters(self):
"""
Bring back original job list.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
        This method can be called to remove all of the filters applied to the
        current job selection.
"""
UsageExpectation.of(self).enforce()
desired_job_list = select_jobs(
self._context.state.job_list,
[plan.get_qualifier() for plan in self._manager.test_plans])
self._context.state.update_desired_job_list(desired_job_list)
@raises(KeyError, UnexpectedMethodCall)
def get_job_state(self, job_id: str) -> 'JobState':
"""
Get the mutable state of the job with the given identifier.
:returns:
The JobState object that corresponds to the given identifier.
:raises KeyError:
If no such job exists
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
.. note::
The returned object contains parts that may not be covered by the
public api stability promise. Refer to the documentation of the
JobState class for details.
"""
UsageExpectation.of(self).enforce()
# XXX: job_state_map is a bit low level, can we avoid that?
return self._context.state.job_state_map[job_id]
@raises(KeyError, UnexpectedMethodCall)
def get_job(self, job_id):
"""
Get the definition of the job with the given identifier.
:returns:
The JobDefinition object that corresponds to the given identifier.
:raises KeyError:
If no such job exists
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
.. note::
The returned object contains parts that may not be covered by the
public api stability promise. Refer to the documentation of the
JobDefinition class for details.
"""
UsageExpectation.of(self).enforce()
# we may want to decide early about the result of the job, without
# running it (e.g. when skipping the job)
allowed_calls = UsageExpectation.of(self).allowed_calls
allowed_calls[self.use_job_result] = "remember the result of this job"
return self._context.get_unit(job_id, 'job')
@raises(KeyError, UnexpectedMethodCall)
def get_test_plan(self, test_plan_id: str) -> 'TestPlanUnit':
"""
Get the test plan with the given identifier.
:returns:
The TestPlanUnit object that corresponds to the given identifier.
:raises KeyError:
If no such test plan exists
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
.. note::
The returned object contains parts that may not be covered by the
public api stability promise. Refer to the documentation of the
TestPlanUnit class for details.
"""
UsageExpectation.of(self).enforce()
return self._context.get_unit(test_plan_id, 'test plan')
@raises(KeyError, UnexpectedMethodCall)
def get_category(self, category_id: str) -> 'CategoryUnit':
"""
Get the category with the given identifier.
:returns:
The Category Unit object that corresponds to the given identifier.
:raises KeyError:
If no such category exists.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
.. note::
The returned object contains parts that may not be covered by the
public api stability promise. Refer to the documentation of the
CategoryUnit class for details.
"""
UsageExpectation.of(self).enforce()
return self._context.get_unit(category_id, 'category')
@raises(UnexpectedMethodCall)
def get_participating_categories(self) -> 'List[str]':
"""
Get a set of category identifiers associated with current test plan.
:returns:
A list of category identifiers.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
        This method computes the set of category identifiers that contains
        each category for which at least one test might be executed in this
        session. This set does not include bootstrap jobs, as they must be
        executed before the user can be told what jobs are available.
"""
UsageExpectation.of(self).enforce()
test_plan = self._manager.test_plans[0]
potential_job_list = select_jobs(
self._context.state.job_list, [test_plan.get_qualifier()])
return list(set(
test_plan.get_effective_category_map(potential_job_list).values()))
@raises(UnexpectedMethodCall)
def get_static_todo_list(self) -> 'Iterable[str]':
"""
Get the (static) list of jobs to run.
:returns:
A list of identifiers of jobs to run.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
This method can be used to obtain the full sequence of jobs that are
described by the test plan. The result is only influenced by
:meth:`use_alternate_selection()`. It never grows or shrinks during
execution of subsequent jobs.
        Please note that the returned identifiers may refer to jobs that were
        selected automatically via some mechanism, not necessarily jobs
        explicitly requested by the user. Examples of such mechanisms include
        job dependencies, resource dependencies and mandatory jobs.
"""
UsageExpectation.of(self).enforce()
return [job.id for job in self._context.state.run_list]
@raises(UnexpectedMethodCall)
def get_dynamic_todo_list(self) -> 'List[str]':
"""
Get the (dynamic) list of jobs to run.
:returns:
A list of identifiers of jobs to run.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
        This method can be used to obtain the sequence of jobs that are yet to
be executed. The result is affected by
:meth:`use_alternate_selection()` as well as :meth:`run_job()`.
Jobs that cannot be started (due to failed dependencies or unsatisfied
requirements) are also returned here. Any attempts to run them via
:meth:`run_job()` will produce a correct result object with appropriate
information.
        Please note that the returned identifiers may refer to jobs that were
        selected automatically via some mechanism, not necessarily jobs
        explicitly requested by the user. Examples of such mechanisms include
        job dependencies, resource dependencies and mandatory jobs.
.. note::
It is correct and safe if applications only execute this method
once and iterate over the result from start to finish, calling
            :meth:`run_job()` and :meth:`use_job_result()`. All of the
            dynamics of job generation are hidden and handled by the
            :meth:`bootstrap()` method.
"""
UsageExpectation.of(self).enforce()
# XXX: job_state_map is a bit low level, can we avoid that?
jsm = self._context.state.job_state_map
return [
job.id for job in self._context.state.run_list
if jsm[job.id].result.outcome is None
]
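The list comprehension above reduces to a simple rule: a job stays on the dynamic todo list while its recorded outcome is still ``None``. The same rule in isolation (job identifiers and outcomes here are illustrative stand-ins):

```python
def dynamic_todo(run_list, outcome_map):
    # A job is still pending while no outcome has been recorded for it.
    return [job_id for job_id in run_list
            if outcome_map.get(job_id) is None]


todo = dynamic_todo(
    ['cpu/topology', 'memory/info', 'disk/smart'],
    {'cpu/topology': 'pass'})
# → ['memory/info', 'disk/smart']
```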
@raises(ValueError, TypeError, UnexpectedMethodCall)
def run_job(
self, job_id: str, ui: 'Union[str, IJobRunnerUI]',
native: bool
) -> 'JobResultBuilder':
"""
Run a job with the specific identifier.
:param job_id:
Identifier of the job to run.
:param ui:
The user interface delegate to use. As a special case it can be a
well-known name of a stock user interface. Currently only the
'silent' user interface is available.
        :param native:
            Flag indicating that the job will be run natively by the
            application. The normal runner won't be used to execute the job.
:raises KeyError:
If no such job exists
:raises ValueError:
If the well known UI name is not recognized.
:raises TypeError:
If the UI is not a IJobRunnerUI subclass.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
:returns:
JobResultBuilder instance.
This method can be used to run any job available in the session (not
only those jobs that are selected, or on the todo list). The result is
a ResultBuilder object which can be modified if necessary. The result
builder object can be also converted to a result object and fed back to
the session via the :meth:`use_job_result()` method.
It is expected that the caller will follow this protocol for each
executed job. This API complexity is required to let users interact
with interactive jobs and let the application do anything it needs to
to accomplish that.
"""
UsageExpectation.of(self).enforce()
if isinstance(ui, IJobRunnerUI):
pass
elif isinstance(ui, str):
if ui == 'silent':
ui = _SilentUI()
else:
raise ValueError("unknown user interface: {!r}".format(ui))
else:
raise TypeError("incorrect UI type")
# XXX: job_state_map is a bit low level, can we avoid that?
start_time = time.time()
job_state = self._context.state.job_state_map[job_id]
job = job_state.job
ui.considering_job(job, job_state)
if job_state.can_start():
ui.about_to_start_running(job, job_state)
self._context.state.metadata.running_job_name = job.id
self._manager.checkpoint()
autorestart = (self._restart_strategy is not None and
'autorestart' in job.get_flag_set())
if autorestart:
restart_cmd = ' '.join(
shlex.quote(cmd_part)
for cmd_part in self._restart_cmd_callback(
self._manager.storage.id))
self._restart_strategy.prime_application_restart(
self._app_id, restart_cmd)
ui.started_running(job, job_state)
if not native:
builder = self._runner.run_job(
job, job_state, self._config, ui
).get_builder()
else:
builder = JobResultBuilder(
outcome=IJobResult.OUTCOME_UNDECIDED,
)
builder.execution_duration = time.time() - start_time
if autorestart:
self._restart_strategy.diffuse_application_restart(
self._app_id)
self._context.state.metadata.running_job_name = None
self._manager.checkpoint()
ui.finished_running(job, job_state, builder.get_result())
else:
# Set the outcome of jobs that cannot start to
# OUTCOME_NOT_SUPPORTED _except_ if any of the inhibitors point to
# a job with an OUTCOME_SKIP outcome, if that is the case mirror
# that outcome. This makes 'skip' stronger than 'not-supported'
outcome = IJobResult.OUTCOME_NOT_SUPPORTED
for inhibitor in job_state.readiness_inhibitor_list:
if inhibitor.cause != InhibitionCause.FAILED_DEP:
continue
related_job_state = self._context.state.job_state_map[
inhibitor.related_job.id]
if related_job_state.result.outcome == IJobResult.OUTCOME_SKIP:
outcome = IJobResult.OUTCOME_SKIP
builder = JobResultBuilder(
outcome=outcome,
comments=job_state.get_readiness_description())
ui.job_cannot_start(job, job_state, builder.get_result())
ui.finished(job, job_state, builder.get_result())
# Set up expectations so that run_job() and use_job_result() must be
# called in pairs and applications cannot just forget and call
# run_job() all the time.
allowed_calls = UsageExpectation.of(self).allowed_calls
del allowed_calls[self.run_job]
allowed_calls[self.use_job_result] = "remember the result of last job"
return builder
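The autorestart priming above joins the restart command with shlex.quote so that arguments containing spaces survive the shell round-trip. The quoting step in isolation (the command parts are made up for illustration):

```python
import shlex


def build_restart_cmd(cmd_parts):
    # Quote each argument individually, then join into one shell command.
    return ' '.join(shlex.quote(part) for part in cmd_parts)


cmd = build_restart_cmd(['checkbox-gui', '--resume', 'session id with space'])
# → checkbox-gui --resume 'session id with space'
```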
@raises(UnexpectedMethodCall)
def use_job_result(self, job_id: str, result: 'IJobResult') -> None:
"""
Feed job result back to the session.
:param job_id:
Identifier of the job the result is for
:param result:
The result object that contains all the information about running
            that job. You can obtain one from a result builder by calling the
            ``builder.get_result()`` method.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
        This method is meant to complement :meth:`run_job()`. They are split
        so that the application can freely modify the result object in a
        single *atomic* operation.
Note that running a single job and presenting the result back to the
session may unlock or lock other jobs. For example, running a resource
job may allow or disallow another job to run (via requirement
programs). Similar system exists for job dependencies. A job that
depends on another job will not be able to run if any of its
dependencies did not complete successfully.
"""
UsageExpectation.of(self).enforce()
job = self._context.get_unit(job_id, 'job')
self._context.state.update_job_result(job, result)
# Set up expectations so that run_job() and use_job_result() must be
# called in pairs and applications cannot just forget and call
# run_job() all the time.
allowed_calls = UsageExpectation.of(self).allowed_calls
del allowed_calls[self.use_job_result]
allowed_calls[self.run_job] = "run another job"
def get_summary(self) -> 'defaultdict':
"""
Get a grand total statistic for the jobs that ran.
        :returns:
            A defaultdict mapping each outcome kind to the number of jobs
            that ended with that outcome, e.g. {IJobResult.OUTCOME_PASS: 6,
            (...)}.
"""
stats = collections.defaultdict(int)
for job_state in self._context.state.job_state_map.values():
if not job_state.result.outcome:
                # job not considered for running - let's not pollute summary
# with data from those jobs
continue
stats[job_state.result.outcome] += 1
return stats
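The tally in get_summary() is a plain histogram over job outcomes, skipping jobs that never ran. The same pattern in isolation (the outcome strings stand in for the IJobResult constants):

```python
import collections


def summarize(outcomes):
    """Count each recorded outcome; None means the job never ran and is
    deliberately left out of the summary."""
    stats = collections.defaultdict(int)
    for outcome in outcomes:
        if not outcome:
            continue
        stats[outcome] += 1
    return stats


summary = summarize(['pass', 'fail', 'pass', None, 'skip', 'pass'])
# → {'pass': 3, 'fail': 1, 'skip': 1}
```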
@raises(UnexpectedMethodCall)
def finalize_session(self) -> None:
"""
Finish the execution of the current session.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
        Mark the session as complete, which prohibits running (or rerunning)
        any job. finalize_session will be ignored if the session has already
        been finalized; this frees applications from having to track that
        state themselves.
"""
UsageExpectation.of(self).enforce()
if SessionMetaData.FLAG_INCOMPLETE not in self._metadata.flags:
_logger.info("finalize_session called for already finalized"
" session: %s", self._manager.storage.id)
# leave the same usage expectations
return
if SessionMetaData.FLAG_SUBMITTED not in self._metadata.flags:
_logger.warning("Finalizing session that hasn't been submitted "
"anywhere: %s", self._manager.storage.id)
self._metadata.flags.remove(SessionMetaData.FLAG_INCOMPLETE)
self._manager.checkpoint()
UsageExpectation.of(self).allowed_calls = {
self.finalize_session: "to finalize session",
self.export_to_transport: "to export the results and send them",
self.export_to_file: "to export the results to a file",
self.export_to_stream: "to export the results to a stream",
self.get_resumable_sessions: "to get resume candidates",
self.start_new_session: "to create a new session",
self.get_canonical_certification_transport: (
"create a transport for the C3 system"),
self.get_canonical_hexr_transport: (
"create a transport for the HEXR system"),
}
@raises(KeyError, TransportError, UnexpectedMethodCall)
def export_to_transport(
self, exporter_id: str, transport: ISessionStateTransport
) -> dict:
"""
Export the session using given exporter ID and transport object.
:param exporter_id:
The identifier of the exporter unit to use. This must have been
loaded into the session from an existing provider. Many users will
            want to load the ``2013.com.canonical.plainbox:exporter`` provider
            (via :meth:`load_providers()`).
:param transport:
A pre-created transport object such as the `CertificationTransport`
that is useful for sending data to the Canonical Certification
Website and HEXR. This can also be any object conforming to the
appropriate API.
        :returns:
            The value returned by the transport's ``send()`` method, typically
            a dictionary describing the server response.
:raises KeyError:
When the exporter unit cannot be found.
        :raises TransportError:
            If the transport fails in any way.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
"""
UsageExpectation.of(self).enforce()
exporter = self._manager.create_exporter(exporter_id)
exported_stream = io.BytesIO()
exporter.dump_from_session_manager(self._manager, exported_stream)
exported_stream.seek(0)
return transport.send(exported_stream)
@raises(KeyError, OSError)
def export_to_file(
self, exporter_id: str, option_list: 'list[str]', dir_path: str
) -> str:
"""
Export the session to file using given exporter ID.
:param exporter_id:
The identifier of the exporter unit to use. This must have been
loaded into the session from an existing provider. Many users will
            want to load the ``2013.com.canonical.plainbox:exporter`` provider
            (via :meth:`load_providers()`).
        :param option_list:
            List of options supported by the exporter that is being created.
:param dir_path:
Path to the directory where session file should be written to.
Note that the file name is automatically generated, based on
creation time and type of exporter.
:returns:
Path to the written file.
:raises KeyError:
When the exporter unit cannot be found.
:raises OSError:
When there is a problem when writing the output.
"""
UsageExpectation.of(self).enforce()
exporter = self._manager.create_exporter(exporter_id, option_list)
timestamp = datetime.datetime.utcnow().isoformat()
path = os.path.join(dir_path, ''.join(
['submission_', timestamp, '.', exporter.unit.file_extension]))
with open(path, 'wb') as stream:
exporter.dump_from_session_manager(self._manager, stream)
return path
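The file name constructed above combines a fixed prefix, a UTC ISO-8601 timestamp and the exporter's file extension, which keeps submissions sortable by creation time. The naming step alone (the 'xlsx' extension is just an example):

```python
import datetime
import os


def submission_path(dir_path, file_extension):
    # ISO-8601 UTC timestamp, e.g. submission_2015-12-08T12:00:00.123456.xlsx
    timestamp = datetime.datetime.utcnow().isoformat()
    return os.path.join(
        dir_path, ''.join(['submission_', timestamp, '.', file_extension]))


path = submission_path('/tmp', 'xlsx')
```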
@raises(KeyError, OSError)
def export_to_stream(
self, exporter_id: str, option_list: 'list[str]', stream
) -> None:
"""
        Export the session to a stream using the given exporter ID.
:param exporter_id:
The identifier of the exporter unit to use. This must have been
loaded into the session from an existing provider. Many users will
            want to load the ``2013.com.canonical.plainbox:exporter`` provider
            (via :meth:`load_providers()`).
        :param option_list:
            List of options supported by the exporter that is being created.
:param stream:
Stream to write the report to.
        :returns:
            None.
:raises KeyError:
When the exporter unit cannot be found.
:raises OSError:
When there is a problem when writing the output.
"""
UsageExpectation.of(self).enforce()
exporter = self._manager.create_exporter(exporter_id, option_list)
exporter.dump_from_session_manager(self._manager, stream)
if SessionMetaData.FLAG_SUBMITTED not in self._metadata.flags:
self._metadata.flags.add(SessionMetaData.FLAG_SUBMITTED)
self._manager.checkpoint()
@raises(ValueError, UnexpectedMethodCall)
def get_canonical_certification_transport(
self, secure_id: str, *, staging: bool=False
    ) -> "ISessionStateTransport":
"""
Get a transport for the Canonical Certification website.
:param secure_id:
The _secure identifier_ of the machine. This is an identifier
issued by Canonical. It is only applicable to machines that are
tested by the Hardware Certification team.
:param staging:
Flag indicating if the staging server should be used.
:returns:
A ISessionStateTransport instance with appropriate configuration.
In practice the transport object should be passed to
:meth:`export_to_transport()` and not handled in any other way.
:raises ValueError:
if the ``secure_id`` is malformed.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
This transport, same as the hexr transport, expects the data created by
the ``"hexr"`` exporter.
"""
UsageExpectation.of(self).enforce()
if staging:
url = ('https://certification.staging.canonical.com/'
'submissions/submit/')
else:
url = 'https://certification.canonical.com/submissions/submit/'
options = "secure_id={}".format(secure_id)
return CertificationTransport(url, options)
@raises(UnexpectedMethodCall)
def get_canonical_hexr_transport(
self, *, staging: bool=False
    ) -> "ISessionStateTransport":
"""
Get a transport for the Canonical HEXR website.
:param staging:
Flag indicating if the staging server should be used.
:returns:
A ISessionStateTransport instance with appropriate configuration.
In practice the transport object should be passed to
:meth:`export_to_transport()` and not handled in any other way.
:raises UnexpectedMethodCall:
If the call is made at an unexpected time. Do not catch this error.
It is a bug in your program. The error message will indicate what
is the likely cause.
This transport, same as the certification transport, expects the data
created by the ``"hexr"`` exporter.
"""
UsageExpectation.of(self).enforce()
if staging:
url = 'https://hexr.staging.canonical.com/checkbox/submit/'
else:
url = 'https://hexr.canonical.com/checkbox/submit/'
options = "submit_to_hexr=1"
return CertificationTransport(url, options)
def _get_allowed_calls_in_normal_state(self) -> dict:
return {
self.get_job_state: "to access the state of any job",
self.get_job: "to access the definition of any job",
self.get_test_plan: "to access the definition of any test plan",
            self.get_category: "to access the definition of any category",
self.get_participating_categories: (
"to access participating categories"),
self.filter_jobs_by_categories: (
"to select the jobs that match particular category"),
self.remove_all_filters: "to remove all filters",
self.get_static_todo_list: "to see what is meant to be executed",
self.get_dynamic_todo_list: "to see what is yet to be executed",
self.run_job: "to run a given job",
self.use_alternate_selection: "to change the selection",
self.use_job_result: "to feed job result back to the session",
# XXX: should this be available right off the bat or should we wait
# until all of the mandatory jobs have been executed.
self.export_to_transport: "to export the results and send them",
self.export_to_file: "to export the results to a file",
self.export_to_stream: "to export the results to a stream",
self.finalize_session: "to mark the session as complete",
self.get_session_id: "to get the id of currently running session",
            self.get_session_dir: ("to get the path where current session is"
                                   " stored"),
}
def _init_runner(self):
self._execution_ctrl_list = []
for ctrl_cls, args, kwargs in self._ctrl_setup_list:
self._execution_ctrl_list.append(
ctrl_cls(self._context.provider_list, *args, **kwargs))
self._runner = JobRunner(
self._manager.storage.location,
self._context.provider_list,
jobs_io_log_dir=os.path.join(
self._manager.storage.location, 'io-logs'),
command_io_delegate=self._command_io_delegate,
execution_ctrl_list=self._execution_ctrl_list or None)
return
class _SilentUI(IJobRunnerUI):
def considering_job(self, job, job_state):
pass
def about_to_start_running(self, job, job_state):
pass
def wait_for_interaction_prompt(self, job):
pass
def started_running(self, job, job_state):
pass
def about_to_execute_program(self, args, kwargs):
pass
def finished_executing_program(self, returncode):
pass
def got_program_output(self, stream_name, line):
pass
def finished_running(self, job, job_state, job_result):
pass
def notify_about_description(self, job):
pass
def notify_about_purpose(self, job):
pass
def notify_about_steps(self, job):
pass
def notify_about_verification(self, job):
pass
def job_cannot_start(self, job, job_state, job_result):
pass
def finished(self, job, job_state, job_result):
pass
def pick_action_cmd(self, action_list, prompt=None):
pass
def noreturn_job(self):
pass
# This file is part of Checkbox.
#
# Copyright 2012, 2013, 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.session.test_resume`
========================================
Test definitions for :mod:`plainbox.impl.session.resume` module
"""
from unittest import TestCase
import base64
import binascii
import copy
import gzip
import json
from plainbox.abc import IJobQualifier
from plainbox.abc import IJobResult
from plainbox.impl.job import JobDefinition
from plainbox.impl.resource import Resource
from plainbox.impl.result import DiskJobResult
from plainbox.impl.result import IOLogRecord
from plainbox.impl.result import MemoryJobResult
from plainbox.impl.session.resume import CorruptedSessionError
from plainbox.impl.session.resume import IncompatibleJobError
from plainbox.impl.session.resume import IncompatibleSessionError
from plainbox.impl.session.resume import ResumeDiscardQualifier
from plainbox.impl.session.resume import SessionPeekHelper
from plainbox.impl.session.resume import SessionPeekHelper1
from plainbox.impl.session.resume import SessionPeekHelper2
from plainbox.impl.session.resume import SessionPeekHelper3
from plainbox.impl.session.resume import SessionPeekHelper4
from plainbox.impl.session.resume import SessionPeekHelper5
from plainbox.impl.session.resume import SessionPeekHelper6
from plainbox.impl.session.resume import SessionResumeError
from plainbox.impl.session.resume import SessionResumeHelper
from plainbox.impl.session.resume import SessionResumeHelper1
from plainbox.impl.session.resume import SessionResumeHelper2
from plainbox.impl.session.resume import SessionResumeHelper3
from plainbox.impl.session.resume import SessionResumeHelper4
from plainbox.impl.session.resume import SessionResumeHelper5
from plainbox.impl.session.resume import SessionResumeHelper6
from plainbox.impl.session.state import SessionState
from plainbox.impl.testing_utils import make_job
from plainbox.testing_utils.testcases import TestCaseWithParameters
from plainbox.vendor import mock
class ResumeDiscardQualifierTests(TestCase):
"""
Tests for the ResumeDiscardQualifier class
"""
def setUp(self):
# The initializer accepts a collection of job IDs to retain
self.obj = ResumeDiscardQualifier({'foo', 'bar', 'froz'})
def test_init(self):
self.assertEqual(
self.obj._retain_id_set, frozenset(['foo', 'bar', 'froz']))
def test_get_simple_match(self):
# Direct hits return the IGNORE vote as those jobs are not to be
# removed. Everything else should return VOTE_INCLUDE (include for
# removal)
self.assertEqual(
self.obj.get_vote(JobDefinition({'id': 'foo'})),
IJobQualifier.VOTE_IGNORE)
self.assertEqual(
self.obj.get_vote(JobDefinition({'id': 'bar'})),
IJobQualifier.VOTE_IGNORE)
self.assertEqual(
self.obj.get_vote(JobDefinition({'id': 'froz'})),
IJobQualifier.VOTE_IGNORE)
# Jobs that are in the retain set are NOT designated
self.assertEqual(
self.obj.designates(JobDefinition({'id': 'bar'})), False)
self.assertEqual(
self.obj.designates(JobDefinition({'id': 'foo'})), False)
# Jobs that are not on the retain list are INCLUDED and marked for
# removal. This includes jobs whose ids are substrings of ids in the
# retain set; ids are matched exactly, not by pattern.
self.assertEqual(
self.obj.get_vote(JobDefinition({'id': 'foobar'})),
IJobQualifier.VOTE_INCLUDE)
self.assertEqual(
self.obj.get_vote(JobDefinition({'id': 'fo'})),
IJobQualifier.VOTE_INCLUDE)
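The tests above pin down the voting behavior without showing its shape. A minimal sketch of the retain-set voting pattern being exercised (this is an illustration only, not the plainbox implementation; the vote constants and ``RetainSetQualifier`` name are assumptions made here for clarity):

```python
# Sketch of the vote-based selection exercised above: jobs in the
# retain set get the IGNORE vote (they survive the resume), everything
# else gets INCLUDE (marked for removal). Ids match exactly, never by
# pattern or prefix. Vote values are illustrative, not plainbox's.
VOTE_IGNORE = 1
VOTE_INCLUDE = 0

class RetainSetQualifier:

    def __init__(self, retain_id_set):
        self._retain_id_set = frozenset(retain_id_set)

    def get_vote(self, job_id):
        # Exact id match: 'foobar' is not retained just because 'foo' is
        if job_id in self._retain_id_set:
            return VOTE_IGNORE
        return VOTE_INCLUDE

    def designates(self, job_id):
        # Retained jobs are NOT designated for removal
        return self.get_vote(job_id) == VOTE_INCLUDE

q = RetainSetQualifier({'foo', 'bar'})
assert q.get_vote('foo') == VOTE_IGNORE
assert q.get_vote('foobar') == VOTE_INCLUDE
assert q.designates('bar') is False
```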
class SessionResumeExceptionTests(TestCase):
"""
Tests for the various exceptions defined in the resume module
"""
def test_resume_exception_inheritance(self):
"""
verify that all three exception classes inherit from the common base
"""
self.assertTrue(issubclass(
CorruptedSessionError, SessionResumeError))
self.assertTrue(issubclass(
IncompatibleSessionError, SessionResumeError))
self.assertTrue(issubclass(
IncompatibleJobError, SessionResumeError))
class SessionResumeHelperTests(TestCase):
def test_resume_dispatch_v1(self):
helper1 = SessionResumeHelper1
with mock.patch.object(helper1, 'resume_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"flags":[],"running_job_name":null,'
b'"title":null},"results":{}},"version":1}')
SessionResumeHelper([], None, None).resume(data)
helper1.resume_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 1}, None)
def test_resume_dispatch_v2(self):
helper2 = SessionResumeHelper2
with mock.patch.object(helper2, 'resume_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"flags":[],"running_job_name":null,'
b'"title":null},"results":{}},"version":2}')
SessionResumeHelper([], None, None).resume(data)
helper2.resume_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 2}, None)
def test_resume_dispatch_v3(self):
helper3 = SessionResumeHelper3
with mock.patch.object(helper3, 'resume_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"app_id":null,"flags":[],'
b'"running_job_name":null,"title":null'
b'},"results":{}},"version":3}')
SessionResumeHelper([], None, None).resume(data)
helper3.resume_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'app_id': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 3}, None)
def test_resume_dispatch_v4(self):
helper4 = SessionResumeHelper4
with mock.patch.object(helper4, 'resume_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"app_id":null,"flags":[],'
b'"running_job_name":null,"title":null'
b'},"results":{}},"version":4}')
SessionResumeHelper([], None, None).resume(data)
helper4.resume_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'app_id': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 4}, None)
def test_resume_dispatch_v5(self):
helper5 = SessionResumeHelper5
with mock.patch.object(helper5, 'resume_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"app_id":null,"flags":[],'
b'"running_job_name":null,"title":null'
b'},"results":{}},"version":5}')
SessionResumeHelper([], None, None).resume(data)
helper5.resume_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'app_id': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 5}, None)
def test_resume_dispatch_v6(self):
helper6 = SessionResumeHelper6
with mock.patch.object(helper6, 'resume_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"app_id":null,"flags":[],'
b'"running_job_name":null,"title":null'
b'},"results":{}},"version":6}')
SessionResumeHelper([], None, None).resume(data)
helper6.resume_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'app_id': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 6}, None)
def test_resume_dispatch_v7(self):
data = gzip.compress(
b'{"version":7}')
with self.assertRaises(IncompatibleSessionError) as boom:
SessionResumeHelper([], None, None).resume(data)
self.assertEqual(str(boom.exception), "Unsupported version 7")
class SessionPeekHelperTests(TestCase):
def test_peek_dispatch_v1(self):
helper1 = SessionPeekHelper1
with mock.patch.object(helper1, 'peek_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"flags":[],"running_job_name":null,'
b'"title":null},"results":{}},"version":1}')
SessionPeekHelper().peek(data)
helper1.peek_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 1})
def test_peek_dispatch_v2(self):
helper2 = SessionPeekHelper2
with mock.patch.object(helper2, 'peek_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"flags":[],"running_job_name":null,'
b'"title":null},"results":{}},"version":2}')
SessionPeekHelper().peek(data)
helper2.peek_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 2})
def test_peek_dispatch_v3(self):
helper3 = SessionPeekHelper3
with mock.patch.object(helper3, 'peek_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"flags":[],"running_job_name":null,'
b'"title":null},"results":{}},"version":3}')
SessionPeekHelper().peek(data)
helper3.peek_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 3})
def test_peek_dispatch_v4(self):
helper4 = SessionPeekHelper4
with mock.patch.object(helper4, 'peek_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"flags":[],"running_job_name":null,'
b'"title":null},"results":{}},"version":4}')
SessionPeekHelper().peek(data)
helper4.peek_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 4})
def test_peek_dispatch_v5(self):
helper5 = SessionPeekHelper5
with mock.patch.object(helper5, 'peek_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"flags":[],"running_job_name":null,'
b'"title":null},"results":{}},"version":5}')
SessionPeekHelper().peek(data)
helper5.peek_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 5})
def test_peek_dispatch_v6(self):
helper6 = SessionPeekHelper6
with mock.patch.object(helper6, 'peek_json'):
data = gzip.compress(
b'{"session":{"desired_job_list":[],"jobs":{},"metadata":'
b'{"app_blob":null,"flags":[],"running_job_name":null,'
b'"title":null},"results":{}},"version":6}')
SessionPeekHelper().peek(data)
helper6.peek_json.assert_called_once_with(
{'session': {'jobs': {},
'metadata': {'title': None,
'running_job_name': None,
'app_blob': None,
'flags': []},
'desired_job_list': [],
'results': {}},
'version': 6})
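Each dispatch test above hand-builds the same gzip-compressed JSON blob, varying only the ``version`` field (the resume-dispatch payloads also gain an ``app_id`` key from version 3 on). A small helper illustrating how such a blob is assembled; ``make_suspend_data`` is a hypothetical name introduced here, not a plainbox API:

```python
import gzip
import json

def make_suspend_data(version):
    # Hypothetical helper: builds the minimal suspend blob that the
    # peek/resume helpers dispatch on. The payload is JSON, UTF-8
    # encoded, then gzip-compressed.
    metadata = {"app_blob": None, "flags": [],
                "running_job_name": None, "title": None}
    if version >= 3:
        # Mirrors the resume-dispatch payloads above, which add app_id
        # starting with format version 3.
        metadata["app_id"] = None
    payload = {"session": {"desired_job_list": [], "jobs": {},
                           "metadata": metadata, "results": {}},
               "version": version}
    return gzip.compress(json.dumps(payload).encode("UTF-8"))

# The version field survives the round trip and drives dispatch.
assert json.loads(gzip.decompress(make_suspend_data(6)))["version"] == 6
```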
class SessionResumeTests(TestCase):
"""
Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper`
"""
def test_resume_garbage_gzip(self):
"""
verify that CorruptedSessionError is raised when we try to decompress
garbage bytes. By "garbage" we mean that it's not a valid
gzip-compressed stream. Internally IOError is raised but we wrap
that for simplicity.
"""
data = b"foo"
with self.assertRaises(CorruptedSessionError) as boom:
SessionResumeHelper([], None, None).resume(data)
self.assertIsInstance(boom.exception.__context__, IOError)
def test_resume_garbage_unicode(self):
"""
verify that CorruptedSessionError is raised when we try to interpret
incorrect bytes as UTF-8. Internally UnicodeDecodeError is raised
but we wrap that for simplicity.
"""
# This is just a sanity check that b"\xff" is not a valid UTF-8 string
with self.assertRaises(UnicodeDecodeError):
b"\xff".decode('UTF-8')
data = gzip.compress(b"\xff")
with self.assertRaises(CorruptedSessionError) as boom:
SessionResumeHelper([], None, None).resume(data)
self.assertIsInstance(boom.exception.__context__, UnicodeDecodeError)
def test_resume_garbage_json(self):
"""
verify that CorruptedSessionError is raised when we try to interpret
malformed JSON text. Internally ValueError is raised but we wrap that
for simplicity.
"""
data = gzip.compress(b"{")
with self.assertRaises(CorruptedSessionError) as boom:
SessionResumeHelper([], None, None).resume(data)
self.assertIsInstance(boom.exception.__context__, ValueError)
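The three garbage tests above all rely on the same wrapping pattern: a low-level error (IOError, UnicodeDecodeError, ValueError) is re-raised as a session-level error, and Python's implicit exception chaining keeps the original reachable via ``__context__``. A sketch of that pattern under assumed names (``CorruptedSessionSketchError`` and ``decompress_session`` are stand-ins, not plainbox code):

```python
import gzip

class CorruptedSessionSketchError(Exception):
    """Stand-in for CorruptedSessionError, for illustration only."""

def decompress_session(data):
    # Re-raising inside an except block sets __context__ on the new
    # exception automatically (implicit chaining), so callers can still
    # inspect the low-level cause after catching the wrapper.
    try:
        return gzip.decompress(data)
    except OSError:
        raise CorruptedSessionSketchError("not a valid gzip stream")

try:
    decompress_session(b"foo")
except CorruptedSessionSketchError as boom:
    # IOError is an alias of OSError in Python 3
    assert isinstance(boom.__context__, OSError)
```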
class EndToEndTests(TestCaseWithParameters):
parameter_names = ('format',)
parameter_values = (('1',), ('2',), ('3',))
full_repr_1 = {
'version': 1,
'session': {
'jobs': {
'__category__': (
'e2475434e4c0b2c825541430e526fe0565780dfeb67'
'050f3b7f3453aa3cc439b'),
'generator': (
'7015c949ce3ae91f37e10b304212022fdbc4b10acbc'
'cb78ac58ff10ef7a2c8c8'),
'generated': (
'47dd5e318ef99184e4dee8adf818a7f7548978a9470'
'8114c7b3dd2169b9a7a67')
},
'results': {
'__category__': [{
'comments': None,
'execution_duration': None,
'io_log': [
[0.0, 'stdout', 'cGx1Z2luOmxvY2FsCg=='],
[0.1, 'stdout', 'aWQ6Z2VuZXJhdG9yCg=='],
[0.2, 'stdout', 'Y29tbWFuZDpmYWtlCg==']],
'outcome': None,
'return_code': None,
}],
'generator': [{
'comments': None,
'execution_duration': None,
'io_log': [
[0.0, 'stdout', 'aWQ6Z2VuZXJhdGVk'],
[0.1, 'stdout', 'cGx1Z2luOnNoZWxs'],
[0.2, 'stdout', 'Y29tbWFuZDpmYWtl']],
'outcome': None,
'return_code': None,
}],
'generated': [{
'comments': None,
'execution_duration': None,
'io_log': [],
'outcome': None,
'return_code': None,
}]
},
'desired_job_list': ['__category__', 'generator'],
'mandatory_job_list': [],
'metadata': {
'flags': [],
'running_job_name': None,
'title': None
},
}
}
# Copy and patch the v1 representation to get a v2 representation
full_repr_2 = copy.deepcopy(full_repr_1)
full_repr_2['version'] = 2
full_repr_2['session']['metadata']['app_blob'] = None
# Copy and patch the v2 representation to get a v3 representation
full_repr_3 = copy.deepcopy(full_repr_2)
full_repr_3['version'] = 3
full_repr_3['session']['metadata']['app_id'] = None
# Map of representation ids to representations
full_repr = {
'1': full_repr_1,
'2': full_repr_2,
'3': full_repr_3
}
def setUp(self):
# Crete a "__category__" job
self.category_job = JobDefinition({
"plugin": "local",
"id": "__category__"
})
# Create a "generator" job
self.generator_job = JobDefinition({
"plugin": "local",
"id": "generator",
"command": "fake",
})
# Keep a variable for the (future) generated job
self.generated_job = None
# Create a result for the "__category__" job.
# It must define a verbatim copy of the "generator" job
self.category_result = MemoryJobResult({
"io_log": [
(0.0, "stdout", b'plugin:local\n'),
(0.1, "stdout", b'id:generator\n'),
(0.2, "stdout", b'command:fake\n'),
]
})
# Create a result for the "generator" job.
# It will define the "generated" job
self.generator_result = MemoryJobResult({
"io_log": [
(0.0, 'stdout', b'id:generated'),
(0.1, 'stdout', b'plugin:shell'),
(0.2, 'stdout', b'command:fake'),
]
})
self.job_list = [self.category_job, self.generator_job]
self.suspend_data = gzip.compress(
json.dumps(self.full_repr[self.parameters.format]).encode("UTF-8"))
def test_resume_early_callback(self):
"""
verify that early_cb is called with a session object
"""
def early_cb(session):
self.seen_session = session
session = SessionResumeHelper(self.job_list, None, None).resume(
self.suspend_data, early_cb)
self.assertIs(session, self.seen_session)
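The ``full_repr_1`` dictionary above stores io_log data base64-encoded, while ``setUp()`` builds the equivalent results from raw bytes. A quick check that the literals in the two places really correspond:

```python
import base64

# The suspend format keeps io_log payloads as base64 text; setUp()
# writes the same records as raw bytes. These literals match up:
assert base64.standard_b64decode('cGx1Z2luOmxvY2FsCg==') == b'plugin:local\n'
assert base64.standard_b64decode('aWQ6Z2VuZXJhdG9yCg==') == b'id:generator\n'
assert base64.standard_b64decode('aWQ6Z2VuZXJhdGVk') == b'id:generated'
```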
class SessionStateResumeTests(TestCaseWithParameters):
"""
Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper1`,
:class:`~plainbox.impl.session.resume.SessionResumeHelper2` and
:class:`~plainbox.impl.session.resume.SessionResumeHelper3` and how they
handle resuming SessionState inside the _build_SessionState() method.
"""
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper1,), (SessionResumeHelper2,),
(SessionResumeHelper3,), (SessionResumeHelper4,),
(SessionResumeHelper5,), (SessionResumeHelper6,))
def setUp(self):
self.session_repr = {}
self.helper = self.parameters.resume_cls([], None, None)
def test_calls_build_SessionState(self):
"""
verify that _build_SessionState() gets called
"""
with mock.patch.object(self.helper, attribute='_build_SessionState'):
self.helper._build_SessionState(self.session_repr)
self.helper._build_SessionState.assert_called_once_with(
self.session_repr)
def test_calls_restore_SessionState_jobs_and_results(self):
"""
verify that _restore_SessionState_jobs_and_results() gets called by
_build_SessionState().
"""
mpo = mock.patch.object
with mpo(self.helper, '_restore_SessionState_jobs_and_results'), \
mpo(self.helper, '_restore_SessionState_metadata'), \
mpo(self.helper, '_restore_SessionState_job_list'), \
mpo(self.helper, '_restore_SessionState_mandatory_job_list'), \
mpo(self.helper, '_restore_SessionState_desired_job_list'):
session = self.helper._build_SessionState(self.session_repr)
self.helper._restore_SessionState_jobs_and_results. \
assert_called_once_with(session, self.session_repr)
def test_calls_restore_SessionState_metadata(self):
"""
verify that _restore_SessionState_metadata() gets called by
_build_SessionState().
"""
mpo = mock.patch.object
with mpo(self.helper, '_restore_SessionState_jobs_and_results'), \
mpo(self.helper, '_restore_SessionState_metadata'), \
mpo(self.helper, '_restore_SessionState_job_list'), \
mpo(self.helper, '_restore_SessionState_mandatory_job_list'), \
mpo(self.helper, '_restore_SessionState_desired_job_list'):
session = self.helper._build_SessionState(self.session_repr)
self.helper._restore_SessionState_metadata. \
assert_called_once_with(session.metadata, self.session_repr)
def test_calls_restore_SessionState_desired_job_list(self):
"""
verify that _restore_SessionState_desired_job_list() gets called by
_build_SessionState().
"""
mpo = mock.patch.object
with mpo(self.helper, '_restore_SessionState_jobs_and_results'), \
mpo(self.helper, '_restore_SessionState_metadata'), \
mpo(self.helper, '_restore_SessionState_job_list'), \
mpo(self.helper, '_restore_SessionState_mandatory_job_list'), \
mpo(self.helper, '_restore_SessionState_desired_job_list'):
session = self.helper._build_SessionState(self.session_repr)
self.helper._restore_SessionState_desired_job_list. \
assert_called_once_with(session, self.session_repr)
def test_calls_restore_SessionState_job_list(self):
"""
verify that _restore_SessionState_job_list() gets called by
_build_SessionState().
"""
mpo = mock.patch.object
with mpo(self.helper, '_restore_SessionState_jobs_and_results'), \
mpo(self.helper, '_restore_SessionState_metadata'), \
mpo(self.helper, '_restore_SessionState_job_list'), \
mpo(self.helper, '_restore_SessionState_mandatory_job_list'), \
mpo(self.helper, '_restore_SessionState_desired_job_list'):
session = self.helper._build_SessionState(self.session_repr)
self.helper._restore_SessionState_job_list.assert_called_once_with(
session, self.session_repr)
class IOLogRecordResumeTests(TestCaseWithParameters):
"""
Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper1`,
:class:`~plainbox.impl.session.resume.SessionResumeHelper2` and
:class:`~plainbox.impl.session.resume.SessionResumeHelper3` and how they
handle resuming IOLogRecord objects
"""
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper1,), (SessionResumeHelper2,),
(SessionResumeHelper3,), (SessionResumeHelper4,),
(SessionResumeHelper5,), (SessionResumeHelper6,))
def test_build_IOLogRecord_missing_delay(self):
"""
verify that _build_IOLogRecord() checks for missing ``delay``
"""
with self.assertRaises(CorruptedSessionError):
self.parameters.resume_cls._build_IOLogRecord([])
def test_build_IOLogRecord_bad_type_for_delay(self):
"""
verify that _build_IOLogRecord() checks that ``delay`` is float
"""
with self.assertRaises(CorruptedSessionError):
self.parameters.resume_cls._build_IOLogRecord([0, 'stdout', ''])
def test_build_IOLogRecord_negative_delay(self):
"""
verify that _build_IOLogRecord() checks for negative ``delay``
"""
with self.assertRaises(CorruptedSessionError):
self.parameters.resume_cls._build_IOLogRecord([-1.0, 'stdout', ''])
def test_build_IOLogRecord_missing_stream_name(self):
"""
verify that _build_IOLogRecord() checks for missing ``stream-name``
"""
with self.assertRaises(CorruptedSessionError):
self.parameters.resume_cls._build_IOLogRecord([0.0])
def test_build_IOLogRecord_bad_type_stream_name(self):
"""
verify that _build_IOLogRecord() checks that ``stream-name``
is a string
"""
with self.assertRaises(CorruptedSessionError):
self.parameters.resume_cls._build_IOLogRecord([0.0, 1])
def test_build_IOLogRecord_bad_value_stream_name(self):
"""
verify that _build_IOLogRecord() checks that ``stream-name`` looks sane
"""
with self.assertRaises(CorruptedSessionError):
self.parameters.resume_cls._build_IOLogRecord([0.0, "foo", ""])
def test_build_IOLogRecord_missing_data(self):
"""
verify that _build_IOLogRecord() checks for missing ``data``
"""
with self.assertRaises(CorruptedSessionError):
self.parameters.resume_cls._build_IOLogRecord([0.0, 'stdout'])
def test_build_IOLogRecord_non_ascii_data(self):
"""
verify that _build_IOLogRecord() checks that ``data`` is ASCII
"""
with self.assertRaises(CorruptedSessionError) as boom:
self.parameters.resume_cls._build_IOLogRecord(
[0.0, 'stdout', '\uFFFD'])
self.assertIsInstance(boom.exception.__context__, UnicodeEncodeError)
def test_build_IOLogRecord_non_base64_ascii_data(self):
"""
verify that _build_IOLogRecord() checks that ``data`` is valid base64
"""
with self.assertRaises(CorruptedSessionError) as boom:
self.parameters.resume_cls._build_IOLogRecord(
[0.0, 'stdout', '==broken'])
# base64.standard_b64decode() raises binascii.Error
self.assertIsInstance(boom.exception.__context__, binascii.Error)
def test_build_IOLogRecord_values(self):
"""
verify that _build_IOLogRecord() returns a proper IOLogRecord object
with all the values in order
"""
record = self.parameters.resume_cls._build_IOLogRecord(
[1.5, 'stderr', 'dGhpcyB3b3Jrcw=='])
self.assertAlmostEqual(record.delay, 1.5)
self.assertEqual(record.stream_name, 'stderr')
self.assertEqual(record.data, b"this works")
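Taken together, the tests above specify the shape of a serialized io_log record: a three-item list of non-negative float delay, a stream name of ``stdout`` or ``stderr``, and base64-encoded data. A sketch of that validation (illustration only, not the plainbox implementation; it raises ``ValueError`` where the real code raises ``CorruptedSessionError``):

```python
import base64

def build_io_log_record(record_repr):
    # Sketch of the checks pinned down above for a serialized record:
    # [delay, stream-name, base64-data], all three items required.
    if len(record_repr) != 3:
        raise ValueError("io_log record must have exactly three items")
    delay, stream_name, data = record_repr
    if not isinstance(delay, float) or delay < 0:
        raise ValueError("delay must be a non-negative float")
    if stream_name not in ("stdout", "stderr"):
        raise ValueError("stream-name must be 'stdout' or 'stderr'")
    # Malformed base64 raises binascii.Error here, matching the chained
    # exception the non-base64 test inspects.
    return (delay, stream_name, base64.standard_b64decode(data))

record = build_io_log_record([1.5, 'stderr', 'dGhpcyB3b3Jrcw=='])
assert record == (1.5, 'stderr', b'this works')
```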
class JobResultResumeMixIn:
"""
Mix-in class that defines most of the common tests for both
MemoryJobResult and DiskJobResult.
Sub-classes should define ``good_repr`` at class level
"""
good_repr = None
def test_build_JobResult_checks_for_missing_outcome(self):
"""
verify that _build_JobResult() checks if ``outcome`` is present
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
del obj_repr['outcome']
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception), "Missing value for key 'outcome'")
def test_build_JobResult_checks_type_of_outcome(self):
"""
verify that _build_JobResult() checks if ``outcome`` is a string
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['outcome'] = 42
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception),
"Value of key 'outcome' is of incorrect type int")
def test_build_JobResult_checks_value_of_outcome(self):
"""
verify that _build_JobResult() checks if the value of ``outcome`` is
in the set of known-good values.
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['outcome'] = 'maybe'
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception), (
"Value for key 'outcome' not in allowed set ['crash', 'fail',"
" None, 'not-implemented', 'not-supported', 'pass', 'skip', "
"'undecided']"))
def test_build_JobResult_allows_none_outcome(self):
"""
verify that _build_JobResult() allows for the value of ``outcome`` to
be None
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['outcome'] = None
obj = self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(obj.outcome, None)
def test_build_JobResult_restores_outcome(self):
"""
verify that _build_JobResult() restores the value of ``outcome``
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['outcome'] = 'fail'
obj = self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(obj.outcome, 'fail')
def test_build_JobResult_checks_for_missing_comments(self):
"""
verify that _build_JobResult() checks if ``comments`` is present
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
del obj_repr['comments']
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception), "Missing value for key 'comments'")
def test_build_JobResult_checks_type_of_comments(self):
"""
verify that _build_JobResult() checks if ``comments`` is a string
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['comments'] = False
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception),
"Value of key 'comments' is of incorrect type bool")
def test_build_JobResult_allows_for_none_comments(self):
"""
verify that _build_JobResult() allows for the value of ``comments``
to be None
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['comments'] = None
obj = self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(obj.comments, None)
def test_build_JobResult_restores_comments(self):
"""
verify that _build_JobResult() restores the value of ``comments``
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['comments'] = 'this is a comment'
obj = self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(obj.comments, 'this is a comment')
def test_build_JobResult_checks_for_missing_return_code(self):
"""
verify that _build_JobResult() checks if ``return_code`` is present
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
del obj_repr['return_code']
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception), "Missing value for key 'return_code'")
def test_build_JobResult_checks_type_of_return_code(self):
"""
verify that _build_JobResult() checks if ``return_code`` is an integer
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['return_code'] = "text"
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception),
"Value of key 'return_code' is of incorrect type str")
def test_build_JobResult_allows_for_none_return_code(self):
"""
verify that _build_JobResult() allows for the value of ``return_code``
to be None
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['return_code'] = None
obj = self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(obj.return_code, None)
def test_build_JobResult_restores_return_code(self):
"""
verify that _build_JobResult() restores the value of ``return_code``
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['return_code'] = 42
obj = self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(obj.return_code, 42)
def test_build_JobResult_checks_for_missing_execution_duration(self):
"""
verify that _build_JobResult() checks if ``execution_duration``
is present
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
del obj_repr['execution_duration']
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception), "Missing value for key 'execution_duration'")
def test_build_JobResult_checks_type_of_execution_duration(self):
"""
verify that _build_JobResult() checks if ``execution_duration``
is a float
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['execution_duration'] = "text"
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception),
"Value of key 'execution_duration' is of incorrect type str")
def test_build_JobResult_allows_for_none_execution_duration(self):
"""
verify that _build_JobResult() allows for the value of
``execution_duration`` to be None
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['execution_duration'] = None
obj = self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(obj.execution_duration, None)
def test_build_JobResult_restores_execution_duration(self):
"""
verify that _build_JobResult() restores the value of
``execution_duration``
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['execution_duration'] = 5.1
obj = self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertAlmostEqual(obj.execution_duration, 5.1)
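Every test in this mix-in follows the same contract for a result key: it must be present, it may be None, and when set it must have the right type, with a distinct error message for each failure mode. A generic sketch of that check (the ``check_key`` helper and its ``ValueError`` are assumptions for illustration; the real code raises ``CorruptedSessionError``):

```python
def check_key(obj_repr, key, allowed_types):
    # Sketch of the presence/type contract the mix-in tests exercise:
    # the key must exist, None is always allowed, and any other value
    # must match the expected type.
    if key not in obj_repr:
        raise ValueError("Missing value for key %r" % key)
    value = obj_repr[key]
    if value is not None and not isinstance(value, allowed_types):
        raise ValueError(
            "Value of key %r is of incorrect type %s"
            % (key, type(value).__name__))
    return value

repr_ = {'outcome': 'pass', 'return_code': None}
assert check_key(repr_, 'outcome', str) == 'pass'
assert check_key(repr_, 'return_code', int) is None
```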
class MemoryJobResultResumeTests(JobResultResumeMixIn, TestCaseWithParameters):
"""
Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper1`,
:class:`~plainbox.impl.session.resume.SessionResumeHelper2` and
:class:`~plainbox.impl.session.resume.SessionResumeHelper3` and how they
handle recreating MemoryJobResult from their representations
"""
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper1,), (SessionResumeHelper2,),
(SessionResumeHelper3,), (SessionResumeHelper4,),
(SessionResumeHelper5,), (SessionResumeHelper6,))
good_repr = {
'outcome': "pass",
'comments': None,
'return_code': None,
'execution_duration': None,
'io_log': []
}
def test_build_JobResult_restores_MemoryJobResult_representations(self):
obj = self.parameters.resume_cls._build_JobResult(
self.good_repr, 0, None)
self.assertIsInstance(obj, MemoryJobResult)
def test_build_JobResult_checks_for_missing_io_log(self):
"""
verify that _build_JobResult() checks if ``io_log`` is present
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
del obj_repr['io_log']
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception), "Missing value for key 'io_log'")
def test_build_JobResult_checks_type_of_io_log(self):
"""
verify that _build_JobResult() checks if ``io_log``
is a list
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log'] = "text"
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception),
"Value of key 'io_log' is of incorrect type str")
def test_build_JobResult_checks_for_none_io_log(self):
"""
verify that _build_JobResult() checks if the value of
``io_log`` is not None
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log'] = None
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception),
"Value of key 'io_log' cannot be None")
def test_build_JobResult_restores_io_log(self):
"""
verify that _build_JobResult() checks if ``io_log``
is restored for MemoryJobResult representations
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log'] = [[0.0, 'stdout', '']]
obj = self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
# NOTE: MemoryJobResult.io_log is a property that converts
# whatever was stored to IOLogRecord and returns a _tuple_
# so the original list is not visible
self.assertEqual(obj.io_log, tuple([
IOLogRecord(0.0, 'stdout', b'')
]))
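The ``io_log`` round trip exercised above relies on each record being serialised as a ``[delay, stream, base64-text]`` triple. A minimal stdlib-only sketch of that encoding (the helper names here are hypothetical, not the plainbox API):

```python
import base64

def encode_io_log_record(delay, stream_name, data):
    # Serialise one IOLogRecord-like tuple into the JSON-friendly
    # [delay, stream, base64-text] form used by the representations above.
    return [delay, stream_name,
            base64.standard_b64encode(data).decode('ASCII')]

def decode_io_log_record(record):
    # Inverse operation: recover the raw bytes from the base64 text.
    delay, stream_name, text = record
    return (delay, stream_name,
            base64.standard_b64decode(text.encode('ASCII')))

# A resource line such as b'key: value' survives the round trip.
encoded = encode_io_log_record(0.0, 'stdout', b'key: value')
decoded = decode_io_log_record(encoded)
```

This mirrors why the tests compare against ``IOLogRecord(0.0, 'stdout', b'')``: the stored form is text, the restored form is bytes.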
class DiskJobResultResumeTestsCommon(JobResultResumeMixIn,
TestCaseWithParameters):
""" Tests for common behavior of DiskJobResult resume for all formats. """
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper1,), (SessionResumeHelper2,),
(SessionResumeHelper3,), (SessionResumeHelper4,),
(SessionResumeHelper5,), (SessionResumeHelper6,))
good_repr = {
'outcome': "pass",
'comments': None,
'return_code': None,
'execution_duration': None,
        # NOTE: path is absolute (realistic data required by most of the tests)
'io_log_filename': "/file.txt"
}
def test_build_JobResult_restores_DiskJobResult_representations(self):
obj = self.parameters.resume_cls._build_JobResult(
self.good_repr, 0, None)
self.assertIsInstance(obj, DiskJobResult)
def test_build_JobResult_does_not_check_for_missing_io_log_filename(self):
"""
        verify that _build_JobResult() does not check if
        ``io_log_filename`` is present, as its absence signifies that a
        MemoryJobResult should be recreated instead
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
del obj_repr['io_log_filename']
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
# NOTE: the error message explicitly talks about 'io_log', not
# about 'io_log_filename' because we're hitting the other path
# of the restore function
self.assertEqual(
str(boom.exception), "Missing value for key 'io_log'")
def test_build_JobResult_checks_type_of_io_log_filename(self):
"""
verify that _build_JobResult() checks if ``io_log_filename``
is a string
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log_filename'] = False
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception),
"Value of key 'io_log_filename' is of incorrect type bool")
def test_build_JobResult_checks_for_none_io_log_filename(self):
"""
verify that _build_JobResult() checks if the value of
``io_log_filename`` is not None
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log_filename'] = None
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(
str(boom.exception),
"Value of key 'io_log_filename' cannot be None")
class DiskJobResultResumeTests1to4(TestCaseWithParameters):
""" Tests for behavior of DiskJobResult resume for formats 1 to 4. """
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper1,), (SessionResumeHelper2,),
(SessionResumeHelper3,), (SessionResumeHelper4,))
good_repr = {
'outcome': "pass",
'comments': None,
'return_code': None,
'execution_duration': None,
'io_log_filename': "/file.txt"
}
def test_build_JobResult_restores_io_log_filename(self):
""" _build_JobResult() accepts relative paths without location. """
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log_filename'] = "some-file.txt"
obj = self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
self.assertEqual(obj.io_log_filename, "some-file.txt")
def test_build_JobResult_restores_relative_io_log_filename(self):
""" _build_JobResult() ignores location for relative paths. """
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log_filename'] = "some-file.txt"
obj = self.parameters.resume_cls._build_JobResult(
obj_repr, 0, '/path/to')
self.assertEqual(obj.io_log_filename, "some-file.txt")
def test_build_JobResult_restores_absolute_io_log_filename(self):
""" _build_JobResult() preserves absolute paths. """
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log_filename'] = "/some-file.txt"
obj = self.parameters.resume_cls._build_JobResult(
obj_repr, 0, '/path/to')
self.assertEqual(obj.io_log_filename, "/some-file.txt")
class DiskJobResultResumeTests5(TestCaseWithParameters):
""" Tests for behavior of DiskJobResult resume for format 5. """
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper6,),)
good_repr = {
'outcome': "pass",
'comments': None,
'return_code': None,
'execution_duration': None,
'io_log_filename': "/file.txt"
}
def test_build_JobResult_restores_io_log_filename(self):
""" _build_JobResult() rejects relative paths without location. """
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log_filename'] = "some-file.txt"
with self.assertRaisesRegex(ValueError, "Location "):
self.parameters.resume_cls._build_JobResult(obj_repr, 0, None)
def test_build_JobResult_restores_relative_io_log_filename(self):
""" _build_JobResult() uses location for relative paths. """
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log_filename'] = "some-file.txt"
obj = self.parameters.resume_cls._build_JobResult(
obj_repr, 0, '/path/to')
self.assertEqual(obj.io_log_filename, "/path/to/some-file.txt")
def test_build_JobResult_restores_absolute_io_log_filename(self):
""" _build_JobResult() preserves absolute paths. """
obj_repr = copy.copy(self.good_repr)
obj_repr['io_log_filename'] = "/some-file.txt"
obj = self.parameters.resume_cls._build_JobResult(
obj_repr, 0, '/path/to')
self.assertEqual(obj.io_log_filename, "/some-file.txt")
class DesiredJobListResumeTests(TestCaseWithParameters):
"""
    Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper1`
    through :class:`~plainbox.impl.session.resume.SessionResumeHelper6` and
    how they handle recreating SessionState.desired_job_list from its
    representation
"""
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper1,), (SessionResumeHelper2,),
(SessionResumeHelper3,), (SessionResumeHelper4,),
(SessionResumeHelper5,), (SessionResumeHelper6,))
def setUp(self):
# All of the tests need a SessionState object and some jobs to work
# with. Actual values don't matter much.
self.job_a = make_job(id='a')
self.job_b = make_job(id='b')
self.session = SessionState([self.job_a, self.job_b])
self.good_repr = {
"desired_job_list": ['a', 'b']
}
self.resume_fn = (
self.parameters.resume_cls._restore_SessionState_desired_job_list)
def test_restore_SessionState_desired_job_list_checks_for_repr_type(self):
"""
verify that _restore_SessionState_desired_job_list() checks the
type of the representation of ``desired_job_list``.
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['desired_job_list'] = 1
self.resume_fn(self.session, obj_repr)
self.assertEqual(
str(boom.exception),
"Value of key 'desired_job_list' is of incorrect type int")
def test_restore_SessionState_desired_job_list_checks_job_id_type(self):
"""
verify that _restore_SessionState_desired_job_list() checks the
type of each job id listed in ``desired_job_list``.
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['desired_job_list'] = [1]
self.resume_fn(self.session, obj_repr)
self.assertEqual(str(boom.exception), "Each job id must be a string")
def test_restore_SessionState_desired_job_list_checks_for_bogus_jobs(self):
"""
verify that _restore_SessionState_desired_job_list() checks if
each of the mentioned jobs are known and defined in the session
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['desired_job_list'] = ['bogus']
self.resume_fn(self.session, obj_repr)
self.assertEqual(
str(boom.exception),
"'desired_job_list' refers to unknown job 'bogus'")
def test_restore_SessionState_desired_job_list_works(self):
"""
verify that _restore_SessionState_desired_job_list() actually
restores desired_job_list on the session
"""
self.assertEqual(
self.session.desired_job_list, [])
self.resume_fn(self.session, self.good_repr)
# Good representation has two jobs, 'a' and 'b', in that order
self.assertEqual(
self.session.desired_job_list,
[self.job_a, self.job_b])
class SessionMetaDataResumeTests(TestCaseWithParameters):
"""
    Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper1`,
    :class:`~plainbox.impl.session.resume.SessionResumeHelper2` and
    :class:`~plainbox.impl.session.resume.SessionResumeHelper3` and how they
    handle recreating SessionMetaData from its representation
"""
parameter_names = ('format',)
parameter_values = ((1,), (2,), (3,))
good_repr_v1 = {
"metadata": {
"title": "some title",
"flags": ["flag1", "flag2"],
"running_job_name": "job1"
}
}
good_repr_v2 = {
"metadata": {
"title": "some title",
"flags": ["flag1", "flag2"],
"running_job_name": "job1",
"app_blob": None,
}
}
good_repr_v3 = {
"metadata": {
"title": "some title",
"flags": ["flag1", "flag2"],
"running_job_name": "job1",
"app_blob": None,
"app_id": None,
}
}
good_repr_map = {
1: good_repr_v1,
2: good_repr_v2,
3: good_repr_v3
}
resume_cls_map = {
1: SessionResumeHelper1,
2: SessionResumeHelper2,
3: SessionResumeHelper3,
}
def setUp(self):
# All of the tests need a SessionState object
self.session = SessionState([])
self.good_repr = copy.deepcopy(
self.good_repr_map[self.parameters.format])
self.resume_fn = (
self.resume_cls_map[
self.parameters.format
]._restore_SessionState_metadata)
    def test_restore_SessionState_metadata_checks_for_representation_type(self):
"""
verify that _restore_SessionState_metadata() checks the type of
the representation object
"""
with self.assertRaises(CorruptedSessionError) as boom:
self.good_repr['metadata'] = 1
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(
str(boom.exception),
"Value of key 'metadata' is of incorrect type int")
def test_restore_SessionState_metadata_checks_title_type(self):
"""
verify that _restore_SessionState_metadata() checks the type of
the ``title`` field.
"""
with self.assertRaises(CorruptedSessionError) as boom:
self.good_repr['metadata']['title'] = 1
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(
str(boom.exception),
"Value of key 'title' is of incorrect type int")
def test_restore_SessionState_metadata_allows_for_none_title(self):
"""
verify that _restore_SessionState_metadata() allows for
``title`` to be None
"""
self.good_repr['metadata']['title'] = None
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(self.session.metadata.title, None)
def test_restore_SessionState_metadata_restores_title(self):
"""
verify that _restore_SessionState_metadata() restores ``title``
"""
self.good_repr['metadata']['title'] = "a title"
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(self.session.metadata.title, "a title")
def test_restore_SessionState_metadata_checks_flags_type(self):
"""
verify that _restore_SessionState_metadata() checks the type of
the ``flags`` field.
"""
with self.assertRaises(CorruptedSessionError) as boom:
self.good_repr['metadata']['flags'] = 1
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(
str(boom.exception),
"Value of key 'flags' is of incorrect type int")
    def test_restore_SessionState_metadata_checks_if_flags_are_none(self):
        """
        verify that _restore_SessionState_metadata() checks that
        ``flags`` is not None
"""
with self.assertRaises(CorruptedSessionError) as boom:
self.good_repr['metadata']['flags'] = None
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(
str(boom.exception),
"Value of key 'flags' cannot be None")
def test_restore_SessionState_metadata_checks_type_of_each_flag(self):
"""
verify that _restore_SessionState_metadata() checks the type of each
value of ``flags``
"""
with self.assertRaises(CorruptedSessionError) as boom:
self.good_repr['metadata']['flags'] = [1]
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(
str(boom.exception),
"Each flag must be a string")
def test_restore_SessionState_metadata_restores_flags(self):
"""
verify that _restore_SessionState_metadata() restores ``flags``
"""
self.good_repr['metadata']['flags'] = ["flag1", "flag2"]
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(self.session.metadata.flags, set(['flag1', 'flag2']))
def test_restore_SessionState_metadata_checks_running_job_name_type(self):
"""
verify that _restore_SessionState_metadata() checks the type of
``running_job_name``.
"""
with self.assertRaises(CorruptedSessionError) as boom:
self.good_repr['metadata']['running_job_name'] = 1
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(
str(boom.exception),
"Value of key 'running_job_name' is of incorrect type int")
    def test_restore_SessionState_metadata_allows_none_running_job_name(self):
"""
verify that _restore_SessionState_metadata() allows for
``running_job_name`` to be None
"""
self.good_repr['metadata']['running_job_name'] = None
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(self.session.metadata.running_job_name, None)
def test_restore_SessionState_metadata_restores_running_job_name(self):
"""
verify that _restore_SessionState_metadata() restores
the value of ``running_job_name``
"""
self.good_repr['metadata']['running_job_name'] = "a job"
self.resume_fn(self.session.metadata, self.good_repr)
self.assertEqual(self.session.metadata.running_job_name, "a job")
class SessionMetaDataResumeTests2(TestCase):
"""
Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper2`
    and how it handles recreating SessionMetaData from its representation
"""
def setUp(self):
# All of the tests need a SessionState object
self.session = SessionState([])
self.good_repr = {
"metadata": {
"title": "some title",
"flags": ["flag1", "flag2"],
"running_job_name": "job1",
"app_blob": "YmxvYg==" # this is b'blob', encoded
}
}
self.resume_fn = SessionResumeHelper2._restore_SessionState_metadata
def test_restore_SessionState_metadata_checks_app_blob_type(self):
"""
verify that _restore_SessionState_metadata() checks the type of
the ``app_blob`` field.
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['metadata']['app_blob'] = 1
self.resume_fn(self.session.metadata, obj_repr)
self.assertEqual(
str(boom.exception),
"Value of key 'app_blob' is of incorrect type int")
def test_restore_SessionState_metadata_allows_for_none_app_blob(self):
"""
verify that _restore_SessionState_metadata() allows for
``app_blob`` to be None
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['metadata']['app_blob'] = None
self.resume_fn(self.session.metadata, obj_repr)
self.assertEqual(self.session.metadata.app_blob, None)
def test_restore_SessionState_metadata_restores_app_blob(self):
"""
verify that _restore_SessionState_metadata() restores ``app_blob``
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['metadata']['app_blob'] = "YmxvYg=="
self.resume_fn(self.session.metadata, obj_repr)
self.assertEqual(self.session.metadata.app_blob, b"blob")
def test_restore_SessionState_metadata_non_ascii_app_blob(self):
"""
verify that _restore_SessionState_metadata() checks that ``app_blob``
is ASCII
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['metadata']['app_blob'] = '\uFFFD'
self.resume_fn(self.session.metadata, obj_repr)
self.assertEqual(str(boom.exception), "app_blob is not ASCII")
self.assertIsInstance(boom.exception.__context__, UnicodeEncodeError)
def test_restore_SessionState_metadata_non_base64_app_blob(self):
"""
verify that _restore_SessionState_metadata() checks that ``app_blob``
is valid base64
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['metadata']['app_blob'] = '==broken'
self.resume_fn(self.session.metadata, obj_repr)
self.assertEqual(str(boom.exception), "Cannot base64 decode app_blob")
# base64.standard_b64decode() raises binascii.Error
self.assertIsInstance(boom.exception.__context__, binascii.Error)
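The ``app_blob`` checks above boil down to three rules: ``None`` passes through, the text must be ASCII, and the ASCII bytes must be valid base64, with the underlying ``UnicodeEncodeError`` or ``binascii.Error`` preserved as the exception context. A stdlib-only sketch (hypothetical helper, not the plainbox API):

```python
import base64
import binascii

def decode_app_blob(app_blob):
    # None is an allowed value and passes through untouched.
    if app_blob is None:
        return None
    try:
        binary = app_blob.encode('ASCII')
    except UnicodeEncodeError:
        # Raising inside the except block chains the original error
        # so it stays reachable via __context__.
        raise ValueError("app_blob is not ASCII")
    try:
        return base64.standard_b64decode(binary)
    except binascii.Error:
        raise ValueError("Cannot base64 decode app_blob")
```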
class SessionMetaDataResumeTest3(SessionMetaDataResumeTests2):
"""
Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper3`
    and how it handles recreating SessionMetaData from its representation
"""
def setUp(self):
# All of the tests need a SessionState object
self.session = SessionState([])
self.good_repr = {
"metadata": {
"title": "some title",
"flags": ["flag1", "flag2"],
"running_job_name": "job1",
"app_blob": "YmxvYg==", # this is b'blob', encoded
"app_id": "id"
}
}
self.resume_fn = SessionResumeHelper3._restore_SessionState_metadata
def test_restore_SessionState_metadata_checks_app_id_type(self):
"""
verify that _restore_SessionState_metadata() checks the type of
the ``app_id`` field.
"""
with self.assertRaises(CorruptedSessionError) as boom:
obj_repr = copy.copy(self.good_repr)
obj_repr['metadata']['app_id'] = 1
self.resume_fn(self.session.metadata, obj_repr)
self.assertEqual(
str(boom.exception),
"Value of key 'app_id' is of incorrect type int")
def test_restore_SessionState_metadata_allows_for_none_app_id(self):
"""
verify that _restore_SessionState_metadata() allows for
``app_id`` to be None
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['metadata']['app_id'] = None
self.resume_fn(self.session.metadata, obj_repr)
self.assertEqual(self.session.metadata.app_id, None)
def test_restore_SessionState_metadata_restores_app_id(self):
"""
verify that _restore_SessionState_metadata() restores ``app_id``
"""
obj_repr = copy.copy(self.good_repr)
obj_repr['metadata']['app_id'] = "id"
self.resume_fn(self.session.metadata, obj_repr)
self.assertEqual(self.session.metadata.app_id, "id")
class ProcessJobTests(TestCaseWithParameters):
"""
    Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper1`
    through :class:`~plainbox.impl.session.resume.SessionResumeHelper6` and
    how they handle processing jobs using the _process_job() method
"""
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper1,), (SessionResumeHelper2,),
(SessionResumeHelper3,), (SessionResumeHelper4,),
(SessionResumeHelper5,), (SessionResumeHelper6,))
def setUp(self):
self.job_id = 'job'
self.job = make_job(id=self.job_id)
self.jobs_repr = {
self.job_id: self.job.checksum
}
self.results_repr = {
self.job_id: [{
'outcome': 'fail',
'comments': None,
'execution_duration': None,
'return_code': None,
'io_log': [],
}]
}
self.helper = self.parameters.resume_cls([self.job], None, None)
# This object is artificial and would be constructed internally
# by the helper but having it here makes testing easier as we
# can reliably test a single method in isolation.
self.session = SessionState([self.job])
def test_process_job_checks_type_of_job_id(self):
"""
verify that _process_job() checks the type of ``job_id``
"""
with self.assertRaises(CorruptedSessionError) as boom:
# Pass a job id of the wrong type
job_id = 1
self.helper._process_job(
self.session, self.jobs_repr, self.results_repr, job_id)
self.assertEqual(
str(boom.exception), "Value of object is of incorrect type int")
def test_process_job_checks_for_missing_checksum(self):
"""
verify that _process_job() checks if ``checksum`` is missing
"""
with self.assertRaises(CorruptedSessionError) as boom:
# Pass a jobs_repr that has no checksums (for any job)
jobs_repr = {}
self.helper._process_job(
self.session, jobs_repr, self.results_repr, self.job_id)
self.assertEqual(str(boom.exception), "Missing value for key 'job'")
def test_process_job_checks_if_job_is_known(self):
"""
        verify that _process_job() checks if the job is known and
        raises KeyError otherwise
"""
with self.assertRaises(KeyError) as boom:
# Pass a session that does not know about any jobs
session = SessionState([])
self.helper._process_job(
session, self.jobs_repr, self.results_repr, self.job_id)
self.assertEqual(boom.exception.args[0], 'job')
def test_process_job_checks_if_job_checksum_matches(self):
"""
verify that _process_job() checks if job checksum matches the
checksum of a job with the same id that was passed to the helper.
"""
with self.assertRaises(IncompatibleJobError) as boom:
# Pass a jobs_repr with a bad checksum
jobs_repr = {self.job_id: 'bad-checksum'}
self.helper._process_job(
self.session, jobs_repr, self.results_repr, self.job_id)
self.assertEqual(
str(boom.exception), "Definition of job 'job' has changed")
    def test_process_job_ignores_empty_results(self):
"""
verify that _process_job() does not crash if we have no results
for a particular job
"""
self.assertEqual(
self.session.job_state_map[self.job_id].result.outcome, None)
results_repr = {
self.job_id: []
}
self.helper._process_job(
self.session, self.jobs_repr, results_repr, self.job_id)
self.assertEqual(
self.session.job_state_map[self.job_id].result.outcome, None)
    def test_process_job_passes_only_result_back_to_the_session(self):
"""
verify that _process_job() passes the only result to the session
"""
self.assertEqual(
self.session.job_state_map[self.job_id].result.outcome, None)
self.helper._process_job(
self.session, self.jobs_repr, self.results_repr, self.job_id)
# The result in self.results_repr is a failure so we should see it here
self.assertEqual(
self.session.job_state_map[self.job_id].result.outcome, "fail")
    def test_process_job_passes_last_result_back_to_the_session(self):
"""
verify that _process_job() passes last of the results to the session
"""
self.assertEqual(
self.session.job_state_map[self.job_id].result.outcome, None)
results_repr = {
self.job_id: [{
'outcome': 'fail',
'comments': None,
'execution_duration': None,
'return_code': None,
'io_log': [],
}, {
'outcome': 'pass',
'comments': None,
'execution_duration': None,
'return_code': None,
'io_log': [],
}]
}
self.helper._process_job(
self.session, self.jobs_repr, results_repr, self.job_id)
# results_repr has two entries: [fail, pass] so we should see
# the passing entry only
self.assertEqual(
self.session.job_state_map[self.job_id].result.outcome, "pass")
def test_process_job_checks_results_repr_is_a_list(self):
"""
verify that _process_job() checks if results_repr is a dictionary
of lists.
"""
with self.assertRaises(CorruptedSessionError) as boom:
results_repr = {self.job_id: 1}
self.helper._process_job(
self.session, self.jobs_repr, results_repr, self.job_id)
self.assertEqual(
str(boom.exception),
"Value of key 'job' is of incorrect type int")
def test_process_job_checks_results_repr_values_are_dicts(self):
"""
verify that _process_job() checks if results_repr is a dictionary
of lists, each of which holds a dictionary.
"""
with self.assertRaises(CorruptedSessionError) as boom:
results_repr = {self.job_id: [1]}
self.helper._process_job(
self.session, self.jobs_repr, results_repr, self.job_id)
self.assertEqual(
str(boom.exception),
"Value of object is of incorrect type int")
class JobPluginSpecificTests(TestCaseWithParameters):
"""
    Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper1`
    through :class:`~plainbox.impl.session.resume.SessionResumeHelper6` and
    how they handle processing jobs using the _process_job() method. This
    class focuses on plugin-specific tests, such as those for local and
    resource jobs
"""
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper1,), (SessionResumeHelper2,),
(SessionResumeHelper3,), (SessionResumeHelper4,),
(SessionResumeHelper5,), (SessionResumeHelper6,))
def test_process_job_restores_resources(self):
"""
verify that _process_job() recreates resources
"""
# Set the stage for testing. Setup a session with a known
# resource job, representation of the job (checksum)
# and representation of a single result, which has a single line
# that defines a 'key': 'value' resource record.
job_id = 'resource'
job = make_job(id=job_id, plugin='resource')
jobs_repr = {
job_id: job.checksum
}
results_repr = {
job_id: [{
'outcome': IJobResult.OUTCOME_PASS,
'comments': None,
'execution_duration': None,
'return_code': None,
'io_log': [
# A bit convoluted but this is how we encode each chunk
# of IOLogRecord
[0.0, 'stdout', base64.standard_b64encode(
b'key: value'
).decode('ASCII')]
],
}]
}
helper = self.parameters.resume_cls([job], None, None)
session = SessionState([job])
# Ensure that the resource was not there initially
self.assertNotIn(job_id, session.resource_map)
# Process the representation data defined above
helper._process_job(session, jobs_repr, results_repr, job_id)
# Ensure that we now have the resource in the resource map
self.assertIn(job_id, session.resource_map)
# And that it looks right
self.assertEqual(
session.resource_map[job_id],
[Resource({'key': 'value'})])
def test_process_job_restores_jobs(self):
"""
verify that _process_job() recreates generated jobs
"""
# Set the stage for testing. Setup a session with a known local job,
# representation of the job (checksum) and representation of a single
# result, which has a trivial definition for a 'generated' job.
job_id = 'local'
job = make_job(id=job_id, plugin='local')
jobs_repr = {
job_id: job.checksum
}
results_repr = {
job_id: [{
'outcome': None,
'comments': None,
'execution_duration': None,
'return_code': None,
'io_log': [
[0.0, 'stdout', base64.standard_b64encode(
b'id: generated'
).decode('ASCII')],
[0.1, 'stdout', base64.standard_b64encode(
b'plugin: shell'
).decode('ASCII')],
[0.2, 'stdout', base64.standard_b64encode(
b'command: fake'
).decode('ASCII')]
],
}]
}
helper = self.parameters.resume_cls([job], None, None)
session = SessionState([job])
# Ensure that the 'generated' job was not there initially
self.assertNotIn('generated', session.job_state_map)
self.assertEqual(session.job_list, [job])
# Process the representation data defined above
helper._process_job(session, jobs_repr, results_repr, job_id)
# Ensure that we now have the 'generated' job in the job_state_map
self.assertIn('generated', session.job_state_map)
# And that it looks right
self.assertEqual(
session.job_state_map['generated'].job.id, 'generated')
self.assertIn(
session.job_state_map['generated'].job, session.job_list)
class SessionJobsAndResultsResumeTests(TestCaseWithParameters):
"""
    Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper1`
    through :class:`~plainbox.impl.session.resume.SessionResumeHelper6` and
    how they handle resuming a session using the
    _restore_SessionState_jobs_and_results() method.
"""
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper1,), (SessionResumeHelper2,),
(SessionResumeHelper3,), (SessionResumeHelper4,),
(SessionResumeHelper5,), (SessionResumeHelper6,))
def test_empty_session(self):
"""
verify that _restore_SessionState_jobs_and_results() works when
faced with a representation of an empty session. This is mostly
to do sanity checking on the 'easy' parts of the code before
testing specific cases in the rest of the code.
"""
session_repr = {
'jobs': {},
'results': {}
}
helper = self.parameters.resume_cls([], None, None)
session = SessionState([])
helper._restore_SessionState_jobs_and_results(session, session_repr)
self.assertEqual(session.job_list, [])
self.assertEqual(session.resource_map, {})
self.assertEqual(session.job_state_map, {})
def test_simple_session(self):
"""
verify that _restore_SessionState_jobs_and_results() works when
faced with a representation of a simple session (no generated jobs
or anything "exotic").
"""
job = make_job(id='job')
session_repr = {
'jobs': {
job.id: job.checksum,
},
'results': {
job.id: [{
'outcome': 'pass',
'comments': None,
'execution_duration': None,
'return_code': None,
'io_log': [],
}]
}
}
helper = self.parameters.resume_cls([], None, None)
session = SessionState([job])
helper._restore_SessionState_jobs_and_results(session, session_repr)
# Session still has one job in it
self.assertEqual(session.job_list, [job])
# Resources don't have anything (no resource jobs)
self.assertEqual(session.resource_map, {})
# The result was restored correctly. This is just a smoke test
# as specific tests for restoring results are written elsewhere
self.assertEqual(
session.job_state_map[job.id].result.outcome, 'pass')
def test_session_with_generated_jobs(self):
"""
verify that _restore_SessionState_jobs_and_results() works when
faced with a representation of a non-trivial session where one
job generates another one.
"""
parent = make_job(id='parent', plugin='local')
# The child job is only here so that we can get the checksum.
# We don't actually introduce it into the resume machinery
# caveat: make_job() has a default value for
# plugin='dummy' which we don't want here
child = make_job(id='child', plugin='shell', command='fake')
session_repr = {
'jobs': {
parent.id: parent.checksum,
child.id: child.checksum,
},
'results': {
parent.id: [{
'outcome': 'pass',
'comments': None,
'execution_duration': None,
'return_code': None,
'io_log': [
# This record will generate a job identical
# to the 'child' job defined above.
[0.0, 'stdout', base64.standard_b64encode(
b'id: child\n'
).decode('ASCII')],
[0.1, 'stdout', base64.standard_b64encode(
b'plugin: shell\n'
).decode('ASCII')],
[0.2, 'stdout', base64.standard_b64encode(
b'command: fake\n'
).decode('ASCII')]
],
}],
child.id: [],
}
}
# We only pass the parent to the helper! Child will be re-created
helper = self.parameters.resume_cls([parent], None, None)
session = SessionState([parent])
helper._restore_SessionState_jobs_and_results(session, session_repr)
# We should now have two jobs, parent and child
self.assertEqual(session.job_list, [parent, child])
# Resources don't have anything (no resource jobs)
self.assertEqual(session.resource_map, {})
def test_session_with_generated_jobs2(self):
"""
verify that _restore_SessionState_jobs_and_results() works when
faced with a representation of a non-trivial session where one
job generates another one and that one generates one more.
"""
# XXX: Important information about this test.
# This test uses a very subtle ordering of jobs to achieve
# the desired testing effect. This does not belong in this test case
# and should be split into a dedicated, very well documented method
# The only information I'll leave here now is that
# _restore_SessionState_jobs_and_results() is processing jobs
# in alphabetical order. This coupled with ordering:
# a_grandparent (generated)
# b_child (generated)
# c_parent
# creates the most pathological case possible.
parent = make_job(id='c_parent', plugin='local')
# The child job is only here so that we can get the checksum.
# We don't actually introduce it into the resume machinery
child = make_job(id='b_child', plugin='local', command='fake')
# caveat: make_job() has a default value for
# plugin='dummy' which we don't want here
grandchild = make_job(id='a_grandchild', plugin='shell',
command='fake')
session_repr = {
'jobs': {
parent.id: parent.checksum,
child.id: child.checksum,
grandchild.id: grandchild.checksum,
},
'results': {
parent.id: [{
'outcome': 'pass',
'comments': None,
'execution_duration': None,
'return_code': None,
'io_log': [
# This record will generate a job identical
# to the 'child' job defined above.
[0.0, 'stdout', base64.standard_b64encode(
b'id: b_child\n'
).decode('ASCII')],
[0.1, 'stdout', base64.standard_b64encode(
b'plugin: local\n'
).decode('ASCII')],
[0.2, 'stdout', base64.standard_b64encode(
b'command: fake\n'
).decode('ASCII')]
],
}],
child.id: [{
'outcome': 'pass',
'comments': None,
'execution_duration': None,
'return_code': None,
'io_log': [
# This record will generate a job identical
# to the 'grandchild' job defined above.
[0.0, 'stdout', base64.standard_b64encode(
b'id: a_grandchild\n'
).decode('ASCII')],
[0.1, 'stdout', base64.standard_b64encode(
b'plugin: shell\n'
).decode('ASCII')],
[0.2, 'stdout', base64.standard_b64encode(
b'command: fake\n'
).decode('ASCII')]
],
}],
grandchild.id: [],
}
}
# We only pass the parent to the helper!
# The 'child' and 'grandchild' jobs will be re-created
helper = self.parameters.resume_cls([parent], None, None)
session = SessionState([parent])
helper._restore_SessionState_jobs_and_results(session, session_repr)
# We should now have three jobs: parent, child and grandchild
self.assertEqual(session.job_list, [parent, child, grandchild])
# Resources don't have anything (no resource jobs)
self.assertEqual(session.resource_map, {})
def test_unknown_jobs_get_reported(self):
"""
verify that _restore_SessionState_jobs_and_results() reports
all unresolved jobs (as CorruptedSessionError exception)
"""
session_repr = {
'jobs': {
'job-id': 'job-checksum',
},
'results': {
'job-id': []
}
}
helper = self.parameters.resume_cls([], None, None)
session = SessionState([])
with self.assertRaises(CorruptedSessionError) as boom:
helper._restore_SessionState_jobs_and_results(
session, session_repr)
self.assertEqual(
str(boom.exception), "Unknown jobs remaining: job-id")
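The session representations used throughout these tests encode each io_log record as a ``[delay, stream, base64-payload]`` triplet. A minimal, self-contained sketch of that round-trip (the helper names are illustrative, not plainbox APIs):

```python
import base64


def encode_io_log_record(delay, stream_name, data):
    # Encode raw bytes into the [delay, stream, base64-text] triplet
    # form seen in the session representations above.
    return [delay, stream_name,
            base64.standard_b64encode(data).decode('ASCII')]


def decode_io_log_record(record):
    # Recover the raw bytes from an encoded triplet.
    delay, stream_name, payload = record
    return (delay, stream_name,
            base64.standard_b64decode(payload.encode('ASCII')))


record = encode_io_log_record(0.0, 'stdout', b'id: b_child\n')
```

Decoding such records is what lets the resume helpers re-create generated jobs purely from the stored output of their parent jobs.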
class SessionJobListResumeTests(TestCaseWithParameters):
"""
Tests for :class:`~plainbox.impl.session.resume.SessionResumeHelper1`,
:class:`~plainbox.impl.session.resume.SessionResumeHelper2`,
:class:`~plainbox.impl.session.resume.SessionResumeHelper3` and the later
helpers, and how they restore session.job_list using the
_restore_SessionState_job_list() method.
"""
parameter_names = ('resume_cls',)
parameter_values = ((SessionResumeHelper1,), (SessionResumeHelper2,),
(SessionResumeHelper3,), (SessionResumeHelper4,),
(SessionResumeHelper5,), (SessionResumeHelper6,))
def test_simple_session(self):
"""
verify that _restore_SessionState_job_list() does restore job_list
"""
job_a = make_job(id='a')
job_b = make_job(id='b')
session_repr = {
'jobs': {
job_a.id: job_a.checksum
},
'desired_job_list': [
job_a.id
],
'results': {
job_a.id: [],
}
}
helper = self.parameters.resume_cls([job_a, job_b], None, None)
session = SessionState([job_a, job_b])
helper._restore_SessionState_job_list(session, session_repr)
# Job "a" is still in the list but job "b" got removed
self.assertEqual(session.job_list, [job_a])
# The rest is tested by trim_job_list() tests
class RegressionTests(TestCase):
def test_1387782(self):
"""
https://bugs.launchpad.net/plainbox/+bug/1387782
"""
# This bug is about not being able to resume a session like this:
# - desired job list: [a]
# - run list [a_dep, a] (computed)
# - job_repr: [] # assume a_dep is not there
job_a = make_job(id='a', depends='a_dep')
job_a_dep = make_job(id='a_dep')
job_unrelated = make_job('unrelated')
session_repr = {
'version': 4,
'session': {
'jobs': {}, # nothing ran yet
'desired_job_list': [job_a.id], # we want a to run
'mandatory_job_list': [],
'results': {}, # nothing ran yet
},
}
helper = SessionResumeHelper4([job_a, job_a_dep, job_unrelated],
None, None)
# Mock away meta-data restore code as we're not testing that
with mock.patch.object(helper, '_restore_SessionState_metadata'):
session = helper.resume_json(session_repr)
# Both job_a and job_a_dep are there but job_unrelated is now gone
self.assertEqual(session.job_list, [job_a, job_a_dep])
def test_1388747(self):
"""
https://bugs.launchpad.net/plainbox/+bug/1388747
"""
# This bug is about not being able to resume a session like this:
# - job repr: a => a.checksum
# - desired job list, run list: [a]
# - results: (empty), no a there at all
job_a = make_job(id='a')
session_repr = {
'version': 4,
'session': {
'jobs': {
# a is about to run so it's mentioned in the checksum map
job_a.id: job_a.checksum
},
'desired_job_list': [job_a.id], # we want to run a
'mandatory_job_list': [],
'results': {}, # nothing ran yet
}
}
helper = SessionResumeHelper4([job_a], None, None)
# Mock away meta-data restore code as we're not testing that
with mock.patch.object(helper, '_restore_SessionState_metadata'):
session = helper.resume_json(session_repr)
# job_a has a default hollow result
self.assertTrue(session.job_state_map[job_a.id].result.is_hollow)
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
Session State Handling.
:mod:`plainbox.impl.session.state` -- session state handling
============================================================
"""
import collections
import logging
import re
from plainbox.abc import IJobResult
from plainbox.i18n import gettext as _
from plainbox.impl import deprecated
from plainbox.impl.depmgr import DependencyDuplicateError
from plainbox.impl.depmgr import DependencyError
from plainbox.impl.depmgr import DependencySolver
from plainbox.impl.secure.qualifiers import select_jobs
from plainbox.impl.session.jobs import JobState
from plainbox.impl.session.jobs import UndesiredJobReadinessInhibitor
from plainbox.impl.unit.job import JobDefinition
from plainbox.impl.unit.unit_with_id import UnitWithId
from plainbox.impl.unit.testplan import TestPlanUnitSupport
from plainbox.vendor import morris
logger = logging.getLogger("plainbox.session.state")
class SessionMetaData:
"""
Class representing non-critical state of the session.
The data held here allows applications to reason about sessions in general
but is not relevant to the runner or the core itself.
"""
# Flag indicating that the testing session is not complete and additional
# testing is expected. Applications are encouraged to add this flag
# immediately after creating a new session. Applications are also
# encouraged to remove this flag after the expected test plan is complete
FLAG_INCOMPLETE = "incomplete"
# Flag indicating that results of this testing session have been submitted
# to some central results repository. Applications are encouraged to
# set this flag after successfully sending the result somewhere.
FLAG_SUBMITTED = "submitted"
# Flag indicating that session was just established and requires some
# additional actions before test can commence. Applications are encouraged
# to set this flag after session is created and then add incomplete flag
# once testing begins.
FLAG_BOOTSTRAPPING = "bootstrapping"
def __init__(self, title=None, flags=None, running_job_name=None,
app_blob=None, app_id=None):
"""Initialize a new session state meta-data object."""
if flags is None:
flags = []
self._title = title
self._flags = set(flags)
self._running_job_name = running_job_name
self._app_blob = app_blob
self._app_id = app_id
def __repr__(self):
"""Get the representation of the session state meta-data."""
return "<{} title:{!r} flags:{!r} running_job_name:{!r}>".format(
self.__class__.__name__, self.title, self.flags,
self.running_job_name)
@property
def title(self):
"""
the session title.
Title is just an arbitrary string that can be used to distinguish
between multiple sessions.
The value can be changed at any time.
"""
return self._title
@title.setter
def title(self, title):
"""set the session title to the given value."""
self._title = title
@property
def flags(self):
"""
a set of flags that are associated with this session.
This set is persisted by persistent_save() and can be used to keep
track of how the application wants to interpret this session state.
Intended usage is to keep track of "testing finished" and
"results submitted" flags. Some flags are added as constants to this
class.
"""
return self._flags
@flags.setter
def flags(self, flags):
"""set the session flags to the given set."""
self._flags = flags
@property
def running_job_name(self):
"""
id of the running job.
.. note::
This property has a confusing name. It actually refers to job ID,
not name.
This property should be updated to keep track of the name of the
job that is being executed. When either plainbox or the machine it
was running on crashes during the execution of a job this value
should be preserved and can help the GUI to resume and provide an
error message.
The property MUST be set before starting the job itself.
"""
return self._running_job_name
@running_job_name.setter
def running_job_name(self, running_job_name):
"""set the id of the running job."""
self._running_job_name = running_job_name
@property
def app_blob(self):
"""
Custom, application specific binary blob.
The type and value of this property is irrelevant as it is not
inspected by plainbox at all. Reasonable applications will not make use
of this property for storing large amounts of data. If you are tempted
to do that, please redesign your application or propose changes to
plainbox.
"""
return self._app_blob
@app_blob.setter
def app_blob(self, value):
"""set the application specific binary blob to the given value."""
if value is not None and not isinstance(value, bytes):
# TRANSLATORS: please don't translate app_blob, None and bytes
raise TypeError(_("app_blob must be either None or bytes"))
self._app_blob = value
@property
def app_id(self):
"""
Application identifier.
A string identifying the application that stored app_blob. It is
recommended to use reverse domain names or UUIDs.
"""
return self._app_id
@app_id.setter
def app_id(self, value):
"""Set the application identifier to the given value."""
if value is not None and not isinstance(value, str):
# TRANSLATORS: please don't translate app_blob, None and bytes
raise TypeError(_("app_id must be either None or str"))
self._app_id = value
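The ``app_blob`` setter above enforces a strict None-or-bytes contract so that applications cannot accidentally store unserializable state. A minimal, self-contained sketch of that guard pattern (the class name and payload are illustrative, not the plainbox class itself):

```python
class MetaDataSketch:
    # Minimal stand-in mirroring SessionMetaData's app_blob type guard;
    # for demonstration only.
    def __init__(self):
        self._app_blob = None

    @property
    def app_blob(self):
        return self._app_blob

    @app_blob.setter
    def app_blob(self, value):
        # Reject anything that is neither None nor bytes, exactly like
        # the setter above.
        if value is not None and not isinstance(value, bytes):
            raise TypeError("app_blob must be either None or bytes")
        self._app_blob = value


md = MetaDataSketch()
md.app_blob = b'{"stage": "smoke"}'
```

Keeping the blob as opaque bytes means the session layer can persist it verbatim without inspecting or re-encoding application data.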
class SessionDeviceContext:
"""
Session context specific to a given device.
This class exposes access to a "world view" unique to a specific device.
The view is composed of the following attributes:
:attr _provider_list:
A list of providers known by this device. All of those providers
are compatible with the device.
:attr _unit_list:
A list of all the units known by this device. Initially it is identical
to the union of all the units from ``_provider_list`` but it is in fact
mutable and can be grown (or shrunk in some cases) when jobs are
created at runtime.
:attr _test_plan_list:
A list of test plans that this device will be executing. This is stored
so that all job changes can automatically apply field overrides to job
state.
:attr _device:
Always None, this is a future extension point
:attr _state:
A :class:`SessionState` object that holds all of the job results
and also exposes some legacy API for computing the run list and the
desired job list
"""
# Cache key that stores the list of execution controllers
_CACHE_EXECUTION_CTRL_LIST = 'execution_controller_list'
# Cache key that stores the map of field overrides
_CACHE_OVERRIDE_MAP = 'override_map'
def __init__(self, state=None):
"""
Initialize a new SessionDeviceContext.
:param state:
An (optional) state to use
Note that using an initial state will not cause any of the signals to
fire for the initial list of units nor the list of providers (derived
from the same list).
"""
self._device = None
# Setup an empty computation cache for this context
self._shared_cache = {}
if state is None:
# If we don't have to work with an existing state object
# (the preferred mode) then all life is easy as we control both
# the unit list and the provider list
self._unit_list = []
self._provider_list = []
self._state = SessionState(self._unit_list)
self._unit_id_map = {}
else:
if not isinstance(state, SessionState):
raise TypeError
# If we do have an existing state object then our lists must be
# obtained / derived from the state object's data
self._unit_list = state.unit_list
self._provider_list = list({
unit.provider for unit in self._unit_list
})
self._state = state
self._unit_id_map = {unit.id: unit for unit in state.unit_list if
isinstance(unit, UnitWithId)}
self._test_plan_list = []
# Connect SessionState's signals to fire our signals. This
# way all manipulation done through the SessionState object
# can be observed through the SessionDeviceContext object
# (and vice versa, as all the manipulation is forwarded to
# the SessionState)
self._state.on_unit_added.connect(self.on_unit_added)
self._state.on_unit_removed.connect(self.on_unit_removed)
@property
def device(self):
"""
The device associated with this context.
.. warning::
Currently this method will always return None. In the future it
will return an object that describes the device.
"""
return self._device
@property
def state(self):
"""
The session state object associated with this context.
.. note::
You can use both the session state and the session device context
to query and monitor the changes to all the participating units
"""
return self._state
@property
def provider_list(self):
"""
The list of providers currently available in this context.
.. note::
You must not modify the return value.
This is not enforced but please use the :meth:`add_provider()`
method if you want to add a provider. Currently you cannot
remove providers or reorder the list of providers.
"""
return self._provider_list
@property
def unit_list(self):
"""
The list of units currently available in this context.
.. note::
You must not modify the return value.
This is not enforced but please use the :meth:`add_unit()`
or :meth:`remove_unit()` if you want to manipulate the list.
Currently you cannot reorder the list of units.
"""
return self._unit_list
@property
def execution_controller_list(self):
"""
A list of execution controllers applicable in this context.
:returns:
A list of IExecutionController objects
.. note::
The return value is different whenever a provider is added to the
context. If you have obtained this value in the past it may no
longer be accurate.
"""
return self.compute_shared(
self._CACHE_EXECUTION_CTRL_LIST, self._compute_execution_ctrl_list)
@property
def override_map(self):
"""
A map of field overrides applicable in this context.
:returns:
A mapping from regular-expression patterns (matching job ids) to
lists of field overrides collected from the configured test plans
.. note::
The return value is different whenever the test plan list is set.
If you have obtained this value in the past it may no longer be
accurate.
"""
return self.compute_shared(
self._CACHE_OVERRIDE_MAP, self._compute_override_map)
def set_test_plan_list(self, test_plan_list: "List[TestPlanUnit]"):
"""
Compute all of the effective job state values.
:param test_plan_list:
The list of test plans to consider
This method is intended to be called exactly once per session, after
the application determines the set of test plans it intends to execute.
The method will collect all of the override values exposed by all of
the test plans and apply them in sequence. Note that correct
applications must also perform micro-updates whenever a new test job is
added to the system.
"""
self._test_plan_list = test_plan_list
self._invalidate_override_map()
self._bulk_override_update()
if test_plan_list:
self._update_mandatory_job_list()
def add_provider(self, provider, add_units=True):
"""
Add a provider to the context.
:param provider:
The :class:`Provider1` to add
:param add_units:
An optional flag that controls if all of the units from that
provider should be added. Defaults to True.
:raises ValueError:
If the provider is already in the context
This method can be used to add a provider to the context. It also adds
all of the units of that provider automatically.
.. note::
This method fires the :meth:`on_provider_added()` signal but
it does so before any of the units from that provider are added.
"""
if provider in self._provider_list:
raise ValueError(_("attempting to add the same provider twice"))
self._provider_list.append(provider)
self.on_provider_added(provider)
if add_units:
for unit in provider.unit_list:
self.add_unit(unit)
def add_unit(self, unit):
"""
Add a unit to the context.
:param unit:
The :class:`Unit` to add.
:raises ValueError:
If the unit is already in the context
This method can be used to register both the initially-known units
as well as units generated at runtime.
This method fires the :meth:`on_unit_added()` signal
"""
if unit in self._unit_list:
raise ValueError(_("attempting to add the same unit twice"))
self.state.add_unit(unit)
# NOTE: no need to fire the on_unit_added() signal because the state
# object we've connected to will fire our version.
def remove_unit(self, unit):
"""
Remove a unit from the context.
:param unit:
The :class:`Unit` to remove.
This method fires the :meth:`on_unit_removed()` signal
"""
if unit not in self._unit_list:
raise ValueError(
_("attempting to remove unit not in this context"))
self.state.remove_unit(unit)
# NOTE: no need to fire the on_unit_removed() signal because the state
# object we've connected to will fire our version.
def get_unit(self, unit_id, kind_name=None):
"""
Get a unit with a specific identifier.
:param unit_id:
The identifier of the unit to find
:param kind_name:
(optional) Name of the type of unit. By default units of any type
can be found. Unit kind is the value of the ``unit.Meta.name``
attribute. Using this argument allows the caller to quickly find
only units of a particular type without having to do the filtering
on their side.
:raises KeyError:
If the matching unit does not exist.
"""
unit = self._unit_id_map[unit_id]
if kind_name is not None and unit.Meta.name != kind_name:
raise KeyError(unit_id)
return unit
def get_ctrl_for_job(self, job):
"""
Get the execution controller most applicable to run this job.
:param job:
A job definition to run
:returns:
An execution controller instance
:raises LookupError:
if no execution controller capable of running the specified job can
be found
The best controller is the controller that has the highest score
(as computed by :meth:`IExecutionController.get_score()`) for the
job in question.
"""
# Compute the score of each controller
ctrl_score = [
(ctrl, ctrl.get_score(job))
for ctrl in self.execution_controller_list]
# Sort scores
ctrl_score.sort(key=lambda pair: pair[1])
# Get the best score
ctrl, score = ctrl_score[-1]
# Ensure that the controller is viable
if score < 0:
raise LookupError(
_("No exec controller supports job {}").format(job))
logger.debug(
_("Selected execution controller %s (score %d) for job %r"),
ctrl.__class__.__name__, score, job.id)
return ctrl
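The selection logic in ``get_ctrl_for_job()`` can be sketched in isolation: score everything, sort ascending, take the last entry, and treat a negative best score as "nothing can run this job". The stand-in controller class below is hypothetical, for demonstration only:

```python
def pick_best_controller(controllers, job):
    # Score every controller and keep the pairs sorted ascending by
    # score; the last entry is the best candidate.
    ctrl_score = sorted(
        ((ctrl, ctrl.get_score(job)) for ctrl in controllers),
        key=lambda pair: pair[1])
    ctrl, score = ctrl_score[-1]
    # A negative best score means no controller is viable at all.
    if score < 0:
        raise LookupError(
            "No exec controller supports job {}".format(job))
    return ctrl


class FakeCtrl:
    # Hypothetical stand-in for IExecutionController, used only to
    # exercise the selection sketch.
    def __init__(self, name, score):
        self.name = name
        self._score = score

    def get_score(self, job):
        return self._score


best = pick_best_controller(
    [FakeCtrl('user', 1), FakeCtrl('root-sudo', 3), FakeCtrl('n/a', -1)],
    job='wireless/scan')
```

Sorting ascending and indexing ``[-1]`` is equivalent to a max-by-score, but it keeps the full ranking around, which is handy when logging why a given controller won.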
@morris.signal
def on_provider_added(self, provider):
"""Signal sent whenever a provider is added to the context."""
logger.info(_("Provider %s added to context %s"), provider, self)
# Invalidate the list of execution controllers as they depend
# on the accuracy of provider_list
self._invalidate_execution_ctrl_list()
@morris.signal
def on_unit_added(self, unit):
"""Signal sent whenever a unit is added to the context."""
logger.debug(_("Unit %s added to context %s"), unit, self)
if unit.Meta.name == 'job':
self.on_job_added(unit)
if isinstance(unit, UnitWithId):
self._unit_id_map[unit.id] = unit
@morris.signal
def on_job_added(self, job):
"""Signal sent whenever a new job unit is added to the context."""
self._override_update(job)
@morris.signal
def on_unit_removed(self, unit):
"""Signal sent whenever a unit is removed from the context."""
logger.debug(_("Unit %s removed from context %s"), unit, self)
if isinstance(unit, UnitWithId):
del self._unit_id_map[unit.id]
def compute_shared(self, cache_key, func, *args, **kwargs):
"""
Compute a shared helper.
:param cache_key:
Key to use to lookup the helper value
:param func:
Function that computes the helper value. The function is called
with the context as the only argument
:returns:
Return value of func(self, *args, **kwargs) (possibly computed
earlier).
Compute something that can be shared by all users of the device context
This allows certain expensive computations to be performed only once.
.. note::
The caller is responsible for ensuring that ``args`` and ``kwargs``
match the `cache_key` each time this function is called.
"""
if cache_key not in self._shared_cache:
self._shared_cache[cache_key] = func(*args, **kwargs)
return self._shared_cache[cache_key]
def invalidate_shared(self, cache_key):
"""Invalidate a cached shared value."""
if cache_key in self._shared_cache:
del self._shared_cache[cache_key]
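The ``compute_shared()`` / ``invalidate_shared()`` pair above is a simple keyed memoization: compute on first request, serve from the cache afterwards, and recompute only after explicit invalidation. A self-contained sketch (class and function names are illustrative):

```python
class SharedCacheSketch:
    # Illustrative stand-in for the compute_shared()/invalidate_shared()
    # pattern; not the actual SessionDeviceContext class.
    def __init__(self):
        self._shared_cache = {}

    def compute_shared(self, cache_key, func, *args, **kwargs):
        # Only call func the first time a given key is requested.
        if cache_key not in self._shared_cache:
            self._shared_cache[cache_key] = func(*args, **kwargs)
        return self._shared_cache[cache_key]

    def invalidate_shared(self, cache_key):
        # Drop the cached value so the next lookup recomputes it.
        self._shared_cache.pop(cache_key, None)


calls = []


def expensive():
    # Record each real computation so caching behaviour is observable.
    calls.append(1)
    return 42


cache = SharedCacheSketch()
first = cache.compute_shared('answer', expensive)
second = cache.compute_shared('answer', expensive)
```

This is why the execution-controller list and the override map can be looked up freely: repeated lookups are cheap until a provider or test-plan change invalidates the corresponding key.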
def _compute_execution_ctrl_list(self):
"""Compute the list of execution controllers."""
# TODO: tie this with the upcoming device patches
import sys
if sys.platform == 'linux':
from plainbox.impl.ctrl import RootViaPkexecExecutionController
from plainbox.impl.ctrl import RootViaPTL1ExecutionController
from plainbox.impl.ctrl import RootViaSudoExecutionController
from plainbox.impl.ctrl import UserJobExecutionController
return [
RootViaPTL1ExecutionController(self.provider_list),
RootViaPkexecExecutionController(self.provider_list),
# XXX: maybe this one should be only used on command line
RootViaSudoExecutionController(self.provider_list),
UserJobExecutionController(self.provider_list),
]
elif sys.platform == 'win32':
from plainbox.impl.ctrl import UserJobExecutionController
return [UserJobExecutionController(self.provider_list)]
else:
logger.warning("Unsupported platform: %s", sys.platform)
return []
def _invalidate_execution_ctrl_list(self, *args, **kwargs):
"""Invalidate the list of execution controllers."""
self.invalidate_shared(self._CACHE_EXECUTION_CTRL_LIST)
def _compute_override_map(self):
"""Compute the map of field overrides."""
override_map = collections.defaultdict(list)
for test_plan in self._test_plan_list:
support = TestPlanUnitSupport(test_plan)
for pattern, override_list in support.override_list:
override_map[pattern].extend(override_list)
return override_map
def _invalidate_override_map(self, *args, **kwargs):
"""Invalidate the cached field override map."""
self.invalidate_shared(self._CACHE_OVERRIDE_MAP)
def _bulk_override_update(self):
# NOTE: there is an O(N) algorithm that solves this but it is more
# complicated than I was able to write without a hard-copy reference
# that describes it. I will improve this method once I complete the
# required research.
for job_state in self.state.job_state_map.values():
job = job_state.job
for pattern, override_list in self.override_map.items():
if re.match(pattern, job.id):
job_state.apply_overrides(override_list)
def _override_update(self, job):
job_state = self.state.job_state_map[job.id]
for pattern, override_list in self.override_map.items():
if re.match(pattern, job.id):
job_state.apply_overrides(override_list)
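The two methods above share one idea: test plans contribute ``(pattern, overrides)`` pairs, which are merged into a map, and a job receives every override whose pattern matches its id. A self-contained sketch of that flow, with illustrative pattern and field values:

```python
import collections
import re


def build_override_map(per_plan_overrides):
    # Merge the override lists from each test plan, keyed by the
    # job-id pattern they apply to (mirrors _compute_override_map()).
    override_map = collections.defaultdict(list)
    for pattern, override_list in per_plan_overrides:
        override_map[pattern].extend(override_list)
    return override_map


def overrides_for(job_id, override_map):
    # Collect every override whose pattern matches the job id
    # (mirrors the re.match() loop in _override_update()).
    collected = []
    for pattern, override_list in override_map.items():
        if re.match(pattern, job_id):
            collected.extend(override_list)
    return collected


override_map = build_override_map([
    ('wireless/.*', [('certification_status', 'blocker')]),
    ('audio/.*', [('certification_status', 'non-blocker')]),
])
```

Because ``re.match()`` anchors at the start of the string, a pattern like ``wireless/.*`` behaves as a prefix filter over job ids.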
def _update_mandatory_job_list(self):
qualifier_list = []
for test_plan in self._test_plan_list:
qualifier_list.append(test_plan.get_mandatory_qualifier())
mandatory_job_list = select_jobs(
self.state.job_list, qualifier_list)
self.state.update_mandatory_job_list(mandatory_job_list)
self.state.update_desired_job_list(self.state.desired_job_list)
class SessionState:
"""
Class representing all state needed during a single program session.
This is the central glue/entry-point for applications. It connects user
intents to the rest of the system / plumbing and keeps all of the state in
one place.
The set of utility methods and properties allow applications to easily
handle the lower levels of dependencies, resources and ready states.
:class:`SessionState` has the following instance variables, all of which
are currently exposed as properties.
:ivar list job_list: A list of all known jobs
Not all the jobs from this list are going to be executed (or selected
for execution) by the user.
It may change at runtime because of local jobs. Note that in upcoming
changes this will start out empty and will be changeable dynamically.
It can still change due to local jobs but there is no API yet.
This list cannot have any duplicates, if that is the case a
:class:`DependencyDuplicateError` is raised. This has to be handled
externally and is a sign that the job database is corrupted or has
wrong data. As an exception if duplicates are perfectly identical this
error is silently corrected.
:ivar list unit_list: A list of all known units
This list contains all the known units, including all the know job
definitions (and in the future, all test plans).
It may change at runtime because of local jobs and template
instantiations.
:ivar dict job_state_map: mapping that tracks the state of each job
Mapping from job id to :class:`JobState`. This basically has the test
result and the inhibitor of each job. It also serves as a
:attr:`plainbox.impl.job.JobDefinition.id`-> job lookup helper.
Directly exposed with the intent to fuel part of the UI. This is a way
to get at the readiness state, result and readiness inhibitors, if any.
XXX: this can lose data if job_list has jobs with the same id. It would
be better to use job id as the keys here. A separate map could be used
for the id->job lookup. This will be fixed when session controller
branch lands in trunk as then jobs are dynamically added to the system
one at a time and proper error conditions can be detected and reported.
:ivar list desired_job_list: subset of jobs selected for execution
This is used to compute :attr:`run_list`. It can only be changed by
calling :meth:`update_desired_job_list()` which returns meaningful
values so this is not a settable property.
:ivar list run_list: sorted list of jobs to execute
This is basically a superset of desired_job_list and a subset of
job_list that is topologically sorted to allow all desired jobs to
run. This property is updated whenever desired_job_list is changed.
:ivar dict resource_map: all known resources
A mapping from resource id to a list of
:class:`plainbox.impl.resource.Resource` objects. This encapsulates all
"knowledge" about the system plainbox is running on.
It is needed to compute job readiness (as it stores resource data
needed by resource programs). It is also available to exporters.
This is computed internally from the output of checkbox resource jobs,
it can only be changed by calling :meth:`update_job_result()`
:ivar dict metadata: instance of :class:`SessionMetaData`
"""
@morris.signal
def on_job_state_map_changed(self):
"""
Signal fired after job_state_map is changed in any way.
This signal is always fired before any more specialized signals
such as :meth:`on_job_result_changed()` and :meth:`on_job_added()`.
This signal is fired pretty often, each time a job result is
presented to the session and each time a job is added. When
both of those events happen at the same time only one notification
is sent. The actual state is not sent as it is quite extensive
and can be easily looked at by the application.
"""
@morris.signal
def on_job_result_changed(self, job, result):
"""
Signal fired after a job result is changed (set).
This signal is fired each time a result is presented to the session.
This signal is fired **after** :meth:`on_job_state_map_changed()`
"""
logger.info(_("Job %s result changed to %r"), job, result)
@morris.signal
def on_job_added(self, job):
"""
Signal sent whenever a job is added to the session.
This signal is fired **after** :meth:`on_job_state_map_changed()`
"""
@morris.signal
def on_job_removed(self, job):
"""
Signal sent whenever a job is removed from the session.
This signal is fired **after** :meth:`on_job_state_map_changed()`
"""
@morris.signal
def on_unit_added(self, unit):
"""Signal sent whenever a unit is added to the session."""
@morris.signal
def on_unit_removed(self, unit):
"""Signal sent whenever a unit is removed from the session."""
def __init__(self, unit_list):
"""
Initialize a new SessionState with a given list of units.
The units are all of the units (including jobs) that the
session knows about.
"""
# Start by making a copy of job_list as we may modify it below
job_list = [unit for unit in unit_list
if isinstance(unit, JobDefinition)]
while True:
try:
# Construct a solver with the job list as passed by the caller.
# This will do a little bit of validation and might raise
# DependencyDuplicateError if there are any duplicates at this
# stage.
#
# There's a single case that is handled here though, if both
# jobs are identical this problem is silently fixed. This
# should not happen in normal circumstances but is nonetheless
# harmless (as long as both jobs are perfectly identical)
#
# Since this problem can happen any number of times (many
# duplicates) this is performed in a loop. The loop breaks when
# we cannot solve the problem _OR_ when no error occurs.
DependencySolver(job_list)
except DependencyDuplicateError as exc:
# If both jobs are identical then silently fix the problem by
# removing one of the jobs (here the second one we've seen but
# it's not relevant as they are possibly identical) and try
# again
if exc.job == exc.duplicate_job:
job_list.remove(exc.duplicate_job)
continue
else:
# If the jobs differ report this back to the caller
raise
else:
# If there are no problems then break the loop
break
self._job_list = job_list
self._unit_list = unit_list
self._job_state_map = {job.id: JobState(job)
for job in self._job_list}
self._desired_job_list = []
self._mandatory_job_list = []
self._run_list = []
self._resource_map = {}
self._metadata = SessionMetaData()
super(SessionState, self).__init__()
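The constructor's retry loop above silently drops perfectly identical duplicates and only raises when two *different* jobs share an id. A self-contained sketch of that loop, using plain dicts and a stand-in exception (names are illustrative, not the plainbox classes):

```python
class DuplicateError(Exception):
    # Illustrative stand-in for DependencyDuplicateError.
    def __init__(self, job, duplicate_job):
        self.job = job
        self.duplicate_job = duplicate_job


def check_duplicates(job_list):
    # Raise on the first pair of jobs sharing an id.
    seen = {}
    for job in job_list:
        if job['id'] in seen:
            raise DuplicateError(seen[job['id']], job)
        seen[job['id']] = job


def silently_dedupe(job_list):
    # Retry-until-stable loop: drop duplicates that compare equal,
    # re-raise when two different jobs share an id.
    job_list = list(job_list)
    while True:
        try:
            check_duplicates(job_list)
        except DuplicateError as exc:
            if exc.job == exc.duplicate_job:
                job_list.remove(exc.duplicate_job)
                continue
            raise
        else:
            return job_list


job_a = {'id': 'a'}
result = silently_dedupe([job_a, job_a, {'id': 'b'}])
```

The loop terminates because each iteration either removes one element or exits, so it runs at most ``len(job_list)`` times.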
def trim_job_list(self, qualifier):
"""
Discard jobs that are selected by the given qualifier.
:param qualifier:
A qualifier that selects jobs to be removed
:ptype qualifier:
IJobQualifier
:raises ValueError:
If any of the jobs selected by the qualifier is on the desired job
list (or the run list)
This function correctly and safely discards certain jobs from the job
list. It also removes the associated job state (and referenced job
result) and results (for jobs that were resource jobs)
"""
# Build a list for each of the jobs in job_list, that tells us if we
# should remove that job. This way we only call the qualifier once per
# job and can do efficient operations later.
#
# The whole function should be O(N), where N is len(job_list)
remove_flags = [
qualifier.designates(job) for job in self._job_list]
# Build a list of (job, should_remove) flags, we'll be using this list
# a few times below.
job_and_flag_list = list(zip(self._job_list, remove_flags))
# Build a set of ids of jobs that we'll be removing
remove_job_id_set = frozenset([
job.id for job, should_remove in job_and_flag_list
if should_remove is True])
# Build a set of ids of jobs that are on the run list
run_list_id_set = frozenset([job.id for job in self.run_list])
# Check if this is safe to do. None of the jobs may be in the run list
# (or the desired job list which is always a subset of run list)
unremovable_job_id_set = remove_job_id_set.intersection(
run_list_id_set)
if unremovable_job_id_set:
raise ValueError(
_("cannot remove jobs that are on the run list: {}").format(
', '.join(sorted(unremovable_job_id_set))))
# Remove job state and resources (if present) for all the jobs we're
# about to remove. Note that while each job has a state object not all
# jobs generated resources so that removal is conditional.
for job, should_remove in job_and_flag_list:
if should_remove:
del self._job_state_map[job.id]
if job.id in self._resource_map:
del self._resource_map[job.id]
# Compute a list of jobs to retain
retain_list = [
job for job, should_remove in job_and_flag_list
if should_remove is False]
# And a list of jobs to remove
remove_list = [
job for job, should_remove in job_and_flag_list
if should_remove is True]
# Replace job list with the filtered list
self._job_list = retain_list
if remove_list:
# Notify that the job state map has changed
self.on_job_state_map_changed()
# And that each removed job was actually removed
for job in remove_list:
self.on_job_removed(job)
self.on_unit_removed(job)
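The trim algorithm above can be sketched in isolation: evaluate the qualifier once per job, refuse when any doomed job is protected (on the run list), then filter the state maps in O(N). All names below are illustrative stand-ins, not the plainbox API.

```python
def trim(jobs, state_map, protected_ids, should_remove):
    """Remove jobs matching should_remove(); refuse if any is protected."""
    # Evaluate the predicate exactly once per job.
    flags = [should_remove(job) for job in jobs]
    doomed = frozenset(j["id"] for j, f in zip(jobs, flags) if f)
    # Safety check: none of the doomed jobs may be protected.
    stuck = doomed & frozenset(protected_ids)
    if stuck:
        raise ValueError(
            "cannot remove jobs that are on the run list: {}".format(
                ", ".join(sorted(stuck))))
    # Drop per-job state for removed jobs (resource state would be
    # handled the same way) and return the retained jobs.
    for job, flag in zip(jobs, flags):
        if flag:
            state_map.pop(job["id"], None)
    return [job for job, flag in zip(jobs, flags) if not flag]
```

The single pass over precomputed flags is what keeps the whole operation linear in the number of jobs.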
def update_mandatory_job_list(self, mandatory_job_list):
"""
Update the set of mandatory jobs (that must run).
This method simply stores the list of mandatory jobs inside the session
state. The next time the set of desired jobs is altered via a call to
:meth:`update_desired_job_list()` the effective selection will also
include mandatory jobs.
"""
self._mandatory_job_list = mandatory_job_list
def update_desired_job_list(self, desired_job_list):
"""
Update the set of desired jobs (that ought to run).
This method can be used by the UI to recompute the dependency graph.
The argument 'desired_job_list' is a list of jobs that should run.
Those jobs must be a sub-collection of the job_list argument that was
passed to the constructor.
It never fails although it may reduce the actual permitted
desired_job_list to an empty list. It returns a list of problems (all
instances of DependencyError class), one for each job that had to be
removed.
"""
# Remember a copy of original desired job list. We may modify this list
# so let's not mess up data passed by the caller.
self._desired_job_list = list(
desired_job_list + self._mandatory_job_list)
# Reset run list just in case desired_job_list is empty
self._run_list = []
# Try to solve the dependency graph. This is done in a loop as we may
# need to remove a problematic job and re-try. The loop has a natural
# stop condition as we will eventually run out of jobs.
problems = []
# Get a copy of all the jobs as we'll be removing elements from this
# list to come to a stable set in the loop below.
job_list = self._job_list[:]
while self._desired_job_list:
# XXX: it might be more efficient to incorporate this 'recovery
# mode' right into the solver, this way we'd probably save some
# resources or runtime complexity.
try:
self._run_list = DependencySolver.resolve_dependencies(
job_list, self.mandatory_job_list + self._desired_job_list)
except DependencyError as exc:
# When a dependency error is detected remove the affected job
# from _desired_job_list and try again.
if exc.affected_job in self._desired_job_list:
# The job may have been removed by now:
# https://bugs.launchpad.net/plainbox/+bug/1444126
self._desired_job_list.remove(exc.affected_job)
if exc.affected_job in job_list:
# If the affected job is in the job list, remove it from
# the job list we're going to consider in the next run.
# This is done so that if a job depends on a broken but
# existing job, it won't constantly re-add the same broken
# job over and over (so that the algorithm can stop).
job_list.remove(exc.affected_job)
# Remember each problem, this can be presented by the UI
problems.append(exc)
continue
else:
# Don't iterate the loop if there was no exception
break
# Update all job readiness state
self._recompute_job_readiness()
# Return all dependency problems to the caller
return problems
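The retry loop above can be sketched standalone: a toy depth-first solver raises on the first missing dependency, and the caller drops the offending job (from both the desired list and the candidate pool, so a broken job cannot be re-added endlessly) and re-solves until a stable run list emerges. Names here are illustrative, not the plainbox DependencySolver API.

```python
class MissingDep(Exception):
    def __init__(self, affected, missing):
        super().__init__(affected, missing)
        self.affected, self.missing = affected, missing


def resolve(pool, desired):
    """Return desired jobs plus transitive deps, dependencies first."""
    order, seen = [], set()

    def visit(job, needed_by):
        if job in seen:
            return
        if job not in pool:
            # Report the job that *wanted* the missing dependency.
            raise MissingDep(needed_by if needed_by else job, job)
        seen.add(job)
        for dep in pool[job]:
            visit(dep, job)
        order.append(job)

    for job in desired:
        visit(job, None)
    return order


def solve_with_recovery(pool, desired):
    """Re-solve after dropping each problematic job; collect problems."""
    pool, desired, problems = dict(pool), list(desired), []
    while desired:
        try:
            return resolve(pool, desired), problems
        except MissingDep as exc:
            if exc.affected in desired:
                desired.remove(exc.affected)
            pool.pop(exc.affected, None)
            problems.append((exc.affected, exc.missing))
    return [], problems
```

Each iteration shrinks either the desired list or the pool, which is what guarantees termination.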
def get_estimated_duration(self, manual_overhead=30.0):
"""
Estimate the total duration of the session.
Provide the estimated duration of the jobs that have been selected
to run in this session (maintained by calling update_desired_job_list).
Manual jobs have an arbitrary figure added to their runtime to allow
for execution of the test steps and verification of the result.
:returns: (estimate_automated, estimate_manual)
where estimate_automated is the value for automated jobs only and
estimate_manual is the value for manual jobs only. These can be
easily combined. Either value can be None if the value could not be
calculated due to any job lacking the required estimated_duration
field.
"""
estimate_automated = 0.0
estimate_manual = 0.0
for job in self._run_list:
if job.automated and estimate_automated is not None:
if job.estimated_duration is not None:
estimate_automated += job.estimated_duration
elif job.plugin != 'local':
estimate_automated = None
elif not job.automated and estimate_manual is not None:
# We add a fixed extra amount of seconds to the run time
# for manual jobs to account for the time taken in reading
# the description and performing any necessary steps
estimate_manual += manual_overhead
if job.estimated_duration is not None:
estimate_manual += job.estimated_duration
elif job.command:
estimate_manual = None
return (estimate_automated, estimate_manual)
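A condensed sketch of the estimation rule above: durations accumulate into an automated and a manual bucket, each manual job gets a fixed overhead, and a bucket collapses to None as soon as a contributing job lacks an estimate (simplified here: the real method exempts some job types). The `(automated, duration)` tuples are illustrative stand-ins for job definitions.

```python
def estimate(jobs, manual_overhead=30.0):
    """jobs: iterable of (automated: bool, duration: float or None)."""
    auto, manual = 0.0, 0.0
    for automated, duration in jobs:
        if automated and auto is not None:
            # One unknown automated duration poisons the whole bucket.
            auto = auto + duration if duration is not None else None
        elif not automated and manual is not None:
            # Fixed overhead for reading the description and verifying.
            manual += manual_overhead
            manual = manual + duration if duration is not None else None
    return auto, manual
```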
def update_job_result(self, job, result):
"""
Notice the specified test result and update readiness state.
This function updates the internal result collection with the data from
the specified test result. Results can safely override older results.
Results also change the ready map (jobs that can run) because of
dependency relations.
Some results have a deeper meaning; these are results for local and
resource jobs. They are discussed in detail below:
Resource jobs produce resource records which are used as data to run
requirement expressions against. Each time a result for a resource job
is presented to the session it will be parsed as a collection of RFC822
records. A new entry is created in the resource map (entirely replacing
any old entries), with a list of the resources that were parsed from
the IO log.
Local jobs produce more jobs. Like with resource jobs, their IO log is
parsed and interpreted as additional jobs. Unlike resource jobs,
local jobs don't replace anything. They cannot replace an existing job
with the same id.
"""
job.controller.observe_result(self, job, result)
self._recompute_job_readiness()
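Resource records, as described in the docstring above, are RFC822-style key/value stanzas parsed out of the resource job's stdout, and each new result entirely replaces the old entry in the resource map. A hypothetical mini-parser (blank-line-separated stanzas of `key: value` lines; the real implementation uses plainbox's RFC822 parser):

```python
def parse_resource_records(text):
    """Parse 'key: value' stanzas separated by blank lines."""
    records, current = [], {}
    for line in text.splitlines():
        if not line.strip():
            # Blank line ends the current record.
            if current:
                records.append(current)
                current = {}
        elif ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:
        records.append(current)
    return records
```

Replacing (rather than merging) the map entry is what lets a re-run resource job fully refresh stale data.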
@deprecated('0.9', 'use the add_unit() method instead')
def add_job(self, new_job, recompute=True):
"""
Add a new job to the session.
:param new_job:
The job being added
:param recompute:
If True, recompute readiness inhibitors for all jobs.
You should only set this to False if you're adding
a number of jobs and will otherwise ensure that
:meth:`_recompute_job_readiness()` gets called before
session state users can see the state again.
:returns:
The job that was actually added or an existing, identical
job if a perfect clash was silently ignored.
:raises DependencyDuplicateError:
if a duplicate, clashing job definition is detected
The new_job gets added to all the state tracking objects of the
session. The job is initially not selected to run (it is not in the
desired_job_list and has the undesired inhibitor).
The new_job may clash with an existing job with the same id. Unless
both jobs are identical this will cause DependencyDuplicateError to be
raised. Identical jobs are silently discarded.
.. note::
This method recomputes job readiness for all jobs
"""
return self.add_unit(new_job, recompute)
def add_unit(self, new_unit, recompute=True):
"""
Add a new unit to the session.
:param new_unit:
The unit being added
:param recompute:
If True, recompute readiness inhibitors for all jobs.
You should only set this to False if you're adding
a number of jobs and will otherwise ensure that
:meth:`_recompute_job_readiness()` gets called before
session state users can see the state again.
:returns:
The unit that was actually added or an existing, identical
unit if a perfect clash was silently ignored.
:raises DependencyDuplicateError:
if a duplicate, clashing job definition is detected
.. note::
The following applies only to newly added job units:
The new_unit gets added to all the state tracking objects of the
session. The job unit is initially not selected to run (it is not
in the desired_job_list and has the undesired inhibitor).
The new_unit job may clash with an existing job with the same id.
Unless both jobs are identical this will cause
DependencyDuplicateError to be raised. Identical jobs are silently
discarded.
.. note::
This method recomputes job readiness for all jobs unless the
recompute=False argument is used. Recomputing takes a while so if
you want to add a lot of units consider setting that to False and
only recompute at the last call.
"""
if new_unit.Meta.name == 'job':
return self._add_job_unit(new_unit, recompute)
else:
return self._add_other_unit(new_unit)
def _add_other_unit(self, new_unit):
self.unit_list.append(new_unit)
self.on_unit_added(new_unit)
return new_unit
def _add_job_unit(self, new_job, recompute):
# See if we have a job with the same id already
try:
existing_job = self.job_state_map[new_job.id].job
except KeyError:
# Register the new job in our state
self.job_state_map[new_job.id] = JobState(new_job)
self.job_list.append(new_job)
self.unit_list.append(new_job)
self.on_job_state_map_changed()
self.on_unit_added(new_job)
self.on_job_added(new_job)
return new_job
else:
# If there is a clash, report DependencyDuplicateError only when the
# hashes are different. This prevents a common "problem" where
# "__foo__" local jobs just load all jobs from the "foo" category.
if new_job != existing_job:
raise DependencyDuplicateError(existing_job, new_job)
return existing_job
finally:
# Update all job readiness state
if recompute:
self._recompute_job_readiness()
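The clash handling above follows a common registry pattern: an unknown id is registered, an identical duplicate is silently folded into the existing entry, and a differing entry with the same id is rejected. A standalone sketch with illustrative names (plain dicts stand in for job units):

```python
class DuplicateError(ValueError):
    """Raised when a clashing (non-identical) duplicate is added."""


def add_to_registry(registry, item):
    """registry maps id -> item; returns the item actually stored."""
    existing = registry.get(item["id"])
    if existing is None:
        registry[item["id"]] = item
        return item
    if existing != item:
        # Same id, different content: a genuine clash.
        raise DuplicateError(item["id"])
    # Perfect duplicate: keep the existing entry, discard the new one.
    return existing
```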
def remove_unit(self, unit, *, recompute=True):
"""
Remove an existing unit from the session.
:param unit:
The unit to remove
:param recompute:
If True, recompute readiness inhibitors for all jobs.
You should only set this to False if you're removing
a number of jobs and will otherwise ensure that
:meth:`_recompute_job_readiness()` gets called before
session state users can see the state again.
.. note::
This method recomputes job readiness for all jobs unless the
recompute=False argument is used. Recomputing takes a while so if
you want to remove a lot of units consider setting that to False and
only recompute at the last call.
"""
self._unit_list.remove(unit)
self.on_unit_removed(unit)
if unit.Meta.name == 'job':
self._job_list.remove(unit)
del self._job_state_map[unit.id]
try:
del self._resource_map[unit.id]
except KeyError:
pass
if recompute:
self._recompute_job_readiness()
self.on_job_removed(unit)
self.on_job_state_map_changed()
def set_resource_list(self, resource_id, resource_list):
"""
Add or change a resource with the given id.
Resources silently overwrite any old resources with the same id.
"""
self._resource_map[resource_id] = resource_list
@property
def job_list(self):
"""
List of all known jobs.
Not necessarily all jobs from this list can be, or are desired to run.
For API simplicity this variable is read-only; if you wish to alter the
list of all jobs, please re-instantiate this class.
"""
return self._job_list
@property
def mandatory_job_list(self):
"""
List of all mandatory jobs that must run.
Testplan units can specify a list of jobs that have to be run and are
not supposed to be deselected by the application user.
"""
return self._mandatory_job_list
@property
def unit_list(self):
"""List of all known units."""
return self._unit_list
@property
def desired_job_list(self):
"""
List of jobs that are on the "desired to run" list.
This is a list, not a set, because the dependency solver algorithm
retains as much of the original ordering as possible. Having said that,
the actual order can differ widely (for instance, be reversed)
"""
return self._desired_job_list
@property
def run_list(self):
"""
List of jobs that were intended to run, in the proper order.
The order is a result of topological sorting of the desired_job_list.
This value is recomputed when :meth:`update_desired_job_list()` is
called. It may be shorter than desired_job_list due to dependency
errors.
"""
return self._run_list
@property
def job_state_map(self):
"""Map from job id to JobState associated with each job."""
return self._job_state_map
@property
def resource_map(self):
"""Map from resource id to a list of resource records."""
return self._resource_map
def get_outcome_stats(self):
"""
Process the JobState map to get stats about the job outcomes.
:returns:
a mapping of "outcome": "total" key/value pairs
.. note::
Only the outcomes seen during this session are reported, not all
possible values (such as crash, not implemented, ...).
"""
stats = collections.defaultdict(int)
for job_id, job_state in self.job_state_map.items():
if not job_state.result.outcome:
continue
stats[job_state.result.outcome] += 1
return stats
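Counting outcomes as above is a plain defaultdict tally that skips entries without an outcome (collections.Counter would work equally well). A sketch with plain-dict stand-ins for JobState objects:

```python
import collections


def outcome_stats(job_state_map):
    """Tally only the outcomes actually seen in this session."""
    stats = collections.defaultdict(int)
    for job_state in job_state_map.values():
        if not job_state.get("outcome"):
            continue  # no result yet, don't count it
        stats[job_state["outcome"]] += 1
    return dict(stats)
```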
def get_certification_status_map(
self, outcome_filter=(IJobResult.OUTCOME_FAIL,),
certification_status_filter=('blocker',)
):
"""
Get a map of jobs that have a specific certification blocker status.
Filter the Job state map to only return items with given outcomes and
certification statuses.
:param outcome_filter:
Only consider job results with those outcome values
:param certification_status_filter:
Only consider jobs with those certification status values
:returns:
a Job state map only containing job with a given outcome and
certification status value.
"""
return {
job_id: job_state
for job_id, job_state in self.job_state_map.items()
if (job_state.result.outcome in outcome_filter and
job_state.effective_certification_status in
certification_status_filter)
}
@property
def metadata(self):
"""meta-data object associated with this session state."""
return self._metadata
def _recompute_job_readiness(self):
"""
Internal method of SessionState.
Re-computes [job_state.ready
for job_state in _job_state_map.values()]
"""
# Reset the state of all jobs to have the undesired inhibitor. Since
# we maintain a state object for _all_ jobs (including ones not in the
# _run_list) this correctly updates all values in the _job_state_map
# (the UI can safely use the readiness state of all jobs)
for job_state in self._job_state_map.values():
job_state.readiness_inhibitor_list = [
UndesiredJobReadinessInhibitor]
# Take advantage of the fact that run_list is topologically sorted and
# do a single O(N) pass over _run_list. All "current/update" state is
# computed before it needs to be observed (thanks to the ordering)
for job in self._run_list:
job_state = self._job_state_map[job.id]
# Remove the undesired inhibitor as we want to run this job
job_state.readiness_inhibitor_list.remove(
UndesiredJobReadinessInhibitor)
# Ask the job controller about inhibitors affecting this job
for inhibitor in job.controller.get_inhibitor_list(self, job):
job_state.readiness_inhibitor_list.append(inhibitor)
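The two-phase recompute above can be sketched generically: first mark every job inhibited ("undesired"), then walk the topologically sorted run list, lift that inhibitor, and let a per-job check contribute any remaining inhibitors. Illustrative names; `run_list` is assumed to be dependency-ordered, which is what makes a single pass sufficient.

```python
UNDESIRED = "undesired"


def recompute_readiness(all_ids, run_list, extra_inhibitors):
    """Return {job_id: inhibitor_list}; empty list means ready to run."""
    # Phase 1: every known job starts out undesired.
    inhibitors = {job_id: [UNDESIRED] for job_id in all_ids}
    # Phase 2: single pass over the topologically sorted run list.
    for job_id in run_list:
        inhibitors[job_id].remove(UNDESIRED)
        # A controller-style hook supplies remaining inhibitors, if any.
        inhibitors[job_id].extend(extra_inhibitors(job_id))
    return inhibitors
```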
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.session.test_state
================================
Test definitions for the plainbox.impl.session.state module
"""
from doctest import DocTestSuite
from doctest import REPORT_NDIFF
from unittest import TestCase
from plainbox.abc import IExecutionController
from plainbox.abc import IJobResult
from plainbox.impl.depmgr import DependencyDuplicateError
from plainbox.impl.depmgr import DependencyMissingError
from plainbox.impl.depmgr import DependencyUnknownError
from plainbox.impl.resource import Resource
from plainbox.impl.result import MemoryJobResult
from plainbox.impl.secure.origin import Origin
from plainbox.impl.secure.providers.v1 import Provider1
from plainbox.impl.secure.qualifiers import JobIdQualifier
from plainbox.impl.session import InhibitionCause
from plainbox.impl.session import SessionState
from plainbox.impl.session import UndesiredJobReadinessInhibitor
from plainbox.impl.session.state import SessionDeviceContext
from plainbox.impl.session.state import SessionMetaData
from plainbox.impl.testing_utils import make_job
from plainbox.impl.unit.job import JobDefinition
from plainbox.impl.unit.unit import Unit
from plainbox.vendor import mock
from plainbox.vendor.morris import SignalTestCase
def load_tests(loader, tests, ignore):
tests.addTests(DocTestSuite(
'plainbox.impl.session.state', optionflags=REPORT_NDIFF))
return tests
class SessionStateSmokeTests(TestCase):
def setUp(self):
A = make_job('A', requires='R.attr == "value"')
B = make_job('B', depends='C')
C = make_job('C')
self.job_list = [A, B, C]
self.session_state = SessionState(self.job_list)
def test_initial_job_list(self):
expected = self.job_list
observed = self.session_state.job_list
self.assertEqual(expected, observed)
def test_initial_desired_job_list(self):
expected = []
observed = self.session_state.desired_job_list
self.assertEqual(expected, observed)
def test_initial_run_list(self):
expected = []
observed = self.session_state.run_list
self.assertEqual(expected, observed)
def test_update_mandatory_job_list_updates(self):
D = make_job('D')
self.session_state.update_mandatory_job_list([D])
expected = [D]
observed = self.session_state.mandatory_job_list
self.assertEqual(expected, observed)
class RegressionTests(TestCase):
# Tests for bugfixes
def test_crash_on_missing_job(self):
""" http://pad.lv/1334296 """
A = make_job("A")
state = SessionState([])
problems = state.update_desired_job_list([A])
self.assertEqual(problems, [DependencyUnknownError(A)])
self.assertEqual(state.desired_job_list, [])
def test_crash_in_update_desired_job_list(self):
# This checks if a DependencyError can crash
# update_desired_job_list() with a ValueError, in certain conditions.
A = make_job('A', depends='X')
L = make_job('L', plugin='local')
session = SessionState([A, L])
problems = session.update_desired_job_list([A, L])
# We should get exactly one DependencyMissingError related to job A and
# the undefined job X (that is presumably defined by the local job L)
self.assertEqual(len(problems), 1)
self.assertIsInstance(problems[0], DependencyMissingError)
self.assertIs(problems[0].affected_job, A)
def test_init_with_identical_jobs(self):
A = make_job("A")
second_A = make_job("A")
third_A = make_job("A")
# Identical jobs are folded for backwards compatibility with some local
# jobs that re-added existing jobs
session = SessionState([A, second_A, third_A])
# But we don't really store both, just the first one
self.assertEqual(session.job_list, [A])
def test_init_with_colliding_jobs(self):
# This is similar to the test above but the jobs actually differ. In
# this case the _second_ job is rejected but it really signifies a
# deeper problem that should only occur during development of jobs.
A = make_job("A")
different_A = make_job("A", plugin="resource")
with self.assertRaises(DependencyDuplicateError) as call:
SessionState([A, different_A])
self.assertIs(call.exception.job, A)
self.assertIs(call.exception.duplicate_job, different_A)
self.assertIs(call.exception.affected_job, different_A)
def test_dont_remove_missing_jobs(self):
""" http://pad.lv/1444126 """
A = make_job("A", depends="B")
B = make_job("B", depends="C")
state = SessionState([A, B])
problems = state.update_desired_job_list([A, B])
self.assertEqual(problems, [
DependencyMissingError(B, 'C', 'direct'),
DependencyMissingError(A, 'B', 'direct'),
])
self.assertEqual(state.desired_job_list, [])
self.assertEqual(state.run_list, [])
class SessionStateAPITests(TestCase):
def test_set_resource_list(self):
# Define an empty session
session = SessionState([])
# Define a resource
old_res = Resource({'attr': 'old value'})
# Set the resource list with the old resource
# So here the old resource is stored under the new 'R' resource id
session.set_resource_list('R', [old_res])
# Ensure that it worked
self.assertEqual(session._resource_map, {'R': [old_res]})
# Define another resource
new_res = Resource({'attr': 'new value'})
# Now we present the second result for the same job
session.set_resource_list('R', [new_res])
# What should happen here is that the R resource is entirely replaced
# by the data from the new result. The data should not be merged or
# appended in any way.
self.assertEqual(session._resource_map, {'R': [new_res]})
def test_add_job(self):
# Define a job
job = make_job("A")
# Define an empty session
session = SessionState([])
# Add the job to the session
session.add_job(job)
# The job got added to job list
self.assertIn(job, session.job_list)
# The job got added to job state map
self.assertIs(session.job_state_map[job.id].job, job)
# The job is not added to the desired job list
self.assertNotIn(job, session.desired_job_list)
# The job is not in the run list
self.assertNotIn(job, session.run_list)
# The job is not selected to run
self.assertEqual(
session.job_state_map[job.id].readiness_inhibitor_list,
[UndesiredJobReadinessInhibitor])
def test_add_job_duplicate_job(self):
# Define a job
job = make_job("A")
# Define an empty session
session = SessionState([])
# Add the job to the session
session.add_job(job)
# The job got added to job list
self.assertIn(job, session.job_list)
# Define a perfectly identical job
duplicate_job = make_job("A")
self.assertEqual(job, duplicate_job)
# Try adding it to the session
#
# Note that this does not raise any exceptions as the jobs are perfect
# duplicates.
session.add_job(duplicate_job)
# The new job _did not_ get added to the job list
self.assertEqual(len(session.job_list), 1)
self.assertIsNot(duplicate_job, session.job_list[0])
def test_add_job_clashing_job(self):
# Define a job
job = make_job("A")
# Define an empty session
session = SessionState([])
# Add the job to the session
session.add_job(job)
# The job got added to job list
self.assertIn(job, session.job_list)
# Define a different job that clashes with the initial job
clashing_job = make_job("A", plugin='other')
self.assertNotEqual(job, clashing_job)
self.assertEqual(job.id, clashing_job.id)
# Try adding it to the session
#
# This raises an exception
with self.assertRaises(DependencyDuplicateError) as call:
session.add_job(clashing_job)
# The exception gets the jobs in the right order
self.assertIs(call.exception.affected_job, job)
self.assertIs(call.exception.affecting_job, clashing_job)
# The new job _did not_ get added to the job list
self.assertEqual(len(session.job_list), 1)
self.assertIsNot(clashing_job, session.job_list[0])
def test_get_estimated_duration_auto(self):
# Define jobs with an estimated duration
one_second = make_job("one_second", plugin="shell",
command="foobar",
estimated_duration=1.0)
half_second = make_job("half_second", plugin="shell",
command="barfoo",
estimated_duration=0.5)
session = SessionState([one_second, half_second])
session.update_desired_job_list([one_second, half_second])
self.assertEqual(session.get_estimated_duration(), (1.5, 0.0))
def test_get_estimated_duration_manual(self):
two_seconds = make_job("two_seconds", plugin="manual",
command="farboo",
estimated_duration=2.0)
shell_job = make_job("shell_job", plugin="shell",
command="boofar",
estimated_duration=0.6)
session = SessionState([two_seconds, shell_job])
session.update_desired_job_list([two_seconds, shell_job])
self.assertEqual(session.get_estimated_duration(), (0.6, 32.0))
def test_get_estimated_duration_automated_unknown(self):
three_seconds = make_job("three_seconds", plugin="shell",
command="frob",
estimated_duration=3.0)
no_estimated_duration = make_job("no_estimated_duration",
plugin="shell",
command="borf")
session = SessionState([three_seconds, no_estimated_duration])
session.update_desired_job_list([three_seconds, no_estimated_duration])
self.assertEqual(session.get_estimated_duration(), (None, 0.0))
def test_get_estimated_duration_manual_unknown(self):
four_seconds = make_job("four_seconds", plugin="shell",
command="fibble",
estimated_duration=4.0)
no_estimated_duration = make_job("no_estimated_duration",
plugin="user-verify",
command="bibble")
session = SessionState([four_seconds, no_estimated_duration])
session.update_desired_job_list([four_seconds, no_estimated_duration])
self.assertEqual(session.get_estimated_duration(), (4.0, None))
def test_update_mandatory_job_list_affects_run_list(self):
A = make_job('A')
session = SessionState([A])
session.update_mandatory_job_list([A])
session.update_desired_job_list([])
self.assertEqual(session.run_list, [A])
def test_mandatory_jobs_are_first_in_run_list(self):
A = make_job('A')
B = make_job('B')
session = SessionState([A, B])
session.update_mandatory_job_list([B])
session.update_desired_job_list([A])
self.assertEqual(session.run_list, [B, A])
class SessionStateTrimTests(TestCase):
"""
Tests for SessionState.trim_job_list()
"""
def setUp(self):
self.job_a = make_job("a")
self.job_b = make_job("b")
self.origin = mock.Mock(name='origin', spec_set=Origin)
self.session = SessionState([self.job_a, self.job_b])
def test_trim_does_remove_jobs(self):
"""
verify that trim_job_list() removes jobs as requested
"""
self.session.trim_job_list(JobIdQualifier("a", self.origin))
self.assertEqual(self.session.job_list, [self.job_b])
def test_trim_does_remove_job_state(self):
"""
verify that trim_job_list() removes job state for removed jobs
"""
self.assertIn("a", self.session.job_state_map)
self.session.trim_job_list(JobIdQualifier("a", self.origin))
self.assertNotIn("a", self.session.job_state_map)
def test_trim_does_remove_resources(self):
"""
verify that trim_job_list() removes resources for removed jobs
"""
self.session.set_resource_list("a", [Resource({'attr': 'value'})])
self.assertIn("a", self.session.resource_map)
self.session.trim_job_list(JobIdQualifier("a", self.origin))
self.assertNotIn("a", self.session.resource_map)
def test_trim_fires_on_job_removed(self):
"""
verify that trim_job_list() fires on_job_removed() signal
"""
signal_fired = False
def on_job_removed(job):
self.assertIs(job, self.job_a)
nonlocal signal_fired
signal_fired = True
self.session.on_job_removed.connect(on_job_removed)
self.session.trim_job_list(JobIdQualifier("a", self.origin))
self.assertTrue(signal_fired)
def test_trim_fires_on_job_state_map_changed(self):
"""
verify that trim_job_list() fires on_job_state_map_changed() signal
"""
signal_fired = False
def on_job_state_map_changed():
nonlocal signal_fired
signal_fired = True
self.session.on_job_state_map_changed.connect(on_job_state_map_changed)
self.session.trim_job_list(JobIdQualifier("a", self.origin))
self.assertTrue(signal_fired)
def test_trim_fires_on_job_state_map_changed_only_when_needed(self):
"""
verify that trim_job_list() does not fire the on_job_state_map_changed()
signal needlessly, when no job is actually being removed.
"""
signal_fired = False
def on_job_state_map_changed():
nonlocal signal_fired
signal_fired = True
self.session.on_job_state_map_changed.connect(on_job_state_map_changed)
self.session.trim_job_list(JobIdQualifier("x", self.origin))
self.assertFalse(signal_fired)
def test_trim_raises_ValueError_for_jobs_on_run_list(self):
"""
verify that trim_job_list() raises ValueError when any of the jobs
marked for removal is in the run_list.
"""
self.session.update_desired_job_list([self.job_a])
with self.assertRaises(ValueError) as boom:
self.session.trim_job_list(JobIdQualifier("a", self.origin))
self.assertEqual(
str(boom.exception),
"cannot remove jobs that are on the run list: a")
class SessionStateSpecialTests(TestCase):
# NOTE: those tests are essential. They allow testing the behavior of
# complex stuff like resource jobs and local jobs in total isolation from
# the actual job runner with relative simplicity.
#
# There are many scenarios that need to be tested that I can think of right
# now. All the failure conditions are interesting as they are less likely
# to occur during typical correct operation. A few of those from the top of
# my head:
#
# *) resource job output altering the resource map
# *) resource changes altering the readiness state of jobs
# *) test results being remembered (those should be renamed to job results)
# *) local job output altering job list
# *) attachment job output altering yet unimplemented attachment store
#
# Local jobs deserve special consideration as they can trigger various
# interesting error conditions (all of which are reported by the dependency
# solver as DependencyError objects). One interesting aspect of job
# generation is how an error that resulted by adding a job is resolved. Are
# we removing the newly-added job or some other job that was affected by
# the introduction of a new job? How are we handling duplicates? In all
# such cases it is important to properly track job origin to provide
# informative and correct error messages both at the UI level (hopefully
# our data won't cause such errors on a daily basis) but more importantly
# at the developer-console level where developers are actively spending
# most of their time adding (changing) jobs in an ever-growing pile that
# they don't necessarily fully know, comprehend or remember.
def test_resource_job_affects_resources(self):
pass
class SessionStateReactionToJobResultTests(TestCase):
# This test checks how a simple session with a few typical job reacts to
# job results of various kinds. It checks most of the resource presentation
# error conditions that I could think of.
def setUp(self):
# All of the tests below are using one session. The session has four
# jobs, clustered into two independent groups. Job A depends on a
# resource provided by job R which has no dependencies at all. Job X
# depends on job Y which in turn has no dependencies at all.
#
# A -(resource dependency)-> R
#
# X -(direct dependency) -> Y
self.job_A = make_job("A", requires="R.attr == 'value'")
self.job_A_expr = self.job_A.get_resource_program().expression_list[0]
self.job_R = make_job("R", plugin="resource")
self.job_X = make_job("X", depends='Y')
self.job_Y = make_job("Y")
self.job_L = make_job("L", plugin="local")
self.job_list = [
self.job_A, self.job_R, self.job_X, self.job_Y, self.job_L]
self.session = SessionState(self.job_list)
def job_state(self, id):
# A helper function to avoid overly long expressions
return self.session.job_state_map[id]
def job_inhibitor(self, id, index):
# Another helper that shortens deep object nesting
return self.job_state(id).readiness_inhibitor_list[index]
def test_assumptions(self):
# This function checks the assumptions of SessionState initial state.
# The job list is what we set when constructing the session.
#
self.assertEqual(self.session.job_list, self.job_list)
# The run_list is still empty because the desired_job_list is equally
# empty.
self.assertEqual(self.session.run_list, [])
self.assertEqual(self.session.desired_job_list, [])
# All jobs have state objects that indicate they cannot run (because
# they have the UNDESIRED inhibitor set for them by default).
self.assertFalse(self.job_state('A').can_start())
self.assertFalse(self.job_state('R').can_start())
self.assertFalse(self.job_state('X').can_start())
self.assertFalse(self.job_state('Y').can_start())
self.assertEqual(self.job_inhibitor('A', 0).cause,
InhibitionCause.UNDESIRED)
self.assertEqual(self.job_inhibitor('R', 0).cause,
InhibitionCause.UNDESIRED)
self.assertEqual(self.job_inhibitor('X', 0).cause,
InhibitionCause.UNDESIRED)
self.assertEqual(self.job_inhibitor('Y', 0).cause,
InhibitionCause.UNDESIRED)
def test_desire_job_A_updates_state_map(self):
# This function checks what happens when the job A becomes desired via
# the update_desired_job_list() call.
self.session.update_desired_job_list([self.job_A])
self.assertEqual(self.session.desired_job_list, [self.job_A])
# This should topologically sort the job list, according to the
# relationship created by the resource requirement. This is not really
# testing the dependency solver (it has separate tests), just that this
# basic property is established and that the run_list properly shows
# that R must run before A can run.
self.assertEqual(self.session.run_list, [self.job_R, self.job_A])
# This also recomputes job readiness state so that job R is no longer
# undesired, has no other inhibitor and thus can start
self.assertEqual(self.job_state('R').readiness_inhibitor_list, [])
self.assertTrue(self.job_state('R').can_start())
# While the A job still cannot run it now has a different inhibitor,
# one with the PENDING_RESOURCE cause. The inhibitor also properly
# pinpoints the related job and related expression.
self.assertNotEqual(self.job_state('A').readiness_inhibitor_list, [])
self.assertEqual(self.job_inhibitor('A', 0).cause,
InhibitionCause.PENDING_RESOURCE)
self.assertEqual(self.job_inhibitor('A', 0).related_job, self.job_R)
self.assertEqual(self.job_inhibitor('A', 0).related_expression,
self.job_A_expr)
self.assertFalse(self.job_state('A').can_start())
def test_resource_job_result_updates_resource_and_job_states(self):
# This function checks what happens when a JobResult for job R (which
# is a resource job via the resource plugin) is presented to the
# session.
result_R = MemoryJobResult({
'outcome': IJobResult.OUTCOME_PASS,
'io_log': [(0, 'stdout', b"attr: value\n")],
})
self.session.update_job_result(self.job_R, result_R)
        # The most obvious thing that can happen is that the result is simply
        # stored in the associated job state object.
self.assertIs(self.job_state('R').result, result_R)
# Initially the _resource_map was empty. SessionState parses the io_log
# of results of resource jobs and creates appropriate resource objects.
self.assertIn("R", self.session._resource_map)
expected = {'R': [Resource({'attr': 'value'})]}
self.assertEqual(self.session._resource_map, expected)
# As job results are presented to the session the readiness of other
        # jobs is changed. A depends on R via a resource expression, and the
        # resource produced by R in this test allows that expression to
        # match, so the PENDING_RESOURCE inhibitor on A should have been
        # removed. Since this test does not call update_desired_job_list(),
        # A still has the UNDESIRED inhibitor but it no longer has the
        # PENDING_RESOURCE inhibitor.
self.assertEqual(self.job_inhibitor('A', 0).cause,
InhibitionCause.UNDESIRED)
# Now if we put A on the desired list this should clear the UNDESIRED
# inhibitor and make A runnable.
self.session.update_desired_job_list([self.job_A])
self.assertTrue(self.job_state('A').can_start())
def test_normal_job_result_updates(self):
# This function checks what happens when a JobResult for job A is
        # presented to the session. The outcome is set to a "different" value
        # because the initial job result was pretty much identical; the
        # update would otherwise have been silently ignored and the
        # comparison below would not prove anything.
result_A = MemoryJobResult({'outcome': 'different'})
self.session.update_job_result(self.job_A, result_A)
# As before the result should be stored as-is
self.assertIs(self.job_state('A').result, result_A)
# Unlike before _resource_map should be left unchanged
self.assertEqual(self.session._resource_map, {})
# One interesting observation is that readiness inhibitors are entirely
# unaffected by existing test results beyond dependency and resource
# relationships. While a result for job A was presented, job A is still
# inhibited by the UNDESIRED inhibitor.
self.assertEqual(self.job_inhibitor('A', 0).cause,
InhibitionCause.UNDESIRED)
def test_resource_job_with_broken_output(self):
# This function checks how SessionState parses partially broken
# resource jobs. A JobResult with broken output is constructed below.
# The output will describe one proper record, one broken record and
# another proper record in that order.
result_R = MemoryJobResult({
'outcome': IJobResult.OUTCOME_PASS,
'io_log': [
(0, 'stdout', b"attr: value-1\n"),
(1, 'stdout', b"\n"),
(1, 'stdout', b"I-sound-like-a-broken-record\n"),
(1, 'stdout', b"\n"),
(1, 'stdout', b"attr: value-2\n")
],
})
        # Since we cannot control the output of scripts, and people indeed
        # make mistakes, a warning is issued but no exception is raised to
        # the caller.
self.session.update_job_result(self.job_R, result_R)
        # The observation here is that the parser does not handle the error
        # in a way that would allow for recovery. Out of all the output only
# the first record is created and stored properly. The third, proper
# record is entirely ignored.
expected = {'R': [Resource({'attr': 'value-1'})]}
self.assertEqual(self.session._resource_map, expected)
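The stop-on-error behavior exercised above (only records before the first malformed line survive) can be sketched independently of plainbox. The function below is a hypothetical re-implementation for illustration only, not the actual SessionState parser:

```python
def parse_resource_records(lines):
    """Parse 'attr: value' records separated by blank lines.

    Returns the records accumulated before the first malformed line,
    mirroring the behavior observed in
    test_resource_job_with_broken_output: a broken record ends
    parsing, so later well-formed records are ignored.
    """
    records = []
    current = {}
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            # A blank line terminates the current record.
            if current:
                records.append(current)
                current = {}
            continue
        if ": " not in line:
            # Malformed line: no recovery, keep what we have so far.
            return records
        key, _, value = line.partition(": ")
        current[key] = value
    if current:
        records.append(current)
    return records
```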
def test_desire_job_X_updates_state_map(self):
# This function checks what happens when the job X becomes desired via
# the update_desired_job_list() call.
self.session.update_desired_job_list([self.job_X])
self.assertEqual(self.session.desired_job_list, [self.job_X])
# As in the similar A - R test function above this topologically sorts
# all affected jobs. Here X depends on Y so Y should be before X on the
# run list.
self.assertEqual(self.session.run_list, [self.job_Y, self.job_X])
# As in the A - R test above this also recomputes the job readiness
# state. Job Y is now runnable but job X has a PENDING_DEP inhibitor.
self.assertEqual(self.job_state('Y').readiness_inhibitor_list, [])
        # While the X job still cannot run it now has a different inhibitor,
        # one with the PENDING_DEP cause. The inhibitor also properly
        # pinpoints the related job.
self.assertNotEqual(self.job_state('X').readiness_inhibitor_list, [])
self.assertEqual(self.job_inhibitor('X', 0).cause,
InhibitionCause.PENDING_DEP)
self.assertEqual(self.job_inhibitor('X', 0).related_job, self.job_Y)
self.assertFalse(self.job_state('X').can_start())
def test_desired_job_X_cannot_run_with_failed_job_Y(self):
        # This function checks how the readiness state of the desired job X
        # changes when the session is presented with a failed result for Y
self.session.update_desired_job_list([self.job_X])
# When X is desired, as above, it should be inhibited with PENDING_DEP
# on Y
self.assertNotEqual(self.job_state('X').readiness_inhibitor_list, [])
self.assertEqual(self.job_inhibitor('X', 0).cause,
InhibitionCause.PENDING_DEP)
self.assertEqual(self.job_inhibitor('X', 0).related_job, self.job_Y)
self.assertFalse(self.job_state('X').can_start())
# When a failed Y result is presented X should switch to FAILED_DEP
result_Y = MemoryJobResult({'outcome': IJobResult.OUTCOME_FAIL})
self.session.update_job_result(self.job_Y, result_Y)
# Now job X should have a FAILED_DEP inhibitor instead of the
# PENDING_DEP it had before. Everything else should stay as-is.
self.assertNotEqual(self.job_state('X').readiness_inhibitor_list, [])
self.assertEqual(self.job_inhibitor('X', 0).cause,
InhibitionCause.FAILED_DEP)
self.assertEqual(self.job_inhibitor('X', 0).related_job, self.job_Y)
self.assertFalse(self.job_state('X').can_start())
def test_desired_job_X_can_run_with_passing_job_Y(self):
        # A variant of the test case above; this time Y passes, making X
        # runnable
self.session.update_desired_job_list([self.job_X])
result_Y = MemoryJobResult({'outcome': IJobResult.OUTCOME_PASS})
self.session.update_job_result(self.job_Y, result_Y)
# Now X is runnable
self.assertEqual(self.job_state('X').readiness_inhibitor_list, [])
self.assertTrue(self.job_state('X').can_start())
    def test_desired_job_A_cannot_run_with_failing_resource_R(self):
# A variant of the two test cases above, using A-R jobs
self.session.update_desired_job_list([self.job_A])
result_R = MemoryJobResult({
'outcome': IJobResult.OUTCOME_PASS,
'io_log': [(0, 'stdout', b'attr: wrong value\n')],
})
self.session.update_job_result(self.job_R, result_R)
# Now A is inhibited by FAILED_RESOURCE
self.assertNotEqual(self.job_state('A').readiness_inhibitor_list, [])
self.assertEqual(self.job_inhibitor('A', 0).cause,
InhibitionCause.FAILED_RESOURCE)
self.assertEqual(self.job_inhibitor('A', 0).related_job, self.job_R)
self.assertEqual(self.job_inhibitor('A', 0).related_expression,
self.job_A_expr)
self.assertFalse(self.job_state('A').can_start())
def test_resource_job_result_overwrites_old_resources(self):
# This function checks what happens when a JobResult for job R is
# presented to a session that has some resources from that job already.
result_R_old = MemoryJobResult({
'outcome': IJobResult.OUTCOME_PASS,
'io_log': [(0, 'stdout', b"attr: old value\n")]
})
self.session.update_job_result(self.job_R, result_R_old)
# So here the old result is stored into a new 'R' resource
expected_before = {'R': [Resource({'attr': 'old value'})]}
self.assertEqual(self.session._resource_map, expected_before)
# Now we present the second result for the same job
result_R_new = MemoryJobResult({
'outcome': IJobResult.OUTCOME_PASS,
'io_log': [(0, 'stdout', b"attr: new value\n")]
})
self.session.update_job_result(self.job_R, result_R_new)
# What should happen here is that the R resource is entirely replaced
# by the data from the new result. The data should not be merged or
# appended in any way.
expected_after = {'R': [Resource({'attr': 'new value'})]}
self.assertEqual(self.session._resource_map, expected_after)
def test_local_job_creates_jobs(self):
# Create a result for the local job L
result_L = MemoryJobResult({
'io_log': [
(0, 'stdout', b'id: foo\n'),
(1, 'stdout', b'plugin: manual\n'),
(2, 'stdout', b'description: yada yada\n'),
],
})
# Show this result to the session
self.session.update_job_result(self.job_L, result_L)
# A job should be generated
self.assertTrue("foo" in self.session.job_state_map)
job_foo = self.session.job_state_map['foo'].job
        self.assertEqual(job_foo.id, "foo")
        self.assertEqual(job_foo.plugin, "manual")
# It should be linked to the job L via the via_job state attribute
self.assertIs(
self.session.job_state_map[job_foo.id].via_job, self.job_L)
def test_get_outcome_stats(self):
result_A = MemoryJobResult({'outcome': IJobResult.OUTCOME_PASS})
result_L = MemoryJobResult(
{'outcome': IJobResult.OUTCOME_NOT_SUPPORTED})
result_R = MemoryJobResult({'outcome': IJobResult.OUTCOME_FAIL})
result_Y = MemoryJobResult({'outcome': IJobResult.OUTCOME_FAIL})
self.session.update_job_result(self.job_A, result_A)
self.session.update_job_result(self.job_L, result_L)
self.session.update_job_result(self.job_R, result_R)
self.session.update_job_result(self.job_Y, result_Y)
self.assertEqual(self.session.get_outcome_stats(),
{IJobResult.OUTCOME_PASS: 1,
IJobResult.OUTCOME_NOT_SUPPORTED: 1,
IJobResult.OUTCOME_FAIL: 2})
def test_get_certification_status_map(self):
result_A = MemoryJobResult({'outcome': IJobResult.OUTCOME_PASS})
self.session.update_job_result(self.job_A, result_A)
self.session.job_state_map[
self.job_A.id].effective_certification_status = 'foo'
self.assertEqual(self.session.get_certification_status_map(), {})
self.assertEqual(self.session.get_certification_status_map(
outcome_filter=(IJobResult.OUTCOME_PASS,),
certification_status_filter=('foo',)),
{self.job_A.id: self.session.job_state_map[self.job_A.id]})
result_Y = MemoryJobResult({'outcome': IJobResult.OUTCOME_FAIL})
self.session.job_state_map[
self.job_Y.id].effective_certification_status = 'bar'
self.assertEqual(self.session.get_certification_status_map(), {})
self.assertEqual(self.session.get_certification_status_map(
outcome_filter=(IJobResult.OUTCOME_PASS, IJobResult.OUTCOME_FAIL),
certification_status_filter=('foo', 'bar')),
{self.job_A.id: self.session.job_state_map[self.job_A.id]})
self.session.update_job_result(self.job_Y, result_Y)
self.assertEqual(self.session.get_certification_status_map(
outcome_filter=(IJobResult.OUTCOME_PASS, IJobResult.OUTCOME_FAIL),
certification_status_filter=('foo', 'bar')),
{self.job_A.id: self.session.job_state_map[self.job_A.id],
self.job_Y.id: self.session.job_state_map[self.job_Y.id]})
class SessionMetadataTests(TestCase):
def test_smoke(self):
metadata = SessionMetaData()
self.assertEqual(metadata.title, None)
self.assertEqual(metadata.flags, set())
self.assertEqual(metadata.running_job_name, None)
def test_initializer(self):
metadata = SessionMetaData(
title="title", flags=['f1', 'f2'], running_job_name='id')
self.assertEqual(metadata.title, "title")
self.assertEqual(metadata.flags, set(["f1", "f2"]))
self.assertEqual(metadata.running_job_name, "id")
def test_accessors(self):
metadata = SessionMetaData()
metadata.title = "title"
self.assertEqual(metadata.title, "title")
metadata.flags = set(["f1", "f2"])
self.assertEqual(metadata.flags, set(["f1", "f2"]))
metadata.running_job_name = "id"
self.assertEqual(metadata.running_job_name, "id")
def test_app_blob_default_value(self):
metadata = SessionMetaData()
self.assertIs(metadata.app_blob, None)
def test_app_blob_assignment(self):
metadata = SessionMetaData()
metadata.app_blob = b'blob'
self.assertEqual(metadata.app_blob, b'blob')
metadata.app_blob = None
self.assertEqual(metadata.app_blob, None)
def test_app_blob_kwarg_to_init(self):
metadata = SessionMetaData(app_blob=b'blob')
self.assertEqual(metadata.app_blob, b'blob')
def test_app_id_default_value(self):
metadata = SessionMetaData()
self.assertIs(metadata.app_id, None)
def test_app_id_assignment(self):
metadata = SessionMetaData()
metadata.app_id = 'com.canonical.certification.plainbox'
self.assertEqual(
metadata.app_id, 'com.canonical.certification.plainbox')
metadata.app_id = None
self.assertEqual(metadata.app_id, None)
def test_app_id_kwarg_to_init(self):
metadata = SessionMetaData(
app_id='com.canonical.certification.plainbox')
self.assertEqual(
metadata.app_id, 'com.canonical.certification.plainbox')
class SessionDeviceContextTests(SignalTestCase):
def setUp(self):
self.ctx = SessionDeviceContext()
self.provider = mock.Mock(name='provider', spec_set=Provider1)
self.unit = mock.Mock(name='unit', spec_set=Unit)
self.unit.provider = self.provider
self.provider.unit_list = [self.unit]
self.provider.problem_list = []
self.job = mock.Mock(name='job', spec_set=JobDefinition)
self.job.Meta.name = 'job'
def test_smoke(self):
"""
Ensure that you can create a session device context and that
default values are what we expect
"""
self.assertIsNone(self.ctx.device)
self.assertIsInstance(self.ctx.state, SessionState)
self.assertEqual(self.ctx.provider_list, [])
self.assertEqual(self.ctx.unit_list, [])
def test_add_provider(self):
"""
Ensure that adding a provider works
"""
self.ctx.add_provider(self.provider)
self.assertIn(self.provider, self.ctx.provider_list)
def test_add_provider_twice(self):
"""
Ensure that you cannot add a provider twice
"""
self.ctx.add_provider(self.provider)
with self.assertRaises(ValueError):
self.ctx.add_provider(self.provider)
def test_add_provider__adds_units(self):
"""
Ensure that adding a provider adds the unit it knows about
"""
self.ctx.add_provider(self.provider)
self.assertIn(self.unit, self.ctx.unit_list)
def test_add_unit(self):
"""
        Ensure that adding a unit works
"""
self.ctx.add_unit(self.unit)
self.assertIn(self.unit, self.ctx.unit_list)
self.assertIn(self.unit, self.ctx.state.unit_list)
def test_add_unit__job_unit(self):
"""
Ensure that adding a job unit works
"""
self.ctx.add_unit(self.job)
self.assertIn(self.job, self.ctx.unit_list)
self.assertIn(self.job, self.ctx.state.unit_list)
self.assertIn(self.job, self.ctx.state.job_list)
def test_add_unit_twice(self):
"""
        Ensure that you cannot add a unit twice
"""
self.ctx.add_unit(self.unit)
with self.assertRaises(ValueError):
self.ctx.add_unit(self.unit)
def test_remove_unit(self):
"""
        Ensure that removing a unit works
"""
self.ctx.add_unit(self.unit)
self.ctx.remove_unit(self.unit)
self.assertNotIn(self.unit, self.ctx.unit_list)
self.assertNotIn(self.unit, self.ctx.state.unit_list)
def test_remove_unit__missing(self):
"""
        Ensure that you cannot remove a unit that was not added first
"""
with self.assertRaises(ValueError):
self.ctx.remove_unit(self.unit)
def test_remove_job_unit(self):
"""
Ensure that removing a job unit works
"""
self.ctx.add_unit(self.job)
self.ctx.remove_unit(self.job)
self.assertNotIn(self.job, self.ctx.unit_list)
self.assertNotIn(self.job, self.ctx.state.unit_list)
self.assertNotIn(self.job, self.ctx.state.job_list)
self.assertNotIn(self.job.id, self.ctx.state.job_state_map)
self.assertNotIn(self.job.id, self.ctx.state.resource_map)
def test_on_unit_added__via_ctx(self):
"""
Ensure that adding units produces same/correct signals
regardless of how that unit is added. This test checks the scenario
that happens when the context is used directly
"""
self.watchSignal(self.ctx.on_unit_added)
self.watchSignal(self.ctx.state.on_unit_added)
self.watchSignal(self.ctx.state.on_job_added)
self.ctx.add_unit(self.unit)
sig1 = self.assertSignalFired(self.ctx.on_unit_added, self.unit)
sig2 = self.assertSignalFired(self.ctx.state.on_unit_added, self.unit)
self.assertSignalOrdering(sig1, sig2)
self.assertSignalNotFired(self.ctx.state.on_job_added, self.unit)
def test_on_unit_added__via_state(self):
"""
Ensure that adding units produces same/correct signals
regardless of how that unit is added. This test checks the scenario
that happens when the session state is used.
"""
self.watchSignal(self.ctx.on_unit_added)
self.watchSignal(self.ctx.state.on_unit_added)
self.watchSignal(self.ctx.state.on_job_added)
self.ctx.state.add_unit(self.unit)
sig1 = self.assertSignalFired(self.ctx.on_unit_added, self.unit)
sig2 = self.assertSignalFired(self.ctx.state.on_unit_added, self.unit)
self.assertSignalOrdering(sig1, sig2)
self.assertSignalNotFired(self.ctx.state.on_job_added, self.unit)
def test_on_job_added__via_ctx(self):
"""
Ensure that adding job units produces same/correct signals
regardless of how that job is added. This test checks the scenario
that happens when the context is used directly
"""
self.watchSignal(self.ctx.on_unit_added)
self.watchSignal(self.ctx.state.on_unit_added)
self.watchSignal(self.ctx.state.on_job_added)
self.ctx.add_unit(self.job)
sig1 = self.assertSignalFired(self.ctx.on_unit_added, self.job)
sig2 = self.assertSignalFired(self.ctx.state.on_unit_added, self.job)
sig3 = self.assertSignalFired(self.ctx.state.on_job_added, self.job)
self.assertSignalOrdering(sig1, sig2, sig3)
def test_on_job_added__via_state(self):
"""
Ensure that adding job units produces same/correct signals
regardless of how that job is added. This test checks the scenario
that happens when the session state is used.
"""
self.watchSignal(self.ctx.on_unit_added)
self.watchSignal(self.ctx.state.on_unit_added)
self.watchSignal(self.ctx.state.on_job_added)
self.ctx.state.add_unit(self.job)
sig1 = self.assertSignalFired(self.ctx.on_unit_added, self.job)
sig2 = self.assertSignalFired(self.ctx.state.on_unit_added, self.job)
sig3 = self.assertSignalFired(self.ctx.state.on_job_added, self.job)
self.assertSignalOrdering(sig1, sig2, sig3)
def test_on_unit_removed__via_ctx(self):
"""
Ensure that removing units produces same/correct signals
regardless of how that unit is removed. This test checks the scenario
that happens when the context is used directly
"""
self.ctx.add_unit(self.unit)
self.watchSignal(self.ctx.on_unit_removed)
self.watchSignal(self.ctx.state.on_unit_removed)
self.watchSignal(self.ctx.state.on_job_removed)
self.ctx.remove_unit(self.unit)
sig1 = self.assertSignalFired(self.ctx.on_unit_removed, self.unit)
sig2 = self.assertSignalFired(
self.ctx.state.on_unit_removed, self.unit)
self.assertSignalOrdering(sig1, sig2)
self.assertSignalNotFired(self.ctx.state.on_job_removed, self.unit)
def test_on_unit_removed__via_state(self):
"""
Ensure that removing units produces same/correct signals
regardless of how that unit is removed. This test checks the scenario
that happens when the session state is used.
"""
self.ctx.add_unit(self.unit)
self.watchSignal(self.ctx.on_unit_removed)
self.watchSignal(self.ctx.state.on_unit_removed)
self.watchSignal(self.ctx.state.on_job_removed)
self.ctx.state.remove_unit(self.unit)
sig1 = self.assertSignalFired(self.ctx.on_unit_removed, self.unit)
sig2 = self.assertSignalFired(
self.ctx.state.on_unit_removed, self.unit)
self.assertSignalOrdering(sig1, sig2)
self.assertSignalNotFired(self.ctx.state.on_job_removed, self.unit)
def test_on_job_removed__via_ctx(self):
"""
Ensure that removing job units produces same/correct signals
regardless of how that job is removed. This test checks the scenario
that happens when the context is used directly
"""
self.ctx.add_unit(self.job)
self.watchSignal(self.ctx.on_unit_removed)
self.watchSignal(self.ctx.state.on_unit_removed)
self.watchSignal(self.ctx.state.on_job_removed)
self.ctx.remove_unit(self.job)
sig1 = self.assertSignalFired(self.ctx.on_unit_removed, self.job)
sig2 = self.assertSignalFired(self.ctx.state.on_unit_removed, self.job)
sig3 = self.assertSignalFired(self.ctx.state.on_job_removed, self.job)
self.assertSignalOrdering(sig1, sig2, sig3)
def test_on_job_removed__via_state(self):
"""
Ensure that removing job units produces same/correct signals
regardless of how that job is removed. This test checks the scenario
that happens when the session state is used.
"""
self.ctx.add_unit(self.job)
self.watchSignal(self.ctx.on_unit_removed)
self.watchSignal(self.ctx.state.on_unit_removed)
self.watchSignal(self.ctx.state.on_job_removed)
self.ctx.state.remove_unit(self.job)
sig1 = self.assertSignalFired(self.ctx.on_unit_removed, self.job)
sig2 = self.assertSignalFired(self.ctx.state.on_unit_removed, self.job)
sig3 = self.assertSignalFired(self.ctx.state.on_job_removed, self.job)
self.assertSignalOrdering(sig1, sig2, sig3)
def test_execution_controller_list__computed(self):
"""
Ensure that the list of execution controllers is computed correctly
"""
with mock.patch.object(self.ctx, '_compute_execution_ctrl_list') as m:
result = self.ctx.execution_controller_list
self.assertIs(result, m())
m.assert_any_call()
def test_execution_controller_list__cached(self):
"""
Ensure that the computed list of execution controllers is cached
"""
self.assertNotIn(
SessionDeviceContext._CACHE_EXECUTION_CTRL_LIST,
self.ctx._shared_cache)
with mock.patch.object(self.ctx, '_compute_execution_ctrl_list') as m:
result1 = self.ctx.execution_controller_list
result2 = self.ctx.execution_controller_list
self.assertIs(result1, result2)
m.assert_any_call()
self.assertIn(
SessionDeviceContext._CACHE_EXECUTION_CTRL_LIST,
self.ctx._shared_cache)
def test_execution_controller_list__invalidated(self):
"""
Ensure that the cached list of execution controllers is invalidated
when a new provider is added to the context
"""
# Let's have a fake provider ready. We need to mock unit/problem lists
# to let us add it to the context.
provider2 = mock.Mock(name='provider2', spec_set=Provider1)
provider2.unit_list = []
provider2.problem_list = []
with mock.patch.object(self.ctx, '_compute_execution_ctrl_list') as m:
m.side_effect = lambda: mock.Mock()
self.assertNotIn(
SessionDeviceContext._CACHE_EXECUTION_CTRL_LIST,
self.ctx._shared_cache)
result1 = self.ctx.execution_controller_list
self.assertIn(
SessionDeviceContext._CACHE_EXECUTION_CTRL_LIST,
self.ctx._shared_cache)
# Adding the second provider should invalidate the cache
self.ctx.add_provider(provider2)
self.assertNotIn(
SessionDeviceContext._CACHE_EXECUTION_CTRL_LIST,
self.ctx._shared_cache)
result2 = self.ctx.execution_controller_list
self.assertIn(
SessionDeviceContext._CACHE_EXECUTION_CTRL_LIST,
self.ctx._shared_cache)
# Both results are different
self.assertNotEqual(result1, result2)
# And _compute_execution_ctrl_list was called twice
            m.assert_has_calls([mock.call(), mock.call()])
def test_get_ctrl_for_job__best(self):
"""
Ensure that get_ctrl_for_job() picks the best execution controller
out of the available choices.
"""
ctrl1 = mock.Mock(name='ctrl1', spec_set=IExecutionController)
ctrl1.get_score.return_value = 5
ctrl2 = mock.Mock(name='ctrl2', spec_set=IExecutionController)
ctrl2.get_score.return_value = 7
ctrl3 = mock.Mock(name='ctrl3', spec_set=IExecutionController)
ctrl3.get_score.return_value = -1
with mock.patch.object(self.ctx, '_compute_execution_ctrl_list') as m:
m.return_value = [ctrl1, ctrl2, ctrl3]
best_ctrl = self.ctx.get_ctrl_for_job(self.job)
self.assertIs(best_ctrl, ctrl2)
def test_get_ctrl_for_job__tie(self):
"""
        Ensure that get_ctrl_for_job() picks the last of the best-scoring
        controllers,
as determined by the order of entries in execution_controller_list
"""
ctrl1 = mock.Mock(name='ctrl1', spec_set=IExecutionController)
ctrl1.get_score.return_value = 1
ctrl2 = mock.Mock(name='ctrl2', spec_set=IExecutionController)
ctrl2.get_score.return_value = 1
with mock.patch.object(self.ctx, '_compute_execution_ctrl_list') as m:
m.return_value = [ctrl1, ctrl2]
best_ctrl = self.ctx.get_ctrl_for_job(self.job)
self.assertIs(best_ctrl, ctrl2)
def test_get_ctrl_for_job__no_candidates(self):
"""
Ensure that get_ctrl_for_job() raises LookupError if no controllers
are suitable for the requested job.
"""
ctrl1 = mock.Mock(name='ctrl1', spec_set=IExecutionController)
ctrl1.get_score.return_value = -1
        ctrl2 = mock.Mock(name='ctrl2', spec_set=IExecutionController)
ctrl2.get_score.return_value = -1
        ctrl3 = mock.Mock(name='ctrl3', spec_set=IExecutionController)
ctrl3.get_score.return_value = -1
with mock.patch.object(self.ctx, '_compute_execution_ctrl_list') as m:
m.return_value = [ctrl1, ctrl2, ctrl3]
with self.assertRaises(LookupError):
self.ctx.get_ctrl_for_job(self.job)
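The selection logic verified by the three tests above (highest score wins, ties resolved in favor of the later controller, LookupError when every score is negative) can be sketched as a standalone function. The names below are illustrative, not plainbox's internal API:

```python
def pick_best_controller(controllers, job):
    """Return the controller with the highest get_score(job).

    Ties are resolved in favor of the controller appearing later in
    the list; if no controller scores a non-negative value, raise
    LookupError, matching the behavior the tests above assert.
    """
    best = None
    best_score = -1
    for ctrl in controllers:
        score = ctrl.get_score(job)
        # '>=' makes later entries win ties against earlier ones.
        if score >= best_score:
            best = ctrl
            best_score = score
    if best is None or best_score < 0:
        raise LookupError("No controller is suitable for this job")
    return best
```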
# This file is part of Checkbox.
#
# Copyright 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox.  If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.highlevel` -- High-level API
================================================
"""
from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor
from io import BytesIO
import logging
from plainbox import __version__ as plainbox_version
from plainbox.impl.applogic import run_job_if_possible
from plainbox.impl.runner import JobRunner
from plainbox.impl.session import SessionStorageRepository
from plainbox.impl.transport import TransportError
from plainbox.impl.transport import get_all_transports
logger = logging.getLogger("plainbox.highlevel")
class PlainBoxObject:
"""
A thin wrapper around some other plainbox object.
"""
def __init__(self, impl, name=None, group=None, children=None, attrs=None):
"""
Initialize a new PlainBoxObject with the specified internal
implementation object and some meta-data.
:param impl:
The implementation object (internal API)
:param name:
Human-visible name of this object
:param group:
Human-visible group (class) this object belongs to
:param children:
A list of children that this object has
:param attrs:
A list of attributes that this object has
"""
self._impl = impl
self._name = name
if children is None:
children = []
self._children = children
self._group = group
if attrs is None:
attrs = {}
self._attrs = attrs
def __str__(self):
"""
        String form of this object
:returns:
:attr:`name`.
"""
return self.name
def __iter__(self):
"""
Iterate over all of the children
"""
return iter(self._children)
@property
def name(self):
"""
name of this object
This may be an abbreviated form that assumes the group is displayed
        before the name. It will probably take a few iterations before we get
        the names (and other, additional properties) right for everything.
"""
return self._name
@property
def group(self):
"""
group this object belongs to.
This is a way to distinguish high-level "classes" that may not map
        one-to-one to an internal python class.
"""
return self._group
@property
def children(self):
"""
A list of children that this object has
This list is mutable and is always guaranteed to exist.
"""
return self._children
@property
def attrs(self):
"""
A mapping of key-value attributes that this object has
This mapping is mutable and is always guaranteed to exist.
"""
return self._attrs
# NOTE: This should merge with the service object below but I didn't want
# to do it right away as that would have to alter Service.__init__() and
# I want to get Explorer API right first.
class Explorer:
"""
Class simplifying discovery of various PlainBox objects.
"""
def __init__(self, provider_list=None, repository_list=None):
"""
Initialize a new Explorer
:param provider_list:
List of providers that this explorer will know about.
Defaults to nothing (BYOP - bring your own providers)
:param repository_list:
List of session storage repositories. Defaults to the
single default repository.
"""
if provider_list is None:
provider_list = []
self.provider_list = provider_list
if repository_list is None:
repo = SessionStorageRepository()
repository_list = [repo]
self.repository_list = repository_list
def get_object_tree(self):
"""
Get a tree of :class:`PlainBoxObject` that represents everything that
PlainBox knows about.
:returns:
A :class:`PlainBoxObject` that represents the explorer
object itself, along with all the children reachable from it.
This function computes the following set of data::
the explorer itself
- all providers
- all jobs
- all whitelists
- all executables
- all repositories
- all storages
"""
service_obj = PlainBoxObject(
self,
name='service object',
group="service")
# Milk each provider for jobs and whitelists
for provider in self.provider_list:
provider_obj = PlainBoxObject(
provider,
group="provider",
name=provider.name,
attrs=OrderedDict((
('broken_i18n',
provider.description == provider.tr_description()),
('name', provider.name),
('namespace', provider.namespace),
('version', provider.version),
('description', provider.description),
('tr_description', provider.tr_description()),
('jobs_dir', provider.jobs_dir),
('units_dir', provider.units_dir),
('whitelists_dir', provider.whitelists_dir),
('data_dir', provider.data_dir),
('locale_dir', provider.locale_dir),
('gettext_domain', provider.gettext_domain),
('base_dir', provider.base_dir),
)))
for unit in provider.unit_list:
provider_obj.children.append(self._unit_to_obj(unit))
service_obj.children.append(provider_obj)
# Milk each repository for session storage data
for repo in self.repository_list:
repo_obj = PlainBoxObject(
repo,
group='repository',
name=repo.location)
service_obj.children.append(repo_obj)
for storage in repo.get_storage_list():
storage_obj = PlainBoxObject(
storage,
group="storage",
name=storage.location,
attrs=OrderedDict((
('location', storage.location),
('session_file', storage.session_file),
)))
repo_obj.children.append(storage_obj)
return service_obj
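Because get_object_tree() returns a uniform name/children structure, traversing the result needs nothing plainbox-specific. A minimal sketch of printing such a tree is shown below; it relies only on the `.name` attribute and the iteration protocol that PlainBoxObject exposes (`print_object_tree` itself is a hypothetical helper, not part of this module):

```python
def print_object_tree(obj, indent=0):
    """Print a PlainBoxObject-style tree, one node per line.

    Each node is expected to expose a ``name`` attribute and to
    iterate over its children, as PlainBoxObject does.
    """
    print("  " * indent + str(obj.name))
    for child in obj:
        print_object_tree(child, indent + 1)
```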
def _unit_to_obj(self, unit):
# Yes, this should be moved to member methods
if unit.Meta.name == 'test plan':
return self._test_plan_to_obj(unit)
elif unit.Meta.name == 'job':
return self._job_to_obj(unit)
elif unit.Meta.name == 'category':
return self._category_to_obj(unit)
elif unit.Meta.name == 'file':
return self._file_to_obj(unit)
elif unit.Meta.name == 'template':
return self._template_to_obj(unit)
elif unit.Meta.name == 'manifest entry':
return self._manifest_entry_to_obj(unit)
elif unit.Meta.name == 'packaging meta-data':
return self._packaging_meta_data_to_obj(unit)
elif unit.Meta.name == 'exporter':
return self._exporter_entry_to_obj(unit)
else:
raise NotImplementedError(unit.Meta.name)
def _job_to_obj(self, unit):
return PlainBoxObject(
unit, group=unit.Meta.name, name=unit.id, attrs=OrderedDict((
('broken_i18n',
unit.summary == unit.tr_summary()
or unit.description == unit.tr_description()),
('id', unit.id),
('partial_id', unit.partial_id),
('summary', unit.summary),
('tr_summary', unit.tr_summary()),
('raw_summary', unit.get_raw_record_value('summary')),
('description', unit.description),
('raw_description',
unit.get_raw_record_value('description')),
('tr_description', unit.tr_description()),
('plugin', unit.plugin),
('command', unit.command),
('user', unit.user),
('environ', unit.environ),
('estimated_duration', unit.estimated_duration),
('depends', unit.depends),
('requires', unit.requires),
('origin', str(unit.origin)),
)))
def _test_plan_to_obj(self, unit):
return PlainBoxObject(
unit, group=unit.Meta.name, name=unit.id, attrs=OrderedDict((
('broken_i18n',
unit.name == unit.tr_name()
or unit.description == unit.tr_description()),
('id', unit.id),
('include', unit.include),
('exclude', unit.exclude),
('name', unit.name),
('tr_name', unit.tr_name()),
('description', unit.description),
('tr_description', unit.tr_description()),
('estimated_duration', unit.estimated_duration),
('icon', unit.icon),
('category_overrides', unit.category_overrides),
('virtual', unit.virtual),
('origin', str(unit.origin)),
)))
def _category_to_obj(self, unit):
return PlainBoxObject(
unit, group=unit.Meta.name, name=unit.id, attrs=OrderedDict((
('broken_i18n', unit.name == unit.tr_name()),
('id', unit.id),
('name', unit.name),
('tr_name', unit.tr_name()),
('origin', str(unit.origin)),
)))
def _file_to_obj(self, unit):
return PlainBoxObject(
unit, group=unit.Meta.name, name=unit.path, attrs=OrderedDict((
('path', unit.path),
('role', str(unit.role)),
('origin', str(unit.origin)),
)))
def _template_to_obj(self, unit):
return PlainBoxObject(
unit, group=unit.Meta.name, name=unit.id, attrs=OrderedDict((
('id', unit.id),
('partial_id', unit.partial_id),
('template_unit', unit.template_unit),
('template_resource', unit.template_resource),
('template_filter', unit.template_filter),
('template_imports', unit.template_imports),
('origin', str(unit.origin)),
)))
def _manifest_entry_to_obj(self, unit):
return PlainBoxObject(
unit, group=unit.Meta.name, name=unit.id, attrs=OrderedDict((
('id', unit.id),
('name', unit.name),
('tr_name', unit.tr_name()),
('value_type', unit.value_type),
('value_unit', unit.value_unit),
('resource_key', unit.resource_key),
('origin', str(unit.origin)),
)))
def _packaging_meta_data_to_obj(self, unit):
return PlainBoxObject(
unit, group=unit.Meta.name, name=unit.os_id, attrs=OrderedDict((
('os_id', unit.os_id),
('os_version_id', unit.os_version_id),
('origin', str(unit.origin)),
)))
def _exporter_entry_to_obj(self, unit):
return PlainBoxObject(
unit, group=unit.Meta.name, name=unit.id, attrs=OrderedDict((
('id', unit.id),
('summary', unit.summary),
('tr_summary', unit.tr_summary()),
)))
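The `_unit_to_obj` method above dispatches on `unit.Meta.name` through an if/elif ladder. A self-contained sketch of the equivalent table-driven form; the handler lambdas here are hypothetical stand-ins, not the real converter methods:

```python
def make_dispatcher(handlers):
    """Build a dispatch function that mirrors the if/elif ladder."""
    def dispatch(name):
        try:
            return handlers[name]()
        except KeyError:
            # Same fallback behaviour as the final else branch above.
            raise NotImplementedError(name)
    return dispatch

dispatch = make_dispatcher({
    'job': lambda: 'job-object',
    'category': lambda: 'category-object',
})
print(dispatch('job'))  # job-object
```

The dict form keeps each unit type on one line and makes the supported set easy to inspect, at the cost of losing the explicit ordering the ladder conveys.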
plainbox-0.25/plainbox/impl/device.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
:mod:`plainbox.impl.device` -- device classes
==============================================
This module contains implementations of device classes
"""
import logging
import os
import shlex
import subprocess
import sys
from plainbox.i18n import gettext as _
from plainbox.impl.ctrl import RootViaPkexecExecutionController
from plainbox.impl.ctrl import RootViaPTL1ExecutionController
from plainbox.impl.ctrl import RootViaSudoExecutionController
from plainbox.impl.ctrl import UserJobExecutionController
_logger = logging.getLogger("plainbox.device")
def get_os_release(path='/etc/os-release'):
"""
Read and parse os-release(5) data
:param path:
(optional) alternate file to load and parse
:returns:
A dictionary with parsed data
"""
with open(path, 'rt', encoding='UTF-8') as stream:
return {
key: value
for key, value in (
entry.split('=', 1) for entry in shlex.split(stream.read()))
}
class LocalDevice:
"""
A device that corresponds to the local machine (the one running plainbox)
"""
def __init__(self, cookie):
"""
Initialize a new device with the specified cookie
"""
self._cookie = cookie
@property
def cookie(self):
"""
Cookie of the device
The cookie is a URL-like string that describes the current device.
All devices have a cookie of some kind.
"""
return self._cookie
@classmethod
def discover(cls):
"""
Discover available devices
:returns:
A list of devices of this type that are available. Since this
is a local device, the following cases are possible:
On Linux, we return a device based on /etc/os-release
On Windows, we return a device based on TBD
On all other platforms (mac?) we return an empty list
"""
# NOTE: sys.platform used to be 'linux2' on older pythons
if sys.platform == 'linux' or sys.platform == 'linux2':
return cls._discover_linux()
elif sys.platform == 'win32':
return cls._discover_windows()
else:
_logger.error(_("Unsupported platform: %s"), sys.platform)
return []
@classmethod
def _discover_linux(cls):
"""
A version of :meth:`discover()` that runs on Linux
:returns:
A list with one LocalDevice object based on discovered OS
properties or an empty list if something goes wrong.
This implementation uses /etc/os-release to figure out what it is
currently running on. If that fails for any reason (/etc/os-release
is pretty new by 2014's standards) we return an empty device list.
"""
# Get /etc/os-release data
try:
os_release = get_os_release()
except (OSError, IOError, ValueError) as exc:
_logger.error("Unable to analyze /etc/os-release: %s", exc)
return []
for arch_probe_fn in (cls._arch_linux_dpkg, cls._arch_linux_rpm):
try:
arch = arch_probe_fn()
except (OSError, subprocess.CalledProcessError):
pass
else:
break
else:
arch = cls._arch_linux_uname()
cookie = cls._cookie_linux_common(os_release, arch)
return [cls(cookie)]
@classmethod
def _discover_windows(cls):
return [cls("local://localhost/?os=win32")]
@classmethod
def _cookie_linux_common(cls, os_release, arch):
"""
Compute a cookie for a common linux that adheres to os-release(5)
:param os_release:
The data structure returned by :func:`get_os_release()`
:param arch:
The name of the architecture
:returns:
A connection cookie (see below)
Typical values returned by this method are:
- "local://localhost/?os=linux&id=debian&version_id=7&arch=amd64"
- "local://localhost/?os=linux&id=ubuntu&version_id=14.04&arch=amd64"
- "local://localhost/?os=linux&id=ubuntu&version_id=14.10&arch=amd64"
- "local://localhost/?os=linux&id=fedora&version_id=20&arch=x86_64"
"""
return "local://localhost/?os={}&id={}&version_id={}&arch={}".format(
"linux", os_release.get("ID", "Linux"),
os_release.get("VERSION_ID", ""), arch)
@classmethod
def _arch_linux_dpkg(cls):
"""
Query a dpkg-based system for the architecture name
:returns:
Debian architecture name, e.g. 'i386', 'amd64' or 'armhf'
:raises OSError:
If (typically) ``dpkg`` is not installed
:raises subprocess.CalledProcessError:
If dpkg fails for any reason
The returned value depends on the output of::
``dpkg --print-architecture``
"""
return subprocess.check_output(
['dpkg', '--print-architecture'], universal_newlines=True
).strip()
@classmethod
def _arch_linux_rpm(cls):
"""
Query a rpm-based system for the architecture name
:returns:
RPM architecture name, e.g. 'i386', 'x86_64'
:raises OSError:
If (typically) ``rpm`` is not installed
:raises subprocess.CalledProcessError:
If rpm fails for any reason
The returned value depends on the output of::
``rpm -E %_arch``
"""
return subprocess.check_output(
['rpm', '-E', '%_arch'], universal_newlines=True
).strip()
@classmethod
def _arch_linux_uname(cls):
"""
Query a linux system for the architecture name via uname(2)
:returns:
Architecture name, as returned by os.uname().machine
"""
return os.uname().machine
def push_provider(self, provider):
"""
Push the given provider to this device
"""
# TODO: raise ValueError if provider.arch is incompatible
# with self.arch
def compute_execution_ctrl_list(self, provider_list):
return [
RootViaPTL1ExecutionController(provider_list),
RootViaPkexecExecutionController(provider_list),
# XXX: maybe this one should be only used on command line
RootViaSudoExecutionController(provider_list),
UserJobExecutionController(provider_list),
]
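A self-contained sketch of the parsing done by `get_os_release()` above and of the cookie format produced by `_cookie_linux_common()`. The os-release content and the `amd64` architecture are hypothetical sample data, not read from the real `/etc/os-release`:

```python
import shlex

SAMPLE = '''\
NAME="Ubuntu"
ID=ubuntu
VERSION_ID="14.04"
'''

# Same dict comprehension as get_os_release(): os-release(5) files are
# shell-compatible KEY=value assignments, so shlex handles the quoting.
os_release = {
    key: value
    for key, value in (
        entry.split('=', 1) for entry in shlex.split(SAMPLE))
}

cookie = "local://localhost/?os={}&id={}&version_id={}&arch={}".format(
    "linux", os_release.get("ID", "Linux"),
    os_release.get("VERSION_ID", ""), "amd64")
print(cookie)  # local://localhost/?os=linux&id=ubuntu&version_id=14.04&arch=amd64
```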
plainbox-0.25/plainbox/impl/test_buildsystems.py
# This file is part of Checkbox.
#
# Copyright 2014 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.test_buildsystems
===============================
Test definitions for plainbox.impl.buildsystems module
"""
from unittest import TestCase
from plainbox.impl.buildsystems import GoBuildSystem
from plainbox.impl.buildsystems import MakefileBuildSystem
from plainbox.impl.buildsystems import AutotoolsBuildSystem
from plainbox.vendor import mock
class GoBuildSystemTests(TestCase):
"""
Unit tests for the GoBuildSystem class
"""
def setUp(self):
self.buildsystem = GoBuildSystem()
@mock.patch('plainbox.impl.buildsystems.glob.glob')
def test_probe__go_sources(self, mock_glob):
"""
Ensure that if we have some go sources then the build system finds them
and signals suitability
"""
mock_glob.return_value = ['src/foo.go']
self.assertEqual(self.buildsystem.probe("src"), 50)
@mock.patch('plainbox.impl.buildsystems.glob.glob')
def test_probe__no_go_sources(self, mock_glob):
"""
Ensure that if we don't have any go sources the build system is not
suitable
"""
mock_glob.return_value = []
self.assertEqual(self.buildsystem.probe("src"), 0)
def test_get_build_command(self):
"""
Ensure that the build command is correct
"""
self.assertEqual(
self.buildsystem.get_build_command(
"/path/to/src", "/path/to/build/bin"),
"go build ../../src/*.go")
class MakefileBuildSystemTests(TestCase):
"""
Unit tests for the MakefileBuildSystem class
"""
def setUp(self):
self.buildsystem = MakefileBuildSystem()
@mock.patch('plainbox.impl.buildsystems.os.path.isfile')
def test_probe__Makefile(self, mock_isfile):
"""
Ensure that if we have a Makefile then the build system finds it and
signals suitability
"""
mock_isfile.side_effect = lambda path: path == 'src/Makefile'
self.assertEqual(self.buildsystem.probe("src"), 90)
@mock.patch('plainbox.impl.buildsystems.os.path.isfile')
def test_probe__no_Makefile(self, mock_isfile):
"""
Ensure that if we don't have a Makefile then the build system is not
suitable
"""
mock_isfile.side_effect = lambda path: False
self.assertEqual(self.buildsystem.probe("src"), 0)
@mock.patch('plainbox.impl.buildsystems.os.path.isfile')
def test_probe__configure_and_Makefile(self, mock_isfile):
"""
Ensure that if we have a configure script then the build system finds
it and signals lack of suitability; we want developers to specifically
tell us how to build when a configure script is around.
"""
mock_isfile.side_effect = lambda path: path in ('src/Makefile',
'src/configure')
self.assertEqual(self.buildsystem.probe("src"), 0)
def test_get_build_command(self):
"""
Ensure that the build command is correct
"""
self.assertEqual(
self.buildsystem.get_build_command(
"/path/to/src", "/path/to/build/bin"),
"VPATH=../../src make -f ../../src/Makefile")
class AutotoolsBuildSystemTests(TestCase):
"""
Unit tests for the AutotoolsBuildSystem class
"""
def setUp(self):
self.buildsystem = AutotoolsBuildSystem()
@mock.patch('plainbox.impl.buildsystems.os.path.isfile')
def test_probe__probe(self, mock_isfile):
"""
Ensure that if we have a configure script then the build system finds
it and signals suitability
"""
mock_isfile.side_effect = lambda path: path == 'src/configure'
self.assertEqual(self.buildsystem.probe("src"), 90)
@mock.patch('plainbox.impl.buildsystems.os.path.isfile')
def test_probe__no_configure(self, mock_isfile):
"""
Ensure that if we don't have a configure script then the build system
is not suitable
"""
mock_isfile.side_effect = lambda path: False
self.assertEqual(self.buildsystem.probe("src"), 0)
def test_get_build_command(self):
"""
Ensure that the build command is correct
"""
self.assertEqual(
self.buildsystem.get_build_command(
"/path/to/src", "/path/to/build/bin"),
"../../src/configure && make")
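The probe scores asserted in the tests above (0, 50, 90) rank build systems by suitability. A hypothetical sketch, not the plainbox API, of how such scores can be used to select a build system:

```python
def pick_buildsystem(scores):
    """Return the name with the highest positive probe score, or None."""
    if not scores:
        return None
    name, score = max(scores.items(), key=lambda item: item[1])
    return name if score > 0 else None

# A Makefile (90) out-ranks go sources (50); an all-zero map means no match.
print(pick_buildsystem({'go': 50, 'make': 90, 'autotools': 0}))  # make
print(pick_buildsystem({'go': 0, 'make': 0}))                    # None
```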
plainbox-0.25/plainbox/impl/ingredients.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""Guacamole ingredients specific to plainbox."""
import collections
import gettext
import sys
import textwrap
import traceback
from guacamole import Command
from guacamole.core import Ingredient
from guacamole.ingredients import ansi
from guacamole.ingredients import argparse
from guacamole.ingredients import cmdtree
from guacamole.recipes.cmd import CommandRecipe
from plainbox.impl.session.assistant import SessionAssistant
_ = gettext.gettext
box = collections.namedtuple("box", "top right bottom left")
class RenderingContext:
"""
Context for stateful text display.
The rendering context assists in displaying styled text by implementing a
very simple box model and top-to-bottom paragraph flow.
Particular attributes such as paragraph width, foreground and background
color, text justification (alignment) and padding can be set and made to
persist across calls.
"""
def __init__(self, ansi):
"""
Initialize the rendering context.
:param ansi:
The guacamole ANSIFormatter object. You want to extract it from
``ctx.ansi`` that is passed to the ``invoked()`` method of your
``guacamole.Command`` subclass.
By default, text is entirely plain (without any style or color) and the
terminal width is assumed to be exactly 80 columns. Padding around each
paragraph is ``(0, 0, 0, 0)`` and each paragraph is left-aligned.
"""
self.ansi = ansi
self.reset()
def reset(self):
"""Reset all rendering parameters to their default values."""
self.width = 80
self.bg = None
self.fg = None
self.bold = False
self._padding = box(0, 0, 0, 0)
self.align = 'left'
@property
def padding(self):
"""padding applied to each paragraph."""
return self._padding
@padding.setter
def padding(self, value):
"""Set the padding to the desired values."""
self._padding = box(*value)
def para(self, text):
"""
Display a paragraph.
The paragraph is re-formatted to match the current rendering mode
(width, and padding). Top and bottom padding is used to draw empty
lines. Left and right padding is used to emulate empty columns around
each content column.
"""
content_width = self.width - (self.padding.left + self.padding.right)
if isinstance(text, str):
chunks = textwrap.wrap(text, content_width, break_long_words=True)
elif isinstance(text, list):
chunks = text
else:
raise TypeError('text must be either str or list of str')
empty_line = ' ' * self.width
pad_left = ' ' * self.padding.left
pad_right = ' ' * self.padding.right
for i in range(self.padding.top):
print(self.ansi(empty_line, fg=self.fg, bg=self.bg))
for chunk in chunks:
for line in chunk.splitlines():
if self.align == 'left':
line = line.ljust(content_width)
elif self.align == 'right':
line = line.rjust(content_width)
elif self.align == 'center':
line = line.center(content_width)
print(self.ansi(
pad_left + line + pad_right,
fg=self.fg, bg=self.bg, bold=self.bold))
for i in range(self.padding.bottom):
print(self.ansi(empty_line, fg=self.fg, bg=self.bg))
class RenderingContextIngredient(Ingredient):
"""Ingredient that adds a RenderingContext to guacamole."""
def late_init(self, context):
"""Add a RenderingContext as ``rc`` to the guacamole context."""
context.rc = RenderingContext(context.ansi)
class SessionAssistantIngredient(Ingredient):
"""Ingredient that adds a SessionAssistant to guacamole."""
def late_init(self, context):
"""Add a SessionAssistant as ``sa`` to the guacamole context."""
context.sa = SessionAssistant(
context.cmd_toplevel.get_app_id(),
context.cmd_toplevel.get_cmd_version(),
context.cmd_toplevel.get_sa_api_version(),
context.cmd_toplevel.get_sa_api_flags(),
)
class CanonicalCrashIngredient(Ingredient):
"""Ingredient for handing crashes in a Canonical-theme way."""
def dispatch_failed(self, context):
"""Print the unhandled exception and exit the application."""
rc = context.rc
rc.reset()
rc.bg = 'red'
rc.fg = 'bright_white'
rc.bold = 1
rc.align = 'center'
rc.padding = (1, 1, 1, 1)
rc.para(_("Application Malfunction Detected"))
rc.align = 'left'
rc.bold = 0
rc.padding = (0, 0, 0, 0)
exc_type, exc_value, tb = sys.exc_info()
rc.para(traceback.format_exception(exc_type, exc_value, tb))
rc.padding = (2, 2, 0, 2)
rc.para(_(
"Please report a bug including the information from the "
"paragraph above. To report the bug visit {0}"
).format(context.cmd_toplevel.bug_report_url))
rc.padding = (1, 2, 1, 2)
rc.para(_("We are sorry for the inconvenience!"))
raise SystemExit(1)
class CanonicalCommandRecipe(CommandRecipe):
"""A recipe for using Canonical-enhanced commands."""
def get_ingredients(self):
"""Get a list of ingredients for guacamole."""
return [
cmdtree.CommandTreeBuilder(self.command),
cmdtree.CommandTreeDispatcher(),
argparse.ParserIngredient(),
CanonicalCrashIngredient(),
ansi.ANSIIngredient(),
RenderingContextIngredient(),
SessionAssistantIngredient(),
]
class CanonicalCommand(Command):
"""
A command with Canonical-enhanced ingredients.
This command has two additional items in the guacamole execution context,
the :class:`RenderingContext` object ``rc`` and the
:class:`SessionAssistant` object ``sa``.
"""
bug_report_url = "https://bugs.launchpad.net/checkbox/+filebug"
def get_sa_api_version(self):
"""
Get the SessionAssistant API this command needs to use.
:returns:
``self.sa_api_version`` if defined
:returns:
"0.99", otherwise
This method is used internally by CanonicalCommand to initialize
SessionAssistant. Applications can declare the API version they use by
defining the ``sa_api_version`` attribute at class level.
"""
try:
return self.sa_api_version
except AttributeError:
return '0.99'
def get_sa_api_flags(self):
"""
Get the SessionAssistant API flags this command needs to use.
:returns:
``self.sa_api_flags`` if defined
:returns:
``[]``, otherwise
This method is used internally by CanonicalCommand to initialize
SessionAssistant. Applications can declare the API flags they use by
defining the ``sa_api_flags`` attribute at class level.
"""
try:
return self.sa_api_flags
except AttributeError:
return []
def main(self, argv=None, exit=True):
"""
Shortcut for running a command.
See :meth:`guacamole.recipes.Recipe.main()` for details.
"""
return CanonicalCommandRecipe(self).main(argv, exit)
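A minimal sketch of the box-model arithmetic used by `RenderingContext.para()` above: wrap the text to the content width, then re-pad each line so every rendered row spans the full paragraph width (plain `str` operations only; no ANSI styling here):

```python
import collections
import textwrap

box = collections.namedtuple("box", "top right bottom left")

width = 20
padding = box(0, 2, 0, 2)
content_width = width - (padding.left + padding.right)  # 16 columns

lines = textwrap.wrap("hello wrapped world", content_width)
rendered = [
    ' ' * padding.left + line.center(content_width) + ' ' * padding.right
    for line in lines
]
# Every rendered row spans the full 20-column width, so a background
# color applied per-row forms a solid rectangle.
for row in rendered:
    print(repr(row))
```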
plainbox-0.25/plainbox/impl/test_censoREd.py
# This file is part of Checkbox.
#
# Copyright 2012-2015 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
import doctest
def load_tests(loader, tests, ignore):
tests.addTests(
doctest.DocTestSuite('plainbox.impl.censoREd',
optionflags=doctest.REPORT_NDIFF))
return tests
plainbox-0.25/plainbox/impl/test_result.py
# This file is part of Checkbox.
#
# Copyright 2012 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
"""
plainbox.impl.test_result
=========================
Test definitions for plainbox.impl.result module
"""
from tempfile import TemporaryDirectory
from unittest import TestCase
import doctest
import io
from plainbox.abc import IJobResult
from plainbox.impl.result import DiskJobResult
from plainbox.impl.result import IOLogRecord
from plainbox.impl.result import IOLogRecordReader
from plainbox.impl.result import IOLogRecordWriter
from plainbox.impl.result import JobResultBuilder
from plainbox.impl.result import MemoryJobResult
from plainbox.impl.testing_utils import make_io_log
def load_tests(loader, tests, ignore):
tests.addTests(
doctest.DocTestSuite('plainbox.impl.result',
optionflags=doctest.REPORT_NDIFF))
return tests
class CommonTestsMixIn:
def test_append_comments(self):
result = self.result_cls({})
self.assertIsNone(result.comments)
class DiskJobResultTests(TestCase, CommonTestsMixIn):
result_cls = DiskJobResult
def setUp(self):
self.scratch_dir = TemporaryDirectory()
def tearDown(self):
self.scratch_dir.cleanup()
def test_smoke(self):
result = DiskJobResult({})
self.assertEqual(str(result), "None")
self.assertEqual(repr(result), "<DiskJobResult>")
self.assertIsNone(result.outcome)
self.assertIsNone(result.comments)
self.assertEqual(result.io_log, ())
self.assertIsNone(result.return_code)
self.assertTrue(result.is_hollow)
def test_everything(self):
result = DiskJobResult({
'outcome': IJobResult.OUTCOME_PASS,
'comments': "it said blah",
'io_log_filename': make_io_log([
(0, 'stdout', b'blah\n')
], self.scratch_dir.name),
'return_code': 0
})
self.assertEqual(str(result), "pass")
# This result contains a random value of io_log_filename so direct repr
# comparison is not feasible. All we want to check here is that it
# looks right and that it has the outcome value
self.assertTrue(repr(result).startswith("<DiskJobResult"))
self.assertIn("outcome:'pass'", repr(result))
self.assertEqual(result.outcome, IJobResult.OUTCOME_PASS)
self.assertEqual(result.comments, "it said blah")
self.assertEqual(result.io_log, ((0, 'stdout', b'blah\n'),))
self.assertEqual(result.io_log_as_flat_text, 'blah\n')
self.assertEqual(result.return_code, 0)
self.assertFalse(result.is_hollow)
def test_io_log_as_text_attachment(self):
result = MemoryJobResult({
'outcome': IJobResult.OUTCOME_PASS,
'comments': "it said blah",
'io_log': [(0, 'stdout', b'\x80\x456')],
'return_code': 0
})
self.assertEqual(result.io_log_as_text_attachment, '')
class MemoryJobResultTests(TestCase, CommonTestsMixIn):
result_cls = MemoryJobResult
def test_smoke(self):
result = MemoryJobResult({})
self.assertEqual(str(result), "None")
self.assertEqual(repr(result), "<MemoryJobResult>")
self.assertIsNone(result.outcome)
self.assertIsNone(result.comments)
self.assertEqual(result.io_log, ())
self.assertIsNone(result.return_code)
self.assertTrue(result.is_hollow)
def test_everything(self):
result = MemoryJobResult({
'outcome': IJobResult.OUTCOME_PASS,
'comments': "it said blah",
'io_log': [(0, 'stdout', b'blah\n')],
'return_code': 0
})
self.assertEqual(str(result), "pass")
self.assertEqual(
repr(result), (
""))
self.assertEqual(result.outcome, IJobResult.OUTCOME_PASS)
self.assertEqual(result.comments, "it said blah")
self.assertEqual(result.io_log, ((0, 'stdout', b'blah\n'),))
self.assertEqual(result.io_log_as_flat_text, 'blah\n')
self.assertEqual(result.return_code, 0)
self.assertFalse(result.is_hollow)
def test_io_log_as_text_attachment(self):
result = MemoryJobResult({
'outcome': IJobResult.OUTCOME_PASS,
'comments': "it said foo",
'io_log': [(0, 'stdout', b'foo')],
'return_code': 0
})
self.assertEqual(result.io_log_as_text_attachment, 'foo')
class IOLogRecordWriterTests(TestCase):
_RECORD = IOLogRecord(0.123, 'stdout', b'some\ndata')
_TEXT = '[0.123,"stdout","c29tZQpkYXRh"]\n'
def test_smoke_write(self):
stream = io.StringIO()
writer = IOLogRecordWriter(stream)
writer.write_record(self._RECORD)
self.assertEqual(stream.getvalue(), self._TEXT)
writer.close()
with self.assertRaises(ValueError):
stream.getvalue()
def test_smoke_read(self):
stream = io.StringIO(self._TEXT)
reader = IOLogRecordReader(stream)
record1 = reader.read_record()
self.assertEqual(record1, self._RECORD)
record2 = reader.read_record()
self.assertEqual(record2, None)
reader.close()
with self.assertRaises(ValueError):
stream.getvalue()
def test_iter_read(self):
stream = io.StringIO(self._TEXT)
reader = IOLogRecordReader(stream)
record_list = list(reader)
self.assertEqual(record_list, [self._RECORD])
class JobResultBuilderTests(TestCase):
def test_smoke_hollow(self):
self.assertTrue(JobResultBuilder().get_result().is_hollow)
def test_smoke_memory(self):
builder = JobResultBuilder()
builder.comments = 'it works'
builder.execution_duration = 0.1
builder.io_log = [(0, 'stdout', b'ok\n')]
builder.outcome = 'pass'
builder.return_code = 0
result = builder.get_result()
self.assertEqual(result.comments, "it works")
self.assertEqual(result.execution_duration, 0.1)
self.assertEqual(result.io_log, (
IOLogRecord(delay=0, stream_name='stdout', data=b'ok\n'),))
self.assertEqual(result.outcome, "pass")
self.assertEqual(result.return_code, 0)
# Sanity check: the builder we can re-create is identical
builder2 = result.get_builder()
self.assertEqual(builder, builder2)
def test_smoke_disk(self):
builder = JobResultBuilder()
builder.comments = 'it works'
builder.execution_duration = 0.1
builder.io_log_filename = 'log'
builder.outcome = 'pass'
builder.return_code = 0
result = builder.get_result()
self.assertEqual(result.comments, "it works")
self.assertEqual(result.execution_duration, 0.1)
self.assertEqual(result.io_log_filename, 'log')
self.assertEqual(result.outcome, "pass")
self.assertEqual(result.return_code, 0)
# Sanity check: the builder we can re-create is identical
builder2 = result.get_builder()
self.assertEqual(builder, builder2)
def test_io_log_clash(self):
builder = JobResultBuilder()
builder.io_log = [(0, 'stdout', b'hi')]
builder.io_log_filename = 'log'
with self.assertRaises(ValueError):
builder.get_result()
def test_add_comment(self):
builder = JobResultBuilder()
builder.add_comment('first comment') # ;-)
self.assertEqual(builder.comments, 'first comment')
builder.add_comment('second comment')
self.assertEqual(builder.comments, 'first comment\nsecond comment')
def test_get_builder_kwargs(self):
result = JobResultBuilder(outcome='pass').get_result()
self.assertEqual(result.get_builder(outcome='fail').outcome, 'fail')
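The `_TEXT` constant in the IOLogRecordWriter tests above shows the record wire format: one JSON array per line holding `[delay, stream_name, base64(data)]`. A sketch of the round-trip without the plainbox reader/writer classes, reproducing the `_TEXT` sample:

```python
import base64
import json

record = (0.123, 'stdout', b'some\ndata')

# Serialize: the binary payload goes through base64 so it fits in JSON.
text = json.dumps(
    [record[0], record[1],
     base64.standard_b64encode(record[2]).decode('ASCII')],
    separators=(',', ':')) + '\n'
print(text)  # [0.123,"stdout","c29tZQpkYXRh"]

# Deserialize back to the original tuple.
delay, stream_name, payload = json.loads(text)
assert (delay, stream_name, base64.standard_b64decode(payload)) == record
```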
plainbox-0.25/plainbox/impl/providers/stubbox/data/qml-simple.qml
import QtQuick 2.0
import Ubuntu.Components 0.1
import QtQuick.Layouts 1.1
import Plainbox 0.1
QmlJob {
id: root
Component.onCompleted: testingShell.pageStack.push(testPage)
Page {
id: testPage
ColumnLayout {
spacing: units.gu(10)
anchors {
margins: units.gu(5)
fill: parent
}
Button {
Layout.fillWidth: true; Layout.fillHeight: true
text: i18n.tr("Pass")
color: "#38B44A"
onClicked: {
testDone({'outcome': 'pass'});
}
}
Button {
Layout.fillWidth: true; Layout.fillHeight: true
text: i18n.tr("Fail")
color: "#DF382C"
onClicked: {
testDone({"outcome": "fail"});
}
}
}
}
}
plainbox-0.25/plainbox/impl/providers/stubbox/data/all-bytes
(binary test data: every byte value 0x00-0xff; not representable as text)
plainbox-0.25/plainbox/impl/providers/stubbox/data/qml-navigation.qml
import QtQuick 2.0
import Ubuntu.Components 0.1
import QtQuick.Layouts 1.1
Item {
id: root
signal testDone(var test);
property var testingShell;
Component.onCompleted: testingShell.pageStack.push(mainPage)
Page {
id: mainPage
title: i18n.tr("A simple test")
ColumnLayout {
spacing: units.gu(10)
anchors {
margins: units.gu(5)
fill: parent
}
Button {
Layout.fillWidth: true; Layout.fillHeight: true
text: i18n.tr("Next screen")
color: "#38B44A"
onClicked: {
testingShell.pageStack.push(subPage);
}
}
}
}
Page {
id: subPage
visible: false
ColumnLayout {
spacing: units.gu(10)
anchors {
margins: units.gu(5)
fill: parent
}
Text {
text: i18n.tr("You can use toolbar to nagivage back")
}
Button {
Layout.fillWidth: true; Layout.fillHeight: true
text: i18n.tr("Pass")
color: "#38B44A"
onClicked: {
testDone({'outcome': 'pass'});
}
}
Button {
Layout.fillWidth: true; Layout.fillHeight: true
text: i18n.tr("Fail")
color: "#DF382C"
onClicked: {
testDone({"outcome": "fail"});
}
}
}
}
}
plainbox-0.25/plainbox/impl/providers/stubbox/bin/stub_package_list
#!/bin/sh
echo "name: checkbox"
echo ""
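The stub script above emits an RFC 822-style resource record (`name: checkbox`) followed by a blank line, which is how resource jobs report data. A sketch, independent of plainbox, of parsing such output into dicts:

```python
output = "name: checkbox\n\n"

records = []
for chunk in output.split('\n\n'):
    if not chunk.strip():
        continue  # skip the trailing record separator
    record = {}
    for line in chunk.splitlines():
        key, _, value = line.partition(':')
        record[key.strip()] = value.strip()
    records.append(record)

print(records)  # [{'name': 'checkbox'}]
```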
plainbox-0.25/plainbox/impl/providers/stubbox/manage.py
#!/usr/bin/env python3
# This file is part of Checkbox.
#
# Copyright 2012, 2013 Canonical Ltd.
# Written by:
# Zygmunt Krynicki
#
# Checkbox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# Checkbox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Checkbox. If not, see <http://www.gnu.org/licenses/>.
from gettext import bindtextdomain
from gettext import dgettext
from plainbox.impl.providers.special import get_stubbox_def
from plainbox.provider_manager import DevelopCommand
from plainbox.provider_manager import InstallCommand
from plainbox.provider_manager import N_
from plainbox.provider_manager import manage_py_extension
from plainbox.provider_manager import setup
# NOTE: this is not a good example of manage.py as it is internally bound to
# plainbox. Don't just copy paste this as good design, it's *not*.
# Use `plainbox startprovider` if you want to get a provider template to edit.
stubbox_def = get_stubbox_def()
def _(msgid):
"""
manage.py specific gettext that uses the stubbox provider domain
"""
return dgettext(stubbox_def.gettext_domain, msgid)
# This is stubbox_def.description,
# we need it here to extract it as a part of stubbox
N_("StubBox (dummy data for development)")
@manage_py_extension
class DevelopCommandExt(DevelopCommand):
__doc__ = DevelopCommand.__doc__
name = 'develop'
def invoked(self, ns):
print(_("The StubBox provider is special"))
print(_("You don't need to develop it explicitly"))
@manage_py_extension
class InstallCommandExt(InstallCommand):
__doc__ = InstallCommand.__doc__
name = 'install'
def invoked(self, ns):
print(_("The StubBox provider is special"))
print(_("You don't need to install it explicitly"))
if __name__ == "__main__":
if stubbox_def.effective_locale_dir:
bindtextdomain(
stubbox_def.gettext_domain, stubbox_def.effective_locale_dir)
setup(
name=stubbox_def.name,
version=stubbox_def.version,
description=stubbox_def.description,
gettext_domain=stubbox_def.gettext_domain,
strict=False,
)
plainbox-0.25/plainbox/impl/providers/stubbox/units/testplans/all.pxu
id: category-override-test
_name: Category Override Test
_description:
This test plan can be used to verify that category overrides are working
correctly. It assigns the "overridden" category to all the stubbox
jobs starting with stub/.
.
This test plan selects the stub/multilevel job to ensure that generated jobs
are classified correctly. It also selects the stub/true job to check regular
jobs (known in advance).
unit: test plan
include:
stub/multilevel
stub/multilevel_[12]
stub/true
category-overrides:
apply overridden to stub/.*
unit: test plan
id: cert-status-override/plan
_name: Certification Status Override Tests
_description:
This test plan can be used to verify that certification status overrides are
working correctly. This plan selects all of the
cert-status-override/(pass|fail)/-* jobs and assings all of the possible
override values to them. The results can be obtained quickly. Given the right
output format, the desired data should be visible.
include:
cert-status-override/values # NOTE: needed because it cannot be inferred yet
cert-status-override/pass/unspecified certification-status=unspecified
cert-status-override/pass/not-part-of-certification certification-status=not-part-of-certification
cert-status-override/pass/non-blocker certification-status=non-blocker
cert-status-override/pass/blocker certification-status=blocker
cert-status-override/fail/unspecified certification-status=unspecified
cert-status-override/fail/not-part-of-certification certification-status=not-part-of-certification
cert-status-override/fail/non-blocker certification-status=non-blocker
cert-status-override/fail/blocker certification-status=blocker
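Each entry in the include list above pairs a job id (or pattern) with optional field overrides such as certification-status=blocker. As an illustration only (this is not plainbox's actual parser, just a sketch of the line shape), such an entry can be split like this:

```python
def parse_include_entry(line):
    """Split a test plan include entry into (pattern, overrides).

    The first whitespace-separated token is the job id or pattern; any
    remaining key=value tokens become field overrides.
    """
    tokens = line.split()
    pattern = tokens[0]
    overrides = dict(token.split("=", 1) for token in tokens[1:])
    return pattern, overrides
```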
unit: template
template-resource: cert-status-override/values
template-unit: job
id: cert-status-override/pass/{status}
_summary: A job that always succeeds (unique for {status})
_description:
This test always passes. This test is expected to have the overridden
certification-status value of "{status}"
plugin: shell
command: : # {status}
flags: preserve-locale
estimated_duration: 0.1
unit: template
template-resource: cert-status-override/values
template-unit: job
id: cert-status-override/fail/{status}
_summary: A job that always fails (unique for {status})
_description:
This test always fails. This test is expected to have the overridden
certification-status value of "{status}"
plugin: shell
command: ! : # {status}
flags: preserve-locale
estimated_duration: 0.1
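The template units above are instantiated once per record of their template-resource, with {status} placeholders filled from the record. A minimal sketch of that substitution, assuming Python str.format semantics (the real plainbox template engine performs considerably more validation, so treat this as illustrative):

```python
def instantiate_template(template_fields, record):
    """Instantiate a template unit's fields against one resource record.

    Placeholders like {status} are replaced with the record's values,
    mirroring str.format keyword substitution.
    """
    return {
        field: value.format(**record)
        for field, value in template_fields.items()
    }
```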
id: cert-status-override/values
_summary: A constant resource that enumerates all certification-status values
_description:
This resource simply enumerates all of the values of the certification-status
attribute as subsequent records containing the "status" key mapping to the
actual values.
unit: job
plugin: resource
command:
echo 'status: unspecified'
echo ''
echo 'status: not-part-of-certification'
echo ''
echo 'status: non-blocker'
echo ''
echo 'status: blocker'
echo ''
flags: preserve-locale
estimated_duration: 0.1
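The resource command above emits records as "key: value" lines separated by blank lines. A small sketch of how such output can be parsed into a list of dictionaries (an illustration of the record format, not plainbox's actual resource parser):

```python
def parse_resource_records(text):
    """Split RFC822-style resource output into a list of dicts.

    Records are separated by blank lines; each non-blank line holds one
    "key: value" pair.
    """
    records = []
    current = {}
    for line in text.splitlines():
        if not line.strip():
            # A blank line closes the current record, if any.
            if current:
                records.append(current)
                current = {}
            continue
        key, _, value = line.partition(":")
        current[key.strip()] = value.strip()
    if current:
        records.append(current)
    return records
```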
plainbox-0.25/plainbox/impl/providers/stubbox/units/jobs/representative.pxu
# Definitions of jobs that are useful for testing. Whenever you need a scenario
# for testing that involves realistic jobs and you don't want to painstakingly
# define them manually, just load stubbox and get all jobs matching the pattern
# 'representative/plugin/.*'.
# NOTE: all of the jobs below can be simplified to a template once static
# resources are available.
id: representative/plugin/shell
_summary: Job with plugin=shell
_description: Job with plugin=shell
plugin: shell
flags: preserve-locale
command: true;
estimated_duration: 0.1
category_id: plugin-representative
id: representative/plugin/resource
_summary: Job with plugin=resource
_description: Job with plugin=resource
plugin: resource
flags: preserve-locale
command:
echo "key_a: value-a-1"
echo "key_b: value-b-1"
echo
echo "key_a: value-a-2"
echo "key_b: value-b-2"
estimated_duration: 0.1
category_id: plugin-representative
id: representative/plugin/local
_summary: Job with plugin=local
_description: Job with plugin=local
plugin: local
flags: preserve-locale
command: :
estimated_duration: 0.1
category_id: plugin-representative
id: representative/plugin/attachment
_summary: Job with plugin=attachment
_description: Job with plugin=attachment
plugin: attachment
flags: preserve-locale
command:
echo "Line 1"
echo "Line 2"
echo "Line 3 (last)"
estimated_duration: 0.1
category_id: plugin-representative
id: representative/plugin/user-interact
_summary: Job with plugin=user-interact
_description: Job with plugin=user-interact
plugin: user-interact
flags: preserve-locale
command:
echo "(interaction)"
estimated_duration: 30
category_id: plugin-representative
id: representative/plugin/user-verify
_summary: Job with plugin=user-verify
_description: Job with plugin=user-verify
plugin: user-verify
flags: preserve-locale
command:
echo "(verification)"
estimated_duration: 30
category_id: plugin-representative
id: representative/plugin/user-interact-verify
_summary: Job with plugin=user-interact-verify
_description: Job with plugin=user-interact-verify
plugin: user-interact-verify
flags: preserve-locale
command:
echo "(interaction)"
echo "(verification)"
estimated_duration: 30
category_id: plugin-representative
id: representative/plugin/manual
_summary: Job with plugin=manual
_description: Job with plugin=manual
plugin: manual
estimated_duration: 1
category_id: plugin-representative
id: representative/plugin/qml
_summary: Job with plugin=qml
_description: Job with plugin=qml
plugin: qml
qml_file: qml-simple.qml
flags: preserve-locale
estimated_duration: 10
category_id: plugin-representative
plainbox-0.25/plainbox/impl/providers/stubbox/units/jobs/local.pxu
id: stub/local/true
_summary: A job generated by another job
# TRANSLATORS: don't translate 'local' below.
_description:
Check success result from shell test case (generated from a local job)
plugin: shell
flags: preserve-locale
command: true
estimated_duration: 0.1
plainbox-0.25/plainbox/impl/providers/stubbox/units/jobs/multilevel.pxu
id: stub/multilevel
_summary: A generated generator job that generates two more jobs
_description: Multilevel tests
plugin: local
flags: preserve-locale
command:
cat <<'EOF'
id: stub/multilevel_1
_summary: Generated multi-level job 1
_description: This is just a sample multilevel test. Test 1.
plugin: shell
command: echo 1
estimated_duration: 0.1
EOF
echo ""
cat <<'EOF'
id: stub/multilevel_2
_summary: Generated multi-level job 2
_description: This is just a sample multilevel test. Test 2.
plugin: shell
command: echo 2
estimated_duration: 0.1
EOF
echo ""
estimated_duration: 0.1
plainbox-0.25/plainbox/impl/providers/stubbox/units/jobs/win.pxu
id: stub/win32
_summary: A windows specific job
_description:
Check success result from win32 shell
plugin: shell
flags: preserve-locale win32
command: echo "Windows!"
estimated_duration: 0.1
category_id: plugin-representative
plainbox-0.25/plainbox/impl/providers/stubbox/units/jobs/categories.pxu
id: plugin-representative
unit: category
_name: Representative Jobs (per "plugin" value)
id: split-field-representative
unit: category
_name: Representative Jobs with description fields split
id: dependency-chain
unit: category
_name: Dependency Chaining
id: long
unit: category
_name: Long Jobs
id: misc
unit: category
_name: Miscellaneous Tests
id: generated
unit: category
_name: Generated Tests
id: superuser
unit: category
_name: Elevated Privilege Tests
id: overridden
unit: category
_name: Overridden Category
id: qml-native
unit: category
_name: QML-native tests
plainbox-0.25/plainbox/impl/providers/stubbox/units/jobs/stub.pxu
id: stub/true
_summary: Passing shell job
_description:
Check success result from shell test case
plugin: shell
flags: preserve-locale
command: true
estimated_duration: 0.1
category_id: plugin-representative
id: stub/false
_summary: Failing shell job
_description:
Check failed result from shell test case
plugin: shell
flags: preserve-locale
command: false
estimated_duration: 0.1
category_id: plugin-representative
id: stub/crash
_summary: A crashing shell job
_description:
Check crash result from a shell test case (killed with SIGTERM)
plugin: shell
flags: preserve-locale
command: kill -TERM $$
estimated_duration: 0.1
category_id: plugin-representative
id: stub/dependency/good
_summary: Passing shell job depending on a passing shell job
_description:
Check job is executed when dependency succeeds
plugin: shell
depends: stub/true
flags: preserve-locale
command: true
estimated_duration: 0.1
category_id: dependency-chain
plugin: shell
id: stub/dependency/bad
depends: stub/false
flags: preserve-locale
command: true
_summary: Passing shell job depending on a failing shell job
_description:
Check job result is set to uninitiated when dependency fails
estimated_duration: 0.1
category_id: dependency-chain
id: stub/sleep-60
_summary: Job sleeping for sixty seconds
_description: Sleep for sixty seconds
plugin: shell
flags: preserve-locale
command: sleep 60
estimated_duration: 60
category_id: long
id: stub/kill-ppid-if-KILLER-set
_summary: Job killing the parent, if KILLER=yes
_description: Kill $PPID if $KILLER is set to yes
plugin: shell
# XXX: why is this dependency here?
depends: stub/multilevel
flags: preserve-locale
command: if [ "$KILLER" == "yes" ]; then kill -9 $PPID; fi
estimated_duration: 0.1
category_id: misc
# FIXME: stub/package once resource_object is supported
id: stub_package
_summary: Job determining a fake list of packages (1)
_description:
This job generates a resource object with what looks
like a list of packages.
.
The actual packages are fake
plugin: resource
flags: preserve-locale
command: stub_package_list
estimated_duration: 0.5
category_id: plugin-representative
id: stub_package2
_summary: Job determining a fake list of packages (2)
_description:
This job generates a resource object with what looks
like a list of packages.
.
The actual packages are fake
plugin: resource
flags: preserve-locale
command: stub_package_list
estimated_duration: 0.5
id: stub/requirement/good
_summary: Passing shell job depending on an available resource
_description:
Check job is executed when requirements are met
plugin: shell
requires: stub_package.name == "checkbox"
flags: preserve-locale
command: true
estimated_duration: 0.1
category_id: dependency-chain
id: stub/requirement/bad
_summary: Passing shell job depending on an unavailable resource
_description:
Check job result is set to "not required on this system" when requirements are not met
plugin: shell
requires: stub_package.name == "unknown-package"
flags: preserve-locale
command: true
estimated_duration: 0.1
category_id: dependency-chain
id: stub/manual
_summary: A simple manual job
_description:
PURPOSE:
This test checks that the manual plugin works fine
STEPS:
1. Add a comment
2. Set the result as passed
VERIFICATION:
Check that in the report the result is passed and the comment is displayed
plugin: manual
estimated_duration: 30
category_id: plugin-representative
id: stub/split-fields/manual
_summary: A simple manual job using finer description fields
_purpose:
This test checks that the manual plugin works fine
_steps:
1. Add a comment
2. Set the result as passed
_verification:
Check that in the report the result is passed and the comment is displayed
plugin: manual
estimated_duration: 30
category_id: split-field-representative
id: stub/user-interact
_summary: A simple user interaction job
_description:
PURPOSE:
This test checks that the user-interact plugin works fine
STEPS:
1. Read this description
2. Press the test button
VERIFICATION:
Check that in the report the result is passed
plugin: user-interact
flags: preserve-locale
command: true
estimated_duration: 30
category_id: plugin-representative
id: stub/split-fields/user-interact
_summary: User-interact job using finer description fields
_purpose:
This is a purpose part of test description
_steps:
1. First step in the user-interact job
2. Second step in the user-interact job
_verification:
Verification part of test description
plugin: user-interact
flags: preserve-locale
command: true
estimated_duration: 30
category_id: split-field-representative
id: stub/user-verify
_summary: A simple user verification job
_description:
PURPOSE:
This test checks that the user-verify plugin works fine
STEPS:
1. Read this description
2. Ensure that the command has been started automatically
3. Do not press the test button
4. Look at the output and determine the outcome of the test
VERIFICATION:
The command should have printed "Please select 'pass'"
plugin: user-verify
flags: preserve-locale
command: echo "Please select 'pass'"
estimated_duration: 30
category_id: plugin-representative
id: stub/split-fields/user-verify
_summary: User-verify job using finer description fields
_purpose:
This test checks that the user-verify plugin works fine and that
description field is split properly
_steps:
1. Read this description
2. Ensure that the command has been started automatically
3. Do not press the test button
4. Look at the output and determine the outcome of the test
_verification:
The command should have printed "Please select 'pass'"
plugin: user-verify
flags: preserve-locale
command: echo "Please select 'pass'"
estimated_duration: 30
category_id: split-field-representative
id: stub/user-interact-verify
_summary: A simple user interaction and verification job
_description:
PURPOSE:
This test checks that the user-interact-verify plugin works fine
STEPS:
1. Read this description
2. Ensure that the command has not been started yet
3. Press the test button
4. Look at the output and determine the outcome of the test
VERIFICATION:
The command should have printed "Please select 'pass'"
plugin: user-interact-verify
flags: preserve-locale
command: echo "Please select 'pass'"
estimated_duration: 25
category_id: plugin-representative
id: stub/split-fields/user-interact-verify
_summary: A simple user interaction and verification job using finer
description fields
_purpose:
This test checks that the user-interact-verify plugin works fine
_steps:
1. Read this description
2. Ensure that the command has not been started yet
3. Press the test button
4. Look at the output and determine the outcome of the test
_verification:
The command should have printed "Please select 'pass'"
plugin: user-interact-verify
flags: preserve-locale
command: echo "Please select 'pass'"
estimated_duration: 25
category_id: split-field-representative
id: stub/user-interact-verify-passing
_summary: A suggested-passing user-verification-interaction job
_description:
PURPOSE:
This test checks that the application user interface auto-suggests 'pass'
as the outcome of a test for user-interact-verify jobs that have a command
which completes successfully.
STEPS:
1. Read this description
2. Ensure that the command has not been started yet
3. Press the test button
4. Confirm the auto-suggested value
VERIFICATION:
The auto suggested value should have been 'pass'
plugin: user-interact-verify
flags: preserve-locale
command: true
estimated_duration: 25
category_id: plugin-representative
id: stub/split-fields/user-interact-verify-passing
_summary: A suggested-passing user-verification-interaction job using finer
description fields
_purpose:
This test checks that the application user interface auto-suggests 'pass'
as the outcome of a test for user-interact-verify jobs that have a command
which completes successfully.
_steps:
1. Read this description
2. Ensure that the command has not been started yet
3. Press the test button
4. Confirm the auto-suggested value
_verification:
The auto suggested value should have been 'pass'
plugin: user-interact-verify
flags: preserve-locale
command: true
estimated_duration: 25
category_id: split-field-representative
id: stub/user-interact-verify-failing
_summary: A suggested-failing user-verification-interaction job
_description:
PURPOSE:
This test checks that the application user interface auto-suggests 'fail'
as the outcome of a test for user-interact-verify jobs that have a command
which completes unsuccessfully.
STEPS:
1. Read this description
2. Ensure that the command has not been started yet
3. Press the test button
4. Confirm the auto-suggested value
VERIFICATION:
The auto suggested value should have been 'fail'
plugin: user-interact-verify
flags: preserve-locale
command: false
estimated_duration: 25
category_id: plugin-representative
id: stub/split-fields/user-interact-verify-failing
_summary: A suggested-failing user-verification-interaction job using finer
description fields
_purpose:
This test checks that the application user interface auto-suggests 'fail'
as the outcome of a test for user-interact-verify jobs that have a command
which completes unsuccessfully.
_steps:
1. Read this description
2. Ensure that the command has not been started yet
3. Press the test button
4. Confirm the auto-suggested value
_verification:
The auto suggested value should have been 'fail'
plugin: user-interact-verify
flags: preserve-locale
command: false
estimated_duration: 25
category_id: split-field-representative
id: __local__
_summary: A job generating one more job
_description:
This job generates the stub/local/true job
plugin: local
flags: preserve-locale
command:
shopt -s extglob
cat $PLAINBOX_PROVIDER_UNITS/jobs/local.pxu
estimated_duration: 0.1
category_id: plugin-representative
id: __multilevel__
_summary: A job generating more generator jobs
_description:
This job generates stub/multilevel which in turn can
generate stub/multilevel_1 and stub/multilevel_2
plugin: local
flags: preserve-locale
command:
shopt -s extglob
cat $PLAINBOX_PROVIDER_UNITS/jobs/multilevel.pxu
estimated_duration: 0.1
id: stub/root
_summary: A job that runs as root
_description:
Check that becoming root works
plugin: shell
user: root
flags: preserve-locale
command: test $(id -u) -eq 0
estimated_duration: 0.1
category_id: superuser
id: stub/text-attachment
_summary: A job that attaches a plain text file
_description:
This job attaches a simple, fixed, piece of UTF-8 encoded text as attachment
plugin: attachment
flags: preserve-locale
# The subsequent polish text is a typical 'the quick brown fox...' text that
# is used just because it's likely to expose any non-ASCII text handling bugs.
command:
echo "zazółć gęślą jaźń"
estimated_duration: 0.1
category_id: plugin-representative
id: stub/binary-attachment
_summary: A job that attaches representative binary data
_description:
This job generates bytes 0 through 255 to test handling of bytes that may
occur but be mishandled by our I/O processing engine.
plugin: attachment
flags: preserve-locale
# The all-bytes file can be generated with the following piece of bash but
# I wanted to avoid reliance on the obscure escape processing for
# portability:
# for i in $(seq 0 255); do
# echo -n -e "\x$(printf %x $i)"
# done
command:
cat $PLAINBOX_PROVIDER_DATA/all-bytes
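The commented-out bash loop above can be expressed portably in Python. This sketch writes the same 256-byte sequence to a caller-supplied path; the actual provider ships the file as pre-built data, so this is illustrative only:

```python
def write_all_bytes(path):
    """Write one copy of every byte value 0x00 through 0xff to *path*."""
    with open(path, "wb") as stream:
        stream.write(bytes(range(256)))
```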
id: stub/large-text-attachment
_summary: A job that attaches a plain text file
_description:
This job attaches a large, repeated sequence of UTF-8 encoded text as
attachment. It helps to stress the I/O handling code that might not happen in
a trivial (short / small) attachment.
plugin: attachment
flags: preserve-locale
# The subsequent polish text is a typical 'the quick brown fox...' text that
# is used just because it's likely to expose any non-ASCII text handling bugs.
command:
for i in $(seq 100000); do
echo "$i: zazółć gęślą jaźń"
done
estimated_duration: 0.1
category_id: stress
id: stub/large-binary-attachment
_summary: A job that attaches representative binary data
_description:
This job attaches 16 GB of zeros to see if we can handle (mostly on the memory
front) such types of attachments, e.g. someone attaching a swap file, or
something equally unexpected and very large.
plugin: attachment
flags: preserve-locale
command:
dd if=/dev/zero bs=1M count=16384
estimated_duration: 750
category_id: stress
id: stub/qml-simple
_summary: A QML job that runs simple GUI
_description:
This job displays a GUI that has two buttons determining outcome of the test.
It's similar to user-interact-verify, but this job is QML native.
plugin: qml
qml_file: qml-simple.qml
flags: preserve-locale
estimated_duration: 10
category_id: qml-native
id: stub/qml-navigation
_summary: A QML job that has its own navigation
_description:
This job displays a GUI with multiple screens using its own (independent) flow
control mechanism (page stack).
plugin: qml
qml_file: qml-navigation.qml
flags: preserve-locale
estimated_duration: 20
category_id: qml-native
plainbox-0.25/plainbox/impl/providers/stubbox/whitelists/stub1.whitelist
stub/true
stub/dependency/bad
# stub_package
stub/requirement/good
stub/requirement/bad
stub/multilevel
stub/multilevel.*
plainbox-0.25/plainbox/impl/providers/stubbox/whitelists/stub.whitelist
# Shell job that always works
stub/true
# Shell job that always fails
stub/false
# User-* Job collection
stub/user-verify
stub/user-interact
stub/user-interact-verify
# A manual job
stub/manual
# A shell job with a dependency that always works
stub/dependency/good
# A shell job with a dependency that always fails
stub/dependency/bad
# A shell job that requires a resource which is available
stub/requirement/good
# A shell job that requires a resource which is not available
stub/requirement/bad
__local__
stub/local/true
stub/multilevel
stub/multilevel.*
stub/root
plainbox-0.25/plainbox/impl/providers/stubbox/whitelists/stub2.whitelist
stub/false
stub/dependency/good
stub/dependency/bad
# stub_package
stub/manual
__local__
stub/local/true
plainbox-0.25/plainbox/impl/providers/stubbox/po/zh_TW.po
# Chinese (Traditional) translation for checkbox
# Copyright (c) 2014 Rosetta Contributors and Canonical Ltd 2014
# This file is distributed under the same license as the checkbox package.
# FIRST AUTHOR , 2014.
#
msgid ""
msgstr ""
"Project-Id-Version: checkbox\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2014-03-17 18:33+0100\n"
"PO-Revision-Date: 2014-02-21 11:22+0000\n"
"Last-Translator: Taihsiang Ho \n"
"Language-Team: Chinese (Traditional) \n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
#. summary
#: ../jobs/local.txt.in:2
msgid "A job generated by another job"
msgstr ""
#. description
#: ../jobs/local.txt.in:4
msgid "Check success result from shell test case (generated from a local job)"
msgstr ""
#. summary
#: ../jobs/multilevel.txt.in:2
msgid "A generated generator job that generates two more jobs"
msgstr ""
#. description
#: ../jobs/multilevel.txt.in:3
msgid "Multilevel tests"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:2
msgid "Passing shell job"
msgstr ""
#. description
#: ../jobs/stub.txt.in:3
msgid "Check success result from shell test case"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:9
msgid "Failing shell job"
msgstr ""
#. description
#: ../jobs/stub.txt.in:10
msgid "Check failed result from shell test case"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:16
msgid "Passing shell job depending on a passing shell job"
msgstr ""
#. description
#: ../jobs/stub.txt.in:17
msgid "Check job is executed when dependency succeeds"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:27
msgid "Passing shell job depending on a failing shell job"
msgstr ""
#. description
#: ../jobs/stub.txt.in:28
msgid "Check job result is set to uninitiated when dependency fails"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:32
#, fuzzy
msgid "Job sleeping for sixty seconds "
msgstr "休眠六十秒鐘"
#. description
#: ../jobs/stub.txt.in:33
msgid "Sleep for sixty seconds"
msgstr "休眠六十秒鐘"
#. summary
#: ../jobs/stub.txt.in:38
msgid "Job killing the parent, if KILLER=yes"
msgstr ""
#. description
#: ../jobs/stub.txt.in:39
msgid "Kill $PPID if $KILLER is set to yes"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:47
msgid "Job determining a fake list of packages"
msgstr ""
#. description
#: ../jobs/stub.txt.in:48
msgid ""
" This job generates a resource object with what looks\n"
" like a list of packages.\n"
" .\n"
" The actual packages are fake"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:57
msgid "Passing shell job depending on an availalbe resource"
msgstr ""
#. description
#: ../jobs/stub.txt.in:58
msgid "Check job is executed when requirements are met"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:65
msgid "Passing shell job depending on an unavailable resource"
msgstr ""
#. description
#: ../jobs/stub.txt.in:66
msgid ""
"Check job result is set to \"not required on this system\" when requirements "
"are not met"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:73
msgid "A simple manual job"
msgstr ""
#. description
#: ../jobs/stub.txt.in:74
msgid ""
"PURPOSE:\n"
" This test checks that the manual plugin works fine\n"
"STEPS:\n"
" 1. Add a comment\n"
" 2. Set the result as passed\n"
"VERIFICATION:\n"
" Check that in the report the result is passed and the comment is "
"displayed"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:85
msgid "A simple user interaction job"
msgstr ""
#. description
#: ../jobs/stub.txt.in:86
msgid ""
"PURPOSE:\n"
" This test checks that the user-interact plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Press the test button\n"
"VERIFICATION:\n"
" Check that in the report the result is passed"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:98
msgid "A simple user verification job"
msgstr ""
#. description
#: ../jobs/stub.txt.in:99
msgid ""
"PURPOSE:\n"
" This test checks that the user-verify plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has been started automatically\n"
" 3. Do not press the test button\n"
" 4. Look at the output and determine the outcome of the test\n"
"VERIFICATION:\n"
" The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../jobs/stub.txt.in:113
msgid "A simple user verification-interaction job"
msgstr ""
#. description
#: ../jobs/stub.txt.in:114
msgid ""
"PURPOSE:\n"
" This test checks that the user-interact-verify plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Look at the output and determine the outcome of the test\n"
"VERIFICATION:\n"
" The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../jobs/stub.txt.in:128
msgid "A job generating one more job"
msgstr ""
#. description
#: ../jobs/stub.txt.in:129
msgid " This job generates the stub/local/true job"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:137
msgid "A job generating more generator jobs"
msgstr ""
#. description
#: ../jobs/stub.txt.in:138
msgid ""
" This job generates stub/multilevel which in turn can\n"
" generate stub/multilevel_1 and stub/multilevel_2"
msgstr ""
#. summary
#: ../jobs/stub.txt.in:147
msgid "A job that runs as root"
msgstr ""
#. description
#: ../jobs/stub.txt.in:148
msgid "Check that becoming root works"
msgstr ""
#. This is stubbox_def.description, we need it here to extract is as a part of
#. stubbox
#: .././manage.py:31
msgid "StubBox (dummy data for development)"
msgstr ""
plainbox-0.25/plainbox/impl/providers/stubbox/po/ug.po
# Uyghur translation for checkbox
# Copyright (c) 2015 Rosetta Contributors and Canonical Ltd 2015
# This file is distributed under the same license as the checkbox package.
# FIRST AUTHOR , 2015.
#
msgid ""
msgstr ""
"Project-Id-Version: checkbox\n"
"Report-Msgid-Bugs-To: FULL NAME \n"
"POT-Creation-Date: 2014-12-03 14:33+0100\n"
"PO-Revision-Date: 2015-10-28 14:21+0000\n"
"Last-Translator: FULL NAME \n"
"Language-Team: Uyghur \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2015-11-28 04:35+0000\n"
"X-Generator: Launchpad (build 17850)\n"
#. name
#: ../units/jobs/categories.pxu:3
msgid "Representative Jobs (per \"plugin\" value)"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:7
msgid "Representative Jobs with description fields split"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:11
msgid "Dependency Chaining"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:15
msgid "Long Jobs"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:19
msgid "Miscellaneous Tests"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:23
msgid "Generated Tests"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:27
msgid "Elevated Privilege Tests"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:31
msgid "Overridden Category"
msgstr ""
#. summary
#: ../units/jobs/local.pxu:2
msgid "A job generated by another job"
msgstr ""
#. description
#: ../units/jobs/local.pxu:4
msgid ""
"Check success result from shell test case (generated from a local job)"
msgstr ""
#. summary
#: ../units/jobs/multilevel.pxu:2
msgid "A generated generator job that generates two more jobs"
msgstr ""
#. description
#: ../units/jobs/multilevel.pxu:3
msgid "Multilevel tests"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:2
msgid "Passing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:3
msgid "Check success result from shell test case"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:12
msgid "Failing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:13
msgid "Check failed result from shell test case"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:22
msgid "Passing shell job depending on a passing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:23
msgid "Check job is executed when dependency succeeds"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:37
msgid "Passing shell job depending on a failing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:38
msgid "Check job result is set to uninitiated when dependency fails"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:44
msgid "Job sleeping for sixty seconds"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:45
msgid "Sleep for sixty seconds"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:53
msgid "Job killing the parent, if KILLER=yes"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:54
msgid "Kill $PPID if $KILLER is set to yes"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:65
msgid "Job determining a fake list of packages (1)"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:78
msgid "Job determining a fake list of packages (2)"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:79
msgid ""
" This job generates a resource object with what looks\n"
" like a list of packages.\n"
" .\n"
" The actual packages are fake"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:90
msgid "Passing shell job depending on an available resource"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:91
msgid "Check job is executed when requirements are met"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:101
msgid "Passing shell job depending on an unavailable resource"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:102
msgid ""
"Check job result is set to \"not required on this system\" when requirements "
"are not met"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:112
msgid "A simple manual job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:113
msgid ""
"PURPOSE:\n"
" This test checks that the manual plugin works fine\n"
"STEPS:\n"
" 1. Add a comment\n"
" 2. Set the result as passed\n"
"VERIFICATION:\n"
" Check that in the report the result is passed and the comment is "
"displayed"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:126
msgid "A simple manual job using finer description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:127
msgid " This test checks that the manual plugin works fine"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:129
msgid ""
" 1. Add a comment\n"
" 2. Set the result as passed"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:132
msgid ""
" Check that in the report the result is passed and the comment is displayed"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:139
msgid "A simple user interaction job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:140
msgid ""
"PURPOSE:\n"
" This test checks that the user-interact plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Press the test button\n"
"VERIFICATION:\n"
" Check that in the report the result is passed"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:155
msgid "User-interact job using finer description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:156
msgid " This is a purpose part of test description"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:158
msgid ""
" 1. First step in the user-interact job\n"
" 2. Second step in the user-interact job"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:161
msgid " Verification part of test description"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:170
msgid "A simple user verification job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:171
msgid ""
"PURPOSE:\n"
" This test checks that the user-verify plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has been started automatically\n"
" 3. Do not press the test button\n"
" 4. Look at the output and determine the outcome of the test\n"
"VERIFICATION:\n"
" The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:188
msgid "User-verify job using finer description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:189
msgid ""
" This test checks that the user-verify plugin works fine and that\n"
" description field is split properly"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:192
msgid ""
" 1. Read this description\n"
" 2. Ensure that the command has been started automatically\n"
" 3. Do not press the test button\n"
" 4. Look at the output and determine the outcome of the test"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:206
msgid "A simple user interaction and verification job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:207
msgid ""
"PURPOSE:\n"
" This test checks that the user-interact-verify plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Look at the output and determine the outcome of the test\n"
"VERIFICATION:\n"
" The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:224
msgid "A simple user interaction and verification job using finer"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:226
msgid " This test checks that the user-interact-verify plugin works fine"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:228
msgid ""
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Look at the output and determine the outcome of the test"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:233
msgid " The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:242
msgid "A suggested-passing user-verification-interaction job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:243
msgid ""
"PURPOSE:\n"
" This test checks that the application user interface auto-suggests "
"'pass'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes successfully.\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Confirm the auto-suggested value\n"
"VERIFICATION:\n"
" The auto suggested value should have been 'pass'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:262
msgid "A suggested-passing user-verification-interaction job using finer"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:264
msgid ""
" This test checks that the application user interface auto-suggests "
"'pass'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes successfully."
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:273
msgid " The auto suggested value should have been 'pass'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:282
msgid "A suggested-failing user-verification-interaction job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:283
msgid ""
"PURPOSE:\n"
" This test checks that the application user interface auto-suggests "
"'fail'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes unsuccessfully.\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Confirm the auto-suggested value\n"
"VERIFICATION:\n"
" The auto suggested value should have been 'fail'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:302
msgid "A suggested-failing user-verification-interaction job using finer"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:302
msgid " description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:304
msgid ""
" This test checks that the application user interface auto-suggests "
"'fail'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes unsuccessfully."
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:308
msgid ""
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Confirm the auto-suggested value"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:313
msgid " The auto suggested value should have been 'fail'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:322
msgid "A job generating one more job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:323
msgid " This job generates the stub/local/true job"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:334
msgid "A job generating more generator jobs"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:335
msgid ""
" This job generates stub/multilevel which in turn can\n"
" generate stub/multilevel_1 and stub/multilevel_2"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:346
msgid "A job that runs as root"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:347
msgid "Check that becoming root works"
msgstr ""
#. summary
#: ../units/jobs/win.pxu:2
msgid "A windows specific job"
msgstr ""
#. description
#: ../units/jobs/win.pxu:3
msgid "Check success result from win32 shell"
msgstr ""
#. name
#: ../units/testplans/all.pxu:2
msgid "Category Override Test"
msgstr ""
#. description
#: ../units/testplans/all.pxu:3
msgid ""
"This test plan can be used to verify that category overrides are working "
"correctly. It is assigning the \"overridden\" category to all the stubbox "
"jobs starting with stub/."
msgstr ""
#. description
#: ../units/testplans/all.pxu:3
msgid ""
"This test plan selects the stub/multilevel job to ensure that generated jobs "
"are classified correctly. It also selects the stub/true job to check regular "
"jobs (known in advance)."
msgstr ""
#. This is stubbox_def.description,
#. we need it here to extract it as a part of stubbox
#: .././manage.py:45
msgid "StubBox (dummy data for development)"
msgstr ""
#: .././manage.py:55 .././manage.py:66
msgid "The StubBox provider is special"
msgstr ""
#: .././manage.py:56
msgid "You don't need to develop it explicitly"
msgstr ""
#: .././manage.py:67
msgid "You don't need to install it explicitly"
msgstr ""
plainbox-0.25/plainbox/impl/providers/stubbox/po/de.po 0000664 0001750 0001750 00000031761 12627266441 023724 0 ustar pierre pierre 0000000 0000000
# German translation for plainbox
# Copyright (c) 2014 Rosetta Contributors and Canonical Ltd 2014
# This file is distributed under the same license as the plainbox package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2014.
#
msgid ""
msgstr ""
"Project-Id-Version: plainbox\n"
"Report-Msgid-Bugs-To: FULL NAME <EMAIL@ADDRESS>\n"
"POT-Creation-Date: 2014-12-03 14:33+0100\n"
"PO-Revision-Date: 2014-03-30 11:31+0000\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: German <de@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2015-11-28 04:35+0000\n"
"X-Generator: Launchpad (build 17850)\n"
#. name
#: ../units/jobs/categories.pxu:3
msgid "Representative Jobs (per \"plugin\" value)"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:7
msgid "Representative Jobs with description fields split"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:11
msgid "Dependency Chaining"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:15
msgid "Long Jobs"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:19
msgid "Miscellaneous Tests"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:23
msgid "Generated Tests"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:27
msgid "Elevated Privilege Tests"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:31
msgid "Overridden Category"
msgstr ""
#. summary
#: ../units/jobs/local.pxu:2
msgid "A job generated by another job"
msgstr ""
#. description
#: ../units/jobs/local.pxu:4
msgid ""
"Check success result from shell test case (generated from a local job)"
msgstr ""
#. summary
#: ../units/jobs/multilevel.pxu:2
msgid "A generated generator job that generates two more jobs"
msgstr ""
#. description
#: ../units/jobs/multilevel.pxu:3
msgid "Multilevel tests"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:2
msgid "Passing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:3
msgid "Check success result from shell test case"
msgstr "Überprüft, ob der Shell-Test das Ergebnis »Erfolgreich« liefert."
#. summary
#: ../units/jobs/stub.pxu:12
msgid "Failing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:13
msgid "Check failed result from shell test case"
msgstr "Überprüft, ob der Shell-Test das Ergebnis »Gescheitert« liefert."
#. summary
#: ../units/jobs/stub.pxu:22
msgid "Passing shell job depending on a passing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:23
msgid "Check job is executed when dependency succeeds"
msgstr ""
"Überprüft, ob der Auftrag ausgeführt wird, wenn die Abhängigkeiten erfüllt "
"sind."
#. summary
#: ../units/jobs/stub.pxu:37
msgid "Passing shell job depending on a failing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:38
msgid "Check job result is set to uninitiated when dependency fails"
msgstr ""
"Überprüft, ob das Ergebnis des Auftrags »nicht ausgeführt« ist, wenn "
"Abhängigkeiten nicht erfüllt sind."
#. summary
#: ../units/jobs/stub.pxu:44
msgid "Job sleeping for sixty seconds"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:45
msgid "Sleep for sixty seconds"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:53
msgid "Job killing the parent, if KILLER=yes"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:54
msgid "Kill $PPID if $KILLER is set to yes"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:65
msgid "Job determining a fake list of packages (1)"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:78
msgid "Job determining a fake list of packages (2)"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:79
msgid ""
" This job generates a resource object with what looks\n"
" like a list of packages.\n"
" .\n"
" The actual packages are fake"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:90
msgid "Passing shell job depending on an available resource"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:91
msgid "Check job is executed when requirements are met"
msgstr ""
"Überprüft, ob der Auftrag ausgeführt wird, wenn die Erfordernisse gegeben "
"sind."
#. summary
#: ../units/jobs/stub.pxu:101
msgid "Passing shell job depending on an unavailable resource"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:102
msgid ""
"Check job result is set to \"not required on this system\" when requirements "
"are not met"
msgstr ""
"Überprüft, ob das Ergebnis des Auftrags »auf diesem System nicht "
"erforderlich« ist, wenn die Erfordernisse nicht gegeben sind."
#. summary
#: ../units/jobs/stub.pxu:112
msgid "A simple manual job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:113
msgid ""
"PURPOSE:\n"
" This test checks that the manual plugin works fine\n"
"STEPS:\n"
" 1. Add a comment\n"
" 2. Set the result as passed\n"
"VERIFICATION:\n"
" Check that in the report the result is passed and the comment is "
"displayed"
msgstr ""
"ZWECK:\n"
" Dieser Test überprüft, ob die Erweiterung »Manuell« einwandfrei "
"funktioniert.\n"
"DURCHFÜHRUNG:\n"
" 1. Fügen Sie eine Bemerkung hinzu.\n"
" 2. Markieren Sie das Ergebnis als bestanden.\n"
"ÜBERPRÜFUNG:\n"
" Überprüfen Sie im Bericht, ob das Ergebnis des Tests »Bestanden« ist und "
"die eingegebene Bemerkung angezeigt wird."
#. summary
#: ../units/jobs/stub.pxu:126
msgid "A simple manual job using finer description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:127
msgid " This test checks that the manual plugin works fine"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:129
msgid ""
" 1. Add a comment\n"
" 2. Set the result as passed"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:132
msgid ""
" Check that in the report the result is passed and the comment is displayed"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:139
msgid "A simple user interaction job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:140
msgid ""
"PURPOSE:\n"
" This test checks that the user-interact plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Press the test button\n"
"VERIFICATION:\n"
" Check that in the report the result is passed"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:155
msgid "User-interact job using finer description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:156
msgid " This is a purpose part of test description"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:158
msgid ""
" 1. First step in the user-interact job\n"
" 2. Second step in the user-interact job"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:161
msgid " Verification part of test description"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:170
msgid "A simple user verification job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:171
msgid ""
"PURPOSE:\n"
" This test checks that the user-verify plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has been started automatically\n"
" 3. Do not press the test button\n"
" 4. Look at the output and determine the outcome of the test\n"
"VERIFICATION:\n"
" The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:188
msgid "User-verify job using finer description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:189
msgid ""
" This test checks that the user-verify plugin works fine and that\n"
" description field is split properly"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:192
msgid ""
" 1. Read this description\n"
" 2. Ensure that the command has been started automatically\n"
" 3. Do not press the test button\n"
" 4. Look at the output and determine the outcome of the test"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:206
msgid "A simple user interaction and verification job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:207
msgid ""
"PURPOSE:\n"
" This test checks that the user-interact-verify plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Look at the output and determine the outcome of the test\n"
"VERIFICATION:\n"
" The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:224
msgid "A simple user interaction and verification job using finer"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:226
msgid " This test checks that the user-interact-verify plugin works fine"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:228
msgid ""
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Look at the output and determine the outcome of the test"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:233
msgid " The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:242
msgid "A suggested-passing user-verification-interaction job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:243
msgid ""
"PURPOSE:\n"
" This test checks that the application user interface auto-suggests "
"'pass'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes successfully.\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Confirm the auto-suggested value\n"
"VERIFICATION:\n"
" The auto suggested value should have been 'pass'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:262
msgid "A suggested-passing user-verification-interaction job using finer"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:264
msgid ""
" This test checks that the application user interface auto-suggests "
"'pass'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes successfully."
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:273
msgid " The auto suggested value should have been 'pass'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:282
msgid "A suggested-failing user-verification-interaction job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:283
msgid ""
"PURPOSE:\n"
" This test checks that the application user interface auto-suggests "
"'fail'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes unsuccessfully.\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Confirm the auto-suggested value\n"
"VERIFICATION:\n"
" The auto suggested value should have been 'fail'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:302
msgid "A suggested-failing user-verification-interaction job using finer"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:302
msgid " description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:304
msgid ""
" This test checks that the application user interface auto-suggests "
"'fail'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes unsuccessfully."
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:308
msgid ""
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Confirm the auto-suggested value"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:313
msgid " The auto suggested value should have been 'fail'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:322
msgid "A job generating one more job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:323
msgid " This job generates the stub/local/true job"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:334
msgid "A job generating more generator jobs"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:335
msgid ""
" This job generates stub/multilevel which in turn can\n"
" generate stub/multilevel_1 and stub/multilevel_2"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:346
msgid "A job that runs as root"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:347
msgid "Check that becoming root works"
msgstr ""
#. summary
#: ../units/jobs/win.pxu:2
msgid "A windows specific job"
msgstr ""
#. description
#: ../units/jobs/win.pxu:3
msgid "Check success result from win32 shell"
msgstr ""
#. name
#: ../units/testplans/all.pxu:2
msgid "Category Override Test"
msgstr ""
#. description
#: ../units/testplans/all.pxu:3
msgid ""
"This test plan can be used to verify that category overrides are working "
"correctly. It is assigning the \"overridden\" category to all the stubbox "
"jobs starting with stub/."
msgstr ""
#. description
#: ../units/testplans/all.pxu:3
msgid ""
"This test plan selects the stub/multilevel job to ensure that generated jobs "
"are classified correctly. It also selects the stub/true job to check regular "
"jobs (known in advance)."
msgstr ""
#. This is stubbox_def.description,
#. we need it here to extract it as a part of stubbox
#: .././manage.py:45
msgid "StubBox (dummy data for development)"
msgstr ""
#: .././manage.py:55 .././manage.py:66
msgid "The StubBox provider is special"
msgstr ""
#: .././manage.py:56
msgid "You don't need to develop it explicitly"
msgstr ""
#: .././manage.py:67
msgid "You don't need to install it explicitly"
msgstr ""
plainbox-0.25/plainbox/impl/providers/stubbox/po/POTFILES.in 0000664 0001750 0001750 00000000502 12627266441 024536 0 ustar pierre pierre 0000000 0000000
[encoding: UTF-8]
[type: gettext/rfc822deb] units/jobs/categories.pxu
[type: gettext/rfc822deb] units/jobs/local.pxu
[type: gettext/rfc822deb] units/jobs/multilevel.pxu
[type: gettext/rfc822deb] units/jobs/stub.pxu
[type: gettext/rfc822deb] units/jobs/win.pxu
[type: gettext/rfc822deb] units/testplans/all.pxu
./manage.py
plainbox-0.25/plainbox/impl/providers/stubbox/po/pt.po 0000664 0001750 0001750 00000030123 12627266441 023746 0 ustar pierre pierre 0000000 0000000
# Portuguese translation for plainbox
# Copyright (c) 2014 Rosetta Contributors and Canonical Ltd 2014
# This file is distributed under the same license as the plainbox package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2014.
#
msgid ""
msgstr ""
"Project-Id-Version: plainbox\n"
"Report-Msgid-Bugs-To: FULL NAME <EMAIL@ADDRESS>\n"
"POT-Creation-Date: 2014-12-03 14:33+0100\n"
"PO-Revision-Date: 2014-03-27 22:02+0000\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: Portuguese <pt@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2015-11-28 04:35+0000\n"
"X-Generator: Launchpad (build 17850)\n"
#. name
#: ../units/jobs/categories.pxu:3
msgid "Representative Jobs (per \"plugin\" value)"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:7
msgid "Representative Jobs with description fields split"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:11
msgid "Dependency Chaining"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:15
msgid "Long Jobs"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:19
msgid "Miscellaneous Tests"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:23
msgid "Generated Tests"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:27
msgid "Elevated Privilege Tests"
msgstr ""
#. name
#: ../units/jobs/categories.pxu:31
msgid "Overridden Category"
msgstr ""
#. summary
#: ../units/jobs/local.pxu:2
msgid "A job generated by another job"
msgstr ""
#. description
#: ../units/jobs/local.pxu:4
msgid ""
"Check success result from shell test case (generated from a local job)"
msgstr ""
#. summary
#: ../units/jobs/multilevel.pxu:2
msgid "A generated generator job that generates two more jobs"
msgstr ""
#. description
#: ../units/jobs/multilevel.pxu:3
msgid "Multilevel tests"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:2
msgid "Passing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:3
msgid "Check success result from shell test case"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:12
msgid "Failing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:13
msgid "Check failed result from shell test case"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:22
msgid "Passing shell job depending on a passing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:23
msgid "Check job is executed when dependency succeeds"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:37
msgid "Passing shell job depending on a failing shell job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:38
msgid "Check job result is set to uninitiated when dependency fails"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:44
msgid "Job sleeping for sixty seconds"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:45
msgid "Sleep for sixty seconds"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:53
msgid "Job killing the parent, if KILLER=yes"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:54
msgid "Kill $PPID if $KILLER is set to yes"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:65
msgid "Job determining a fake list of packages (1)"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:78
msgid "Job determining a fake list of packages (2)"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:79
msgid ""
" This job generates a resource object with what looks\n"
" like a list of packages.\n"
" .\n"
" The actual packages are fake"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:90
msgid "Passing shell job depending on an available resource"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:91
msgid "Check job is executed when requirements are met"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:101
msgid "Passing shell job depending on an unavailable resource"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:102
msgid ""
"Check job result is set to \"not required on this system\" when requirements "
"are not met"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:112
msgid "A simple manual job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:113
msgid ""
"PURPOSE:\n"
" This test checks that the manual plugin works fine\n"
"STEPS:\n"
" 1. Add a comment\n"
" 2. Set the result as passed\n"
"VERIFICATION:\n"
" Check that in the report the result is passed and the comment is "
"displayed"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:126
msgid "A simple manual job using finer description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:127
msgid " This test checks that the manual plugin works fine"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:129
msgid ""
" 1. Add a comment\n"
" 2. Set the result as passed"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:132
msgid ""
" Check that in the report the result is passed and the comment is displayed"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:139
msgid "A simple user interaction job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:140
msgid ""
"PURPOSE:\n"
" This test checks that the user-interact plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Press the test button\n"
"VERIFICATION:\n"
" Check that in the report the result is passed"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:155
msgid "User-interact job using finer description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:156
msgid " This is a purpose part of test description"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:158
msgid ""
" 1. First step in the user-interact job\n"
" 2. Second step in the user-interact job"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:161
msgid " Verification part of test description"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:170
msgid "A simple user verification job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:171
msgid ""
"PURPOSE:\n"
" This test checks that the user-verify plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has been started automatically\n"
" 3. Do not press the test button\n"
" 4. Look at the output and determine the outcome of the test\n"
"VERIFICATION:\n"
" The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:188
msgid "User-verify job using finer description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:189
msgid ""
" This test checks that the user-verify plugin works fine and that\n"
" description field is split properly"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:192
msgid ""
" 1. Read this description\n"
" 2. Ensure that the command has been started automatically\n"
" 3. Do not press the test button\n"
" 4. Look at the output and determine the outcome of the test"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:206
msgid "A simple user interaction and verification job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:207
msgid ""
"PURPOSE:\n"
" This test checks that the user-interact-verify plugin works fine\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Look at the output and determine the outcome of the test\n"
"VERIFICATION:\n"
" The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:224
msgid "A simple user interaction and verification job using finer"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:226
msgid " This test checks that the user-interact-verify plugin works fine"
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:228
msgid ""
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Look at the output and determine the outcome of the test"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:233
msgid " The command should have printed \"Please select 'pass'\""
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:242
msgid "A suggested-passing user-verification-interaction job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:243
msgid ""
"PURPOSE:\n"
" This test checks that the application user interface auto-suggests "
"'pass'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes successfully.\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Confirm the auto-suggested value\n"
"VERIFICATION:\n"
" The auto suggested value should have been 'pass'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:262
msgid "A suggested-passing user-verification-interaction job using finer"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:264
msgid ""
" This test checks that the application user interface auto-suggests "
"'pass'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes successfully."
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:273
msgid " The auto suggested value should have been 'pass'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:282
msgid "A suggested-failing user-verification-interaction job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:283
msgid ""
"PURPOSE:\n"
" This test checks that the application user interface auto-suggests "
"'fail'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes unsuccessfully.\n"
"STEPS:\n"
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Confirm the auto-suggested value\n"
"VERIFICATION:\n"
" The auto suggested value should have been 'fail'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:302
msgid "A suggested-failing user-verification-interaction job using finer"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:302
msgid " description fields"
msgstr ""
#. purpose
#: ../units/jobs/stub.pxu:304
msgid ""
" This test checks that the application user interface auto-suggests "
"'fail'\n"
" as the outcome of a test for user-interact-verify jobs that have a "
"command\n"
" which completes unsuccessfully."
msgstr ""
#. steps
#: ../units/jobs/stub.pxu:308
msgid ""
" 1. Read this description\n"
" 2. Ensure that the command has not been started yet\n"
" 3. Press the test button\n"
" 4. Confirm the auto-suggested value"
msgstr ""
#. verification
#: ../units/jobs/stub.pxu:313
msgid " The auto suggested value should have been 'fail'"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:322
msgid "A job generating one more job"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:323
msgid " This job generates the stub/local/true job"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:334
msgid "A job generating more generator jobs"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:335
msgid ""
" This job generates stub/multilevel which in turn can\n"
" generate stub/multilevel_1 and stub/multilevel_2"
msgstr ""
#. summary
#: ../units/jobs/stub.pxu:346
msgid "A job that runs as root"
msgstr ""
#. description
#: ../units/jobs/stub.pxu:347
msgid "Check that becoming root works"
msgstr ""
#. summary
#: ../units/jobs/win.pxu:2
msgid "A windows specific job"
msgstr ""
#. description
#: ../units/jobs/win.pxu:3
msgid "Check success result from win32 shell"
msgstr ""
#. name
#: ../units/testplans/all.pxu:2
msgid "Category Override Test"
msgstr ""
#. description
#: ../units/testplans/all.pxu:3
msgid ""
"This test plan can be used to verify that category overrides are working "
"correctly. It is assigning the \"overridden\" category to all the stubbox "
"jobs starting with stub/."
msgstr ""
#. description
#: ../units/testplans/all.pxu:3
msgid ""
"This test plan selects the stub/multilevel job to ensure that generated jobs "
"are classified correctly. It also selects the stub/true job to check regular "
"jobs (known in advance)."
msgstr ""
#. This is stubbox_def.description,
#. we need it here to extract it as a part of stubbox
#: .././manage.py:45
msgid "StubBox (dummy data for development)"
msgstr ""
#: .././manage.py:55 .././manage.py:66
msgid "The StubBox provider is special"
msgstr ""
#: .././manage.py:56
msgid "You don't need to develop it explicitly"
msgstr ""
#: .././manage.py:67
msgid "You don't need to install it explicitly"
msgstr ""
# PlainBox translations
# Copyright (C) 2014 Canonical
# This file is distributed under the same license as the plainbox package.
# Zygmunt