././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1740236285.9275966 pyout-0.8.1/0000755000175100001660000000000014756362776012332 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/COPYING0000644000175100001660000000204414756362760013356 0ustar00runnerdockerCopyright (c) 2017-2020 Kyle Meyer Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1740236285.9275966 pyout-0.8.1/PKG-INFO0000644000175100001660000000560714756362776013437 0ustar00runnerdockerMetadata-Version: 2.2 Name: pyout Version: 0.8.1 Summary: Terminal styling for tabular data Home-page: https://github.com/pyout/pyout.git Author: Kyle Meyer Author-email: kyle@kyleam.com License: MIT Keywords: terminal,tty,console,formatting,style,color Classifier: Intended Audience :: Developers Classifier: Natural Language :: English Classifier: Environment :: Console Classifier: Development Status :: 1 - Planning Classifier: Topic :: Utilities Classifier: Operating System :: POSIX Classifier: Programming Language :: Python :: 3 Classifier: License :: OSI Approved :: MIT License Classifier: Topic :: Software Development :: Libraries Classifier: Topic :: Software Development :: User Interfaces Classifier: Topic :: Terminals Requires-Python: >=3.7 License-File: COPYING Requires-Dist: blessed; sys_platform != "win32" Requires-Dist: jsonschema>=3.0.0 Dynamic: author Dynamic: author-email Dynamic: classifier Dynamic: description Dynamic: home-page Dynamic: keywords Dynamic: license Dynamic: requires-dist Dynamic: requires-python Dynamic: summary =========================================== pyout: Terminal styling for structured data =========================================== .. image:: https://travis-ci.org/pyout/pyout.svg?branch=master :target: https://travis-ci.org/pyout/pyout .. image:: https://codecov.io/github/pyout/pyout/coverage.svg?branch=master :target: https://codecov.io/github/pyout/pyout?branch=master .. image:: https://img.shields.io/badge/License-MIT-yellow.svg :target: https://opensource.org/licenses/MIT ``pyout`` is a Python package that defines an interface for writing structured records as a table in a terminal. It is being developed to replace custom code for displaying tabular data in in ReproMan_ and DataLad_. See the Examples_ folder for how to get started. A primary goal of the interface is the separation of content from style and presentation. 
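A rough sketch of the intended usage (the Examples_ folder remains the
better starting point; the column names and lookup values here are
hypothetical)::

    from pyout import Tabular

    # Columns are declared up front; styling stays separate from content.
    out = Tabular(["name", "status"],
                  style={"status": {"color": {"lookup": {"ok": "green",
                                                         "failed": "red"}}}})
    with out:
        # Rows may be mappings, sequences, or attribute-based objects.
        out({"name": "foo", "status": "ok"})
        out({"name": "bar", "status": "failed"})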
Current capabilities include - automatic width adjustment and updating of previous values - styling based on a field value or specified interval - defining a transform function that maps a raw value to the displayed value - defining a summary function that generates a summary of a column (e.g., value totals) - support for delayed, asynchronous values that are added to the table as they come in Status ====== This package is currently in early stages of development. While it should be usable in its current form, it may change in substantial ways that break backward compatibility, and many aspects currently lack polish and documentation. ``pyout`` requires Python 3 (>= 3.7). It is developed and tested in GNU/Linux environments and is expected to work in macOS environments as well. There is currently very limited Windows support. License ======= ``pyout`` is under the MIT License. See the COPYING file. .. _DataLad: https://datalad.org .. _ReproMan: http://reproman.repronim.org .. _Examples: examples/ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/README.rst0000644000175100001660000000354314756362760014017 0ustar00runnerdocker=========================================== pyout: Terminal styling for structured data =========================================== .. image:: https://travis-ci.org/pyout/pyout.svg?branch=master :target: https://travis-ci.org/pyout/pyout .. image:: https://codecov.io/github/pyout/pyout/coverage.svg?branch=master :target: https://codecov.io/github/pyout/pyout?branch=master .. image:: https://img.shields.io/badge/License-MIT-yellow.svg :target: https://opensource.org/licenses/MIT ``pyout`` is a Python package that defines an interface for writing structured records as a table in a terminal. It is being developed to replace custom code for displaying tabular data in in ReproMan_ and DataLad_. See the Examples_ folder for how to get started. A primary goal of the interface is the separation of content from style and presentation. Current capabilities include - automatic width adjustment and updating of previous values - styling based on a field value or specified interval - defining a transform function that maps a raw value to the displayed value - defining a summary function that generates a summary of a column (e.g., value totals) - support for delayed, asynchronous values that are added to the table as they come in Status ====== This package is currently in early stages of development. While it should be usable in its current form, it may change in substantial ways that break backward compatibility, and many aspects currently lack polish and documentation. ``pyout`` requires Python 3 (>= 3.7). It is developed and tested in GNU/Linux environments and is expected to work in macOS environments as well. There is currently very limited Windows support. License ======= ``pyout`` is under the MIT License. See the COPYING file. .. _DataLad: https://datalad.org .. _ReproMan: http://reproman.repronim.org .. _Examples: examples/ ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1740236285.9245965 pyout-0.8.1/pyout/0000755000175100001660000000000014756362776013512 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/__init__.py0000644000175100001660000000052214756362760015613 0ustar00runnerdocker"""Terminal styling for tabular data. Exposes a single entry point, the Tabular class. 
""" import sys from pyout.elements import schema if sys.platform == "win32": from pyout.tabular_dummy import Tabular else: from pyout.tabular import Tabular del sys from . import _version __version__ = _version.get_versions()['version'] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1740236285.9275966 pyout-0.8.1/pyout/_version.py0000644000175100001660000000076114756362776015714 0ustar00runnerdocker # This file was generated by 'versioneer.py' (0.29) from # revision-control system data, or from the parent directory name of an # unpacked source archive. Distribution tarballs contain a pre-generated copy # of this file. import json version_json = ''' { "date": "2025-02-22T14:58:02+0000", "dirty": false, "error": null, "full-revisionid": "d030220740961e2bce592557c56a1d3ba1aef4c3", "version": "0.8.1" } ''' # END VERSION_JSON def get_versions(): return json.loads(version_json) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/common.py0000644000175100001660000010307714756362760015355 0ustar00runnerdocker"""Common components for styled output. This modules contains things that would be shared across outputters if there were any besides Tabular. The Tabular class, though, still contains a good amount of general logic that should be extracted if any other outputter is actually added. """ from collections import defaultdict from collections import namedtuple from collections import OrderedDict from collections.abc import Mapping from collections.abc import Sequence from functools import partial import inspect from logging import getLogger from pyout import elements from pyout.field import Field from pyout.field import Nothing from pyout.truncate import Truncater from pyout.summary import Summary lgr = getLogger(__name__) NOTHING = Nothing() class UnknownColumns(Exception): """The row has unknown columns. Parameters ---------- unknown_columns : list """ def __init__(self, unknown_columns): self.unknown_columns = unknown_columns super(UnknownColumns, self).__init__( "Unknown columns: {}".format(unknown_columns)) class RowNormalizer(object): """Transform various input data forms to a common form. An un-normalized row can be one of three kinds: * a mapping from column names to keys * a sequence of values in the same order as `columns` * any other value will be taken as an object where the column values can be accessed via an attribute with the same name To normalize a row, it is * converted to a dict that maps from column names to values * all callables are stripped out and replaced with their initial values * if the value for a column is missing, it is replaced with a Nothing instance whose value is specified by the column's style (an empty string by default) Parameters ---------- columns : sequence of str Column names. style : dict, optional Column styles. Attributes ---------- method : callable A function that takes a row and returns a normalized one. This is chosen at time of the first call. All subsequent calls should use the same kind of row. nothings : dict Maps column name to the placeholder value to use if that column is missing. 
""" def __init__(self, columns, style): self._columns = columns self.method = None self.delayed = defaultdict(list) self.delayed_columns = set() self.nothings = {} # column => missing value self._known_columns = set() for column in columns: cstyle = style[column] if "delayed" in cstyle: lgr.debug("Registered delay for column %r", column) value = cstyle["delayed"] group = column if value is True else value self.delayed[group].append(column) self.delayed_columns.add(column) if "missing" in cstyle: self.nothings[column] = Nothing(cstyle["missing"]) else: self.nothings[column] = NOTHING def __call__(self, row): """Normalize `row` Parameters ---------- row : mapping, sequence, or other Data to normalize. Returns ------- A tuple (callables, row), where `callables` is a list (as returned by `strip_callables`) and `row` is the normalized row. """ if self.method is None: self.method = self._choose_normalizer(row) return self.method(row) def _choose_normalizer(self, row): if isinstance(row, Mapping): getter = self.getter_dict elif isinstance(row, Sequence): getter = self.getter_seq else: getter = self.getter_attrs lgr.debug("Selecting %s as normalizer", getter.__name__) return partial(self._normalize, getter) def _normalize(self, getter, row): columns = self._columns if isinstance(row, Mapping): callables0 = self.strip_callables(row) # The row may have new columns. All we're doing here is keeping # them around in the normalized row so that downstream code can # react to them. known = self._known_columns new_cols = [c for c in row.keys() if c not in known] if new_cols: if isinstance(self._columns, OrderedDict): columns = list(self._columns) columns = columns + new_cols else: callables0 = [] norm_row = self._maybe_delay(getter, row, columns) # We need a second pass with strip_callables because norm_row will # contain new callables for any delayed values. callables1 = self.strip_callables(norm_row) return callables0 + callables1, norm_row def _maybe_delay(self, getter, row, columns): row_norm = {} for column in columns: if column not in self.delayed_columns: row_norm[column] = getter(row, column) def delay(cols): return lambda: {c: getter(row, c) for c in cols} for cols in self.delayed.values(): key = cols[0] if len(cols) == 1 else tuple(cols) lgr.debug("Delaying %r for row %r", cols, row) row_norm[key] = delay(cols) return row_norm def strip_callables(self, row): """Extract callable values from `row`. Replace the callable values with the initial value (if specified) or an empty string. Parameters ---------- row : mapping A data row. The keys are either a single column name or a tuple of column names. The values take one of three forms: 1) a non-callable value, 2) a tuple (initial_value, callable), 3) or a single callable (in which case the initial value is set to an empty string). Returns ------- list of (column, callable) """ callables = [] to_delete = [] to_add = [] for columns, value in row.items(): if isinstance(value, tuple): initial, fn = value else: initial = NOTHING # Value could be a normal (non-callable) value or a # callable with no initial value. 
fn = value if callable(fn) or inspect.isgenerator(fn): lgr.debug("Using %r as the initial value " "for columns %r in row %r", initial, columns, row) if not isinstance(columns, tuple): columns = columns, else: to_delete.append(columns) for column in columns: to_add.append((column, initial)) callables.append((columns, fn)) for column, value in to_add: row[column] = value for multi_columns in to_delete: del row[multi_columns] return callables # Input-specific getters. These exist as their own methods so that they # can be wrapped in a callable and delayed. def getter_dict(self, row, column): # Note: We .get() from `nothings` because `row` is permitted to have an # unknown column. return row.get(column, self.nothings.get(column, NOTHING)) def getter_seq(self, row, column): col_to_idx = {c: idx for idx, c in enumerate(self._columns)} return row[col_to_idx[column]] def getter_attrs(self, row, column): return getattr(row, column, self.nothings[column]) class StyleFields(object): """Generate Fields based on the specified style and processors. Parameters ---------- style : dict A style that follows the schema defined in pyout.elements. procgen : StyleProcessors instance This instance is used to generate the fields from `style`. """ def __init__(self, style, procgen): self.init_style = style self.procgen = procgen self.style = None self.columns = None self.autowidth_columns = {} self._known_columns = set() self.width_fixed = None self.width_separtor = None self.fields = None self._truncaters = {} self.hidden = {} # column => {True, "if-empty", False} self._visible_columns = None # cached list of visible columns self._table_width = None def build(self, columns, table_width=None): """Build the style and fields. Parameters ---------- columns : list of str Column names. table_width : int, optional Table width to use instead of the previously specified width. """ self.columns = columns self._known_columns = set(columns) default = dict(elements.default("default_"), **self.init_style.get("default_", {})) self.style = elements.adopt({c: default for c in columns}, self.init_style) # Store special keys in _style so that they can be validated. self.style["default_"] = default self.style["header_"] = self._compose("header_", {"align", "width"}) self.style["aggregate_"] = self._compose("aggregate_", {"align", "width"}) self.style["separator_"] = self.init_style.get( "separator_", elements.default("separator_")) lgr.debug("Validating style %r", self.style) if table_width is not None: self._table_width = table_width elif self._table_width is None: self._table_width = self.init_style.get( "width_", elements.default("width_")) self.style["width_"] = self._table_width elements.validate(self.style) self._setup_fields() self.hidden = {c: self.style[c]["hide"] for c in columns} self._reset_width_info() def _compose(self, name, attributes): """Construct a style taking `attributes` from the column styles. Parameters ---------- name : str Name of main style (e.g., "header_"). attributes : set of str Adopt these elements from the column styles. Returns ------- The composite style for `name`. 
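Examples
--------
A rough illustration with hypothetical values: if
``init_style["header_"]`` were ``{"bold": True}`` and the adopted style
of a column "name" included ``{"align": "right", "width": 10}``::

    self._compose("header_", {"align", "width"})
    # => {"name": {"align": "right", "width": 10, "bold": True}}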
""" name_style = self.init_style.get(name, elements.default(name)) if self.init_style is not None and name_style is not None: result = {} for col in self.columns: cstyle = {k: v for k, v in self.style[col].items() if k in attributes} result[col] = dict(cstyle, **name_style) return result def _setup_fields(self): fields = {} style = self.style width_table = style["width_"] def frac_to_int(x): if x and 0 < x < 1: result = int(x * width_table) lgr.debug("Converted fraction %f to %d", x, result) else: result = x return result for column in self.columns: lgr.debug("Setting up field for column %r", column) cstyle = style[column] style_width = cstyle["width"] # Convert atomic values into the equivalent complex form. if style_width == "auto": style_width = {} elif not isinstance(style_width, Mapping): style_width = {"width": style_width} is_auto = "width" not in style_width if is_auto: lgr.debug("Automatically adjusting width for %s", column) width = frac_to_int(style_width.get("min", 0)) wmax = frac_to_int(style_width.get("max")) autoval = {"max": wmax, "min": width, "weight": style_width.get("weight", 1)} self.autowidth_columns[column] = autoval lgr.debug("Stored auto-width value for column %r: %s", column, autoval) else: if "min" in style_width or "max" in style_width: raise ValueError( "'min' and 'max' are incompatible with 'width'") width = frac_to_int(style_width["width"]) lgr.debug("Setting width of column %r to %d", column, width) # We are creating a distinction between "width" processors, that we # always want to be active and "default" processors that we want to # be active unless there's an overriding style (i.e., a header is # being written or the `style` argument to __call__ is specified). field = Field(width=width, align=cstyle["align"], default_keys=["width", "default"], other_keys=["override"]) field.add("pre", "default", *(self.procgen.pre_from_style(cstyle))) truncater = Truncater( width, style_width.get("marker", True), style_width.get("truncate", "right")) field.add("post", "width", truncater.truncate) field.add("post", "default", *(self.procgen.post_from_style(cstyle))) fields[column] = field self._truncaters[column] = truncater self.fields = fields @property def has_header(self): """Whether the style specifies a header. """ return self.style["header_"] is not None @property def visible_columns(self): """List of columns that are not marked as hidden. This value is cached and becomes invalid if column visibility has changed since the last `render` call. """ if self._visible_columns is None: hidden = self.hidden self._visible_columns = [c for c in self.columns if not hidden[c]] return self._visible_columns def _check_widths(self): visible = self.visible_columns autowidth_columns = self.autowidth_columns width_table = self.style["width_"] if width_table is None: # The table is unbounded (non-interactive). return if len(visible) > width_table: raise elements.StyleError( "Number of visible columns exceeds available table width") width_fixed = self.width_fixed width_auto = width_table - width_fixed if width_auto < len(set(autowidth_columns).intersection(visible)): raise elements.StyleError( "The number of visible auto-width columns ({}) " "exceeds the available width ({})" .format(len(autowidth_columns), width_auto)) def _set_fixed_widths(self): """Set fixed-width attributes. Previously calculated values are invalid if the number of visible columns changes. Call _reset_width_info() in that case. 
""" visible = self.visible_columns ngaps = len(visible) - 1 width_separtor = len(self.style["separator_"]) * ngaps lgr.debug("Calculated separator width as %d", width_separtor) autowidth_columns = self.autowidth_columns fields = self.fields width_fixed = sum([sum(fields[c].width for c in visible if c not in autowidth_columns), width_separtor]) lgr.debug("Calculated fixed width as %d", width_fixed) self.width_separtor = width_separtor self.width_fixed = width_fixed def _reset_width_info(self): """Reset visibility-dependent information. """ self._visible_columns = None self._set_fixed_widths() self._check_widths() def _set_widths(self, row, proc_group): """Update auto-width Fields based on `row`. Parameters ---------- row : dict proc_group : {'default', 'override'} Whether to consider 'default' or 'override' key for pre- and post-format processors. Returns ------- True if any widths required adjustment. """ autowidth_columns = self.autowidth_columns fields = self.fields width_table = self.style["width_"] width_fixed = self.width_fixed if width_table is None: width_auto = float("inf") else: width_auto = width_table - width_fixed if not autowidth_columns: return False # Check what width each row wants. lgr.debug("Checking width for row %r", row) hidden = self.hidden for column in autowidth_columns: if hidden[column]: lgr.debug("%r is hidden; setting width to 0", column) autowidth_columns[column]["wants"] = 0 continue field = fields[column] lgr.debug("Checking width of column %r (current field width: %d)", column, field.width) # If we've added any style transform functions as pre-format # processors, we want to measure the width of their result rather # than the raw value. if field.pre[proc_group]: value = field(row[column], keys=[proc_group], exclude_post=True) else: value = row[column] value = str(value) value_width = len(value) wmax = autowidth_columns[column]["max"] wmin = autowidth_columns[column]["min"] max_seen = max(value_width, field.width) requested_floor = max(max_seen, wmin) wants = min(requested_floor, wmax or requested_floor) lgr.debug("value=%r, value width=%d, old field length=%d, " "min width=%s, max width=%s => wants=%d", value, value_width, field.width, wmin, wmax, wants) autowidth_columns[column]["wants"] = wants # Considering those wants and the available width, assign widths to # each column. assigned = self._assign_widths(autowidth_columns, width_auto) # Set the assigned widths. adjusted = False for column, width_assigned in assigned.items(): field = fields[column] width_current = field.width if width_assigned != width_current: adjusted = True field.width = width_assigned lgr.debug("Adjusting width of %r column from %d to %d ", column, width_current, field.width) self._truncaters[column].length = field.width return adjusted @staticmethod def _assign_widths(columns, available): """Assign widths to auto-width columns. Parameters ---------- columns : dict A dictionary where each key is an auto-width column. The value should be a dictionary with the following information: - wants: how much width the column wants - min: the minimum that the width should set to, provided there is enough room - weight: if present, a "weight" key indicates the number of available characters the column should claim at a time. This is only in effect after each column has claimed one, and the specific column has claimed its minimum. available : int or float('inf') Width available to be assigned. Returns ------- Dictionary mapping each auto-width column to the assigned width. 
""" # NOTE: The method below is not very clever and does unnecessary # iteration. It may end up being too slow, but at least it should # serve to establish the baseline (along with tests) that show the # desired behavior. assigned = {} # Make sure every column gets at least one. for column in columns: col_wants = columns[column]["wants"] if col_wants > 0: available -= 1 assigned[column] = 1 assert available >= 0, "bug: upstream checks should make impossible" weights = {c: columns[c].get("weight", 1) for c in columns} # ATTN: The sorting here needs to be stable across calls with the same # row so that the same assignments come out. colnames = sorted(assigned.keys(), reverse=True, key=lambda c: (columns[c]["min"], weights[c], c)) columns_in_need = set(assigned.keys()) while available > 0 and columns_in_need: for column in colnames: if column not in columns_in_need: continue col_wants = columns[column]["wants"] - assigned[column] if col_wants < 1: columns_in_need.remove(column) continue wmin = columns[column]["min"] has = assigned[column] claim = min(weights[column] if has >= wmin else wmin - has, col_wants, available) available -= claim assigned[column] += claim lgr.log(9, "Claiming %d characters (of %d available) for %s", claim, available, column) if available == 0: break lgr.debug("Available width after assigned: %s", available) lgr.debug("Assigned widths: %r", assigned) return assigned def _proc_group(self, style, adopt=True): """Return whether group is "default" or "override". In the case of "override", the self.fields pre-format and post-format processors will be set under the "override" key. Parameters ---------- style : dict A style that follows the schema defined in pyout.elements. adopt : bool, optional Merge `self.style` and `style`, giving priority to the latter's keys when there are conflicts. If False, treat `style` as a standalone style. """ fields = self.fields if style is not None: if adopt: style = elements.adopt(self.style, style) elements.validate(style) for column in self.columns: fields[column].add( "pre", "override", *(self.procgen.pre_from_style(style[column]))) fields[column].add( "post", "override", *(self.procgen.post_from_style(style[column]))) return "override" else: return "default" def _check_for_unknown_columns(self, row): known = self._known_columns # The sorted() call here isn't necessary, but it makes testing the # expected output easier without relying on the order-preserving # implementation detail of the new dict implementation introduced in # Python 3.6. cols_new = sorted(c for c in row if c not in known) if cols_new: raise UnknownColumns(cols_new) def render(self, row, style=None, adopt=True, can_unhide=True): """Render fields with values from `row`. Parameters ---------- row : dict A normalized row. style : dict, optional A style that follows the schema defined in pyout.elements. If None, `self.style` is used. adopt : bool, optional Merge `self.style` and `style`, using the latter's keys when there are conflicts. If False, treat `style` as a standalone style. can_unhide : bool, optional Whether a non-missing value within `row` is able to unhide a column that is marked with "if_missing". Returns ------- A tuple with the rendered value (str) and a flag that indicates whether the field widths required adjustment (bool). 
""" self._check_for_unknown_columns(row) hidden = self.hidden any_unhidden = False if can_unhide: for c in row: val = row[c] if hidden[c] == "if_missing" and not isinstance(val, Nothing): lgr.debug("Unhiding column %r after encountering %r", c, val) hidden[c] = False any_unhidden = True if any_unhidden: self._reset_width_info() group = self._proc_group(style, adopt=adopt) if group == "override": # Override the "default" processor key. proc_keys = ["width", "override"] else: # Use the set of processors defined by _setup_fields. proc_keys = None adjusted = self._set_widths(row, group) cols = self.visible_columns proc_fields = ((self.fields[c], row[c]) for c in cols) # Exclude fields that weren't able to claim any width to avoid # surrounding empty values with separators. proc_fields = filter(lambda x: x[0].width > 0, proc_fields) proc_fields = (fld(val, keys=proc_keys) for fld, val in proc_fields) return self.style["separator_"].join(proc_fields) + "\n", adjusted class RedoContent(Exception): """The rendered content is stale and should be re-rendered. """ pass class ContentError(Exception): """An error occurred when generating the content representation. """ pass ContentRow = namedtuple("ContentRow", ["row", "kwds"]) class Content(object): """Concatenation of rendered fields. Parameters ---------- fields : StyleField instance """ def __init__(self, fields): self.fields = fields self.summary = None self.columns = None self.ids = None self._header = None self._rows = [] self._idkey_to_idx = {} self._idx_to_idkey = {} def init_columns(self, columns, ids, table_width=None): """Set up the fields for `columns`. Parameters ---------- columns : sequence or OrderedDict Names of the column. In the case of an OrderedDict, a map between short and long names. ids : sequence A collection of column names that uniquely identify a column. table_width : int, optional Update the table width to this value. """ self.fields.build(columns, table_width=table_width) self.columns = columns self.ids = ids if self._rows: # There are pre-existing rows, so this init_columns() call was due # to encountering unknown columns. Fill in the previous rows. style = self.fields.style for row in self._rows: for col in columns: if col not in row.row: cstyle = style[col] if "missing" in cstyle: missing = Nothing(cstyle["missing"]) else: missing = NOTHING row.row[col] = missing if self.fields.has_header: self._add_header() def __len__(self): return len(list(self.rows)) def __bool__(self): return bool(self._rows) def __getitem__(self, key): idx = self._idkey_to_idx[key] return self._rows[idx].row @property def rows(self): """Data and summary rows. """ if self._header: yield self._header for i in self._rows: yield i def _render(self, rows): adjusted = [] for row, kwds in rows: line, adj = self.fields.render(row, **kwds) yield line # Continue processing so that we get all the adjustments out of # the way. adjusted.append(adj) if any(adjusted): raise RedoContent def __str__(self): for redo in range(3): try: return "".join(self._render(self.rows)) except RedoContent: # FIXME: Only one additional _render() call is supposed to # be necessary, but as f34696a7 (Detect changes in the # terminal width, 2020-08-17), it's not sufficient in some # cases (see gh-114). Until that is figured out, allow one # more (i.e., three in total), which appears to get rid of # the issue. if redo: if redo == 1: lgr.debug("One redo was not enough. Trying again") else: raise def get_idkey(self, idx): """Return ID keys for a row. 
Parameters ---------- idx : int Index of row (determined by order it came in to `update`). Returns ------- ID key (tuple) matching row. If there is a header, None is return as its ID key. Raises ------ IndexError if `idx` does not match known row. """ if self._header: idx -= 1 if idx == -1: return None try: return self._idx_to_idkey[idx] except KeyError: msg = ("Index {!r} outside of current range: [0, {})" .format(idx, len(self._idkey_to_idx))) raise IndexError(msg) from None def update(self, row, style): """Modify the content. Parameters ---------- row : dict A normalized row. If the names specified by `self.ids` have already been seen in a previous call, the entry for the previous row is updated. Otherwise, a new entry is appended. style : Passed to `StyleFields.render`. Returns ------- A tuple of (content, status), where status is 'append', an integer, or 'repaint'. * append: the only change in the content is the addition of a line, and the returned content will consist of just this line. * an integer, N: the Nth line of the output needs to be updated, and the returned content will consist of just this line. * repaint: all lines need to be updated, and the returned content will consist of all the lines. """ called_before = bool(self) idkey = tuple(row[idx] for idx in self.ids) if not called_before and self.fields.has_header: lgr.debug("Registering header") self._add_header() self._rows.append(ContentRow(row, kwds={"style": style})) self._idkey_to_idx[idkey] = 0 self._idx_to_idkey[0] = idkey return str(self), "append" try: prev_idx = self._idkey_to_idx[idkey] except KeyError: prev_idx = None except TypeError: raise ContentError("ID columns must be hashable") if prev_idx is not None: lgr.debug("Updating content for row %r", idkey) row_update = {k: v for k, v in row.items() if not isinstance(v, Nothing)} self._rows[prev_idx].row.update(row_update) self._rows[prev_idx].kwds.update({"style": style}) # Replace the passed-in row since it may not have all the columns. row = self._rows[prev_idx][0] else: lgr.debug("Adding row %r to content for first time", idkey) nrows = len(self._rows) self._idkey_to_idx[idkey] = nrows self._idx_to_idkey[nrows] = idkey self._rows.append(ContentRow(row, kwds={"style": style})) line, adjusted = self.fields.render(row, style) lgr.log(9, "Rendered line as %r", line) if called_before and adjusted: return str(self), "repaint" if not adjusted and prev_idx is not None: return line, prev_idx + self.fields.has_header return line, "append" def _add_header(self): if isinstance(self.columns, OrderedDict): row = self.columns else: row = dict(zip(self.columns, self.columns)) self._header = ContentRow(row, kwds={"style": self.fields.style["header_"], "can_unhide": False, "adopt": False}) class ContentWithSummary(Content): """Like Content, but append a summary to the return value of `update`. 
""" def __init__(self, fields): super(ContentWithSummary, self).__init__(fields) self.summary = None def init_columns(self, columns, ids, table_width=None): super(ContentWithSummary, self).init_columns( columns, ids, table_width=table_width) self.summary = Summary(self.fields.style) def update(self, row, style): lgr.log(9, "Updating with .summary set to %s", self.summary) content, status = super(ContentWithSummary, self).update(row, style) if self.summary: summ_rows = self.summary.summarize( self.fields.visible_columns, [r.row for r in self._rows]) def join(): return "".join(self._render(summ_rows)) try: summ_content = join() except RedoContent: # If rendering the summary lines triggered an adjustment, we # need to re-render the main content as well. return str(self), "repaint", join() return content, status, summ_content return content, status, None ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/elements.py0000644000175100001660000003524314756362760015700 0ustar00runnerdocker"""Style elements and schema validation. """ from collections.abc import Mapping import functools import jsonschema import hashlib schema = { "$schema": "http://json-schema.org/draft-07/schema#", "definitions": { # Plain style elements "align": { "description": "Alignment of text", "type": "string", "enum": ["left", "right", "center"], "default": "left", "scope": "column"}, "bold": { "description": "Whether text is bold", "oneOf": [{"type": "boolean"}, {"$ref": "#/definitions/lookup"}, {"$ref": "#/definitions/re_lookup"}, {"$ref": "#/definitions/interval"}], "default": False, "scope": "field"}, "color": { "description": "Foreground color of text", "oneOf": [{"type": "string", "enum": ["black", "red", "green", "yellow", "blue", "magenta", "cyan", "white"]}, {"$ref": "#/definitions/lookup"}, {"$ref": "#/definitions/re_lookup"}, {"$ref": "#/definitions/interval"}], "default": "black", "scope": "field"}, "hide": { "description": """Whether to hide column. A value of True unconditionally hides the column. 'if_missing' hides the column until the first non-missing value is encountered.""", "oneOf": [{"type": "boolean"}, {"type": "string", "enum": ["if_missing"]}], "default": False, "scope": "column"}, "underline": { "description": "Whether text is underlined", "oneOf": [{"type": "boolean"}, {"$ref": "#/definitions/lookup"}, {"$ref": "#/definitions/re_lookup"}, {"$ref": "#/definitions/interval"}], "default": False, "scope": "field"}, "width_type": { "description": "Type for numeric values in 'width'", "oneOf": [{"type": "integer", "minimum": 1}, {"type": "number", "exclusiveMinimum": 0, "exclusiveMaximum": 1}]}, "width": { "description": """Width of field. With the default value, 'auto', the column width is automatically adjusted to fit the content and may be truncated to ensure that the entire row fits within the available output width. An integer value forces all fields in a column to have a width of the specified value. In addition, an object can be specified. Its 'min' and 'max' keys specify the minimum and maximum widths allowed, whereas the 'width' key specifies a fixed width. The values can be given as an integer (representing the number of characters) or as a fraction, which indicates the proportion of the total table width (typically the width of your terminal). The 'marker' key specifies the marker used for truncation ('...' by default). Where the field is truncated can be configured with 'truncate': 'right' (default), 'left', or 'center'. 
The object can also include a 'weight' key. Conceptually, assigning widths to each column can be (roughly) viewed as each column claiming _one_ character of available width at a time until a column is at its maximum width or there is no available width left. Setting a column's weight to an integer N makes it claim N characters each iteration.""", "oneOf": [{"$ref": "#/definitions/width_type"}, {"type": "string", "enum": ["auto"]}, {"type": "object", "properties": { "max": {"$ref": "#/definitions/width_type"}, "min": {"$ref": "#/definitions/width_type"}, "width": {"$ref": "#/definitions/width_type"}, "weight": {"type": "integer", "minimum": 1}, "marker": {"type": ["string", "boolean"]}, "truncate": {"type": "string", "enum": ["left", "right", "center"]}}, "additionalProperties": False}], "default": "auto", "scope": "column"}, # Other style elements "aggregate": { "description": """A function that produces a summary value. This function will be called with all of the column's (unprocessed) field values and should return a single value to be displayed.""", "scope": "column"}, "delayed": { "description": """Don't wait for this column's value. The accessor will be wrapped in a function and called asynchronously. This can be set to a string to mark columns as part of a "group". All columns within a group will be accessed within the same callable. True means to access the column's value in its own callable (i.e. independently of other columns).""", "type": ["boolean", "string"], "scope": "field"}, "missing": { "description": "Text to display for missing values", "type": "string", "default": "", "scope": "column" }, "re_flags": { "description": """Flags passed to re.search when using re_lookup. See the documentation of the re module for a description of possible values. 'I' (ignore case) is the most likely value of interest.""", "type": "array", "items": [{"type": "string", "enum": ["A", "I", "L", "M", "S", "U", "X"]}], "scope": "field"}, "transform": { "description": """An arbitrary function. This function will be called with the (unprocessed) field value as the single argument and should return a transformed value. Note: This function should not have side-effects because it may be called multiple times.""", "scope": "field"}, # Complete list of column style elements "styles": { "type": "object", "properties": {"aggregate": {"$ref": "#/definitions/aggregate"}, "align": {"$ref": "#/definitions/align"}, "bold": {"$ref": "#/definitions/bold"}, "color": {"$ref": "#/definitions/color"}, "delayed": {"$ref": "#/definitions/delayed"}, "hide": {"$ref": "#/definitions/hide"}, "missing": {"$ref": "#/definitions/missing"}, "re_flags": {"$ref": "#/definitions/re_flags"}, "transform": {"$ref": "#/definitions/transform"}, "underline": {"$ref": "#/definitions/underline"}, "width": {"$ref": "#/definitions/width"}}, "additionalProperties": False}, # Mapping elements "interval": { "description": "Map a value within an interval to a style", "type": "object", "properties": {"interval": {"type": "array", "items": [ {"type": "array", "items": [{"type": ["number", "null"]}, {"type": ["number", "null"]}, {"type": ["string", "boolean"]}], "additionalItems": False}]}}, "additionalProperties": False}, "lookup": { "description": "Map a value to a style", "type": "object", "properties": {"lookup": {"type": "object"}}, "additionalProperties": False}, "re_lookup": { "description": """Apply a style to values that match a regular expression. 
The regular expressions are matched with re.search and tried in order, stopping after the first match. Flags for re.search can be specified via the re_flags style attribute.""", "type": "object", "properties": {"re_lookup": {"type": "array", "items": [ {"type": "array", "items": [{"type": "string"}, {"type": ["string", "boolean"]}], "additionalItems": False}]}}, "additionalProperties": False}, }, "type": "object", "properties": { "aggregate_": { "description": "Shared attributes for the summary rows", "oneOf": [{"type": "object", "properties": {"color": {"$ref": "#/definitions/color"}, "bold": {"$ref": "#/definitions/bold"}, "underline": {"$ref": "#/definitions/underline"}}}, {"type": "null"}], "default": {}, "scope": "table"}, "default_": { "description": "Default style of columns", "oneOf": [{"$ref": "#/definitions/styles"}, {"type": "null"}], "default": {"align": "left", "hide": False, "width": "auto"}, "scope": "table"}, "header_": { "description": "Attributes for the header row", "oneOf": [{"type": "object", "properties": {"color": {"$ref": "#/definitions/color"}, "bold": {"$ref": "#/definitions/bold"}, "underline": {"$ref": "#/definitions/underline"}}}, {"type": "null"}], "default": None, "scope": "table"}, "separator_": { "description": "Separator used between fields", "type": "string", "default": " ", "scope": "table"}, "width_": { "description": """Total width of table. This is typically not set directly by the user. With the default null value, the width is set to the stream's width for interactive streams and as wide as needed to fit the content for non-interactive streams.""", "default": None, "oneOf": [{"type": "integer"}, {"type": "null"}], "scope": "table"} }, # All other keys are column names. "additionalProperties": {"$ref": "#/definitions/styles"} } def default(prop): """Return the default value schema property. Parameters ---------- prop : str A key for schema["properties"] """ return schema["properties"][prop]["default"] def adopt(style, new_style): if new_style is None: return style combined = {} for key, value in style.items(): if isinstance(value, Mapping): combined[key] = dict(value, **new_style.get(key, {})) else: combined[key] = new_style.get(key, value) return combined class StyleError(Exception): """Style is invalid or mispecified in some way. """ pass class StyleValidationError(StyleError): """Exception raised if the style schema does not validate. """ def __init__(self, original_exception): msg = ("Invalid style\n\n{}\n\n\n" "See pyout.schema for style definition." .format(original_exception)) super(StyleValidationError, self).__init__(msg) class StrHasher(): """A wrapper for objects that hashes based on the string representation. Primarily its purpose to provide a customized @StrHasher.lru_cache decorator which allows to cache based on the string representation of the argument of unhashable objects. """ def __init__(self, obj): self.value = obj self.hash = hashlib.md5(str(obj).encode()).hexdigest() def __hash__(self): return hash(self.hash) def __eq__(self, other): return self.hash == other.hash @staticmethod def _to(func): def cached_func(*args, **kwargs): wrapper = StrHasher((args, kwargs)) return func(wrapper) return cached_func @staticmethod def _from(func): def cached_func(wrapper): return func(*wrapper.value[0], **wrapper.value[1]) return cached_func @classmethod def lru_cache(cls, maxsize=128): """ Custom LRU cache decorator that allows for a custom hasher function. 
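Examples
--------
A rough sketch of caching a function whose argument (e.g., a style
dict) is not itself hashable; the decorated function is hypothetical::

    @StrHasher.lru_cache()
    def check(style):
        # stand-in for an expensive computation over an unhashable dict
        return len(str(style))

    check({"name": {"width": 10}})  # computed on the first call
    check({"name": {"width": 10}})  # equal repr, so served from the cache
    check.cache_info()              # functools cache statistics are exposed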
""" def decorator(func): @functools.lru_cache(maxsize=maxsize) @cls._from def lru_cached_func(*args, **kwargs): return func(*args, **kwargs) # Interface cache operations cached_func = functools.wraps(func)( cls._to( lru_cached_func ) ) for op in dir(lru_cached_func): if op.startswith('cache_'): setattr(cached_func, op, getattr(lru_cached_func, op)) return cached_func return decorator @StrHasher.lru_cache() # the same styles could be checked over again def validate(style): """Check `style` against pyout.styling.schema. Parameters ---------- style : dict Style object to validate. Raises ------ StyleValidationError if `style` is not valid. """ try: jsonschema.validate(style, schema) except jsonschema.ValidationError as exc: new_exc = StyleValidationError(exc) # Don't dump the original jsonschema exception because it is already # included in the StyleValidationError's message. new_exc.__cause__ = None raise new_exc def value_type(value): """Classify `value` of bold, color, and underline keys. Parameters ---------- value : style value Returns ------- str, {"simple", "lookup", "re_lookup", "interval"} """ try: keys = list(value.keys()) except AttributeError: return "simple" if keys in [["lookup"], ["re_lookup"], ["interval"]]: return keys[0] raise ValueError("Type of `value` could not be determined") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/field.py0000644000175100001660000004241314756362760015144 0ustar00runnerdocker"""Define a "field" based on a sequence of processor functions. """ from collections import defaultdict from collections import OrderedDict from itertools import chain from logging import getLogger import re import sys from pyout.elements import value_type lgr = getLogger(__name__) class Field(object): """Render values based on a list of processors. A Field instance is a template for a string that is defined by its width, text alignment, and its "processors". When a field is called with a value, the value is rendered in three steps. pre -> format -> post During the first step, the value is fed through the list of pre-format processor functions. The result of this value is then formatted as a string with the specified width and alignment. Finally, this result is fed through the list of the post-format processors. The rendered string is the result returned by the last processor Parameters ---------- width : int, optional align : {'left', 'right', 'center'}, optional default_keys, other_keys : sequence, optional Together, these define the "registered" set of processor keys that can be used in the `pre` and `post` dicts. Any key given to the `add` method or instance call must be contained in one of these collections. The processor lists for `default_keys` is used when the instance is called without a list of `keys`. `other_keys` defines additional keys that can be passed as the `keys` argument to the instance call. Attributes ---------- width : int registered_keys : set Set of available keys. default_keys : list Defines which processor lists are called by default and in what order. The values can be overridden by the `keys` argument when calling the instance. pre, post : dict of lists These map each registered key to a list of processors. Conceptually, `pre` and `post` are a list of functions that form a pipeline, but they are structured as a dict of lists to allow different processors to be grouped by key. By specifying keys, the caller can control which groups are "enabled". 
""" _align_values = {"left": "<", "right": ">", "center": "^"} def __init__(self, width=10, align="left", default_keys=None, other_keys=None): self._width = width self._align = align self._fmt = self._build_format() self.default_keys = default_keys or [] self.registered_keys = set(chain(self.default_keys, other_keys or [])) self.pre = defaultdict(list) self.post = defaultdict(list) def _check_if_registered(self, key): if key not in self.registered_keys: raise ValueError( "key '{}' was not specified at initialization".format(key)) def add(self, kind, key, *values): """Add processor functions. Any previous list of processors for `kind` and `key` will be overwritten. Parameters ---------- kind : {"pre", "post"} key : str A registered key. Add the functions (in order) to this key's list of processors. *values : callables Processors to add. """ if kind == "pre": procs = self.pre elif kind == "post": procs = self.post else: raise ValueError("kind is not 'pre' or 'post'") self._check_if_registered(key) procs[key] = values @property def width(self): return self._width @width.setter def width(self, value): self._width = value self._fmt = self._build_format() def _build_format(self): align = self._align_values[self._align] return "".join(["{:", align, str(self.width), "}"]) def _format(self, _, result): """Wrap format call as a two-argument processor function. """ return self._fmt.format(str(result)) def __call__(self, value, keys=None, exclude_post=False): """Render `value` by feeding it through the processors. Parameters ---------- value : str keys : sequence, optional These lists define which processor lists are called and in what order. If not specified, the `default_keys` attribute will be used. exclude_post : bool, optional Whether to return the value after the format step rather than feeding it through post-format processors. """ lgr.debug("Rendering field with value %r and %s keys %r", value, "default" if keys is None else "non-default", self.default_keys) if keys is None: keys = self.default_keys for key in keys: self._check_if_registered(key) pre_funcs = chain(*(self.pre[k] for k in keys)) if exclude_post: post_funcs = [] else: post_funcs = chain(*(self.post[k] for k in keys)) funcs = chain(pre_funcs, [self._format], post_funcs) result = value for fn in funcs: result = fn(value, result) return result class Nothing(object): """Internal class to represent missing values. This is used instead of a built-in like None, "", or 0 to allow us to unambiguously identify a missing value. In terms of methods, it tries to mimic the string `text` (an empty string by default) because that behavior is the most useful internally for formatting the output. Parameters ---------- text : str, optional Text to use for string representation of this object. """ def __init__(self, text=""): self._text = text def __str__(self): return self._text def __add__(self, right): return str(self) + right def __radd__(self, left): return left + str(self) def __bool__(self): return False def __format__(self, format_spec): return self._text.__format__(format_spec) def _pass_nothing_through(proc): """Make processor function `proc` skip Nothing objects. """ def wrapped(value, result): return result if isinstance(value, Nothing) else proc(value, result) return wrapped class StyleFunctionError(Exception): """Signal that a style function failed. 
""" def __init__(self, function, exc_type, exc_value): msg = "{} raised {}\n {}".format(function, exc_type.__name__, exc_value) super(StyleFunctionError, self).__init__(msg) class StyleProcessors(object): """A base class for generating Field.processors for styled output. Attributes ---------- style_keys : list of tuples Each pair consists of a style attribute (e.g., "bold") and the expected type. """ # Ordering the dict isn't required, but we'll loop over this to generate # processors, and it'd be good to have a predictable order across calls. style_types = OrderedDict([("bold", bool), ("underline", bool), ("color", str)]) def render(self, style_attr, value): """Render `value` according to a style key. Parameters ---------- style_attr : str A style attribute (e.g., "bold" or "blue"). value : str The value to render. Returns ------- An output-specific styling of `value` (str). """ raise NotImplementedError @staticmethod def transform(function): """Return a processor for a style's "transform" function. """ def transform_fn(_, result): lgr.debug("Transforming %r with %r", result, function) try: return function(result) except: exctype, value, tb = sys.exc_info() try: new_exc = StyleFunctionError(function, exctype, value) # Remove the "During handling ..." since we're # reraising with the traceback. raise new_exc.with_traceback(tb) from None finally: # Remove circular reference. # https://docs.python.org/2/library/sys.html#sys.exc_info del tb return transform_fn def by_key(self, style_key, style_value): """Return a processor for a "simple" style value. Parameters ---------- style_key : str A style key. style_value : bool or str A "simple" style value that is either a style attribute (str) and a boolean flag indicating to use the style attribute named by `style_key`. Returns ------- A function. """ if self.style_types[style_key] is bool: style_attr = style_key else: style_attr = style_value def proc(_, result): return self.render(style_attr, result) return proc def by_lookup(self, style_key, style_value): """Return a processor that extracts the style from `mapping`. Parameters ---------- style_key : str A style key. style_value : dict A dictionary with a "lookup" key whose value is a "mapping" style value that maps a field value to either a style attribute (str) and a boolean flag indicating to use the style attribute named by `style_key`. Returns ------- A function. """ style_attr = style_key if self.style_types[style_key] is bool else None mapping = style_value["lookup"] def proc(value, result): try: lookup_value = mapping[value] except KeyError: lgr.debug("by_lookup: Key %r not found in mapping %s", value, mapping) lookup_value = None except TypeError: lgr.debug("by_lookup: Key %r not hashable", value) lookup_value = None if not lookup_value: return result return self.render(style_attr or lookup_value, result) return proc def by_re_lookup(self, style_key, style_value, re_flags=0): """Return a processor for a "re_lookup" style value. Parameters ---------- style_key : str A style key. style_value : dict A dictionary with a "re_lookup" style value that consists of a sequence of items where each item should have the form `(regexp, x)`, where regexp is a regular expression to match against the field value and x is either a style attribute (str) and a boolean flag indicating to use the style attribute named by `style_key`. re_flags : int Passed through as flags argument to re.compile. Returns ------- A function. 
""" style_attr = style_key if self.style_types[style_key] is bool else None regexps = [(re.compile(r, flags=re_flags), v) for r, v in style_value["re_lookup"]] def proc(value, result): if not isinstance(value, str): lgr.debug("by_re_lookup: Skipping non-string value %r", value) return result for r, lookup_value in regexps: if r.search(value): if not lookup_value: return result return self.render(style_attr or lookup_value, result) return result return proc def by_interval_lookup(self, style_key, style_value): """Return a processor for an "interval" style value. Parameters ---------- style_key : str A style key. style_value : dict A dictionary with an "interval" key whose value consists of a sequence of tuples where each tuple should have the form `(start, end, x)`, where start is the start of the interval (inclusive), end is the end of the interval, and x is either a style attribute (str) and a boolean flag indicating to use the style attribute named by `style_key`. Returns ------- A function. """ style_attr = style_key if self.style_types[style_key] is bool else None intervals = style_value["interval"] def proc(value, result): try: value = float(value) except Exception as exc: lgr.debug("by_interval_lookup: Skipping %r: %s", value, exc) return result for start, end, lookup_value in intervals: if start is None: start = float("-inf") if end is None: end = float("inf") if start <= value < end: if not lookup_value: return result return self.render(style_attr or lookup_value, result) return result return proc def pre_from_style(self, column_style): """Yield pre-format processors based on `column_style`. Parameters ---------- column_style : dict A style where the top-level keys correspond to style attributes such as "bold" or "color". Returns ------- A generator object. """ if "transform" in column_style: yield _pass_nothing_through( self.transform(column_style["transform"])) def post_from_style(self, column_style): """Yield post-format processors based on `column_style`. Parameters ---------- column_style : dict A style where the top-level keys correspond to style attributes such as "bold" or "color". Returns ------- A generator object. """ flanks = Flanks() yield flanks.split_flanks fns = {"simple": self.by_key, "lookup": self.by_lookup, "re_lookup": self.by_re_lookup, "interval": self.by_interval_lookup} for key in self.style_types: if key not in column_style: continue vtype = value_type(column_style[key]) fn = fns[vtype] args = [key, column_style[key]] if vtype == "re_lookup": args.append(sum(getattr(re, f) for f in column_style.get("re_flags", []))) yield _pass_nothing_through(fn(*args)) yield flanks.join_flanks class Flanks(object): """A pair of processors that split and rejoin flanking whitespace. """ flank_re = re.compile(r"(\s*)(.*\S)(\s*)\Z", flags=re.DOTALL) def __init__(self): self.left, self.right = None, None def split_flanks(self, _, result): """Return `result` without flanking whitespace. """ if not result.strip(): self.left, self.right = "", "" return result match = self.flank_re.match(result) if not match: raise RuntimeError( "Flank regexp unexpectedly did not match result: " "{!r} (type: {})" .format(result, type(result))) self.left, self.right = match.group(1), match.group(3) return match.group(2) def join_flanks(self, _, result): """Add whitespace from last `split_flanks` call back to `result`. """ return self.left + result + self.right class PlainProcessors(StyleProcessors): """Ignore color, bold, or underline styling. 
""" style_types = {} class TermProcessors(StyleProcessors): """Generate Field.processors for styled Terminal output. Parameters ---------- term : blessed.Terminal or blessings.Terminal Notes ----- * Eventually we may want to retire blessings: https://github.com/pyout/pyout/issues/136 """ def __init__(self, term): self.term = term def render(self, style_attr, value): """Prepend terminal code for `key` to `value`. Parameters ---------- style_attr : str A style attribute (e.g., "bold" or "blue"). value : str The value to render. Returns ------- The code for `key` (e.g., "\x1b[1m" for bold) plus the original value. """ if not value.strip(): # We've got an empty string. Don't bother adding any # codes. return value return str(getattr(self.term, style_attr)) + value def _maybe_reset(self): def proc(_, result): if "\x1b" in result: return result + self.term.normal return result return proc def post_from_style(self, column_style): """A Terminal-specific reset to StyleProcessors.post_from_style. """ for proc in super(TermProcessors, self).post_from_style(column_style): if proc.__name__ == "join_flanks": # Reset any codes before adding back whitespace. yield self._maybe_reset() yield proc ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/interface.py0000644000175100001660000006306514756362760016027 0ustar00runnerdocker"""Core pyout interface definitions. """ import abc from collections import defaultdict from collections import OrderedDict from collections.abc import Mapping import concurrent.futures as cfut from concurrent.futures import ThreadPoolExecutor as Pool from contextlib import contextmanager from functools import wraps import inspect from itertools import chain from logging import getLogger import os import sys import threading import time from pyout.common import ContentWithSummary from pyout.common import RowNormalizer from pyout.common import StyleFields from pyout.common import UnknownColumns from pyout.field import PlainProcessors lgr = getLogger(__name__) class Stream(object, metaclass=abc.ABCMeta): """Output stream interface used by Writer. Parameters ---------- stream : stream, optional Stream to write output to. Defaults to sys.stdout. interactive : boolean, optional Whether this stream is interactive. If not specified, it will be set to the return value of `stream.isatty()`. Attributes ---------- interactive : bool supports_updates : boolean If true, the writer supports updating previous lines. """ supports_updates = True def __init__(self, stream=None, interactive=None): self.stream = stream or sys.stdout if interactive is None: self.interactive = self.stream.isatty() else: self.interactive = interactive self.supports_updates = self.interactive @abc.abstractproperty def width(self): """Maximum line width. """ @abc.abstractproperty def height(self): """Maximum number of rows that are visible.""" @abc.abstractmethod def write(self, text): """Write `text`. """ @abc.abstractmethod def clear_last_lines(self, n): """Clear previous N lines. """ @abc.abstractmethod def overwrite_line(self, n, text): """Go to the Nth previous line and overwrite it with `text` """ @abc.abstractmethod def move_to(self, n): """Move the Nth previous line. """ def skip_if_aborted(method): """Decorate Writer `method` to prevent execution if write has been aborted. 
""" @wraps(method) def wrapped(self, *args, **kwds): if self._aborted: lgr.debug("Write has been aborted; not calling %r", method) else: return method(self, *args, **kwds) return wrapped class Writer(object): """Base class implementing the core handling logic of pyout output. To define a writer, a subclass should inherit Writer and define __init__ to call Writer.__init__ and then the _init method. """ def __init__(self, columns=None, style=None, stream=None, interactive=None, mode=None, continue_on_failure=True, wait_for_top=3, max_workers=None): if columns and not isinstance(columns, (list, OrderedDict)): self._columns = list(columns) else: self._columns = columns or None self._ids = None self._last_content_len = 0 self._last_summary = None self._normalizer = None self._pool = None if max_workers is None and sys.version_info < (3, 8): # ThreadPoolExecutor's max_workers didn't get a default until # Python 3.5, and that default was changed in 3.8. Use Python # 3.8's default for consistent behavior. max_workers = min(32, (os.cpu_count() or 1) + 4) self._max_workers = max_workers self._lock = None self._aborted = False self._futures = defaultdict(list) self._continue_on_failure = continue_on_failure self._wait_for_top = wait_for_top self._mode = mode self._write_fn = None self._stream = None self._width_from_stream = False self._content = None def _init(self, style, streamer, processors=None): """Do writer-specific setup. Parameters ---------- style : dict Style, as passed to __init__. streamer : interface.Stream A stream interface that takes __init__'s `stream` and `interactive` arguments into account. processors : field.StyleProcessors, optional A writer-specific processors instance. Defaults to field.PlainProcessors(). """ self._stream = streamer self._init_mode(streamer) style = style or {} if style.get("width_") is None: lgr.debug("Setting width to stream width: %s", self._stream.width) style["width_"] = self._stream.width self._width_from_stream = True self._content = ContentWithSummary( StyleFields(style, processors or PlainProcessors())) def _init_prewrite(self, table_width=None): self._content.init_columns(self._columns, self.ids, table_width=table_width) self._normalizer = RowNormalizer(self._columns, self._content.fields.style) def _init_mode(self, streamer): value = self._mode lgr.debug("Initializing mode with given value of %s", value) if value is None: if streamer.interactive: if streamer.supports_updates: value = "update" else: value = "incremental" else: value = "final" valid = {"update", "incremental", "final"} if value not in valid: raise ValueError("{!r} is not a valid mode: {!r}" .format(value, valid)) lgr.debug("Setting write mode to %r", value) self._mode = value if value == "incremental": self._write_fn = self._write_incremental elif value == "final": self._write_fn = self._write_final else: if self._stream.supports_updates and self._stream.interactive: self._write_fn = self._write_update else: raise ValueError("Stream {} does not support updates" .format(self._stream)) def __enter__(self): return self def __exit__(self, _exc_type, exc_value, _tb): failed = None if exc_value is not None: self._abort(msg="\n{!r} raised\n".format(exc_value)) else: try: failed = self.wait() except KeyboardInterrupt: lgr.debug("Caught KeyboardInterrupt " "while waiting for asynchronous workers") self._abort(msg="\nKeyboard interrupt registered\n") # Raise so that caller can decide how to handle. 
raise if self._mode == "final": self._stream.write(str(self._content)) if self._mode != "update" and self._last_summary is not None: self._stream.write(str(self._last_summary)) if failed: self._print_async_exceptions(failed) @property def ids(self): """A list of unique IDs used to identify a row. If not explicitly set, it defaults to the first column name. """ if self._ids is None: if self._columns: if isinstance(self._columns, OrderedDict): return [list(self._columns.keys())[0]] return [self._columns[0]] else: return self._ids @ids.setter def ids(self, columns): self._ids = columns def _process_futures(self): """Process each future as it completes. If _continue_on_failure is false, raise the exception of the first failed future encountered. Otherwise return a list of futures that had an exception. """ failed = [] lgr.debug("Waiting for asynchronous calls") continue_on_failure = self._continue_on_failure for id_key, futures in self._futures.items(): for future in cfut.as_completed(futures): lgr.debug("Processing future %s", future) if not future.cancelled() and future.exception(): if continue_on_failure: failed.append((id_key, future)) else: future.result() # Raise exception. return failed def _print_async_exceptions(self, failed_futures): import traceback # Prevent any remaining callbacks from writing to stream. with self._write_lock(): self._aborted = True n_failed = len(failed_futures) stream = self._stream with self._write_lock(): stream.write("\n\n") stream.write("ERROR: {} asynchronous worker{} failed\n\n" .format(n_failed, "" if n_failed == 1 else "s")) for id_key, future in failed_futures: try: future.result() except Exception: stream.write( "Producing value for row {} failed:\n{}\n" .format(id_key, traceback.format_exc())) @skip_if_aborted def _abort(self, cause=None, msg=None): if self._pool is None: # No asynchronous calls; there's nothing to abort. return with self._write_lock(): self._aborted = cause or True stream = self._stream if msg: stream.write(msg) futures = list(chain(*self._futures.values())) for f in futures: lgr.debug("Calling .cancel() for %s", f) f.cancel() n_running = len([f for f in futures if f.running()]) stream.write("Canceled pending asynchronous workers. " "{} worker{} already running\n" .format(n_running, "" if n_running == 1 else "s")) # Note: We can't call shutdown() with wait=True here. That will # trigger a RuntimeError in underlying .join() call. self._pool.shutdown(wait=False) def wait(self): """Wait for asynchronous calls to return. Returns ------- A list of futures for asynchronous calls that had an exception. """ lgr.debug("Waiting for asynchronous calls") if self._pool is None: return aborted = self._aborted if aborted: if isinstance(aborted, cfut.Future): aborted.result() # Raise exception. else: failed = self._process_futures() self._pool.shutdown(wait=True) lgr.debug("Pool shut down") return failed @contextmanager def _write_lock(self): """Acquire and release the lock around output calls. This should allow multiple threads or processes to write output reliably. Code that modifies the `_content` attribute should also do so within this context.
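The lock itself is created lazily by `_start_callables`, so for purely synchronous use this context manager simply yields without locking. Typical usage is `with self._write_lock(): ...` around any code that writes to the stream.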
""" if self._lock: lgr.debug("Acquiring write lock") self._lock.acquire() try: yield finally: if self._lock: lgr.debug("Releasing write lock") self._lock.release() def _write(self, row, style=None): with self._write_lock(): if self._width_from_stream and self._mode != "final": width_current = self._content.fields.style["width_"] width_stream = self._stream.width if width_stream is not None and width_current != width_stream: lgr.debug("Current stream width (%d) different " "than last recorded (%d). Updating", width_stream, width_current) self._init_prewrite(table_width=width_stream) try: self._write_fn(row, style) except UnknownColumns as exc: self._columns.extend(exc.unknown_columns) self._init_prewrite() self._write_fn(row, style) def _get_last_summary_length(self): last_summary = self._last_summary return len(last_summary.splitlines()) if last_summary else 0 def _write_update(self, row, style=None): last_summary_len = self._get_last_summary_length() content, status, summary = self._content.update(row, style) if last_summary_len > 0: # Clear the summary because 1) it has very likely changed, 2) # it makes the counting for row updates simpler, 3) and it is # possible for the summary lines to shrink. lgr.debug("Clearing summary of %d line(s)", last_summary_len) self._stream.clear_last_lines(last_summary_len) single_row_updated = False if isinstance(status, int): height = self._stream.height n_visible = min( height - last_summary_len - 1, # -1 for current line. self._last_content_len) n_back = self._last_content_len - status if n_back > n_visible: lgr.debug("Cannot move back %d rows for update; " "only %d visible rows", n_back, n_visible) status = "repaint" content = str(self._content) else: lgr.debug("Moving up %d line(s) to overwrite line %d with %r", n_back, status, row) self._stream.overwrite_line(n_back, content) single_row_updated = True if not single_row_updated: if status == "repaint": lgr.debug("Moving up %d line(s) to repaint the whole thing. " "Blame row %r", self._last_content_len, row) self._stream.move_to(self._last_content_len) self._stream.write(content) if summary is not None: self._stream.write(summary) lgr.debug("Wrote summary") self._last_content_len = len(self._content) self._last_summary = summary def _write_incremental(self, row, style=None): content, status, summary = self._content.update(row, style) if isinstance(status, int): lgr.debug("Duplicating line %d with %r", status, row) elif status == "repaint": lgr.debug("Duplicating the whole thing. Blame row %r", row) self._stream.write(content) self._last_summary = summary def _write_final(self, row, style=None): _, _, summary = self._content.update(row, style) self._last_summary = summary @skip_if_aborted def _write_async_result(self, id_vals, cols, result): lgr.debug("Received result for %s: %s", cols, result) if isinstance(result, Mapping): lgr.debug("Processing result as mapping") elif isinstance(result, tuple): lgr.debug("Processing result as tuple") result = dict(zip(cols, result)) elif len(cols) == 1: lgr.debug("Processing result as atom") result = {cols[0]: result} else: raise ValueError( "Expected tuple or mapping for columns {!r}, got {!r}" .format(cols, result)) result.update(id_vals) self._write(result) @skip_if_aborted def _start_callables(self, row, callables): """Start running `callables` asynchronously. 
""" id_key = tuple(row[c] for c in self.ids) id_vals = {c: row[c] for c in self.ids} if self._pool is None: lgr.debug("Initializing pool with max workers=%s", self._max_workers) self._pool = Pool(max_workers=self._max_workers) if self._lock is None: lgr.debug("Initializing lock") self._lock = threading.Lock() for cols, fn in callables: gen = None if inspect.isgeneratorfunction(fn): gen = fn() elif inspect.isgenerator(fn): gen = fn def check_result(future): if future.cancelled(): ok = False elif future.exception(): ok = False if not self._continue_on_failure: self._abort(cause=future) else: ok = True return ok if gen: lgr.debug("Wrapping generator for cols %r of row %r", cols, id_vals) def async_fn(): for i in gen: self._write_async_result(id_vals, cols, i) callback = check_result else: async_fn = fn def callback(future): if check_result(future): self._write_async_result( id_vals, cols, future.result()) try: future = self._pool.submit(async_fn) except RuntimeError as exc: # We can get here if, between entering this method call and # calling .submit(), _aborted was set by a callback. if self._aborted: lgr.debug( "Submitting callable for %s failed " "because pool is already shutdown: %s", id_key, exc) else: raise else: future.add_done_callback(callback) lgr.debug("Registering future %s for %s", future, id_key) self._futures[id_key].append(future) def top_nrows_done(self, n, height=None): """Check if the top N rows' asynchronous workers are done. Parameters ---------- n : int Consider this many of the top rows (e.g., 1 would consider just the first row). height : int, optional Take this as the terminal height instead of querying the stream. Returns ------- True if the asynchronous workers for the top N rows have finished, and False if they have not. None is returned if Tabular is not operating in "update" mode. """ if self._mode != "update" or not self._content: return None # 0|.. <| # 1|.. | # 2|.. | # top_idx 3|oo <| <| <| | # 4|oo |--- n=3 | | | # 5|oo <| | | |--- content # 6|oo | | | length # 7|oo |--- n_free | | (including header) # 8|oo | | stream | # 9|oo | |--- height | # 10|oo <| | <| # |oo <| | # |oo |--- summary | # |oo <| | # |oo <------ cursor <| last_summary_len = self._get_last_summary_length() n_free = (height or self._stream.height) - last_summary_len - 1 top_idx = self._last_content_len - n_free if top_idx < 0: # The content lines haven't yet filled the screen. return True idxs = (top_idx + i for i in range(min(n, n_free))) id_keys = (self._content.get_idkey(i) for i in idxs if i is not None) futures = self._futures top_futures = list(chain(*(futures[k] for k in id_keys))) if not top_futures: # These rows have no registered producers. return True return all(f.done() for f in top_futures) def _maybe_wait_on_top_rows(self): n = self._wait_for_top if n: waited = 0 secs = 0.5 height = self._stream.height while self.top_nrows_done(n, height) is False: time.sleep(secs) waited += 1 if waited: lgr.debug("Waited for %s cycles of sleeping %s seconds", waited, secs) # Wait a bit longer so that the caller has a chance to see the # last updated row if it about to go off screen. time.sleep(secs) @skip_if_aborted def __call__(self, row, style=None): """Write styled `row`. Parameters ---------- row : mapping, sequence, or other If a mapping is given, the keys are the column names and values are the data to write. 
After the initial set of columns is defined (via the constructor's `columns` argument or based on inferring the names from the first row passed), rows can still include new keys, in which case the list of known columns will be expanded. For a sequence, the items represent the values and are taken to be in the same order as the constructor's `columns` argument. Any other object type should have an attribute for each column specified via `columns`. Instead of a plain value, a column's value can be a tuple of the form (initial_value, producer). If a producer is a generator function or a generator object, each item produced replaces `initial_value`. Otherwise, a producer should be a function that will be called with no arguments and that returns the value with which to replace `initial_value`. For both generators and normal functions, the execution will happen asynchronously. Directly supplying a producer as the value rather than (initial_value, producer) is shorthand for ("", producer). The producer can return an update for multiple columns. To do so, the keys of `row` should include a tuple with the column names and the produced value should be a tuple with the same order as the key or a mapping from column name to the updated value. A mapping's keys may include unknown columns; these will be added to the set of known columns. Using the (initial_value, producer) form requires some additional steps. The `ids` property should be set unless the first column happens to be a suitable id. Also, to instruct the program to wait for the updated values, the instance calls should be followed by a call to the `wait` method or the instance should be used as a context manager. style : dict, optional Each top-level key should be a column name and the value should be a style dict that overrides the class instance style. """ self._maybe_wait_on_top_rows() if self._columns is None: self._columns = self._infer_columns(row) lgr.debug("Inferred columns: %r", self._columns) if self._normalizer is None: self._init_prewrite() callables, row = self._normalizer(row) self._write(row, style) if callables: lgr.debug("Starting callables for row %r", row) self._start_callables(row, callables) @staticmethod def _infer_columns(row): try: columns = list(row.keys()) except AttributeError: raise ValueError("Can't infer columns from data") # Make sure we don't have any multi-column keys. flat = [] for column in columns: if isinstance(column, tuple): flat.extend(column) else: flat.append(column) return flat def __getitem__(self, key): """Get the (normalized) row for `key`. This interface is focused on _writing_ output, and the caller usually knows the values. However, this method can be useful for retrieving values that were produced asynchronously (see __call__). Parameters ---------- key : tuple Unique ID for a row, as specified by the `ids` property. Returns ------- A dictionary with the row's current value. """ try: return self._content[key] except KeyError as exc: # Suppress context. raise KeyError(exc) from None @contextmanager def outside_write(self, clear=False): """Stop writing rows and yield control to the caller. This context manager allows callers to interrupt the table output while writing their own output. On exit, the entire table is rewritten. Parameters ---------- clear : bool Before yielding to the caller, clear the visible rows. This only has an effect in "update" mode. """ update = self._mode == "update" with self._write_lock(): if update and clear: n_lines = min( # -1 for the last empty line of screen.
self._stream.height - 1, self._last_content_len + self._get_last_summary_length()) self._stream.clear_last_lines(n_lines) yield if update: self._stream.write(str(self._content)) last_summary = self._last_summary if last_summary: self._stream.write(last_summary) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/summary.py0000644000175100001660000000527014756362760015556 0ustar00runnerdocker"""Summarize output. """ from collections.abc import Mapping from logging import getLogger from pyout.field import Nothing lgr = getLogger(__name__) class Summary(object): """Produce summary rows for a list of normalized rows. Parameters ---------- style : dict A style that follows the schema defined in pyout.elements. """ def __init__(self, style): self.style = style self._enabled = any("aggregate" in v for v in self.style.values() if isinstance(v, Mapping)) def __bool__(self): return self._enabled def summarize(self, columns, rows): """Return summary rows. Parameters ---------- columns : list of str Summarize values within these columns. rows : list of dicts Normalized rows that contain keys for `columns`. Returns ------- A list of summary rows. Each row is a tuple where the first item is the data and the second is a dict of keyword arguments that can be passed to StyleFields.render. """ agg_styles = {c: self.style[c]["aggregate"] for c in columns if "aggregate" in self.style[c]} summaries = {} for col, agg_fn in agg_styles.items(): lgr.debug("Summarizing column %r with %r", col, agg_fn) colvals = filter(lambda x: not isinstance(x, Nothing), (row[col] for row in rows)) summaries[col] = agg_fn(list(colvals)) if not summaries: return [] # The rest is just restructuring the summaries into rows that are # compatible with pyout.Content. Most the complexity below comes from # the fact that a summary function is allowed to return either a single # item or a list of items. maxlen = max(len(v) if isinstance(v, list) else 1 for v in summaries.values()) summary_rows = [] for rowidx in range(maxlen): sumrow = {} for column, values in summaries.items(): if isinstance(values, list): if rowidx >= len(values): continue sumrow[column] = values[rowidx] elif rowidx == 0: sumrow[column] = values for column in columns: if column not in sumrow: sumrow[column] = "" summary_rows.append((sumrow, {"style": self.style.get("aggregate_"), "adopt": False, "can_unhide": False})) return summary_rows ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tabular.py0000644000175100001660000001422714756362760015515 0ustar00runnerdocker"""Interface for styling tabular terminal output. This module defines the Tabular entry point. """ from contextlib import contextmanager from logging import getLogger import os # Eventually we may want to retire blessings: # https://github.com/pyout/pyout/issues/136 try: from blessed import Terminal except ImportError: from blessings import Terminal from pyout import interface from pyout.field import TermProcessors lgr = getLogger(__name__) class TerminalStream(interface.Stream): """Stream interface implementation using blessed/blessings.Terminal. """ def __init__(self, stream=None, interactive=None): super(TerminalStream, self).__init__( stream=stream, interactive=interactive) self.term = Terminal(stream=self.stream, # interactive=False maps to force_styling=None. force_styling=self.interactive or None) @property def width(self): """Maximum terminal width. 
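Returns None if the stream is not interactive.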
""" if self.interactive: return self.term.width @property def height(self): """Terminal height. """ if self.interactive: return self.term.height def write(self, text): """Write `text` to terminal. """ self.term.stream.write(text) def clear_last_lines(self, n): """Clear last N lines of terminal output. """ self.term.stream.write( self.term.move_up * n + self.term.clear_eos) self.term.stream.flush() @contextmanager def _moveback(self, n): self.term.stream.write(self.term.move_up * n + self.term.clear_eol) try: yield finally: self.term.stream.write(self.term.move_down * (n - 1)) self.term.stream.flush() def overwrite_line(self, n, text): """Move back N lines and overwrite line with `text`. """ with self._moveback(n): self.term.stream.write(text) def move_to(self, n): """Move back N lines in terminal. """ self.term.stream.write(self.term.move_up * n) class Tabular(interface.Writer): """Interface for writing and updating styled terminal output. Parameters ---------- columns : list of str or OrderedDict, optional Column names. An OrderedDict can be used instead of a sequence to provide a map of short names to the displayed column names. If not given, the keys will be extracted from the first row of data that the object is called with, which is particularly useful if the row is an OrderedDict. This argument must be given if this instance will not be called with a mapping. style : dict, optional Each top-level key should be a column name and the value should be a style dict that overrides the `default_style` class attribute. See the "Examples" section below. stream : stream object, optional Write output to this stream (sys.stdout by default). interactive : boolean, optional Whether stream is considered interactive. By default, this is determined by calling `stream.isatty()`. If non-interactive, the bold, color, and underline keys will be ignored, and the mode will default to "final". mode : {update, incremental, final}, optional Mode of display. * update (default): Go back and update the fields. This includes resizing the automated widths. * incremental: Don't go back to update anything. * final: finalized representation appropriate for redirecting to file Defaults to "update" if the stream supports updates and "incremental" otherwise. If the stream is non-interactive, defaults to "final". continue_on_failure : bool, optional If an asynchronous worker fails, the default behavior is to continue and report the failures at the end. Set this flag to false in order to abort writing the table and raise if any exception is received. wait_for_top : int, optional Wait for the asynchronous workers of this many top-most rows to finish before proceeding with a row before adding a row that would take the top row off screen. max_workers : int, optional Use at most this number of concurrent workers when retrieving values asynchronously (i.e., when producers are specified as row values). The default matches the default of `concurrent.futures.ThreadPoolExecutor` as of Python 3.8: `min(32, os.cpu_count() + 4)`. Examples -------- Create a `Tabular` instance for two output fields, "name" and "status". >>> out = Tabular(["name", "status"], style={"status": {"width": 5}}) The first field, "name", is taken as the unique ID. The `style` argument is used to override the default width for the "status" field that is defined by the class attribute `default_style`. Write a row to stdout: >>> out({"name": "foo", "status": "OK"}) Write another row, overriding the style: >>> out({"name": "bar", "status": "BAD"}, ... 
style={"status": {"color": "red", "bold": True}}) """ def __init__(self, columns=None, style=None, stream=None, interactive=None, mode=None, continue_on_failure=True, wait_for_top=3, max_workers=None): in_jupyter = "JPY_PARENT_PID" in os.environ if in_jupyter: # TODO: More work is needed to render nicely in Jupyter. For now, # just trigger the final, non-interactive rendering. mode = mode or "final" interactive = False if interactive is None else interactive super(Tabular, self).__init__( columns, style, stream=stream, interactive=interactive, mode=mode, continue_on_failure=continue_on_failure, wait_for_top=wait_for_top, max_workers=max_workers) streamer = TerminalStream(stream=stream, interactive=interactive) if streamer.interactive: processors = TermProcessors(streamer.term) else: processors = None super(Tabular, self)._init(style, streamer, processors) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tabular_dummy.py0000644000175100001660000000346014756362760016725 0ustar00runnerdocker"""Not much of an interface for styling tabular terminal output. This module defines a mostly useless Tabular entry point. Previous lines are not updated, and the results are not styled (e.g., no coloring or bolding of values). In other words, this is not a real attempt to support Windows. """ from pyout import interface class NoUpdateTerminalStream(interface.Stream): def __init__(self, stream=None, interactive=None): super(NoUpdateTerminalStream, self).__init__( stream=stream, interactive=interactive) self.supports_updates = False def _die(self, *args, **kwargs): raise NotImplementedError("{!s} does not support 'update' methods" .format(self.__class__.__name__)) clear_last_lines = _die overwrite_line = _die move_to = _die # Height and width are the fallback defaults of py3's # shutil.get_terminal_size(). @property def width(self): return 80 @property def height(self): return 24 def write(self, text): self.stream.write(text) class Tabular(interface.Writer): """Like `pyout.tabular.Tabular`, but broken. This doesn't support terminal styling or updating previous content. 
""" def __init__(self, columns=None, style=None, stream=None, interactive=None, mode=None, continue_on_failure=True, wait_for_top=3, max_workers=None): super(Tabular, self).__init__( columns, style, stream=stream, interactive=interactive, mode=mode, continue_on_failure=continue_on_failure, wait_for_top=wait_for_top, max_workers=max_workers) streamer = NoUpdateTerminalStream( stream=stream, interactive=interactive) super(Tabular, self)._init(style, streamer) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1740236285.9275966 pyout-0.8.1/pyout/tests/0000755000175100001660000000000014756362776014654 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/__init__.py0000644000175100001660000000000014756362760016744 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/conftest.py0000644000175100001660000000027714756362760017052 0ustar00runnerdockerimport pytest pytest.register_assert_rewrite("pyout.tests.utils") from pyout.elements import validate @pytest.fixture(autouse=True) def cache_clear(): yield validate.cache_clear() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/tabular.py0000644000175100001660000000174014756362760016653 0ustar00runnerdocker"""A Tabular for testing. """ from io import StringIO from unittest.mock import patch from pyout import Tabular as TheRealTabular from pyout.tests.terminal import Terminal class Tabular(TheRealTabular): """Test-specific subclass of pyout.Tabular. Like pyout.Tabular but `stream` is set to a StringIO object that reports to be interactive. Its value is accessible via the `stdout` property. """ def __init__(self, *args, **kwargs): stream = kwargs.pop("stream", None) if not stream: stream = StringIO() stream.isatty = lambda: True with patch("pyout.tabular.Terminal", Terminal): super(Tabular, self).__init__( *args, stream=stream, **kwargs) @property def stdout(self): return self._stream.stream.getvalue() def change_term_width(self, value): self._stream.term.width = value def change_term_height(self, value): self._stream.term.height = value ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/terminal.py0000644000175100001660000000374314756362760017041 0ustar00runnerdocker"""Terminal test utilities. """ from curses import tigetstr from curses import tparm from functools import partial import re # Eventually we may want to retire blessings: # https://github.com/pyout/pyout/issues/136 try: import blessed as bls except ImportError: import blessings as bls from pyout.tests.utils import assert_contains class Terminal(bls.Terminal): def __init__(self, *args, **kwargs): super(Terminal, self).__init__( *args, kind="xterm-256color", **kwargs) self._width = 100 self._height = 20 @property def width(self): return self._width @width.setter def width(self, value): self._width = value @property def height(self): return self._height @height.setter def height(self, value): self._height = value # unicode_cap, and unicode_parm are copied from blessings' tests. 
def unicode_cap(cap): """Return the result of ``tigetstr`` except as Unicode.""" return tigetstr(cap).decode('latin1') def unicode_parm(cap, *params): """Return the result of ``tparm(tigetstr())`` except as Unicode.""" return tparm(tigetstr(cap), *params).decode('latin1') COLORNUMS = {"black": 0, "red": 1, "green": 2, "yellow": 3, "blue": 4, "magenta": 5, "cyan": 6, "white": 7} def capres(name, value): """Format value with CAP key, followed by a reset. """ if name in COLORNUMS: prefix = unicode_parm("setaf", COLORNUMS[name]) else: prefix = unicode_cap(name) return prefix + value + unicode_cap("sgr0") def eq_repr_noclear(actual, expected): """Like `eq_repr`, but strip clear-related codes from `actual`. """ clear_codes = [re.escape(unicode_cap(x)) for x in ["el", "ed", "cuu1"]] match = re.match("(?:{}|{}|{})*(.*)".format(*clear_codes), actual) assert match, "This should always match" return repr(match.group(1)) == repr(expected) assert_contains_nc = partial(assert_contains, cmp=eq_repr_noclear) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/test_elements.py0000644000175100001660000000460014756362760020072 0ustar00runnerdockerimport pytest from unittest import mock from pyout.elements import adopt from pyout.elements import StyleValidationError from pyout.elements import validate from pyout.elements import value_type def test_adopt_noop(): default_value = {"align": "<", "width": 10, "attrs": []} style = {"name": default_value, "path": default_value, "status": default_value} newstyle = adopt(style, None) for key, value in style.items(): assert newstyle[key] == value def test_adopt(): default_value = {"align": "<", "width": 10, "attrs": []} style = {"name": default_value, "path": default_value, "status": default_value, "sep_": "non-mapping"} newstyle = adopt(style, {"path": {"width": 99}, "status": {"attrs": ["foo"]}, "sep_": "non-mapping update"}) for key, value in style.items(): if key == "path": expected = {"align": "<", "width": 99, "attrs": []} assert newstyle[key] == expected elif key == "status": expected = {"align": "<", "width": 10, "attrs": ["foo"]} assert newstyle[key] == expected elif key == "sep_": assert newstyle[key] == "non-mapping update" else: assert newstyle[key] == value def test_validate_error(): # With caching we want to ensure that we do not cache the error # somehow and do raise it again for _ in range(3): with pytest.raises(StyleValidationError): validate("not ok") def test_validate_ok(): with mock.patch("jsonschema.validate") as mock_validate: for i in range(3): validate({}) # With caching we must not revalidate mock_validate.assert_called_once() mock_validate.reset_mock() for _ in range(3): validate({"header_": {"colname": {"bold": True}}}) mock_validate.assert_called_once() def test_value_type(): assert value_type(True) == "simple" assert value_type("red") == "simple" assert value_type({"lookup": {"BAD": "red"}}) == "lookup" interval = {"interval": [(0, 50, "red"), (50, 80, "yellow")]} assert value_type(interval) == "interval" with pytest.raises(ValueError): value_type({"unknown": 1}) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/test_field.py0000644000175100001660000000273114756362760017344 0ustar00runnerdocker# -*- coding: utf-8 -*- import pytest from pyout.field import Field from pyout.field import Nothing from pyout.field import StyleProcessors def test_field_base(): assert Field()("ok") == "ok " assert Field(width=5, 
align="right")("ok") == " ok" def test_field_update(): field = Field() field.width = 2 assert field("ok") == "ok" def test_field_processors(): def pre(_, result): return result.upper() def post1(_, result): return "AAA" + result def post2(_, result): return result + "ZZZ" field = Field(width=6, align="center", default_keys=["some_key", "another_key"]) field.add("pre", "some_key", pre) field.add("post", "another_key", *[post1, post2]) assert field("ok") == "AAA OK ZZZ" with pytest.raises(ValueError): field.add("not pre or post", "k") with pytest.raises(ValueError): field.add("pre", "not registered key") @pytest.mark.parametrize("text", ["", "-", "…"], ids=["text=''", "text='-'", "text='…'"]) def test_something_about_nothing(text): nada = Nothing(text=text) assert not nada assert str(nada) == text assert "{:5}".format(nada) == "{:5}".format(text) assert "x" + nada == "x" + text assert nada + "x" == text + "x" def test_style_processor_render(): sp = StyleProcessors() with pytest.raises(NotImplementedError): sp.render("key", "value") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/test_interface.py0000644000175100001660000000176614756362760020230 0ustar00runnerdockerimport pytest # Eventually we may want to retire blessings: # https://github.com/pyout/pyout/issues/136 try: pytest.importorskip("blessed") except pytest.skip.Exception: pytest.importorskip("blessings") import inspect from pyout.interface import Stream from pyout.interface import Writer from pyout.tabular import Tabular from pyout.tabular import TerminalStream from pyout.tabular_dummy import NoUpdateTerminalStream from pyout.tabular_dummy import Tabular as DummyTabular @pytest.mark.parametrize("writer", [Tabular, DummyTabular], ids=["tabular", "dummy"]) def test_writer_children_match_signature(writer): assert inspect.signature(writer) == inspect.signature(Writer) @pytest.mark.parametrize("stream", [TerminalStream, NoUpdateTerminalStream], ids=["terminal", "noupdate"]) def test_stream_children_match_signature(stream): assert inspect.signature(stream) == inspect.signature(Stream) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/test_summary.py0000644000175100001660000000311714756362760017755 0ustar00runnerdockerfrom pyout.summary import Summary def eq(result, expect): """Unwrap summarize `result` and assert that it is equal to `expect`. 
""" assert [r[0] for r in result] == expect def test_summary_summarize_no_agg_columns(): sm = Summary({"col1": {}}) eq(sm.summarize(["col1"], [{"col1": "a"}, {"col1": "b"}]), []) def test_summary_summarize_atom_return(): sm = Summary({"col1": {"aggregate": len}}) eq(sm.summarize(["col1"], [{"col1": "a"}, {"col1": "b"}]), [{"col1": 2}]) def test_summary_summarize_list_return(): def unique_lens(xs): return list(sorted(set(map(len, xs)))) sm = Summary({"col1": {"aggregate": unique_lens}}) eq(sm.summarize(["col1"], [{"col1": "a"}, {"col1": "bcd"}, {"col1": "ef"}, {"col1": "g"}]), [{"col1": 1}, {"col1": 2}, {"col1": 3}]) def test_summary_summarize_multicolumn_return(): def unique_values(xs): return list(sorted(set(xs))) sm = Summary({"col1": {"aggregate": unique_values}, "col2": {"aggregate": len}, "col3": {"aggregate": unique_values}}) eq(sm.summarize(["col1", "col2", "col3"], [{"col1": "a", "col2": "x", "col3": "c"}, {"col1": "b", "col2": "y", "col3": "c"}, {"col1": "a", "col2": "z", "col3": "c"}]), [{"col1": "a", "col2": 3, "col3": "c"}, {"col1": "b", "col2": "", "col3": ""}]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/test_tabular.py0000644000175100001660000017232614756362760017723 0ustar00runnerdocker# -*- coding: utf-8 -*- import pytest # Eventually we may want to retire blessings: # https://github.com/pyout/pyout/issues/136 try: pytest.importorskip("blessed") except pytest.skip.Exception: pytest.importorskip("blessings") from collections import Counter from collections import OrderedDict import logging import sys import time import threading import traceback from pyout.common import ContentError from pyout.elements import StyleError from pyout.field import StyleFunctionError from pyout.tests.tabular import Tabular from pyout.tests.terminal import assert_contains_nc from pyout.tests.terminal import capres from pyout.tests.terminal import eq_repr_noclear from pyout.tests.terminal import unicode_cap from pyout.tests.utils import assert_eq_repr class AttrData(object): """Store `kwargs` as attributes. For testing tabular calls to construct row's data from an objects attributes. This doesn't use __getattr__ to map dict keys to attributes because then we'd have to handle a KeyError for the "missing" column tests. 
""" def __init__(self, **kwargs): for attr, value in kwargs.items(): setattr(self, attr, value) def test_tabular_write_color(): out = Tabular(["name"], style={"name": {"color": "green", "width": 3}}) out({"name": "foo"}) expected = capres("green", "foo") + "\n" assert_eq_repr(out.stdout, expected) def test_tabular_write_empty_string(): out = Tabular() out({"name": ""}) assert_eq_repr(out.stdout, "\n") def test_tabular_write_missing_column(): out = Tabular(columns=["name", "status"]) out({"name": "solo"}) assert_eq_repr(out.stdout, "solo\n") def test_tabular_write_missing_column_missing_text(): out = Tabular(columns=["name", "status"], style={"status": {"missing": "-"}}) out({"name": "solo"}) assert_eq_repr(out.stdout, "solo -\n") def test_tabular_write_columns_as_tuple(): out = Tabular(columns=("name", "status"), style={"header_": {}}) out({"name": "foo", "status": "ok"}) lines = out.stdout.splitlines() assert_contains_nc(lines, "name status", "foo ok ") @pytest.mark.parametrize("columns", [False, [], tuple()], ids=["False", "empty list", "empty tuple"]) def test_tabular_write_columns_falsey(columns): out = Tabular(columns=columns, style={"header_": {}}) out(OrderedDict([("name", "foo"), ("status", "ok")])) lines = out.stdout.splitlines() assert_contains_nc(lines, "name status", "foo ok ") def test_tabular_write_list_value(): out = Tabular(columns=["name", "status"]) out({"name": "foo", "status": [0, 1]}) assert_eq_repr(out.stdout, "foo [0, 1]\n") def test_tabular_write_missing_column_missing_object_data(): data = AttrData(name="solo") out = Tabular(columns=["name", "status"], style={"status": {"missing": "-"}}) out(data) assert_eq_repr(out.stdout, "solo -\n") def test_tabular_write_columns_from_orderdict_row(): out = Tabular(style={"name": {"width": 3}, "id": {"width": 3}, "status": {"width": 9}, "path": {"width": 8}}) row = OrderedDict([("name", "foo"), ("id", "001"), ("status", "installed"), ("path", "/tmp/foo")]) out(row) assert_eq_repr(out.stdout, "foo 001 installed /tmp/foo\n") @pytest.mark.parametrize("row", [["foo", "ok"], {"name": "foo", "status": "ok"}], ids=["sequence", "dict"]) def test_tabular_write_columns_orderdict_mapping(row): out = Tabular(OrderedDict([("name", "Long name"), ("status", "Status")]), style={"header_": {}, "name": {"width": 10}, "status": {"width": 6}}) out(row) expected = ("Long name Status\n" "foo ok \n") assert_eq_repr(out.stdout, expected) def test_tabular_write_data_as_list(): out = Tabular(["name", "status"], style={"name": {"width": 3}, "status": {"width": 9}}) out(["foo", "installed"]) out(["bar", "unknown"]) expected = "foo installed\nbar unknown \n" assert_eq_repr(out.stdout, expected) @pytest.mark.parametrize("data_type", ["seq", "obj"]) def test_tabular_write_unknown_column_non_dict(data_type): if data_type == "seq": row = ["a", "unk"] else: row = AttrData(name="a", unk="unk") out = Tabular(columns=["name"]) out(row) assert_eq_repr(out.stdout, "a\n") def test_tabular_write_unknown_column_dict(): out = Tabular(columns=["name"]) out({"name": "a", "unk": "unk"}) assert_eq_repr(out.stdout, "a unk\n") def test_tabular_write_unknown_column_after_first(): out = Tabular(columns=["name"]) out({"name": "a"}) out({"name": "b", "status": "ok"}) lines = out.stdout.splitlines() # First column is updated with appropriate missing value. 
assert_contains_nc(lines, "a ", "b ok") def test_tabular_write_unknown_column_after_first_custom_missing(): out = Tabular(columns=["name"], style={"status": {"missing": "-"}}) out({"name": "a"}) out({"name": "b", "status": "ok"}) lines = out.stdout.splitlines() assert_contains_nc(lines, "a - ", "b ok") def test_tabular_write_unknown_column_header(): out = Tabular(columns=["name"], style={"header_": {}}) out({"name": "a", "status": "ok"}) lines = out.stdout.splitlines() assert_contains_nc(lines, "name status", "a ok ") def test_tabular_width_no_style(): out = Tabular(["name"]) out(["a" * 105]) # The test terminal's width of 100 is used, not the default of 90 set in # elements.py. assert out.stdout == "a" * 97 + "...\n" def test_tabular_width_non_interactive_default(): out = Tabular(["name", "status"], interactive=False) a = "a" * 70 b = "b" * 100 with out: out([a, b]) assert out.stdout == "{} {}\n".format(a, b) def test_tabular_width_non_interactive_width_override(): out = Tabular(["name", "status"], style={"width_": 31, "default_": {"width": {"marker": "…"}}}, interactive=False) with out: out(["a" * 70, "b" * 100]) stdout = out.stdout assert stdout == "{} {}\n".format("a" * 14 + "…", "b" * 14 + "…") def test_tabular_width_non_interactive_col_max(): out = Tabular(["name", "status"], style={"status": {"width": {"max": 20, "marker": "…"}}}, interactive=False) with out: out(["a" * 70, "b" * 100]) stdout = out.stdout assert stdout == "{} {}\n".format("a" * 70, "b" * 19 + "…") def test_tabular_write_header(): out = Tabular(["name", "status"], style={"header_": {}, "name": {"width": 10}, "status": {"width": 10}}) out({"name": "foo", "status": "installed"}) out({"name": "bar", "status": "installed"}) expected = ("name status \n" "foo installed \n" "bar installed \n") assert_eq_repr(out.stdout, expected) def test_tabular_write_data_as_object(): out = Tabular(["name", "status"], style={"name": {"width": 3}, "status": {"width": 9}}) out(AttrData(name="foo", status="installed")) out(AttrData(name="bar", status="unknown")) expected = "foo installed\nbar unknown \n" assert out.stdout == expected def test_tabular_write_different_data_types_same_output(): style = {"header_": {}, "name": {"width": 10}, "status": {"width": 10}} out_list = Tabular(["name", "status"], style=style) out_dict = Tabular(["name", "status"], style=style) out_od = Tabular(style=style) out_list(["foo", "installed"]) out_list(["bar", "installed"]) out_dict({"name": "foo", "status": "installed"}) out_dict({"name": "bar", "status": "installed"}) out_od(OrderedDict([("name", "foo"), ("status", "installed")])) out_od(OrderedDict([("name", "bar"), ("status", "installed")])) assert out_dict.stdout == out_list.stdout assert out_dict.stdout == out_od.stdout def test_tabular_write_header_with_style(): out = Tabular(["name", "status"], style={"header_": {"underline": True}, "name": {"width": 4}, "status": {"width": 9, "color": "green"}}) out({"name": "foo", "status": "installed"}) expected = capres("smul", "name") + " " + \ capres("smul", "status") + " " + "\nfoo " + \ capres("green", "installed") + "\n" assert_eq_repr(out.stdout, expected) def test_tabular_nondefault_separator(): out = Tabular(["name", "status"], style={"header_": {}, "separator_": " | ", "name": {"width": 4}, "status": {"width": 9}}) out({"name": "foo", "status": "installed"}) out({"name": "bar", "status": "installed"}) expected = ("name | status \n" "foo | installed\n" "bar | installed\n") assert_eq_repr(out.stdout, expected) def test_tabular_write_data_as_list_no_columns(): 
out = Tabular(style={"name": {"width": 3}, "status": {"width": 9}}) with pytest.raises(ValueError): out(["foo", "installed"]) def test_tabular_write_style_override(): out = Tabular(["name"], style={"name": {"color": "green", "width": 3}}) out({"name": "foo"}, style={"name": {"color": "black", "width": 3}}) expected = capres("black", "foo") + "\n" assert_eq_repr(out.stdout, expected) def test_tabular_default_style(): out = Tabular(["name", "status"], style={"default_": {"width": 3}}) out({"name": "foo", "status": "OK"}) out({"name": "bar", "status": "OK"}) expected = ("foo OK \n" "bar OK \n") assert out.stdout == expected def test_tabular_write_multicolor(): out = Tabular(["name", "status"], style={"name": {"color": "green", "width": 3}, "status": {"color": "white", "width": 7}}) out({"name": "foo", "status": "unknown"}) expected = capres("green", "foo") + " " + \ capres("white", "unknown") + "\n" assert_eq_repr(out.stdout, expected) def test_tabular_write_all_whitespace_nostyle(): out = Tabular(style={"name": {"color": "green"}}) out({"name": " "}) assert_eq_repr(out.stdout, " \n") def test_tabular_write_style_flanking(): out = Tabular(columns=["name", "status"], style={"status": {"underline": True, "align": "center", "width": 7}, # Use "," to more easily see spaces in fields. "separator_": ","}) out({"name": "foo", "status": "bad"}) # The text is style but not the flanking whitespace. expected = "foo," + " " + capres("smul", "bad") + " \n" assert_eq_repr(out.stdout, expected) def test_tabular_write_style_flanking_newlines(): out = Tabular(columns=["name", "status"]) out({"name": "foo", "status": "a\nb"}) # The flanking regexp should match even if the field has new lines. assert out.stdout.strip() == "foo a\nb" def test_tabular_write_align(): out = Tabular(["name"], style={"name": {"align": "right", "width": 10}}) out({"name": "foo"}) assert_eq_repr(out.stdout, " foo\n") def test_tabular_rewrite(): out = Tabular(["name", "status"], style={"name": {"width": 3}, "status": {"width": 9}}) data = [{"name": "foo", "status": "unknown"}, {"name": "bar", "status": "installed"}] for row in data: out(row) out({"name": "foo", "status": "installed"}) expected = unicode_cap("cuu1") * 2 + unicode_cap("el") + "foo installed" assert_eq_repr(out.stdout.strip().splitlines()[-1], expected) def test_tabular_rewrite_with_header(): out = Tabular(["name", "status"], style={"header_": {}, "status": {"width": 9}}) data = [{"name": "foo", "status": "unknown"}, {"name": "bar", "status": "unknown"}] for row in data: out(row) out({"name": "bar", "status": "installed"}) expected = unicode_cap("cuu1") * 1 + unicode_cap("el") + "bar installed" assert_eq_repr(out.stdout.strip().splitlines()[-1], expected) def test_tabular_rewrite_multi_id(): out = Tabular(["name", "type", "status"], style={"name": {"width": 3}, "type": {"width": 1}, "status": {"width": 9}}) out.ids = ["name", "type"] data = [{"name": "foo", "type": "0", "status": "unknown"}, {"name": "foo", "type": "1", "status": "unknown"}, {"name": "bar", "type": "2", "status": "installed"}] for row in data: out(row) out({"name": "foo", "type": "0", "status": "installed"}) expected = unicode_cap("cuu1") * 3 + unicode_cap("el") + "foo 0 installed" assert_eq_repr(out.stdout.strip().splitlines()[-1], expected) def test_tabular_rewrite_multi_value(): out = Tabular(["name", "type", "status"], style={"name": {"width": 3}, "type": {"width": 1}, "status": {"width": 9}}) data = [{"name": "foo", "type": "0", "status": "unknown"}, {"name": "bar", "type": "1", "status": "unknown"}] 
for row in data: out(row) out({"name": "foo", "status": "installed", "type": "3"}) expected = unicode_cap("cuu1") * 2 + unicode_cap("el") + "foo 3 installed" assert_eq_repr(out.stdout.strip().splitlines()[-1], expected) def test_tabular_rewrite_auto_width(): out = Tabular(["name", "status"], style={"name": {"width": 3}, "status": {"width": "auto"}}) data = [{"name": "foo", "status": "unknown"}, {"name": "bar", "status": "unknown"}, {"name": "baz", "status": "unknown"}] for row in data: out(row) out({"name": "bar", "status": "installed"}) lines = out.stdout.splitlines() assert_contains_nc(lines, "foo unknown ", "baz unknown ") def test_tabular_non_hashable_id_error(): out = Tabular() out.ids = ["status"] with pytest.raises(ContentError): out({"name": "foo", "status": [0, 1]}) def test_tabular_content_get_idkey(): out = Tabular(["first", "last", "status"]) out.ids = ["first", "last"] data = [{"first": "foo", "last": "bert", "status": "ok"}, {"first": "foo", "last": "zoo", "status": "bad"}, {"first": "bar", "last": "t", "status": "unknown"}] for row in data: out(row) for idx, key in enumerate([("foo", "bert"), ("foo", "zoo"), ("bar", "t")]): assert out._content.get_idkey(idx) == key with pytest.raises(IndexError): out._content.get_idkey(4) def test_tabular_write_lookup_color(): out = Tabular(style={"name": {"width": 3}, "status": {"color": {"lookup": {"BAD": "red"}}, "width": 6}}) out(OrderedDict([("name", "foo"), ("status", "OK")])) out(OrderedDict([("name", "bar"), ("status", "BAD")])) expected = "foo " + "OK \n" + \ "bar " + capres("red", "BAD") + " \n" assert_eq_repr(out.stdout, expected) def test_tabular_write_lookup_bold(): out = Tabular(style={"name": {"width": 3}, "status": {"bold": {"lookup": {"BAD": True}}, "width": 6}}) out(OrderedDict([("name", "foo"), ("status", "OK")])) out(OrderedDict([("name", "bar"), ("status", "BAD")])) expected = "foo " + "OK \n" + \ "bar " + capres("bold", "BAD") + " \n" assert_eq_repr(out.stdout, expected) def test_tabular_write_lookup_bold_false(): out = Tabular(style={"name": {"width": 3}, "status": {"bold": {"lookup": {"BAD": False}}, "width": 6}}) out(OrderedDict([("name", "foo"), ("status", "OK")])) out(OrderedDict([("name", "bar"), ("status", "BAD")])) expected = ("foo OK \n" "bar BAD \n") assert_eq_repr(out.stdout, expected) def test_tabular_write_lookup_non_hashable(): out = Tabular(style={"status": {"color": {"lookup": {"BAD": "red"}}}}) out(OrderedDict([("name", "foo"), ("status", [0, 1])])) expected = "foo [0, 1]\n" assert_eq_repr(out.stdout, expected) def test_tabular_write_re_lookup_color(): out = Tabular( style={"name": {"width": 3}, "status": {"color": {"re_lookup": [["good", "green"], ["^bad$", "red"]]}, "width": 12}, "default_": {"re_flags": ["I"]}}) out(OrderedDict([("name", "foo"), ("status", "good")])) out(OrderedDict([("name", "bar"), ("status", "really GOOD")])) out(OrderedDict([("name", "oof"), ("status", "bad")])) out(OrderedDict([("name", "rab"), ("status", "not bad")])) expected = "foo " + capres("green", "good") + " \n" + \ "bar " + capres("green", "really GOOD") + " \n" + \ "oof " + capres("red", "bad") + " \n" + \ "rab not bad \n" assert_eq_repr(out.stdout, expected) def test_tabular_write_re_lookup_bold(): out = Tabular( style={"name": {"width": 3}, "status": {"bold": {"re_lookup": [["^!![XYZ]$", False], ["^!!.$", True]]}, "width": 3}}) out(OrderedDict([("name", "foo"), ("status", "!!Z")])) out(OrderedDict([("name", "bar"), ("status", "!!y")])) expected = "foo !!Z\n" + \ "bar " + capres("bold", "!!y") + "\n" 
assert_eq_repr(out.stdout, expected) def test_tabular_write_intervals_wrong_type(): out = Tabular(style={"name": {"width": 3}, "percent": {"color": {"interval": [[0, 50, "red"], [50, 80, "yellow"], [80, 100, "green"]]}, "width": 8}}) out(OrderedDict([("name", "foo"), ("percent", 88)])) out(OrderedDict([("name", "bar"), ("percent", "notfloat")])) expected = ["foo " + capres("green", "88") + " ", "bar notfloat"] assert_contains_nc(out.stdout.splitlines(), *expected) def test_tabular_write_intervals_color(): out = Tabular(style={"name": {"width": 3}, "percent": {"color": {"interval": [[0, 50, "red"], [50, 80, "yellow"], [80, 100, "green"]]}, "width": 7}}) out(OrderedDict([("name", "foo"), ("percent", 88)])) out(OrderedDict([("name", "bar"), ("percent", 33)])) expected = "foo " + capres("green", "88") + " \n" + \ "bar " + capres("red", "33") + " \n" assert_eq_repr(out.stdout, expected) def test_tabular_write_intervals_color_open_ended(): out = Tabular(style={"name": {"width": 3}, "percent": {"color": {"interval": [[None, 50, "red"], [80, None, "green"]]}, "width": 7}}) out(OrderedDict([("name", "foo"), ("percent", 88)])) out(OrderedDict([("name", "bar"), ("percent", 33)])) expected = "foo " + capres("green", "88") + " \n" + \ "bar " + capres("red", "33") + " \n" assert_eq_repr(out.stdout, expected) def test_tabular_write_intervals_color_catchall_range(): out = Tabular(style={"name": {"width": 3}, "percent": {"color": {"interval": [[None, None, "red"]]}, "width": 7}}) out(OrderedDict([("name", "foo"), ("percent", 88)])) out(OrderedDict([("name", "bar"), ("percent", 33)])) expected = "foo " + capres("red", "88") + " \n" + \ "bar " + capres("red", "33") + " \n" assert_eq_repr(out.stdout, expected) def test_tabular_write_intervals_color_outside_intervals(): out = Tabular(style={"name": {"width": 3}, "percent": {"color": {"interval": [[0, 50, "red"]]}, "width": 7}}) out(OrderedDict([("name", "foo"), ("percent", 88)])) out(OrderedDict([("name", "bar"), ("percent", 33)])) expected = "foo 88 \n" + \ "bar " + capres("red", "33") + " \n" assert_eq_repr(out.stdout, expected) def test_tabular_write_intervals_bold(): out = Tabular(style={"name": {"width": 3}, "percent": {"bold": {"interval": [[30, 50, False], [50, 80, True]]}, "width": 2}}) out(OrderedDict([("name", "foo"), ("percent", 78)])) out(OrderedDict([("name", "bar"), ("percent", 33)])) expected = "foo " + capres("bold", "78") + "\n" + \ "bar 33\n" assert_eq_repr(out.stdout, expected) def test_tabular_write_intervals_missing(): out = Tabular(style={"name": {"width": 3}, "percent": {"bold": {"interval": [[30, 50, False], [50, 80, True]]}, "width": 2}}) out(OrderedDict([("name", "foo"), ("percent", 78)])) # Interval lookup function can handle a missing value. 
out(OrderedDict([("name", "bar")])) expected = "foo " + capres("bold", "78") + "\n" + "bar \n" assert_eq_repr(out.stdout, expected) def test_tabular_write_transform(): out = Tabular(style={"val": {"transform": lambda x: x[::-1]}}) out(OrderedDict([("name", "foo"), ("val", "330")])) out(OrderedDict([("name", "bar"), ("val", "780")])) expected = ("foo 033\n" "bar 087\n") assert_eq_repr(out.stdout, expected) def test_tabular_write_transform_with_header(): out = Tabular(style={"header_": {}, "name": {"width": 4}, "val": {"transform": lambda x: x[::-1]}}) out(OrderedDict([("name", "foo"), ("val", "330")])) out(OrderedDict([("name", "bar"), ("val", "780")])) expected = ("name val\n" "foo 033\n" "bar 087\n") assert_eq_repr(out.stdout, expected) def test_tabular_write_transform_autowidth(): out = Tabular(style={"val": {"transform": lambda x: x * 2}}) out(OrderedDict([("name", "foo"), ("val", "330")])) out(OrderedDict([("name", "bar"), ("val", "7800")])) lines = out.stdout.splitlines() assert_contains_nc(lines, "foo 330330 ", "bar 78007800") def test_tabular_write_transform_on_header(): out = Tabular(style={"header_": {"transform": lambda x: x.upper()}, "name": {"width": 4}, "val": {"width": 3}}) out(OrderedDict([("name", "foo"), ("val", "330")])) out(OrderedDict([("name", "bar"), ("val", "780")])) expected = ("NAME VAL\n" "foo 330\n" "bar 780\n") assert_eq_repr(out.stdout, expected) def test_tabular_write_transform_func_error(): def dontlikeints(x): return x[::-1] out = Tabular(style={"name": {"width": 4}, "val": {"transform": dontlikeints}}) # The transform function receives the data as given, so it fails trying to # index an integer. try: out(OrderedDict([("name", "foo"), ("val", 330)])) except: exc_type, value, tb = sys.exc_info() try: assert isinstance(value, StyleFunctionError) names = [x[2] for x in traceback.extract_tb(tb)] assert "dontlikeints" in names finally: del tb def test_tabular_write_width_truncate_long(): out = Tabular(style={"name": {"width": 8}, "status": {"width": 3}}) out(OrderedDict([("name", "abcdefghijklmnop"), ("status", "OK")])) out(OrderedDict([("name", "bar"), ("status", "BAD")])) expected = ("abcde... 
OK \n" "bar BAD\n") assert out.stdout == expected def test_tabular_write_autowidth(): out = Tabular(style={"name": {"width": "auto"}, "status": {"width": "auto"}, "path": {"width": 6}}) out(OrderedDict([("name", "fooab"), ("status", "OK"), ("path", "/tmp/a")])) out(OrderedDict([("name", "bar"), ("status", "BAD"), ("path", "/tmp/b")])) lines = out.stdout.splitlines() assert_contains_nc(lines, "bar BAD /tmp/b", "fooab OK /tmp/a") def test_tabular_write_autowidth_with_header(): out = Tabular(style={"header_": {}, "name": {"width": "auto"}, "status": {"width": "auto"}}) out(OrderedDict([("name", "foobar"), ("status", "OK")])) out(OrderedDict([("name", "baz"), ("status", "OK")])) lines = out.stdout.splitlines() assert_contains_nc(lines, "name status") def test_tabular_write_autowidth_min(): out = Tabular(style={"name": {"width": "auto"}, "status": {"width": {"min": 5}}, "path": {"width": 6}}) out(OrderedDict([("name", "fooab"), ("status", "OK"), ("path", "/tmp/a")])) out(OrderedDict([("name", "bar"), ("status", "BAD"), ("path", "/tmp/b")])) lines = out.stdout.splitlines() assert_contains_nc(lines, "bar BAD /tmp/b", "fooab OK /tmp/a") @pytest.mark.parametrize("marker", [True, False, "…"], ids=["marker=True", "marker=False", "marker=…"]) def test_tabular_write_autowidth_min_max(marker): out = Tabular(style={"name": {"width": 3}, "status": {"width": {"min": 2, "max": 7}}, "path": {"width": {"max": 5, "marker": marker}}}) out(OrderedDict([("name", "foo"), ("status", "U"), ("path", "/tmp/a")])) if marker is True: assert out.stdout == "foo U /t...\n" elif marker: assert out.stdout == "foo U /tmp…\n" else: assert out.stdout == "foo U /tmp/\n" out(OrderedDict([("name", "bar"), ("status", "BAD!!!!!!!!!!!"), ("path", "/tmp/b")])) lines = out.stdout.splitlines() if marker is True: assert_contains_nc(lines, "foo U /t...", "bar BAD!... /t...") elif marker: assert_contains_nc(lines, "foo U /tmp…", "bar BAD!... /tmp…") else: assert_contains_nc(lines, "foo U /tmp/", "bar BAD!... /tmp/") def test_tabular_write_autowidth_min_max_with_header(): out = Tabular(style={"header_": {}, "name": {"width": 4}, "status": {"width": {"min": 2, "max": 8}}}) out(OrderedDict([("name", "foo"), ("status", "U")])) lines0 = out.stdout.splitlines() assert_contains_nc(lines0, "name status", "foo U ") out(OrderedDict([("name", "bar"), ("status", "BAD!!!!!!!!!!!")])) lines1 = out.stdout.splitlines() assert_contains_nc(lines1, "bar BAD!!...") def test_tabular_write_autowidth_min_frac(): out = Tabular(style={"width_": 12, "name": {"width": {"min": 0.5}}}) out(OrderedDict([("name", "foo"), ("status", "unknown")])) # 0.5 of table width => 6 characters for "foo" assert out.stdout == "foo un...\n" def test_tabular_write_autowidth_max_frac(): out = Tabular(style={"width_": 12, "name": {"width": {"max": 0.5}}}) out(OrderedDict([("name", "foo"), ("status", "ok")])) # 0.5 of table width => 6 characters for "foo", but it only needs 3. assert out.stdout == "foo ok\n" out(OrderedDict([("name", "longerthanmax"), ("status", "ko")])) lines0 = out.stdout.splitlines() # Value over 6 only takes up 6. assert_contains_nc(lines0, "lon... 
ko") def test_tabular_write_fixed_width_frac(): out = Tabular(style={"width_": 20, "name": {"width": 0.4}}) out(OrderedDict([("name", "foo"), ("status", "ok")])) assert out.stdout == "foo ok\n" def test_tabular_write_autowidth_different_data_types_same_output(): out_dict = Tabular(["name", "status"], style={"header_": {}, "name": {"width": 4}, "status": {"width": {"min": 2, "max": 8}}}) out_dict({"name": "foo", "status": "U"}) out_dict({"name": "bar", "status": "BAD!!!!!!!!!!!"}) out_list = Tabular(["name", "status"], style={"header_": {}, "name": {"width": 4}, "status": {"width": {"min": 2, "max": 8}}}) out_list(["foo", "U"]) out_list(["bar", "BAD!!!!!!!!!!!"]) assert out_dict.stdout == out_list.stdout def test_tabular_write_incompatible_width_exception(): out = Tabular(style={"header_": {}, "status": {"width": {"min": 4, "width": 9}}}) with pytest.raises(ValueError): out(OrderedDict([("name", "foo"), ("status", "U")])) def test_tabular_fixed_width_exceeds_total(): out = Tabular(style={"width_": 10, "status": {"width": 20}}) with pytest.raises(StyleError): out(OrderedDict([("name", ""), ("status", "")])) def test_tabular_number_of_columns_exceeds_total_width(): cols = ["a", "b", "c", "d"] out = Tabular(columns=cols, style={"width_": 3}) with pytest.raises(StyleError): out([c + "val" for c in cols]) def test_tabular_auto_width_exceeds_total(): out = Tabular(style={"width_": 13, "default_": {"width": {"marker": "…"}}}) out(OrderedDict([("name", "abcd"), ("status", "efghi"), ("path", "jklm")])) # The values are divided evenly. Subtracting the separators, there are 11 # available spaces. 'status' and 'path' get 4, while 'name' gets the # remaining 3. 'name' is shorted just because the columns are processed in # reverse alphabetical order. assert out.stdout == "ab… efg… jklm\n" def test_tabular_auto_width_exceeds_total_multiline(): out = Tabular(style={"width_": 15}) out(OrderedDict([("name", "abcd"), ("status", "efg"), ("path", "t/")])) assert out.stdout == "abcd efg t/\n" # name gets truncated due to predictable but arbitrary reverse alphabetical # sorting when assigning widths. out(OrderedDict([("name", "mooost"), ("status", "metoo"), ("path", "here")])) lines0 = out.stdout.splitlines() assert_contains_nc(lines0, "m... metoo here") out(OrderedDict([("name", "hi"), ("status", "jk"), ("path", "lm")])) lines1 = out.stdout.splitlines() assert_contains_nc(lines1, "hi jk lm ") out(OrderedDict([("name", "mnopqrs"), ("status", "tu"), ("path", "vwxyz")])) lines1 = out.stdout.splitlines() assert_contains_nc(lines1, "m... tu v...") @pytest.mark.parametrize("mode", ["update", "incremental"]) def test_tabular_width_change(mode): out = Tabular(mode=mode) out.change_term_width(10) out(OrderedDict([("name", "a"), ("path", "x" * 20)])) assert out.stdout.strip() == "a xxxxx..." # Mimic interactive change in width. 
out.change_term_width(22) out(OrderedDict([("name", "b"), ("path", "y" * 21)])) lines0 = out.stdout.splitlines() assert_contains_nc(lines0, "a xxxxxxxxxxxxxxxxxxxx") assert_contains_nc(lines0, "b yyyyyyyyyyyyyyyyy...") out.change_term_width(7) out(OrderedDict([("name", "b"), ("path", "z" * 21)])) lines1 = out.stdout.splitlines() assert_contains_nc(lines1, "a xx...") assert_contains_nc(lines1, "b zz...") def test_tabular_width_change_mode_final(): out = Tabular(["name", "status"], mode="final") out.change_term_width(10) with out: out(OrderedDict([("name", "a"), ("path", "x" * 20)])) assert out.stdout.strip() == "a xxxxxxxxxxxxxxxxxxxx" class Delayed(object): """Helper for producing a delayed callable. """ def __init__(self, value): self.value = value self.now = False def run(self): """Return `value` once `now` is true. """ t0 = time.time() while True: if time.time() - t0 > 10: sys.stderr.write(f"Testing helper Delayed({self.value}) was never properly terminated") traceback.print_stack() raise RuntimeError("Timeout") if self.now: value = self.value if callable(value): value = value() return value def gen(self): value = self.run() yield value @pytest.mark.timeout(10) def test_tabular_height_change(): delay0 = Delayed("A") out = Tabular(["name", "status"], wait_for_top=0, style={"name": {"width": 3}, "status": {"width": 3}}) out.change_term_height(3) # Even after the first query... out._stream.height with out: out({"name": "a", "status": (".", delay0.run)}) out({"name": "b", "status": "B"}) out({"name": "c", "status": "C"}) out({"name": "d", "status": "D"}) # ... a change in height is detected. out.change_term_height(5) delay0.now = True lines = out.stdout.splitlines() assert_contains_nc(lines, "a A ") # With a height of 5, a line-based update for A was done, so there isn't a # repeated value for other lines. assert sum(ln == "d D " for ln in lines) == 1 @pytest.mark.timeout(10) def test_tabular_write_callable_values(): delay0 = Delayed("done") delay1 = Delayed("over") with Tabular(["name", "status"]) as out: out({"name": "foo", "status": ("thinking", delay0.run)}) out({"name": "bar", "status": "ok"}) # A single callable can be passed rather than (initial_value, fn). out({"name": "baz", "status": delay1.run}) expected = ("foo thinking\n" "bar ok \n" "baz \n") assert_eq_repr(out.stdout, expected) delay0.now = True delay1.now = True lines = out.stdout.splitlines() assert_contains_nc(lines, "foo done ", "baz over ") @pytest.mark.timeout(10) def test_tabular_write_callable_transform_nothing(): delay0 = Delayed(3) out = Tabular(["name", "status"], style={"status": {"transform": lambda n: n + 2}}) with out: # The unspecified initial value is set to Nothing(). The transform # function above, which is designed to take a number, won't be called # with it. out({"name": "foo", "status": delay0.run}) assert_eq_repr(out.stdout, "foo\n") delay0.now = True lines = out.stdout.splitlines() assert_contains_nc(lines, "foo 5") @pytest.mark.timeout(10) def test_tabular_write_callable_re_lookup_non_string(): delay0 = Delayed(3) delay1 = Delayed("4") out = Tabular(["name", "status"], style={"status": {"color": {"re_lookup": [["[0-9]", "green"]]}}}) with out: out({"name": "foo", "status": delay0.run}) out({"name": "bar", "status": delay1.run}) delay0.now = True delay1.now = True lines = out.stdout.splitlines() # 3 was passed in as a number, so re_lookup ignores it assert_contains_nc(lines, "foo 3") # ... but it matches "4". 
assert_contains_nc(lines, "bar " + capres("green", "4")) @pytest.mark.timeout(10) def test_tabular_write_callable_values_multi_return(): delay = Delayed({"status": "done", "path": "/tmp/a"}) out = Tabular(["name", "status", "path"]) with out: out({"name": "foo", ("status", "path"): ("...", delay.run)}) out({"name": "bar", "status": "ok", "path": "na"}) expected = ("foo ... ...\n" "bar ok na \n") assert_eq_repr(out.stdout, expected) delay.now = True lines = out.stdout.splitlines() assert_contains_nc(lines, "foo done /tmp/a") @pytest.mark.timeout(10) def test_tabular_write_callable_unknown_column(): delay = Delayed({"status": "done", "unk": "unkval"}) out = Tabular(["name", "status"]) with out: out({"name": "foo", "status": delay.run}) delay.now = True assert_contains_nc(out.stdout.splitlines(), "foo done unkval") @pytest.mark.timeout(10) def test_tabular_write_callable_unknown_column_multikey(): delay = Delayed({"status": "done", "unk": "unk_value"}) out = Tabular(["name", "status"]) with out: out({"name": "foo", ("status", "unk"): delay.run}) delay.now = True assert_contains_nc(out.stdout.splitlines(), "foo done unk_value") @pytest.mark.timeout(10) def test_tabular_write_callable_only_unknown_columns_multikey(): delay = Delayed(("unk_value0", "unk_value1")) out = Tabular(["name", "status"]) with out: out({"name": "foo", ("unk0", "unk1"): delay.run}) delay.now = True assert_contains_nc(out.stdout.splitlines(), "foo unk_value0 unk_value1") @pytest.mark.timeout(10) def test_tabular_write_callable_sneaky_unknown_column(): delay = Delayed({"status": "ok", "unk": "unk_value"}) out = Tabular(["name", "status"]) with out: out({"name": "foo", "status": delay.run}) delay.now = True assert_contains_nc(out.stdout.splitlines(), "foo ok unk_value") @pytest.mark.timeout(10) def test_tabular_write_callable_returns_only_unknown(): delay = Delayed({"unk": "unk_value"}) out = Tabular(["name", "status"]) with out: out({"name": "foo", "status": delay.run}) delay.now = True assert_contains_nc(out.stdout.splitlines(), "foo unk_value") @pytest.mark.timeout(10) @pytest.mark.parametrize("nrows", [20, 21]) def test_tabular_callback_to_offscreen_row(nrows): delay = Delayed("OK") out = Tabular(style={"status": {"aggregate": len}}, wait_for_top=0) with out: for i in range(1, nrows + 1): status = delay.run if i == 3 else "s{:02d}".format(i) out(OrderedDict([("name", "foo{:02d}".format(i)), ("status", status)])) delay.now = True lines = out.stdout.splitlines() # The test terminal height is 20. The summary line takes up one # line and the current line takes up another, so we have 18 # available rows. Row 3 goes off the screen when we have 21 rows. if nrows > 20: # No leading escape codes because it was part of a whole repaint. 
nexpected_plain = 1 nexpected_updated = 0 else: nexpected_plain = 0 nexpected_updated = 1 assert len([ln for ln in lines if ln == "foo03 OK "]) == nexpected_plain cuu1 = unicode_cap("cuu1") updated = [l for l in lines if l.startswith(cuu1) and "foo03 OK " in l] assert len(updated) == nexpected_updated @pytest.mark.timeout(10) @pytest.mark.parametrize("header", [True, False], ids=["header", "no header"]) def test_tabular_callback_wait_for_top(header): delay_fns = {0: Delayed("v0"), 4: Delayed("v4"), 28: Delayed("v20")} style = {"header_": {}} if header else {} idxs = [] def run_tabular(): with Tabular(wait_for_top=2, style=style) as out: for i in range(40): if i in delay_fns: status = delay_fns[i].run else: status = "s{:02d}".format(i) out(OrderedDict([("name", "foo{:02d}".format(i)), ("status", status)])) idxs.append(i) # The wait_for_top functionality involves Tabular.__call__() blocking us, # so we need to test it in another thread. thread = threading.Thread(target=run_tabular) thread.daemon = True thread.start() def wait_then_check(idx_expected): wait = 0 while not idxs or idxs[-1] < idx_expected: time.sleep(0.1) wait += 0.1 # We've encountered the index we expected in `wait` seconds. A # conservative check that we're not going to see another row is to wait # that many seconds to make sure we're in the same spot. time.sleep(wait) assert idxs[-1] == idx_expected # None of the workers have returned, including the one in the first row. # So with a height of 20 and 1 row for the cursor, we to wait at 19 rows # (an index of 18). If there's a header, we can accommodate one fewer. wait_then_check(18 - header) delay_fns[0].now = True delay_fns[28].now = True # We've released the worker for the 1st row and the 28th, but the one in # the 5th is still going. We set wait_for_top=2, so we advance to having # the 4th row at the top. wait_then_check(21) delay_fns[4].now = True # We've released the final worker from the 4th row. The last row comes in. wait_then_check(39) thread.join() @pytest.mark.timeout(10) @pytest.mark.parametrize("result", [{"status": "done", "path": "/tmp/a"}, ("done", "/tmp/a")], ids=["result=tuple", "result=dict"]) def test_tabular_write_callable_values_multicol_key_infer_column(result): delay = Delayed(result) out = Tabular() with out: out(OrderedDict([("name", "foo"), (("status", "path"), ("...", delay.run))])) out(OrderedDict([("name", "bar"), ("status", "ok"), ("path", "na")])) expected = ("foo ... 
...\n" "bar ok na \n") assert_eq_repr(out.stdout, expected) delay.now = True lines = out.stdout.splitlines() assert_contains_nc(lines, "foo done /tmp/a") @pytest.mark.timeout(10) @pytest.mark.parametrize("kind", ["function", "generator"]) @pytest.mark.parametrize("should_continue", [True, False]) def test_tabular_callback_exception_within(kind, should_continue): if kind == "generator": def fail(msg): def fn(): yield "ok" raise TypeError(msg) return fn else: def fail(msg): def fn(): raise TypeError(msg) return fn out = Tabular(max_workers=2, continue_on_failure=should_continue) rows = [OrderedDict([("name", "foo"), ("status", fail("foofail"))]), OrderedDict([("name", "bar"), ("status", fail("barfail"))]), OrderedDict([("name", "baz"), ("status", lambda: "only-if-continue")])] if should_continue: with out: for row in rows: out(row) stdout = out.stdout assert "only-if-continue" in stdout assert "foofail" in stdout assert "barfail" in stdout else: with pytest.raises(TypeError): with out: for row in rows: out(row) stdout = out.stdout assert "only-if-continue" not in stdout if should_continue: assert "only-if-continue" in stdout else: assert "only-if-continue" not in stdout # Regardless of the `continue_on_failure`, any value the generator yields # before failing will make it through. if kind == "generator": assert "foo ok" in stdout else: assert "foo ok" not in stdout @pytest.mark.timeout(10) def test_tabular_write_callable_cancel_on_exception(): def fail(): raise TypeError("wrong") delay_fail = Delayed(fail) delay = Delayed("ok") out = Tabular(["name", "status"], max_workers=1, continue_on_failure=False) with pytest.raises(TypeError): with out: out({"name": "foo", "status": delay_fail.run}) out({"name": "bar", "status": delay.run}) delay_fail.now = True assert out.stdout.splitlines()[:2] == ["foo", "bar"] @pytest.mark.timeout(10) def test_tabular_write_callable_kb_interrupt_in_exit(): delay0 = Delayed("v0") delay1 = Delayed("v1") out = Tabular(max_workers=1) def run_tabular(): with out: out(OrderedDict([("name", "foo"), ("status", delay0.run)])) out(OrderedDict([("name", "bar"), ("status", delay1.run)])) # Hold up until output from the first callable has been # written. while "v0" not in out.stdout: time.sleep(0.1) raise KeyboardInterrupt thread = threading.Thread(target=run_tabular) thread.daemon = True thread.start() delay0.now = True thread.join() stdout = out.stdout assert "KeyboardInterrupt" in stdout assert_contains_nc(stdout.splitlines(), "foo v0", "bar ") delay1.now = True @pytest.mark.timeout(10) def test_tabular_write_callable_kb_interrupt_during_wait(): delay0 = Delayed("v0") delay1 = Delayed("v1") out = Tabular(max_workers=1) def run_tabular(): def raise_kbint(): # Hold up until output from the callables has been # written. 
while True: stdout = out.stdout if "v0" in stdout and "v1" in stdout: break time.sleep(0.1) raise KeyboardInterrupt out.wait = raise_kbint with out: out(OrderedDict([("name", "foo"), ("status", delay0.run)])) out(OrderedDict([("name", "bar"), ("status", delay1.run)])) thread = threading.Thread(target=run_tabular) thread.daemon = True thread.start() delay0.now = True delay1.now = True thread.join() stdout = out.stdout assert_contains_nc(stdout.splitlines(), "foo v0", "bar ") assert "Keyboard interrupt" in stdout @pytest.mark.timeout(10) @pytest.mark.parametrize("kind", ["function", "generator"]) def test_tabular_callback_bad_value(caplog, kind): caplog.set_level(logging.ERROR) delay = Delayed("atom") out = Tabular() row = OrderedDict( [("name", "foo"), (("status", "path"), getattr(delay, "run" if kind == "function" else "gen"))]) with out: out(row) delay.now = True # Note that there is an unfortunate discrepancy between a regular function # and a generator value. With a regular function, the write happens in the # callback, where concurrent.futures catches it and calls # logging.exception(). With a generator value, the write happens as part # of the main asynchronous function, so it is processed like any other # error in the asynchronous function. assert "got 'atom'" in (caplog.text if kind == "function" else out.stdout) @pytest.mark.timeout(10) def test_tabular_cancel_in_exit(): delay_0 = Delayed("v0") delay_1 = Delayed("v1") delay_2 = Delayed("v2") out = Tabular(columns=["name", "status"], max_workers=1) rows = [{"name": "foo", "status": delay_0.run}, {"name": "bar", "status": delay_1.run}, {"name": "baz", "status": delay_2.run}] try: with out: for row in rows: out(row) if row["name"] == "foo": delay_0.now = True while "v0" not in out.stdout: time.sleep(0.01) raise TypeError("oh no") except TypeError: # delay_1 is running and must complete.
delay_1.now = True stdout = out.stdout assert "oh no" in stdout assert "v0" in stdout assert "v1" not in stdout assert "v2" not in stdout delay_2.now = True @pytest.mark.timeout(10) def test_tabular_exc_in_exit_no_async(): out = Tabular(columns=["name", "status"]) rows = [{"name": "foo", "status": "a"}, {"name": "bar", "status": "b"}, {"name": "baz", "status": "c"}] try: with out: for row in rows: out(row) raise TypeError("oh no") except TypeError: expected = ["foo a", "bar b", "baz c"] assert out.stdout.splitlines() == expected @pytest.mark.timeout(10) def test_tabular_pool_shutdown(): delay_0 = Delayed("v0") delay_1 = Delayed("v1") delay_2 = Delayed("v2") out = Tabular(columns=["name", "status"], max_workers=1) with out: out({"name": "foo", "status": delay_0.run}) out({"name": "bar", "status": delay_1.run}) delay_0.now = True delay_1.now = True out._pool.shutdown(wait=False) with pytest.raises(RuntimeError): out({"name": "baz", "status": delay_2.run}) def delayed_gen_func(*values): if not values: values = ["update", "finished"] def fn(): for val in values: time.sleep(0.05) yield val return fn @pytest.mark.timeout(10) @pytest.mark.parametrize("gen_source", [delayed_gen_func(), delayed_gen_func()()], ids=["gen_func", "generator"]) def test_tabular_write_generator_function_values(gen_source): with Tabular(["name", "status"]) as out: out({"name": "foo", "status": ("waiting", gen_source)}) out({"name": "bar", "status": "ok"}) expected = ("foo waiting\n" "bar ok \n") assert_eq_repr(out.stdout, expected) lines = out.stdout.splitlines() assert_contains_nc(lines, "foo update ", "foo finished", "bar ok ") @pytest.mark.timeout(10) def test_tabular_write_generator_values_multireturn(): gen = delayed_gen_func({"status": "working"}, # for one of two columns {"path": "/tmp/a"}, # for the other of two columns {"path": "/tmp/b", # for both columns "status": "done"}) out = Tabular() with out: out(OrderedDict([("name", "foo"), (("status", "path"), ("...", gen))])) out(OrderedDict([("name", "bar"), ("status", "ok"), ("path", "na")])) expected = ("foo ... ...\n" "bar ok na \n") assert_eq_repr(out.stdout, expected) lines = out.stdout.splitlines() assert_contains_nc(lines, "foo working ...", "foo working /tmp/a", "foo done /tmp/b") def test_tabular_write_wait_noop_if_nothreads(): with Tabular(["name", "status"]) as out: out({"name": "foo", "status": "done"}) out({"name": "bar", "status": "ok"}) expected = ("foo done\n" "bar ok \n") assert_eq_repr(out.stdout, expected) @pytest.mark.timeout(10) @pytest.mark.parametrize("form", ["dict", "list", "attrs"]) def test_tabular_write_delayed(form): data = OrderedDict([("name", "foo"), ("paired0", 1), ("paired1", 2), ("solo", 3)]) if form == "dict": row = data elif form == "list": row = list(data.values()) elif form == "attrs": row = AttrData(**data) out = Tabular(list(data.keys()), style={"paired0": {"delayed": "pair"}, "paired1": {"delayed": "pair"}, "solo": {"delayed": True}}) with out: out(row) lines = out.stdout.splitlines() assert lines[0] == "foo" # Either paired0/paired1 came in first or solo came in first, but # paired0/paired1 should arrive together. 
firstin = [ln for ln in lines if eq_repr_noclear(ln, "foo 1 2") or eq_repr_noclear(ln, "foo 3")] assert len(firstin) == 1 assert eq_repr_noclear(lines[-1], "foo 1 2 3") @pytest.mark.timeout(10) def test_tabular_write_inspect_with_getitem(): delay0 = Delayed("done") out = Tabular(["name", "status"]) with out: out({"name": "foo", "status": ("thinking", delay0.run)}) delay0.now = True out[("foo",)] == {"name": "foo", "status": "done"} with pytest.raises(KeyError): out[("nothere",)] def test_tabular_hidden_column(): out = Tabular(["name"], style={"name": {"hide": True, "aggregate": len}}) out({"name": "foo"}) assert out.stdout.strip() == "" def test_tabular_hidden_if_missing_column(): out = Tabular(["name", "status", "letter"], style={"header_": {}, "name": {"aggregate": lambda _: "X"}, "status": {"hide": "if_missing", "aggregate": len}}) out({"name": "foo", "letter": "a"}) expected = ["name letter", "foo a ", "X "] assert out.stdout.splitlines() == expected out({"name": "bar", "status": "ok", "letter": "b"}) lines1 = out.stdout.splitlines() assert_contains_nc(lines1, "bar ok b ") assert_contains_nc(lines1, "X 1 ") def test_tabular_hidden_col_takes_back_auto_space(): out = Tabular(["name", "status", "letter"], style={"width_": 10, "default_": {"width": {"marker": "…"}}, "status": {"hide": "if_missing"}}) out({"name": "foo", "letter": "abcdefg"}) assert out.stdout.splitlines() == ["foo abcde…"] out({"name": "foo", "status": "ok"}) assert_contains_nc(out.stdout.splitlines(), "foo ok ab…") def test_tabular_summary(): def nbad(xs): return "{:d} failed".format(sum("BAD" == x for x in xs)) out = Tabular(style={"header_": {}, "status": {"aggregate": nbad}, "num": {"aggregate": sum}}) out(OrderedDict([("name", "foo"), ("status", "BAD"), ("num", 2)])) out(OrderedDict([("name", "bar"), ("status", "BAD"), ("num", 3)])) out(OrderedDict([("name", "baz"), ("status", "BAD"), ("num", 4)])) # Update "foo". out(OrderedDict([("name", "foo"), ("status", "OK"), ("num", 10)])) lines = out.stdout.splitlines() assert_contains_nc(lines, " 1 failed 2 ", " 2 failed 5 ", " 3 failed 9 ", " 2 failed 17 ") def test_tabular_shrinking_summary(): def counts(values): cnt = Counter(values) return ["{}: {:d}".format(k, cnt[k]) for k in sorted(cnt.keys())] out = Tabular(["name", "status"], style={"status": {"aggregate": counts}}) out({"name": "foo", "status": "unknown"}) out({"name": "bar", "status": "ok"}) # Remove the only occurrence of "unknown". out({"name": "foo", "status": "ok"}) lines = out.stdout.splitlines() # Two summary lines shrank to one, so we expect two move-ups and a clear. expected = unicode_cap("cuu1") * 2 + unicode_cap("ed") assert len([ln for ln in lines if ln.startswith(expected)]) == 1 def test_tabular_summary_avoid_repeated_clear(): out = Tabular(style={"name": {"aggregate": len}}) out(OrderedDict([("name", "foo")])) out(OrderedDict([("name", "bar"), ("new", "col")])) # The new column requires a rewrite, but the summary has already been # removed, so that shouldn't lead to an additional move-up and clear. 
lines = out.stdout.splitlines() assert_eq_repr(lines[-3], # -1: summary, -2: bar unicode_cap("cuu1") + unicode_cap("ed") + unicode_cap("cuu1") + "foo ") def test_tabular_mode_invalid(): with pytest.raises(ValueError): Tabular(["name", "status"], mode="unknown") def test_tabular_mode_default(): data = [OrderedDict([("name", "foo"), ("status", "OK")]), OrderedDict([("name", "bar"), ("status", "BAD")])] out0 = Tabular() with out0: for row in data: out0(row) out1 = Tabular(mode="update") with out1: for row in data: out1(row) assert out0.stdout == out1.stdout def test_tabular_mode_update_noninteractive(): out = Tabular(["name", "status"], interactive=False) assert out._mode == "final" def test_tabular_mode_incremental(): out = Tabular(["name", "status"], style={"status": {"aggregate": len}}, mode="incremental") with out: out({"name": "foo", "status": "ok"}) out({"name": "foo", "status": "ko"}) out({"name": "bar", "status": "unknown"}) assert "unknown" in out.stdout lines = out.stdout.splitlines() # Expect 5 lines: first two foos, then a whole repaint (2 lines) due to the # bar, and then one summary lines. assert len(lines) == 5 def test_tabular_mode_final(): out = Tabular(["name", "status"], mode="final") with out: out({"name": "foo", "status": "unknown"}) out({"name": "bar", "status": "ok"}) out({"name": "foo", "status": "ok"}) assert "unknown" not in out.stdout assert len(out.stdout.splitlines()) == 2 def test_tabular_mode_final_summary(): out = Tabular(["name", "status"], style={"status": {"aggregate": len}}, mode="final") with out: out({"name": "foo", "status": "unknown"}) out({"name": "bar", "status": "ok"}) out({"name": "foo", "status": "ok"}) assert "unknown" not in out.stdout lines = out.stdout.splitlines() # Expect three lines, two regular rows and one summary. assert len(lines) == 3 @pytest.mark.parametrize("clear", [True, False]) def test_tabular_outside_write(clear): out = Tabular(["name", "status"], style={"header_": {}, "status": {"aggregate": len}}) with out: out({"name": "foo", "status": "unknown"}) with out.outside_write(clear=clear): out._stream.write("outside 1\n") out._stream.write("outside 2\n") out({"name": "bar", "status": "ok"}) lines = out.stdout.splitlines() # 3 lines for the initial header, foo, and status, 2 lines from the user, 3 # for the refresh, and 2 lines for the bar write (1 for bar and 1 for the # status). assert len(lines) == 10 if clear: # "outside 1" is in the output, but not as a pure line due to the # clear=True. 
assert "outside 1" in out.stdout assert "outside 1" not in lines else: assert "outside 1" in lines assert "outside 2" in lines assert_contains_nc(lines[-4:], "foo unknown", "bar ok ", " 2 ") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/test_tabular_dummy.py0000644000175100001660000000240714756362760021126 0ustar00runnerdocker# -*- coding: utf-8 -*- from io import StringIO import pytest from pyout.tabular_dummy import NoUpdateTerminalStream from pyout.tabular_dummy import Tabular def test_stream_update_fail(): stream = NoUpdateTerminalStream() with pytest.raises(NotImplementedError): stream.clear_last_lines(3) def test_stream_update_hardcodes_height_width(): stream = NoUpdateTerminalStream() assert stream.width == 80 assert stream.height == 24 def test_tabular_basic(): out = Tabular(["name", "status"], stream=StringIO(), interactive=True, style={"name": {"color": "green", "width": {"marker": "…", "max": 4}, "transform": lambda x: x.upper()}}) with out: out({"name": "foo", "status": "fine"}) out({"name": "barbecue", "status": "dandy"}) # The default mode for tabular_dummy.Tabular is "incremental". assert out._stream.stream.getvalue() == ("FOO fine\n" "FOO fine \n" "BAR… dandy\n") def test_tabular_update_mode_disallowed(): with pytest.raises(ValueError): Tabular(["name", "status"], mode="update") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/test_tests_utils.py0000644000175100001660000000144014756362760020637 0ustar00runnerdockerimport pytest from pyout.tests.utils import assert_contains def test_assert_contains_empty(): with pytest.raises(AssertionError): assert_contains([], "aa") def test_assert_contains_doesnt(): with pytest.raises(AssertionError): assert_contains(["b"], "aa") def test_assert_contains_count(): with pytest.raises(AssertionError): assert_contains(["aa", "b"], "aa", count=2) assert_contains(["aa", "b", "aa"], "aa", count=2) assert_contains(["b"], "aa", count=0) def test_assert_contains_cmp(): with pytest.raises(AssertionError): assert_contains(["aa", "b"], "aa", cmp=lambda x, y: False) with pytest.raises(AssertionError): assert_contains(["a", "b"], "aa") assert_contains(["a", "b"], "aa", cmp=lambda x, y: x[0] == y[0]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/test_truncate.py0000644000175100001660000000421314756362760020103 0ustar00runnerdocker# -*- coding: utf-8 -*- import pytest from pyout.truncate import _splice as splice from pyout.truncate import Truncater def test_splice_non_positive(): with pytest.raises(ValueError): assert splice("", 0) def test_splice(): assert splice("", 10) == ("", "") assert splice("abc", 10) == ("a", "bc") assert splice("abc", 3) == ("a", "bc") assert splice("abcefg", 3) == ("a", "fg") def test_truncate_mark_true(): fn = Truncater(7, marker=True).truncate assert fn(None, "abc") == "abc" assert fn(None, "abcdefg") == "abcdefg" assert fn(None, "abcdefgh") == "abcd..." 
@pytest.mark.parametrize("where", ["left", "center", "right"]) def test_truncate_mark_string(where): fn = Truncater(7, marker="…", where=where).truncate assert fn(None, "abc") == "abc" assert fn(None, "abcdefg") == "abcdefg" expected = {"left": "…cdefgh", "center": "abc…fgh", "right": "abcdef…"} assert fn(None, "abcdefgh") == expected[where] @pytest.mark.parametrize("where", ["left", "center", "right"]) def test_truncate_mark_even(where): # Test out a marker with an even number of characters, mostly to get the # "center" style on seven characters to be uneven. fn = Truncater(7, marker="..", where=where).truncate expected = {"left": "..defgh", "center": "ab..fgh", "right": "abcde.."} assert fn(None, "abcdefgh") == expected[where] @pytest.mark.parametrize("where", ["left", "center", "right"]) def test_truncate_mark_short(where): fn = Truncater(2, marker=True, where=where).truncate assert fn(None, "abc") == ".." @pytest.mark.parametrize("where", ["left", "center", "right"]) def test_truncate_nomark(where): fn = Truncater(7, marker=False, where=where).truncate assert fn(None, "abc") == "abc" assert fn(None, "abcdefg") == "abcdefg" expected = {"left": "bcdefgh", "center": "abcefgh", "right": "abcdefg"} assert fn(None, "abcdefgh") == expected[where] def test_truncate_unknown_where(): with pytest.raises(ValueError): Truncater(7, marker=False, where="dunno") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/tests/utils.py0000644000175100001660000000201514756362760016355 0ustar00runnerdockerfrom operator import eq def assert_contains(collection, *items, **kwargs): """Check that each item in `items` is in `collection`. Parameters ---------- collection : list *items : list Query items. count : int, optional The number of times each item should occur in `collection`. cmp : callable, optional Function to compare equality with. It will be called with the element from `items` as the first argument and the element from `collection` as the second. Raises ------ AssertionError if any item does not occur `count` times in `collection`. """ count = kwargs.pop("count", 1) cmp = kwargs.pop("cmp", eq) for item in items: if not len([x for x in collection if cmp(x, item)]) == count: raise AssertionError("{!r} (x{}) not in {!r}".format( item, count, collection)) def assert_eq_repr(a, b): """Compare the repr's of `a` and `b` to escape escape codes. """ assert repr(a) == repr(b) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyout/truncate.py0000644000175100001660000000573014756362760015707 0ustar00runnerdocker"""Processor for field value truncation. """ def _truncate_right(value, length, marker): if len(value) <= length: short = value elif marker: nchars_free = length - len(marker) if nchars_free > 0: short = value[:nchars_free] + marker else: short = marker[:length] else: short = value[:length] return short def _truncate_left(value, length, marker): res = _truncate_right(value[::-1], length, marker[::-1] if marker else None) return res[::-1] def _splice(value, n): """Splice `value` at its center, retaining a total of `n` characters. Parameters ---------- value : str n : int The total length of the returned ends will not be greater than this value. Characters will be dropped from the center to reach this limit. Returns ------- A tuple of str: (head, tail). 
""" if n <= 0: raise ValueError("n must be positive") value_len = len(value) center = value_len // 2 left, right = value[:center], value[center:] if n >= value_len: return left, right n_todrop = value_len - n right_idx = n_todrop // 2 left_idx = right_idx + n_todrop % 2 return left[:-left_idx], right[right_idx:] def _truncate_center(value, length, marker): value_len = len(value) if value_len <= length: return value if marker: marker_len = len(marker) if marker_len < length: left, right = _splice(value, length - marker_len) parts = left, marker, right else: parts = _splice(marker, length) else: parts = _splice(value, length) return "".join(parts) class Truncater(object): """A processor that truncates the result to a given length. Note: You probably want to place the `truncate` method at the beginning of the processor list so that the truncation is based on the length of the original value. Parameters ---------- length : int Truncate the string to this length. marker : str or bool, optional Indicate truncation with this string. If True, indicate truncation by replacing the last three characters of a truncated string with '...'. If False, no truncation marker is added to a truncated string. where : {'left', 'center', 'right'}, optional Where to truncate the result. """ def __init__(self, length, marker=True, where="right"): self.length = length self.marker = "..." if marker is True else marker truncate_fns = {"left": _truncate_left, "center": _truncate_center, "right": _truncate_right} try: self._truncate_fn = truncate_fns[where] except KeyError: raise ValueError("Unrecognized `where` value: {}".format(where)) def truncate(self, _, result): return self._truncate_fn(result, self.length, self.marker) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1740236285.9275966 pyout-0.8.1/pyout.egg-info/0000755000175100001660000000000014756362776015204 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236285.0 pyout-0.8.1/pyout.egg-info/PKG-INFO0000644000175100001660000000560714756362775016310 0ustar00runnerdockerMetadata-Version: 2.2 Name: pyout Version: 0.8.1 Summary: Terminal styling for tabular data Home-page: https://github.com/pyout/pyout.git Author: Kyle Meyer Author-email: kyle@kyleam.com License: MIT Keywords: terminal,tty,console,formatting,style,color Classifier: Intended Audience :: Developers Classifier: Natural Language :: English Classifier: Environment :: Console Classifier: Development Status :: 1 - Planning Classifier: Topic :: Utilities Classifier: Operating System :: POSIX Classifier: Programming Language :: Python :: 3 Classifier: License :: OSI Approved :: MIT License Classifier: Topic :: Software Development :: Libraries Classifier: Topic :: Software Development :: User Interfaces Classifier: Topic :: Terminals Requires-Python: >=3.7 License-File: COPYING Requires-Dist: blessed; sys_platform != "win32" Requires-Dist: jsonschema>=3.0.0 Dynamic: author Dynamic: author-email Dynamic: classifier Dynamic: description Dynamic: home-page Dynamic: keywords Dynamic: license Dynamic: requires-dist Dynamic: requires-python Dynamic: summary =========================================== pyout: Terminal styling for structured data =========================================== .. image:: https://travis-ci.org/pyout/pyout.svg?branch=master :target: https://travis-ci.org/pyout/pyout .. 
image:: https://codecov.io/github/pyout/pyout/coverage.svg?branch=master :target: https://codecov.io/github/pyout/pyout?branch=master .. image:: https://img.shields.io/badge/License-MIT-yellow.svg :target: https://opensource.org/licenses/MIT ``pyout`` is a Python package that defines an interface for writing structured records as a table in a terminal. It is being developed to replace custom code for displaying tabular data in in ReproMan_ and DataLad_. See the Examples_ folder for how to get started. A primary goal of the interface is the separation of content from style and presentation. Current capabilities include - automatic width adjustment and updating of previous values - styling based on a field value or specified interval - defining a transform function that maps a raw value to the displayed value - defining a summary function that generates a summary of a column (e.g., value totals) - support for delayed, asynchronous values that are added to the table as they come in Status ====== This package is currently in early stages of development. While it should be usable in its current form, it may change in substantial ways that break backward compatibility, and many aspects currently lack polish and documentation. ``pyout`` requires Python 3 (>= 3.7). It is developed and tested in GNU/Linux environments and is expected to work in macOS environments as well. There is currently very limited Windows support. License ======= ``pyout`` is under the MIT License. See the COPYING file. .. _DataLad: https://datalad.org .. _ReproMan: http://reproman.repronim.org .. _Examples: examples/ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236285.0 pyout-0.8.1/pyout.egg-info/SOURCES.txt0000644000175100001660000000131514756362775017067 0ustar00runnerdockerCOPYING README.rst pyproject.toml setup.py pyout/__init__.py pyout/_version.py pyout/common.py pyout/elements.py pyout/field.py pyout/interface.py pyout/summary.py pyout/tabular.py pyout/tabular_dummy.py pyout/truncate.py pyout.egg-info/PKG-INFO pyout.egg-info/SOURCES.txt pyout.egg-info/dependency_links.txt pyout.egg-info/requires.txt pyout.egg-info/top_level.txt pyout/tests/__init__.py pyout/tests/conftest.py pyout/tests/tabular.py pyout/tests/terminal.py pyout/tests/test_elements.py pyout/tests/test_field.py pyout/tests/test_interface.py pyout/tests/test_summary.py pyout/tests/test_tabular.py pyout/tests/test_tabular_dummy.py pyout/tests/test_tests_utils.py pyout/tests/test_truncate.py pyout/tests/utils.py././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236285.0 pyout-0.8.1/pyout.egg-info/dependency_links.txt0000644000175100001660000000000114756362775021251 0ustar00runnerdocker ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236285.0 pyout-0.8.1/pyout.egg-info/requires.txt0000644000175100001660000000006614756362775017605 0ustar00runnerdockerjsonschema>=3.0.0 [:sys_platform != "win32"] blessed ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236285.0 pyout-0.8.1/pyout.egg-info/top_level.txt0000644000175100001660000000000614756362775017731 0ustar00runnerdockerpyout ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/pyproject.toml0000644000175100001660000000036714756362760015245 0ustar00runnerdocker[build-system] requires = ["setuptools", "versioneer[toml]"] build-backend = 'setuptools.build_meta' [tool.versioneer] VCS = "git" style = "pep440" 
versionfile_source = "pyout/_version.py" versionfile_build = "pyout/_version.py" tag_prefix = "v" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1740236285.9275966 pyout-0.8.1/setup.cfg0000644000175100001660000000004614756362776014153 0ustar00runnerdocker[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1740236272.0 pyout-0.8.1/setup.py0000644000175100001660000000214114756362760014033 0ustar00runnerdockerfrom setuptools import setup import versioneer setup( name="pyout", version=versioneer.get_version(), cmdclass=versioneer.get_cmdclass(), author="Kyle Meyer", author_email="kyle@kyleam.com", description="Terminal styling for tabular data", license="MIT", url="https://github.com/pyout/pyout.git", packages=["pyout", "pyout.tests"], python_requires=">=3.7", install_requires=[ "blessed; sys_platform != 'win32'", "jsonschema>=3.0.0", ], long_description=open("README.rst").read(), classifiers=[ "Intended Audience :: Developers", "Natural Language :: English", "Environment :: Console", "Development Status :: 1 - Planning", "Topic :: Utilities", "Operating System :: POSIX", "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Topic :: Software Development :: Libraries", "Topic :: Software Development :: User Interfaces", "Topic :: Terminals" ], keywords=["terminal", "tty", "console", "formatting", "style", "color"] )