spyder_unittest-0.3.0/

spyder_unittest-0.3.0/spyder_unittest.egg-info/dependency_links.txt

spyder_unittest-0.3.0/spyder_unittest.egg-info/SOURCES.txt

CHANGELOG.md
LICENSE.txt
MANIFEST.in
README.md
setup.cfg
setup.py
spyder_unittest/__init__.py
spyder_unittest/unittestplugin.py
spyder_unittest.egg-info/PKG-INFO
spyder_unittest.egg-info/SOURCES.txt
spyder_unittest.egg-info/dependency_links.txt
spyder_unittest.egg-info/requires.txt
spyder_unittest.egg-info/top_level.txt
spyder_unittest/backend/__init__.py
spyder_unittest/backend/abbreviator.py
spyder_unittest/backend/frameworkregistry.py
spyder_unittest/backend/jsonstream.py
spyder_unittest/backend/noserunner.py
spyder_unittest/backend/pytestrunner.py
spyder_unittest/backend/pytestworker.py
spyder_unittest/backend/runnerbase.py
spyder_unittest/backend/unittestrunner.py
spyder_unittest/backend/tests/__init__.py
spyder_unittest/backend/tests/test_abbreviator.py
spyder_unittest/backend/tests/test_frameworkregistry.py
spyder_unittest/backend/tests/test_jsonstream.py
spyder_unittest/backend/tests/test_noserunner.py
spyder_unittest/backend/tests/test_pytestrunner.py
spyder_unittest/backend/tests/test_pytestworker.py
spyder_unittest/backend/tests/test_runnerbase.py
spyder_unittest/backend/tests/test_unittestrunner.py
spyder_unittest/tests/test_unittestplugin.py
spyder_unittest/widgets/__init__.py
spyder_unittest/widgets/configdialog.py
spyder_unittest/widgets/datatree.py
spyder_unittest/widgets/unittestgui.py
spyder_unittest/widgets/tests/__init__.py
spyder_unittest/widgets/tests/test_configdialog.py
spyder_unittest/widgets/tests/test_datatree.py
spyder_unittest/widgets/tests/test_unittestgui.py

spyder_unittest-0.3.0/spyder_unittest.egg-info/PKG-INFO

Metadata-Version: 1.1
Name: spyder-unittest
Version: 0.3.0
Summary: Plugin to run tests from within the Spyder IDE
Home-page: https://github.com/spyder-ide/spyder-unittest
Author: Spyder Project Contributors
Author-email: UNKNOWN
License: MIT
Description-Content-Type: UNKNOWN
Description: This is a plugin for the Spyder IDE that integrates popular
        unit test frameworks. It allows you to run tests and view the
        results.

        **Status:** This is a work in progress. It is useable, but only the
        basic functionality is implemented at the moment. The plugin
        currently supports the py.test and nose testing frameworks.
Keywords: Qt PyQt4 PyQt5 spyder plugins testing
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: X11 Applications :: Qt
Classifier: Environment :: Win32 (MS Windows)
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Text Editors :: Integrated Development Environments (IDE)

spyder_unittest-0.3.0/spyder_unittest.egg-info/top_level.txt

spyder_unittest

spyder_unittest-0.3.0/spyder_unittest.egg-info/requires.txt

lxml
spyder>=3

spyder_unittest-0.3.0/spyder_unittest/
jitseamt00000000000000spyder_unittest-0.3.0/spyder_unittest/backend/0000755072410100006200000000000013241567752021545 5ustar jitseamt00000000000000spyder_unittest-0.3.0/spyder_unittest/backend/noserunner.py0000644072410100006200000000624713227127652024321 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2013 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Support for Nose framework.""" # Third party imports from lxml import etree from spyder.config.base import get_translation # Local imports from spyder_unittest.backend.runnerbase import Category, RunnerBase, TestResult try: _ = get_translation("unittest", dirname="spyder_unittest") except KeyError as error: import gettext _ = gettext.gettext class NoseRunner(RunnerBase): """Class for running tests within Nose framework.""" module = 'nose' name = 'nose' def create_argument_list(self): """Create argument list for testing process.""" return [ '-m', self.module, '--with-xunit', '--xunit-file={}'.format(self.resultfilename) ] def finished(self): """Called when the unit test process has finished.""" output = self.read_all_process_output() testresults = self.load_data() self.sig_finished.emit(testresults, output) def load_data(self): """ Read and parse unit test results. This function reads the unit test results from the file with name `self.resultfilename` and parses them. The file should contain the test results in JUnitXML format. Returns ------- list of TestResult Unit test results. 
""" try: data = etree.parse(self.resultfilename).getroot() except OSError: data = [] testresults = [] for testcase in data: category = Category.OK status = 'ok' name = '{}.{}'.format(testcase.get('classname'), testcase.get('name')) message = '' time = float(testcase.get('time')) extras = [] for child in testcase: if child.tag in ('error', 'failure', 'skipped'): if child.tag == 'skipped': category = Category.SKIP else: category = Category.FAIL status = child.tag type_ = child.get('type') message = child.get('message', default='') if type_ and message: message = '{0}: {1}'.format(type_, message) elif type_: message = type_ if child.text: extras.append(child.text) elif child.tag in ('system-out', 'system-err'): if child.tag == 'system-out': heading = _('Captured stdout') else: heading = _('Captured stderr') contents = child.text.rstrip('\n') extras.append('----- {} -----\n{}'.format(heading, contents)) extra_text = '\n\n'.join(extras) testresults.append( TestResult(category, status, name, message, time, extra_text)) return testresults spyder_unittest-0.3.0/spyder_unittest/backend/pytestrunner.py0000644072410100006200000001212313241562107024665 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2013 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Support for py.test framework.""" # Standard library imports import os import os.path as osp # Local imports from spyder_unittest.backend.jsonstream import JSONStreamReader from spyder_unittest.backend.runnerbase import Category, RunnerBase, TestResult class PyTestRunner(RunnerBase): """Class for running tests within py.test framework.""" module = 'pytest' name = 'py.test' def create_argument_list(self): """Create argument list for testing process.""" pyfile = os.path.join(os.path.dirname(__file__), 'pytestworker.py') return [pyfile] def _prepare_process(self, config, pythonpath): """Prepare and return process for running the unit test suite.""" 
process = RunnerBase._prepare_process(self, config, pythonpath) process.readyReadStandardOutput.connect(self.read_output) return process def start(self, config, pythonpath): """Start process which will run the unit test suite.""" self.config = config self.reader = JSONStreamReader() self.output = '' RunnerBase.start(self, config, pythonpath) def read_output(self): """Called when test process emits output.""" output = self.read_all_process_output() result = self.reader.consume(output) self.process_output(result) def process_output(self, output): """ Process output of test process. Parameters ---------- output : list list of decoded Python object sent by test process. """ collected_list = [] collecterror_list = [] starttest_list = [] result_list = [] for result_item in output: if result_item['event'] == 'collected': testname = convert_nodeid_to_testname(result_item['nodeid']) collected_list.append(testname) elif result_item['event'] == 'collecterror': tupl = logreport_collecterror_to_tuple(result_item) collecterror_list.append(tupl) elif result_item['event'] == 'starttest': starttest_list.append(logreport_starttest_to_str(result_item)) elif result_item['event'] == 'logreport': testresult = logreport_to_testresult(result_item, self.config) result_list.append(testresult) elif result_item['event'] == 'finished': self.output = result_item['stdout'] if collected_list: self.sig_collected.emit(collected_list) if collecterror_list: self.sig_collecterror.emit(collecterror_list) if starttest_list: self.sig_starttest.emit(starttest_list) if result_list: self.sig_testresult.emit(result_list) def finished(self): """ Called when the unit test process has finished. This function emits `sig_finished`. """ self.sig_finished.emit(None, self.output) def normalize_module_name(name): """ Convert module name reported by pytest to Python conventions. This function strips the .py suffix and replaces '/' by '.', so that 'ham/spam.py' becomes 'ham.spam'. 
""" if name.endswith('.py'): name = name[:-3] return name.replace('/', '.') def convert_nodeid_to_testname(nodeid): """Convert a nodeid to a test name.""" module, name = nodeid.split('::', 1) module = normalize_module_name(module) return '{}.{}'.format(module, name) def logreport_collecterror_to_tuple(report): """Convert a 'collecterror' logreport to a (str, str) tuple.""" module = normalize_module_name(report['nodeid']) return (module, report['longrepr']) def logreport_starttest_to_str(report): """Convert a 'starttest' logreport to a str.""" return convert_nodeid_to_testname(report['nodeid']) def logreport_to_testresult(report, config): """Convert a logreport sent by test process to a TestResult.""" if report['outcome'] == 'passed': cat = Category.OK status = 'ok' elif report['outcome'] == 'failed': cat = Category.FAIL status = 'failure' else: cat = Category.SKIP status = report['outcome'] testname = convert_nodeid_to_testname(report['nodeid']) duration = report['duration'] message = report['message'] if 'message' in report else '' if 'longrepr' not in report: extra_text = '' elif isinstance(report['longrepr'], list): extra_text = report['longrepr'][2] else: extra_text = report['longrepr'] if 'sections' in report: for (heading, text) in report['sections']: extra_text += '----- {} -----\n'.format(heading) extra_text += text filename = osp.join(config.wdir, report['filename']) result = TestResult(cat, status, testname, message=message, time=duration, extra_text=extra_text, filename=filename, lineno=report['lineno']) return result spyder_unittest-0.3.0/spyder_unittest/backend/runnerbase.py0000644072410100006200000001646613241562107024265 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2013 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Classes for running tests within various frameworks.""" # Standard library imports import os import tempfile # Third party imports from qtpy.QtCore 
import (QObject, QProcess, QProcessEnvironment, QTextCodec, Signal) from spyder.py3compat import to_text_string from spyder.utils.misc import add_pathlist_to_PYTHONPATH, get_python_executable try: from importlib.util import find_spec as find_spec_or_loader except ImportError: # Python 2 from pkgutil import find_loader as find_spec_or_loader class Category: """Enum type representing category of test result.""" FAIL = 1 OK = 2 SKIP = 3 PENDING = 4 class TestResult: """Class representing the result of running a single test.""" def __init__(self, category, status, name, message='', time=None, extra_text='', filename=None, lineno=None): """ Construct a test result. Parameters ---------- category : Category status : str name : str message : str time : float or None extra_text : str filename : str or None lineno : int or None """ self.category = category self.status = status self.name = name self.message = message self.time = time extra_text = extra_text.rstrip() if extra_text: self.extra_text = extra_text.split("\n") else: self.extra_text = [] self.filename = filename self.lineno = lineno def __eq__(self, other): """Test for equality.""" return self.__dict__ == other.__dict__ class RunnerBase(QObject): """ Base class for running tests with a framework that uses JUnit XML. This is an abstract class, meant to be subclassed before being used. Concrete subclasses should define executable and create_argument_list(), All communication back to the caller is done via signals. Attributes ---------- module : str Name of Python module for test framework. This needs to be defined before the user can run tests. name : str Name of test framework, as presented to user. process : QProcess or None Process running the unit test suite. resultfilename : str Name of file in which test results are stored. Signals ------- sig_collected(list of str) Emitted when tests are collected. sig_collecterror(list of (str, str) tuples) Emitted when errors are encountered during collection. 
First element of tuple is test name, second element is error message. sig_starttest(list of str) Emitted just before tests are run. sig_testresult(list of TestResult) Emitted when tests are finished. sig_finished(list of TestResult, str) Emitted when test process finishes. First argument contains the test results, second argument contains the output of the test process. """ sig_collected = Signal(object) sig_collecterror = Signal(object) sig_starttest = Signal(object) sig_testresult = Signal(object) sig_finished = Signal(object, str) def __init__(self, widget, resultfilename=None): """ Construct test runner. Parameters ---------- widget : UnitTestWidget Unit test widget which constructs the test runner. resultfilename : str or None Name of file in which to store test results. If None, use default. """ QObject.__init__(self, widget) self.process = None if resultfilename is None: self.resultfilename = os.path.join(tempfile.gettempdir(), 'unittest.results') else: self.resultfilename = resultfilename @classmethod def is_installed(cls): """ Check whether test framework is installed. This function tests whether self.module is installed, but it does not import it. Returns ------- bool True if framework is installed, False otherwise. """ return find_spec_or_loader(cls.module) is not None def create_argument_list(self): """ Create argument list for testing process (dummy). This function should be defined before calling self.start(). """ raise NotImplementedError def _prepare_process(self, config, pythonpath): """ Prepare and return process for running the unit test suite. This sets the working directory and environment. 
""" process = QProcess(self) process.setProcessChannelMode(QProcess.MergedChannels) process.setWorkingDirectory(config.wdir) process.finished.connect(self.finished) if pythonpath is not None: env = [ to_text_string(_pth) for _pth in process.systemEnvironment() ] add_pathlist_to_PYTHONPATH(env, pythonpath) processEnvironment = QProcessEnvironment() for envItem in env: envName, separator, envValue = envItem.partition('=') processEnvironment.insert(envName, envValue) process.setProcessEnvironment(processEnvironment) return process def start(self, config, pythonpath): """ Start process which will run the unit test suite. The process is run in the working directory specified in 'config', with the directories in `pythonpath` added to the Python path for the test process. The test results are written to the file `self.resultfilename`. The standard output and error are also recorded. Once the process is finished, `self.finished()` will be called. Parameters ---------- config : TestConfig Unit test configuration. pythonpath : list of str List of directories to be added to the Python path Raises ------ RuntimeError If process failed to start. """ self.process = self._prepare_process(config, pythonpath) executable = get_python_executable() p_args = self.create_argument_list() try: os.remove(self.resultfilename) except OSError: pass self.process.start(executable, p_args) running = self.process.waitForStarted() if not running: raise RuntimeError def finished(self): """ Called when the unit test process has finished. This function should be implemented in derived classes. It should read the results (if necessary) and emit `sig_finished`. 
""" raise NotImplementedError def read_all_process_output(self): """Read and return all output from `self.process` as unicode.""" qbytearray = self.process.readAllStandardOutput() locale_codec = QTextCodec.codecForLocale() return to_text_string(locale_codec.toUnicode(qbytearray.data())) def stop_if_running(self): """Stop testing process if it is running.""" if self.process and self.process.state() == QProcess.Running: self.process.kill() spyder_unittest-0.3.0/spyder_unittest/backend/abbreviator.py0000644072410100006200000000477613227127652024430 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2017 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Class for abbreviating test names.""" class Abbreviator: """ Abbreviates names so that abbreviation identifies name uniquely. First, all names are split in components separated by full stop (like module names in Python). Every component is abbreviated by the smallest prefix not shared by other names in the same directory, except for the last component which is not changed. Attributes ---------- dic : dict of (str, [str, Abbreviator]) keys are the first-level components, values are a list, with the abbreviation as its first element and an Abbreviator for abbreviating the higher-level components as its second element. """ def __init__(self, names=[]): """ Constructor. Arguments --------- names : list of str list of words which needs to be abbreviated. """ self.dic = {} for name in names: self.add(name) def add(self, name): """ Add name to list of names to be abbreviated. Arguments --------- name : str """ if '.' 
not in name: return len_abbrev = 1 start, rest = name.split('.', 1) for other in self.dic: if start[:len_abbrev] == other[:len_abbrev]: if start == other: break while (start[:len_abbrev] == other[:len_abbrev] and len_abbrev < len(start) and len_abbrev < len(other)): len_abbrev += 1 if len_abbrev == len(start): self.dic[other][0] = other[:len_abbrev + 1] elif len_abbrev == len(other): self.dic[other][0] = other len_abbrev += 1 else: if len(self.dic[other][0]) < len_abbrev: self.dic[other][0] = other[:len_abbrev] else: self.dic[start] = [start[:len_abbrev], Abbreviator()] self.dic[start][1].add(rest) def abbreviate(self, name): """Return abbreviation of name.""" if '.' in name: start, rest = name.split('.', 1) res = (self.dic[start][0] + '.' + self.dic[start][1].abbreviate(rest)) else: res = name return res spyder_unittest-0.3.0/spyder_unittest/backend/frameworkregistry.py0000644072410100006200000000402513163162712025674 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2017 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Keep track of testing frameworks and create test runners when requested.""" class FrameworkRegistry(): """ Registry of testing frameworks and their associated runners. The test runner for a framework is responsible for running the tests and parsing the results. It should implement the interface of RunnerBase. Frameworks should first be registered using `.register()`. This registry can then create the assoicated test runner when `.create_runner()` is called. Attributes ---------- frameworks : dict of (str, type) Dictionary mapping names of testing frameworks to the types of the associated runners. """ def __init__(self): """Initialize self.""" self.frameworks = {} def register(self, runner_class): """Register runner class for a testing framework. Parameters ---------- runner_class : type Class used for creating tests runners for the framework. 
""" self.frameworks[runner_class.name] = runner_class def create_runner(self, framework, widget, tempfilename): """Create test runner associated to some testing framework. This creates an instance of the runner class whose `name` attribute equals `framework`. Parameters ---------- framework : str Name of testing framework. widget : UnitTestWidget Unit test widget which constructs the test runner. resultfilename : str or None Name of file in which to store test results. If None, use default. Returns ------- RunnerBase Newly created test runner Exceptions ---------- KeyError Provided testing framework has not been registered. """ cls = self.frameworks[framework] return cls(widget, tempfilename) spyder_unittest-0.3.0/spyder_unittest/backend/pytestworker.py0000644072410100006200000000703713237071714024701 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2017 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """ Script for running py.test tests. This script is meant to be run in a separate process by a PyTestRunner. It runs tests via the py.test framework and prints the results so that the PyTestRunner can read them. """ # Standard library imports import io import sys # Third party imports import pytest # Local imports from spyder_unittest.backend.jsonstream import JSONStreamWriter class StdoutBuffer(io.TextIOWrapper): """ Wrapper for binary stream which accepts both text and binary strings. 
Source: https://stackoverflow.com/a/19344871 """ def write(self, string): """Write text or binary string to underlying stream.""" try: return super(StdoutBuffer, self).write(string) except TypeError: # redirect encoded byte strings directly to buffer return super(StdoutBuffer, self).buffer.write(string) class SpyderPlugin(): """Pytest plugin which reports in format suitable for Spyder.""" def __init__(self, writer): """Constructor.""" self.writer = writer def pytest_collectreport(self, report): """Called by py.test after collecting tests from a file.""" if report.outcome == 'failed': self.writer.write({ 'event': 'collecterror', 'nodeid': report.nodeid, 'longrepr': report.longrepr.longrepr }) def pytest_itemcollected(self, item): """Called by py.test when a test item is collected.""" nodeid = item.name x = item.parent while x.parent: nodeid = x.name + '::' + nodeid x = x.parent self.writer.write({ 'event': 'collected', 'nodeid': nodeid }) def pytest_runtest_logstart(self, nodeid, location): """Called by py.test before running a test.""" self.writer.write({ 'event': 'starttest', 'nodeid': nodeid }) def pytest_runtest_logreport(self, report): """Called by py.test when a (phase of a) test is completed.""" if report.when in ['setup', 'teardown'] and report.outcome == 'passed': return data = {'event': 'logreport', 'when': report.when, 'outcome': report.outcome, 'nodeid': report.nodeid, 'sections': report.sections, 'duration': report.duration, 'filename': report.location[0], 'lineno': report.location[1]} if report.longrepr: if isinstance(report.longrepr, tuple): data['longrepr'] = report.longrepr else: data['longrepr'] = str(report.longrepr) if hasattr(report, 'wasxfail'): data['wasxfail'] = report.wasxfail if hasattr(report.longrepr, 'reprcrash'): data['message'] = report.longrepr.reprcrash.message self.writer.write(data) def main(args): """Run py.test with the Spyder plugin.""" old_stdout = sys.stdout stdout_buffer = StdoutBuffer(io.BytesIO(), sys.stdout.encoding) 
sys.stdout = stdout_buffer writer = JSONStreamWriter(old_stdout) pytest.main(args, plugins=[SpyderPlugin(writer)]) stdout_buffer.seek(0) data = {'event': 'finished', 'stdout': stdout_buffer.read()} writer.write(data) sys.stdout = old_stdout if __name__ == '__main__': main(sys.argv[1:]) spyder_unittest-0.3.0/spyder_unittest/backend/__init__.py0000644072410100006200000000033413047602633023646 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2013 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Parts of the unittest plugin that are not related to the GUI.""" spyder_unittest-0.3.0/spyder_unittest/backend/unittestrunner.py0000644072410100006200000001135613241562107025223 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2017 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Support for unittest framework.""" # Standard library imports import re # Local imports from spyder_unittest.backend.runnerbase import Category, RunnerBase, TestResult class UnittestRunner(RunnerBase): """Class for running tests with unittest module in standard library.""" module = 'unittest' name = 'unittest' def create_argument_list(self): """Create argument list for testing process.""" return ['-m', self.module, 'discover', '-v'] def finished(self): """ Called when the unit test process has finished. This function reads the results and emits `sig_finished`. """ output = self.read_all_process_output() testresults = self.load_data(output) self.sig_finished.emit(testresults, output) def load_data(self, output): """ Read and parse output from unittest module. Returns ------- list of TestResult Unit test results. 
""" res = [] lines = output.splitlines() line_index = 0 test_index = None while line_index < len(lines): data = self.try_parse_result(lines[line_index]) if data: if data[2] == 'ok': cat = Category.OK elif data[2] == 'FAIL' or data[2] == 'ERROR': cat = Category.FAIL else: cat = Category.SKIP name = '{}.{}'.format(data[1], data[0]) tr = TestResult(category=cat, status=data[2], name=name, message=data[3]) res.append(tr) line_index += 1 test_index = -1 continue data = self.try_parse_exception_header(lines, line_index) if data: line_index = data[0] test_index = next( i for i, tr in enumerate(res) if tr.name == '{}.{}'.format(data[2], data[1])) data = self.try_parse_footer(lines, line_index) if data: line_index = data test_index = -1 continue if test_index is not None: res[test_index].extra_text.append(lines[line_index] + '\n') line_index += 1 return res def try_parse_result(self, line): """ Try to parse a line of text as a test result. Returns ------- tuple of str or None If line represents a test result, then return a tuple with four strings: the name of the test function, the name of the test class, the test result, and the reason (if no reason is given, the fourth string is empty). Otherwise, return None. """ regexp = (r'([^\d\W]\w*) \(([^\d\W][\w.]*)\) \.\.\. ' '(ok|FAIL|ERROR|skipped|expected failure|unexpected success)' "( '([^']*)')?\Z") match = re.match(regexp, line) if match: msg = match.groups()[4] or '' return match.groups()[:3] + (msg, ) else: return None def try_parse_exception_header(self, lines, line_index): """ Try to parse the header of an exception in unittest output. Returns ------- (int, str, str) or None If an exception header is parsed successfully, then return a tuple with the new line index, the name of the test function, and the name of the test class. Otherwise, return None. 
""" if lines[line_index] != '': return None if not all(char == '=' for char in lines[line_index + 1]): return None regexp = r'\w+: ([^\d\W]\w*) \(([^\d\W][\w.]*)\)\Z' match = re.match(regexp, lines[line_index + 2]) if not match: return None if not all(char == '-' for char in lines[line_index + 3]): return None return (line_index + 4, ) + match.groups() def try_parse_footer(self, lines, line_index): """ Try to parse footer of unittest output. Returns ------- int or None New line index if footer is parsed successfully, None otherwise """ if lines[line_index] != '': return None if not all(char == '-' for char in lines[line_index + 1]): return None if not re.match(r'^Ran [\d]+ tests? in', lines[line_index + 2]): return None if lines[line_index + 3] != '': return None return line_index + 5 spyder_unittest-0.3.0/spyder_unittest/backend/jsonstream.py0000644072410100006200000000654013215746752024311 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2013 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) r""" Reader and writer for sending stream of python objects using JSON. These classes can be used to send Python objects (specifically, ints, floats, strings, bools, lists, dictionaries or None) over a text stream. Partially received objects are correctly handled. Since multiple JSON-encoded objects cannot simply concatenated (i.e., JSON is not a framed protocol), every object is sent over the text channel in the format "N \n s \n", where the string s is its JSON encoding and N is the length of s. """ # Standard library imports import json # Third party imports from spyder.py3compat import PY2, to_text_string class JSONStreamWriter: """ Writer for sending stream of python objects using JSON. This class can be used to send a stream of python objects over a text stream using JSON. It is the responsibility of the caller to open and close the stream. 
Attributes ---------- stream : TextIOBase text stream that the objects are sent over. """ def __init__(self, stream): """Constructor.""" self.stream = stream def write(self, obj): """ Write Python object to the stream and flush. Arguments --------- obj : object Object to be written. The type should be supported by JSON (i.e., int, float, str, bool, list, dict or None). """ txt = json.dumps(obj) if PY2: txt = to_text_string(txt) self.stream.write(to_text_string(len(txt)) + '\n') self.stream.write(txt + '\n') self.stream.flush() class JSONStreamReader: """ Reader for sending stream of Python objects using JSON. This class is used to receive a stream sent by JSONStreamWriter. Attributes ---------- buffer : str Text encoding an object that has not been completely received yet. """ def __init__(self): """Constructor.""" self.buffer = '' def consume(self, txt): """ Decode given text and return list of objects encoded in it. If only a part of the encoded text of an object is passed, then it is stored and combined with the remainder in the next call. 
""" index = 0 res = [] txt = self.buffer + txt while index < len(txt): has_r = False # whether line ends with \r\n or \n end_of_line1 = txt.find('\n', index) try: len_encoding = int(txt[index:end_of_line1]) except ValueError: raise ValueError('txt = %s index = %d end_of_line1 = %d' % (repr(txt), index, end_of_line1)) if end_of_line1 + len_encoding + 2 > len(txt): # 2 for two \n break if txt[end_of_line1 + len_encoding + 1] == '\r': if end_of_line1 + len_encoding + 3 > len(txt): break else: has_r = True encoding = txt[end_of_line1 + 1:end_of_line1 + len_encoding + 1] res.append(json.loads(encoding)) index = end_of_line1 + len_encoding + 2 if has_r: index += 1 self.buffer = txt[index:] return res spyder_unittest-0.3.0/spyder_unittest/backend/tests/0000755072410100006200000000000013241567752022707 5ustar jitseamt00000000000000spyder_unittest-0.3.0/spyder_unittest/backend/tests/test_abbreviator.py0000644072410100006200000000414013227127652026612 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2017 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Tests for abbreviator.py""" # Local imports from spyder_unittest.backend.abbreviator import Abbreviator def test_abbreviator_with_one_word(): abb = Abbreviator() abb.add('ham') assert abb.abbreviate('ham') == 'ham' def test_abbreviator_with_one_word_with_two_components(): abb = Abbreviator() abb.add('ham.spam') assert abb.abbreviate('ham.spam') == 'h.spam' def test_abbreviator_with_one_word_with_three_components(): abb = Abbreviator() abb.add('ham.spam.eggs') assert abb.abbreviate('ham.spam.eggs') == 'h.s.eggs' def test_abbreviator_without_common_prefix(): abb = Abbreviator(['ham.foo', 'spam.foo']) assert abb.abbreviate('ham.foo') == 'h.foo' assert abb.abbreviate('spam.foo') == 's.foo' def test_abbreviator_with_prefix(): abb = Abbreviator(['test_ham.x', 'test_spam.x']) assert abb.abbreviate('test_ham.x') == 'test_h.x' assert 
abb.abbreviate('test_spam.x') == 'test_s.x' def test_abbreviator_with_first_word_prefix_of_second(): abb = Abbreviator(['ham.x', 'hameggs.x']) assert abb.abbreviate('ham.x') == 'ham.x' assert abb.abbreviate('hameggs.x') == 'hame.x' def test_abbreviator_with_second_word_prefix_of_first(): abb = Abbreviator(['hameggs.x', 'ham.x']) assert abb.abbreviate('hameggs.x') == 'hame.x' assert abb.abbreviate('ham.x') == 'ham.x' def test_abbreviator_with_three_words(): abb = Abbreviator(['hamegg.x', 'hameggs.x', 'hall.x']) assert abb.abbreviate('hamegg.x') == 'hamegg.x' assert abb.abbreviate('hameggs.x') == 'hameggs.x' assert abb.abbreviate('hall.x') == 'hal.x' def test_abbreviator_with_multilevel(): abb = Abbreviator(['ham.eggs.foo', 'ham.spam.bar', 'eggs.ham.foo', 'eggs.hamspam.bar']) assert abb.abbreviate('ham.eggs.foo') == 'h.e.foo' assert abb.abbreviate('ham.spam.bar') == 'h.s.bar' assert abb.abbreviate('eggs.ham.foo') == 'e.ham.foo' assert abb.abbreviate('eggs.hamspam.bar') == 'e.hams.bar' spyder_unittest-0.3.0/spyder_unittest/backend/tests/test_runnerbase.py0000644072410100006200000000063413215746752026467 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2013 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Tests for baserunner.py""" # Local imports from spyder_unittest.backend.runnerbase import RunnerBase def test_runnerbase_with_nonexisting_module(): class FooRunner(RunnerBase): module = 'nonexisiting' assert not FooRunner.is_installed() spyder_unittest-0.3.0/spyder_unittest/backend/tests/test_frameworkregistry.py0000644072410100006200000000147513163162712030103 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2017 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Tests for frameworkregistry.py""" # Third party imports import pytest # Local imports from spyder_unittest.backend.frameworkregistry import 
FrameworkRegistry class MockRunner: name = 'foo' def __init__(self, *args): self.init_args = args def test_frameworkregistry_when_empty(): reg = FrameworkRegistry() with pytest.raises(KeyError): reg.create_runner('foo', None, 'temp.txt') def test_frameworkregistry_after_registering(): reg = FrameworkRegistry() reg.register(MockRunner) runner = reg.create_runner('foo', None, 'temp.txt') assert isinstance(runner, MockRunner) assert runner.init_args == (None, 'temp.txt') spyder_unittest-0.3.0/spyder_unittest/backend/tests/test_unittestrunner.py0000644072410100006200000000657213227127652027436 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2017 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Tests for unittestrunner.py""" # Local imports from spyder_unittest.backend.runnerbase import Category from spyder_unittest.backend.unittestrunner import UnittestRunner def test_unittestrunner_load_data(): output = """test_isupper (teststringmethods.TestStringMethods) ... ok test_split (teststringmethods.TestStringMethods) ... ok extra text\n""" runner = UnittestRunner(None) res = runner.load_data(output) assert len(res) == 2 assert res[0].category == Category.OK assert res[0].status == 'ok' assert res[0].name == 'teststringmethods.TestStringMethods.test_isupper' assert res[0].message == '' assert res[0].extra_text == [] assert res[1].category == Category.OK assert res[1].status == 'ok' assert res[1].name == 'teststringmethods.TestStringMethods.test_split' assert res[1].message == '' assert res[1].extra_text == ['extra text\n'] def test_unittestrunner_load_data_removes_footer(): output = """test1 (test_foo.Bar) ... 
ok ---------------------------------------------------------------------- Ran 1 test in 0.000s OK """ runner = UnittestRunner(None) res = runner.load_data(output) assert len(res) == 1 assert res[0].category == Category.OK assert res[0].status == 'ok' assert res[0].name == 'test_foo.Bar.test1' assert res[0].extra_text == [] def test_unittestrunner_load_data_with_exception(): output = """test1 (test_foo.Bar) ... FAIL test2 (test_foo.Bar) ... ok ====================================================================== FAIL: test1 (test_foo.Bar) ---------------------------------------------------------------------- Traceback (most recent call last): File "/somepath/test_foo.py", line 5, in test1 self.assertEqual(1, 2) AssertionError: 1 != 2 """ runner = UnittestRunner(None) res = runner.load_data(output) assert len(res) == 2 assert res[0].category == Category.FAIL assert res[0].status == 'FAIL' assert res[0].name == 'test_foo.Bar.test1' assert res[0].extra_text[0].startswith('Traceback') assert res[0].extra_text[-1].endswith('AssertionError: 1 != 2\n') assert res[1].category == Category.OK assert res[1].status == 'ok' assert res[1].name == 'test_foo.Bar.test2' assert res[1].extra_text == [] def test_try_parse_header_with_ok(): runner = UnittestRunner(None) text = 'test_isupper (testfoo.TestStringMethods) ... ok' res = runner.try_parse_result(text) assert res == ('test_isupper', 'testfoo.TestStringMethods', 'ok', '') def test_try_parse_header_with_xfail(): runner = UnittestRunner(None) text = 'test_isupper (testfoo.TestStringMethods) ... expected failure' res = runner.try_parse_result(text) assert res == ('test_isupper', 'testfoo.TestStringMethods', 'expected failure', '') def test_try_parse_header_with_message(): runner = UnittestRunner(None) text = "test_nothing (testfoo.Tests) ... 
skipped 'msg'" res = runner.try_parse_result(text) assert res == ('test_nothing', 'testfoo.Tests', 'skipped', 'msg') def test_try_parse_header_starting_with_digit(): runner = UnittestRunner(None) text = '0est_isupper (testfoo.TestStringMethods) ... ok' res = runner.try_parse_result(text) assert res is None spyder_unittest-0.3.0/spyder_unittest/backend/tests/test_pytestworker.py0000644072410100006200000001620713237071714027101 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2017 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Tests for pytestworker.py""" # Standard library imports import os # Third party imports import pytest # Local imports from spyder_unittest.backend.jsonstream import JSONStreamWriter from spyder_unittest.backend.pytestworker import SpyderPlugin, main try: from unittest.mock import call, create_autospec, Mock except ImportError: from mock import call, create_autospec, Mock # Python 2 class EmptyClass: pass @pytest.fixture def plugin(): mock_writer = create_autospec(JSONStreamWriter) return SpyderPlugin(mock_writer) def test_spyderplugin_test_collectreport_with_success(plugin): report = EmptyClass() report.outcome = 'success' report.nodeid = 'foo.py::bar' plugin.pytest_collectreport(report) plugin.writer.write.assert_not_called() def test_spyderplugin_test_collectreport_with_failure(plugin): report = EmptyClass() report.outcome = 'failed' report.nodeid = 'foo.py::bar' report.longrepr = EmptyClass() report.longrepr.longrepr = 'message' plugin.pytest_collectreport(report) plugin.writer.write.assert_called_once_with({ 'event': 'collecterror', 'nodeid': 'foo.py::bar', 'longrepr': 'message' }) def test_spyderplugin_test_itemcollected(plugin): testitem = EmptyClass() testitem.name = 'bar' testitem.parent = EmptyClass() testitem.parent.name = 'foo.py' testitem.parent.parent = EmptyClass testitem.parent.parent.name = 'notused' testitem.parent.parent.parent = None 
    plugin.pytest_itemcollected(testitem)
    plugin.writer.write.assert_called_once_with({
        'event': 'collected',
        'nodeid': 'foo.py::bar'
    })


def standard_logreport():
    report = EmptyClass()
    report.when = 'call'
    report.outcome = 'passed'
    report.nodeid = 'foo.py::bar'
    report.duration = 42
    report.sections = []
    report.longrepr = ''
    report.location = ('foo.py', 24, 'bar')
    return report


def test_spyderplugin_runtest_logreport(plugin):
    report = standard_logreport()
    plugin.pytest_runtest_logreport(report)
    plugin.writer.write.assert_called_once_with({
        'event': 'logreport',
        'when': 'call',
        'outcome': 'passed',
        'nodeid': 'foo.py::bar',
        'duration': 42,
        'sections': [],
        'filename': 'foo.py',
        'lineno': 24
    })


def test_spyderplugin_runtest_logreport_passes_longrepr(plugin):
    report = standard_logreport()
    report.longrepr = 15
    plugin.pytest_runtest_logreport(report)
    plugin.writer.write.assert_called_once_with({
        'event': 'logreport',
        'when': 'call',
        'outcome': 'passed',
        'nodeid': 'foo.py::bar',
        'duration': 42,
        'sections': [],
        'filename': 'foo.py',
        'lineno': 24,
        'longrepr': '15'
    })


def test_spyderplugin_runtest_logreport_with_longrepr_tuple(plugin):
    report = standard_logreport()
    report.longrepr = ('ham', 'spam')
    plugin.pytest_runtest_logreport(report)
    plugin.writer.write.assert_called_once_with({
        'event': 'logreport',
        'when': 'call',
        'outcome': 'passed',
        'nodeid': 'foo.py::bar',
        'duration': 42,
        'sections': [],
        'filename': 'foo.py',
        'lineno': 24,
        'longrepr': ('ham', 'spam')
    })


def test_spyderplugin_runtest_logreport_passes_wasxfail(plugin):
    report = standard_logreport()
    report.wasxfail = ''
    plugin.pytest_runtest_logreport(report)
    plugin.writer.write.assert_called_once_with({
        'event': 'logreport',
        'when': 'call',
        'outcome': 'passed',
        'nodeid': 'foo.py::bar',
        'duration': 42,
        'sections': [],
        'filename': 'foo.py',
        'lineno': 24,
        'wasxfail': ''
    })


def test_spyderplugin_runtest_logreport_passes_message(plugin):
    class MockLongrepr:
        def __init__(self):
            self.reprcrash = EmptyClass()
            self.reprcrash.message = 'msg'

        def __str__(self):
            return 'text'

    report = standard_logreport()
    report.longrepr = MockLongrepr()
    plugin.pytest_runtest_logreport(report)
    plugin.writer.write.assert_called_once_with({
        'event': 'logreport',
        'when': 'call',
        'outcome': 'passed',
        'nodeid': 'foo.py::bar',
        'duration': 42,
        'sections': [],
        'filename': 'foo.py',
        'lineno': 24,
        'longrepr': 'text',
        'message': 'msg'
    })


def test_spyderplugin_runtest_logreport_ignores_teardown_passed(plugin):
    report = standard_logreport()
    report.when = 'teardown'
    plugin.pytest_runtest_logreport(report)
    plugin.writer.write.assert_not_called()


def test_main_captures_stdout_and_stderr(monkeypatch):
    def mock_main(args, plugins):
        print('output')

    monkeypatch.setattr(
        'spyder_unittest.backend.pytestworker.pytest.main', mock_main)
    mock_writer = create_autospec(JSONStreamWriter)
    MockJSONStreamWriter = Mock(return_value=mock_writer)
    monkeypatch.setattr(
        'spyder_unittest.backend.pytestworker.JSONStreamWriter',
        MockJSONStreamWriter)
    main(None)
    mock_writer.write.assert_called_once_with({
        'event': 'finished',
        'stdout': 'output\n'})


def test_pytestworker_integration(monkeypatch, tmpdir):
    os.chdir(tmpdir.strpath)
    testfilename = tmpdir.join('test_foo.py').strpath
    with open(testfilename, 'w') as f:
        f.write("def test_ok(): assert 1+1 == 2\n"
                "def test_fail(): assert 1+1 == 3\n")

    mock_writer = create_autospec(JSONStreamWriter)
    MockJSONStreamWriter = Mock(return_value=mock_writer)
    monkeypatch.setattr(
        'spyder_unittest.backend.pytestworker.JSONStreamWriter',
        MockJSONStreamWriter)
    main([testfilename])

    args = mock_writer.write.call_args_list
    assert args[0][0][0]['event'] == 'collected'
    assert args[0][0][0]['nodeid'] == 'test_foo.py::test_ok'
    assert args[1][0][0]['event'] == 'collected'
    assert args[1][0][0]['nodeid'] == 'test_foo.py::test_fail'
    assert args[2][0][0]['event'] == 'starttest'
    assert args[2][0][0]['nodeid'] == 'test_foo.py::test_ok'
    assert args[3][0][0]['event'] == 'logreport'
    assert args[3][0][0]['when'] == 'call'
    assert args[3][0][0]['outcome'] == 'passed'
    assert args[3][0][0]['nodeid'] == 'test_foo.py::test_ok'
    assert args[3][0][0]['sections'] == []
    assert args[3][0][0]['filename'] == 'test_foo.py'
    assert args[3][0][0]['lineno'] == 0
    assert 'duration' in args[3][0][0]
    assert args[4][0][0]['event'] == 'starttest'
    assert args[4][0][0]['nodeid'] == 'test_foo.py::test_fail'
    assert args[5][0][0]['event'] == 'logreport'
    assert args[5][0][0]['when'] == 'call'
    assert args[5][0][0]['outcome'] == 'failed'
    assert args[5][0][0]['nodeid'] == 'test_foo.py::test_fail'
    assert args[5][0][0]['sections'] == []
    assert args[5][0][0]['filename'] == 'test_foo.py'
    assert args[5][0][0]['lineno'] == 1
    assert 'duration' in args[5][0][0]
    assert args[6][0][0]['event'] == 'finished'
    assert 'pytest' in args[6][0][0]['stdout']

spyder_unittest-0.3.0/spyder_unittest/backend/tests/test_noserunner.py

# -*- coding: utf-8 -*-
#
# Copyright © 2013 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Tests for noserunner.py"""

# Local imports
from spyder_unittest.backend.noserunner import NoseRunner
from spyder_unittest.backend.runnerbase import Category


def test_noserunner_load_data(tmpdir):
    result_file = tmpdir.join('results')
    result_txt = """
text
text2
"""
    result_file.write(result_txt)
    runner = NoseRunner(None, result_file.strpath)
    results = runner.load_data()
    assert len(results) == 3

    assert results[0].category == Category.OK
    assert results[0].status == 'ok'
    assert results[0].name == 'test_foo.test1'
    assert results[0].message == ''
    assert results[0].time == 0.04
    assert results[0].extra_text == []

    assert results[1].category == Category.FAIL
    assert results[1].status == 'failure'
    assert results[1].name == 'test_foo.test2'
    assert results[1].message == 'failure message'
    assert results[1].time == 0.01
    assert results[1].extra_text == ['text']

    assert results[2].category == Category.SKIP
    assert results[2].status == 'skipped'
    assert results[2].name == 'test_foo.test3'
    assert results[2].message == 'skip message'
    assert results[2].time == 0.05
    assert results[2].extra_text == ['text2']


def test_noserunner_load_data_failing_test_with_stdout(tmpdir):
    result_file = tmpdir.join('results')
    result_txt = """
text
stdout text
"""
    result_file.write(result_txt)
    runner = NoseRunner(None, result_file.strpath)
    results = runner.load_data()
    assert results[0].extra_text == ['text', '',
                                     '----- Captured stdout -----',
                                     'stdout text']


def test_noserunner_load_data_passing_test_with_stdout(tmpdir):
    result_file = tmpdir.join('results')
    result_txt = """
stdout text
"""
    result_file.write(result_txt)
    runner = NoseRunner(None, result_file.strpath)
    results = runner.load_data()
    assert results[0].extra_text == ['----- Captured stdout -----',
                                     'stdout text']

spyder_unittest-0.3.0/spyder_unittest/backend/tests/test_jsonstream.py

# -*- coding: utf-8 -*-
#
# Copyright © 2017 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Tests for jsonstream.py"""

# Standard library imports
from io import StringIO, TextIOBase

# Local imports
from spyder_unittest.backend.jsonstream import (JSONStreamReader,
                                                JSONStreamWriter)

try:
    from unittest.mock import create_autospec
except ImportError:
    from mock import create_autospec  # Python 2


def test_jsonstreamwriter_with_list():
    stream = StringIO()
    writer = JSONStreamWriter(stream)
    writer.write([1, 2])
    assert stream.getvalue() == '6\n[1, 2]\n'


def test_jsonstreamwriter_with_unicode():
    stream = StringIO()
    writer = JSONStreamWriter(stream)
    writer.write(u'三')  # u prefix for Python2 compatibility
    assert stream.getvalue() == '8\n"\\u4e09"\n'


def test_jsonstreamwriter_flushes():
    stream = create_autospec(TextIOBase)
    writer = JSONStreamWriter(stream)
    writer.write(1)
    stream.flush.assert_called_once_with()


def test_jsonstreamreader_with_list():
    reader = JSONStreamReader()
    assert reader.consume('6\n[1, 2]\n') == [[1, 2]]


def test_jsonstreamreader_with_windows_lineending():
    reader = JSONStreamReader()
    assert reader.consume('6\r\n[1, 2]\r\n') == [[1, 2]]


def test_jsonstreamreader_with_unicode():
    reader = JSONStreamReader()
    assert reader.consume('8\n"\\u4e09"\n') == [u'三']


def test_jsonstreamreader_with_partial_frames():
    reader = JSONStreamReader()
    txt = '1\n2\n' * 3
    assert reader.consume(txt[:2]) == []
    assert reader.consume(txt[2:-2]) == [2, 2]
    assert reader.consume(txt[-2:]) == [2]


def test_jsonsteamreader_writer_integration():
    stream = StringIO()
    writer = JSONStreamWriter(stream)
    reader = JSONStreamReader()
    writer.write([1, 2])
    writer.write({'a': 'b'})
    assert reader.consume(stream.getvalue()) == [[1, 2], {'a': 'b'}]

spyder_unittest-0.3.0/spyder_unittest/backend/tests/test_pytestrunner.py

# -*- coding: utf-8 -*-
#
# Copyright © 2017 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Tests for pytestrunner.py"""

# Standard library imports
import os
import os.path as osp

# Third party imports
from qtpy.QtCore import QByteArray
from spyder.utils.misc import get_python_executable

# Local imports
from spyder_unittest.backend.pytestrunner import (PyTestRunner,
                                                  logreport_to_testresult)
from spyder_unittest.backend.runnerbase import Category, TestResult
from spyder_unittest.widgets.configdialog import Config

try:
    from unittest.mock import Mock
except ImportError:
    from mock import Mock  # Python 2


def test_pytestrunner_is_installed():
    assert PyTestRunner(None).is_installed()


def test_pytestrunner_start(monkeypatch):
    MockQProcess = Mock()
    monkeypatch.setattr('spyder_unittest.backend.runnerbase.QProcess',
                        MockQProcess)
    mock_process = MockQProcess()
    mock_process.systemEnvironment = lambda: ['VAR=VALUE', 'PYTHONPATH=old']

    MockEnvironment = Mock()
    monkeypatch.setattr(
        'spyder_unittest.backend.runnerbase.QProcessEnvironment',
        MockEnvironment)
    mock_environment = MockEnvironment()

    mock_remove = Mock(side_effect=OSError())
    monkeypatch.setattr('spyder_unittest.backend.runnerbase.os.remove',
                        mock_remove)

    MockJSONStreamReader = Mock()
    monkeypatch.setattr(
        'spyder_unittest.backend.pytestrunner.JSONStreamReader',
        MockJSONStreamReader)
    mock_reader = MockJSONStreamReader()

    runner = PyTestRunner(None, 'results')
    config = Config('py.test', 'wdir')
    runner.start(config, ['pythondir'])

    mock_process.setWorkingDirectory.assert_called_once_with('wdir')
    mock_process.finished.connect.assert_called_once_with(runner.finished)
    mock_process.setProcessEnvironment.assert_called_once_with(
        mock_environment)
    workerfile = os.path.abspath(
        os.path.join(os.path.dirname(__file__), os.pardir, 'pytestworker.py'))
    mock_process.start.assert_called_once_with(
        get_python_executable(), [workerfile])
    mock_environment.insert.assert_any_call('VAR', 'VALUE')
    # mock_environment.insert.assert_any_call('PYTHONPATH', 'pythondir:old')
    # TODO: Find out why above test fails
    mock_remove.called_once_with('results')
    assert runner.reader is mock_reader


def test_pytestrunner_read_output(monkeypatch):
    runner = PyTestRunner(None)
    runner.process = Mock()
    qbytearray = QByteArray(b'encoded')
    runner.process.readAllStandardOutput = Mock(return_value=qbytearray)
    runner.reader = Mock()
    runner.reader.consume = Mock(return_value='decoded')
    runner.process_output = Mock()
    runner.read_output()
    assert runner.reader.consume.called_once_with('encoded')
    assert runner.process_output.called_once_with('decoded')


def test_pytestrunner_process_output_with_collected(qtbot):
    runner = PyTestRunner(None)
    output = [{'event': 'collected', 'nodeid': 'spam.py::ham'},
              {'event': 'collected', 'nodeid': 'eggs.py::bacon'}]
    with qtbot.waitSignal(runner.sig_collected) as blocker:
        runner.process_output(output)
    expected = ['spam.ham', 'eggs.bacon']
    assert blocker.args == [expected]


def test_pytestrunner_process_output_with_collecterror(qtbot):
    runner = PyTestRunner(None)
    output = [{
        'event': 'collecterror',
        'nodeid': 'ham/spam.py',
        'longrepr': 'msg'
    }]
    with qtbot.waitSignal(runner.sig_collecterror) as blocker:
        runner.process_output(output)
    expected = [('ham.spam', 'msg')]
    assert blocker.args == [expected]


def test_pytestrunner_process_output_with_starttest(qtbot):
    runner = PyTestRunner(None)
    output = [{'event': 'starttest', 'nodeid': 'ham/spam.py::ham'},
              {'event': 'starttest', 'nodeid': 'ham/eggs.py::bacon'}]
    with qtbot.waitSignal(runner.sig_starttest) as blocker:
        runner.process_output(output)
    expected = ['ham.spam.ham', 'ham.eggs.bacon']
    assert blocker.args == [expected]


def standard_logreport_output():
    return {
        'event': 'logreport',
        'when': 'call',
        'outcome': 'passed',
        'nodeid': 'foo.py::bar',
        'filename': 'foo.py',
        'lineno': 24,
        'duration': 42
    }


def test_pytestrunner_process_output_with_logreport_passed(qtbot):
    runner = PyTestRunner(None)
    runner.config = Config(wdir='ham')
    output = [standard_logreport_output()]
    with qtbot.waitSignal(runner.sig_testresult) as blocker:
        runner.process_output(output)
    expected = [TestResult(Category.OK, 'ok', 'foo.bar', time=42,
                           filename=osp.join('ham', 'foo.py'), lineno=24)]
    assert blocker.args == [expected]


def test_logreport_to_testresult_passed():
    report = standard_logreport_output()
    expected = TestResult(Category.OK, 'ok', 'foo.bar', time=42,
                          filename=osp.join('ham', 'foo.py'), lineno=24)
    assert logreport_to_testresult(report, Config(wdir='ham')) == expected


def test_logreport_to_testresult_failed():
    report = standard_logreport_output()
    report['outcome'] = 'failed'
    report['message'] = 'msg'
    report['longrepr'] = 'exception text'
    expected = TestResult(Category.FAIL, 'failure', 'foo.bar',
                          message='msg', time=42,
                          extra_text='exception text',
                          filename=osp.join('ham', 'foo.py'), lineno=24)
    assert logreport_to_testresult(report, Config(wdir='ham')) == expected


def test_logreport_to_testresult_skipped():
    report = standard_logreport_output()
    report['when'] = 'setup'
    report['outcome'] = 'skipped'
    report['longrepr'] = ['file', 24, 'skipmsg']
    expected = TestResult(Category.SKIP, 'skipped', 'foo.bar',
                          time=42, extra_text='skipmsg',
                          filename=osp.join('ham', 'foo.py'), lineno=24)
    assert logreport_to_testresult(report, Config(wdir='ham')) == expected


def test_logreport_to_testresult_xfail():
    report = standard_logreport_output()
    report['outcome'] = 'skipped'
    report['message'] = 'msg'
    report['longrepr'] = 'exception text'
    report['wasxfail'] = ''
    expected = TestResult(Category.SKIP, 'skipped', 'foo.bar',
                          message='msg', time=42,
                          extra_text='exception text',
                          filename=osp.join('ham', 'foo.py'), lineno=24)
    assert logreport_to_testresult(report, Config(wdir='ham')) == expected


def test_logreport_to_testresult_xpass():
    report = standard_logreport_output()
    report['wasxfail'] = ''
    expected = TestResult(Category.OK, 'ok', 'foo.bar', time=42,
                          filename=osp.join('ham', 'foo.py'), lineno=24)
    assert logreport_to_testresult(report, Config(wdir='ham')) == expected


def test_logreport_to_testresult_with_output():
    report = standard_logreport_output()
    report['sections'] = [['Captured stdout call', 'ham\n'],
                          ['Captured stderr call', 'spam\n']]
    txt = ('----- Captured stdout call -----\nham\n'
           '----- Captured stderr call -----\nspam\n')
    expected = TestResult(Category.OK, 'ok', 'foo.bar', time=42,
                          extra_text=txt,
                          filename=osp.join('ham', 'foo.py'), lineno=24)
    assert logreport_to_testresult(report, Config(wdir='ham')) == expected

spyder_unittest-0.3.0/spyder_unittest/backend/tests/__init__.py

# -*- coding: utf-8 -*-
#
# Copyright © 2017 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Tests for spyder_unittest.backend."""

spyder_unittest-0.3.0/spyder_unittest/__init__.py

# -*- coding: utf-8 -*-
#
# Copyright © 2013 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Spyder unittest plugin."""

# Local imports
from .unittestplugin import UnitTestPlugin as PLUGIN_CLASS

__version__ = '0.3.0'

PLUGIN_CLASS

spyder_unittest-0.3.0/spyder_unittest/widgets/unittestgui.py

# -*- coding: utf-8 -*-
#
# Copyright © 2013 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Unit Testing widget."""

from __future__ import with_statement

# Standard library imports
import copy
import os.path as osp
import sys

# Third party imports
from qtpy.QtCore import Signal
from qtpy.QtWidgets import (QHBoxLayout, QLabel, QMenu, QMessageBox,
                            QToolButton, QVBoxLayout, QWidget)
from spyder.config.base import get_conf_path, get_translation
from spyder.utils import icon_manager as ima
from spyder.utils.qthelpers import create_action, create_toolbutton
from spyder.widgets.variableexplorer.texteditor import TextEditor

# Local imports
from spyder_unittest.backend.frameworkregistry import FrameworkRegistry
from spyder_unittest.backend.noserunner import NoseRunner
from spyder_unittest.backend.pytestrunner import PyTestRunner
from spyder_unittest.backend.runnerbase import Category, TestResult
from spyder_unittest.backend.unittestrunner import UnittestRunner
from spyder_unittest.widgets.configdialog import Config, ask_for_config
from spyder_unittest.widgets.datatree import TestDataModel, TestDataView

# This is needed for testing this module as a stand-alone script
try:
    _ = get_translation("unittest", dirname="spyder_unittest")
except KeyError as error:
    import gettext
    _ = gettext.gettext

# Supported testing frameworks
FRAMEWORKS = {NoseRunner, PyTestRunner, UnittestRunner}


class UnitTestWidget(QWidget):
    """
    Unit testing widget.

    Attributes
    ----------
    config : Config or None
        Configuration for running tests, or `None` if not set.
    default_wdir : str
        Default choice of working directory.
    framework_registry : FrameworkRegistry
        Registry of supported testing frameworks.
    pre_test_hook : function returning bool or None
        If set, contains function to run before running tests; abort the
        test run if hook returns False.
    pythonpath : list of str
        Directories to be added to the Python path when running tests.
    testrunner : TestRunner or None
        Object associated with the current test process, or `None` if no
        test process is running at the moment.

    Signals
    -------
    sig_finished:
        Emitted when plugin finishes processing tests.
    sig_newconfig(Config):
        Emitted when test config is changed. Argument is new config,
        which is always valid.
    sig_edit_goto(str, int):
        Emitted if editor should go to some position. Arguments are
        file name and line number (zero-based).
    """

    VERSION = '0.0.1'
    sig_finished = Signal()
    sig_newconfig = Signal(Config)
    sig_edit_goto = Signal(str, int)

    def __init__(self, parent, options_button=None, options_menu=None):
        """Unit testing widget."""
        QWidget.__init__(self, parent)

        self.setWindowTitle("Unit testing")
        self.config = None
        self.pythonpath = None
        self.default_wdir = None
        self.pre_test_hook = None
        self.testrunner = None
        self.output = None
        self.testdataview = TestDataView(self)
        self.testdatamodel = TestDataModel(self)
        self.testdataview.setModel(self.testdatamodel)
        self.testdataview.sig_edit_goto.connect(self.sig_edit_goto)
        self.testdatamodel.sig_summary.connect(self.set_status_label)

        self.framework_registry = FrameworkRegistry()
        for runner in FRAMEWORKS:
            self.framework_registry.register(runner)

        self.start_button = create_toolbutton(self, text_beside_icon=True)
        self.set_running_state(False)

        self.status_label = QLabel('', self)

        self.create_actions()
        self.options_menu = options_menu or QMenu()
        self.options_menu.addAction(self.config_action)
        self.options_menu.addAction(self.log_action)
        self.options_menu.addAction(self.collapse_action)
        self.options_menu.addAction(self.expand_action)

        self.options_button = options_button or QToolButton(self)
        self.options_button.setIcon(ima.icon('tooloptions'))
        self.options_button.setPopupMode(QToolButton.InstantPopup)
        self.options_button.setMenu(self.options_menu)
        self.options_button.setAutoRaise(True)

        hlayout = QHBoxLayout()
        hlayout.addWidget(self.start_button)
        hlayout.addStretch()
        hlayout.addWidget(self.status_label)
        hlayout.addStretch()
        hlayout.addWidget(self.options_button)

        layout = QVBoxLayout()
        layout.addLayout(hlayout)
        layout.addWidget(self.testdataview)
        self.setLayout(layout)

    @property
    def config(self):
        """Return current test configuration."""
        return self._config

    @config.setter
    def config(self, new_config):
        """Set test configuration and emit sig_newconfig if valid."""
        self._config = new_config
        if self.config_is_valid():
            self.sig_newconfig.emit(new_config)

    def set_config_without_emit(self, new_config):
        """Set test configuration but do not emit any signal."""
        self._config = new_config

    def create_actions(self):
        """Create the actions for the unittest widget."""
        self.config_action = create_action(
            self,
            text=_("Configure ..."),
            icon=ima.icon('configure'),
            triggered=self.configure)
        self.log_action = create_action(
            self,
            text=_('Show output'),
            icon=ima.icon('log'),
            triggered=self.show_log)
        self.collapse_action = create_action(
            self,
            text=_('Collapse all'),
            icon=ima.icon('collapse'),
            triggered=self.testdataview.collapseAll)
        self.expand_action = create_action(
            self,
            text=_('Expand all'),
            icon=ima.icon('expand'),
            triggered=self.testdataview.expandAll)
        return [
            self.config_action, self.log_action, self.collapse_action,
            self.expand_action
        ]

    def show_log(self):
        """Show output of testing process."""
        if self.output:
            TextEditor(
                self.output,
                title=_("Unit testing output"),
                readonly=True,
                size=(700, 500)).exec_()

    def configure(self):
        """Configure tests."""
        if self.config:
            oldconfig = self.config
        else:
            oldconfig = Config(wdir=self.default_wdir)
        frameworks = self.framework_registry.frameworks
        config = ask_for_config(frameworks, oldconfig)
        if config:
            self.config = config

    def config_is_valid(self, config=None):
        """
        Return whether configuration for running tests is valid.

        Parameters
        ----------
        config : Config or None
            configuration for unit tests. If None, use `self.config`.
        """
        if config is None:
            config = self.config
        return (config and config.framework
                and osp.isdir(config.wdir))

    def maybe_configure_and_start(self):
        """
        Ask for configuration if necessary and then run tests.

        If the current test configuration is not valid (or not set), then
        ask the user to configure. Then run the tests.
        """
        if not self.config_is_valid():
            self.configure()
        if self.config_is_valid():
            self.run_tests()

    def run_tests(self, config=None):
        """
        Run unit tests.

        First, run `self.pre_test_hook` if it is set, and abort if its
        return value is `False`. Then, run the unit tests.

        The process's output is consumed by `read_output()`. When the
        process finishes, the `finish` signal is emitted.

        Parameters
        ----------
        config : Config or None
            configuration for unit tests. If None, use `self.config`.
            In either case, configuration should be valid.
""" if self.pre_test_hook: if self.pre_test_hook() is False: return if config is None: config = self.config pythonpath = self.pythonpath self.testdatamodel.testresults = [] self.testdetails = [] tempfilename = get_conf_path('unittest.results') self.testrunner = self.framework_registry.create_runner( config.framework, self, tempfilename) self.testrunner.sig_finished.connect(self.process_finished) self.testrunner.sig_collected.connect(self.tests_collected) self.testrunner.sig_collecterror.connect(self.tests_collect_error) self.testrunner.sig_starttest.connect(self.tests_started) self.testrunner.sig_testresult.connect(self.tests_yield_result) try: self.testrunner.start(config, pythonpath) except RuntimeError: QMessageBox.critical(self, _("Error"), _("Process failed to start")) else: self.set_running_state(True) self.status_label.setText(_('Running tests ...')) def set_running_state(self, state): """ Change start/stop button according to whether tests are running. If tests are running, then display a stop button, otherwise display a start button. Parameters ---------- state : bool Set to True if tests are running. """ button = self.start_button try: button.clicked.disconnect() except TypeError: # raised if not connected to any handler pass if state: button.setIcon(ima.icon('stop')) button.setText(_('Stop')) button.setToolTip(_('Stop current test process')) if self.testrunner: button.clicked.connect(self.testrunner.stop_if_running) else: button.setIcon(ima.icon('run')) button.setText(_("Run tests")) button.setToolTip(_('Run unit tests')) button.clicked.connect( lambda checked: self.maybe_configure_and_start()) def process_finished(self, testresults, output): """ Called when unit test process finished. This function collects and shows the test results and output. Parameters ---------- testresults : list of TestResult or None `None` indicates all test results have already been transmitted. 
        output : str
        """
        self.output = output
        self.set_running_state(False)
        self.testrunner = None
        self.log_action.setEnabled(bool(output))
        if testresults:
            self.testdatamodel.testresults = testresults
        self.replace_pending_with_not_run()
        self.sig_finished.emit()

    def replace_pending_with_not_run(self):
        """Change status of pending tests to 'not run'."""
        new_results = []
        for res in self.testdatamodel.testresults:
            if res.category == Category.PENDING:
                new_res = copy.copy(res)
                new_res.category = Category.SKIP
                new_res.status = _('not run')
                new_results.append(new_res)
        if new_results:
            self.testdatamodel.update_testresults(new_results)

    def tests_collected(self, testnames):
        """Called when tests are collected."""
        testresults = [TestResult(Category.PENDING, _('pending'), name)
                       for name in testnames]
        self.testdatamodel.add_testresults(testresults)

    def tests_started(self, testnames):
        """Called when tests are about to be run."""
        testresults = [TestResult(Category.PENDING, _('pending'), name,
                                  message=_('running'))
                       for name in testnames]
        self.testdatamodel.update_testresults(testresults)

    def tests_collect_error(self, testnames_plus_msg):
        """Called when errors are encountered during collection."""
        testresults = [TestResult(Category.FAIL, _('failure'), name,
                                  message=_('collection error'),
                                  extra_text=msg)
                       for name, msg in testnames_plus_msg]
        self.testdatamodel.add_testresults(testresults)

    def tests_yield_result(self, testresults):
        """Called when test results are received."""
        self.testdatamodel.update_testresults(testresults)

    def set_status_label(self, msg):
        """
        Set status label to the specified message.

        Arguments
        ---------
        msg: str
        """
        self.status_label.setText('{}'.format(msg))


def test():
    """
    Run widget test.

    Show the unittest widgets, configured so that our own tests are run when
    the user clicks "Run tests".
""" from spyder.utils.qthelpers import qapplication app = qapplication() widget = UnitTestWidget(None) # set wdir to .../spyder_unittest wdir = osp.abspath(osp.join(osp.dirname(__file__), osp.pardir)) widget.config = Config('py.test', wdir) # add wdir's parent to python path, so that `import spyder_unittest` works rootdir = osp.abspath(osp.join(wdir, osp.pardir)) widget.pythonpath = rootdir widget.resize(800, 600) widget.show() sys.exit(app.exec_()) if __name__ == '__main__': test() spyder_unittest-0.3.0/spyder_unittest/widgets/configdialog.py0000644072410100006200000001271013163162712024612 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2013 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """ Functionality for asking the user to specify the test configuration. The main entry point is `ask_for_config()`. """ # Standard library imports from collections import namedtuple import os.path as osp # Third party imports from qtpy.compat import getexistingdirectory from qtpy.QtCore import Slot from qtpy.QtWidgets import (QApplication, QComboBox, QDialog, QDialogButtonBox, QHBoxLayout, QLabel, QLineEdit, QPushButton, QVBoxLayout) from spyder.config.base import get_translation from spyder.py3compat import getcwd, to_text_string from spyder.utils import icon_manager as ima try: _ = get_translation("unittest", dirname="spyder_unittest") except KeyError as error: import gettext _ = gettext.gettext Config = namedtuple('Config', ['framework', 'wdir']) Config.__new__.__defaults__ = (None, '') class ConfigDialog(QDialog): """ Dialog window for specifying test configuration. The window contains a combobox with all the frameworks, a line edit box for specifying the working directory, a button to use a file browser for selecting the directory, and OK and Cancel buttons. Initially, no framework is selected and the OK button is disabled. Selecting a framework enables the OK button. 
""" def __init__(self, frameworks, config, parent=None): """ Construct a dialog window. Parameters ---------- frameworks : dict of (str, type) Names of all supported frameworks with their associated class (assumed to be a subclass of RunnerBase) config : Config Initial configuration parent : QWidget """ super(ConfigDialog, self).__init__(parent) self.setWindowTitle(_('Configure tests')) layout = QVBoxLayout(self) framework_layout = QHBoxLayout() framework_label = QLabel(_('Test framework')) framework_layout.addWidget(framework_label) self.framework_combobox = QComboBox(self) for ix, (name, runner) in enumerate(sorted(frameworks.items())): installed = runner.is_installed() if installed: label = name else: label = '{} ({})'.format(name, _('not available')) self.framework_combobox.addItem(label) self.framework_combobox.model().item(ix).setEnabled(installed) framework_layout.addWidget(self.framework_combobox) layout.addLayout(framework_layout) layout.addSpacing(10) wdir_label = QLabel(_('Directory from which to run tests')) layout.addWidget(wdir_label) wdir_layout = QHBoxLayout() self.wdir_lineedit = QLineEdit(self) wdir_layout.addWidget(self.wdir_lineedit) self.wdir_button = QPushButton(ima.icon('DirOpenIcon'), '', self) self.wdir_button.setToolTip(_("Select directory")) self.wdir_button.clicked.connect(lambda: self.select_directory()) wdir_layout.addWidget(self.wdir_button) layout.addLayout(wdir_layout) layout.addSpacing(20) self.buttons = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel) layout.addWidget(self.buttons) self.buttons.accepted.connect(self.accept) self.buttons.rejected.connect(self.reject) self.ok_button = self.buttons.button(QDialogButtonBox.Ok) self.ok_button.setEnabled(False) self.framework_combobox.currentIndexChanged.connect( self.framework_changed) self.framework_combobox.setCurrentIndex(-1) if config.framework: index = self.framework_combobox.findText(config.framework) if index != -1: self.framework_combobox.setCurrentIndex(index) 
self.wdir_lineedit.setText(config.wdir) @Slot(int) def framework_changed(self, index): """Called when selected framework changes.""" if index != -1: self.ok_button.setEnabled(True) def select_directory(self): """Display dialog for user to select working directory.""" basedir = to_text_string(self.wdir_lineedit.text()) if not osp.isdir(basedir): basedir = getcwd() title = _("Select directory") directory = getexistingdirectory(self, title, basedir) if directory: self.wdir_lineedit.setText(directory) def get_config(self): """ Return the test configuration specified by the user. Returns ------- Config Test configuration """ framework = self.framework_combobox.currentText() if framework == '': framework = None return Config(framework=framework, wdir=self.wdir_lineedit.text()) def ask_for_config(frameworks, config, parent=None): """ Ask user to specify a test configuration. This is a convenience function which displays a modal dialog window of type `ConfigDialog`. """ dialog = ConfigDialog(frameworks, config, parent) result = dialog.exec_() if result == QDialog.Accepted: return dialog.get_config() if __name__ == '__main__': app = QApplication([]) frameworks = ['nose', 'py.test', 'unittest'] config = Config(framework=None, wdir=getcwd()) print(ask_for_config(frameworks, config)) spyder_unittest-0.3.0/spyder_unittest/widgets/__init__.py0000644072410100006200000000027313047602633023727 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2013 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Widgets for unittest plugin.""" spyder_unittest-0.3.0/spyder_unittest/widgets/datatree.py0000644072410100006200000003546013241562107023764 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2017 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Model and view classes for storing and displaying test results.""" # Standard library imports 
from collections import Counter from operator import attrgetter # Third party imports from qtpy import PYQT4 from qtpy.QtCore import QAbstractItemModel, QModelIndex, Qt, Signal from qtpy.QtGui import QBrush, QColor, QFont from qtpy.QtWidgets import QMenu, QTreeView from spyder.config.base import get_translation from spyder.utils.qthelpers import create_action # Local imports from spyder_unittest.backend.abbreviator import Abbreviator from spyder_unittest.backend.runnerbase import Category try: _ = get_translation("unittest", dirname="spyder_unittest") except KeyError as error: import gettext _ = gettext.gettext COLORS = { Category.OK: QBrush(QColor("#C1FFBA")), Category.FAIL: QBrush(QColor("#FF5050")), Category.SKIP: QBrush(QColor("#C5C5C5")), Category.PENDING: QBrush(QColor("#C5C5C5")) } STATUS_COLUMN = 0 NAME_COLUMN = 1 MESSAGE_COLUMN = 2 TIME_COLUMN = 3 HEADERS = [_('Status'), _('Name'), _('Message'), _('Time (ms)')] TOPLEVEL_ID = 2 ** 32 - 1 class TestDataView(QTreeView): """ Tree widget displaying test results. Signals ------- sig_edit_goto(str, int): Emitted if editor should go to some position. Arguments are file name and line number (zero-based). """ sig_edit_goto = Signal(str, int) def __init__(self, parent=None): """Constructor.""" QTreeView.__init__(self, parent) self.header().setDefaultAlignment(Qt.AlignCenter) self.setItemsExpandable(True) self.setSortingEnabled(True) self.header().setSortIndicatorShown(False) self.header().sortIndicatorChanged.connect(self.sortByColumn) self.header().sortIndicatorChanged.connect( lambda col, order: self.header().setSortIndicatorShown(True)) self.setExpandsOnDoubleClick(False) self.doubleClicked.connect(self.go_to_test_definition) def reset(self): """ Reset internal state of the view and read all data afresh from model. This function is called whenever the model data changes drastically. 
""" QTreeView.reset(self) self.resizeColumns() self.spanFirstColumn(0, self.model().rowCount() - 1) def rowsInserted(self, parent, firstRow, lastRow): """Called when rows are inserted.""" QTreeView.rowsInserted(self, parent, firstRow, lastRow) self.resizeColumns() self.spanFirstColumn(firstRow, lastRow) def dataChanged(self, topLeft, bottomRight, roles=[]): """Called when data in model has changed.""" if PYQT4: QTreeView.dataChanged(self, topLeft, bottomRight) else: QTreeView.dataChanged(self, topLeft, bottomRight, roles) self.resizeColumns() while topLeft.parent().isValid(): topLeft = topLeft.parent() while bottomRight.parent().isValid(): bottomRight = bottomRight.parent() self.spanFirstColumn(topLeft.row(), bottomRight.row()) def contextMenuEvent(self, event): """Called when user requests a context menu.""" index = self.indexAt(event.pos()) index = self.make_index_canonical(index) if not index: return # do nothing if no item under mouse position contextMenu = self.build_context_menu(index) contextMenu.exec_(event.globalPos()) def go_to_test_definition(self, index): """Ask editor to go to definition of test corresponding to index.""" index = self.make_index_canonical(index) filename, lineno = self.model().data(index, Qt.UserRole) if filename is not None: if lineno is None: lineno = 0 self.sig_edit_goto.emit(filename, lineno) def make_index_canonical(self, index): """ Convert given index to canonical index for the same test. For every test, the canonical index points to the item on the top level in the first column corresponding to the given position. If the given index is invalid, then return None. 
""" if not index.isValid(): return None while index.parent().isValid(): # find top-level node index = index.parent() index = index.sibling(index.row(), 0) # go to first column return index def build_context_menu(self, index): """Build context menu for test item that given index points to.""" contextMenu = QMenu(self) if self.isExpanded(index): menuItem = create_action(self, _('Collapse'), triggered=lambda: self.collapse(index)) else: menuItem = create_action(self, _('Expand'), triggered=lambda: self.expand(index)) menuItem.setEnabled(self.model().hasChildren(index)) contextMenu.addAction(menuItem) menuItem = create_action( self, _('Go to definition'), triggered=lambda: self.go_to_test_definition(index)) test_location = self.model().data(index, Qt.UserRole) menuItem.setEnabled(test_location[0] is not None) contextMenu.addAction(menuItem) return contextMenu def resizeColumns(self): """Resize column to fit their contents.""" for col in range(self.model().columnCount()): self.resizeColumnToContents(col) def spanFirstColumn(self, firstRow, lastRow): """ Make first column span whole row in second-level children. Note: Second-level children display the test output. Arguments --------- firstRow : int Index of first row to act on. lastRow : int Index of last row to act on. Note that this row is included in the range, following Qt conventions and contrary to Python conventions. """ model = self.model() for row in range(firstRow, lastRow + 1): index = model.index(row, 0) for i in range(model.rowCount(index)): self.setFirstColumnSpanned(i, index, True) class TestDataModel(QAbstractItemModel): """ Model class storing test results for display. Test results are stored as a list of TestResults in the property `self.testresults`. Every test is exposed as a child of the root node, with extra information as second-level nodes. As in every model, an iteem of data is identified by its index, which is a tuple (row, column, id). The id is TOPLEVEL_ID for top-level items. 
For level-2 items, the id is the index of the test in `self.testresults`. Signals ------- sig_summary(str) Emitted with new summary if test results change. """ sig_summary = Signal(str) def __init__(self, parent=None): """Constructor.""" QAbstractItemModel.__init__(self, parent) self.abbreviator = Abbreviator() self.testresults = [] try: self.monospace_font = parent.window().editor.get_plugin_font() except AttributeError: # If run standalone for testing self.monospace_font = QFont("Courier New") self.monospace_font.setPointSize(10) @property def testresults(self): """List of test results.""" return self._testresults @testresults.setter def testresults(self, new_value): """Setter for test results.""" self.beginResetModel() self.abbreviator = Abbreviator(res.name for res in new_value) self._testresults = new_value self.endResetModel() self.emit_summary() def add_testresults(self, new_tests): """ Add new test results to the model. Arguments --------- new_tests : list of TestResult """ firstRow = len(self.testresults) lastRow = firstRow + len(new_tests) - 1 for test in new_tests: self.abbreviator.add(test.name) self.beginInsertRows(QModelIndex(), firstRow, lastRow) self.testresults.extend(new_tests) self.endInsertRows() self.emit_summary() def update_testresults(self, new_results): """ Update some test results by new results. The tests in `new_results` should already be included in `self.testresults` (otherwise a `KeyError` is raised). This function replaces the existing results by `new_results`. 
Arguments --------- new_results: list of TestResult """ idx_min = idx_max = None for new_result in new_results: for (idx, old_result) in enumerate(self.testresults): if old_result.name == new_result.name: self.testresults[idx] = new_result if idx_min is None: idx_min = idx_max = idx else: idx_min = min(idx_min, idx) idx_max = max(idx_max, idx) break else: raise KeyError('test not found') if idx_min is not None: self.dataChanged.emit(self.index(idx_min, 0), self.index(idx_max, len(HEADERS) - 1)) self.emit_summary() def index(self, row, column, parent=QModelIndex()): """ Construct index to given item of data. If `parent` not valid, then the item of data is on the top level. """ if not self.hasIndex(row, column, parent): # check bounds etc. return QModelIndex() if not parent.isValid(): return self.createIndex(row, column, TOPLEVEL_ID) else: testresult_index = parent.row() return self.createIndex(row, column, testresult_index) def data(self, index, role): """ Return data in `role` for item of data that `index` points to. If `role` is `DisplayRole`, then return string to display. If `role` is `TooltipRole`, then return string for tool tip. If `role` is `FontRole`, then return monospace font for level-2 items. If `role` is `BackgroundRole`, then return background color. If `role` is `TextAlignmentRole`, then return right-aligned for time. If `role` is `UserRole`, then return location of test as (file, line). 
""" if not index.isValid(): return None row = index.row() column = index.column() id = index.internalId() if role == Qt.DisplayRole: if id != TOPLEVEL_ID: return self.testresults[id].extra_text[index.row()] elif column == STATUS_COLUMN: return self.testresults[row].status elif column == NAME_COLUMN: return self.abbreviator.abbreviate(self.testresults[row].name) elif column == MESSAGE_COLUMN: return self.testresults[row].message elif column == TIME_COLUMN: time = self.testresults[row].time return '' if time is None else '{:.2f}'.format(time * 1e3) elif role == Qt.ToolTipRole: if id == TOPLEVEL_ID and column == NAME_COLUMN: return self.testresults[row].name elif role == Qt.FontRole: if id != TOPLEVEL_ID: return self.monospace_font elif role == Qt.BackgroundRole: if id == TOPLEVEL_ID: testresult = self.testresults[row] return COLORS[testresult.category] elif role == Qt.TextAlignmentRole: if id == TOPLEVEL_ID and column == TIME_COLUMN: return Qt.AlignRight elif role == Qt.UserRole: if id == TOPLEVEL_ID: testresult = self.testresults[row] return (testresult.filename, testresult.lineno) else: return None def headerData(self, section, orientation, role=Qt.DisplayRole): """Return data for specified header.""" if orientation == Qt.Horizontal and role == Qt.DisplayRole: return HEADERS[section] else: return None def parent(self, index): """Return index to parent of item that `index` points to.""" if not index.isValid(): return QModelIndex() id = index.internalId() if id == TOPLEVEL_ID: return QModelIndex() else: return self.index(id, 0) def rowCount(self, parent=QModelIndex()): """Return number of rows underneath `parent`.""" if not parent.isValid(): return len(self.testresults) if parent.internalId() == TOPLEVEL_ID and parent.column() == 0: return len(self.testresults[parent.row()].extra_text) return 0 def columnCount(self, parent=QModelIndex()): """Return number of rcolumns underneath `parent`.""" if not parent.isValid(): return len(HEADERS) else: return 1 def sort(self, 
column, order): """Sort model by `column` in `order`.""" def key_time(result): return result.time or -1 self.beginResetModel() reverse = order == Qt.DescendingOrder if column == STATUS_COLUMN: self.testresults.sort(key=attrgetter('category', 'status'), reverse=reverse) elif column == NAME_COLUMN: self.testresults.sort(key=attrgetter('name'), reverse=reverse) elif column == MESSAGE_COLUMN: self.testresults.sort(key=attrgetter('message'), reverse=reverse) elif column == TIME_COLUMN: self.testresults.sort(key=key_time, reverse=reverse) self.endResetModel() def summary(self): """Return summary for current results.""" def n_test_or_tests(n): test_or_tests = _('test') if n == 1 else _('tests') return '{} {}'.format(n, test_or_tests) if not len(self.testresults): return _('No results to show.') counts = Counter(res.category for res in self.testresults) if all(counts[cat] == 0 for cat in (Category.FAIL, Category.OK, Category.SKIP)): txt = n_test_or_tests(counts[Category.PENDING]) return _('collected {}').format(txt) msg = _('{} failed').format(n_test_or_tests(counts[Category.FAIL])) msg += _(', {} passed').format(counts[Category.OK]) if counts[Category.SKIP]: msg += _(', {} other').format(counts[Category.SKIP]) if counts[Category.PENDING]: msg += _(', {} pending').format(counts[Category.PENDING]) return msg def emit_summary(self): """Emit sig_summary with summary for current results.""" self.sig_summary.emit(self.summary()) spyder_unittest-0.3.0/spyder_unittest/widgets/tests/0000755072410100006200000000000013241567752022766 5ustar jitseamt00000000000000spyder_unittest-0.3.0/spyder_unittest/widgets/tests/test_unittestgui.py0000644072410100006200000002053513235065064026760 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2013 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Tests for unittestgui.py.""" # Standard library imports import os # Third party imports from qtpy.QtCore import Qt import 
pytest # Local imports from spyder_unittest.backend.runnerbase import Category, TestResult from spyder_unittest.widgets.configdialog import Config from spyder_unittest.widgets.unittestgui import UnitTestWidget try: from unittest.mock import Mock except ImportError: from mock import Mock # Python 2 def test_unittestwidget_forwards_sig_edit_goto(qtbot): widget = UnitTestWidget(None) qtbot.addWidget(widget) with qtbot.waitSignal(widget.sig_edit_goto) as blocker: widget.testdataview.sig_edit_goto.emit('ham', 42) assert blocker.args == ['ham', 42] def test_unittestwidget_set_config_emits_newconfig(qtbot): widget = UnitTestWidget(None) qtbot.addWidget(widget) config = Config(wdir=os.getcwd(), framework='unittest') with qtbot.waitSignal(widget.sig_newconfig) as blocker: widget.config = config assert blocker.args == [config] assert widget.config == config def test_unittestwidget_set_config_does_not_emit_when_invalid(qtbot): widget = UnitTestWidget(None) qtbot.addWidget(widget) config = Config(wdir=os.getcwd(), framework=None) with qtbot.assertNotEmitted(widget.sig_newconfig): widget.config = config assert widget.config == config def test_unittestwidget_process_finished_updates_results(qtbot): widget = UnitTestWidget(None) widget.testdatamodel = Mock() widget.testdatamodel.summary = lambda: 'message' widget.testdatamodel.testresults = [] results = [TestResult(Category.OK, 'ok', 'hammodule.spam')] widget.process_finished(results, 'output') assert widget.testdatamodel.testresults == results def test_unittestwidget_process_finished_with_results_none(qtbot): widget = UnitTestWidget(None) widget.testdatamodel = Mock() widget.testdatamodel.summary = lambda: 'message' results = [TestResult(Category.OK, 'ok', 'hammodule.spam')] widget.testdatamodel.testresults = results widget.process_finished(None, 'output') assert widget.testdatamodel.testresults == results def test_unittestwidget_replace_pending_with_not_run(qtbot): widget = UnitTestWidget(None) widget.testdatamodel = Mock() 
results = [TestResult(Category.PENDING, 'pending', 'hammodule.eggs'), TestResult(Category.OK, 'ok', 'hammodule.spam')] widget.testdatamodel.testresults = results widget.replace_pending_with_not_run() expected = [TestResult(Category.SKIP, 'not run', 'hammodule.eggs')] widget.testdatamodel.update_testresults.assert_called_once_with(expected) def test_unittestwidget_tests_collected(qtbot): widget = UnitTestWidget(None) widget.testdatamodel = Mock() details = ['hammodule.spam', 'hammodule.eggs'] widget.tests_collected(details) results = [TestResult(Category.PENDING, 'pending', 'hammodule.spam'), TestResult(Category.PENDING, 'pending', 'hammodule.eggs')] widget.testdatamodel.add_testresults.assert_called_once_with(results) def test_unittestwidget_tests_started(qtbot): widget = UnitTestWidget(None) widget.testdatamodel = Mock() details = ['hammodule.spam'] results = [TestResult(Category.PENDING, 'pending', 'hammodule.spam', 'running')] widget.tests_started(details) widget.testdatamodel.update_testresults.assert_called_once_with(results) def test_unittestwidget_tests_collect_error(qtbot): widget = UnitTestWidget(None) widget.testdatamodel = Mock() names_plus_msg = [('hammodule.spam', 'msg')] results = [TestResult(Category.FAIL, 'failure', 'hammodule.spam', 'collection error', extra_text='msg')] widget.tests_collect_error(names_plus_msg) widget.testdatamodel.add_testresults.assert_called_once_with(results) def test_unittestwidget_tests_yield_results(qtbot): widget = UnitTestWidget(None) widget.testdatamodel = Mock() results = [TestResult(Category.OK, 'ok', 'hammodule.spam')] widget.tests_yield_result(results) widget.testdatamodel.update_testresults.assert_called_once_with(results) def test_unittestwidget_set_message(qtbot): widget = UnitTestWidget(None) widget.status_label = Mock() widget.set_status_label('xxx') widget.status_label.setText.assert_called_once_with('xxx') def test_run_tests_starts_testrunner(qtbot): widget = UnitTestWidget(None) mockRunner = Mock() 
widget.framework_registry.create_runner = Mock(return_value=mockRunner) config = Config(wdir=None, framework='ham') widget.run_tests(config) widget.framework_registry.create_runner.call_count == 1 widget.framework_registry.create_runner.call_args[0][0] == 'ham' mockRunner.start.call_count == 1 def test_run_tests_with_pre_test_hook_returning_true(qtbot): widget = UnitTestWidget(None) mockRunner = Mock() widget.framework_registry.create_runner = Mock(return_value=mockRunner) widget.pre_test_hook = Mock(return_value=True) widget.run_tests(Config()) widget.pre_test_hook.call_count == 1 mockRunner.start.call_count == 1 def test_run_tests_with_pre_test_hook_returning_false(qtbot): widget = UnitTestWidget(None) mockRunner = Mock() widget.framework_registry.create_runner = Mock(return_value=mockRunner) widget.pre_test_hook = Mock(return_value=False) widget.run_tests(Config()) widget.pre_test_hook.call_count == 1 mockRunner.start.call_count == 0 @pytest.mark.parametrize('framework', ['py.test', 'nose']) def test_run_tests_and_display_results(qtbot, tmpdir, monkeypatch, framework): """Basic integration test.""" os.chdir(tmpdir.strpath) testfilename = tmpdir.join('test_foo.py').strpath with open(testfilename, 'w') as f: f.write("def test_ok(): assert 1+1 == 2\n" "def test_fail(): assert 1+1 == 3\n") MockQMessageBox = Mock() monkeypatch.setattr('spyder_unittest.widgets.unittestgui.QMessageBox', MockQMessageBox) widget = UnitTestWidget(None) qtbot.addWidget(widget) config = Config(wdir=tmpdir.strpath, framework=framework) with qtbot.waitSignal(widget.sig_finished, timeout=10000, raising=True): widget.run_tests(config) MockQMessageBox.assert_not_called() model = widget.testdatamodel assert model.rowCount() == 2 assert model.index(0, 0).data(Qt.DisplayRole) == 'ok' assert model.index(0, 1).data(Qt.DisplayRole) == 't.test_ok' assert model.index(0, 1).data(Qt.ToolTipRole) == 'test_foo.test_ok' assert model.index(0, 2).data(Qt.DisplayRole) == '' assert model.index(1, 
0).data(Qt.DisplayRole) == 'failure' assert model.index(1, 1).data(Qt.DisplayRole) == 't.test_fail' assert model.index(1, 1).data(Qt.ToolTipRole) == 'test_foo.test_fail' def test_run_tests_using_unittest_and_display_results(qtbot, tmpdir, monkeypatch): """Basic check.""" os.chdir(tmpdir.strpath) testfilename = tmpdir.join('test_foo.py').strpath with open(testfilename, 'w') as f: f.write("import unittest\n" "class MyTest(unittest.TestCase):\n" " def test_ok(self): self.assertEqual(1+1, 2)\n" " def test_fail(self): self.assertEqual(1+1, 3)\n") MockQMessageBox = Mock() monkeypatch.setattr('spyder_unittest.widgets.unittestgui.QMessageBox', MockQMessageBox) widget = UnitTestWidget(None) qtbot.addWidget(widget) config = Config(wdir=tmpdir.strpath, framework='unittest') with qtbot.waitSignal(widget.sig_finished, timeout=10000, raising=True): widget.run_tests(config) MockQMessageBox.assert_not_called() model = widget.testdatamodel assert model.rowCount() == 2 assert model.index(0, 0).data(Qt.DisplayRole) == 'FAIL' assert model.index(0, 1).data(Qt.DisplayRole) == 't.M.test_fail' assert model.index(0, 1).data(Qt.ToolTipRole) == 'test_foo.MyTest.test_fail' assert model.index(0, 2).data(Qt.DisplayRole) == '' assert model.index(1, 0).data(Qt.DisplayRole) == 'ok' assert model.index(1, 1).data(Qt.DisplayRole) == 't.M.test_ok' assert model.index(1, 1).data(Qt.ToolTipRole) == 'test_foo.MyTest.test_ok' assert model.index(1, 2).data(Qt.DisplayRole) == '' spyder_unittest-0.3.0/spyder_unittest/widgets/tests/test_datatree.py0000644072410100006200000002352613241562107026165 0ustar jitseamt00000000000000# -*- coding: utf-8 -*- # # Copyright © 2017 Spyder Project Contributors # Licensed under the terms of the MIT License # (see LICENSE.txt for details) """Tests for unittestgui.py.""" # Third party imports from qtpy.QtCore import QModelIndex, QPoint, Qt from qtpy.QtGui import QContextMenuEvent import pytest # Local imports from spyder_unittest.backend.runnerbase import Category, TestResult 
from spyder_unittest.widgets.datatree import (COLORS, TestDataModel, TestDataView) try: from unittest.mock import Mock except ImportError: from mock import Mock # Python 2 @pytest.fixture def view_and_model(qtbot): view = TestDataView() model = TestDataModel() # setModel() before populating testresults because setModel() does a sort view.setModel(model) res = [TestResult(Category.OK, 'status', 'foo.bar'), TestResult(Category.FAIL, 'error', 'foo.bar', 'kadoom', 0, 'crash!\nboom!', filename='ham.py', lineno=42)] model.testresults = res return view, model def test_contextMenuEvent_calls_exec(view_and_model, monkeypatch): # test that a menu is displayed when clicking on an item mock_exec = Mock() monkeypatch.setattr('spyder_unittest.widgets.datatree.QMenu.exec_', mock_exec) view, model = view_and_model pos = view.visualRect(model.index(0, 0)).center() event = QContextMenuEvent(QContextMenuEvent.Mouse, pos) view.contextMenuEvent(event) assert mock_exec.called # test that no menu is displayed when clicking below the bottom item mock_exec.reset_mock() pos = view.visualRect(model.index(1, 0)).bottomRight() pos += QPoint(0, 1) event = QContextMenuEvent(QContextMenuEvent.Mouse, pos) view.contextMenuEvent(event) assert not mock_exec.called def test_go_to_test_definition_with_invalid_target(view_and_model, qtbot): view, model = view_and_model with qtbot.assertNotEmitted(view.sig_edit_goto): view.go_to_test_definition(model.index(0, 0)) def test_go_to_test_definition_with_valid_target(view_and_model, qtbot): view, model = view_and_model with qtbot.waitSignal(view.sig_edit_goto) as blocker: view.go_to_test_definition(model.index(1, 0)) assert blocker.args == ['ham.py', 42] def test_go_to_test_definition_with_lineno_none(view_and_model, qtbot): view, model = view_and_model res = model.testresults res[1].lineno = None model.testresults = res with qtbot.waitSignal(view.sig_edit_goto) as blocker: view.go_to_test_definition(model.index(1, 0)) assert blocker.args == ['ham.py', 0] def 
test_make_index_canonical_with_index_in_column2(view_and_model):
    view, model = view_and_model
    index = model.index(1, 2)
    res = view.make_index_canonical(index)
    assert res == model.index(1, 0)


def test_make_index_canonical_with_level2_index(view_and_model):
    view, model = view_and_model
    index = model.index(1, 0, model.index(1, 0))
    res = view.make_index_canonical(index)
    assert res == model.index(1, 0)


def test_make_index_canonical_with_invalid_index(view_and_model):
    view, model = view_and_model
    index = QModelIndex()
    res = view.make_index_canonical(index)
    assert res is None


def test_build_context_menu(view_and_model):
    view, model = view_and_model
    menu = view.build_context_menu(model.index(0, 0))
    assert menu.actions()[0].text() == 'Expand'
    assert menu.actions()[1].text() == 'Go to definition'


def test_build_context_menu_with_disabled_entries(view_and_model):
    view, model = view_and_model
    menu = view.build_context_menu(model.index(0, 0))
    assert not menu.actions()[0].isEnabled()
    assert not menu.actions()[1].isEnabled()


def test_build_context_menu_with_enabled_entries(view_and_model):
    view, model = view_and_model
    menu = view.build_context_menu(model.index(1, 0))
    assert menu.actions()[0].isEnabled()
    assert menu.actions()[1].isEnabled()


def test_build_context_menu_with_expanded_entry(view_and_model):
    view, model = view_and_model
    view.expand(model.index(1, 0))
    menu = view.build_context_menu(model.index(1, 0))
    assert menu.actions()[0].text() == 'Collapse'
    assert menu.actions()[0].isEnabled()


def test_testdatamodel_using_qtmodeltester(qtmodeltester):
    model = TestDataModel()
    res = [TestResult(Category.OK, 'status', 'foo.bar'),
           TestResult(Category.FAIL, 'error', 'foo.bar', 'kadoom', 0,
                      'crash!\nboom!')]
    model.testresults = res
    qtmodeltester.check(model)


def test_testdatamodel_shows_abbreviated_name_in_table(qtbot):
    model = TestDataModel()
    res = TestResult(Category.OK, 'status', 'foo.bar', '', 0, '')
    model.testresults = [res]
    index = model.index(0, 1)
    assert model.data(index, Qt.DisplayRole) == 'f.bar'


def test_testdatamodel_shows_full_name_in_tooltip(qtbot):
    model = TestDataModel()
    res = TestResult(Category.OK, 'status', 'foo.bar', '', 0, '')
    model.testresults = [res]
    index = model.index(0, 1)
    assert model.data(index, Qt.ToolTipRole) == 'foo.bar'


def test_testdatamodel_shows_time(qtmodeltester):
    model = TestDataModel()
    res = TestResult(Category.OK, 'status', 'foo.bar', time=0.0012345)
    model.testresults = [res]
    index = model.index(0, 3)
    assert model.data(index, Qt.DisplayRole) == '1.23'
    assert model.data(index, Qt.TextAlignmentRole) == Qt.AlignRight


def test_testdatamodel_shows_time_when_zero(qtmodeltester):
    model = TestDataModel()
    res = TestResult(Category.OK, 'status', 'foo.bar', time=0)
    model.testresults = [res]
    assert model.data(model.index(0, 3), Qt.DisplayRole) == '0.00'


def test_testdatamodel_shows_time_when_blank(qtmodeltester):
    model = TestDataModel()
    res = TestResult(Category.OK, 'status', 'foo.bar')
    model.testresults = [res]
    assert model.data(model.index(0, 3), Qt.DisplayRole) == ''


def test_testdatamodel_data_background():
    model = TestDataModel()
    res = [TestResult(Category.OK, 'status', 'foo.bar'),
           TestResult(Category.FAIL, 'error', 'foo.bar', 'kadoom')]
    model.testresults = res
    index = model.index(0, 0)
    assert model.data(index, Qt.BackgroundRole) == COLORS[Category.OK]
    index = model.index(1, 2)
    assert model.data(index, Qt.BackgroundRole) == COLORS[Category.FAIL]


def test_testdatamodel_data_userrole():
    model = TestDataModel()
    res = [TestResult(Category.OK, 'status', 'foo.bar', filename='somefile',
                      lineno=42)]
    model.testresults = res
    index = model.index(0, 0)
    assert model.data(index, Qt.UserRole) == ('somefile', 42)


def test_testdatamodel_add_tests(qtbot):
    def check_args1(parent, begin, end):
        return not parent.isValid() and begin == 0 and end == 0

    def check_args2(parent, begin, end):
        return not parent.isValid() and begin == 1 and end == 1

    model = TestDataModel()
    assert model.testresults == []

    result1 = TestResult(Category.OK, 'status', 'foo.bar')
    with qtbot.waitSignals([model.rowsInserted, model.sig_summary],
                           check_params_cbs=[check_args1, None],
                           raising=True):
        model.add_testresults([result1])
    assert model.testresults == [result1]

    result2 = TestResult(Category.FAIL, 'error', 'foo.bar', 'kadoom')
    with qtbot.waitSignals([model.rowsInserted, model.sig_summary],
                           check_params_cbs=[check_args2, None],
                           raising=True):
        model.add_testresults([result2])
    assert model.testresults == [result1, result2]


def test_testdatamodel_replace_tests(qtbot):
    def check_args(topLeft, bottomRight, *args):
        return (topLeft.row() == 0 and topLeft.column() == 0
                and not topLeft.parent().isValid()
                and bottomRight.row() == 0 and bottomRight.column() == 3
                and not bottomRight.parent().isValid())

    model = TestDataModel()
    result1 = TestResult(Category.OK, 'status', 'foo.bar')
    model.testresults = [result1]
    result2 = TestResult(Category.FAIL, 'error', 'foo.bar', 'kadoom')
    with qtbot.waitSignals([model.dataChanged, model.sig_summary],
                           check_params_cbs=[check_args, None],
                           raising=True):
        model.update_testresults([result2])
    assert model.testresults == [result2]


STANDARD_TESTRESULTS = [
    TestResult(Category.OK, 'status', 'foo.bar', time=2),
    TestResult(Category.FAIL, 'failure', 'fu.baz', 'kaboom', time=1),
    TestResult(Category.FAIL, 'error', 'fu.bar', 'boom')]


def test_testdatamodel_sort_by_status_ascending(qtbot):
    model = TestDataModel()
    model.testresults = STANDARD_TESTRESULTS[:]
    with qtbot.waitSignal(model.modelReset):
        model.sort(0, Qt.AscendingOrder)
    expected = [STANDARD_TESTRESULTS[k] for k in [2, 1, 0]]
    assert model.testresults == expected


def test_testdatamodel_sort_by_status_descending():
    model = TestDataModel()
    model.testresults = STANDARD_TESTRESULTS[:]
    model.sort(0, Qt.DescendingOrder)
    expected = [STANDARD_TESTRESULTS[k] for k in [0, 1, 2]]
    assert model.testresults == expected


def test_testdatamodel_sort_by_name():
    model = TestDataModel()
    model.testresults = STANDARD_TESTRESULTS[:]
    model.sort(1, Qt.AscendingOrder)
    expected = [STANDARD_TESTRESULTS[k] for k in [0, 2, 1]]
    assert model.testresults == expected


def test_testdatamodel_sort_by_message():
    model = TestDataModel()
    model.testresults = STANDARD_TESTRESULTS[:]
    model.sort(2, Qt.AscendingOrder)
    expected = [STANDARD_TESTRESULTS[k] for k in [0, 2, 1]]
    assert model.testresults == expected


def test_testdatamodel_sort_by_time():
    model = TestDataModel()
    model.testresults = STANDARD_TESTRESULTS[:]
    model.sort(3, Qt.AscendingOrder)
    expected = [STANDARD_TESTRESULTS[k] for k in [2, 1, 0]]
    assert model.testresults == expected

spyder_unittest-0.3.0/spyder_unittest/widgets/tests/test_configdialog.py

# -*- coding: utf-8 -*-
#
# Copyright © 2013 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Tests for configdialog.py."""

# Standard library imports
import os

# Third party imports
from qtpy.QtWidgets import QDialogButtonBox

# Local imports
from spyder_unittest.widgets.configdialog import Config, ConfigDialog


class SpamRunner:
    name = 'spam'

    @classmethod
    def is_installed(cls):
        return False


class HamRunner:
    name = 'ham'

    @classmethod
    def is_installed(cls):
        return True


class EggsRunner:
    name = 'eggs'

    @classmethod
    def is_installed(cls):
        return True


frameworks = {r.name: r for r in [SpamRunner, HamRunner, EggsRunner]}


def default_config():
    return Config(framework=None, wdir=os.getcwd())


def test_configdialog_uses_frameworks(qtbot):
    configdialog = ConfigDialog({'eggs': EggsRunner}, default_config())
    assert configdialog.framework_combobox.count() == 1
    assert configdialog.framework_combobox.itemText(0) == 'eggs'


def test_configdialog_indicates_unvailable_frameworks(qtbot):
    configdialog = ConfigDialog({'spam': SpamRunner}, default_config())
    assert configdialog.framework_combobox.count() == 1
    assert configdialog.framework_combobox.itemText(
        0) == 'spam (not available)'


def test_configdialog_disables_unavailable_frameworks(qtbot):
    configdialog = ConfigDialog(frameworks, default_config())
    model = configdialog.framework_combobox.model()
    assert model.item(0).isEnabled()      # eggs
    assert model.item(1).isEnabled()      # ham
    assert not model.item(2).isEnabled()  # spam


def test_configdialog_sets_initial_config(qtbot):
    config = default_config()
    configdialog = ConfigDialog(frameworks, config)
    assert configdialog.get_config() == config


def test_configdialog_click_ham(qtbot):
    configdialog = ConfigDialog(frameworks, default_config())
    qtbot.addWidget(configdialog)
    configdialog.framework_combobox.setCurrentIndex(1)
    assert configdialog.get_config().framework == 'ham'


def test_configdialog_ok_initially_disabled(qtbot):
    configdialog = ConfigDialog(frameworks, default_config())
    qtbot.addWidget(configdialog)
    assert not configdialog.buttons.button(QDialogButtonBox.Ok).isEnabled()


def test_configdialog_ok_setting_framework_initially_enables_ok(qtbot):
    config = Config(framework='eggs', wdir=os.getcwd())
    configdialog = ConfigDialog(frameworks, config)
    qtbot.addWidget(configdialog)
    assert configdialog.buttons.button(QDialogButtonBox.Ok).isEnabled()


def test_configdialog_clicking_pytest_enables_ok(qtbot):
    configdialog = ConfigDialog(frameworks, default_config())
    qtbot.addWidget(configdialog)
    configdialog.framework_combobox.setCurrentIndex(1)
    assert configdialog.buttons.button(QDialogButtonBox.Ok).isEnabled()


def test_configdialog_wdir_lineedit(qtbot):
    configdialog = ConfigDialog(frameworks, default_config())
    qtbot.addWidget(configdialog)
    wdir = os.path.normpath(os.path.join(os.getcwd(), os.path.pardir))
    configdialog.wdir_lineedit.setText(wdir)
    assert configdialog.get_config().wdir == wdir


def test_configdialog_wdir_button(qtbot, monkeypatch):
    configdialog = ConfigDialog(frameworks, default_config())
    qtbot.addWidget(configdialog)
    wdir = os.path.normpath(os.path.join(os.getcwd(), os.path.pardir))
    monkeypatch.setattr(
        'spyder_unittest.widgets.configdialog.getexistingdirectory',
        lambda parent, caption, basedir: wdir)
    configdialog.wdir_button.click()
    assert configdialog.get_config().wdir == wdir

spyder_unittest-0.3.0/spyder_unittest/widgets/tests/__init__.py

# -*- coding: utf-8 -*-
#
# Copyright © 2017 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Tests for spyder_unittest.widgets."""

spyder_unittest-0.3.0/spyder_unittest/unittestplugin.py

# -*- coding: utf-8 -*-
#
# Copyright © 2013 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Unit testing Plugin."""

# Third party imports
from qtpy.QtWidgets import QVBoxLayout
from spyder.config.base import get_translation
from spyder.plugins import SpyderPluginWidget
from spyder.py3compat import getcwd
from spyder.utils import icon_manager as ima
from spyder.utils.qthelpers import create_action
from spyder.widgets.projects.config import ProjectConfig

# Local imports
from spyder_unittest.widgets.configdialog import Config
from spyder_unittest.widgets.unittestgui import UnitTestWidget

_ = get_translation("unittest", dirname="spyder_unittest")


class UnitTestPlugin(SpyderPluginWidget):
    """Spyder plugin for unit testing."""

    CONF_SECTION = 'unittest'
    CONF_DEFAULTS = [(CONF_SECTION, {'framework': '', 'wdir': ''})]
    CONF_VERSION = '0.1.0'

    def __init__(self, parent):
        """
        Initialize plugin and corresponding widget.

        The part of the initialization that depends on `parent` is done in
        `self.register_plugin()`.
        """
        SpyderPluginWidget.__init__(self, parent)
        self.main = parent  # Spyder 3 compatibility

        # Create unit test widget. For compatibility with Spyder 3.x
        # here we check if the plugin has the attributes
        # 'options_button' and 'options_menu'.
        # See issue 83
        if hasattr(self, 'options_button') and hasattr(self, 'options_menu'):
            # Works with Spyder 4.x
            self.unittestwidget = UnitTestWidget(
                self.main,
                options_button=self.options_button,
                options_menu=self.options_menu)
        else:
            # Works with Spyder 3.x
            self.unittestwidget = UnitTestWidget(self.main)

        # Add unit test widget in dockwindow
        layout = QVBoxLayout()
        layout.addWidget(self.unittestwidget)
        self.setLayout(layout)

        # Initialize plugin
        self.initialize_plugin()

    def update_pythonpath(self):
        """
        Update Python path used to run unit tests.

        This function is called whenever the Python path set in Spyder
        changes. It synchronizes the Python path in the unittest widget with
        the Python path in Spyder.
        """
        self.unittestwidget.pythonpath = self.main.get_spyder_pythonpath()

    def handle_project_change(self):
        """
        Handle the event where the current project changes.

        This updates the default working directory for running tests and
        loads the test configuration from the project preferences.
        """
        self.update_default_wdir()
        self.load_config()

    def update_default_wdir(self):
        """
        Update default working dir for running unit tests.

        The default working dir for running unit tests is set to the project
        directory if a project is open, or the current working directory if
        no project is opened. This function is called whenever this
        directory may change.
        """
        wdir = self.main.projects.get_active_project_path()
        if not wdir:  # if no project opened
            wdir = getcwd()
        self.unittestwidget.default_wdir = wdir

    def load_config(self):
        """
        Load test configuration from project preferences.

        If the test configuration stored in the project preferences is
        valid, then use it. If it is not valid (e.g., because the user never
        configured testing for this project) or no project is opened, then
        invalidate the current test configuration.
        """
        project = self.main.projects.get_active_project()
        if not project:
            self.unittestwidget.set_config_without_emit(None)
            return

        try:
            project_conf = project.CONF[self.CONF_SECTION]
        except KeyError:
            project_conf = ProjectConfig(
                name=self.CONF_SECTION,
                root_path=project.root_path,
                filename=self.CONF_SECTION + '.ini',
                defaults=self.CONF_DEFAULTS,
                load=True,
                version=self.CONF_VERSION)
            project.CONF[self.CONF_SECTION] = project_conf

        new_config = Config(
            framework=project_conf.get(self.CONF_SECTION, 'framework'),
            wdir=project_conf.get(self.CONF_SECTION, 'wdir'))
        if not self.unittestwidget.config_is_valid(new_config):
            new_config = None
        self.unittestwidget.set_config_without_emit(new_config)

    def save_config(self, test_config):
        """
        Save test configuration in project preferences.

        If no project is opened, then do not save.
        """
        project = self.main.projects.get_active_project()
        if not project:
            return
        project_conf = project.CONF[self.CONF_SECTION]
        project_conf.set(self.CONF_SECTION, 'framework',
                         test_config.framework)
        project_conf.set(self.CONF_SECTION, 'wdir', test_config.wdir)

    def goto_in_editor(self, filename, lineno):
        """
        Go to specified line in editor.

        This function is called when the unittest widget emits
        `sig_edit_goto`. Note that the line number in the signal is zero
        based (the first line is line 0), but the editor expects a one-based
        line number.
        """
        self.main.editor.load(filename, lineno + 1, '')

    # ----- SpyderPluginWidget API --------------------------------------------
    def get_plugin_title(self):
        """Return widget title."""
        return _("Unit testing")

    def get_plugin_icon(self):
        """Return widget icon."""
        return ima.icon('profiler')

    def get_focus_widget(self):
        """Return the widget to give focus to this dockwidget when raised."""
        return self.unittestwidget.testdataview

    def get_plugin_actions(self):
        """Return a list of actions related to plugin."""
        return self.unittestwidget.create_actions()

    def on_first_registration(self):
        """Action to be performed on first plugin registration."""
        self.main.tabify_plugins(self.main.help, self)
        self.dockwidget.hide()

    def register_plugin(self):
        """Register plugin in Spyder's main window."""
        # Get information from Spyder proper into plugin
        self.update_pythonpath()
        self.update_default_wdir()

        # Connect to relevant signals
        self.main.sig_pythonpath_changed.connect(self.update_pythonpath)
        self.main.workingdirectory.set_explorer_cwd.connect(
            self.update_default_wdir)
        self.main.projects.sig_project_created.connect(
            self.handle_project_change)
        self.main.projects.sig_project_loaded.connect(
            self.handle_project_change)
        self.main.projects.sig_project_closed.connect(
            self.handle_project_change)
        self.unittestwidget.sig_newconfig.connect(self.save_config)
        self.unittestwidget.sig_edit_goto.connect(self.goto_in_editor)

        # Add plugin as dockwidget to main window
        self.main.add_dockwidget(self)

        # Create action and add it to Spyder's menu
        unittesting_act = create_action(
            self, _("Run unit tests"), icon=ima.icon('profiler'),
            shortcut="Shift+Alt+F11", triggered=self.maybe_configure_and_start)
        self.main.run_menu_actions += [unittesting_act]
        self.main.editor.pythonfile_dependent_actions += [unittesting_act]

        # Save all files before running tests
        self.unittestwidget.pre_test_hook = self.main.editor.save_all

    def refresh_plugin(self):
        """Refresh unit testing widget."""
        # For compatibility with Spyder 3.x here we check if the plugin
        # has the attributes 'options_button' and 'options_menu'.
        # See issue 83
        if hasattr(self, 'options_button') and hasattr(self, 'options_menu'):
            self.options_menu.clear()
            self.get_plugin_actions()

    def closing_plugin(self, cancelable=False):
        """Perform actions before parent main window is closed."""
        return True

    def apply_plugin_settings(self, options):
        """Apply configuration file's plugin settings."""
        pass

    # ----- Public API --------------------------------------------------------
    def maybe_configure_and_start(self):
        """
        Ask for configuration if necessary and then run tests.

        Raise unittest widget. If the current test configuration is not
        valid (or not set), then ask the user to configure. Then run the
        tests.
        """
        if self.dockwidget and not self.ismaximized:
            self.dockwidget.setVisible(True)
            self.dockwidget.setFocus()
            self.dockwidget.raise_()
        self.unittestwidget.maybe_configure_and_start()

spyder_unittest-0.3.0/spyder_unittest/tests/test_unittestplugin.py

# -*- coding: utf-8 -*-
#
# Copyright © 2017 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Tests for unittestplugin.py"""

# Third party imports
import pytest

# Local imports
from spyder_unittest.unittestplugin import UnitTestPlugin
from spyder_unittest.widgets.configdialog import Config

try:
    from unittest.mock import Mock
except ImportError:
    from mock import Mock  # Python 2


@pytest.fixture
def plugin(qtbot):
    """Set up the unittest plugin."""
    res = UnitTestPlugin(None)
    qtbot.addWidget(res)
    res.main = Mock()
    res.main.get_spyder_pythonpath = lambda: 'fakepythonpath'
    res.main.run_menu_actions = [42]
    res.main.editor.pythonfile_dependent_actions = [42]
    res.main.projects.get_active_project_path = lambda: None
    res.register_plugin()
    return res


def test_plugin_initialization(plugin):
    plugin.show()
    assert len(plugin.main.run_menu_actions) == 2
    assert plugin.main.run_menu_actions[1].text() == 'Run unit tests'


def test_plugin_pythonpath(plugin):
    # Test signal/slot connection
    plugin.main.sig_pythonpath_changed.connect.assert_called_with(
        plugin.update_pythonpath)

    # Test pythonpath is set to path provided by Spyder
    assert plugin.unittestwidget.pythonpath == 'fakepythonpath'

    # Test that change in path propagates
    plugin.main.get_spyder_pythonpath = lambda: 'anotherpath'
    plugin.update_pythonpath()
    assert plugin.unittestwidget.pythonpath == 'anotherpath'


def test_plugin_wdir(plugin, monkeypatch, tmpdir):
    # Test signal/slot connections
    plugin.main.workingdirectory.set_explorer_cwd.connect.assert_called_with(
        plugin.update_default_wdir)
    plugin.main.projects.sig_project_created.connect.assert_called_with(
        plugin.handle_project_change)
    plugin.main.projects.sig_project_loaded.connect.assert_called_with(
        plugin.handle_project_change)
    plugin.main.projects.sig_project_closed.connect.assert_called_with(
        plugin.handle_project_change)

    # Test default_wdir is set to current working dir
    monkeypatch.setattr('spyder_unittest.unittestplugin.getcwd',
                        lambda: 'fakecwd')
    plugin.update_default_wdir()
    assert plugin.unittestwidget.default_wdir == 'fakecwd'

    # Test after opening project, default_wdir is set to project dir
    project = Mock()
    project.CONF = {}
    project.root_path = str(tmpdir)
    plugin.main.projects.get_active_project = lambda: project
    plugin.main.projects.get_active_project_path = lambda: project.root_path
    plugin.handle_project_change()
    assert plugin.unittestwidget.default_wdir == str(tmpdir)

    # Test after closing project, default_wdir is set back to cwd
    plugin.main.projects.get_active_project = lambda: None
    plugin.main.projects.get_active_project_path = lambda: None
    plugin.handle_project_change()
    assert plugin.unittestwidget.default_wdir == 'fakecwd'


def test_plugin_config(plugin, tmpdir, qtbot):
    # Test config file does not exist and config is empty
    config_file_path = tmpdir.join('.spyproject', 'unittest.ini')
    assert not config_file_path.check()
    assert plugin.unittestwidget.config is None

    # Open project
    project = Mock()
    project.CONF = {}
    project.root_path = str(tmpdir)
    plugin.main.projects.get_active_project = lambda: project
    plugin.main.projects.get_active_project_path = lambda: project.root_path
    plugin.handle_project_change()

    # Test config file does exist but config is empty
    assert config_file_path.check()
    assert 'framework = ' in config_file_path.read().splitlines()
    assert plugin.unittestwidget.config is None

    # Set config and test that this is recorded in config file
    config = Config(framework='ham', wdir=str(tmpdir))
    with qtbot.waitSignal(plugin.unittestwidget.sig_newconfig):
        plugin.unittestwidget.config = config
    assert 'framework = ham' in config_file_path.read().splitlines()

    # Close project and test that config is empty
    plugin.main.projects.get_active_project = lambda: None
    plugin.main.projects.get_active_project_path = lambda: None
    plugin.handle_project_change()
    assert plugin.unittestwidget.config is None

    # Re-open project and test that config is correctly read
    plugin.main.projects.get_active_project = lambda: project
    plugin.main.projects.get_active_project_path = lambda: project.root_path
    plugin.handle_project_change()
    assert plugin.unittestwidget.config == config


def test_plugin_goto_in_editor(plugin, qtbot):
    plugin.unittestwidget.sig_edit_goto.emit('somefile', 42)
    plugin.main.editor.load.assert_called_with('somefile', 43, '')

spyder_unittest-0.3.0/setup.py

# -*- coding: utf-8 -*-
#
# Copyright © 2013 Spyder Project Contributors
# Licensed under the terms of the MIT License
# (see LICENSE.txt for details)
"""Setup script for spyder_unittest"""

from setuptools import setup, find_packages
import os
import os.path as osp


def get_version():
    """Get version from source file"""
    import codecs
    with codecs.open("spyder_unittest/__init__.py", encoding="utf-8") as f:
        lines = f.read().splitlines()
        for l in lines:
            if "__version__" in l:
                version = l.split("=")[1].strip()
                version = version.replace("'", '').replace('"', '')
                return version


def get_package_data(name, extlist):
    """Return data files for package *name* with extensions in *extlist*"""
    flist = []
    # Workaround to replace os.path.relpath (not available until Python 2.6):
    offset = len(name) + len(os.pathsep)
    for dirpath, _dirnames, filenames in os.walk(name):
        for fname in filenames:
            if not fname.startswith('.') and osp.splitext(fname)[1] in extlist:
                flist.append(osp.join(dirpath, fname)[offset:])
    return flist


# Requirements
REQUIREMENTS = ['lxml', 'spyder>=3']
EXTLIST = ['.jpg', '.png', '.json', '.mo', '.ini']
LIBNAME = 'spyder_unittest'

LONG_DESCRIPTION = """
This is a plugin for the Spyder IDE that integrates popular unit test
frameworks. It allows you to run tests and view the results.

**Status:** This is a work in progress. It is usable, but only the basic
functionality is implemented at the moment. The plugin currently supports
the py.test and nose testing frameworks.
"""

setup(
    name=LIBNAME,
    version=get_version(),
    packages=find_packages(),
    package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST)},
    keywords=["Qt PyQt4 PyQt5 spyder plugins testing"],
    install_requires=REQUIREMENTS,
    url='https://github.com/spyder-ide/spyder-unittest',
    license='MIT',
    author="Spyder Project Contributors",
    description='Plugin to run tests from within the Spyder IDE',
    long_description=LONG_DESCRIPTION,
    classifiers=[
        'Development Status :: 4 - Beta',
        'Environment :: X11 Applications :: Qt',
        'Environment :: Win32 (MS Windows)',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: MIT License',
        'Operating System :: OS Independent',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
        'Topic :: Software Development :: Testing',
        'Topic :: Text Editors :: Integrated Development Environments (IDE)'])

spyder_unittest-0.3.0/CHANGELOG.md

# History of changes

## Version 0.3.0 (2018/02/16)

This version includes improved support of `py.test` (test results are
displayed as they come in, double clicking on a test result opens the test
in the editor) as well as various other improvements.
### Issues Closed * [Issue 106](https://github.com/spyder-ide/spyder-unittest/issues/106) - After sorting, test details are lost ([PR 110](https://github.com/spyder-ide/spyder-unittest/pull/110)) * [Issue 103](https://github.com/spyder-ide/spyder-unittest/issues/103) - "Go to" not working unless working directory is correctly set ([PR 109](https://github.com/spyder-ide/spyder-unittest/pull/109)) * [Issue 98](https://github.com/spyder-ide/spyder-unittest/issues/98) - Running unittest tests within py.test results in error ([PR 102](https://github.com/spyder-ide/spyder-unittest/pull/102)) * [Issue 96](https://github.com/spyder-ide/spyder-unittest/issues/96) - Use new colors for passed and failed tests ([PR 108](https://github.com/spyder-ide/spyder-unittest/pull/108)) * [Issue 94](https://github.com/spyder-ide/spyder-unittest/issues/94) - Enable sorting in table of test results ([PR 104](https://github.com/spyder-ide/spyder-unittest/pull/104)) * [Issue 93](https://github.com/spyder-ide/spyder-unittest/issues/93) - Handle errors in py.test's collection phase ([PR 99](https://github.com/spyder-ide/spyder-unittest/pull/99)) * [Issue 92](https://github.com/spyder-ide/spyder-unittest/issues/92) - Retitle "Kill" (tests) button to "Stop" ([PR 107](https://github.com/spyder-ide/spyder-unittest/pull/107)) * [Issue 89](https://github.com/spyder-ide/spyder-unittest/issues/89) - Write tests for UnitTestPlugin ([PR 95](https://github.com/spyder-ide/spyder-unittest/pull/95)) * [Issue 87](https://github.com/spyder-ide/spyder-unittest/issues/87) - Don't display test time when using unittest ([PR 105](https://github.com/spyder-ide/spyder-unittest/pull/105)) * [Issue 86](https://github.com/spyder-ide/spyder-unittest/issues/86) - Use sensible precision when displaying test times ([PR 105](https://github.com/spyder-ide/spyder-unittest/pull/105)) * [Issue 83](https://github.com/spyder-ide/spyder-unittest/issues/83) - Changes for compatibility with new undocking behavior of Spyder ([PR 
84](https://github.com/spyder-ide/spyder-unittest/pull/84)) * [Issue 77](https://github.com/spyder-ide/spyder-unittest/issues/77) - Be smarter about abbreviating test names * [Issue 71](https://github.com/spyder-ide/spyder-unittest/issues/71) - Save before running tests (?) ([PR 101](https://github.com/spyder-ide/spyder-unittest/pull/101)) * [Issue 50](https://github.com/spyder-ide/spyder-unittest/issues/50) - Use py.test's API to run tests ([PR 91](https://github.com/spyder-ide/spyder-unittest/pull/91)) * [Issue 43](https://github.com/spyder-ide/spyder-unittest/issues/43) - Save selected test framework ([PR 90](https://github.com/spyder-ide/spyder-unittest/pull/90)) * [Issue 31](https://github.com/spyder-ide/spyder-unittest/issues/31) - Add issues/PRs templates ([PR 111](https://github.com/spyder-ide/spyder-unittest/pull/111)) * [Issue 13](https://github.com/spyder-ide/spyder-unittest/issues/13) - Display test results as they come in ([PR 91](https://github.com/spyder-ide/spyder-unittest/pull/91)) * [Issue 12](https://github.com/spyder-ide/spyder-unittest/issues/12) - Double clicking on test name should take you somewhere useful ([PR 100](https://github.com/spyder-ide/spyder-unittest/pull/100)) In this release 18 issues were closed. 
### Pull Requests Merged * [PR 111](https://github.com/spyder-ide/spyder-unittest/pull/111) - Update docs for new release ([31](https://github.com/spyder-ide/spyder-unittest/issues/31)) * [PR 110](https://github.com/spyder-ide/spyder-unittest/pull/110) - Emit modelReset after sorting test results ([106](https://github.com/spyder-ide/spyder-unittest/issues/106)) * [PR 109](https://github.com/spyder-ide/spyder-unittest/pull/109) - Store full path to file containing test in TestResult ([103](https://github.com/spyder-ide/spyder-unittest/issues/103)) * [PR 108](https://github.com/spyder-ide/spyder-unittest/pull/108) - Use paler shade of red as background for failing tests ([96](https://github.com/spyder-ide/spyder-unittest/issues/96)) * [PR 107](https://github.com/spyder-ide/spyder-unittest/pull/107) - Relabel 'Kill' button ([92](https://github.com/spyder-ide/spyder-unittest/issues/92)) * [PR 105](https://github.com/spyder-ide/spyder-unittest/pull/105) - Improve display of test times ([87](https://github.com/spyder-ide/spyder-unittest/issues/87), [86](https://github.com/spyder-ide/spyder-unittest/issues/86)) * [PR 104](https://github.com/spyder-ide/spyder-unittest/pull/104) - Allow user to sort tests ([94](https://github.com/spyder-ide/spyder-unittest/issues/94)) * [PR 102](https://github.com/spyder-ide/spyder-unittest/pull/102) - Use nodeid when collecting tests using py.test ([98](https://github.com/spyder-ide/spyder-unittest/issues/98)) * [PR 101](https://github.com/spyder-ide/spyder-unittest/pull/101) - Save all files before running tests ([71](https://github.com/spyder-ide/spyder-unittest/issues/71)) * [PR 100](https://github.com/spyder-ide/spyder-unittest/pull/100) - Implement go to test definition for py.test ([12](https://github.com/spyder-ide/spyder-unittest/issues/12)) * [PR 99](https://github.com/spyder-ide/spyder-unittest/pull/99) - Handle errors encountered when py.test collect tests ([93](https://github.com/spyder-ide/spyder-unittest/issues/93)) * [PR 
97](https://github.com/spyder-ide/spyder-unittest/pull/97) - Abbreviate module names when displaying test names * [PR 95](https://github.com/spyder-ide/spyder-unittest/pull/95) - Add unit tests for plugin ([89](https://github.com/spyder-ide/spyder-unittest/issues/89)) * [PR 91](https://github.com/spyder-ide/spyder-unittest/pull/91) - Display py.test results as they come in ([50](https://github.com/spyder-ide/spyder-unittest/issues/50), [13](https://github.com/spyder-ide/spyder-unittest/issues/13)) * [PR 90](https://github.com/spyder-ide/spyder-unittest/pull/90) - Load and save configuration for tests ([43](https://github.com/spyder-ide/spyder-unittest/issues/43)) * [PR 85](https://github.com/spyder-ide/spyder-unittest/pull/85) - Remove PySide from CI scripts and remove Scrutinizer * [PR 84](https://github.com/spyder-ide/spyder-unittest/pull/84) - PR: Show undock action ([83](https://github.com/spyder-ide/spyder-unittest/issues/83)) In this release 17 pull requests were closed. ## Version 0.2.0 (2017/08/20) The main change in this version is that it adds support for tests written using the `unittest` framework available in the standard Python library. ### Issues Closed * [Issue 79](https://github.com/spyder-ide/spyder-unittest/issues/79) - Remove QuantifiedCode * [Issue 74](https://github.com/spyder-ide/spyder-unittest/issues/74) - Also test against spyder's master branch in CI * [Issue 70](https://github.com/spyder-ide/spyder-unittest/issues/70) - Point contributors to ciocheck * [Issue 41](https://github.com/spyder-ide/spyder-unittest/issues/41) - Add function for registering test frameworks * [Issue 15](https://github.com/spyder-ide/spyder-unittest/issues/15) - Check whether test framework is installed * [Issue 11](https://github.com/spyder-ide/spyder-unittest/issues/11) - Abbreviate test names * [Issue 4](https://github.com/spyder-ide/spyder-unittest/issues/4) - Add unittest support In this release 7 issues were closed. 
### Pull Requests Merged * [PR 82](https://github.com/spyder-ide/spyder-unittest/pull/82) - Enable Scrutinizer * [PR 81](https://github.com/spyder-ide/spyder-unittest/pull/81) - Update README.md * [PR 80](https://github.com/spyder-ide/spyder-unittest/pull/80) - Install Spyder from github 3.x branch when testing on Circle * [PR 78](https://github.com/spyder-ide/spyder-unittest/pull/78) - Properly handle test frameworks which are not installed * [PR 75](https://github.com/spyder-ide/spyder-unittest/pull/75) - Shorten test name displayed in widget * [PR 72](https://github.com/spyder-ide/spyder-unittest/pull/72) - Support unittest * [PR 69](https://github.com/spyder-ide/spyder-unittest/pull/69) - Process coverage stats using coveralls * [PR 68](https://github.com/spyder-ide/spyder-unittest/pull/68) - Add framework registry for associating testing frameworks with runners * [PR 67](https://github.com/spyder-ide/spyder-unittest/pull/67) - Install the tests alongside the module In this release 9 pull requests were closed. ## Version 0.1.2 (2017/03/04) This version fixes a bug in the packaging code. ### Pull Requests Merged * [PR 63](https://github.com/spyder-ide/spyder-unittest/pull/63) - Fix parsing of module version In this release 1 pull request was closed. ## Version 0.1.1 (2017/02/11) This version improves the packaging. The code itself was not changed. ### Issues Closed * [Issue 58](https://github.com/spyder-ide/spyder-unittest/issues/58) - Normalized copyright information * [Issue 57](https://github.com/spyder-ide/spyder-unittest/issues/57) - Depend on nose and pytest at installation * [Issue 56](https://github.com/spyder-ide/spyder-unittest/issues/56) - Add the test suite to the release tarball In this release 3 issues were closed. ### Pull Requests Merged * [PR 59](https://github.com/spyder-ide/spyder-unittest/pull/59) - Improve distributed package In this release 1 pull request was closed. 
## Version 0.1.0 (2017/02/05)

Initial release, supporting nose and py.test frameworks.

spyder_unittest-0.3.0/README.md

# spyder-unittest

## Project information

[![license](https://img.shields.io/pypi/l/spyder-unittest.svg)](./LICENSE)
[![pypi version](https://img.shields.io/pypi/v/spyder-unittest.svg)](https://pypi.python.org/pypi/spyder-unittest)
[![Join the chat at https://gitter.im/spyder-ide/public](https://badges.gitter.im/spyder-ide/spyder.svg)](https://gitter.im/spyder-ide/public)
[![OpenCollective Backers](https://opencollective.com/spyder/backers/badge.svg?color=blue)](#backers)
[![OpenCollective Sponsors](https://opencollective.com/spyder/sponsors/badge.svg?color=blue)](#sponsors)

## Build status

[![Build Status](https://travis-ci.org/spyder-ide/spyder-unittest.svg?branch=master)](https://travis-ci.org/spyder-ide/spyder-unittest)
[![Build status](https://ci.appveyor.com/api/projects/status/d9wa6whp1fpq4uii?svg=true)](https://ci.appveyor.com/project/spyder-ide/spyder-unittest)
[![CircleCI](https://circleci.com/gh/spyder-ide/spyder-unittest/tree/master.svg?style=shield)](https://circleci.com/gh/spyder-ide/spyder-unittest/tree/master)
[![Coverage Status](https://coveralls.io/repos/github/spyder-ide/spyder-unittest/badge.svg?branch=master)](https://coveralls.io/github/spyder-ide/spyder-unittest?branch=master)

----

## Important Announcement: Spyder is unfunded!

Since mid-November 2017, [Anaconda, Inc](https://www.anaconda.com/) has stopped funding Spyder development, after doing so for the past 18 months. Because of that, development will from now on focus on maintaining Spyder 3, at a much slower pace than before.

If you want to help maintain Spyder, please consider donating at https://opencollective.com/spyder

We appreciate all the help you can provide us and can't thank you enough for supporting the work of Spyder devs and Spyder development.
If you want to know more about this, please read this [page](https://github.com/spyder-ide/spyder/wiki/Anaconda-stopped-funding-Spyder).

----

## Description

![screenshot](./screenshot.png)

This is a plugin for Spyder that integrates popular unit test frameworks. It allows you to run tests and view the results.

The plugin supports the `unittest` framework in the Python standard library and the `py.test` and `nose` testing frameworks. Support for `py.test` is the most complete at the moment.

## Installation

The unittest plugin is available in the `spyder-ide` channel in Anaconda and on PyPI, so it can be installed with one of the following commands:

* Using Anaconda: `conda install -c spyder-ide spyder-unittest`
* Using pip: `pip install spyder-unittest`

All dependencies will be installed automatically. You have to restart Spyder before you can use the plugin.

## Usage

The plugin adds an item `Run unit tests` to the `Run` menu in Spyder. Click on this to run the unit tests. After you specify the testing framework and the directory under which the tests are stored, the tests are run. The `Unit testing` window pane (shown in the screenshot at the top of this file) will pop up with the results. If you are using `py.test`, you can double-click on a test to view it in the editor.

If you want to run tests in a different directory or switch testing frameworks, click `Configure` in the Options menu (cogwheel icon), which is located in the upper right corner of the `Unit testing` pane.

## Feedback

Bug reports, feature requests and other ideas are more than welcome on the [issue tracker](https://github.com/spyder-ide/spyder-unittest/issues). You may use our [Gitter channel](https://gitter.im/spyder-ide/public) for general discussion.

## Development

Development of the plugin is done at https://github.com/spyder-ide/spyder-unittest. You can install the development version of the plugin by cloning the git repository and running `pip install .`, possibly with the `--editable` flag.
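The test files that the plugin discovers and runs are ordinary framework tests; nothing plugin-specific is needed in them. As a minimal sketch (the file name `test_example.py` and the toy `add` function are hypothetical), a module like this is collected by py.test as-is, and its `TestCase` is also understood by the stdlib `unittest` runner:

```python
# test_example.py -- minimal, hypothetical test module.
# py.test collects the plain assert-based function below;
# the stdlib unittest runner (and nose) collect the TestCase class.
import unittest


def add(a, b):
    """Toy function under test; stands in for real project code."""
    return a + b


def test_add_plain():
    # Plain test function with a bare assert, in py.test style.
    assert add(2, 3) == 5


class TestAdd(unittest.TestCase):
    # unittest-style test, runnable with `python -m unittest test_example`.
    def test_add(self):
        self.assertEqual(add(2, 3), 5)
```

After pointing the plugin's configuration dialog at the directory containing such a file, both tests appear in the `Unit testing` pane when run.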
The plugin has the following dependencies:

* [spyder](https://github.com/spyder-ide/spyder) (obviously), at least version 3.0
* [lxml](http://lxml.de/)
* the testing framework that you will be using: [py.test](https://pytest.org) and/or [nose](https://nose.readthedocs.io)

In order to run the tests distributed with this plugin, you need [nose](https://nose.readthedocs.io), [py.test](https://pytest.org) and [pytest-qt](https://github.com/pytest-dev/pytest-qt). If you use Python 2, you also need [mock](https://github.com/testing-cabal/mock).

You are very welcome to submit code contributions in the form of pull requests to the [issue tracker](https://github.com/spyder-ide/spyder-unittest/issues). GitHub is configured to run pull requests automatically against the test suite and against several automatic style checkers using [ciocheck](https://github.com/ContinuumIO/ciocheck). The style checkers can be rather finicky, so you may want to install ciocheck locally and run them before submitting the code.

## Contributing

Everyone is welcome to contribute!

## Backers

Support us with a monthly donation and help us continue our activities.

[![Backers](https://opencollective.com/spyder/backers.svg)](https://opencollective.com/spyder#support)

## Sponsors

Become a sponsor to get your logo on our README on GitHub.

[![Sponsors](https://opencollective.com/spyder/sponsors.svg)](https://opencollective.com/spyder#support)

spyder_unittest-0.3.0/PKG-INFO

Metadata-Version: 1.1
Name: spyder_unittest
Version: 0.3.0
Summary: Plugin to run tests from within the Spyder IDE
Home-page: https://github.com/spyder-ide/spyder-unittest
Author: Spyder Project Contributors
Author-email: UNKNOWN
License: MIT
Description-Content-Type: UNKNOWN
Description: This is a plugin for the Spyder IDE that integrates popular
        unit test frameworks. It allows you to run tests and view the
        results.

        **Status:** This is a work in progress.
        It is usable, but only the basic functionality is implemented
        at the moment. The plugin currently supports the py.test and
        nose testing frameworks.
Keywords: Qt PyQt4 PyQt5 spyder plugins testing
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: X11 Applications :: Qt
Classifier: Environment :: Win32 (MS Windows)
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Text Editors :: Integrated Development Environments (IDE)

spyder_unittest-0.3.0/MANIFEST.in

include CHANGELOG.md
include LICENSE.txt
include README.md
recursive-include spyder_unittest *.py

spyder_unittest-0.3.0/setup.cfg

[bdist_wheel]
universal = 1

[tool:pytest]
python_classes =

[egg_info]
tag_build =
tag_date = 0

spyder_unittest-0.3.0/LICENSE.txt

The MIT License (MIT)

Copyright © 2013 Spyder Project Contributors

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.