tablib-0.13.0/0000755000175000017500000000000013440456503012376 5ustar josephjosephtablib-0.13.0/.gitignore0000644000175000017500000000043713440456503014372 0ustar josephjoseph# application builds
build/*
dist/*
MANIFEST

# python skin
*.pyc
*.pyo

# osx noise
.DS_Store
profile

# pycharm noise
.idea
.idea/*

# vi noise
*.swp

docs/_build/*
coverage.xml
nosetests.xml
junit-py25.xml
junit-py26.xml
junit-py27.xml

# tox noise
.tox

# pyenv noise
.python-version
tablib-0.13.0/.travis.yml0000644000175000017500000000021713440456503014507 0ustar josephjosephlanguage: python
cache: pip
python:
  - 2.7
  - 3.4
  - 3.5
  - 3.6
install:
  - pip install -r requirements.txt
script: python test_tablib.py
tablib-0.13.0/HACKING0000644000175000017500000000114613440456503013367 0ustar josephjosephWhere possible, please follow PEP8 with regard to coding style. Sometimes the line length restriction is too hard to follow, so don't bend over backwards there.

Triple-quotes should always be """; single quotes are ' unless using " would result in less escaping within the string.

All modules, functions, and methods should be well documented using reStructuredText for Sphinx AutoDoc.

All functionality should be available in pure Python. Optional C (via Cython) implementations may be written for performance reasons, but should never replace the Python implementation.

Lastly, don't take yourself too seriously :)
tablib-0.13.0/README.rst0000644000175000017500000000727213440456503014075 0ustar josephjosephTablib: format-agnostic tabular dataset library
===============================================

.. image:: https://travis-ci.org/kennethreitz/tablib.svg?branch=master
    :target: https://travis-ci.org/kennethreitz/tablib

::

     _____         ______  ___________ ______
     __  /_______ ____  /_ ___  /___(_)___  /_
     _  __/_  __ `/__  __ \__  / __  / __  __ \
     / /_  / /_/ / _  /_/ /_  /  _  /  _  /_/ /
     \__/  \__,_/  /_.___/ /_/   /_/   /_.___/

Tablib is a format-agnostic tabular dataset library, written in Python.

Output formats supported:

- Excel (Sets + Books)
- JSON (Sets + Books)
- YAML (Sets + Books)
- Pandas DataFrames (Sets)
- HTML (Sets)
- Jira (Sets)
- TSV (Sets)
- ODS (Sets)
- CSV (Sets)
- DBF (Sets)

Note that tablib *purposefully* excludes XML support. It always will. (Note: This is a joke. Pull requests are welcome.)

If you're interested in financially supporting Kenneth Reitz's open source work, consider `visiting this link `_. Your support helps tremendously with sustainability of motivation, as Open Source is no longer part of my day job.

Overview
--------

`tablib.Dataset()`
    A Dataset is a table of tabular data. It may or may not have a header row. Datasets can be built and manipulated as raw Python datatypes (lists of tuples or dictionaries). Datasets can be imported from JSON, YAML, DBF, and CSV; they can be exported to XLSX, XLS, ODS, JSON, YAML, DBF, CSV, TSV, and HTML.

`tablib.Databook()`
    A Databook is a set of Datasets. The most common form of a Databook is an Excel file with multiple spreadsheets. Databooks can be imported from JSON and YAML; they can be exported to XLSX, XLS, ODS, JSON, and YAML.
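For a quick taste of how the two classes fit together, here is a minimal sketch (the names and values are illustrative) that builds a Dataset, wraps it in a Databook, and writes a multi-sheet Excel file using the ``add_sheet()`` and ``xlsx`` interfaces exercised in ``test_tablib.py``:

::

    import tablib

    # Build a Dataset with a header row and a sheet title.
    daily = tablib.Dataset(headers=['first_name', 'last_name'], title='Day 1')
    daily.append(('John', 'Adams'))
    daily.append(('George', 'Washington'))

    # A Databook is simply a collection of Datasets, one per sheet.
    book = tablib.Databook()
    book.add_sheet(daily)

    # Export the whole book; each Dataset becomes its own worksheet.
    with open('book.xlsx', 'wb') as f:
        f.write(book.xlsx)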
Usage
-----

Populate fresh data files:

::

    headers = ('first_name', 'last_name')

    data = [
        ('John', 'Adams'),
        ('George', 'Washington')
    ]

    data = tablib.Dataset(*data, headers=headers)

Intelligently add new rows:

::

    >>> data.append(('Henry', 'Ford'))

Intelligently add new columns:

::

    >>> data.append_col((90, 67, 83), header='age')

Slice rows:

::

    >>> print(data[:2])
    [('John', 'Adams', 90), ('George', 'Washington', 67)]

Slice columns by header:

::

    >>> print(data['first_name'])
    ['John', 'George', 'Henry']

Easily delete rows:

::

    >>> del data[1]

Exports
-------

Drumroll please...........

JSON!
+++++

::

    >>> print(data.export('json'))
    [
      {
        "last_name": "Adams",
        "age": 90,
        "first_name": "John"
      },
      {
        "last_name": "Ford",
        "age": 83,
        "first_name": "Henry"
      }
    ]

YAML!
+++++

::

    >>> print(data.export('yaml'))
    - {age: 90, first_name: John, last_name: Adams}
    - {age: 83, first_name: Henry, last_name: Ford}

CSV...
++++++

::

    >>> print(data.export('csv'))
    first_name,last_name,age
    John,Adams,90
    Henry,Ford,83

EXCEL!
++++++

::

    >>> with open('people.xls', 'wb') as f:
    ...     f.write(data.export('xls'))

DBF!
++++

::

    >>> with open('people.dbf', 'wb') as f:
    ...     f.write(data.export('dbf'))

Pandas DataFrame!
+++++++++++++++++

::

    >>> print(data.export('df'))
      first_name last_name  age
    0       John     Adams   90
    1      Henry      Ford   83

It's that easy.

Installation
------------

To install tablib, simply:

::

    $ pip install tablib[pandas]

Make sure to check out `Tablib on PyPi `_!

Contribute
----------

If you'd like to contribute, simply fork `the repository`_, commit your changes to the **develop** branch (or branch off of it), and send a pull request. Make sure you add yourself to AUTHORS_.

.. _`the repository`: http://github.com/kennethreitz/tablib
.. _AUTHORS: http://github.com/kennethreitz/tablib/blob/master/AUTHORS
tablib-0.13.0/tox.ini0000644000175000017500000000020113440456503013702 0ustar josephjoseph[tox]
minversion = 2.4
envlist = py27, py34, py35, py36

[testenv]
deps = pytest
extras = pandas
commands = python setup.py test
tablib-0.13.0/LICENSE0000644000175000017500000000203413440456503013402 0ustar josephjosephCopyright 2016 Kenneth Reitz

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.tablib-0.13.0/AUTHORS0000644000175000017500000000116013440456503013444 0ustar josephjosephTablib is written and maintained by Kenneth Reitz and various contributors: Development Lead ```````````````` - Kenneth Reitz Core Contributors ````````````````` - Iuri de Silvio Patches and Suggestions ``````````````````````` - Luke Lee - Josh Ourisman - Luca Beltrame - Benjamin Wohlwend - Erik Youngren - Mark Rogers - Mark Walling - Mike Waldner - Joel Friedly - Jakub Janoszek - Marc Abramowitz - Alex Gaynor - James Douglass - Tommy Anthony - Rabin Nankhwa - Marco Dallagiacoma - Mathias Loesch - Tushar Makkar - Andrii Soldatenko - Bruno Soares - Tsuyoshi Hombashi tablib-0.13.0/HISTORY.rst0000644000175000017500000001142113440456503014270 0ustar josephjosephHistory ------- 0.11.5 (2017-06-13) +++++++++++++++++++ - Use ``yaml.safe_load`` for importing yaml. 0.11.4 (2017-01-23) +++++++++++++++++++ - Use built-in `json` package if available - Support Python 3.5+ in classifiers ** Bugfixes ** - Fixed textual representation for Dataset with no headers - Handle decimal types 0.11.3 (2016-02-16) +++++++++++++++++++ - Release fix. 0.11.2 (2016-02-16) +++++++++++++++++++ **Bugfixes** - Fix export only formats. - Fix for xlsx output. 0.11.1 (2016-02-07) +++++++++++++++++++ **Bugfixes** - Fixed packaging error on Python 3. 0.11.0 (2016-02-07) +++++++++++++++++++ **New Formats!** - Added LaTeX table export format (``Dataset.latex``). - Support for dBase (DBF) files (``Dataset.dbf``). **Improvements** - New import/export interface (``Dataset.export()``, ``Dataset.load()``). - CSV custom delimiter support (``Dataset.export('csv', delimiter='$')``). - Adding ability to remove duplicates to all rows in a dataset (``Dataset.remove_duplicates()``). - Added a mechanism to avoid ``datetime.datetime`` issues when serializing data. - New ``detect_format()`` function (mostly for internal use). - Update the vendored unicodecsv to fix ``None`` handling. - Only freeze the headers row, not the headers columns (xls). **Breaking Changes** - ``detect()`` function removed. **Bugfixes** - Fix XLSX import. - Bugfix for ``Dataset.transpose().transpose()``. 0.10.0 (2014-05-27) +++++++++++++++++++ * Unicode Column Headers * ALL the bugfixes! 0.9.11 (2011-06-30) +++++++++++++++++++ * Bugfixes 0.9.10 (2011-06-22) +++++++++++++++++++ * Bugfixes 0.9.9 (2011-06-21) ++++++++++++++++++ * Dataset API Changes * ``stack_rows`` => ``stack``, ``stack_columns`` => ``stack_cols`` * column operations have their own methods now (``append_col``, ``insert_col``) * List-style ``pop()`` * Redis-style ``rpush``, ``lpush``, ``rpop``, ``lpop``, ``rpush_col``, and ``lpush_col`` 0.9.8 (2011-05-22) ++++++++++++++++++ * OpenDocument Spreadsheet support (.ods) * Full Unicode TSV support 0.9.7 (2011-05-12) ++++++++++++++++++ * Full XLSX Support! * Pickling Bugfix * Compat Module 0.9.6 (2011-05-12) ++++++++++++++++++ * ``seperators`` renamed to ``separators`` * Full unicode CSV support 0.9.5 (2011-03-24) ++++++++++++++++++ * Python 3.1, Python 3.2 Support (same code base!) * Formatter callback support * Various bug fixes 0.9.4 (2011-02-18) ++++++++++++++++++ * Python 2.5 Support! 
* Tox Testing for 2.5, 2.6, 2.7 * AnyJSON Integrated * OrderedDict support * Caved to community pressure (spaces) 0.9.3 (2011-01-31) ++++++++++++++++++ * Databook duplication leak fix. * HTML Table output. * Added column sorting. 0.9.2 (2010-11-17) ++++++++++++++++++ * Transpose method added to Datasets. * New frozen top row in Excel output. * Pickling support for Datasets and Rows. * Support for row/column stacking. 0.9.1 (2010-11-04) ++++++++++++++++++ * Minor reference shadowing bugfix. 0.9.0 (2010-11-04) ++++++++++++++++++ * Massive documentation update! * Tablib.org! * Row tagging and Dataset filtering! * Column insert/delete support * Column append API change (header required) * Internal Changes (Row object and use thereof) 0.8.5 (2010-10-06) ++++++++++++++++++ * New import system. All dependencies attempt to load from site-packages, then fallback on tenderized modules. 0.8.4 (2010-10-04) ++++++++++++++++++ * Updated XLS output: Only wrap if '\\n' in cell. 0.8.3 (2010-10-04) ++++++++++++++++++ * Ability to append new column passing a callable as the value that will be applied to every row. 0.8.2 (2010-10-04) ++++++++++++++++++ * Added alignment wrapping to written cells. * Added separator support to XLS. 0.8.1 (2010-09-28) ++++++++++++++++++ * Packaging Fix 0.8.0 (2010-09-25) ++++++++++++++++++ * New format plugin system! * Imports! ELEGANT Imports! * Tests. Lots of tests. 0.7.1 (2010-09-20) ++++++++++++++++++ * Reverting methods back to properties. * Windows bug compensated in documentation. 0.7.0 (2010-09-20) ++++++++++++++++++ * Renamed DataBook Databook for consistency. * Export properties changed to methods (XLS filename / StringIO bug). * Optional Dataset.xls(path='filename') support (for writing on windows). * Added utf-8 on the worksheet level. 0.6.4 (2010-09-19) ++++++++++++++++++ * Updated unicode export for XLS. * More exhaustive unit tests. 0.6.3 (2010-09-14) ++++++++++++++++++ * Added Dataset.append() support for columns. 0.6.2 (2010-09-13) ++++++++++++++++++ * Fixed Dataset.append() error on empty dataset. * Updated Dataset.headers property w/ validation. * Added Testing Fixtures. 0.6.1 (2010-09-12) ++++++++++++++++++ * Packaging hotfixes. 0.6.0 (2010-09-11) ++++++++++++++++++ * Public Release. * Export Support for XLS, JSON, YAML, and CSV. * DataBook Export for XLS, JSON, and YAML. * Python Dict Property Support. 
tablib-0.13.0/setup.py0000755000175000017500000000344613440456503014122 0ustar josephjoseph#!/usr/bin/env python # -*- coding: utf-8 -*- import os import re import sys try: from setuptools import setup except ImportError: from distutils.core import setup if sys.argv[-1] == 'publish': os.system("python setup.py sdist upload") sys.exit() if sys.argv[-1] == 'test': try: __import__('py') except ImportError: print('py.test required.') sys.exit(1) errors = os.system('py.test test_tablib.py') sys.exit(bool(errors)) packages = [ 'tablib', 'tablib.formats', 'tablib.packages', 'tablib.packages.dbfpy', 'tablib.packages.dbfpy3' ] install = [ 'odfpy', 'openpyxl>=2.4.0', 'backports.csv', 'xlrd', 'xlwt', 'pyyaml', ] with open('tablib/core.py', 'r') as fd: version = re.search(r'^__version__\s*=\s*[\'"]([^\'"]*)[\'"]', fd.read(), re.MULTILINE).group(1) setup( name='tablib', version=version, description='Format agnostic tabular data library (XLS, JSON, YAML, CSV)', long_description=(open('README.rst').read() + '\n\n' + open('HISTORY.rst').read()), author='Kenneth Reitz', author_email='me@kennethreitz.org', url='http://python-tablib.org', packages=packages, license='MIT', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'Natural Language :: English', 'License :: OSI Approved :: MIT License', 'Programming Language :: Python', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', ], tests_require=['pytest'], install_requires=install, extras_require={ 'pandas': ['pandas'], }, ) tablib-0.13.0/test_tablib.py0000755000175000017500000010151013440456503015245 0ustar josephjoseph#!/usr/bin/env python # -*- coding: utf-8 -*- """Tests for Tablib.""" from __future__ import unicode_literals import datetime import doctest import json import sys import unittest from uuid import uuid4 import tablib from tablib.compat import markup, unicode, is_py3 from tablib.core import Row from tablib.formats import csv as csv_format class TablibTestCase(unittest.TestCase): """Tablib test cases.""" def setUp(self): """Create simple data set with headers.""" global data, book data = tablib.Dataset() book = tablib.Databook() self.headers = ('first_name', 'last_name', 'gpa') self.john = ('John', 'Adams', 90) self.george = ('George', 'Washington', 67) self.tom = ('Thomas', 'Jefferson', 50) self.founders = tablib.Dataset(headers=self.headers, title='Founders') self.founders.append(self.john) self.founders.append(self.george) self.founders.append(self.tom) def tearDown(self): """Teardown.""" pass def test_empty_append(self): """Verify append() correctly adds tuple with no headers.""" new_row = (1, 2, 3) data.append(new_row) # Verify width/data self.assertTrue(data.width == len(new_row)) self.assertTrue(data[0] == new_row) def test_empty_append_with_headers(self): """Verify append() correctly detects mismatch of number of headers and data. """ data.headers = ['first', 'second'] new_row = (1, 2, 3, 4) self.assertRaises(tablib.InvalidDimensions, data.append, new_row) def test_set_headers_with_incorrect_dimension(self): """Verify headers correctly detects mismatch of number of headers and data. 
""" data.append(self.john) def set_header_callable(): data.headers = ['first_name'] self.assertRaises(tablib.InvalidDimensions, set_header_callable) def test_add_column(self): """Verify adding column works with/without headers.""" data.append(['kenneth']) data.append(['bessie']) new_col = ['reitz', 'monke'] data.append_col(new_col) self.assertEqual(data[0], ('kenneth', 'reitz')) self.assertEqual(data.width, 2) # With Headers data.headers = ('fname', 'lname') new_col = [21, 22] data.append_col(new_col, header='age') self.assertEqual(data['age'], new_col) def test_add_column_no_data_no_headers(self): """Verify adding new column with no headers.""" new_col = ('reitz', 'monke') data.append_col(new_col) self.assertEqual(data[0], tuple([new_col[0]])) self.assertEqual(data.width, 1) self.assertEqual(data.height, len(new_col)) def test_add_column_with_header_ignored(self): """Verify append_col() ignores the header if data.headers has not previously been set """ new_col = ('reitz', 'monke') data.append_col(new_col, header='first_name') self.assertEqual(data[0], tuple([new_col[0]])) self.assertEqual(data.width, 1) self.assertEqual(data.height, len(new_col)) self.assertEqual(data.headers, None) def test_add_column_with_header_and_headers_only_exist(self): """Verify append_col() with header correctly detects mismatch when headers exist but there is no existing row data """ data.headers = ['first_name'] # no data new_col = ('allen') def append_col_callable(): data.append_col(new_col, header='middle_name') self.assertRaises(tablib.InvalidDimensions, append_col_callable) def test_add_column_with_header_and_data_exists(self): """Verify append_col() works when headers and rows exists""" data.headers = self.headers data.append(self.john) new_col = [10]; data.append_col(new_col, header='age') self.assertEqual(data.height, 1) self.assertEqual(data.width, len(self.john) + 1) self.assertEqual(data['age'], new_col) self.assertEqual(len(data.headers), len(self.headers) + 1) def test_add_callable_column(self): """Verify adding column with values specified as callable.""" new_col = lambda x: x[0] self.founders.append_col(new_col, header='first_again') def test_header_slicing(self): """Verify slicing by headers.""" self.assertEqual(self.founders['first_name'], [self.john[0], self.george[0], self.tom[0]]) self.assertEqual(self.founders['last_name'], [self.john[1], self.george[1], self.tom[1]]) self.assertEqual(self.founders['gpa'], [self.john[2], self.george[2], self.tom[2]]) def test_get_col(self): """Verify getting columns by index""" self.assertEqual( self.founders.get_col(list(self.headers).index('first_name')), [self.john[0], self.george[0], self.tom[0]]) self.assertEqual( self.founders.get_col(list(self.headers).index('last_name')), [self.john[1], self.george[1], self.tom[1]]) self.assertEqual( self.founders.get_col(list(self.headers).index('gpa')), [self.john[2], self.george[2], self.tom[2]]) def test_data_slicing(self): """Verify slicing by data.""" # Slice individual rows self.assertEqual(self.founders[0], self.john) self.assertEqual(self.founders[:1], [self.john]) self.assertEqual(self.founders[1:2], [self.george]) self.assertEqual(self.founders[-1], self.tom) self.assertEqual(self.founders[3:], []) # Slice multiple rows self.assertEqual(self.founders[:], [self.john, self.george, self.tom]) self.assertEqual(self.founders[0:2], [self.john, self.george]) self.assertEqual(self.founders[1:3], [self.george, self.tom]) self.assertEqual(self.founders[2:], [self.tom]) def test_row_slicing(self): """Verify Row's 
__getslice__ method. Issue #184.""" john = Row(self.john) self.assertEqual(john[:], list(self.john[:])) self.assertEqual(john[0:], list(self.john[0:])) self.assertEqual(john[:2], list(self.john[:2])) self.assertEqual(john[0:2], list(self.john[0:2])) self.assertEqual(john[0:-1], list(self.john[0:-1])) def test_delete(self): """Verify deleting from dataset works.""" # Delete from front of object del self.founders[0] self.assertEqual(self.founders[:], [self.george, self.tom]) # Verify dimensions, width should NOT change self.assertEqual(self.founders.height, 2) self.assertEqual(self.founders.width, 3) # Delete from back of object del self.founders[1] self.assertEqual(self.founders[:], [self.george]) # Verify dimensions, width should NOT change self.assertEqual(self.founders.height, 1) self.assertEqual(self.founders.width, 3) # Delete from invalid index self.assertRaises(IndexError, self.founders.__delitem__, 3) def test_json_export(self): """Verify exporting dataset object as JSON""" address_id = uuid4() headers = self.headers + ('address_id',) founders = tablib.Dataset(headers=headers, title='Founders') founders.append(('John', 'Adams', 90, address_id)) founders_json = founders.export('json') expected_json = ( '[{"first_name": "John", "last_name": "Adams", "gpa": 90, ' '"address_id": "%s"}]' % str(address_id) ) self.assertEqual(founders_json, expected_json) def test_csv_export(self): """Verify exporting dataset object as CSV.""" # Build up the csv string with headers first, followed by each row csv = '' for col in self.headers: csv += col + ',' csv = csv.strip(',') + '\r\n' for founder in self.founders: for col in founder: csv += str(col) + ',' csv = csv.strip(',') + '\r\n' self.assertEqual(csv, self.founders.csv) def test_tsv_export(self): """Verify exporting dataset object as TSV.""" # Build up the tsv string with headers first, followed by each row tsv = '' for col in self.headers: tsv += col + '\t' tsv = tsv.strip('\t') + '\r\n' for founder in self.founders: for col in founder: tsv += str(col) + '\t' tsv = tsv.strip('\t') + '\r\n' self.assertEqual(tsv, self.founders.tsv) def test_html_export(self): """HTML export""" html = markup.page() html.table.open() html.thead.open() html.tr(markup.oneliner.th(self.founders.headers)) html.thead.close() for founder in self.founders: html.tr(markup.oneliner.td(founder)) html.table.close() html = str(html) self.assertEqual(html, self.founders.html) def test_html_export_none_value(self): """HTML export""" html = markup.page() html.table.open() html.thead.open() html.tr(markup.oneliner.th(['foo', '', 'bar'])) html.thead.close() html.tr(markup.oneliner.td(['foo', '', 'bar'])) html.table.close() html = str(html) headers = ['foo', None, 'bar']; d = tablib.Dataset(['foo', None, 'bar'], headers=headers) self.assertEqual(html, d.html) def test_jira_export(self): expected = """||first_name||last_name||gpa|| |John|Adams|90| |George|Washington|67| |Thomas|Jefferson|50|""" self.assertEqual(expected, self.founders.jira) def test_jira_export_no_headers(self): self.assertEqual('|a|b|c|', tablib.Dataset(['a', 'b', 'c']).jira) def test_jira_export_none_and_empty_values(self): self.assertEqual('| | |c|', tablib.Dataset(['', None, 'c']).jira) def test_jira_export_empty_dataset(self): self.assertTrue(tablib.Dataset().jira is not None) def test_latex_export(self): """LaTeX export""" expected = """\ % Note: add \\usepackage{booktabs} to your preamble % \\begin{table}[!htbp] \\centering \\caption{Founders} \\begin{tabular}{lrr} \\toprule first\\_name & last\\_name & gpa \\\\ 
\\cmidrule(r){1-1} \\cmidrule(lr){2-2} \\cmidrule(l){3-3} John & Adams & 90 \\\\ George & Washington & 67 \\\\ Thomas & Jefferson & 50 \\\\ \\bottomrule \\end{tabular} \\end{table} """ output = self.founders.latex self.assertEqual(output, expected) def test_latex_export_empty_dataset(self): self.assertTrue(tablib.Dataset().latex is not None) def test_latex_export_no_headers(self): d = tablib.Dataset() d.append(('one', 'two', 'three')) self.assertTrue('one' in d.latex) def test_latex_export_caption(self): d = tablib.Dataset() d.append(('one', 'two', 'three')) self.assertFalse('caption' in d.latex) d.title = 'Title' self.assertTrue('\\caption{Title}' in d.latex) def test_latex_export_none_values(self): headers = ['foo', None, 'bar'] d = tablib.Dataset(['foo', None, 'bar'], headers=headers) output = d.latex self.assertTrue('foo' in output) self.assertFalse('None' in output) def test_latex_escaping(self): d = tablib.Dataset(['~', '^']) output = d.latex self.assertFalse('~' in output) self.assertTrue('textasciitilde' in output) self.assertFalse('^' in output) self.assertTrue('textasciicircum' in output) def test_str_no_columns(self): d = tablib.Dataset(['a', 1], ['b', 2], ['c', 3]) output = '%s' % d self.assertEqual(output.splitlines(), [ 'a|1', 'b|2', 'c|3' ]) def test_unicode_append(self): """Passes in a single unicode character and exports.""" if is_py3: new_row = ('å', 'é') else: exec ("new_row = (u'å', u'é')") data.append(new_row) data.json data.yaml data.csv data.tsv data.xls data.xlsx data.ods data.html data.jira data.latex data.df data.rst def test_datetime_append(self): """Passes in a single datetime and a single date and exports.""" new_row = ( datetime.datetime.now(), datetime.datetime.today(), ) data.append(new_row) data.json data.yaml data.csv data.tsv data.xls data.xlsx data.ods data.html data.jira data.latex data.rst def test_book_export_no_exceptions(self): """Test that various exports don't error out.""" book = tablib.Databook() book.add_sheet(data) book.json book.yaml book.xls book.xlsx book.ods book.html data.rst def test_json_import_set(self): """Generate and import JSON set serialization.""" data.append(self.john) data.append(self.george) data.headers = self.headers _json = data.json data.json = _json self.assertEqual(json.loads(_json), json.loads(data.json)) def test_json_import_book(self): """Generate and import JSON book serialization.""" data.append(self.john) data.append(self.george) data.headers = self.headers book.add_sheet(data) _json = book.json book.json = _json self.assertEqual(json.loads(_json), json.loads(book.json)) def test_yaml_import_set(self): """Generate and import YAML set serialization.""" data.append(self.john) data.append(self.george) data.headers = self.headers _yaml = data.yaml data.yaml = _yaml self.assertEqual(_yaml, data.yaml) def test_yaml_import_book(self): """Generate and import YAML book serialization.""" data.append(self.john) data.append(self.george) data.headers = self.headers book.add_sheet(data) _yaml = book.yaml book.yaml = _yaml self.assertEqual(_yaml, book.yaml) def test_csv_import_set(self): """Generate and import CSV set serialization.""" data.append(self.john) data.append(self.george) data.headers = self.headers _csv = data.csv data.csv = _csv self.assertEqual(_csv, data.csv) def test_csv_import_set_semicolons(self): """Test for proper output with semicolon separated CSV.""" data.append(self.john) data.append(self.george) data.headers = self.headers _csv = data.get_csv(delimiter=';') data.set_csv(_csv, delimiter=';') 
self.assertEqual(_csv, data.get_csv(delimiter=';')) def test_csv_import_set_with_spaces(self): """Generate and import CSV set serialization when row values have spaces.""" data.append(('Bill Gates', 'Microsoft')) data.append(('Steve Jobs', 'Apple')) data.headers = ('Name', 'Company') _csv = data.csv data.csv = _csv self.assertEqual(_csv, data.csv) def test_csv_import_set_semicolon_with_spaces(self): """Generate and import semicolon separated CSV set serialization when row values have spaces.""" data.append(('Bill Gates', 'Microsoft')) data.append(('Steve Jobs', 'Apple')) data.headers = ('Name', 'Company') _csv = data.get_csv(delimiter=';') data.set_csv(_csv, delimiter=';') self.assertEqual(_csv, data.get_csv(delimiter=';')) def test_csv_import_set_with_newlines(self): """Generate and import CSV set serialization when row values have newlines.""" data.append(('Markdown\n=======', 'A cool language\n\nwith paragraphs')) data.append(('reStructedText\n==============', 'Another cool language\n\nwith paragraphs')) data.headers = ('title', 'body') _csv = data.csv data.csv = _csv self.assertEqual(_csv, data.csv) def test_csv_import_set_with_unicode_str(self): """Import CSV set with non-ascii characters in unicode literal""" csv_text = ( "id,givenname,surname,loginname,email,pref_firstname,pref_lastname\n" "13765,Ævar,Arnfjörð,testing,test@example.com,Ævar,Arnfjörð" ) data.csv = csv_text self.assertEqual(data.width, 7) def test_tsv_import_set(self): """Generate and import TSV set serialization.""" data.append(self.john) data.append(self.george) data.headers = self.headers _tsv = data.tsv data.tsv = _tsv self.assertEqual(_tsv, data.tsv) def test_dbf_import_set(self): data.append(self.john) data.append(self.george) data.headers = self.headers _dbf = data.dbf data.dbf = _dbf # self.assertEqual(_dbf, data.dbf) try: self.assertEqual(_dbf, data.dbf) except AssertionError: index = 0 so_far = '' for reg_char, data_char in zip(_dbf, data.dbf): so_far += chr(data_char) if reg_char != data_char and index not in [1, 2, 3]: raise AssertionError('Failing at char %s: %s vs %s %s' % ( index, reg_char, data_char, so_far)) index += 1 def test_dbf_export_set(self): """Test DBF import.""" data.append(self.john) data.append(self.george) data.append(self.tom) data.headers = self.headers _regression_dbf = (b'\x03r\x06\x06\x03\x00\x00\x00\x81\x00\xab\x00\x00' b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' b'\x00\x00\x00FIRST_NAME\x00C\x00\x00\x00\x00P\x00\x00\x00\x00\x00' b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00LAST_NAME\x00\x00C\x00' b'\x00\x00\x00P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' b'\x00\x00GPA\x00\x00\x00\x00\x00\x00\x00\x00N\x00\x00\x00\x00\n' b'\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\r' ) _regression_dbf += b' John' + (b' ' * 75) _regression_dbf += b' Adams' + (b' ' * 74) _regression_dbf += b' 90.0000000' _regression_dbf += b' George' + (b' ' * 73) _regression_dbf += b' Washington' + (b' ' * 69) _regression_dbf += b' 67.0000000' _regression_dbf += b' Thomas' + (b' ' * 73) _regression_dbf += b' Jefferson' + (b' ' * 70) _regression_dbf += b' 50.0000000' _regression_dbf += b'\x1a' if is_py3: # If in python3, decode regression string to binary. 
# _regression_dbf = bytes(_regression_dbf, 'utf-8') # _regression_dbf = _regression_dbf.replace(b'\n', b'\r') pass try: self.assertEqual(_regression_dbf, data.dbf) except AssertionError: index = 0 found_so_far = '' for reg_char, data_char in zip(_regression_dbf, data.dbf): # found_so_far += chr(data_char) if reg_char != data_char and index not in [1, 2, 3]: raise AssertionError( 'Failing at char %s: %s vs %s (found %s)' % ( index, reg_char, data_char, found_so_far)) index += 1 def test_dbf_format_detect(self): """Test the DBF format detection.""" _dbf = (b'\x03r\x06\x03\x03\x00\x00\x00\x81\x00\xab\x00\x00' b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' b'\x00\x00\x00FIRST_NAME\x00C\x00\x00\x00\x00P\x00\x00\x00\x00\x00' b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00LAST_NAME\x00\x00C\x00' b'\x00\x00\x00P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' b'\x00\x00GPA\x00\x00\x00\x00\x00\x00\x00\x00N\x00\x00\x00\x00\n' b'\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\r' ) _dbf += b' John' + (b' ' * 75) _dbf += b' Adams' + (b' ' * 74) _dbf += b' 90.0000000' _dbf += b' George' + (b' ' * 73) _dbf += b' Washington' + (b' ' * 69) _dbf += b' 67.0000000' _dbf += b' Thomas' + (b' ' * 73) _dbf += b' Jefferson' + (b' ' * 70) _dbf += b' 50.0000000' _dbf += b'\x1a' _yaml = '- {age: 90, first_name: John, last_name: Adams}' _tsv = 'foo\tbar' _csv = '1,2,3\n4,5,6\n7,8,9\n' _json = '[{"last_name": "Adams","age": 90,"first_name": "John"}]' _bunk = ( '¡¡¡¡¡¡¡¡£™∞¢£§∞§¶•¶ª∞¶•ªº••ª–º§•†•§º¶•†¥ª–º•§ƒø¥¨©πƒø†ˆ¥ç©¨√øˆ¥≈†ƒ¥ç©ø¨çˆ¥ƒçø¶' ) self.assertTrue(tablib.formats.dbf.detect(_dbf)) self.assertFalse(tablib.formats.dbf.detect(_yaml)) self.assertFalse(tablib.formats.dbf.detect(_tsv)) self.assertFalse(tablib.formats.dbf.detect(_csv)) self.assertFalse(tablib.formats.dbf.detect(_json)) self.assertFalse(tablib.formats.dbf.detect(_bunk)) def test_csv_format_detect(self): """Test CSV format detection.""" _csv = ( '1,2,3\n' '4,5,6\n' '7,8,9\n' ) _bunk = ( '¡¡¡¡¡¡¡¡£™∞¢£§∞§¶•¶ª∞¶•ªº••ª–º§•†•§º¶•†¥ª–º•§ƒø¥¨©πƒø†ˆ¥ç©¨√øˆ¥≈†ƒ¥ç©ø¨çˆ¥ƒçø¶' ) self.assertTrue(tablib.formats.csv.detect(_csv)) self.assertFalse(tablib.formats.csv.detect(_bunk)) def test_tsv_format_detect(self): """Test TSV format detection.""" _tsv = ( '1\t2\t3\n' '4\t5\t6\n' '7\t8\t9\n' ) _bunk = ( '¡¡¡¡¡¡¡¡£™∞¢£§∞§¶•¶ª∞¶•ªº••ª–º§•†•§º¶•†¥ª–º•§ƒø¥¨©πƒø†ˆ¥ç©¨√øˆ¥≈†ƒ¥ç©ø¨çˆ¥ƒçø¶' ) self.assertTrue(tablib.formats.tsv.detect(_tsv)) self.assertFalse(tablib.formats.tsv.detect(_bunk)) def test_json_format_detect(self): """Test JSON format detection.""" _json = '[{"last_name": "Adams","age": 90,"first_name": "John"}]' _bunk = ( '¡¡¡¡¡¡¡¡£™∞¢£§∞§¶•¶ª∞¶•ªº••ª–º§•†•§º¶•†¥ª–º•§ƒø¥¨©πƒø†ˆ¥ç©¨√øˆ¥≈†ƒ¥ç©ø¨çˆ¥ƒçø¶' ) self.assertTrue(tablib.formats.json.detect(_json)) self.assertFalse(tablib.formats.json.detect(_bunk)) def test_yaml_format_detect(self): """Test YAML format detection.""" _yaml = '- {age: 90, first_name: John, last_name: Adams}' _tsv = 'foo\tbar' _bunk = ( '¡¡¡¡¡¡---///\n\n\n¡¡£™∞¢£§∞§¶•¶ª∞¶•ªº••ª–º§•†•§º¶•†¥ª–º•§ƒø¥¨©πƒø†ˆ¥ç©¨√øˆ¥≈†ƒ¥ç©ø¨çˆ¥ƒçø¶' ) self.assertTrue(tablib.formats.yaml.detect(_yaml)) self.assertFalse(tablib.formats.yaml.detect(_bunk)) self.assertFalse(tablib.formats.yaml.detect(_tsv)) def test_auto_format_detect(self): """Test auto format detection.""" _yaml = '- {age: 90, first_name: John, last_name: Adams}' _json = '[{"last_name": "Adams","age": 90,"first_name": "John"}]' _csv = '1,2,3\n4,5,6\n7,8,9\n' _tsv = '1\t2\t3\n4\t5\t6\n7\t8\t9\n' _bunk = 
'¡¡¡¡¡¡---///\n\n\n¡¡£™∞¢£§∞§¶•¶ª∞¶•ªº••ª–º§•†•§º¶•†¥ª–º•§ƒø¥¨©πƒø†ˆ¥ç©¨√øˆ¥≈†ƒ¥ç©ø¨çˆ¥ƒçø¶' self.assertEqual(tablib.detect_format(_yaml), 'yaml') self.assertEqual(tablib.detect_format(_csv), 'csv') self.assertEqual(tablib.detect_format(_tsv), 'tsv') self.assertEqual(tablib.detect_format(_json), 'json') self.assertEqual(tablib.detect_format(_bunk), None) def test_transpose(self): """Transpose a dataset.""" transposed_founders = self.founders.transpose() first_row = transposed_founders[0] second_row = transposed_founders[1] self.assertEqual(transposed_founders.headers, ["first_name", "John", "George", "Thomas"]) self.assertEqual(first_row, ("last_name", "Adams", "Washington", "Jefferson")) self.assertEqual(second_row, ("gpa", 90, 67, 50)) def test_transpose_multiple_headers(self): data = tablib.Dataset() data.headers = ("first_name", "last_name", "age") data.append(('John', 'Adams', 90)) data.append(('George', 'Washington', 67)) data.append(('John', 'Tyler', 71)) self.assertEqual(data.transpose().transpose().dict, data.dict) def test_row_stacking(self): """Row stacking.""" to_join = tablib.Dataset(headers=self.founders.headers) for row in self.founders: to_join.append(row=row) row_stacked = self.founders.stack(to_join) for column in row_stacked.headers: original_data = self.founders[column] expected_data = original_data + original_data self.assertEqual(row_stacked[column], expected_data) def test_column_stacking(self): """Column stacking""" to_join = tablib.Dataset(headers=self.founders.headers) for row in self.founders: to_join.append(row=row) column_stacked = self.founders.stack_cols(to_join) for index, row in enumerate(column_stacked): original_data = self.founders[index] expected_data = original_data + original_data self.assertEqual(row, expected_data) self.assertEqual(column_stacked[0], ("John", "Adams", 90, "John", "Adams", 90)) def test_sorting(self): """Sort columns.""" sorted_data = self.founders.sort(col="first_name") self.assertEqual(sorted_data.title, 'Founders') first_row = sorted_data[0] second_row = sorted_data[2] third_row = sorted_data[1] expected_first = self.founders[1] expected_second = self.founders[2] expected_third = self.founders[0] self.assertEqual(first_row, expected_first) self.assertEqual(second_row, expected_second) self.assertEqual(third_row, expected_third) def test_remove_duplicates(self): """Unique Rows.""" self.founders.append(self.john) self.founders.append(self.george) self.founders.append(self.tom) self.assertEqual(self.founders[0], self.founders[3]) self.assertEqual(self.founders[1], self.founders[4]) self.assertEqual(self.founders[2], self.founders[5]) self.assertEqual(self.founders.height, 6) self.founders.remove_duplicates() self.assertEqual(self.founders[0], self.john) self.assertEqual(self.founders[1], self.george) self.assertEqual(self.founders[2], self.tom) self.assertEqual(self.founders.height, 3) def test_wipe(self): """Purge a dataset.""" new_row = (1, 2, 3) data.append(new_row) # Verify width/data self.assertTrue(data.width == len(new_row)) self.assertTrue(data[0] == new_row) data.wipe() new_row = (1, 2, 3, 4) data.append(new_row) self.assertTrue(data.width == len(new_row)) self.assertTrue(data[0] == new_row) def test_subset(self): """Create a subset of a dataset""" rows = (0, 2) columns = ('first_name', 'gpa') data.headers = self.headers data.append(self.john) data.append(self.george) data.append(self.tom) # Verify data is truncated subset = data.subset(rows=rows, cols=columns) self.assertEqual(type(subset), tablib.Dataset) 
self.assertEqual(subset.headers, list(columns)) self.assertEqual(subset._data[0].list, ['John', 90]) self.assertEqual(subset._data[1].list, ['Thomas', 50]) def test_formatters(self): """Confirm formatters are being triggered.""" def _formatter(cell_value): return str(cell_value).upper() self.founders.add_formatter('last_name', _formatter) for name in [r['last_name'] for r in self.founders.dict]: self.assertTrue(name.isupper()) def test_unicode_csv(self): """Check if unicode in csv export doesn't raise.""" data = tablib.Dataset() if sys.version_info[0] > 2: data.append(['\xfc', '\xfd']) else: exec ("data.append([u'\xfc', u'\xfd'])") data.csv def test_csv_column_select(self): """Build up a CSV and test selecting a column""" data = tablib.Dataset() data.csv = self.founders.csv headers = data.headers self.assertTrue(isinstance(headers[0], unicode)) orig_first_name = self.founders[self.headers[0]] csv_first_name = data[headers[0]] self.assertEqual(orig_first_name, csv_first_name) def test_csv_column_delete(self): """Build up a CSV and test deleting a column""" data = tablib.Dataset() data.csv = self.founders.csv target_header = data.headers[0] self.assertTrue(isinstance(target_header, unicode)) del data[target_header] self.assertTrue(target_header not in data.headers) def test_csv_column_sort(self): """Build up a CSV and test sorting a column by name""" data = tablib.Dataset() data.csv = self.founders.csv orig_target_header = self.founders.headers[1] target_header = data.headers[1] self.founders.sort(orig_target_header) data.sort(target_header) self.assertEqual(self.founders[orig_target_header], data[target_header]) def test_unicode_renders_markdown_table(self): # add another entry to test right field width for # integer self.founders.append(('Old', 'Man', 100500)) self.assertEqual('first_name|last_name |gpa ', unicode(self.founders).split('\n')[0]) def test_databook_add_sheet_accepts_only_dataset_instances(self): class NotDataset(object): def append(self, item): pass dataset = NotDataset() dataset.append(self.john) self.assertRaises(tablib.InvalidDatasetType, book.add_sheet, dataset) def test_databook_add_sheet_accepts_dataset_subclasses(self): class DatasetSubclass(tablib.Dataset): pass # just checking if subclass of tablib.Dataset can be added to Databook dataset = DatasetSubclass() dataset.append(self.john) dataset.append(self.tom) try: book.add_sheet(dataset) except tablib.InvalidDatasetType: self.fail("Subclass of tablib.Dataset should be accepted by Databook.add_sheet") def test_csv_formatter_support_kwargs(self): """Test CSV import and export with formatter configuration.""" data.append(self.john) data.append(self.george) data.headers = self.headers expected = 'first_name;last_name;gpa\nJohn;Adams;90\nGeorge;Washington;67\n' kwargs = dict(delimiter=';', lineterminator='\n') _csv = data.export('csv', **kwargs) self.assertEqual(expected, _csv) # the import works but consider default delimiter=',' d1 = tablib.import_set(_csv, format="csv") self.assertEqual(1, len(d1.headers)) d2 = tablib.import_set(_csv, format="csv", **kwargs) self.assertEqual(3, len(d2.headers)) def test_databook_formatter_support_kwargs(self): """Test XLSX export with formatter configuration.""" self.founders.export('xlsx', freeze_panes=False) def test_databook_formatter_with_new_lines(self): """Test XLSX export with new line in content.""" self.founders.append(('First\nSecond', 'Name', 42)) self.founders.export('xlsx') def test_rst_force_grid(self): data.append(self.john) data.append(self.george) data.headers = 
self.headers simple = tablib.formats._rst.export_set(data) grid = tablib.formats._rst.export_set(data, force_grid=True) self.assertNotEqual(simple, grid) self.assertNotIn('+', simple) self.assertIn('+', grid) class DocTests(unittest.TestCase): def test_rst_formatter_doctests(self): results = doctest.testmod(tablib.formats._rst) self.assertEqual(results.failed, 0) if __name__ == '__main__': unittest.main() tablib-0.13.0/MANIFEST.in0000644000175000017500000000010513440456503014130 0ustar josephjosephinclude HISTORY.rst README.rst LICENSE AUTHORS NOTICE test_tablib.py tablib-0.13.0/docs/0000755000175000017500000000000013440456503013326 5ustar josephjosephtablib-0.13.0/docs/_templates/0000755000175000017500000000000013440456503015463 5ustar josephjosephtablib-0.13.0/docs/_templates/sidebarintro.html0000644000175000017500000000157313440456503021044 0ustar josephjoseph

About Tablib

Tablib is an MIT Licensed format-agnostic tabular dataset library, written in Python. It allows you to import, export, and manipulate tabular data sets. Advanced features include segregation, dynamic columns, tags & filtering, and seamless format import & export.

Feedback

Feedback is greatly appreciated. If you have any questions, comments, random praise, or anonymous threats, shoot me an email.

Useful Links

tablib-0.13.0/docs/_templates/sidebarlogo.html0000644000175000017500000000052613440456503020646 0ustar josephjoseph

About Tablib

Tablib is an MIT Licensed format-agnostic tabular dataset library, written in Python. It allows you to import, export, and manipulate tabular data sets. Advanced features include segregation, dynamic columns, tags & filtering, and seamless format import & export.

tablib-0.13.0/docs/index.rst0000644000175000017500000000555713440456503015203 0ustar josephjoseph.. Tablib documentation master file, created by sphinx-quickstart on Tue Oct 5 15:25:21 2010. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Tablib: Pythonic Tabular Datasets ================================= Release v\ |version|. (:ref:`Installation `) .. Contents: .. .. .. toctree:: .. :maxdepth: 2 .. .. Indices and tables .. ================== .. .. * :ref:`genindex` .. * :ref:`modindex` .. * :ref:`search` Tablib is an `MIT Licensed `_ format-agnostic tabular dataset library, written in Python. It allows you to import, export, and manipulate tabular data sets. Advanced features include segregation, dynamic columns, tags & filtering, and seamless format import & export. :: >>> data = tablib.Dataset(headers=['First Name', 'Last Name', 'Age']) >>> for i in [('Kenneth', 'Reitz', 22), ('Bessie', 'Monke', 21)]: ... data.append(i) >>> print(data.export('json')) [{"Last Name": "Reitz", "First Name": "Kenneth", "Age": 22}, {"Last Name": "Monke", "First Name": "Bessie", "Age": 21}] >>> print(data.export('yaml')) - {Age: 22, First Name: Kenneth, Last Name: Reitz} - {Age: 21, First Name: Bessie, Last Name: Monke} >>> data.export('xlsx') >>> data.export('df') First Name Last Name Age 0 Kenneth Reitz 22 1 Bessie Monke 21 Testimonials ------------ `National Geographic `_, `Digg, Inc `_, `Northrop Grumman `_, `Discovery Channel `_, and `The Sunlight Foundation `_ use Tablib internally. **Greg Thorton** Tablib by @kennethreitz saved my life. I had to consolidate like 5 huge poorly maintained lists of domains and data. It was a breeze! **Dave Coutts** It's turning into one of my most used modules of 2010. You really hit a sweet spot for managing tabular data with a minimal amount of code and effort. **Joshua Ourisman** Tablib has made it so much easier to deal with the inevitable 'I want an Excel file!' requests from clients... **Brad Montgomery** I think you nailed the "Python Zen" with tablib. Thanks again for an awesome lib! User's Guide ------------ This part of the documentation, which is mostly prose, begins with some background information about Tablib, then focuses on step-by-step instructions for getting the most out of your datasets. .. toctree:: :maxdepth: 2 intro .. toctree:: :maxdepth: 2 install .. toctree:: :maxdepth: 2 tutorial .. toctree:: :maxdepth: 2 development API Reference ------------- If you are looking for information on a specific function, class or method, this part of the documentation is for you. .. toctree:: :maxdepth: 2 api tablib-0.13.0/docs/__init__.py0000644000175000017500000000000013440456503015425 0ustar josephjosephtablib-0.13.0/docs/development.rst0000644000175000017500000001404513440456503016406 0ustar josephjoseph.. _development: Development =========== Tablib is under active development, and contributors are welcome. If you have a feature request, suggestion, or bug report, please open a new issue on GitHub_. To submit patches, please send a pull request on GitHub_. .. _GitHub: http://github.com/kennethreitz/tablib/ .. _design: --------------------- Design Considerations --------------------- Tablib was developed with a few :pep:`20` idioms in mind. #. Beautiful is better than ugly. #. Explicit is better than implicit. #. Simple is better than complex. #. Complex is better than complicated. #. Readability counts. A few other things to keep in mind: #. Keep your code DRY. #. 
Strive to be as simple (to use) as possible. .. _scm: -------------- Source Control -------------- Tablib source is controlled with Git_, the lean, mean, distributed source control machine. The repository is publicly accessible. .. code-block:: console git clone git://github.com/kennethreitz/tablib.git The project is hosted on **GitHub**. GitHub: http://github.com/kennethreitz/tablib Git Branch Structure ++++++++++++++++++++ Feature / Hotfix / Release branches follow a `Successful Git Branching Model`_ . Git-flow_ is a great tool for managing the repository. I highly recommend it. ``master`` Current production release (|version|) on PyPi. Each release is tagged. When submitting patches, please place your feature/change in its own branch prior to opening a pull request on GitHub_. .. _Git: http://git-scm.org .. _`Successful Git Branching Model`: http://nvie.com/posts/a-successful-git-branching-model/ .. _git-flow: http://github.com/nvie/gitflow .. _newformats: ------------------ Adding New Formats ------------------ Tablib welcomes new format additions! Format suggestions include: * MySQL Dump Coding by Convention ++++++++++++++++++++ Tablib features a micro-framework for adding format support. The easiest way to understand it is to use it. So, let's define our own format, named *xxx*. 1. Write a new format interface. :class:`tablib.core` follows a simple pattern for automatically utilizing your format throughout Tablib. Function names are crucial. Example **tablib/formats/_xxx.py**: :: title = 'xxx' def export_set(dset): .... # returns string representation of given dataset def export_book(dbook): .... # returns string representation of given databook def import_set(dset, in_stream): ... # populates given Dataset with given datastream def import_book(dbook, in_stream): ... # returns Databook instance def detect(stream): ... # returns True if given stream is parsable as xxx .. admonition:: Excluding Support If the format excludes support for an import/export mechanism (*e.g.* :class:`csv ` excludes :class:`Databook ` support), simply don't define the respective functions. Appropriate errors will be raised. 2. Add your new format module to the :class:`tablib.formats.available` tuple. 3. Add a mock property to the :class:`Dataset ` class with verbose `reStructured Text`_ docstring. This alleviates IDE confusion, and allows for pretty auto-generated Sphinx_ documentation. 4. Write respective :ref:`tests `. .. _testing: -------------- Testing Tablib -------------- Testing is crucial to Tablib's stability. This stable project is used in production by many companies and developers, so it is important to be certain that every version released is fully operational. When developing a new feature for Tablib, be sure to write proper tests for it as well. When developing a feature for Tablib, the easiest way to test your changes for potential issues is to simply run the test suite directly. .. code-block:: console $ ./test_tablib.py `Jenkins CI`_, amongst other tools, supports Java's xUnit testing report format. Nose_ allows us to generate our own xUnit reports. Installing nose is simple. .. code-block:: console $ pip install nose Once installed, we can generate our xUnit report with a single command. .. code-block:: console $ nosetests test_tablib.py --with-xunit This will generate a **nosetests.xml** file, which can then be analyzed. .. _Nose: http://somethingaboutorange.com/mrl/projects/nose/ .. 
_jenkins: ---------------------- Continuous Integration ---------------------- Every commit made to the **develop** branch is automatically tested and inspected upon receipt with `Travis CI`_. If you have access to the main repository and broke the build, you will receive an email accordingly. Anyone may view the build status and history at any time. https://travis-ci.org/kennethreitz/tablib Additional reports will also be included here in the future, including :pep:`8` checks and stress reports for extremely large datasets. .. _`Jenkins CI`: https://travis-ci.org/ .. _docs: ----------------- Building the Docs ----------------- Documentation is written in the powerful, flexible, and standard Python documentation format, `reStructured Text`_. Documentation builds are powered by the powerful Pocoo project, Sphinx_. The :ref:`API Documentation ` is mostly documented inline throughout the module. The Docs live in ``tablib/docs``. In order to build them, you will first need to install Sphinx. .. code-block:: console $ pip install sphinx Then, to build an HTML version of the docs, simply run the following from the ``docs`` directory: .. code-block:: console $ make html Your ``docs/_build/html`` directory will then contain an HTML representation of the documentation, ready for publication on most web servers. You can also generate the documentation in **epub**, **latex**, **json**, *&c* similarly. .. _`reStructured Text`: http://docutils.sourceforge.net/rst.html .. _Sphinx: http://sphinx.pocoo.org .. _`GitHub Pages`: http://pages.github.com ---------- Make sure to check out the :ref:`API Documentation `. tablib-0.13.0/docs/intro.rst0000644000175000017500000000537713440456503015227 0ustar josephjoseph.. _intro: Introduction ============ This part of the documentation covers all the interfaces of Tablib. Tablib is a format-agnostic tabular dataset library, written in Python. It allows you to Pythonically import, export, and manipulate tabular data sets. Advanced features include segregation, dynamic columns, tags / filtering, and seamless format import/export. Philosophy ---------- Tablib was developed with a few :pep:`20` idioms in mind. #. Beautiful is better than ugly. #. Explicit is better than implicit. #. Simple is better than complex. #. Complex is better than complicated. #. Readability counts. All contributions to Tablib should keep these important rules in mind. .. mit: MIT License ----------- A large number of open source projects you find today are `GPL Licensed`_. While the GPL has its time and place, it should most certainly not be your go-to license for your next open source project. A project that is released as GPL cannot be used in any commercial product without the product itself also being offered as open source. The MIT, BSD, and ISC licenses are great alternatives to the GPL that allow your open-source software to be used in proprietary, closed-source software. Tablib is released under terms of `The MIT License`_. .. _`GPL Licensed`: http://www.opensource.org/licenses/gpl-license.php .. _`The MIT License`: http://www.opensource.org/licenses/mit-license.php .. 
_license: Tablib License -------------- Copyright 2017 Kenneth Reitz Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. .. _pythonsupport: Pythons Supported ----------------- At this time, the following Python platforms are officially supported: * cPython 2.7 * cPython 3.3 * cPython 3.4 * cPython 3.5 * cPython 3.6 Support for other Pythons will be rolled out soon. Now, go :ref:`Install Tablib `. tablib-0.13.0/docs/krstyle.sty0000644000175000017500000000604213440456503015566 0ustar josephjoseph\definecolor{TitleColor}{rgb}{0,0,0} \definecolor{InnerLinkColor}{rgb}{0,0,0} \renewcommand{\maketitle}{% \begin{titlepage}% \let\footnotesize\small \let\footnoterule\relax \ifsphinxpdfoutput \begingroup % This \def is required to deal with multi-line authors; it % changes \\ to ', ' (comma-space), making it pass muster for % generating document info in the PDF file. 
\def\\{, } \pdfinfo{ /Author (\@author) /Title (\@title) } \endgroup \fi \begin{flushright}% %\sphinxlogo% {\center \vspace*{3cm} \includegraphics{logo.pdf} \vspace{3cm} \par {\rm\Huge \@title \par}% {\em\LARGE \py@release\releaseinfo \par} {\large \@date \par \py@authoraddress \par }}% \end{flushright}%\par \@thanks \end{titlepage}% \cleardoublepage% \setcounter{footnote}{0}% \let\thanks\relax\let\maketitle\relax %\gdef\@thanks{}\gdef\@author{}\gdef\@title{} } \fancypagestyle{normal}{ \fancyhf{} \fancyfoot[LE,RO]{{\thepage}} \fancyfoot[LO]{{\nouppercase{\rightmark}}} \fancyfoot[RE]{{\nouppercase{\leftmark}}} \fancyhead[LE,RO]{{ \@title, \py@release}} \renewcommand{\headrulewidth}{0.4pt} \renewcommand{\footrulewidth}{0.4pt} } \fancypagestyle{plain}{ \fancyhf{} \fancyfoot[LE,RO]{{\thepage}} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0.4pt} } \titleformat{\section}{\Large}% {\py@TitleColor\thesection}{0.5em}{\py@TitleColor}{\py@NormalColor} \titleformat{\subsection}{\large}% {\py@TitleColor\thesubsection}{0.5em}{\py@TitleColor}{\py@NormalColor} \titleformat{\subsubsection}{}% {\py@TitleColor\thesubsubsection}{0.5em}{\py@TitleColor}{\py@NormalColor} \titleformat{\paragraph}{\large}% {\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor} \ChNameVar{\raggedleft\normalsize} \ChNumVar{\raggedleft \bfseries\Large} \ChTitleVar{\raggedleft \rm\Huge} \renewcommand\thepart{\@Roman\c@part} \renewcommand\part{% \pagestyle{empty} \if@noskipsec \leavevmode \fi \cleardoublepage \vspace*{6cm}% \@afterindentfalse \secdef\@part\@spart} \def\@part[#1]#2{% \ifnum \c@secnumdepth >\m@ne \refstepcounter{part}% \addcontentsline{toc}{part}{\thepart\hspace{1em}#1}% \else \addcontentsline{toc}{part}{#1}% \fi {\parindent \z@ %\center \interlinepenalty \@M \normalfont \ifnum \c@secnumdepth >\m@ne \rm\Large \partname~\thepart \par\nobreak \fi \MakeUppercase{\rm\Huge #2}% \markboth{}{}\par}% \nobreak \vskip 8ex \@afterheading} \def\@spart#1{% {\parindent \z@ %\center \interlinepenalty \@M \normalfont \huge \bfseries #1\par}% \nobreak \vskip 3ex \@afterheading} % use inconsolata font \usepackage{inconsolata} % fix single quotes, for inconsolata. (does not work) %%\usepackage{textcomp} %%\begingroup %% \catcode`'=\active %% \g@addto@macro\@noligs{\let'\textsinglequote} %% \endgroup %%\endinput tablib-0.13.0/docs/conf.py0000644000175000017500000001704113440456503014630 0ustar josephjoseph# -*- coding: utf-8 -*- # # Tablib documentation build configuration file, created by # sphinx-quickstart on Tue Oct 5 15:25:21 2010. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('..')) import tablib # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.viewcode'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Tablib' copyright = u'2016. A Kenneth Reitz Project' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = tablib.__version__ # The full version, including alpha/beta/rc tags. release = version # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'flask_theme_support.FlaskyStyle' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. 
html_use_smartypants = True # Custom sidebar templates, maps document names to template names. html_sidebars = { 'index': ['sidebarintro.html', 'sourcelink.html', 'searchbox.html'], '**': ['sidebarlogo.html', 'localtoc.html', 'relations.html', 'sourcelink.html', 'searchbox.html'] } # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. html_show_sphinx = False # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'Tablibdoc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'Tablib.tex', u'Tablib Documentation', u'Kenneth Reitz', 'manual'), ] latex_use_modindex = False latex_elements = { 'fontpkg': r'\usepackage{mathpazo}', 'papersize': 'a4paper', 'pointsize': '12pt', 'preamble': r'\usepackage{krstyle}' } latex_use_parts = True latex_additional_files = ['krstyle.sty'] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'tablib', u'Tablib Documentation', [u'Kenneth Reitz'], 1) ] sys.path.append(os.path.abspath('_themes')) html_theme_path = ['_themes'] html_theme = 'kr'tablib-0.13.0/docs/_themes/0000755000175000017500000000000013440456503014752 5ustar josephjosephtablib-0.13.0/docs/_themes/kr/0000755000175000017500000000000013440456503015366 5ustar josephjosephtablib-0.13.0/docs/_themes/kr/relations.html0000644000175000017500000000111613440456503020253 0ustar josephjoseph

Related Topics

tablib-0.13.0/docs/_themes/kr/theme.conf0000644000175000017500000000017113440456503017336 0ustar josephjoseph[theme] inherit = basic stylesheet = flasky.css pygments_style = flask_theme_support.FlaskyStyle [options] touch_icon = tablib-0.13.0/docs/_themes/kr/layout.html0000644000175000017500000000370413440456503017575 0ustar josephjoseph{%- extends "basic/layout.html" %} {%- block extrahead %} {{ super() }} {% if theme_touch_icon %} {% endif %} {% endblock %} {%- block relbar2 %}{% endblock %} {%- block footer %} Fork me on GitHub {%- endblock %} tablib-0.13.0/docs/_themes/kr/static/0000755000175000017500000000000013440456503016655 5ustar josephjosephtablib-0.13.0/docs/_themes/kr/static/flasky.css_t0000644000175000017500000001645413440456503021215 0ustar josephjoseph/* * flasky.css_t * ~~~~~~~~~~~~ * * :copyright: Copyright 2010 by Armin Ronacher. Modifications by Kenneth Reitz. * :license: Flask Design License, see LICENSE for details. */ {% set page_width = '940px' %} {% set sidebar_width = '220px' %} @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: 'goudy old style', 'minion pro', 'bell mt', Georgia, 'Hiragino Mincho Pro'; font-size: 17px; background-color: white; color: #000; margin: 0; padding: 0; } div.document { width: {{ page_width }}; margin: 30px auto 0 auto; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 {{ sidebar_width }}; } div.sphinxsidebar { width: {{ sidebar_width }}; } hr { border: 1px solid #B1B4B6; } div.body { background-color: #ffffff; color: #3E4349; padding: 0 30px 0 30px; } img.floatingflask { padding: 0 0 10px 10px; float: right; } div.footer { width: {{ page_width }}; margin: 20px auto 30px auto; font-size: 14px; color: #888; text-align: right; } div.footer a { color: #888; } div.related { display: none; } div.sphinxsidebar a { color: #444; text-decoration: none; border-bottom: 1px dotted #999; } div.sphinxsidebar a:hover { border-bottom: 1px solid #999; } div.sphinxsidebar { font-size: 14px; line-height: 1.5; } div.sphinxsidebarwrapper { padding: 18px 10px; } div.sphinxsidebarwrapper p.logo { padding: 0 0 20px 0; margin: 0; text-align: center; } div.sphinxsidebar h3, div.sphinxsidebar h4 { font-family: 'Garamond', 'Georgia', serif; color: #444; font-size: 24px; font-weight: normal; margin: 0 0 5px 0; padding: 0; } div.sphinxsidebar h4 { font-size: 20px; } div.sphinxsidebar h3 a { color: #444; } div.sphinxsidebar p.logo a, div.sphinxsidebar h3 a, div.sphinxsidebar p.logo a:hover, div.sphinxsidebar h3 a:hover { border: none; } div.sphinxsidebar p { color: #555; margin: 10px 0; } div.sphinxsidebar ul { margin: 10px 0; padding: 0; color: #000; } div.sphinxsidebar input { border: 1px solid #ccc; font-family: 'Georgia', serif; font-size: 1em; } /* -- body styles ----------------------------------------------------------- */ a { color: #004B6B; text-decoration: underline; } a:hover { color: #6D4100; text-decoration: underline; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: 'Garamond', 'Georgia', serif; font-weight: normal; margin: 30px 0px 10px 0px; padding: 0; } div.body h1 { margin-top: 0; padding-top: 0; font-size: 240%; } div.body h2 { font-size: 180%; } div.body h3 { font-size: 150%; } div.body h4 { font-size: 130%; } div.body h5 { font-size: 100%; } div.body h6 { font-size: 100%; } a.headerlink { color: #ddd; padding: 0 4px; text-decoration: none; } a.headerlink:hover { color: #444; background: #eaeaea; 
} div.body p, div.body dd, div.body li { line-height: 1.4em; } div.admonition { background: #fafafa; margin: 20px -30px; padding: 10px 30px; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; } div.admonition tt.xref, div.admonition a tt { border-bottom: 1px solid #fafafa; } dd div.admonition { margin-left: -60px; padding-left: 60px; } div.admonition p.admonition-title { font-family: 'Garamond', 'Georgia', serif; font-weight: normal; font-size: 24px; margin: 0 0 10px 0; padding: 0; line-height: 1; } div.admonition p.last { margin-bottom: 0; } div.highlight { background-color: white; } dt:target, .highlight { background: #FAF3E8; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre, tt { font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.9em; } img.screenshot { } tt.descname, tt.descclassname { font-size: 0.95em; } tt.descname { padding-right: 0.08em; } img.screenshot { -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils { border: 1px solid #888; -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils td, table.docutils th { border: 1px solid #888; padding: 0.25em 0.7em; } table.field-list, table.footnote { border: none; -moz-box-shadow: none; -webkit-box-shadow: none; box-shadow: none; } table.footnote { margin: 15px 0; width: 100%; border: 1px solid #eee; background: #fdfdfd; font-size: 0.9em; } table.footnote + table.footnote { margin-top: -15px; border-top: none; } table.field-list th { padding: 0 0.8em 0 0; } table.field-list td { padding: 0; } table.footnote td.label { width: 0px; padding: 0.3em 0 0.3em 0.5em; } table.footnote td { padding: 0.3em 0.5em; } dl { margin: 0; padding: 0; } dl dd { margin-left: 30px; } blockquote { margin: 0 0 0 30px; padding: 0; } ul, ol { margin: 10px 0 10px 30px; padding: 0; } pre { background: #eee; padding: 7px 30px; margin: 15px -30px; line-height: 1.3em; } dl pre, blockquote pre, li pre { margin-left: -60px; padding-left: 60px; } dl dl pre { margin-left: -90px; padding-left: 90px; } tt { background-color: #ecf0f3; color: #222; /* padding: 1px 2px; */ } tt.xref, a tt { background-color: #FBFBFB; border-bottom: 1px solid white; } a.reference { text-decoration: none; border-bottom: 1px dotted #004B6B; } a.reference:hover { border-bottom: 1px solid #6D4100; } a.footnote-reference { text-decoration: none; font-size: 0.7em; vertical-align: top; border-bottom: 1px dotted #004B6B; } a.footnote-reference:hover { border-bottom: 1px solid #6D4100; } a:hover tt { background: #EEE; } @media screen and (max-width: 600px) { div.sphinxsidebar { display: none; } div.documentwrapper { margin-left: 0; margin-top: 0; margin-right: 0; margin-bottom: 0; } div.bodywrapper { margin-top: 0; margin-right: 0; margin-bottom: 0; margin-left: 0; } ul { margin-left: 0; } .document { width: auto; } .bodywrapper { margin: 0; } .footer { width: auto; } } /* scrollbars */ ::-webkit-scrollbar { width: 6px; height: 6px; } ::-webkit-scrollbar-button:start:decrement, ::-webkit-scrollbar-button:end:increment { display: block; height: 10px; } ::-webkit-scrollbar-button:vertical:increment { background-color: #fff; } ::-webkit-scrollbar-track-piece { background-color: #eee; -webkit-border-radius: 3px; } 
::-webkit-scrollbar-thumb:vertical {
    height: 50px;
    background-color: #ccc;
    -webkit-border-radius: 3px;
}
::-webkit-scrollbar-thumb:horizontal {
    width: 50px;
    background-color: #ccc;
    -webkit-border-radius: 3px;
}

/* misc. */
.revsys-inline {
    display: none!important;
}tablib-0.13.0/docs/_themes/kr/static/small_flask.css0000644000175000017500000000172013440456503021657 0ustar josephjoseph/*
 * small_flask.css_t
 * ~~~~~~~~~~~~~~~~~
 *
 * :copyright: Copyright 2010 by Armin Ronacher.
 * :license: Flask Design License, see LICENSE for details.
 */

body {
    margin: 0;
    padding: 20px 30px;
}

div.documentwrapper {
    float: none;
    background: white;
}

div.sphinxsidebar {
    display: block;
    float: none;
    width: 102.5%;
    margin: 50px -30px -20px -30px;
    padding: 10px 20px;
    background: #333;
    color: white;
}

div.sphinxsidebar h3, div.sphinxsidebar h4, div.sphinxsidebar p,
div.sphinxsidebar h3 a {
    color: white;
}

div.sphinxsidebar a {
    color: #aaa;
}

div.sphinxsidebar p.logo {
    display: none;
}

div.document {
    width: 100%;
    margin: 0;
}

div.related {
    display: block;
    margin: 0;
    padding: 10px 0 20px 0;
}

div.related ul,
div.related ul li {
    margin: 0;
    padding: 0;
}

div.footer {
    display: none;
}

div.bodywrapper {
    margin: 0;
}

div.body {
    min-height: 0;
    padding: 0;
}
tablib-0.13.0/docs/_themes/.gitignore0000644000175000017500000000002613440456503016740 0ustar josephjoseph*.pyc
*.pyo
.DS_Store
tablib-0.13.0/docs/_themes/README.rst0000644000175000017500000000132613440456503016443 0ustar josephjosephkrTheme Sphinx Style
====================

This repository contains sphinx styles Kenneth Reitz uses in most of
his projects. It is a derivative of Mitsuhiko's themes for Flask and Flask related
projects. To use this style in your Sphinx documentation, follow
this guide:

1. put this folder as _themes into your docs folder. Alternatively
   you can also use git submodules to check out the contents there.

2. add this to your conf.py: ::

   sys.path.append(os.path.abspath('_themes'))
   html_theme_path = ['_themes']
   html_theme = 'kr'

The following themes exist:

**kr**
    the standard flask documentation theme for large projects

**kr_small**
    small one-page theme. Intended to be used by very small addon libraries.
tablib-0.13.0/docs/_themes/LICENSE0000644000175000017500000000350513440456503015762 0ustar josephjosephModifications:

Copyright (c) 2011 Kenneth Reitz.

Original Project:

Copyright (c) 2010 by Armin Ronacher.

Some rights reserved.

Redistribution and use in source and binary forms of the theme, with or
without modification, are permitted provided that the following conditions
are met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above
  copyright notice, this list of conditions and the following
  disclaimer in the documentation and/or other materials provided
  with the distribution.

* The names of the contributors may not be used to endorse or
  promote products derived from this software without specific
  prior written permission.

We kindly ask you to only use these themes in an unmodified manner just
for Flask and Flask-related products, not for unrelated projects. If you like
the visual style and want to use it for your own projects, please consider
making some larger changes to the themes (such as changing font faces, sizes,
colors or margins).
THIS THEME IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS THEME, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. tablib-0.13.0/docs/_themes/flask_theme_support.py0000644000175000017500000001141313440456503021402 0ustar josephjoseph# flasky extensions. flasky pygments style based on tango style from pygments.style import Style from pygments.token import Keyword, Name, Comment, String, Error, \ Number, Operator, Generic, Whitespace, Punctuation, Other, Literal class FlaskyStyle(Style): background_color = "#f8f8f8" default_style = "" styles = { # No corresponding class for the following: #Text: "", # class: '' Whitespace: "underline #f8f8f8", # class: 'w' Error: "#a40000 border:#ef2929", # class: 'err' Other: "#000000", # class 'x' Comment: "italic #8f5902", # class: 'c' Comment.Preproc: "noitalic", # class: 'cp' Keyword: "bold #004461", # class: 'k' Keyword.Constant: "bold #004461", # class: 'kc' Keyword.Declaration: "bold #004461", # class: 'kd' Keyword.Namespace: "bold #004461", # class: 'kn' Keyword.Pseudo: "bold #004461", # class: 'kp' Keyword.Reserved: "bold #004461", # class: 'kr' Keyword.Type: "bold #004461", # class: 'kt' Operator: "#582800", # class: 'o' Operator.Word: "bold #004461", # class: 'ow' - like keywords Punctuation: "bold #000000", # class: 'p' # because special names such as Name.Class, Name.Function, etc. # are not recognized as such later in the parsing, we choose them # to look the same as ordinary variables. 
Name: "#000000", # class: 'n' Name.Attribute: "#c4a000", # class: 'na' - to be revised Name.Builtin: "#004461", # class: 'nb' Name.Builtin.Pseudo: "#3465a4", # class: 'bp' Name.Class: "#000000", # class: 'nc' - to be revised Name.Constant: "#000000", # class: 'no' - to be revised Name.Decorator: "#888", # class: 'nd' - to be revised Name.Entity: "#ce5c00", # class: 'ni' Name.Exception: "bold #cc0000", # class: 'ne' Name.Function: "#000000", # class: 'nf' Name.Property: "#000000", # class: 'py' Name.Label: "#f57900", # class: 'nl' Name.Namespace: "#000000", # class: 'nn' - to be revised Name.Other: "#000000", # class: 'nx' Name.Tag: "bold #004461", # class: 'nt' - like a keyword Name.Variable: "#000000", # class: 'nv' - to be revised Name.Variable.Class: "#000000", # class: 'vc' - to be revised Name.Variable.Global: "#000000", # class: 'vg' - to be revised Name.Variable.Instance: "#000000", # class: 'vi' - to be revised Number: "#990000", # class: 'm' Literal: "#000000", # class: 'l' Literal.Date: "#000000", # class: 'ld' String: "#4e9a06", # class: 's' String.Backtick: "#4e9a06", # class: 'sb' String.Char: "#4e9a06", # class: 'sc' String.Doc: "italic #8f5902", # class: 'sd' - like a comment String.Double: "#4e9a06", # class: 's2' String.Escape: "#4e9a06", # class: 'se' String.Heredoc: "#4e9a06", # class: 'sh' String.Interpol: "#4e9a06", # class: 'si' String.Other: "#4e9a06", # class: 'sx' String.Regex: "#4e9a06", # class: 'sr' String.Single: "#4e9a06", # class: 's1' String.Symbol: "#4e9a06", # class: 'ss' Generic: "#000000", # class: 'g' Generic.Deleted: "#a40000", # class: 'gd' Generic.Emph: "italic #000000", # class: 'ge' Generic.Error: "#ef2929", # class: 'gr' Generic.Heading: "bold #000080", # class: 'gh' Generic.Inserted: "#00A000", # class: 'gi' Generic.Output: "#888", # class: 'go' Generic.Prompt: "#745334", # class: 'gp' Generic.Strong: "bold #000000", # class: 'gs' Generic.Subheading: "bold #800080", # class: 'gu' Generic.Traceback: "bold #a40000", # class: 'gt' } tablib-0.13.0/docs/_themes/kr_small/0000755000175000017500000000000013440456503016556 5ustar josephjosephtablib-0.13.0/docs/_themes/kr_small/theme.conf0000644000175000017500000000027013440456503020526 0ustar josephjoseph[theme] inherit = basic stylesheet = flasky.css nosidebar = true pygments_style = flask_theme_support.FlaskyStyle [options] index_logo = '' index_logo_height = 120px github_fork = '' tablib-0.13.0/docs/_themes/kr_small/layout.html0000644000175000017500000000124713440456503020765 0ustar josephjoseph{% extends "basic/layout.html" %} {% block header %} {{ super() }} {% if pagename == 'index' %}
{% endif %} {% endblock %} {% block footer %} {% if pagename == 'index' %}
{% endif %} {% endblock %} {# do not display relbars #} {% block relbar1 %}{% endblock %} {% block relbar2 %} {% if theme_github_fork %} Fork me on GitHub {% endif %} {% endblock %} {% block sidebar1 %}{% endblock %} {% block sidebar2 %}{% endblock %} tablib-0.13.0/docs/_themes/kr_small/static/0000755000175000017500000000000013440456503020045 5ustar josephjosephtablib-0.13.0/docs/_themes/kr_small/static/flasky.css_t0000644000175000017500000001075313440456503022401 0ustar josephjoseph/* * flasky.css_t * ~~~~~~~~~~~~ * * Sphinx stylesheet -- flasky theme based on nature theme. * * :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. * */ @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: 'Georgia', serif; font-size: 17px; color: #000; background: white; margin: 0; padding: 0; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 40px auto 0 auto; width: 700px; } hr { border: 1px solid #B1B4B6; } div.body { background-color: #ffffff; color: #3E4349; padding: 0 30px 30px 30px; } img.floatingflask { padding: 0 0 10px 10px; float: right; } div.footer { text-align: right; color: #888; padding: 10px; font-size: 14px; width: 650px; margin: 0 auto 40px auto; } div.footer a { color: #888; text-decoration: underline; } div.related { line-height: 32px; color: #888; } div.related ul { padding: 0 0 0 10px; } div.related a { color: #444; } /* -- body styles ----------------------------------------------------------- */ a { color: #004B6B; text-decoration: underline; } a:hover { color: #6D4100; text-decoration: underline; } div.body { padding-bottom: 40px; /* saved for footer */ } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: 'Garamond', 'Georgia', serif; font-weight: normal; margin: 30px 0px 10px 0px; padding: 0; } {% if theme_index_logo %} div.indexwrapper h1 { text-indent: -999999px; background: url({{ theme_index_logo }}) no-repeat center center; height: {{ theme_index_logo_height }}; } {% endif %} div.body h2 { font-size: 180%; } div.body h3 { font-size: 150%; } div.body h4 { font-size: 130%; } div.body h5 { font-size: 100%; } div.body h6 { font-size: 100%; } a.headerlink { color: white; padding: 0 4px; text-decoration: none; } a.headerlink:hover { color: #444; background: #eaeaea; } div.body p, div.body dd, div.body li { line-height: 1.4em; } div.admonition { background: #fafafa; margin: 20px -30px; padding: 10px 30px; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; } div.admonition p.admonition-title { font-family: 'Garamond', 'Georgia', serif; font-weight: normal; font-size: 24px; margin: 0 0 10px 0; padding: 0; line-height: 1; } div.admonition p.last { margin-bottom: 0; } div.highlight{ background-color: white; } dt:target, .highlight { background: #FAF3E8; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } div.warning { background-color: #ffe4e4; border: 1px solid #f66; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre, tt { font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.85em; } img.screenshot { } tt.descname, tt.descclassname { font-size: 0.95em; } tt.descname { padding-right: 0.08em; } img.screenshot { -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; 
box-shadow: 2px 2px 4px #eee;
}

table.docutils {
    border: 1px solid #888;
    -moz-box-shadow: 2px 2px 4px #eee;
    -webkit-box-shadow: 2px 2px 4px #eee;
    box-shadow: 2px 2px 4px #eee;
}

table.docutils td, table.docutils th {
    border: 1px solid #888;
    padding: 0.25em 0.7em;
}

table.field-list, table.footnote {
    border: none;
    -moz-box-shadow: none;
    -webkit-box-shadow: none;
    box-shadow: none;
}

table.footnote {
    margin: 15px 0;
    width: 100%;
    border: 1px solid #eee;
}

table.field-list th {
    padding: 0 0.8em 0 0;
}

table.field-list td {
    padding: 0;
}

table.footnote td {
    padding: 0.5em;
}

dl {
    margin: 0;
    padding: 0;
}

dl dd {
    margin-left: 30px;
}

pre {
    padding: 0;
    margin: 15px -30px;
    padding: 8px;
    line-height: 1.3em;
    padding: 7px 30px;
    background: #eee;
    border-radius: 2px;
    -moz-border-radius: 2px;
    -webkit-border-radius: 2px;
}

dl pre {
    margin-left: -60px;
    padding-left: 60px;
}

tt {
    background-color: #ecf0f3;
    color: #222;
    /* padding: 1px 2px; */
}

tt.xref, a tt {
    background-color: #FBFBFB;
}

a:hover tt {
    background: #EEE;
}
tablib-0.13.0/docs/api.rst0000644000175000017500000000164613440456503014640 0ustar josephjoseph.. _api:

===
API
===

.. module:: tablib

This part of the documentation covers all the interfaces of Tablib.  For
parts where Tablib depends on external libraries, we document the most
important right here and provide links to the canonical documentation.

--------------
Dataset Object
--------------

.. autoclass:: Dataset
   :inherited-members:

---------------
Databook Object
---------------

.. autoclass:: Databook
   :inherited-members:

---------
Functions
---------

.. autofunction:: detect

.. autofunction:: import_set

----------
Exceptions
----------

.. class:: InvalidDatasetType

    You're trying to add something that doesn't quite look right.

.. class:: InvalidDimensions

    You're trying to add something that doesn't quite fit right.

.. class:: UnsupportedFormat

    You're trying to add something that doesn't quite taste right.

Now, go start some :ref:`Tablib Development <development>`.tablib-0.13.0/docs/tutorial.rst0000644000175000017500000002330513440456503015726 0ustar josephjoseph.. _quickstart:

==========
Quickstart
==========

.. module:: tablib

Eager to get started? This page gives a good introduction to getting started
with Tablib. This assumes you already have Tablib installed. If you do not,
head over to the :ref:`Installation <install>` section.

First, make sure that:

* Tablib is :ref:`installed <install>`
* Tablib is :ref:`up-to-date <updates>`

Let's get started with some simple use cases and examples.

------------------
Creating a Dataset
------------------

A :class:`Dataset <tablib.Dataset>` is nothing more than what its name implies—a set of data.

Creating your own instance of the :class:`tablib.Dataset` object is simple. ::

    data = tablib.Dataset()

You can now start filling this :class:`Dataset <tablib.Dataset>` object with data.

.. admonition:: Example Context

    From here on out, if you see ``data``, assume that it's a fresh
    :class:`Dataset <tablib.Dataset>` object.

-----------
Adding Rows
-----------

Let's say you want to collect a simple list of names. ::

    # collection of names
    names = ['Kenneth Reitz', 'Bessie Monke']

    for name in names:
        # split name appropriately
        fname, lname = name.split()

        # add names to Dataset
        data.append([fname, lname])

You can get a nice, Pythonic view of the dataset at any time with :class:`Dataset.dict`::

    >>> data.dict
    [('Kenneth', 'Reitz'), ('Bessie', 'Monke')]

--------------
Adding Headers
--------------

It's time to enhance our :class:`Dataset` by giving our columns some titles.
To do so, set :class:`Dataset.headers`.
::

    data.headers = ['First Name', 'Last Name']

Now our data looks a little different. ::

    >>> data.dict
    [{'Last Name': 'Reitz', 'First Name': 'Kenneth'}, {'Last Name': 'Monke', 'First Name': 'Bessie'}]

--------------
Adding Columns
--------------

Now that we have a basic :class:`Dataset` in place, let's add a column of
**ages** to it. ::

    data.append_col([22, 20], header='Age')

Let's view the data now. ::

    >>> data.dict
    [{'Last Name': 'Reitz', 'First Name': 'Kenneth', 'Age': 22}, {'Last Name': 'Monke', 'First Name': 'Bessie', 'Age': 20}]

It's that easy.

--------------
Importing Data
--------------

Creating a :class:`tablib.Dataset` object by importing a pre-existing file is simple. ::

    imported_data = Dataset().load(open('data.csv').read())

This detects what sort of data is being passed in, and uses an appropriate
formatter to do the import. So you can import from a variety of different
file types.

--------------
Exporting Data
--------------

Tablib's killer feature is the ability to export your :class:`Dataset` objects
into a number of formats.

**Comma-Separated Values** ::

    >>> data.export('csv')
    Last Name,First Name,Age
    Reitz,Kenneth,22
    Monke,Bessie,20

**JavaScript Object Notation** ::

    >>> data.export('json')
    [{"Last Name": "Reitz", "First Name": "Kenneth", "Age": 22}, {"Last Name": "Monke", "First Name": "Bessie", "Age": 20}]

**YAML Ain't Markup Language** ::

    >>> data.export('yaml')
    - {Age: 22, First Name: Kenneth, Last Name: Reitz}
    - {Age: 20, First Name: Bessie, Last Name: Monke}

**Microsoft Excel** ::

    >>> data.export('xls')

**Pandas DataFrame** ::

    >>> data.export('df')
      First Name Last Name  Age
    0    Kenneth     Reitz   22
    1     Bessie     Monke   20

------------------------
Selecting Rows & Columns
------------------------

You can slice and dice your data, just like a standard Python list. ::

    >>> data[0]
    ('Kenneth', 'Reitz', 22)

If we had a set of data consisting of thousands of rows, it could be useful to
get a list of values in a column.  To do so, we access the :class:`Dataset` as
if it were a standard Python dictionary. ::

    >>> data['First Name']
    ['Kenneth', 'Bessie']

You can also access the column using its index. ::

    >>> data.headers
    ['Last Name', 'First Name', 'Age']
    >>> data.get_col(1)
    ['Kenneth', 'Bessie']

Let's find the average age. ::

    >>> ages = data['Age']
    >>> float(sum(ages)) / len(ages)
    21.0

-----------------------
Removing Rows & Columns
-----------------------

It's easier than you could imagine. Delete a column::

    >>> del data['Col Name']

Delete a range of rows::

    >>> del data[0:12]

==============
Advanced Usage
==============

This part of the documentation serves to give you ideas that are otherwise
hard to extract from the :ref:`API Documentation <api>`.

And now for something completely different.

.. _dyncols:

---------------
Dynamic Columns
---------------

.. versionadded:: 0.8.3

Thanks to Josh Ourisman, Tablib now supports adding dynamic columns. A dynamic
column is a single callable object (*e.g.* a function).

Let's add a dynamic column to our :class:`Dataset` object. In this example, we
have a function that generates a random grade for our students. ::

    import random

    def random_grade(row):
        """Returns a random grade for entry."""
        return (random.randint(60,100)/100.0)

    data.append_col(random_grade, header='Grade')

Let's have a look at our data. ::

    >>> data.export('yaml')
    - {Age: 22, First Name: Kenneth, Grade: 0.6, Last Name: Reitz}
    - {Age: 20, First Name: Bessie, Grade: 0.75, Last Name: Monke}

Let's remove that column.
::

    >>> del data['Grade']

When you add a dynamic column, the first argument that is passed in to the
given callable is the current data row. You can use this to perform
calculations against your data row.

For example, we can use the data available in the row to guess the gender of
a student. ::

    def guess_gender(row):
        """Calculates gender of given student data row."""
        m_names = ('Kenneth', 'Mike', 'Yuri')
        f_names = ('Bessie', 'Samantha', 'Heather')

        name = row[0]

        if name in m_names:
            return 'Male'
        elif name in f_names:
            return 'Female'
        else:
            return 'Unknown'

Adding this function to our dataset as a dynamic column would result in: ::

    >>> data.export('yaml')
    - {Age: 22, First Name: Kenneth, Gender: Male, Last Name: Reitz}
    - {Age: 20, First Name: Bessie, Gender: Female, Last Name: Monke}

.. _tags:

----------------------------
Filtering Datasets with Tags
----------------------------

.. versionadded:: 0.9.0

When constructing a :class:`Dataset` object, you can add tags to rows by
specifying the ``tags`` parameter. This allows you to filter your
:class:`Dataset` later. This can be useful to separate rows of data based on
arbitrary criteria (*e.g.* origin) that you don't want to include in your
:class:`Dataset`.

Let's tag some students. ::

    students = tablib.Dataset()

    students.headers = ['first', 'last']

    students.rpush(['Kenneth', 'Reitz'], tags=['male', 'technical'])
    students.rpush(['Bessie', 'Monke'], tags=['female', 'creative'])

Now that we have extra meta-data on our rows, we can easily filter our
:class:`Dataset`. Let's just see the male students. ::

    >>> students.filter(['male']).yaml
    - {first: Kenneth, last: Reitz}

It's that simple. The original :class:`Dataset` is untouched.

Open an Excel Workbook and read first sheet
-------------------------------------------

To open an Excel 2007 and later workbook with a single sheet (or a workbook
with multiple sheets but you just want the first sheet), use the following: ::

    data = tablib.Dataset()
    data.xlsx = open('my_excel_file.xlsx', 'rb').read()
    print(data)

Excel Workbook With Multiple Sheets
------------------------------------

When dealing with a large number of :class:`Datasets <Dataset>` in
spreadsheet format, it's quite common to group multiple spreadsheets into a
single Excel file, known as a Workbook. Tablib makes it extremely easy to
build workbooks with the handy :class:`Databook` class.

Let's say we have 3 different :class:`Datasets <Dataset>`. All we have to do
is add them to a :class:`Databook` object... ::

    book = tablib.Databook((data1, data2, data3))

... and export to Excel just like :class:`Datasets <Dataset>`. ::

    with open('students.xls', 'wb') as f:
        f.write(book.xls)

The resulting ``students.xls`` file will contain a separate spreadsheet for
each :class:`Dataset` object in the :class:`Databook`.

.. admonition:: Binary Warning

    Make sure to open the output file in binary mode.

.. _separators:

----------
Separators
----------

.. versionadded:: 0.8.2

When constructing a spreadsheet, it's often useful to create a blank row
containing information on the upcoming data. So, ::

    daniel_tests = [
        ('11/24/09', 'Math 101 Mid-term Exam', 56.),
        ('05/24/10', 'Math 101 Final Exam', 62.)
    ]

    suzie_tests = [
        ('11/24/09', 'Math 101 Mid-term Exam', 56.),
        ('05/24/10', 'Math 101 Final Exam', 62.)
    ]

    # Create new dataset
    tests = tablib.Dataset()
    tests.headers = ['Date', 'Test Name', 'Grade']

    # Daniel's Tests
    tests.append_separator('Daniel\'s Scores')

    for test_row in daniel_tests:
        tests.append(test_row)

    # Susie's Tests
    tests.append_separator('Susie\'s Scores')

    for test_row in suzie_tests:
        tests.append(test_row)

    # Write spreadsheet to disk
    with open('grades.xls', 'wb') as f:
        f.write(tests.export('xls'))

The resulting **grades.xls** will have the following layout:

Daniel's Scores:

* '11/24/09', 'Math 101 Mid-term Exam', 56.
* '05/24/10', 'Math 101 Final Exam', 62.

Susie's Scores:

* '11/24/09', 'Math 101 Mid-term Exam', 56.
* '05/24/10', 'Math 101 Final Exam', 62.

.. admonition:: Format Support

    At this time, only :class:`Excel ` output supports separators.

----

Now, go check out the :ref:`API Documentation <api>` or begin
:ref:`Tablib Development <development>`.
tablib-0.13.0/docs/Makefile0000644000175000017500000001075613440456503014773 0ustar josephjoseph# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	-rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Tablib.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Tablib.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/Tablib"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Tablib"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	make -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."
tablib-0.13.0/docs/install.rst0000644000175000017500000000254313440456503015532 0ustar josephjoseph.. _install:

Installation
============

This part of the documentation covers the installation of Tablib.
The first step to using any software package is getting it properly installed.

.. _installing:

-----------------
Installing Tablib
-----------------

Distribute & Pip
----------------

Of course, the recommended way to install Tablib is with `pip `_:

.. code-block:: console

    $ pip install tablib[pandas]

-------------------
Download the Source
-------------------

You can also install tablib from source. The latest release (|version|)
is available from GitHub.

* tarball_
* zipball_

Once you have a copy of the source, you can embed it in your Python
package, or install it into your site-packages easily.

.. code-block:: console

    $ python setup.py install

To download the full source history from Git, see :ref:`Source Control `.

.. _tarball: http://github.com/kennethreitz/tablib/tarball/master
.. _zipball: http://github.com/kennethreitz/tablib/zipball/master

.. _updates:

Staying Updated
---------------

The latest version of Tablib will always be available here:

* PyPI: http://pypi.python.org/pypi/tablib/
* GitHub: http://github.com/kennethreitz/tablib/

When a new version is available, upgrading is simple::

    $ pip install tablib --upgrade

Now, go get a :ref:`Quick Start <quickstart>`.
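If you want to confirm which version you ended up with, a minimal check
(assuming only that the install above succeeded):

.. code-block:: console

    $ python -c "import tablib; print(tablib.__version__)"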
tablib-0.13.0/tablib/0000755000175000017500000000000013440456503013633 5ustar josephjosephtablib-0.13.0/tablib/formats/0000755000175000017500000000000013440456503015306 5ustar josephjosephtablib-0.13.0/tablib/formats/_df.py0000644000175000017500000000202013440456503016402 0ustar josephjoseph""" Tablib - DataFrame Support. """ import sys if sys.version_info[0] > 2: from io import BytesIO else: from cStringIO import StringIO as BytesIO try: from pandas import DataFrame except ImportError: DataFrame = None import tablib from tablib.compat import unicode title = 'df' extensions = ('df', ) def detect(stream): """Returns True if given stream is a DataFrame.""" if DataFrame is None: return False try: DataFrame(stream) return True except ValueError: return False def export_set(dset, index=None): """Returns DataFrame representation of DataBook.""" if DataFrame is None: raise NotImplementedError( 'DataFrame Format requires `pandas` to be installed.' ' Try `pip install tablib[pandas]`.') dataframe = DataFrame(dset.dict, columns=dset.headers) return dataframe def import_set(dset, in_stream): """Returns dataset from DataFrame.""" dset.wipe() dset.dict = in_stream.to_dict(orient='records') tablib-0.13.0/tablib/formats/_tsv.py0000644000175000017500000000137113440456503016635 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - TSV (Tab Separated Values) Support. """ from tablib.compat import unicode from tablib.formats._csv import ( export_set as export_set_wrapper, import_set as import_set_wrapper, detect as detect_wrapper, ) title = 'tsv' extensions = ('tsv',) DELIMITER = unicode('\t') def export_set(dataset): """Returns TSV representation of Dataset.""" return export_set_wrapper(dataset, delimiter=DELIMITER) def import_set(dset, in_stream, headers=True): """Returns dataset from TSV stream.""" return import_set_wrapper(dset, in_stream, headers=headers, delimiter=DELIMITER) def detect(stream): """Returns True if given stream is valid TSV.""" return detect_wrapper(stream, delimiter=DELIMITER) tablib-0.13.0/tablib/formats/_html.py0000644000175000017500000000315113440456503016763 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - HTML export support. 
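A minimal usage sketch (illustrative; the names and rows here are made up)::

    import tablib

    data = tablib.Dataset(('Kenneth', 'Reitz'), headers=('first', 'last'))
    table_markup = data.export('html')   # the rendered <table> as a string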
""" import sys if sys.version_info[0] > 2: from io import BytesIO as StringIO from tablib.packages import markup3 as markup else: from cStringIO import StringIO from tablib.packages import markup import tablib from tablib.compat import unicode import codecs BOOK_ENDINGS = 'h3' title = 'html' extensions = ('html', ) def export_set(dataset): """HTML representation of a Dataset.""" stream = StringIO() page = markup.page() page.table.open() if dataset.headers is not None: new_header = [item if item is not None else '' for item in dataset.headers] page.thead.open() headers = markup.oneliner.th(new_header) page.tr(headers) page.thead.close() for row in dataset: new_row = [item if item is not None else '' for item in row] html_row = markup.oneliner.td(new_row) page.tr(html_row) page.table.close() # Allow unicode characters in output wrapper = codecs.getwriter("utf8")(stream) wrapper.writelines(unicode(page)) return stream.getvalue().decode('utf-8') def export_book(databook): """HTML representation of a Databook.""" stream = StringIO() # Allow unicode characters in output wrapper = codecs.getwriter("utf8")(stream) for i, dset in enumerate(databook._datasets): title = (dset.title if dset.title else 'Set %s' % (i)) wrapper.write('<%s>%s\n' % (BOOK_ENDINGS, title, BOOK_ENDINGS)) wrapper.write(dset.html) wrapper.write('\n') return stream.getvalue().decode('utf-8') tablib-0.13.0/tablib/formats/_dbf.py0000644000175000017500000000514513440456503016557 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - DBF Support. """ import tempfile import struct import os from tablib.compat import StringIO from tablib.compat import dbfpy from tablib.compat import is_py3 if is_py3: from tablib.packages.dbfpy3 import dbf from tablib.packages.dbfpy3 import dbfnew from tablib.packages.dbfpy3 import record as dbfrecord import io else: from tablib.packages.dbfpy import dbf from tablib.packages.dbfpy import dbfnew from tablib.packages.dbfpy import record as dbfrecord title = 'dbf' extensions = ('csv',) DEFAULT_ENCODING = 'utf-8' def export_set(dataset): """Returns DBF representation of a Dataset""" new_dbf = dbfnew.dbf_new() temp_file, temp_uri = tempfile.mkstemp() # create the appropriate fields based on the contents of the first row first_row = dataset[0] for fieldname, field_value in zip(dataset.headers, first_row): if type(field_value) in [int, float]: new_dbf.add_field(fieldname, 'N', 10, 8) else: new_dbf.add_field(fieldname, 'C', 80) new_dbf.write(temp_uri) dbf_file = dbf.Dbf(temp_uri, readOnly=0) for row in dataset: record = dbfrecord.DbfRecord(dbf_file) for fieldname, field_value in zip(dataset.headers, row): record[fieldname] = field_value record.store() dbf_file.close() dbf_stream = open(temp_uri, 'rb') if is_py3: stream = io.BytesIO(dbf_stream.read()) else: stream = StringIO(dbf_stream.read()) dbf_stream.close() os.close(temp_file) os.remove(temp_uri) return stream.getvalue() def import_set(dset, in_stream, headers=True): """Returns a dataset from a DBF stream.""" dset.wipe() if is_py3: _dbf = dbf.Dbf(io.BytesIO(in_stream)) else: _dbf = dbf.Dbf(StringIO(in_stream)) dset.headers = _dbf.fieldNames for record in range(_dbf.recordCount): row = [_dbf[record][f] for f in _dbf.fieldNames] dset.append(row) def detect(stream): """Returns True if the given stream is valid DBF""" #_dbf = dbf.Table(StringIO(stream)) try: if is_py3: if type(stream) is not bytes: stream = bytes(stream, 'utf-8') _dbf = dbf.Dbf(io.BytesIO(stream), readOnly=True) else: _dbf = dbf.Dbf(StringIO(stream), readOnly=True) return True except 
(ValueError, struct.error): # When we try to open up a file that's not a DBF, dbfpy raises a # ValueError. # When unpacking a string argument with less than 8 chars, struct.error is # raised. return False tablib-0.13.0/tablib/formats/_csv.py0000644000175000017500000000213013440456503016606 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - *SV Support. """ from tablib.compat import csv, StringIO, unicode title = 'csv' extensions = ('csv',) DEFAULT_DELIMITER = unicode(',') def export_set(dataset, **kwargs): """Returns CSV representation of Dataset.""" stream = StringIO() kwargs.setdefault('delimiter', DEFAULT_DELIMITER) _csv = csv.writer(stream, **kwargs) for row in dataset._package(dicts=False): _csv.writerow(row) return stream.getvalue() def import_set(dset, in_stream, headers=True, **kwargs): """Returns dataset from CSV stream.""" dset.wipe() kwargs.setdefault('delimiter', DEFAULT_DELIMITER) rows = csv.reader(StringIO(in_stream), **kwargs) for i, row in enumerate(rows): if (i == 0) and (headers): dset.headers = row elif row: dset.append(row) def detect(stream, delimiter=DEFAULT_DELIMITER): """Returns True if given stream is valid CSV.""" try: csv.Sniffer().sniff(stream, delimiters=delimiter) return True except (csv.Error, TypeError): return False tablib-0.13.0/tablib/formats/_xlsx.py0000644000175000017500000000757713440456503017035 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - XLSX Support. """ import sys if sys.version_info[0] > 2: from io import BytesIO else: from cStringIO import StringIO as BytesIO import openpyxl import tablib Workbook = openpyxl.workbook.Workbook ExcelWriter = openpyxl.writer.excel.ExcelWriter get_column_letter = openpyxl.utils.get_column_letter from tablib.compat import unicode title = 'xlsx' extensions = ('xlsx',) def detect(stream): """Returns True if given stream is a readable excel file.""" try: openpyxl.reader.excel.load_workbook(stream) return True except openpyxl.shared.exc.InvalidFileException: pass def export_set(dataset, freeze_panes=True): """Returns XLSX representation of Dataset.""" wb = Workbook() ws = wb.worksheets[0] ws.title = dataset.title if dataset.title else 'Tablib Dataset' dset_sheet(dataset, ws, freeze_panes=freeze_panes) stream = BytesIO() wb.save(stream) return stream.getvalue() def export_book(databook, freeze_panes=True): """Returns XLSX representation of DataBook.""" wb = Workbook() for sheet in wb.worksheets: wb.remove(sheet) for i, dset in enumerate(databook._datasets): ws = wb.create_sheet() ws.title = dset.title if dset.title else 'Sheet%s' % (i) dset_sheet(dset, ws, freeze_panes=freeze_panes) stream = BytesIO() wb.save(stream) return stream.getvalue() def import_set(dset, in_stream, headers=True): """Returns databook from XLS stream.""" dset.wipe() xls_book = openpyxl.reader.excel.load_workbook(BytesIO(in_stream)) sheet = xls_book.active dset.title = sheet.title for i, row in enumerate(sheet.rows): row_vals = [c.value for c in row] if (i == 0) and (headers): dset.headers = row_vals else: dset.append(row_vals) def import_book(dbook, in_stream, headers=True): """Returns databook from XLS stream.""" dbook.wipe() xls_book = openpyxl.reader.excel.load_workbook(BytesIO(in_stream)) for sheet in xls_book.worksheets: data = tablib.Dataset() data.title = sheet.title for i, row in enumerate(sheet.rows): row_vals = [c.value for c in row] if (i == 0) and (headers): data.headers = row_vals else: data.append(row_vals) dbook.add_sheet(data) def dset_sheet(dataset, ws, freeze_panes=True): """Completes given worksheet from given 
Dataset.""" _package = dataset._package(dicts=False) for i, sep in enumerate(dataset._separators): _offset = i _package.insert((sep[0] + _offset), (sep[1],)) bold = openpyxl.styles.Font(bold=True) wrap_text = openpyxl.styles.Alignment(wrap_text=True) for i, row in enumerate(_package): row_number = i + 1 for j, col in enumerate(row): col_idx = get_column_letter(j + 1) cell = ws['%s%s' % (col_idx, row_number)] # bold headers if (row_number == 1) and dataset.headers: # cell.value = unicode('%s' % col, errors='ignore') cell.value = unicode(col) cell.font = bold if freeze_panes: # Export Freeze only after first Line ws.freeze_panes = 'A2' # bold separators elif len(row) < dataset.width: cell.value = unicode('%s' % col, errors='ignore') cell.font = bold # wrap the rest else: try: if '\n' in col: cell.value = unicode('%s' % col, errors='ignore') cell.alignment = wrap_text else: cell.value = unicode('%s' % col, errors='ignore') except TypeError: cell.value = unicode(col) tablib-0.13.0/tablib/formats/_json.py0000644000175000017500000000235413440456503016774 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - JSON Support """ import decimal import json from uuid import UUID import tablib title = 'json' extensions = ('json', 'jsn') def serialize_objects_handler(obj): if isinstance(obj, (decimal.Decimal, UUID)): return str(obj) elif hasattr(obj, 'isoformat'): return obj.isoformat() else: return obj def export_set(dataset): """Returns JSON representation of Dataset.""" return json.dumps(dataset.dict, default=serialize_objects_handler) def export_book(databook): """Returns JSON representation of Databook.""" return json.dumps(databook._package(), default=serialize_objects_handler) def import_set(dset, in_stream): """Returns dataset from JSON stream.""" dset.wipe() dset.dict = json.loads(in_stream) def import_book(dbook, in_stream): """Returns databook from JSON stream.""" dbook.wipe() for sheet in json.loads(in_stream): data = tablib.Dataset() data.title = sheet['title'] data.dict = sheet['data'] dbook.add_sheet(data) def detect(stream): """Returns True if given stream is valid JSON.""" try: json.loads(stream) return True except ValueError: return False tablib-0.13.0/tablib/formats/__init__.py0000644000175000017500000000074613440456503017426 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - formats """ from . import _csv as csv from . import _json as json from . import _xls as xls from . import _yaml as yaml from . import _tsv as tsv from . import _html as html from . import _xlsx as xlsx from . import _ods as ods from . import _dbf as dbf from . import _latex as latex from . import _df as df from . import _rst as rst from . import _jira as jira available = (json, xls, yaml, csv, dbf, tsv, html, jira, latex, xlsx, ods, df, rst) tablib-0.13.0/tablib/formats/_xls.py0000644000175000017500000000635313440456503016634 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - XLS Support. 
""" import sys from tablib.compat import BytesIO, xrange import tablib import xlrd import xlwt from xlrd.biffh import XLRDError title = 'xls' extensions = ('xls',) # special styles wrap = xlwt.easyxf("alignment: wrap on") bold = xlwt.easyxf("font: bold on") def detect(stream): """Returns True if given stream is a readable excel file.""" try: xlrd.open_workbook(file_contents=stream) return True except (TypeError, XLRDError): pass try: xlrd.open_workbook(file_contents=stream.read()) return True except (AttributeError, XLRDError): pass try: xlrd.open_workbook(filename=stream) return True except: return False def export_set(dataset): """Returns XLS representation of Dataset.""" wb = xlwt.Workbook(encoding='utf8') ws = wb.add_sheet(dataset.title if dataset.title else 'Tablib Dataset') dset_sheet(dataset, ws) stream = BytesIO() wb.save(stream) return stream.getvalue() def export_book(databook): """Returns XLS representation of DataBook.""" wb = xlwt.Workbook(encoding='utf8') for i, dset in enumerate(databook._datasets): ws = wb.add_sheet(dset.title if dset.title else 'Sheet%s' % (i)) dset_sheet(dset, ws) stream = BytesIO() wb.save(stream) return stream.getvalue() def import_set(dset, in_stream, headers=True): """Returns databook from XLS stream.""" dset.wipe() xls_book = xlrd.open_workbook(file_contents=in_stream) sheet = xls_book.sheet_by_index(0) dset.title = sheet.name for i in xrange(sheet.nrows): if (i == 0) and (headers): dset.headers = sheet.row_values(0) else: dset.append(sheet.row_values(i)) def import_book(dbook, in_stream, headers=True): """Returns databook from XLS stream.""" dbook.wipe() xls_book = xlrd.open_workbook(file_contents=in_stream) for sheet in xls_book.sheets(): data = tablib.Dataset() data.title = sheet.name for i in xrange(sheet.nrows): if (i == 0) and (headers): data.headers = sheet.row_values(0) else: data.append(sheet.row_values(i)) dbook.add_sheet(data) def dset_sheet(dataset, ws): """Completes given worksheet from given Dataset.""" _package = dataset._package(dicts=False) for i, sep in enumerate(dataset._separators): _offset = i _package.insert((sep[0] + _offset), (sep[1],)) for i, row in enumerate(_package): for j, col in enumerate(row): # bold headers if (i == 0) and dataset.headers: ws.write(i, j, col, bold) # frozen header row ws.panes_frozen = True ws.horz_split_pos = 1 # bold separators elif len(row) < dataset.width: ws.write(i, j, col, bold) # wrap the rest else: try: if '\n' in col: ws.write(i, j, col, wrap) else: ws.write(i, j, col) except TypeError: ws.write(i, j, col) tablib-0.13.0/tablib/formats/_yaml.py0000644000175000017500000000223513440456503016763 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - YAML Support. 
""" import tablib import yaml title = 'yaml' extensions = ('yaml', 'yml') def export_set(dataset): """Returns YAML representation of Dataset.""" return yaml.safe_dump(dataset._package(ordered=False)) def export_book(databook): """Returns YAML representation of Databook.""" return yaml.safe_dump(databook._package(ordered=False)) def import_set(dset, in_stream): """Returns dataset from YAML stream.""" dset.wipe() dset.dict = yaml.safe_load(in_stream) def import_book(dbook, in_stream): """Returns databook from YAML stream.""" dbook.wipe() for sheet in yaml.safe_load(in_stream): data = tablib.Dataset() data.title = sheet['title'] data.dict = sheet['data'] dbook.add_sheet(data) def detect(stream): """Returns True if given stream is valid YAML.""" try: _yaml = yaml.safe_load(stream) if isinstance(_yaml, (list, tuple, dict)): return True else: return False except (yaml.parser.ParserError, yaml.reader.ReaderError, yaml.scanner.ScannerError): return False tablib-0.13.0/tablib/formats/_latex.py0000644000175000017500000000717413440456503017145 0ustar josephjoseph# -*- coding: utf-8 -*- """Tablib - LaTeX table export support. Generates a LaTeX booktabs-style table from the dataset. """ import re from tablib.compat import unicode title = 'latex' extensions = ('tex',) TABLE_TEMPLATE = """\ %% Note: add \\usepackage{booktabs} to your preamble %% \\begin{table}[!htbp] \\centering %(CAPTION)s \\begin{tabular}{%(COLSPEC)s} \\toprule %(HEADER)s %(MIDRULE)s %(BODY)s \\bottomrule \\end{tabular} \\end{table} """ TEX_RESERVED_SYMBOLS_MAP = dict([ ('\\', '\\textbackslash{}'), ('{', '\\{'), ('}', '\\}'), ('$', '\\$'), ('&', '\\&'), ('#', '\\#'), ('^', '\\textasciicircum{}'), ('_', '\\_'), ('~', '\\textasciitilde{}'), ('%', '\\%'), ]) TEX_RESERVED_SYMBOLS_RE = re.compile( '(%s)' % '|'.join(map(re.escape, TEX_RESERVED_SYMBOLS_MAP.keys()))) def export_set(dataset): """Returns LaTeX representation of dataset :param dataset: dataset to serialize :type dataset: tablib.core.Dataset """ caption = '\\caption{%s}' % dataset.title if dataset.title else '%' colspec = _colspec(dataset.width) header = _serialize_row(dataset.headers) if dataset.headers else '' midrule = _midrule(dataset.width) body = '\n'.join([_serialize_row(row) for row in dataset]) return TABLE_TEMPLATE % dict(CAPTION=caption, COLSPEC=colspec, HEADER=header, MIDRULE=midrule, BODY=body) def _colspec(dataset_width): """Generates the column specification for the LaTeX `tabular` environment based on the dataset width. The first column is justified to the left, all further columns are aligned to the right. .. note:: This is only a heuristic and most probably has to be fine-tuned post export. Column alignment should depend on the data type, e.g., textual content should usually be aligned to the left while numeric content almost always should be aligned to the right. :param dataset_width: width of the dataset """ spec = 'l' for _ in range(1, dataset_width): spec += 'r' return spec def _midrule(dataset_width): """Generates the table `midrule`, which may be composed of several `cmidrules`. :param dataset_width: width of the dataset to serialize """ if not dataset_width or dataset_width == 1: return '\\midrule' return ' '.join([_cmidrule(colindex, dataset_width) for colindex in range(1, dataset_width + 1)]) def _cmidrule(colindex, dataset_width): """Generates the `cmidrule` for a single column with appropriate trimming based on the column position. 
:param colindex: Column index :param dataset_width: width of the dataset """ rule = '\\cmidrule(%s){%d-%d}' if colindex == 1: # Rule of first column is trimmed on the right return rule % ('r', colindex, colindex) if colindex == dataset_width: # Rule of last column is trimmed on the left return rule % ('l', colindex, colindex) # Inner columns are trimmed on the left and right return rule % ('lr', colindex, colindex) def _serialize_row(row): """Returns string representation of a single row. :param row: single dataset row """ new_row = [_escape_tex_reserved_symbols(unicode(item)) if item else '' for item in row] return 6 * ' ' + ' & '.join(new_row) + ' \\\\' def _escape_tex_reserved_symbols(input): """Escapes all TeX reserved symbols ('_', '~', etc.) in a string. :param input: String to escape """ def replace(match): return TEX_RESERVED_SYMBOLS_MAP[match.group()] return TEX_RESERVED_SYMBOLS_RE.sub(replace, input) tablib-0.13.0/tablib/formats/_rst.py0000644000175000017500000001775013440456503016641 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - reStructuredText Support """ from __future__ import division from __future__ import print_function from __future__ import unicode_literals from textwrap import TextWrapper from tablib.compat import ( median, unicode, izip_longest, ) title = 'rst' extensions = ('rst',) MAX_TABLE_WIDTH = 80 # Roughly. It may be wider to avoid breaking words. JUSTIFY_LEFT = 'left' JUSTIFY_CENTER = 'center' JUSTIFY_RIGHT = 'right' JUSTIFY_VALUES = (JUSTIFY_LEFT, JUSTIFY_CENTER, JUSTIFY_RIGHT) def to_unicode(value): if isinstance(value, bytes): return value.decode('utf-8') return unicode(value) def _max_word_len(text): """ Return the length of the longest word in `text`. >>> _max_word_len('Python Module for Tabular Datasets') 8 """ return max((len(word) for word in text.split())) def _get_column_string_lengths(dataset): """ Returns a list of string lengths of each column, and a list of maximum word lengths. 
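
    An illustrative doctest (mirroring the style of ``_max_word_len``
    above):

    >>> from tablib import Dataset
    >>> data = Dataset()
    >>> data.headers = ['ab', 'c']
    >>> data.append(['x', 'hello world'])
    >>> _get_column_string_lengths(data)
    ([[2, 1], [1, 11]], [2, 5])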
""" if dataset.headers: column_lengths = [[len(h)] for h in dataset.headers] word_lens = [_max_word_len(h) for h in dataset.headers] else: column_lengths = [[] for _ in range(dataset.width)] word_lens = [0 for _ in range(dataset.width)] for row in dataset.dict: values = iter(row.values() if hasattr(row, 'values') else row) for i, val in enumerate(values): text = to_unicode(val) column_lengths[i].append(len(text)) word_lens[i] = max(word_lens[i], _max_word_len(text)) return column_lengths, word_lens def _row_to_lines(values, widths, wrapper, sep='|', justify=JUSTIFY_LEFT): """ Returns a table row of wrapped values as a list of lines """ if justify not in JUSTIFY_VALUES: raise ValueError('Value of "justify" must be one of "{}"'.format( '", "'.join(JUSTIFY_VALUES) )) if justify == JUSTIFY_LEFT: just = lambda text, width: text.ljust(width) elif justify == JUSTIFY_CENTER: just = lambda text, width: text.center(width) else: just = lambda text, width: text.rjust(width) lpad = sep + ' ' if sep else '' rpad = ' ' + sep if sep else '' pad = ' ' + sep + ' ' cells = [] for value, width in zip(values, widths): wrapper.width = width text = to_unicode(value) cell = wrapper.wrap(text) cells.append(cell) lines = izip_longest(*cells, fillvalue='') lines = ( (just(cell_line, widths[i]) for i, cell_line in enumerate(line)) for line in lines ) lines = [''.join((lpad, pad.join(line), rpad)) for line in lines] return lines def _get_column_widths(dataset, max_table_width=MAX_TABLE_WIDTH, pad_len=3): """ Returns a list of column widths proportional to the median length of the text in their cells. """ str_lens, word_lens = _get_column_string_lengths(dataset) median_lens = [int(median(lens)) for lens in str_lens] total = sum(median_lens) if total > max_table_width - (pad_len * len(median_lens)): column_widths = (max_table_width * l // total for l in median_lens) else: column_widths = (l for l in median_lens) # Allow for separator and padding: column_widths = (w - pad_len if w > pad_len else w for w in column_widths) # Rather widen table than break words: column_widths = [max(w, l) for w, l in zip(column_widths, word_lens)] return column_widths def export_set_as_simple_table(dataset, column_widths=None): """ Returns reStructuredText grid table representation of dataset. """ lines = [] wrapper = TextWrapper() if column_widths is None: column_widths = _get_column_widths(dataset, pad_len=2) border = ' '.join(['=' * w for w in column_widths]) lines.append(border) if dataset.headers: lines.extend(_row_to_lines( dataset.headers, column_widths, wrapper, sep='', justify=JUSTIFY_CENTER, )) lines.append(border) for row in dataset.dict: values = iter(row.values() if hasattr(row, 'values') else row) lines.extend(_row_to_lines(values, column_widths, wrapper, '')) lines.append(border) return '\n'.join(lines) def export_set_as_grid_table(dataset, column_widths=None): """ Returns reStructuredText grid table representation of dataset. >>> from tablib import Dataset >>> from tablib.formats import rst >>> bits = ((0, 0), (1, 0), (0, 1), (1, 1)) >>> data = Dataset() >>> data.headers = ['A', 'B', 'A and B'] >>> for a, b in bits: ... 
data.append([bool(a), bool(b), bool(a * b)]) >>> print(rst.export_set(data, force_grid=True)) +-------+-------+-------+ | A | B | A and | | | | B | +=======+=======+=======+ | False | False | False | +-------+-------+-------+ | True | False | False | +-------+-------+-------+ | False | True | False | +-------+-------+-------+ | True | True | True | +-------+-------+-------+ """ lines = [] wrapper = TextWrapper() if column_widths is None: column_widths = _get_column_widths(dataset) header_sep = '+=' + '=+='.join(['=' * w for w in column_widths]) + '=+' row_sep = '+-' + '-+-'.join(['-' * w for w in column_widths]) + '-+' lines.append(row_sep) if dataset.headers: lines.extend(_row_to_lines( dataset.headers, column_widths, wrapper, justify=JUSTIFY_CENTER, )) lines.append(header_sep) for row in dataset.dict: values = iter(row.values() if hasattr(row, 'values') else row) lines.extend(_row_to_lines(values, column_widths, wrapper)) lines.append(row_sep) return '\n'.join(lines) def _use_simple_table(head0, col0, width0): """ Use a simple table if the text in the first column is never wrapped >>> _use_simple_table('menu', ['egg', 'bacon'], 10) True >>> _use_simple_table(None, ['lobster thermidor', 'spam'], 10) False """ if head0 is not None: head0 = to_unicode(head0) if len(head0) > width0: return False for cell in col0: cell = to_unicode(cell) if len(cell) > width0: return False return True def export_set(dataset, **kwargs): """ Returns reStructuredText table representation of dataset. Returns a simple table if the text in the first column is never wrapped, otherwise returns a grid table. >>> from tablib import Dataset >>> bits = ((0, 0), (1, 0), (0, 1), (1, 1)) >>> data = Dataset() >>> data.headers = ['A', 'B', 'A and B'] >>> for a, b in bits: ... data.append([bool(a), bool(b), bool(a * b)]) >>> table = data.rst >>> table.split('\\n') == [ ... '===== ===== =====', ... ' A B A and', ... ' B ', ... '===== ===== =====', ... 'False False False', ... 'True False False', ... 'False True False', ... 'True True True ', ... '===== ===== =====', ... ] True """ if not dataset.dict: return '' force_grid = kwargs.get('force_grid', False) max_table_width = kwargs.get('max_table_width', MAX_TABLE_WIDTH) column_widths = _get_column_widths(dataset, max_table_width) use_simple_table = _use_simple_table( dataset.headers[0] if dataset.headers else None, dataset.get_col(0), column_widths[0], ) if use_simple_table and not force_grid: return export_set_as_simple_table(dataset, column_widths) else: return export_set_as_grid_table(dataset, column_widths) def export_book(databook): """ reStructuredText representation of a Databook. Tables are separated by a blank line. All tables use the grid format. """ return '\n\n'.join(export_set(dataset, force_grid=True) for dataset in databook._datasets) tablib-0.13.0/tablib/formats/_ods.py0000644000175000017500000000561613440456503016614 0ustar josephjoseph# -*- coding: utf-8 -*- """ Tablib - ODF Support. 
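
Like the XLS format, ODS output is binary; an illustrative sketch::

    with open('output.ods', 'wb') as f:
        f.write(data.export('ods'))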
""" from odf import opendocument, style, table, text from tablib.compat import BytesIO, unicode title = 'ods' extensions = ('ods',) bold = style.Style(name="bold", family="paragraph") bold.addElement(style.TextProperties(fontweight="bold", fontweightasian="bold", fontweightcomplex="bold")) def export_set(dataset): """Returns ODF representation of Dataset.""" wb = opendocument.OpenDocumentSpreadsheet() wb.automaticstyles.addElement(bold) ws = table.Table(name=dataset.title if dataset.title else 'Tablib Dataset') wb.spreadsheet.addElement(ws) dset_sheet(dataset, ws) stream = BytesIO() wb.save(stream) return stream.getvalue() def export_book(databook): """Returns ODF representation of DataBook.""" wb = opendocument.OpenDocumentSpreadsheet() wb.automaticstyles.addElement(bold) for i, dset in enumerate(databook._datasets): ws = table.Table(name=dset.title if dset.title else 'Sheet%s' % (i)) wb.spreadsheet.addElement(ws) dset_sheet(dset, ws) stream = BytesIO() wb.save(stream) return stream.getvalue() def dset_sheet(dataset, ws): """Completes given worksheet from given Dataset.""" _package = dataset._package(dicts=False) for i, sep in enumerate(dataset._separators): _offset = i _package.insert((sep[0] + _offset), (sep[1],)) for i, row in enumerate(_package): row_number = i + 1 odf_row = table.TableRow(stylename=bold, defaultcellstylename='bold') for j, col in enumerate(row): try: col = unicode(col, errors='ignore') except TypeError: ## col is already unicode pass ws.addElement(table.TableColumn()) # bold headers if (row_number == 1) and dataset.headers: odf_row.setAttribute('stylename', bold) ws.addElement(odf_row) cell = table.TableCell() p = text.P() p.addElement(text.Span(text=col, stylename=bold)) cell.addElement(p) odf_row.addElement(cell) # wrap the rest else: try: if '\n' in col: ws.addElement(odf_row) cell = table.TableCell() cell.addElement(text.P(text=col)) odf_row.addElement(cell) else: ws.addElement(odf_row) cell = table.TableCell() cell.addElement(text.P(text=col)) odf_row.addElement(cell) except TypeError: ws.addElement(odf_row) cell = table.TableCell() cell.addElement(text.P(text=col)) odf_row.addElement(cell) tablib-0.13.0/tablib/formats/_jira.py0000644000175000017500000000170113440456503016743 0ustar josephjoseph# -*- coding: utf-8 -*- """Tablib - Jira table export support. Generates a Jira table from the dataset. """ from tablib.compat import unicode title = 'jira' def export_set(dataset): """Formats the dataset according to the Jira table syntax: ||heading 1||heading 2||heading 3|| |col A1|col A2|col A3| |col B1|col B2|col B3| :param dataset: dataset to serialize :type dataset: tablib.core.Dataset """ header = _get_header(dataset.headers) if dataset.headers else '' body = _get_body(dataset) return '%s\n%s' % (header, body) if header else body def _get_body(dataset): return '\n'.join([_serialize_row(row) for row in dataset]) def _get_header(headers): return _serialize_row(headers, delimiter='||') def _serialize_row(row, delimiter='|'): return '%s%s%s' % (delimiter, delimiter.join([unicode(item) if item else ' ' for item in row]), delimiter) tablib-0.13.0/tablib/core.py0000644000175000017500000010367513440456503015151 0ustar josephjoseph# -*- coding: utf-8 -*- """ tablib.core ~~~~~~~~~~~ This module implements the central Tablib objects. :copyright: (c) 2016 by Kenneth Reitz. :license: MIT, see LICENSE for more details. 
""" from collections import OrderedDict from copy import copy from operator import itemgetter from tablib import formats from tablib.compat import unicode __title__ = 'tablib' __version__ = '0.13.0' __build__ = 0x001201 __author__ = 'Kenneth Reitz' __license__ = 'MIT' __copyright__ = 'Copyright 2017 Kenneth Reitz' __docformat__ = 'restructuredtext' class Row(object): """Internal Row object. Mainly used for filtering.""" __slots__ = ['_row', 'tags'] def __init__(self, row=list(), tags=list()): self._row = list(row) self.tags = list(tags) def __iter__(self): return (col for col in self._row) def __len__(self): return len(self._row) def __repr__(self): return repr(self._row) def __getslice__(self, i, j): return self._row[i:j] def __getitem__(self, i): return self._row[i] def __setitem__(self, i, value): self._row[i] = value def __delitem__(self, i): del self._row[i] def __getstate__(self): slots = dict() for slot in self.__slots__: attribute = getattr(self, slot) slots[slot] = attribute return slots def __setstate__(self, state): for (k, v) in list(state.items()): setattr(self, k, v) def rpush(self, value): self.insert(0, value) def lpush(self, value): self.insert(len(value), value) def append(self, value): self.rpush(value) def insert(self, index, value): self._row.insert(index, value) def __contains__(self, item): return (item in self._row) @property def tuple(self): """Tuple representation of :class:`Row`.""" return tuple(self._row) @property def list(self): """List representation of :class:`Row`.""" return list(self._row) def has_tag(self, tag): """Returns true if current row contains tag.""" if tag == None: return False elif isinstance(tag, str): return (tag in self.tags) else: return bool(len(set(tag) & set(self.tags))) class Dataset(object): """The :class:`Dataset` object is the heart of Tablib. It provides all core functionality. Usually you create a :class:`Dataset` instance in your main module, and append rows as you collect data. :: data = tablib.Dataset() data.headers = ('name', 'age') for (name, age) in some_collector(): data.append((name, age)) Setting columns is similar. The column data length must equal the current height of the data and headers must be set :: data = tablib.Dataset() data.headers = ('first_name', 'last_name') data.append(('John', 'Adams')) data.append(('George', 'Washington')) data.append_col((90, 67), header='age') You can also set rows and headers upon instantiation. This is useful if dealing with dozens or hundreds of :class:`Dataset` objects. :: headers = ('first_name', 'last_name') data = [('John', 'Adams'), ('George', 'Washington')] data = tablib.Dataset(*data, headers=headers) :param \*args: (optional) list of rows to populate Dataset :param headers: (optional) list strings for Dataset header row :param title: (optional) string to use as title of the Dataset .. admonition:: Format Attributes Definition If you look at the code, the various output/import formats are not defined within the :class:`Dataset` object. To add support for a new format, see :ref:`Adding New Formats `. 
""" _formats = {} def __init__(self, *args, **kwargs): self._data = list(Row(arg) for arg in args) self.__headers = None # ('title', index) tuples self._separators = [] # (column, callback) tuples self._formatters = [] self.headers = kwargs.get('headers') self.title = kwargs.get('title') self._register_formats() def __len__(self): return self.height def __getitem__(self, key): if isinstance(key, (str, unicode)): if key in self.headers: pos = self.headers.index(key) # get 'key' index from each data return [row[pos] for row in self._data] else: raise KeyError else: _results = self._data[key] if isinstance(_results, Row): return _results.tuple else: return [result.tuple for result in _results] def __setitem__(self, key, value): self._validate(value) self._data[key] = Row(value) def __delitem__(self, key): if isinstance(key, (str, unicode)): if key in self.headers: pos = self.headers.index(key) del self.headers[pos] for i, row in enumerate(self._data): del row[pos] self._data[i] = row else: raise KeyError else: del self._data[key] def __repr__(self): try: return '<%s dataset>' % (self.title.lower()) except AttributeError: return '' def __unicode__(self): result = [] # Add unicode representation of headers. if self.__headers: result.append([unicode(h) for h in self.__headers]) # Add unicode representation of rows. result.extend(list(map(unicode, row)) for row in self._data) lens = [list(map(len, row)) for row in result] field_lens = list(map(max, zip(*lens))) # delimiter between header and data if self.__headers: result.insert(1, ['-' * length for length in field_lens]) format_string = '|'.join('{%s:%s}' % item for item in enumerate(field_lens)) return '\n'.join(format_string.format(*row) for row in result) def __str__(self): return self.__unicode__() # --------- # Internals # --------- @classmethod def _register_formats(cls): """Adds format properties.""" for fmt in formats.available: try: try: setattr(cls, fmt.title, property(fmt.export_set, fmt.import_set)) setattr(cls, 'get_%s' % fmt.title, fmt.export_set) setattr(cls, 'set_%s' % fmt.title, fmt.import_set) cls._formats[fmt.title] = (fmt.export_set, fmt.import_set) except AttributeError: setattr(cls, fmt.title, property(fmt.export_set)) setattr(cls, 'get_%s' % fmt.title, fmt.export_set) cls._formats[fmt.title] = (fmt.export_set, None) except AttributeError: cls._formats[fmt.title] = (None, None) def _validate(self, row=None, col=None, safety=False): """Assures size of every row in dataset is of proper proportions.""" if row: is_valid = (len(row) == self.width) if self.width else True elif col: if len(col) < 1: is_valid = True else: is_valid = (len(col) == self.height) if self.height else True else: is_valid = all((len(x) == self.width for x in self._data)) if is_valid: return True else: if not safety: raise InvalidDimensions return False def _package(self, dicts=True, ordered=True): """Packages Dataset into lists of dictionaries for transmission.""" # TODO: Dicts default to false? 
_data = list(self._data) if ordered: dict_pack = OrderedDict else: dict_pack = dict # Execute formatters if self._formatters: for row_i, row in enumerate(_data): for col, callback in self._formatters: try: if col is None: for j, c in enumerate(row): _data[row_i][j] = callback(c) else: _data[row_i][col] = callback(row[col]) except IndexError: raise InvalidDatasetIndex if self.headers: if dicts: data = [dict_pack(list(zip(self.headers, data_row))) for data_row in _data] else: data = [list(self.headers)] + list(_data) else: data = [list(row) for row in _data] return data def _get_headers(self): """An *optional* list of strings to be used for header rows and attribute names. This must be set manually. The given list length must equal :class:`Dataset.width`. """ return self.__headers def _set_headers(self, collection): """Validating headers setter.""" self._validate(collection) if collection: try: self.__headers = list(collection) except TypeError: raise TypeError else: self.__headers = None headers = property(_get_headers, _set_headers) def _get_dict(self): """A native Python representation of the :class:`Dataset` object. If headers have been set, a list of Python dictionaries will be returned. If no headers have been set, a list of tuples (rows) will be returned instead. A dataset object can also be imported by setting the `Dataset.dict` attribute: :: data = tablib.Dataset() data.dict = [{'age': 90, 'first_name': 'Kenneth', 'last_name': 'Reitz'}] """ return self._package() def _set_dict(self, pickle): """A native Python representation of the Dataset object. If headers have been set, a list of Python dictionaries will be returned. If no headers have been set, a list of tuples (rows) will be returned instead. A dataset object can also be imported by setting the :class:`Dataset.dict` attribute. :: data = tablib.Dataset() data.dict = [{'age': 90, 'first_name': 'Kenneth', 'last_name': 'Reitz'}] """ if not len(pickle): return # if list of rows if isinstance(pickle[0], list): self.wipe() for row in pickle: self.append(Row(row)) # if list of objects elif isinstance(pickle[0], dict): self.wipe() self.headers = list(pickle[0].keys()) for row in pickle: self.append(Row(list(row.values()))) else: raise UnsupportedFormat dict = property(_get_dict, _set_dict) def _clean_col(self, col): """Prepares the given column for insert/append.""" col = list(col) if self.headers: header = [col.pop(0)] else: header = [] if len(col) == 1 and hasattr(col[0], '__call__'): col = list(map(col[0], self._data)) col = tuple(header + col) return col @property def height(self): """The number of rows currently in the :class:`Dataset`. Cannot be directly modified. """ return len(self._data) @property def width(self): """The number of columns currently in the :class:`Dataset`. Cannot be directly modified. """ try: return len(self._data[0]) except IndexError: try: return len(self.headers) except TypeError: return 0 def load(self, in_stream, format=None, **kwargs): """ Import `in_stream` to the :class:`Dataset` object using the `format`. :param \*\*kwargs: (optional) custom configuration to the format `import_set`. """ if not format: format = detect_format(in_stream) export_set, import_set = self._formats.get(format, (None, None)) if not import_set: raise UnsupportedFormat('Format {0} cannot be imported.'.format(format)) import_set(self, in_stream, **kwargs) return self def export(self, format, **kwargs): """ Export :class:`Dataset` object to `format`. :param \*\*kwargs: (optional) custom configuration to the format `export_set`. 
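
        For example (illustrative; which keyword arguments are honoured
        varies by format -- the CSV exporter, for instance, forwards them
        to ``csv.writer``)::

            data.export('csv', delimiter=';')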
""" export_set, import_set = self._formats.get(format, (None, None)) if not export_set: raise UnsupportedFormat('Format {0} cannot be exported.'.format(format)) return export_set(self, **kwargs) # ------- # Formats # ------- @property def xls(): """A Legacy Excel Spreadsheet representation of the :class:`Dataset` object, with :ref:`separators`. Cannot be set. .. note:: XLS files are limited to a maximum of 65,000 rows. Use :class:`Dataset.xlsx` to avoid this limitation. .. admonition:: Binary Warning :class:`Dataset.xls` contains binary data, so make sure to write in binary mode:: with open('output.xls', 'wb') as f: f.write(data.xls) """ pass @property def xlsx(): """An Excel '07+ Spreadsheet representation of the :class:`Dataset` object, with :ref:`separators`. Cannot be set. .. admonition:: Binary Warning :class:`Dataset.xlsx` contains binary data, so make sure to write in binary mode:: with open('output.xlsx', 'wb') as f: f.write(data.xlsx) """ pass @property def ods(): """An OpenDocument Spreadsheet representation of the :class:`Dataset` object, with :ref:`separators`. Cannot be set. .. admonition:: Binary Warning :class:`Dataset.ods` contains binary data, so make sure to write in binary mode:: with open('output.ods', 'wb') as f: f.write(data.ods) """ pass @property def csv(): """A CSV representation of the :class:`Dataset` object. The top row will contain headers, if they have been set. Otherwise, the top row will contain the first row of the dataset. A dataset object can also be imported by setting the :class:`Dataset.csv` attribute. :: data = tablib.Dataset() data.csv = 'age, first_name, last_name\\n90, John, Adams' Import assumes (for now) that headers exist. .. admonition:: Binary Warning for Python 2 :class:`Dataset.csv` uses \\r\\n line endings by default so, in Python 2, make sure to write in binary mode:: with open('output.csv', 'wb') as f: f.write(data.csv) If you do not do this, and you export the file on Windows, your CSV file will open in Excel with a blank line between each row. .. admonition:: Line endings for Python 3 :class:`Dataset.csv` uses \\r\\n line endings by default so, in Python 3, make sure to include newline='' otherwise you will get a blank line between each row when you open the file in Excel:: with open('output.csv', 'w', newline='') as f: f.write(data.csv) If you do not do this, and you export the file on Windows, your CSV file will open in Excel with a blank line between each row. """ pass @property def tsv(): """A TSV representation of the :class:`Dataset` object. The top row will contain headers, if they have been set. Otherwise, the top row will contain the first row of the dataset. A dataset object can also be imported by setting the :class:`Dataset.tsv` attribute. :: data = tablib.Dataset() data.tsv = 'age\tfirst_name\tlast_name\\n90\tJohn\tAdams' Import assumes (for now) that headers exist. """ pass @property def yaml(): """A YAML representation of the :class:`Dataset` object. If headers have been set, a YAML list of objects will be returned. If no headers have been set, a YAML list of lists (rows) will be returned instead. A dataset object can also be imported by setting the :class:`Dataset.yaml` attribute: :: data = tablib.Dataset() data.yaml = '- {age: 90, first_name: John, last_name: Adams}' Import assumes (for now) that headers exist. """ pass @property def df(): """A DataFrame representation of the :class:`Dataset` object. 
A dataset object can also be imported by setting the :class:`Dataset.df` attribute: :: data = tablib.Dataset() data.df = DataFrame(np.random.randn(6,4)) Import assumes (for now) that headers exist. """ pass @property def json(): """A JSON representation of the :class:`Dataset` object. If headers have been set, a JSON list of objects will be returned. If no headers have been set, a JSON list of lists (rows) will be returned instead. A dataset object can also be imported by setting the :class:`Dataset.json` attribute: :: data = tablib.Dataset() data.json = '[{"age": 90, "first_name": "John", "last_name": "Adams"}]' Import assumes (for now) that headers exist. """ pass @property def html(): """A HTML table representation of the :class:`Dataset` object. If headers have been set, they will be used as table headers. ..notice:: This method can be used for export only. """ pass @property def dbf(): """A dBASE representation of the :class:`Dataset` object. A dataset object can also be imported by setting the :class:`Dataset.dbf` attribute. :: # To import data from an existing DBF file: data = tablib.Dataset() data.dbf = open('existing_table.dbf').read() # to import data from an ASCII-encoded bytestring: data = tablib.Dataset() data.dbf = '' .. admonition:: Binary Warning :class:`Dataset.dbf` contains binary data, so make sure to write in binary mode:: with open('output.dbf', 'wb') as f: f.write(data.dbf) """ pass @property def latex(): """A LaTeX booktabs representation of the :class:`Dataset` object. If a title has been set, it will be exported as the table caption. .. note:: This method can be used for export only. """ pass @property def jira(): """A Jira table representation of the :class:`Dataset` object. .. note:: This method can be used for export only. """ pass # ---- # Rows # ---- def insert(self, index, row, tags=list()): """Inserts a row to the :class:`Dataset` at the given index. Rows inserted must be the correct size (height or width). The default behaviour is to insert the given row to the :class:`Dataset` object at the given index. """ self._validate(row) self._data.insert(index, Row(row, tags=tags)) def rpush(self, row, tags=list()): """Adds a row to the end of the :class:`Dataset`. See :class:`Dataset.insert` for additional documentation. """ self.insert(self.height, row=row, tags=tags) def lpush(self, row, tags=list()): """Adds a row to the top of the :class:`Dataset`. See :class:`Dataset.insert` for additional documentation. """ self.insert(0, row=row, tags=tags) def append(self, row, tags=list()): """Adds a row to the :class:`Dataset`. See :class:`Dataset.insert` for additional documentation. """ self.rpush(row, tags) def extend(self, rows, tags=list()): """Adds a list of rows to the :class:`Dataset` using :class:`Dataset.append` """ for row in rows: self.append(row, tags) def lpop(self): """Removes and returns the first row of the :class:`Dataset`.""" cache = self[0] del self[0] return cache def rpop(self): """Removes and returns the last row of the :class:`Dataset`.""" cache = self[-1] del self[-1] return cache def pop(self): """Removes and returns the last row of the :class:`Dataset`.""" return self.rpop() # ------- # Columns # ------- def insert_col(self, index, col=None, header=None): """Inserts a column to the :class:`Dataset` at the given index. Columns inserted must be the correct height. You can also insert a column of a single callable object, which will add a new column with the return values of the callable each as an item in the column. 
:: data.append_col(col=random.randint) If inserting a column, and :class:`Dataset.headers` is set, the header attribute must be set, and will be considered the header for that row. See :ref:`dyncols` for an in-depth example. .. versionchanged:: 0.9.0 If inserting a column, and :class:`Dataset.headers` is set, the header attribute must be set, and will be considered the header for that row. .. versionadded:: 0.9.0 If inserting a row, you can add :ref:`tags ` to the row you are inserting. This gives you the ability to :class:`filter ` your :class:`Dataset` later. """ if col is None: col = [] # Callable Columns... if hasattr(col, '__call__'): col = list(map(col, self._data)) col = self._clean_col(col) self._validate(col=col) if self.headers: # pop the first item off, add to headers if not header: raise HeadersNeeded() # corner case - if header is set without data elif header and self.height == 0 and len(col): raise InvalidDimensions self.headers.insert(index, header) if self.height and self.width: for i, row in enumerate(self._data): row.insert(index, col[i]) self._data[i] = row else: self._data = [Row([row]) for row in col] def rpush_col(self, col, header=None): """Adds a column to the end of the :class:`Dataset`. See :class:`Dataset.insert` for additional documentation. """ self.insert_col(self.width, col, header=header) def lpush_col(self, col, header=None): """Adds a column to the top of the :class:`Dataset`. See :class:`Dataset.insert` for additional documentation. """ self.insert_col(0, col, header=header) def insert_separator(self, index, text='-'): """Adds a separator to :class:`Dataset` at given index.""" sep = (index, text) self._separators.append(sep) def append_separator(self, text='-'): """Adds a :ref:`separator ` to the :class:`Dataset`.""" # change offsets if headers are or aren't defined if not self.headers: index = self.height if self.height else 0 else: index = (self.height + 1) if self.height else 1 self.insert_separator(index, text) def append_col(self, col, header=None): """Adds a column to the :class:`Dataset`. See :class:`Dataset.insert_col` for additional documentation. """ self.rpush_col(col, header) def get_col(self, index): """Returns the column from the :class:`Dataset` at the given index.""" return [row[index] for row in self._data] # ---- # Misc # ---- def add_formatter(self, col, handler): """Adds a :ref:`formatter` to the :class:`Dataset`. .. versionadded:: 0.9.5 :param col: column to. Accepts index int or header str. :param handler: reference to callback function to execute against each cell value. """ if isinstance(col, unicode): if col in self.headers: col = self.headers.index(col) # get 'key' index from each data else: raise KeyError if not col > self.width: self._formatters.append((col, handler)) else: raise InvalidDatasetIndex return True def filter(self, tag): """Returns a new instance of the :class:`Dataset`, excluding any rows that do not contain the given :ref:`tags `. """ _dset = copy(self) _dset._data = [row for row in _dset._data if row.has_tag(tag)] return _dset def sort(self, col, reverse=False): """Sort a :class:`Dataset` by a specific column, given string (for header) or integer (for column index). The order can be reversed by setting ``reverse`` to ``True``. Returns a new :class:`Dataset` instance where columns have been sorted. 
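
        An illustrative sketch::

            data = tablib.Dataset(('Ford', 83), ('Adams', 90),
                                  headers=['name', 'age'])
            by_age = data.sort('age')
            oldest_first = data.sort('age', reverse=True)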
""" if isinstance(col, (str, unicode)): if not self.headers: raise HeadersNeeded _sorted = sorted(self.dict, key=itemgetter(col), reverse=reverse) _dset = Dataset(headers=self.headers, title=self.title) for item in _sorted: row = [item[key] for key in self.headers] _dset.append(row=row) else: if self.headers: col = self.headers[col] _sorted = sorted(self.dict, key=itemgetter(col), reverse=reverse) _dset = Dataset(headers=self.headers, title=self.title) for item in _sorted: if self.headers: row = [item[key] for key in self.headers] else: row = item _dset.append(row=row) return _dset def transpose(self): """Transpose a :class:`Dataset`, turning rows into columns and vice versa, returning a new ``Dataset`` instance. The first row of the original instance becomes the new header row.""" # Don't transpose if there is no data if not self: return _dset = Dataset() # The first element of the headers stays in the headers, # it is our "hinge" on which we rotate the data new_headers = [self.headers[0]] + self[self.headers[0]] _dset.headers = new_headers for index, column in enumerate(self.headers): if column == self.headers[0]: # It's in the headers, so skip it continue # Adding the column name as now they're a regular column # Use `get_col(index)` in case there are repeated values row_data = [column] + self.get_col(index) row_data = Row(row_data) _dset.append(row=row_data) return _dset def stack(self, other): """Stack two :class:`Dataset` instances together by joining at the row level, and return new combined ``Dataset`` instance.""" if not isinstance(other, Dataset): return if self.width != other.width: raise InvalidDimensions # Copy the source data _dset = copy(self) rows_to_stack = [row for row in _dset._data] other_rows = [row for row in other._data] rows_to_stack.extend(other_rows) _dset._data = rows_to_stack return _dset def stack_cols(self, other): """Stack two :class:`Dataset` instances together by joining at the column level, and return a new combined ``Dataset`` instance. If either ``Dataset`` has headers set, than the other must as well.""" if not isinstance(other, Dataset): return if self.headers or other.headers: if not self.headers or not other.headers: raise HeadersNeeded if self.height != other.height: raise InvalidDimensions try: new_headers = self.headers + other.headers except TypeError: new_headers = None _dset = Dataset() for column in self.headers: _dset.append_col(col=self[column]) for column in other.headers: _dset.append_col(col=other[column]) _dset.headers = new_headers return _dset def remove_duplicates(self): """Removes all duplicate rows from the :class:`Dataset` object while maintaining the original order.""" seen = set() self._data[:] = [row for row in self._data if not (tuple(row) in seen or seen.add(tuple(row)))] def wipe(self): """Removes all content and headers from the :class:`Dataset` object.""" self._data = list() self.__headers = None def subset(self, rows=None, cols=None): """Returns a new instance of the :class:`Dataset`, including only specified rows and columns. 
""" # Don't return if no data if not self: return if rows is None: rows = list(range(self.height)) if cols is None: cols = list(self.headers) #filter out impossible rows and columns rows = [row for row in rows if row in range(self.height)] cols = [header for header in cols if header in self.headers] _dset = Dataset() #filtering rows and columns _dset.headers = list(cols) _dset._data = [] for row_no, row in enumerate(self._data): data_row = [] for key in _dset.headers: if key in self.headers: pos = self.headers.index(key) data_row.append(row[pos]) else: raise KeyError if row_no in rows: _dset.append(row=Row(data_row)) return _dset class Databook(object): """A book of :class:`Dataset` objects. """ _formats = {} def __init__(self, sets=None): if sets is None: self._datasets = list() else: self._datasets = sets self._register_formats() def __repr__(self): try: return '<%s databook>' % (self.title.lower()) except AttributeError: return '' def wipe(self): """Removes all :class:`Dataset` objects from the :class:`Databook`.""" self._datasets = [] @classmethod def _register_formats(cls): """Adds format properties.""" for fmt in formats.available: try: try: setattr(cls, fmt.title, property(fmt.export_book, fmt.import_book)) cls._formats[fmt.title] = (fmt.export_book, fmt.import_book) except AttributeError: setattr(cls, fmt.title, property(fmt.export_book)) cls._formats[fmt.title] = (fmt.export_book, None) except AttributeError: cls._formats[fmt.title] = (None, None) def sheets(self): return self._datasets def add_sheet(self, dataset): """Adds given :class:`Dataset` to the :class:`Databook`.""" if isinstance(dataset, Dataset): self._datasets.append(dataset) else: raise InvalidDatasetType def _package(self, ordered=True): """Packages :class:`Databook` for delivery.""" collector = [] if ordered: dict_pack = OrderedDict else: dict_pack = dict for dset in self._datasets: collector.append(dict_pack( title = dset.title, data = dset._package(ordered=ordered) )) return collector @property def size(self): """The number of the :class:`Dataset` objects within :class:`Databook`.""" return len(self._datasets) def load(self, format, in_stream, **kwargs): """ Import `in_stream` to the :class:`Databook` object using the `format`. :param \*\*kwargs: (optional) custom configuration to the format `import_book`. """ if not format: format = detect_format(in_stream) export_book, import_book = self._formats.get(format, (None, None)) if not import_book: raise UnsupportedFormat('Format {0} cannot be loaded.'.format(format)) import_book(self, in_stream, **kwargs) return self def export(self, format, **kwargs): """ Export :class:`Databook` object to `format`. :param \*\*kwargs: (optional) custom configuration to the format `export_book`. 
""" export_book, import_book = self._formats.get(format, (None, None)) if not export_book: raise UnsupportedFormat('Format {0} cannot be exported.'.format(format)) return export_book(self, **kwargs) def detect_format(stream): """Return format name of given stream.""" for fmt in formats.available: try: if fmt.detect(stream): return fmt.title except AttributeError: pass def import_set(stream, format=None, **kwargs): """Return dataset of given stream.""" return Dataset().load(stream, format, **kwargs) def import_book(stream, format=None, **kwargs): """Return dataset of given stream.""" return Databook().load(stream, format, **kwargs) class InvalidDatasetType(Exception): "Only Datasets can be added to a DataBook" class InvalidDimensions(Exception): "Invalid size" class InvalidDatasetIndex(Exception): "Outside of Dataset size" class HeadersNeeded(Exception): "Header parameter must be given when appending a column in this Dataset." class UnsupportedFormat(NotImplementedError): "Format is not supported" tablib-0.13.0/tablib/compat.py0000644000175000017500000000140313440456503015466 0ustar josephjoseph# -*- coding: utf-8 -*- """ tablib.compat ~~~~~~~~~~~~~ Tablib compatiblity module. """ import sys is_py3 = (sys.version_info[0] > 2) if is_py3: from io import BytesIO from io import StringIO from tablib.packages import markup3 as markup from statistics import median from itertools import zip_longest as izip_longest import csv import tablib.packages.dbfpy3 as dbfpy unicode = str xrange = range else: from cStringIO import StringIO as BytesIO from StringIO import StringIO from tablib.packages import markup from tablib.packages.statistics import median from itertools import izip_longest from backports import csv import tablib.packages.dbfpy as dbfpy unicode = unicode xrange = xrange tablib-0.13.0/tablib/__init__.py0000644000175000017500000000027213440456503015745 0ustar josephjoseph""" Tablib. """ from tablib.core import ( Databook, Dataset, detect_format, import_set, import_book, InvalidDatasetType, InvalidDimensions, UnsupportedFormat, __version__ ) tablib-0.13.0/tablib/packages/0000755000175000017500000000000013440456503015411 5ustar josephjosephtablib-0.13.0/tablib/packages/markup.py0000644000175000017500000004473713440456503017301 0ustar josephjoseph# This code is in the public domain, it comes # with absolutely no warranty and you can do # absolutely whatever you want with it. __date__ = '17 May 2007' __version__ = '1.7' __doc__= """ This is markup.py - a Python module that attempts to make it easier to generate HTML/XML from a Python program in an intuitive, lightweight, customizable and pythonic way. The code is in the public domain. Version: %s as of %s. Documentation and further info is at http://markup.sourceforge.net/ Please send bug reports, feature requests, enhancement ideas or questions to nogradi at gmail dot com. Installation: drop markup.py somewhere into your Python path. 
""" % ( __version__, __date__ ) import string class element: """This class handles the addition of a new element.""" def __init__( self, tag, case='lower', parent=None ): self.parent = parent if case == 'lower': self.tag = tag.lower( ) else: self.tag = tag.upper( ) def __call__( self, *args, **kwargs ): if len( args ) > 1: raise ArgumentError( self.tag ) # if class_ was defined in parent it should be added to every element if self.parent is not None and self.parent.class_ is not None: if 'class_' not in kwargs: kwargs['class_'] = self.parent.class_ if self.parent is None and len( args ) == 1: x = [ self.render( self.tag, False, myarg, mydict ) for myarg, mydict in _argsdicts( args, kwargs ) ] return '\n'.join( x ) elif self.parent is None and len( args ) == 0: x = [ self.render( self.tag, True, myarg, mydict ) for myarg, mydict in _argsdicts( args, kwargs ) ] return '\n'.join( x ) if self.tag in self.parent.twotags: for myarg, mydict in _argsdicts( args, kwargs ): self.render( self.tag, False, myarg, mydict ) elif self.tag in self.parent.onetags: if len( args ) == 0: for myarg, mydict in _argsdicts( args, kwargs ): self.render( self.tag, True, myarg, mydict ) # here myarg is always None, because len( args ) = 0 else: raise ClosingError( self.tag ) elif self.parent.mode == 'strict_html' and self.tag in self.parent.deptags: raise DeprecationError( self.tag ) else: raise InvalidElementError( self.tag, self.parent.mode ) def render( self, tag, single, between, kwargs ): """Append the actual tags to content.""" out = u"<%s" % tag for key, value in kwargs.iteritems( ): if value is not None: # when value is None that means stuff like <... checked> key = key.strip('_') # strip this so class_ will mean class, etc. if key in ['http_equiv', 'accept_charset']: key.replace('_','-') out = u"%s %s=\"%s\"" % ( out, key, escape( value ) ) else: out = u"%s %s" % ( out, key ) if between is not None: out = u"%s>%s" % ( out, between, tag ) else: if single: out = u"%s />" % out else: out = u"%s>" % out if self.parent is not None: self.parent.content.append( out ) else: return out def close( self ): """Append a closing tag unless element has only opening tag.""" if self.tag in self.parent.twotags: self.parent.content.append( "" % self.tag ) elif self.tag in self.parent.onetags: raise ClosingError( self.tag ) elif self.parent.mode == 'strict_html' and self.tag in self.parent.deptags: raise DeprecationError( self.tag ) def open( self, **kwargs ): """Append an opening tag.""" if self.tag in self.parent.twotags or self.tag in self.parent.onetags: self.render( self.tag, False, None, kwargs ) elif self.mode == 'strict_html' and self.tag in self.parent.deptags: raise DeprecationError( self.tag ) class page: """This is our main class representing a document. Elements are added as attributes of an instance of this class.""" def __init__( self, mode='strict_html', case='lower', onetags=None, twotags=None, separator='\n', class_=None ): """Stuff that effects the whole document. 
mode -- 'strict_html' for HTML 4.01 (default) 'html' alias for 'strict_html' 'loose_html' to allow some deprecated elements 'xml' to allow arbitrary elements case -- 'lower' element names will be printed in lower case (default) 'upper' they will be printed in upper case onetags -- list or tuple of valid elements with opening tags only twotags -- list or tuple of valid elements with both opening and closing tags these two keyword arguments may be used to select the set of valid elements in 'xml' mode invalid elements will raise appropriate exceptions separator -- string to place between added elements, defaults to newline class_ -- a class that will be added to every element if defined""" valid_onetags = [ "AREA", "BASE", "BR", "COL", "FRAME", "HR", "IMG", "INPUT", "LINK", "META", "PARAM" ] valid_twotags = [ "A", "ABBR", "ACRONYM", "ADDRESS", "B", "BDO", "BIG", "BLOCKQUOTE", "BODY", "BUTTON", "CAPTION", "CITE", "CODE", "COLGROUP", "DD", "DEL", "DFN", "DIV", "DL", "DT", "EM", "FIELDSET", "FORM", "FRAMESET", "H1", "H2", "H3", "H4", "H5", "H6", "HEAD", "HTML", "I", "IFRAME", "INS", "KBD", "LABEL", "LEGEND", "LI", "MAP", "NOFRAMES", "NOSCRIPT", "OBJECT", "OL", "OPTGROUP", "OPTION", "P", "PRE", "Q", "SAMP", "SCRIPT", "SELECT", "SMALL", "SPAN", "STRONG", "STYLE", "SUB", "SUP", "TABLE", "TBODY", "TD", "TEXTAREA", "TFOOT", "TH", "THEAD", "TITLE", "TR", "TT", "UL", "VAR" ] deprecated_onetags = [ "BASEFONT", "ISINDEX" ] deprecated_twotags = [ "APPLET", "CENTER", "DIR", "FONT", "MENU", "S", "STRIKE", "U" ] self.header = [ ] self.content = [ ] self.footer = [ ] self.case = case self.separator = separator # init( ) sets it to True so we know that has to be printed at the end self._full = False self.class_= class_ if mode == 'strict_html' or mode == 'html': self.onetags = valid_onetags self.onetags += map( string.lower, self.onetags ) self.twotags = valid_twotags self.twotags += map( string.lower, self.twotags ) self.deptags = deprecated_onetags + deprecated_twotags self.deptags += map( string.lower, self.deptags ) self.mode = 'strict_html' elif mode == 'loose_html': self.onetags = valid_onetags + deprecated_onetags self.onetags += map( string.lower, self.onetags ) self.twotags = valid_twotags + deprecated_twotags self.twotags += map( string.lower, self.twotags ) self.mode = mode elif mode == 'xml': if onetags and twotags: self.onetags = onetags self.twotags = twotags elif ( onetags and not twotags ) or ( twotags and not onetags ): raise CustomizationError( ) else: self.onetags = russell( ) self.twotags = russell( ) self.mode = mode else: raise ModeError( mode ) def __getattr__( self, attr ): if attr.startswith("__") and attr.endswith("__"): raise AttributeError(attr) return element( attr, case=self.case, parent=self ) def __str__( self ): if self._full and ( self.mode == 'strict_html' or self.mode == 'loose_html' ): end = [ '', '' ] else: end = [ ] return self.separator.join( self.header + self.content + self.footer + end ) def __call__( self, escape=False ): """Return the document as a string. 
escape -- False print normally True replace < and > by < and > the default escape sequences in most browsers""" if escape: return _escape( self.__str__( ) ) else: return self.__str__( ) def add( self, text ): """This is an alias to addcontent.""" self.addcontent( text ) def addfooter( self, text ): """Add some text to the bottom of the document""" self.footer.append( text ) def addheader( self, text ): """Add some text to the top of the document""" self.header.append( text ) def addcontent( self, text ): """Add some text to the main part of the document""" self.content.append( text ) def init( self, lang='en', css=None, metainfo=None, title=None, header=None, footer=None, charset=None, encoding=None, doctype=None, bodyattrs=None, script=None ): """This method is used for complete documents with appropriate doctype, encoding, title, etc information. For an HTML/XML snippet omit this method. lang -- language, usually a two character string, will appear as in html mode (ignored in xml mode) css -- Cascading Style Sheet filename as a string or a list of strings for multiple css files (ignored in xml mode) metainfo -- a dictionary in the form { 'name':'content' } to be inserted into meta element(s) as (ignored in xml mode) bodyattrs --a dictionary in the form { 'key':'value', ... } which will be added as attributes of the element as (ignored in xml mode) script -- dictionary containing src:type pairs, title -- the title of the document as a string to be inserted into a title element as my title (ignored in xml mode) header -- some text to be inserted right after the element (ignored in xml mode) footer -- some text to be inserted right before the element (ignored in xml mode) charset -- a string defining the character set, will be inserted into a element (ignored in xml mode) encoding -- a string defining the encoding, will be put into to first line of the document as in xml mode (ignored in html mode) doctype -- the document type string, defaults to in html mode (ignored in xml mode)""" self._full = True if self.mode == 'strict_html' or self.mode == 'loose_html': if doctype is None: doctype = "" self.header.append( doctype ) self.html( lang=lang ) self.head( ) if charset is not None: self.meta( http_equiv='Content-Type', content="text/html; charset=%s" % charset ) if metainfo is not None: self.metainfo( metainfo ) if css is not None: self.css( css ) if title is not None: self.title( title ) if script is not None: self.scripts( script ) self.head.close() if bodyattrs is not None: self.body( **bodyattrs ) else: self.body( ) if header is not None: self.content.append( header ) if footer is not None: self.footer.append( footer ) elif self.mode == 'xml': if doctype is None: if encoding is not None: doctype = "" % encoding else: doctype = "" self.header.append( doctype ) def css( self, filelist ): """This convenience function is only useful for html. It adds css stylesheet(s) to the document via the element.""" if isinstance( filelist, basestring ): self.link( href=filelist, rel='stylesheet', type='text/css', media='all' ) else: for file in filelist: self.link( href=file, rel='stylesheet', type='text/css', media='all' ) def metainfo( self, mydict ): """This convenience function is only useful for html. 
It adds meta information via the element, the argument is a dictionary of the form { 'name':'content' }.""" if isinstance( mydict, dict ): for name, content in mydict.iteritems( ): self.meta( name=name, content=content ) else: raise TypeError ("Metainfo should be called with a dictionary argument of name:content pairs.") def scripts( self, mydict ): """Only useful in html, mydict is dictionary of src:type pairs will be rendered as """ if isinstance( mydict, dict ): for src, type in mydict.iteritems( ): self.script( '', src=src, type='text/%s' % type ) else: raise TypeError ("Script should be given a dictionary of src:type pairs.") class _oneliner: """An instance of oneliner returns a string corresponding to one element. This class can be used to write 'oneliners' that return a string immediately so there is no need to instantiate the page class.""" def __init__( self, case='lower' ): self.case = case def __getattr__( self, attr ): if attr.startswith("__") and attr.endswith("__"): raise AttributeError(attr) return element( attr, case=self.case, parent=None ) oneliner = _oneliner( case='lower' ) upper_oneliner = _oneliner( case='upper' ) def _argsdicts( args, mydict ): """A utility generator that pads argument list and dictionary values, will only be called with len( args ) = 0, 1.""" if len( args ) == 0: args = None, elif len( args ) == 1: args = _totuple( args[0] ) else: raise Exception("We should have never gotten here.") mykeys = mydict.keys( ) myvalues = map( _totuple, mydict.values( ) ) maxlength = max( map( len, [ args ] + myvalues ) ) for i in xrange( maxlength ): thisdict = { } for key, value in zip( mykeys, myvalues ): try: thisdict[ key ] = value[i] except IndexError: thisdict[ key ] = value[-1] try: thisarg = args[i] except IndexError: thisarg = args[-1] yield thisarg, thisdict def _totuple( x ): """Utility stuff to convert string, int, float, None or anything to a usable tuple.""" if isinstance( x, basestring ): out = x, elif isinstance( x, ( int, float ) ): out = str( x ), elif x is None: out = None, else: out = tuple( x ) return out def escape( text, newline=False ): """Escape special html characters.""" if isinstance( text, basestring ): if '&' in text: text = text.replace( '&', '&' ) if '>' in text: text = text.replace( '>', '>' ) if '<' in text: text = text.replace( '<', '<' ) if '\"' in text: text = text.replace( '\"', '"' ) if '\'' in text: text = text.replace( '\'', '"' ) if newline: if '\n' in text: text = text.replace( '\n', '
' ) return text _escape = escape def unescape( text ): """Inverse of escape.""" if isinstance( text, basestring ): if '&' in text: text = text.replace( '&', '&' ) if '>' in text: text = text.replace( '>', '>' ) if '<' in text: text = text.replace( '<', '<' ) if '"' in text: text = text.replace( '"', '\"' ) return text class dummy: """A dummy class for attaching attributes.""" pass doctype = dummy( ) doctype.frameset = "" doctype.strict = "" doctype.loose = "" class russell: """A dummy class that contains anything.""" def __contains__( self, item ): return True class MarkupError( Exception ): """All our exceptions subclass this.""" def __str__( self ): return self.message class ClosingError( MarkupError ): def __init__( self, tag ): self.message = "The element '%s' does not accept non-keyword arguments (has no closing tag)." % tag class OpeningError( MarkupError ): def __init__( self, tag ): self.message = "The element '%s' can not be opened." % tag class ArgumentError( MarkupError ): def __init__( self, tag ): self.message = "The element '%s' was called with more than one non-keyword argument." % tag class InvalidElementError( MarkupError ): def __init__( self, tag, mode ): self.message = "The element '%s' is not valid for your mode '%s'." % ( tag, mode ) class DeprecationError( MarkupError ): def __init__( self, tag ): self.message = "The element '%s' is deprecated, instantiate markup.page with mode='loose_html' to allow it." % tag class ModeError( MarkupError ): def __init__( self, mode ): self.message = "Mode '%s' is invalid, possible values: strict_html, loose_html, xml." % mode class CustomizationError( MarkupError ): def __init__( self ): self.message = "If you customize the allowed elements, you must define both types 'onetags' and 'twotags'." if __name__ == '__main__': print (__doc__) tablib-0.13.0/tablib/packages/dbfpy/0000755000175000017500000000000013440456503016515 5ustar josephjosephtablib-0.13.0/tablib/packages/dbfpy/dbf.py0000644000175000017500000002202213440456503017620 0ustar josephjoseph#! /usr/bin/env python """DBF accessing helpers. FIXME: more documentation needed Examples: Create new table, setup structure, add records: dbf = Dbf(filename, new=True) dbf.addField( ("NAME", "C", 15), ("SURNAME", "C", 25), ("INITIALS", "C", 10), ("BIRTHDATE", "D"), ) for (n, s, i, b) in ( ("John", "Miller", "YC", (1980, 10, 11)), ("Andy", "Larkin", "", (1980, 4, 11)), ): rec = dbf.newRecord() rec["NAME"] = n rec["SURNAME"] = s rec["INITIALS"] = i rec["BIRTHDATE"] = b rec.store() dbf.close() Open existed dbf, read some data: dbf = Dbf(filename, True) for rec in dbf: for fldName in dbf.fieldNames: print '%s:\t %s (%s)' % (fldName, rec[fldName], type(rec[fldName])) print dbf.close() """ """History (most recent first): 11-feb-2007 [als] export INVALID_VALUE; Dbf: added .ignoreErrors, .INVALID_VALUE 04-jul-2006 [als] added export declaration 20-dec-2005 [yc] removed fromStream and newDbf methods: use argument of __init__ call must be used instead; added class fields pointing to the header and record classes. 17-dec-2005 [yc] split to several modules; reimplemented 13-dec-2005 [yc] adapted to the changes of the `strutil` module. 
13-sep-2002 [als] support FoxPro Timestamp datatype
15-nov-1999 [jjk] documentation updates, add demo
24-aug-1998 [jjk] add some encodeValue methods (not tested), other tweaks
08-jun-1998 [jjk] fix problems, add more features
20-feb-1998 [jjk] fix problems, add more features
19-feb-1998 [jjk] add create/write capabilities
18-feb-1998 [jjk] from dbfload.py

"""

__version__ = "$Revision: 1.7 $"[11:-2]
__date__ = "$Date: 2007/02/11 09:23:13 $"[7:-2]
__author__ = "Jeff Kunce "

__all__ = ["Dbf"]

from . import header
from . import record
from .utils import INVALID_VALUE


class Dbf(object):
    """DBF accessor.

    FIXME: docs and examples needed (don't forget to tell
    about problems adding new fields on the fly)

    Implementation notes:
        ``_new`` field is used to indicate whether this is
        a new data table. `addField` could be used only for
        the new tables! If at least one record was appended
        to the table, its structure can't be changed.

    """

    __slots__ = ("name", "header", "stream",
                 "_changed", "_new", "_ignore_errors")

    HeaderClass = header.DbfHeader
    RecordClass = record.DbfRecord
    INVALID_VALUE = INVALID_VALUE

    # initialization and creation helpers

    def __init__(self, f, readOnly=False, new=False, ignoreErrors=False):
        """Initialize instance.

        Arguments:
            f:
                Filename or file-like object.
            new:
                True if a new data table must be created. Assume the
                data table exists if this argument is False.
            readOnly:
                if the ``f`` argument is a string, the file will be
                opened in read-only mode; in other cases this argument
                is ignored. This argument is ignored even if the ``new``
                argument is True.
            headerObj:
                `header.DbfHeader` instance or None. If this argument
                is None, a new empty header will be used with all the
                fields set by default.
            ignoreErrors:
                if set, failing field value conversion will return
                ``INVALID_VALUE`` instead of raising a conversion error.

        """
        if isinstance(f, basestring):
            # a filename
            self.name = f
            if new:
                # new table (table file must be
                # created or opened and truncated)
                self.stream = file(f, "w+b")
            else:
                # table file must exist
                self.stream = file(f, ("r+b", "rb")[bool(readOnly)])
        else:
            # a stream
            self.name = getattr(f, "name", "")
            self.stream = f
        if new:
            # if this is a new table, header will be empty
            self.header = self.HeaderClass()
        else:
            # or instantiated using stream
            self.header = self.HeaderClass.fromStream(self.stream)
        self.ignoreErrors = ignoreErrors
        self._new = bool(new)
        self._changed = False

    # properties

    closed = property(lambda self: self.stream.closed)
    recordCount = property(lambda self: self.header.recordCount)
    fieldNames = property(
        lambda self: [_fld.name for _fld in self.header.fields])
    fieldDefs = property(lambda self: self.header.fields)
    changed = property(lambda self: self._changed or self.header.changed)

    def ignoreErrors(self, value):
        """Update `ignoreErrors` flag on the header object and self"""
        self.header.ignoreErrors = self._ignore_errors = bool(value)
    ignoreErrors = property(
        lambda self: self._ignore_errors,
        ignoreErrors,
        doc="""Error processing mode for DBF field value conversion

        if set, failing field value conversion will return
        ``INVALID_VALUE`` instead of raising a conversion error.

        """)

    # protected methods

    def _fixIndex(self, index):
        """Return fixed index.

        This method fails if index isn't a numeric object (long or
        int), or if it isn't in a valid range (less than or equal to
        the number of records in the db).

        If ``index`` is a negative number, it will be treated like a
        negative index on a list object.

        Return:
            Return value is a numeric object meaning a valid index.
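
        Example (illustrative): with 10 records in the table,
        ``_fixIndex(-2)`` returns 9 (the last record), because negative
        indexes are shifted by ``len(self) + 1``.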
""" if not isinstance(index, (int, long)): raise TypeError("Index must be a numeric object") if index < 0: # index from the right side # fix it to the left-side index index += len(self) + 1 if index >= len(self): raise IndexError("Record index out of range") return index # iterface methods def close(self): self.flush() self.stream.close() def flush(self): """Flush data to the associated stream.""" if self.changed: self.header.setCurrentDate() self.header.write(self.stream) self.stream.flush() self._changed = False def indexOfFieldName(self, name): """Index of field named ``name``.""" # FIXME: move this to header class return self.header.fields.index(name) def newRecord(self): """Return new record, which belong to this table.""" return self.RecordClass(self) def append(self, record): """Append ``record`` to the database.""" record.index = self.header.recordCount record._write() self.header.recordCount += 1 self._changed = True self._new = False def addField(self, *defs): """Add field definitions. For more information see `header.DbfHeader.addField`. """ if self._new: self.header.addField(*defs) else: raise TypeError("At least one record was added, " "structure can't be changed") # 'magic' methods (representation and sequence interface) def __repr__(self): return "Dbf stream '%s'\n" % self.stream + repr(self.header) def __len__(self): """Return number of records.""" return self.recordCount def __getitem__(self, index): """Return `DbfRecord` instance.""" return self.RecordClass.fromStream(self, self._fixIndex(index)) def __setitem__(self, index, record): """Write `DbfRecord` instance to the stream.""" record.index = self._fixIndex(index) record._write() self._changed = True self._new = False # def __del__(self): # """Flush stream upon deletion of the object.""" # self.flush() def demo_read(filename): _dbf = Dbf(filename, True) for _rec in _dbf: print print(repr(_rec)) _dbf.close() def demo_create(filename): _dbf = Dbf(filename, new=True) _dbf.addField( ("NAME", "C", 15), ("SURNAME", "C", 25), ("INITIALS", "C", 10), ("BIRTHDATE", "D"), ) for (_n, _s, _i, _b) in ( ("John", "Miller", "YC", (1981, 1, 2)), ("Andy", "Larkin", "AL", (1982, 3, 4)), ("Bill", "Clinth", "", (1983, 5, 6)), ("Bobb", "McNail", "", (1984, 7, 8)), ): _rec = _dbf.newRecord() _rec["NAME"] = _n _rec["SURNAME"] = _s _rec["INITIALS"] = _i _rec["BIRTHDATE"] = _b _rec.store() print(repr(_dbf)) _dbf.close() if __name__ == '__main__': import sys _name = len(sys.argv) > 1 and sys.argv[1] or "county.dbf" demo_create(_name) demo_read(_name) # vim: set et sw=4 sts=4 : tablib-0.13.0/tablib/packages/dbfpy/header.py0000644000175000017500000002225413440456503020324 0ustar josephjoseph"""DBF header definition. TODO: - handle encoding of the character fields (encoding information stored in the DBF header) """ """History (most recent first): 16-sep-2010 [als] fromStream: fix century of the last update field 11-feb-2007 [als] added .ignoreErrors 10-feb-2007 [als] added __getitem__: return field definitions by field name or field number (zero-based) 04-jul-2006 [als] added export declaration 15-dec-2005 [yc] created """ __version__ = "$Revision: 1.6 $"[11:-2] __date__ = "$Date: 2010/09/16 05:06:39 $"[7:-2] __all__ = ["DbfHeader"] try: import cStringIO except ImportError: # when we're in python3, we cStringIO has been replaced by io.StringIO import io as cStringIO import datetime import struct import time from . import fields from . import utils class DbfHeader(object): """Dbf header definition. 
    For more information about dbf header format visit
    `http://www.clicketyclick.dk/databases/xbase/format/dbf.html#DBF_STRUCT`

    Examples:

        Create an empty dbf header and add some field definitions:

            dbfh = DbfHeader()
            dbfh.addField(("name", "C", 10))
            dbfh.addField(("date", "D"))
            dbfh.addField(DbfNumericFieldDef("price", 5, 2))

        Create a dbf header with field definitions:

            dbfh = DbfHeader([
                ("name", "C", 10),
                ("date", "D"),
                DbfNumericFieldDef("price", 5, 2),
            ])

    """

    __slots__ = ("signature", "fields", "lastUpdate", "recordLength",
        "recordCount", "headerLength", "changed", "_ignore_errors")

    ## instance construction and initialization methods

    def __init__(self, fields=None, headerLength=0, recordLength=0,
        recordCount=0, signature=0x03, lastUpdate=None, ignoreErrors=False,
    ):
        """Initialize instance.

        Arguments:
            fields:
                a list of field definitions;
            recordLength:
                size of the records;
            headerLength:
                size of the header;
            recordCount:
                number of records stored in DBF;
            signature:
                version number (aka signature). using 0x03 as a default
                meaning "File without DBT". for more information about
                this field visit
                ``http://www.clicketyclick.dk/databases/xbase/format/dbf.html#DBF_NOTE_1_TARGET``
            lastUpdate:
                date of the DBF's update. this could be a string
                ('yymmdd' or 'yyyymmdd'), timestamp (int or float),
                datetime/date value, a sequence (assuming
                (yyyy, mm, dd, ...)) or an object having callable
                ``ticks`` field.
            ignoreErrors:
                error processing mode for DBF fields (boolean)

        """
        self.signature = signature
        if fields is None:
            self.fields = []
        else:
            self.fields = list(fields)
        self.lastUpdate = utils.getDate(lastUpdate)
        self.recordLength = recordLength
        self.headerLength = headerLength
        self.recordCount = recordCount
        self.ignoreErrors = ignoreErrors
        # XXX: I'm not sure this is safe to
        # initialize `self.changed` in this way
        self.changed = bool(self.fields)

    # @classmethod
    def fromString(cls, string):
        """Return header instance from the string object."""
        return cls.fromStream(cStringIO.StringIO(str(string)))
    fromString = classmethod(fromString)

    # @classmethod
    def fromStream(cls, stream):
        """Return header object from the stream."""
        stream.seek(0)
        _data = stream.read(32)
        (_cnt, _hdrLen, _recLen) = struct.unpack("<I2H", _data[4:12])
        #reserved = _data[12:32]
        _year = ord(_data[1])
        if _year < 80:
            # dbase II started at 1980.  it is quite unlikely
            # that the last update date is before that year.
            _year += 2000
        else:
            _year += 1900
        ## create header object
        _obj = cls(None, _hdrLen, _recLen, _cnt, ord(_data[0]),
            (_year, ord(_data[2]), ord(_data[3])))
        ## append field definitions
        # position 0 is for the deletion flag
        _pos = 1
        _data = stream.read(1)
        while _data != "\r":
            _data += stream.read(31)
            _fld = fields.lookupFor(_data[11]).fromString(_data, _pos)
            _obj._addField(_fld)
            _pos = _fld.end
            _data = stream.read(1)
        return _obj
    fromStream = classmethod(fromStream)

    ## properties

    year = property(lambda self: self.lastUpdate.year)
    month = property(lambda self: self.lastUpdate.month)
    day = property(lambda self: self.lastUpdate.day)

    def ignoreErrors(self, value):
        """Update `ignoreErrors` flag on self and all fields"""
        self._ignore_errors = value = bool(value)
        for _field in self.fields:
            _field.ignoreErrors = value
    ignoreErrors = property(
        lambda self: self._ignore_errors,
        ignoreErrors,
        doc="""Error processing mode for DBF field value conversion

        if set, failing field value conversion will return
        ``INVALID_VALUE`` instead of raising conversion error.

        """)

    ## internal methods

    def _addField(self, *defs):
        """Append field definitions without updating the header geometry.

        This is used while loading a header from an existing stream;
        `addField` must be used to build a new header.

        """
        for _def in defs:
            if isinstance(_def, fields.DbfFieldDef):
                _obj = _def
            else:
                (_name, _type) = _def[:2]
                if len(_def) > 2:
                    _length = _def[2]
                else:
                    _length = None
                if len(_def) > 3:
                    _decimalCount = _def[3]
                else:
                    _decimalCount = None
                _obj = fields.lookupFor(_type)(_name, _length,
                    _decimalCount, ignoreErrors=self._ignore_errors)
            self.fields.append(_obj)

    ## interface methods

    def addField(self, *defs):
        """Add field definition to the header.

        Examples:
            dbfh.addField(
                ("name", "C", 20),
                ("birthdate", "D"),
                ("member", "L"),
            )
            dbfh.addField(("price", "N", 5, 2))
            dbfh.addField(DbfNumericFieldDef("origprice", 5, 2))

        """
        self._addField(*defs)
        # recalculate the layout: the record starts with the
        # deletion flag; each field descriptor is 32 bytes, plus
        # 32 bytes of the header itself and the terminator byte.
        _pos = 1
        for _fld in self.fields:
            _fld.start = _pos
            _pos += _fld.length
            _fld.end = _pos
        self.recordLength = _pos
        self.headerLength = 32 + (32 * len(self.fields)) + 1
        self.changed = True

    def write(self, stream):
        """Encode and write header to the stream."""
        stream.seek(0)
        stream.write(self.toString())
        stream.write("".join([_fld.toString() for _fld in self.fields]))
        stream.write(chr(0x0D))   # cr at end of all hdr data
        self.changed = False

    def toString(self):
        """Return a 32 chars length string with encoded header."""
        return struct.pack("<4BI2H",
            self.signature,
            self.lastUpdate.year - 1900,
            self.lastUpdate.month,
            self.lastUpdate.day,
            self.recordCount, self.headerLength, self.recordLength) + \
            "\0" * 20

    def setCurrentDate(self):
        """Update ``self.lastUpdate`` field with current date value."""
        self.lastUpdate = datetime.date.today()

    def __getitem__(self, item):
        """Return a field definition by numeric index or name string"""
        if isinstance(item, basestring):
            _name = item.upper()
            for _field in self.fields:
                if _field.name == _name:
                    return _field
            raise KeyError(item)
        # item must be a field index
        return self.fields[item]

# vim: et sts=4 sw=4 :
tablib-0.13.0/tablib/packages/dbfpy/record.py"""DBF record definition.
"""
"""History (most recent first):
20-dec-2005 [yc]    DbfRecord.write() -> DbfRecord._write();
    added delete() method.
16-dec-2005 [yc]    record definition moved from `dbf`.
"""

__version__ = "$Revision: 1.7 $"[11:-2]
__date__ = "$Date: 2007/02/11 09:05:49 $"[7:-2]

__all__ = ["DbfRecord"]

from itertools import izip

from . import utils


class DbfRecord(object):
    """DBF record.

    Instances of this class shouldn't be created manually,
    use `dbf.Dbf.newRecord` instead.

    Class implements mapping/sequence interface, so
    fields could be accessed via their names or indexes
    (names are a preferred way to access fields).

    Hint:
        Use `store` method to save modified record.

    Examples:
        Add a new record to the database:

            db = Dbf(filename)
            rec = db.newRecord()
            rec["FIELD1"] = value1
            rec["FIELD2"] = value2
            rec.store()

        Or the same, but modify an existing
        (second in this case) record:

            db = Dbf(filename)
            rec = db[2]
            rec["FIELD1"] = value1
            rec["FIELD2"] = value2
            rec.store()

    """

    __slots__ = "dbf", "index", "deleted", "fieldData"

    ## creation and initialization

    def __init__(self, dbf, index=None, deleted=False, data=None):
        """Instance initialization.

        Arguments:
            dbf:
                A `Dbf.Dbf` instance this record belongs to.
            index:
                An integer record index or None. If this value is
                None, the record will be appended to the DBF.
            deleted:
                Boolean flag indicating whether this record
                is a deleted record.
            data:
                A sequence or None. This is the data of the fields.
                If this argument is None, default values will be used.

        """
        self.dbf = dbf
        # XXX: I'm not sure ``index`` is necessary
        self.index = index
        self.deleted = deleted
        if data is None:
            self.fieldData = [_fd.defaultValue for _fd in dbf.header.fields]
        else:
            self.fieldData = list(data)

    # XXX: validate self.index before calculating position?
    position = property(lambda self: self.dbf.header.headerLength + \
        self.index * self.dbf.header.recordLength)

    def rawFromStream(cls, dbf, index):
        """Return raw record contents read from the stream.

        Arguments:
            dbf:
                A `Dbf.Dbf` instance containing the record.
            index:
                Index of the record in the records' container.
                This argument can't be None in this call.

        Return value is a string containing record data in DBF format.

        """
        # XXX: may be write smth assuming, that current stream
        # position is the required one? it could save some
        # time required to calculate where to seek in the file
        dbf.stream.seek(dbf.header.headerLength +
            index * dbf.header.recordLength)
        return dbf.stream.read(dbf.header.recordLength)
    rawFromStream = classmethod(rawFromStream)

    def fromStream(cls, dbf, index):
        """Return a record read from the stream.

        Arguments:
            dbf:
                A `Dbf.Dbf` instance new record should belong to.
            index:
                Index of the record in the records' container.
                This argument can't be None in this call.

        Return value is an instance of the current class.

        """
        return cls.fromString(dbf, cls.rawFromStream(dbf, index), index)
    fromStream = classmethod(fromStream)

    def fromString(cls, dbf, string, index=None):
        """Return record read from the string object.

        Arguments:
            dbf:
                A `Dbf.Dbf` instance new record should belong to.
            string:
                A string new record should be created from.
            index:
                Index of the record in the container. If this
                argument is None, record will be appended.

        Return value is an instance of the current class.

        """
        return cls(dbf, index, string[0] == "*",
            [_fd.decodeFromRecord(string) for _fd in dbf.header.fields])
    fromString = classmethod(fromString)

    ## object representation

    def __repr__(self):
        _template = "%%%ds: %%s (%%s)" % max(
            [len(_fld) for _fld in self.dbf.fieldNames])
        _rv = []
        for _fld in self.dbf.fieldNames:
            _val = self[_fld]
            if _val is utils.INVALID_VALUE:
                _rv.append(_template %
                    (_fld, "None", "value cannot be decoded"))
            else:
                _rv.append(_template % (_fld, _val, type(_val)))
        return "\n".join(_rv)

    ## protected methods

    def _write(self):
        """Write data to the dbf stream.

        Note:
            This isn't a public method; it's better to use
            'store' instead publicly. By design, the ``_write``
            method should be called only from the `Dbf` instance.

        """
        self._validateIndex(False)
        self.dbf.stream.seek(self.position)
        self.dbf.stream.write(self.toString())
        # FIXME: may be move this write somewhere else?
        # why we should check this condition for each record?
        if self.index == len(self.dbf):
            # this is the last record,
            # we should write SUB (ASCII 26)
            self.dbf.stream.write("\x1A")

    ## utility methods

    def _validateIndex(self, allowUndefined=True, checkRange=False):
        """Validate ``self.index`` value.

        If ``allowUndefined`` argument is True, the function does
        nothing when ``self.index`` points to a None object.

        """
        if self.index is None:
            if not allowUndefined:
                raise ValueError("Index is undefined")
        elif self.index < 0:
            raise ValueError("Index can't be negative (%s)" % self.index)
        elif checkRange and self.index >= self.dbf.header.recordCount:
            raise ValueError("There are only %d records in the DBF" %
                self.dbf.header.recordCount)

    ## interface methods

    def store(self):
        """Store current record in the DBF.

        If ``self.index`` is None, this record will be appended to the
        records of the DBF this record belongs to; or replaced otherwise.

        """
        self._validateIndex()
        if self.index is None:
            self.index = len(self.dbf)
            self.dbf.append(self)
        else:
            self.dbf[self.index] = self

    def delete(self):
        """Mark the record as deleted."""
        self.deleted = True

    def toString(self):
        """Return string packed record values."""
        return "".join([" *"[self.deleted]] + [
            _def.encodeValue(_dat)
            for (_def, _dat) in izip(self.dbf.header.fields, self.fieldData)
        ])

    def asList(self):
        """Return a flat list of fields.

        Note:
            Change of the list's values won't change
            real values stored in this object.

        """
        return self.fieldData[:]

    def asDict(self):
        """Return a dictionary of fields.

        Note:
            Change of the dict's values won't change
            real values stored in this object.

        """
        return dict([_i for _i in izip(self.dbf.fieldNames, self.fieldData)])

    def __getitem__(self, key):
        """Return value by field name or field index."""
        if isinstance(key, (long, int)):
            # integer index of the field
            return self.fieldData[key]
        # assuming string field name
        return self.fieldData[self.dbf.indexOfFieldName(key)]

    def __setitem__(self, key, value):
        """Set field value by integer index of the field or string name."""
        if isinstance(key, (int, long)):
            # integer index of the field
            self.fieldData[key] = value
            return
        # assuming string field name
        self.fieldData[self.dbf.indexOfFieldName(key)] = value

# vim: et sts=4 sw=4 :
tablib-0.13.0/tablib/packages/dbfpy/utils.py0000644000175000017500000001146213440456503020233 0ustar  josephjoseph"""String utilities.

TODO:
  - allow strings in getDateTime routine;

"""
"""History (most recent first):
11-feb-2007 [als]   added INVALID_VALUE
10-feb-2007 [als]   allow date strings padded with spaces
                    instead of zeroes
20-dec-2005 [yc]    handle long objects in getDate/getDateTime
16-dec-2005 [yc]    created from ``strutil`` module.
"""

__version__ = "$Revision: 1.4 $"[11:-2]
__date__ = "$Date: 2007/02/11 08:57:17 $"[7:-2]

import datetime
import time


def unzfill(str):
    """Return a string without ASCII NULs.

    This function searches for the first NUL (ASCII 0) occurrence
    and truncates the string at that position.

    """
    try:
        return str[:str.index('\0')]
    except ValueError:
        return str


def getDate(date=None):
    """Return `datetime.date` instance.

    Type of the ``date`` argument could be one of the following:
        None:
            use current date value;
        datetime.date:
            this value will be returned;
        datetime.datetime:
            the result of the date.date() will be returned;
        string:
            assuming "%Y%m%d" or "%y%m%d" format;
        number:
            assuming it's a timestamp (returned
            for example by the time.time() call);
        sequence:
            assuming (year, month, day, ...) sequence;

    Additionally, if ``date`` has a callable ``ticks`` attribute,
    it will be used and the result of the call would be treated
    as a timestamp value.
Type of the ``value`` argument could be one of the following: None: use current date value; datetime.date: result will be converted to the `datetime.datetime` instance using midnight; datetime.datetime: ``value`` will be returned as is; string: *** CURRENTLY NOT SUPPORTED ***; number: assuming it's a timestamp (returned for example by the time.time() call; sequence: assuming (year, month, day, ...) sequence; Additionaly, if ``value`` has callable ``ticks`` attribute, it will be used and result of the called would be treated as a timestamp value. """ if value is None: # use current value return datetime.datetime.today() if isinstance(value, datetime.datetime): return value if isinstance(value, datetime.date): return datetime.datetime.fromordinal(value.toordinal()) if isinstance(value, (int, long, float)): # value is a timestamp return datetime.datetime.fromtimestamp(value) if isinstance(value, basestring): raise NotImplementedError("Strings aren't currently implemented") if hasattr(value, "__getitem__"): # a sequence (assuming date/time tuple) return datetime.datetime(*tuple(value)[:6]) return datetime.datetime.fromtimestamp(value.ticks()) class classproperty(property): """Works in the same way as a ``property``, but for the classes.""" def __get__(self, obj, cls): return self.fget(cls) class _InvalidValue(object): """Value returned from DBF records when field validation fails The value is not equal to anything except for itself and equal to all empty values: None, 0, empty string etc. In other words, invalid value is equal to None and not equal to None at the same time. This value yields zero upon explicit conversion to a number type, empty string for string types, and False for boolean. """ def __eq__(self, other): return not other def __ne__(self, other): return not (other is self) def __nonzero__(self): return False def __int__(self): return 0 __long__ = __int__ def __float__(self): return 0.0 def __str__(self): return "" def __unicode__(self): return u"" def __repr__(self): return "" # invalid value is a constant singleton INVALID_VALUE = _InvalidValue() # vim: set et sts=4 sw=4 : tablib-0.13.0/tablib/packages/dbfpy/__init__.py0000644000175000017500000000000013440456503020614 0ustar josephjosephtablib-0.13.0/tablib/packages/dbfpy/fields.py0000644000175000017500000003427413440456503020347 0ustar josephjoseph"""DBF fields definitions. TODO: - make memos work """ """History (most recent first): 26-may-2009 [als] DbfNumericFieldDef.decodeValue: strip zero bytes 05-feb-2009 [als] DbfDateFieldDef.encodeValue: empty arg produces empty date 16-sep-2008 [als] DbfNumericFieldDef decoding looks for decimal point in the value to select float or integer return type 13-mar-2008 [als] check field name length in constructor 11-feb-2007 [als] handle value conversion errors 10-feb-2007 [als] DbfFieldDef: added .rawFromRecord() 01-dec-2006 [als] Timestamp columns use None for empty values 31-oct-2006 [als] support field types 'F' (float), 'I' (integer) and 'Y' (currency); automate export and registration of field classes 04-jul-2006 [als] added export declaration 10-mar-2006 [als] decode empty values for Date and Logical fields; show field name in errors 10-mar-2006 [als] fix Numeric value decoding: according to spec, value always is string representation of the number; ensure that encoded Numeric value fits into the field 20-dec-2005 [yc] use field names in upper case 15-dec-2005 [yc] field definitions moved from `dbf`. 
""" __version__ = "$Revision: 1.14 $"[11:-2] __date__ = "$Date: 2009/05/26 05:16:51 $"[7:-2] __all__ = ["lookupFor",] # field classes added at the end of the module import datetime import struct import sys from . import utils ## abstract definitions class DbfFieldDef(object): """Abstract field definition. Child classes must override ``type`` class attribute to provide datatype infromation of the field definition. For more info about types visit `http://www.clicketyclick.dk/databases/xbase/format/data_types.html` Also child classes must override ``defaultValue`` field to provide default value for the field value. If child class has fixed length ``length`` class attribute must be overriden and set to the valid value. None value means, that field isn't of fixed length. Note: ``name`` field must not be changed after instantiation. """ __slots__ = ("name", "length", "decimalCount", "start", "end", "ignoreErrors") # length of the field, None in case of variable-length field, # or a number if this field is a fixed-length field length = None # field type. for more information about fields types visit # `http://www.clicketyclick.dk/databases/xbase/format/data_types.html` # must be overriden in child classes typeCode = None # default value for the field. this field must be # overriden in child classes defaultValue = None def __init__(self, name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False, ): """Initialize instance.""" assert self.typeCode is not None, "Type code must be overriden" assert self.defaultValue is not None, "Default value must be overriden" ## fix arguments if len(name) >10: raise ValueError("Field name \"%s\" is too long" % name) name = str(name).upper() if self.__class__.length is None: if length is None: raise ValueError("[%s] Length isn't specified" % name) length = int(length) if length <= 0: raise ValueError("[%s] Length must be a positive integer" % name) else: length = self.length if decimalCount is None: decimalCount = 0 ## set fields self.name = name # FIXME: validate length according to the specification at # http://www.clicketyclick.dk/databases/xbase/format/data_types.html self.length = length self.decimalCount = decimalCount self.ignoreErrors = ignoreErrors self.start = start self.end = stop def __cmp__(self, other): return cmp(self.name, str(other).upper()) def __hash__(self): return hash(self.name) def fromString(cls, string, start, ignoreErrors=False): """Decode dbf field definition from the string data. Arguments: string: a string, dbf definition is decoded from. length of the string must be 32 bytes. start: position in the database file. ignoreErrors: initial error processing mode for the new field (boolean) """ assert len(string) == 32 _length = ord(string[16]) return cls(utils.unzfill(string)[:11], _length, ord(string[17]), start, start + _length, ignoreErrors=ignoreErrors) fromString = classmethod(fromString) def toString(self): """Return encoded field definition. Return: Return value is a string object containing encoded definition of this field. """ if sys.version_info < (2, 4): # earlier versions did not support padding character _name = self.name[:11] + "\0" * (11 - len(self.name)) else: _name = self.name.ljust(11, '\0') return ( _name + self.typeCode + #data address chr(0) * 4 + chr(self.length) + chr(self.decimalCount) + chr(0) * 14 ) def __repr__(self): return "%-10s %1s %3d %3d" % self.fieldInfo() def fieldInfo(self): """Return field information. Return: Return value is a (name, type, length, decimals) tuple. 
""" return (self.name, self.typeCode, self.length, self.decimalCount) def rawFromRecord(self, record): """Return a "raw" field value from the record string.""" return record[self.start:self.end] def decodeFromRecord(self, record): """Return decoded field value from the record string.""" try: return self.decodeValue(self.rawFromRecord(record)) except: if self.ignoreErrors: return utils.INVALID_VALUE else: raise def decodeValue(self, value): """Return decoded value from string value. This method shouldn't be used publicly. It's called from the `decodeFromRecord` method. This is an abstract method and it must be overridden in child classes. """ raise NotImplementedError def encodeValue(self, value): """Return str object containing encoded field value. This is an abstract method and it must be overriden in child classes. """ raise NotImplementedError ## real classes class DbfCharacterFieldDef(DbfFieldDef): """Definition of the character field.""" typeCode = "C" defaultValue = "" def decodeValue(self, value): """Return string object. Return value is a ``value`` argument with stripped right spaces. """ return value.rstrip(" ") def encodeValue(self, value): """Return raw data string encoded from a ``value``.""" return str(value)[:self.length].ljust(self.length) class DbfNumericFieldDef(DbfFieldDef): """Definition of the numeric field.""" typeCode = "N" # XXX: now I'm not sure it was a good idea to make a class field # `defaultValue` instead of a generic method as it was implemented # previously -- it's ok with all types except number, cuz # if self.decimalCount is 0, we should return 0 and 0.0 otherwise. defaultValue = 0 def decodeValue(self, value): """Return a number decoded from ``value``. If decimals is zero, value will be decoded as an integer; or as a float otherwise. Return: Return value is a int (long) or float instance. """ value = value.strip(" \0") if "." in value: # a float (has decimal separator) return float(value) elif value: # must be an integer return int(value) else: return 0 def encodeValue(self, value): """Return string containing encoded ``value``.""" _rv = ("%*.*f" % (self.length, self.decimalCount, value)) if len(_rv) > self.length: _ppos = _rv.find(".") if 0 <= _ppos <= self.length: _rv = _rv[:self.length] else: raise ValueError("[%s] Numeric overflow: %s (field width: %i)" % (self.name, _rv, self.length)) return _rv class DbfFloatFieldDef(DbfNumericFieldDef): """Definition of the float field - same as numeric.""" typeCode = "F" class DbfIntegerFieldDef(DbfFieldDef): """Definition of the integer field.""" typeCode = "I" length = 4 defaultValue = 0 def decodeValue(self, value): """Return an integer number decoded from ``value``.""" return struct.unpack("= 1: _rv = datetime.datetime.fromordinal(_jdn - self.JDN_GDN_DIFF) _rv += datetime.timedelta(0, _msecs / 1000.0) else: # empty date _rv = None return _rv def encodeValue(self, value): """Return a string-encoded ``value``.""" if value: value = utils.getDateTime(value) # LE byteorder _rv = struct.pack("<2I", value.toordinal() + self.JDN_GDN_DIFF, (value.hour * 3600 + value.minute * 60 + value.second) * 1000) else: _rv = "\0" * self.length assert len(_rv) == self.length return _rv _fieldsRegistry = {} def registerField(fieldCls): """Register field definition class. ``fieldCls`` should be subclass of the `DbfFieldDef`. Use `lookupFor` to retrieve field definition class by the type code. """ assert fieldCls.typeCode is not None, "Type code isn't defined" # XXX: use fieldCls.typeCode.upper()? 
    # in case of any design change don't forget to look at
    # the same comment in the ``lookupFor`` method
    _fieldsRegistry[fieldCls.typeCode] = fieldCls


def lookupFor(typeCode):
    """Return field definition class for the given type code.

    ``typeCode`` must be a single character. That type should be
    previously registered.

    Use `registerField` to register new field class.

    Return:
        Return value is a subclass of the `DbfFieldDef`.

    """
    # XXX: use typeCode.upper()? in case of any design change don't
    # forget to look at the same comment in ``registerField``
    return _fieldsRegistry[typeCode]

## register generic types

for (_name, _val) in globals().items():
    if isinstance(_val, type) and issubclass(_val, DbfFieldDef) \
    and (_name != "DbfFieldDef"):
        __all__.append(_name)
        registerField(_val)
del _name, _val

# vim: et sts=4 sw=4 :
tablib-0.13.0/tablib/packages/dbfpy/dbfnew.py0000644000175000017500000001252213440456503020336 0ustar  josephjoseph#!/usr/bin/python
""".DBF creation helpers.

Note: this is a legacy interface.  New code should use Dbf class
    for table creation (see examples in dbf.py)

TODO:
  - handle Memo fields.
  - check length of the fields according to the
    `http://www.clicketyclick.dk/databases/xbase/format/data_types.html`

"""
"""History (most recent first)
04-jul-2006 [als]   added export declaration;
                    updated for dbfpy 2.0
15-dec-2005 [yc]    define dbf_new.__slots__
14-dec-2005 [yc]    added vim modeline; retab'd; added doc-strings;
                    dbf_new now is a new class (inherited from object)
??-jun-2000 [--]    added by Hans Fiby
"""

__version__ = "$Revision: 1.4 $"[11:-2]
__date__ = "$Date: 2006/07/04 08:18:18 $"[7:-2]

__all__ = ["dbf_new"]

from dbf import *
from fields import *
from header import *
from record import *


class _FieldDefinition(object):
    """Field definition.

    This is a simple structure, which contains ``name``, ``type``,
    ``len``, ``dec`` and ``cls`` fields.

    Objects also implement get/setitem magic functions, so fields
    could be accessed via sequence interface, where 'name' has
    index 0, 'type' index 1, 'len' index 2, 'dec' index 3 and
    'cls' could be located at index 4.

    """

    __slots__ = "name", "type", "len", "dec", "cls"

    # WARNING: be attentive - dictionaries are mutable!
    FLD_TYPES = {
        # type: (cls, len)
        "C": (DbfCharacterFieldDef, None),
        "N": (DbfNumericFieldDef, None),
        "L": (DbfLogicalFieldDef, 1),
        # FIXME: support memos
        # "M": (DbfMemoFieldDef),
        "D": (DbfDateFieldDef, 8),
        # FIXME: I'm not sure length should be 14 characters!
        # but temporary I use it, cuz date is 8 characters
        # and time 6 (hhmmss)
        "T": (DbfDateTimeFieldDef, 14),
    }

    def __init__(self, name, type, len=None, dec=0):
        _cls, _len = self.FLD_TYPES[type]
        if _len is None:
            if len is None:
                raise ValueError("Field length must be defined")
            _len = len
        self.name = name
        self.type = type
        self.len = _len
        self.dec = dec
        self.cls = _cls

    def getDbfField(self):
        "Return `DbfFieldDef` instance from the current definition."
        return self.cls(self.name, self.len, self.dec)

    def appendToHeader(self, dbfh):
        """Create a `DbfFieldDef` instance and append it to the dbf header.

        Arguments:
            dbfh: `DbfHeader` instance.

        """
        _dbff = self.getDbfField()
        dbfh.addField(_dbff)


class dbf_new(object):
    """New .DBF creation helper.

    Example Usage:

        dbfn = dbf_new()
        dbfn.add_field("name",'C',80)
        dbfn.add_field("price",'N',10,2)
        dbfn.add_field("date",'D',8)
        dbfn.write("tst.dbf")

    Note:
        This module cannot handle Memo-fields,
        they are special.

    """

    __slots__ = ("fields",)

    FieldDefinitionClass = _FieldDefinition

    def __init__(self):
        self.fields = []

    def add_field(self, name, typ, len, dec=0):
        """Add field definition.
Arguments: name: field name (str object). field name must not contain ASCII NULs and it's length shouldn't exceed 10 characters. typ: type of the field. this must be a single character from the "CNLMDT" set meaning character, numeric, logical, memo, date and date/time respectively. len: length of the field. this argument is used only for the character and numeric fields. all other fields have fixed length. FIXME: use None as a default for this argument? dec: decimal precision. used only for the numric fields. """ self.fields.append(self.FieldDefinitionClass(name, typ, len, dec)) def write(self, filename): """Create empty .DBF file using current structure.""" _dbfh = DbfHeader() _dbfh.setCurrentDate() for _fldDef in self.fields: _fldDef.appendToHeader(_dbfh) _dbfStream = file(filename, "wb") _dbfh.write(_dbfStream) _dbfStream.close() def write_stream(self, stream): _dbfh = DbfHeader() _dbfh.setCurrentDate() for _fldDef in self.fields: _fldDef.appendToHeader(_dbfh) _dbfh.write(stream) if __name__ == '__main__': # create a new DBF-File dbfn = dbf_new() dbfn.add_field("name", 'C', 80) dbfn.add_field("price", 'N', 10, 2) dbfn.add_field("date", 'D', 8) dbfn.write("tst.dbf") # test new dbf print "*** created tst.dbf: ***" dbft = Dbf('tst.dbf', readOnly=0) print repr(dbft) # add a record rec = DbfRecord(dbft) rec['name'] = 'something' rec['price'] = 10.5 rec['date'] = (2000, 1, 12) rec.store() # add another record rec = DbfRecord(dbft) rec['name'] = 'foo and bar' rec['price'] = 12234 rec['date'] = (1992, 7, 15) rec.store() # show the records print "*** inserted 2 records into tst.dbf: ***" print repr(dbft) for i1 in range(len(dbft)): rec = dbft[i1] for fldName in dbft.fieldNames: print '%s:\t %s' % (fldName, rec[fldName]) print dbft.close() # vim: set et sts=4 sw=4 : tablib-0.13.0/tablib/packages/dbfpy3/0000755000175000017500000000000013440456503016600 5ustar josephjosephtablib-0.13.0/tablib/packages/dbfpy3/dbf.py0000644000175000017500000002206213440456503017707 0ustar josephjoseph#! /usr/bin/env python """DBF accessing helpers. FIXME: more documentation needed Examples: Create new table, setup structure, add records: dbf = Dbf(filename, new=True) dbf.addField( ("NAME", "C", 15), ("SURNAME", "C", 25), ("INITIALS", "C", 10), ("BIRTHDATE", "D"), ) for (n, s, i, b) in ( ("John", "Miller", "YC", (1980, 10, 11)), ("Andy", "Larkin", "", (1980, 4, 11)), ): rec = dbf.newRecord() rec["NAME"] = n rec["SURNAME"] = s rec["INITIALS"] = i rec["BIRTHDATE"] = b rec.store() dbf.close() Open existed dbf, read some data: dbf = Dbf(filename, True) for rec in dbf: for fldName in dbf.fieldNames: print '%s:\t %s (%s)' % (fldName, rec[fldName], type(rec[fldName])) print dbf.close() """ """History (most recent first): 11-feb-2007 [als] export INVALID_VALUE; Dbf: added .ignoreErrors, .INVALID_VALUE 04-jul-2006 [als] added export declaration 20-dec-2005 [yc] removed fromStream and newDbf methods: use argument of __init__ call must be used instead; added class fields pointing to the header and record classes. 17-dec-2005 [yc] split to several modules; reimplemented 13-dec-2005 [yc] adapted to the changes of the `strutil` module. 
13-sep-2002 [als]   support FoxPro Timestamp datatype
15-nov-1999 [jjk]   documentation updates, add demo
24-aug-1998 [jjk]   add some encodeValue methods (not tested), other tweaks
08-jun-1998 [jjk]   fix problems, add more features
20-feb-1998 [jjk]   fix problems, add more features
19-feb-1998 [jjk]   add create/write capabilities
18-feb-1998 [jjk]   from dbfload.py
"""

__version__ = "$Revision: 1.7 $"[11:-2]
__date__ = "$Date: 2007/02/11 09:23:13 $"[7:-2]
__author__ = "Jeff Kunce"

__all__ = ["Dbf"]

from . import header
from . import record
from .utils import INVALID_VALUE


class Dbf(object):
    """DBF accessor.

    FIXME:
        docs and examples needed (don't forget to tell
        about problems adding new fields on the fly)

    Implementation notes:
        ``_new`` field is used to indicate whether this is
        a new data table. `addField` could be used only for
        the new tables! If at least one record was appended
        to the table, its structure can't be changed.

    """

    __slots__ = ("name", "header", "stream",
        "_changed", "_new", "_ignore_errors")

    HeaderClass = header.DbfHeader
    RecordClass = record.DbfRecord
    INVALID_VALUE = INVALID_VALUE

    # initialization and creation helpers

    def __init__(self, f, readOnly=False, new=False, ignoreErrors=False):
        """Initialize instance.

        Arguments:
            f:
                Filename or file-like object.
            new:
                True if new data table must be created. Assume
                data table exists if this argument is False.
            readOnly:
                if ``f`` argument is a string, the file will be opened
                in read-only mode; in other cases this argument is
                ignored. This argument is ignored even if ``new``
                argument is True.
            headerObj:
                `header.DbfHeader` instance or None. If this argument
                is None, a new empty header will be used with all
                fields set by default.
            ignoreErrors:
                if set, failing field value conversion will return
                ``INVALID_VALUE`` instead of raising conversion error.

        """
        if isinstance(f, str):
            # a filename
            self.name = f
            if new:
                # new table (table file must be
                # created or opened and truncated)
                self.stream = open(f, "w+b")
            else:
                # table file must exist
                self.stream = open(f, ("r+b", "rb")[bool(readOnly)])
        else:
            # a stream
            self.name = getattr(f, "name", "")
            self.stream = f
        if new:
            # if this is a new table, header will be empty
            self.header = self.HeaderClass()
        else:
            # or instantiated using stream
            self.header = self.HeaderClass.fromStream(self.stream)
        self.ignoreErrors = ignoreErrors
        self._new = bool(new)
        self._changed = False

    # properties

    closed = property(lambda self: self.stream.closed)
    recordCount = property(lambda self: self.header.recordCount)
    fieldNames = property(
        lambda self: [_fld.name for _fld in self.header.fields])
    fieldDefs = property(lambda self: self.header.fields)
    changed = property(lambda self: self._changed or self.header.changed)

    def ignoreErrors(self, value):
        """Update `ignoreErrors` flag on the header object and self"""
        self.header.ignoreErrors = self._ignore_errors = bool(value)
    ignoreErrors = property(
        lambda self: self._ignore_errors,
        ignoreErrors,
        doc="""Error processing mode for DBF field value conversion

        if set, failing field value conversion will return
        ``INVALID_VALUE`` instead of raising conversion error.

        """)

    # protected methods

    def _fixIndex(self, index):
        """Return fixed index.

        This method fails if index isn't a numeric object (int),
        or if index isn't in a valid range (less than or equal
        to the number of records in the db).

        If ``index`` is a negative number, it will be treated
        as a negative index for list objects.

        Return:
            Return value is a numeric object pointing to a valid index.
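
        Example (illustrative): with ``len(self) == 5``,
        ``_fixIndex(2)`` returns 2 and ``_fixIndex(-6)`` returns 0;
        negative indexes are offset by ``len(self) + 1``.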
""" if not isinstance(index, int): raise TypeError("Index must be a numeric object") if index < 0: # index from the right side # fix it to the left-side index index += len(self) + 1 if index >= len(self): raise IndexError("Record index out of range") return index # iterface methods def close(self): self.flush() self.stream.close() def flush(self): """Flush data to the associated stream.""" if self.changed: self.header.setCurrentDate() self.header.write(self.stream) self.stream.flush() self._changed = False def indexOfFieldName(self, name): """Index of field named ``name``.""" # FIXME: move this to header class names = [f.name for f in self.header.fields] return names.index(name.upper()) def newRecord(self): """Return new record, which belong to this table.""" return self.RecordClass(self) def append(self, record): """Append ``record`` to the database.""" record.index = self.header.recordCount record._write() self.header.recordCount += 1 self._changed = True self._new = False def addField(self, *defs): """Add field definitions. For more information see `header.DbfHeader.addField`. """ if self._new: self.header.addField(*defs) else: raise TypeError("At least one record was added, " "structure can't be changed") # 'magic' methods (representation and sequence interface) def __repr__(self): return "Dbf stream '%s'\n" % self.stream + repr(self.header) def __len__(self): """Return number of records.""" return self.recordCount def __getitem__(self, index): """Return `DbfRecord` instance.""" return self.RecordClass.fromStream(self, self._fixIndex(index)) def __setitem__(self, index, record): """Write `DbfRecord` instance to the stream.""" record.index = self._fixIndex(index) record._write() self._changed = True self._new = False # def __del__(self): # """Flush stream upon deletion of the object.""" # self.flush() def demo_read(filename): _dbf = Dbf(filename, True) for _rec in _dbf: print() print(repr(_rec)) _dbf.close() def demo_create(filename): _dbf = Dbf(filename, new=True) _dbf.addField( ("NAME", "C", 15), ("SURNAME", "C", 25), ("INITIALS", "C", 10), ("BIRTHDATE", "D"), ) for (_n, _s, _i, _b) in ( ("John", "Miller", "YC", (1981, 1, 2)), ("Andy", "Larkin", "AL", (1982, 3, 4)), ("Bill", "Clinth", "", (1983, 5, 6)), ("Bobb", "McNail", "", (1984, 7, 8)), ): _rec = _dbf.newRecord() _rec["NAME"] = _n _rec["SURNAME"] = _s _rec["INITIALS"] = _i _rec["BIRTHDATE"] = _b _rec.store() print(repr(_dbf)) _dbf.close() if __name__ == '__main__': import sys _name = len(sys.argv) > 1 and sys.argv[1] or "county.dbf" demo_create(_name) demo_read(_name) # vim: set et sw=4 sts=4 : tablib-0.13.0/tablib/packages/dbfpy3/header.py0000644000175000017500000002203713440456503020406 0ustar josephjoseph"""DBF header definition. TODO: - handle encoding of the character fields (encoding information stored in the DBF header) """ """History (most recent first): 16-sep-2010 [als] fromStream: fix century of the last update field 11-feb-2007 [als] added .ignoreErrors 10-feb-2007 [als] added __getitem__: return field definitions by field name or field number (zero-based) 04-jul-2006 [als] added export declaration 15-dec-2005 [yc] created """ __version__ = "$Revision: 1.6 $"[11:-2] __date__ = "$Date: 2010/09/16 05:06:39 $"[7:-2] __all__ = ["DbfHeader"] import io import datetime import struct import time import sys from . import fields from .utils import getDate class DbfHeader(object): """Dbf header definition. 
    For more information about dbf header format visit
    `http://www.clicketyclick.dk/databases/xbase/format/dbf.html#DBF_STRUCT`

    Examples:

        Create an empty dbf header and add some field definitions:

            dbfh = DbfHeader()
            dbfh.addField(("name", "C", 10))
            dbfh.addField(("date", "D"))
            dbfh.addField(DbfNumericFieldDef("price", 5, 2))

        Create a dbf header with field definitions:

            dbfh = DbfHeader([
                ("name", "C", 10),
                ("date", "D"),
                DbfNumericFieldDef("price", 5, 2),
            ])

    """

    __slots__ = ("signature", "fields", "lastUpdate", "recordLength",
        "recordCount", "headerLength", "changed", "_ignore_errors")

    ## instance construction and initialization methods

    def __init__(self, fields=None, headerLength=0, recordLength=0,
        recordCount=0, signature=0x03, lastUpdate=None, ignoreErrors=False,
    ):
        """Initialize instance.

        Arguments:
            fields:
                a list of field definitions;
            recordLength:
                size of the records;
            headerLength:
                size of the header;
            recordCount:
                number of records stored in DBF;
            signature:
                version number (aka signature). using 0x03 as a default
                meaning "File without DBT". for more information about
                this field visit
                ``http://www.clicketyclick.dk/databases/xbase/format/dbf.html#DBF_NOTE_1_TARGET``
            lastUpdate:
                date of the DBF's update. this could be a string
                ('yymmdd' or 'yyyymmdd'), timestamp (int or float),
                datetime/date value, a sequence (assuming
                (yyyy, mm, dd, ...)) or an object having callable
                ``ticks`` field.
            ignoreErrors:
                error processing mode for DBF fields (boolean)

        """
        self.signature = signature
        if fields is None:
            self.fields = []
        else:
            self.fields = list(fields)
        self.lastUpdate = getDate(lastUpdate)
        self.recordLength = recordLength
        self.headerLength = headerLength
        self.recordCount = recordCount
        self.ignoreErrors = ignoreErrors
        # XXX: I'm not sure this is safe to
        # initialize `self.changed` in this way
        self.changed = bool(self.fields)

    # @classmethod
    def fromString(cls, string):
        """Return header instance from the string object."""
        return cls.fromStream(io.StringIO(str(string)))
    fromString = classmethod(fromString)

    # @classmethod
    def fromStream(cls, stream):
        """Return header object from the stream."""
        stream.seek(0)
        first_32 = stream.read(32)
        if type(first_32) != bytes:
            _data = bytes(first_32, sys.getfilesystemencoding())
        else:
            _data = first_32
        (_cnt, _hdrLen, _recLen) = struct.unpack("<I2H", _data[4:12])
        #reserved = _data[12:32]
        _year = _data[1]
        if _year < 80:
            # dbase II started at 1980.  it is quite unlikely
            # that the last update date is before that year.
            _year += 2000
        else:
            _year += 1900
        ## create header object
        _obj = cls(None, _hdrLen, _recLen, _cnt, _data[0],
            (_year, _data[2], _data[3]))
        ## append field definitions
        # position 0 is for the deletion flag
        _pos = 1
        _data = stream.read(1)
        while _data != b"\r":
            _data += stream.read(31)
            _fld = fields.lookupFor(_data[11]).fromString(_data, _pos)
            _obj._addField(_fld)
            _pos = _fld.end
            _data = stream.read(1)
        return _obj
    fromStream = classmethod(fromStream)

    ## properties

    year = property(lambda self: self.lastUpdate.year)
    month = property(lambda self: self.lastUpdate.month)
    day = property(lambda self: self.lastUpdate.day)

    def ignoreErrors(self, value):
        """Update `ignoreErrors` flag on self and all fields"""
        self._ignore_errors = value = bool(value)
        for _field in self.fields:
            _field.ignoreErrors = value
    ignoreErrors = property(
        lambda self: self._ignore_errors,
        ignoreErrors,
        doc="""Error processing mode for DBF field value conversion

        if set, failing field value conversion will return
        ``INVALID_VALUE`` instead of raising conversion error.

        """)

    ## internal methods

    def _addField(self, *defs):
        """Append field definitions without updating the header geometry.

        This is used while loading a header from an existing stream;
        `addField` must be used to build a new header.

        """
        for _def in defs:
            if isinstance(_def, fields.DbfFieldDef):
                _obj = _def
            else:
                (_name, _type) = _def[:2]
                if len(_def) > 2:
                    _length = _def[2]
                else:
                    _length = None
                if len(_def) > 3:
                    _decimalCount = _def[3]
                else:
                    _decimalCount = None
                _obj = fields.lookupFor(_type)(_name, _length,
                    _decimalCount, ignoreErrors=self._ignore_errors)
            self.fields.append(_obj)

    ## interface methods

    def addField(self, *defs):
        """Add field definition to the header.

        Examples:
            dbfh.addField(
                ("name", "C", 20),
                ("birthdate", "D"),
                ("member", "L"),
            )
            dbfh.addField(("price", "N", 5, 2))

        """
        self._addField(*defs)
        # recalculate the layout: the record starts with the
        # deletion flag; each field descriptor is 32 bytes, plus
        # 32 bytes of the header itself and the terminator byte.
        _pos = 1
        for _fld in self.fields:
            _fld.start = _pos
            _pos += _fld.length
            _fld.end = _pos
        self.recordLength = _pos
        self.headerLength = 32 + (32 * len(self.fields)) + 1
        self.changed = True

    def write(self, stream):
        """Encode and write header to the stream."""
        stream.seek(0)
        stream.write(self.toString())
        stream.write(b"".join([
            bytes(_fld.toString(), sys.getfilesystemencoding())
            for _fld in self.fields]))
        stream.write(b"\x0d")   # cr at end of all hdr data
        self.changed = False

    def toString(self):
        """Return a 32-byte string with the encoded header."""
        return struct.pack("<4BI2H",
            self.signature,
            self.lastUpdate.year - 1900,
            self.lastUpdate.month,
            self.lastUpdate.day,
            self.recordCount, self.headerLength, self.recordLength) + \
            b"\0" * 20

    def setCurrentDate(self):
        """Update ``self.lastUpdate`` field with current date value."""
        self.lastUpdate = datetime.date.today()

    def __getitem__(self, item):
        """Return a field definition by numeric index or name string"""
        if isinstance(item, str):
            _name = item.upper()
            for _field in self.fields:
                if _field.name == _name:
                    return _field
            raise KeyError(item)
        # item must be a field index
        return self.fields[item]

# vim: et sts=4 sw=4 :
tablib-0.13.0/tablib/packages/dbfpy3/record.py"""DBF record definition.
"""
"""History (most recent first):
20-dec-2005 [yc]    DbfRecord.write() -> DbfRecord._write();
    added delete() method.
16-dec-2005 [yc]    record definition moved from `dbf`.
"""

__version__ = "$Revision: 1.7 $"[11:-2]
__date__ = "$Date: 2007/02/11 09:05:49 $"[7:-2]

__all__ = ["DbfRecord"]

import sys

from . import utils


class DbfRecord(object):
    """DBF record.

    Instances of this class shouldn't be created manually,
    use `dbf.Dbf.newRecord` instead.

    Class implements mapping/sequence interface, so
    fields could be accessed via their names or indexes
    (names are a preferred way to access fields).

    Hint:
        Use `store` method to save modified record.

    Examples:
        Add a new record to the database:

            db = Dbf(filename)
            rec = db.newRecord()
            rec["FIELD1"] = value1
            rec["FIELD2"] = value2
            rec.store()

        Or the same, but modify an existing
        (second in this case) record:

            db = Dbf(filename)
            rec = db[2]
            rec["FIELD1"] = value1
            rec["FIELD2"] = value2
            rec.store()

    """

    __slots__ = "dbf", "index", "deleted", "fieldData"

    ## creation and initialization

    def __init__(self, dbf, index=None, deleted=False, data=None):
        """Instance initialization.

        Arguments:
            dbf:
                A `Dbf.Dbf` instance this record belongs to.
            index:
                An integer record index or None. If this value is
                None, the record will be appended to the DBF.
            deleted:
                Boolean flag indicating whether this record
                is a deleted record.
            data:
                A sequence or None. This is the data of the fields.
                If this argument is None, default values will be used.

        """
        self.dbf = dbf
        # XXX: I'm not sure ``index`` is necessary
        self.index = index
        self.deleted = deleted
        if data is None:
            self.fieldData = [_fd.defaultValue for _fd in dbf.header.fields]
        else:
            self.fieldData = list(data)

    # XXX: validate self.index before calculating position?
    position = property(lambda self: self.dbf.header.headerLength + \
        self.index * self.dbf.header.recordLength)

    def rawFromStream(cls, dbf, index):
        """Return raw record contents read from the stream.

        Arguments:
            dbf:
                A `Dbf.Dbf` instance containing the record.
            index:
                Index of the record in the records' container.
                This argument can't be None in this call.

        Return value is a string containing record data in DBF format.

        """
        # XXX: may be write smth assuming, that current stream
        # position is the required one? it could save some
        # time required to calculate where to seek in the file
        dbf.stream.seek(dbf.header.headerLength +
            index * dbf.header.recordLength)
        return dbf.stream.read(dbf.header.recordLength)
    rawFromStream = classmethod(rawFromStream)

    def fromStream(cls, dbf, index):
        """Return a record read from the stream.

        Arguments:
            dbf:
                A `Dbf.Dbf` instance new record should belong to.
            index:
                Index of the record in the records' container.
                This argument can't be None in this call.

        Return value is an instance of the current class.

        """
        return cls.fromString(dbf, cls.rawFromStream(dbf, index), index)
    fromStream = classmethod(fromStream)

    def fromString(cls, dbf, string, index=None):
        """Return record read from the string object.

        Arguments:
            dbf:
                A `Dbf.Dbf` instance new record should belong to.
            string:
                A string new record should be created from.
            index:
                Index of the record in the container. If this
                argument is None, record will be appended.

        Return value is an instance of the current class.

        """
        return cls(dbf, index, string[0:1] == b"*",
            [_fd.decodeFromRecord(string) for _fd in dbf.header.fields])
    fromString = classmethod(fromString)

    ## object representation

    def __repr__(self):
        _template = "%%%ds: %%s (%%s)" % max(
            [len(_fld) for _fld in self.dbf.fieldNames])
        _rv = []
        for _fld in self.dbf.fieldNames:
            _val = self[_fld]
            if _val is utils.INVALID_VALUE:
                _rv.append(_template %
                    (_fld, "None", "value cannot be decoded"))
            else:
                _rv.append(_template % (_fld, _val, type(_val)))
        return "\n".join(_rv)

    ## protected methods

    def _write(self):
        """Write data to the dbf stream.

        Note:
            This isn't a public method; it's better to use
            'store' instead publicly. By design, the ``_write``
            method should be called only from the `Dbf` instance.

        """
        self._validateIndex(False)
        self.dbf.stream.seek(self.position)
        self.dbf.stream.write(bytes(self.toString(),
            sys.getfilesystemencoding()))
        # FIXME: may be move this write somewhere else?
        # why we should check this condition for each record?
        if self.index == len(self.dbf):
            # this is the last record,
            # we should write SUB (ASCII 26)
            self.dbf.stream.write(b"\x1A")

    ## utility methods

    def _validateIndex(self, allowUndefined=True, checkRange=False):
        """Validate ``self.index`` value.

        If ``allowUndefined`` argument is True, the function does
        nothing when ``self.index`` points to a None object.
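
        Example (illustrative): a fresh record returned by
        ``dbf.newRecord()`` has ``index = None``, so
        ``rec._validateIndex()`` passes silently, while
        ``rec._validateIndex(allowUndefined=False)`` raises ValueError.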
""" if self.index is None: if not allowUndefined: raise ValueError("Index is undefined") elif self.index < 0: raise ValueError("Index can't be negative (%s)" % self.index) elif checkRange and self.index <= self.dbf.header.recordCount: raise ValueError("There are only %d records in the DBF" % self.dbf.header.recordCount) ## interface methods def store(self): """Store current record in the DBF. If ``self.index`` is None, this record will be appended to the records of the DBF this records belongs to; or replaced otherwise. """ self._validateIndex() if self.index is None: self.index = len(self.dbf) self.dbf.append(self) else: self.dbf[self.index] = self def delete(self): """Mark method as deleted.""" self.deleted = True def toString(self): """Return string packed record values.""" # for (_def, _dat) in zip(self.dbf.header.fields, self.fieldData): # return "".join([" *"[self.deleted]] + [ _def.encodeValue(_dat) for (_def, _dat) in zip(self.dbf.header.fields, self.fieldData) ]) def asList(self): """Return a flat list of fields. Note: Change of the list's values won't change real values stored in this object. """ return self.fieldData[:] def asDict(self): """Return a dictionary of fields. Note: Change of the dicts's values won't change real values stored in this object. """ return dict([_i for _i in zip(self.dbf.fieldNames, self.fieldData)]) def __getitem__(self, key): """Return value by field name or field index.""" if isinstance(key, int): # integer index of the field return self.fieldData[key] # assuming string field name return self.fieldData[self.dbf.indexOfFieldName(key)] def __setitem__(self, key, value): """Set field value by integer index of the field or string name.""" if isinstance(key, int): # integer index of the field return self.fieldData[key] # assuming string field name self.fieldData[self.dbf.indexOfFieldName(key)] = value # vim: et sts=4 sw=4 : tablib-0.13.0/tablib/packages/dbfpy3/utils.py0000644000175000017500000001142513440456503020315 0ustar josephjoseph"""String utilities. TODO: - allow strings in getDateTime routine; """ """History (most recent first): 11-feb-2007 [als] added INVALID_VALUE 10-feb-2007 [als] allow date strings padded with spaces instead of zeroes 20-dec-2005 [yc] handle long objects in getDate/getDateTime 16-dec-2005 [yc] created from ``strutil`` module. """ __version__ = "$Revision: 1.4 $"[11:-2] __date__ = "$Date: 2007/02/11 08:57:17 $"[7:-2] import datetime import time def unzfill(str): """Return a string without ASCII NULs. This function searchers for the first NUL (ASCII 0) occurance and truncates string till that position. """ try: return str[:str.index(b'\0')] except ValueError: return str def getDate(date=None): """Return `datetime.date` instance. Type of the ``date`` argument could be one of the following: None: use current date value; datetime.date: this value will be returned; datetime.datetime: the result of the date.date() will be returned; string: assuming "%Y%m%d" or "%y%m%dd" format; number: assuming it's a timestamp (returned for example by the time.time() call; sequence: assuming (year, month, day, ...) sequence; Additionaly, if ``date`` has callable ``ticks`` attribute, it will be used and result of the called would be treated as a timestamp value. 
""" if date is None: # use current value return datetime.date.today() if isinstance(date, datetime.date): return date if isinstance(date, datetime.datetime): return date.date() if isinstance(date, (int, float)): # date is a timestamp return datetime.date.fromtimestamp(date) if isinstance(date, str): date = date.replace(" ", "0") if len(date) == 6: # yymmdd return datetime.date(*time.strptime(date, "%y%m%d")[:3]) # yyyymmdd return datetime.date(*time.strptime(date, "%Y%m%d")[:3]) if hasattr(date, "__getitem__"): # a sequence (assuming date/time tuple) return datetime.date(*date[:3]) return datetime.date.fromtimestamp(date.ticks()) def getDateTime(value=None): """Return `datetime.datetime` instance. Type of the ``value`` argument could be one of the following: None: use current date value; datetime.date: result will be converted to the `datetime.datetime` instance using midnight; datetime.datetime: ``value`` will be returned as is; string: *** CURRENTLY NOT SUPPORTED ***; number: assuming it's a timestamp (returned for example by the time.time() call; sequence: assuming (year, month, day, ...) sequence; Additionaly, if ``value`` has callable ``ticks`` attribute, it will be used and result of the called would be treated as a timestamp value. """ if value is None: # use current value return datetime.datetime.today() if isinstance(value, datetime.datetime): return value if isinstance(value, datetime.date): return datetime.datetime.fromordinal(value.toordinal()) if isinstance(value, (int, float)): # value is a timestamp return datetime.datetime.fromtimestamp(value) if isinstance(value, str): raise NotImplementedError("Strings aren't currently implemented") if hasattr(value, "__getitem__"): # a sequence (assuming date/time tuple) return datetime.datetime(*tuple(value)[:6]) return datetime.datetime.fromtimestamp(value.ticks()) class classproperty(property): """Works in the same way as a ``property``, but for the classes.""" def __get__(self, obj, cls): return self.fget(cls) class _InvalidValue(object): """Value returned from DBF records when field validation fails The value is not equal to anything except for itself and equal to all empty values: None, 0, empty string etc. In other words, invalid value is equal to None and not equal to None at the same time. This value yields zero upon explicit conversion to a number type, empty string for string types, and False for boolean. """ def __eq__(self, other): return not other def __ne__(self, other): return not (other is self) def __bool__(self): return False def __int__(self): return 0 __long__ = __int__ def __float__(self): return 0.0 def __str__(self): return "" def __unicode__(self): return "" def __repr__(self): return "" # invalid value is a constant singleton INVALID_VALUE = _InvalidValue() # vim: set et sts=4 sw=4 : tablib-0.13.0/tablib/packages/dbfpy3/__init__.py0000644000175000017500000000000013440456503020677 0ustar josephjosephtablib-0.13.0/tablib/packages/dbfpy3/fields.py0000644000175000017500000003433013440456503020423 0ustar josephjoseph"""DBF fields definitions. 
TODO: - make memos work """ """History (most recent first): 26-may-2009 [als] DbfNumericFieldDef.decodeValue: strip zero bytes 05-feb-2009 [als] DbfDateFieldDef.encodeValue: empty arg produces empty date 16-sep-2008 [als] DbfNumericFieldDef decoding looks for decimal point in the value to select float or integer return type 13-mar-2008 [als] check field name length in constructor 11-feb-2007 [als] handle value conversion errors 10-feb-2007 [als] DbfFieldDef: added .rawFromRecord() 01-dec-2006 [als] Timestamp columns use None for empty values 31-oct-2006 [als] support field types 'F' (float), 'I' (integer) and 'Y' (currency); automate export and registration of field classes 04-jul-2006 [als] added export declaration 10-mar-2006 [als] decode empty values for Date and Logical fields; show field name in errors 10-mar-2006 [als] fix Numeric value decoding: according to spec, value always is string representation of the number; ensure that encoded Numeric value fits into the field 20-dec-2005 [yc] use field names in upper case 15-dec-2005 [yc] field definitions moved from `dbf`. """ __version__ = "$Revision: 1.14 $"[11:-2] __date__ = "$Date: 2009/05/26 05:16:51 $"[7:-2] __all__ = ["lookupFor",] # field classes added at the end of the module import datetime import struct import sys from . import utils ## abstract definitions class DbfFieldDef(object): """Abstract field definition. Child classes must override ``type`` class attribute to provide datatype infromation of the field definition. For more info about types visit `http://www.clicketyclick.dk/databases/xbase/format/data_types.html` Also child classes must override ``defaultValue`` field to provide default value for the field value. If child class has fixed length ``length`` class attribute must be overriden and set to the valid value. None value means, that field isn't of fixed length. Note: ``name`` field must not be changed after instantiation. """ __slots__ = ("name", "decimalCount", "start", "end", "ignoreErrors") # length of the field, None in case of variable-length field, # or a number if this field is a fixed-length field length = None # field type. for more information about fields types visit # `http://www.clicketyclick.dk/databases/xbase/format/data_types.html` # must be overriden in child classes typeCode = None # default value for the field. 
this field must be # overriden in child classes defaultValue = None def __init__(self, name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False, ): """Initialize instance.""" assert self.typeCode is not None, "Type code must be overriden" assert self.defaultValue is not None, "Default value must be overriden" ## fix arguments if len(name) >10: raise ValueError("Field name \"%s\" is too long" % name) name = str(name).upper() if self.__class__.length is None: if length is None: raise ValueError("[%s] Length isn't specified" % name) length = int(length) if length <= 0: raise ValueError("[%s] Length must be a positive integer" % name) else: length = self.length if decimalCount is None: decimalCount = 0 ## set fields self.name = name # FIXME: validate length according to the specification at # http://www.clicketyclick.dk/databases/xbase/format/data_types.html self.length = length self.decimalCount = decimalCount self.ignoreErrors = ignoreErrors self.start = start self.end = stop def __cmp__(self, other): return cmp(self.name, str(other).upper()) def __hash__(self): return hash(self.name) def fromString(cls, string, start, ignoreErrors=False): """Decode dbf field definition from the string data. Arguments: string: a string, dbf definition is decoded from. length of the string must be 32 bytes. start: position in the database file. ignoreErrors: initial error processing mode for the new field (boolean) """ assert len(string) == 32 _length = string[16] return cls(utils.unzfill(string)[:11].decode('utf-8'), _length, string[17], start, start + _length, ignoreErrors=ignoreErrors) fromString = classmethod(fromString) def toString(self): """Return encoded field definition. Return: Return value is a string object containing encoded definition of this field. """ if sys.version_info < (2, 4): # earlier versions did not support padding character _name = self.name[:11] + "\0" * (11 - len(self.name)) else: _name = self.name.ljust(11, '\0') return ( _name + self.typeCode + #data address chr(0) * 4 + chr(self.length) + chr(self.decimalCount) + chr(0) * 14 ) def __repr__(self): return "%-10s %1s %3d %3d" % self.fieldInfo() def fieldInfo(self): """Return field information. Return: Return value is a (name, type, length, decimals) tuple. """ return (self.name, self.typeCode, self.length, self.decimalCount) def rawFromRecord(self, record): """Return a "raw" field value from the record string.""" return record[self.start:self.end] def decodeFromRecord(self, record): """Return decoded field value from the record string.""" try: return self.decodeValue(self.rawFromRecord(record)) except: if self.ignoreErrors: return utils.INVALID_VALUE else: raise def decodeValue(self, value): """Return decoded value from string value. This method shouldn't be used publicly. It's called from the `decodeFromRecord` method. This is an abstract method and it must be overridden in child classes. """ raise NotImplementedError def encodeValue(self, value): """Return str object containing encoded field value. This is an abstract method and it must be overriden in child classes. """ raise NotImplementedError ## real classes class DbfCharacterFieldDef(DbfFieldDef): """Definition of the character field.""" typeCode = "C" defaultValue = b'' def decodeValue(self, value): """Return string object. Return value is a ``value`` argument with stripped right spaces. 
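
        Example (illustrative): ``decodeValue(b"John      ")``
        returns ``"John"``.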
""" return value.rstrip(b' ').decode('utf-8') def encodeValue(self, value): """Return raw data string encoded from a ``value``.""" return str(value)[:self.length].ljust(self.length) class DbfNumericFieldDef(DbfFieldDef): """Definition of the numeric field.""" typeCode = "N" # XXX: now I'm not sure it was a good idea to make a class field # `defaultValue` instead of a generic method as it was implemented # previously -- it's ok with all types except number, cuz # if self.decimalCount is 0, we should return 0 and 0.0 otherwise. defaultValue = 0 def decodeValue(self, value): """Return a number decoded from ``value``. If decimals is zero, value will be decoded as an integer; or as a float otherwise. Return: Return value is a int (long) or float instance. """ value = value.strip(b' \0') if b'.' in value: # a float (has decimal separator) return float(value) elif value: # must be an integer return int(value) else: return 0 def encodeValue(self, value): """Return string containing encoded ``value``.""" _rv = ("%*.*f" % (self.length, self.decimalCount, value)) if len(_rv) > self.length: _ppos = _rv.find(".") if 0 <= _ppos <= self.length: _rv = _rv[:self.length] else: raise ValueError("[%s] Numeric overflow: %s (field width: %i)" % (self.name, _rv, self.length)) return _rv class DbfFloatFieldDef(DbfNumericFieldDef): """Definition of the float field - same as numeric.""" typeCode = "F" class DbfIntegerFieldDef(DbfFieldDef): """Definition of the integer field.""" typeCode = "I" length = 4 defaultValue = 0 def decodeValue(self, value): """Return an integer number decoded from ``value``.""" return struct.unpack("= 1: _rv = datetime.datetime.fromordinal(_jdn - self.JDN_GDN_DIFF) _rv += datetime.timedelta(0, _msecs / 1000.0) else: # empty date _rv = None return _rv def encodeValue(self, value): """Return a string-encoded ``value``.""" if value: value = utils.getDateTime(value) # LE byteorder _rv = struct.pack("<2I", value.toordinal() + self.JDN_GDN_DIFF, (value.hour * 3600 + value.minute * 60 + value.second) * 1000) else: _rv = "\0" * self.length assert len(_rv) == self.length return _rv _fieldsRegistry = {} def registerField(fieldCls): """Register field definition class. ``fieldCls`` should be subclass of the `DbfFieldDef`. Use `lookupFor` to retrieve field definition class by the type code. """ assert fieldCls.typeCode is not None, "Type code isn't defined" # XXX: use fieldCls.typeCode.upper()? in case of any decign # don't forget to look to the same comment in ``lookupFor`` method _fieldsRegistry[fieldCls.typeCode] = fieldCls def lookupFor(typeCode): """Return field definition class for the given type code. ``typeCode`` must be a single character. That type should be previously registered. Use `registerField` to register new field class. Return: Return value is a subclass of the `DbfFieldDef`. """ # XXX: use typeCode.upper()? in case of any decign don't # forget to look to the same comment in ``registerField`` return _fieldsRegistry[chr(typeCode)] ## register generic types for (_name, _val) in list(globals().items()): if isinstance(_val, type) and issubclass(_val, DbfFieldDef) \ and (_name != "DbfFieldDef"): __all__.append(_name) registerField(_val) del _name, _val # vim: et sts=4 sw=4 : tablib-0.13.0/tablib/packages/dbfpy3/dbfnew.py0000644000175000017500000001222013440456503020414 0ustar josephjoseph#!/usr/bin/python """.DBF creation helpers. Note: this is a legacy interface. New code should use Dbf class for table creation (see examples in dbf.py) TODO: - handle Memo fields. 
tablib-0.13.0/tablib/packages/dbfpy3/dbfnew.py0000644000175000017500000001222013440456503020414 0ustar josephjoseph#!/usr/bin/python
""".DBF creation helpers.

Note: this is a legacy interface. New code should use the Dbf class
    for table creation (see examples in dbf.py)

TODO:
  - handle Memo fields.
  - check length of the fields according to the
    `http://www.clicketyclick.dk/databases/xbase/format/data_types.html`

"""
"""History (most recent first)
04-jul-2006 [als]   added export declaration;
                    updated for dbfpy 2.0
15-dec-2005 [yc]    define dbf_new.__slots__
14-dec-2005 [yc]    added vim modeline; retab'd; added doc-strings;
                    dbf_new now is a new class (inherited from object)
??-jun-2000 [--]    added by Hans Fiby
"""

__version__ = "$Revision: 1.4 $"[11:-2]
__date__ = "$Date: 2006/07/04 08:18:18 $"[7:-2]

__all__ = ["dbf_new"]

from .dbf import *
from .fields import *
from .header import *
from .record import *


class _FieldDefinition(object):
    """Field definition.

    This is a simple structure, which contains ``name``, ``type``,
    ``len``, ``dec`` and ``cls`` fields.

    Objects also implement get/setitem magic functions, so fields
    could be accessed via sequence interface, where 'name' has index 0,
    'type' index 1, 'len' index 2, 'dec' index 3 and 'cls' could be
    located at index 4.

    """

    __slots__ = "name", "type", "len", "dec", "cls"

    # WARNING: be attentive - dictionaries are mutable!
    FLD_TYPES = {
        # type: (cls, len)
        "C": (DbfCharacterFieldDef, None),
        "N": (DbfNumericFieldDef, None),
        "L": (DbfLogicalFieldDef, 1),
        # FIXME: support memos
        # "M": (DbfMemoFieldDef),
        "D": (DbfDateFieldDef, 8),
        # FIXME: I'm not sure length should be 14 characters!
        # but temporary I use it, cuz date is 8 characters
        # and time 6 (hhmmss)
        "T": (DbfDateTimeFieldDef, 14),
    }

    def __init__(self, name, type, len=None, dec=0):
        _cls, _len = self.FLD_TYPES[type]
        if _len is None:
            if len is None:
                raise ValueError("Field length must be defined")
            _len = len
        self.name = name
        self.type = type
        self.len = _len
        self.dec = dec
        self.cls = _cls

    def getDbfField(self):
        "Return `DbfFieldDef` instance from the current definition."
        return self.cls(self.name, self.len, self.dec)

    def appendToHeader(self, dbfh):
        """Create a `DbfFieldDef` instance and append it to the dbf header.

        Arguments:
            dbfh: `DbfHeader` instance.

        """
        _dbff = self.getDbfField()
        dbfh.addField(_dbff)
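# Illustrative sketch (not part of the original module): how a definition
# maps to a concrete `DbfFieldDef`. A numeric definition of width 10 with
# 2 decimals yields a DbfNumericFieldDef configured the same way; the repr
# below follows DbfFieldDef.__repr__'s "%-10s %1s %3d %3d" format.
#
#     >>> _fd = _FieldDefinition("PRICE", "N", 10, 2)
#     >>> _fd.getDbfField()
#     PRICE      N  10   2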
""" self.fields.append(self.FieldDefinitionClass(name, typ, len, dec)) def write(self, filename): """Create empty .DBF file using current structure.""" _dbfh = DbfHeader() _dbfh.setCurrentDate() for _fldDef in self.fields: _fldDef.appendToHeader(_dbfh) _dbfStream = open(filename, "wb") _dbfh.write(_dbfStream) _dbfStream.close() if __name__ == '__main__': # create a new DBF-File dbfn = dbf_new() dbfn.add_field("name", 'C', 80) dbfn.add_field("price", 'N', 10, 2) dbfn.add_field("date", 'D', 8) dbfn.write("tst.dbf") # test new dbf print("*** created tst.dbf: ***") dbft = Dbf('tst.dbf', readOnly=0) print(repr(dbft)) # add a record rec = DbfRecord(dbft) rec['name'] = 'something' rec['price'] = 10.5 rec['date'] = (2000, 1, 12) rec.store() # add another record rec = DbfRecord(dbft) rec['name'] = 'foo and bar' rec['price'] = 12234 rec['date'] = (1992, 7, 15) rec.store() # show the records print("*** inserted 2 records into tst.dbf: ***") print(repr(dbft)) for i1 in range(len(dbft)): rec = dbft[i1] for fldName in dbft.fieldNames: print('%s:\t %s' % (fldName, rec[fldName])) print() dbft.close() # vim: set et sts=4 sw=4 : tablib-0.13.0/tablib/packages/markup3.py0000644000175000017500000004511513440456503017353 0ustar josephjoseph# This code is in the public domain, it comes # with absolutely no warranty and you can do # absolutely whatever you want with it. __date__ = '17 May 2007' __version__ = '1.7' __doc__= """ This is markup.py - a Python module that attempts to make it easier to generate HTML/XML from a Python program in an intuitive, lightweight, customizable and pythonic way. The code is in the public domain. Version: %s as of %s. Documentation and further info is at http://markup.sourceforge.net/ Please send bug reports, feature requests, enhancement ideas or questions to nogradi at gmail dot com. Installation: drop markup.py somewhere into your Python path. """ % ( __version__, __date__ ) import string class element: """This class handles the addition of a new element.""" def __init__( self, tag, case='lower', parent=None ): self.parent = parent if case == 'lower': self.tag = tag.lower( ) else: self.tag = tag.upper( ) def __call__( self, *args, **kwargs ): if len( args ) > 1: raise ArgumentError( self.tag ) # if class_ was defined in parent it should be added to every element if self.parent is not None and self.parent.class_ is not None: if 'class_' not in kwargs: kwargs['class_'] = self.parent.class_ if self.parent is None and len( args ) == 1: x = [ self.render( self.tag, False, myarg, mydict ) for myarg, mydict in _argsdicts( args, kwargs ) ] return '\n'.join( x ) elif self.parent is None and len( args ) == 0: x = [ self.render( self.tag, True, myarg, mydict ) for myarg, mydict in _argsdicts( args, kwargs ) ] return '\n'.join( x ) if self.tag in self.parent.twotags: for myarg, mydict in _argsdicts( args, kwargs ): self.render( self.tag, False, myarg, mydict ) elif self.tag in self.parent.onetags: if len( args ) == 0: for myarg, mydict in _argsdicts( args, kwargs ): self.render( self.tag, True, myarg, mydict ) # here myarg is always None, because len( args ) = 0 else: raise ClosingError( self.tag ) elif self.parent.mode == 'strict_html' and self.tag in self.parent.deptags: raise DeprecationError( self.tag ) else: raise InvalidElementError( self.tag, self.parent.mode ) def render( self, tag, single, between, kwargs ): """Append the actual tags to content.""" out = "<%s" % tag for key, value in kwargs.items( ): if value is not None: # when value is None that means stuff like <... 
tablib-0.13.0/tablib/packages/markup3.py0000644000175000017500000004511513440456503017353 0ustar josephjoseph# This code is in the public domain, it comes
# with absolutely no warranty and you can do
# absolutely whatever you want with it.

__date__ = '17 May 2007'
__version__ = '1.7'
__doc__= """
This is markup.py - a Python module that attempts to
make it easier to generate HTML/XML from a Python program
in an intuitive, lightweight, customizable and pythonic way.

The code is in the public domain.

Version: %s as of %s.

Documentation and further info is at http://markup.sourceforge.net/

Please send bug reports, feature requests, enhancement
ideas or questions to nogradi at gmail dot com.

Installation: drop markup.py somewhere into your Python path.
""" % ( __version__, __date__ )

import string

class element:
    """This class handles the addition of a new element."""

    def __init__( self, tag, case='lower', parent=None ):

        self.parent = parent

        if case == 'lower':
            self.tag = tag.lower( )
        else:
            self.tag = tag.upper( )

    def __call__( self, *args, **kwargs ):
        if len( args ) > 1:
            raise ArgumentError( self.tag )

        # if class_ was defined in parent it should be added to every element
        if self.parent is not None and self.parent.class_ is not None:
            if 'class_' not in kwargs:
                kwargs['class_'] = self.parent.class_

        if self.parent is None and len( args ) == 1:
            x = [ self.render( self.tag, False, myarg, mydict ) for myarg, mydict in _argsdicts( args, kwargs ) ]
            return '\n'.join( x )
        elif self.parent is None and len( args ) == 0:
            x = [ self.render( self.tag, True, myarg, mydict ) for myarg, mydict in _argsdicts( args, kwargs ) ]
            return '\n'.join( x )

        if self.tag in self.parent.twotags:
            for myarg, mydict in _argsdicts( args, kwargs ):
                self.render( self.tag, False, myarg, mydict )
        elif self.tag in self.parent.onetags:
            if len( args ) == 0:
                for myarg, mydict in _argsdicts( args, kwargs ):
                    self.render( self.tag, True, myarg, mydict )    # here myarg is always None, because len( args ) = 0
            else:
                raise ClosingError( self.tag )
        elif self.parent.mode == 'strict_html' and self.tag in self.parent.deptags:
            raise DeprecationError( self.tag )
        else:
            raise InvalidElementError( self.tag, self.parent.mode )

    def render( self, tag, single, between, kwargs ):
        """Append the actual tags to content."""

        out = "<%s" % tag
        for key, value in kwargs.items( ):
            if value is not None:           # when value is None that means stuff like <... checked>
                key = key.strip('_')        # strip this so class_ will mean class, etc.
                if key == 'http_equiv':     # special cases, maybe change _ to - overall?
                    key = 'http-equiv'
                elif key == 'accept_charset':
                    key = 'accept-charset'
                out = "%s %s=\"%s\"" % ( out, key, escape( value ) )
            else:
                out = "%s %s" % ( out, key )
        if between is not None:
            out = "%s>%s</%s>" % ( out, between, tag )
        else:
            if single:
                out = "%s />" % out
            else:
                out = "%s>" % out
        if self.parent is not None:
            self.parent.content.append( out )
        else:
            return out

    def close( self ):
        """Append a closing tag unless element has only opening tag."""

        if self.tag in self.parent.twotags:
            self.parent.content.append( "</%s>" % self.tag )
        elif self.tag in self.parent.onetags:
            raise ClosingError( self.tag )
        elif self.parent.mode == 'strict_html' and self.tag in self.parent.deptags:
            raise DeprecationError( self.tag )

    def open( self, **kwargs ):
        """Append an opening tag."""

        if self.tag in self.parent.twotags or self.tag in self.parent.onetags:
            self.render( self.tag, False, None, kwargs )
        elif self.parent.mode == 'strict_html' and self.tag in self.parent.deptags:
            # note: upstream referenced self.mode here, but element has no
            # .mode attribute -- the mode lives on the parent page
            raise DeprecationError( self.tag )
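# Quick sketch of how element rendering behaves when there is no parent
# page (illustrative; not part of the original module). With parent=None,
# __call__ returns the markup directly instead of appending to a page:
#
#     >>> element('h1')('Hello')
#     '<h1>Hello</h1>'
#     >>> element('br')()
#     '<br />'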
class page:
    """This is our main class representing a document. Elements are added
    as attributes of an instance of this class."""

    def __init__( self, mode='strict_html', case='lower', onetags=None, twotags=None, separator='\n', class_=None ):
        """Stuff that affects the whole document.

        mode -- 'strict_html'   for HTML 4.01 (default)
                'html'          alias for 'strict_html'
                'loose_html'    to allow some deprecated elements
                'xml'           to allow arbitrary elements

        case -- 'lower'         element names will be printed in lower case (default)
                'upper'         they will be printed in upper case

        onetags --              list or tuple of valid elements with opening tags only
        twotags --              list or tuple of valid elements with both opening and closing tags
                                these two keyword arguments may be used to select
                                the set of valid elements in 'xml' mode;
                                invalid elements will raise appropriate exceptions

        separator --            string to place between added elements, defaults to newline

        class_ --               a class that will be added to every element if defined"""

        valid_onetags = [ "AREA", "BASE", "BR", "COL", "FRAME", "HR", "IMG", "INPUT", "LINK", "META", "PARAM" ]
        valid_twotags = [ "A", "ABBR", "ACRONYM", "ADDRESS", "B", "BDO", "BIG", "BLOCKQUOTE", "BODY", "BUTTON",
                "CAPTION", "CITE", "CODE", "COLGROUP", "DD", "DEL", "DFN", "DIV", "DL", "DT", "EM", "FIELDSET",
                "FORM", "FRAMESET", "H1", "H2", "H3", "H4", "H5", "H6", "HEAD", "HTML", "I", "IFRAME", "INS",
                "KBD", "LABEL", "LEGEND", "LI", "MAP", "NOFRAMES", "NOSCRIPT", "OBJECT", "OL", "OPTGROUP",
                "OPTION", "P", "PRE", "Q", "SAMP", "SCRIPT", "SELECT", "SMALL", "SPAN", "STRONG", "STYLE",
                "SUB", "SUP", "TABLE", "TBODY", "TD", "TEXTAREA", "TFOOT", "TH", "THEAD", "TITLE", "TR", "TT", "UL", "VAR" ]
        deprecated_onetags = [ "BASEFONT", "ISINDEX" ]
        deprecated_twotags = [ "APPLET", "CENTER", "DIR", "FONT", "MENU", "S", "STRIKE", "U" ]

        self.header = [ ]
        self.content = [ ]
        self.footer = [ ]
        self.case = case
        self.separator = separator

        # init( ) sets it to True so we know that </body> and </html>
        # have to be printed at the end
        self._full = False
        self.class_= class_

        if mode == 'strict_html' or mode == 'html':
            self.onetags = valid_onetags
            self.onetags += list(map( str.lower, self.onetags ))
            self.twotags = valid_twotags
            self.twotags += list(map( str.lower, self.twotags ))
            self.deptags = deprecated_onetags + deprecated_twotags
            self.deptags += list(map( str.lower, self.deptags ))
            self.mode = 'strict_html'
        elif mode == 'loose_html':
            self.onetags = valid_onetags + deprecated_onetags
            self.onetags += list(map( str.lower, self.onetags ))
            self.twotags = valid_twotags + deprecated_twotags
            self.twotags += list(map( str.lower, self.twotags ))
            self.mode = mode
        elif mode == 'xml':
            if onetags and twotags:
                self.onetags = onetags
                self.twotags = twotags
            elif ( onetags and not twotags ) or ( twotags and not onetags ):
                raise CustomizationError( )
            else:
                self.onetags = russell( )
                self.twotags = russell( )
            self.mode = mode
        else:
            raise ModeError( mode )

    def __getattr__( self, attr ):

        if attr.startswith("__") and attr.endswith("__"):
            raise AttributeError(attr)

        return element( attr, case=self.case, parent=self )

    def __str__( self ):

        if self._full and ( self.mode == 'strict_html' or self.mode == 'loose_html' ):
            end = [ '</body>', '</html>' ]
        else:
            end = [ ]

        return self.separator.join( self.header + self.content + self.footer + end )

    def __call__( self, escape=False ):
        """Return the document as a string.

        escape --   False   print normally
                    True    replace < and > by &lt; and &gt;
                            the default escape sequences in most browsers"""

        if escape:
            return _escape( self.__str__( ) )
        else:
            return self.__str__( )

    def add( self, text ):
        """This is an alias to addcontent."""
        self.addcontent( text )

    def addfooter( self, text ):
        """Add some text to the bottom of the document"""
        self.footer.append( text )

    def addheader( self, text ):
        """Add some text to the top of the document"""
        self.header.append( text )

    def addcontent( self, text ):
        """Add some text to the main part of the document"""
        self.content.append( text )
    def init( self, lang='en', css=None, metainfo=None, title=None, header=None,
              footer=None, charset=None, encoding=None, doctype=None, bodyattrs=None, script=None ):
        """This method is used for complete documents with appropriate
        doctype, encoding, title, etc information. For an HTML/XML snippet
        omit this method.

        lang --     language, usually a two character string, will appear
                    as <html lang='en'> in html mode (ignored in xml mode)

        css --      Cascading Style Sheet filename as a string or a list of
                    strings for multiple css files (ignored in xml mode)

        metainfo -- a dictionary in the form { 'name':'content' } to be inserted
                    into meta element(s) as <meta name='name' content='content'>
                    (ignored in xml mode)

        bodyattrs --a dictionary in the form { 'key':'value', ... } which will be added
                    as attributes of the <body> element as <body key='value' ... >
                    (ignored in xml mode)

        script --   dictionary containing src:type pairs,
                    <script type='text/type' src=src></script>

        title --    the title of the document as a string to be inserted into
                    a title element as <title>my title</title> (ignored in xml mode)

        header --   some text to be inserted right after the <body> element
                    (ignored in xml mode)

        footer --   some text to be inserted right before the </body> element
                    (ignored in xml mode)

        charset --  a string defining the character set, will be inserted into a
                    <meta http-equiv='Content-Type' content='text/html; charset=...'>
                    element (ignored in xml mode)

        encoding -- a string defining the encoding, will be put into to first line of
                    the document as <?xml version='1.0' encoding='...' ?> in
                    xml mode (ignored in html mode)

        doctype --  the document type string, defaults to
                    <!DOCTYPE HTML PUBLIC '-//W3C//DTD HTML 4.01 Transitional//EN'>
                    in html mode (ignored in xml mode)"""

        self._full = True

        if self.mode == 'strict_html' or self.mode == 'loose_html':
            if doctype is None:
                doctype = "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">"
            self.header.append( doctype )
            self.html( lang=lang )
            self.head( )
            if charset is not None:
                self.meta( http_equiv='Content-Type', content="text/html; charset=%s" % charset )
            if metainfo is not None:
                self.metainfo( metainfo )
            if css is not None:
                self.css( css )
            if title is not None:
                self.title( title )
            if script is not None:
                self.scripts( script )
            self.head.close()
            if bodyattrs is not None:
                self.body( **bodyattrs )
            else:
                self.body( )
            if header is not None:
                self.content.append( header )
            if footer is not None:
                self.footer.append( footer )

        elif self.mode == 'xml':
            if doctype is None:
                if encoding is not None:
                    doctype = "<?xml version=\"1.0\" encoding=\"%s\" ?>" % encoding
                else:
                    doctype = "<?xml version=\"1.0\" ?>"
            self.header.append( doctype )

    def css( self, filelist ):
        """This convenience function is only useful for html.
        It adds css stylesheet(s) to the document via the <link> element."""

        if isinstance( filelist, str ):
            self.link( href=filelist, rel='stylesheet', type='text/css', media='all' )
        else:
            for file in filelist:
                self.link( href=file, rel='stylesheet', type='text/css', media='all' )

    def metainfo( self, mydict ):
        """This convenience function is only useful for html.
        It adds meta information via the <meta> element, the argument is
        a dictionary of the form { 'name':'content' }."""

        if isinstance( mydict, dict ):
            for name, content in mydict.items( ):
                self.meta( name=name, content=content )
        else:
            raise TypeError("Metainfo should be called with a dictionary argument of name:content pairs.")

    def scripts( self, mydict ):
        """Only useful in html, mydict is dictionary of src:type pairs that will
        be rendered as <script type='text/type' src=src></script>"""

        if isinstance( mydict, dict ):
            for src, type in mydict.items( ):
                self.script( '', src=src, type='text/%s' % type )
        else:
            raise TypeError("Script should be given a dictionary of src:type pairs.")
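# End-to-end sketch of the page workflow (illustrative; not part of the
# original module). Attribute access creates elements on the fly via
# __getattr__, so any valid tag name works as a method:
#
#     pg = page( )
#     pg.init( title="demo", charset="utf-8" )
#     pg.p( "hello, world" )            # appends <p>hello, world</p>
#     print( pg )                       # full document incl. </body></html>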
class _oneliner:
    """An instance of oneliner returns a string corresponding to one element.
    This class can be used to write 'oneliners' that return a string
    immediately so there is no need to instantiate the page class."""

    def __init__( self, case='lower' ):
        self.case = case

    def __getattr__( self, attr ):

        if attr.startswith("__") and attr.endswith("__"):
            raise AttributeError(attr)

        return element( attr, case=self.case, parent=None )

oneliner = _oneliner( case='lower' )
upper_oneliner = _oneliner( case='upper' )

def _argsdicts( args, mydict ):
    """A utility generator that pads argument list and dictionary values,
    will only be called with len( args ) = 0, 1."""

    if len( args ) == 0:
        args = None,
    elif len( args ) == 1:
        args = _totuple( args[0] )
    else:
        raise Exception("We should have never gotten here.")

    mykeys = list(mydict.keys( ))
    myvalues = list(map( _totuple, list(mydict.values( )) ))

    maxlength = max( list(map( len, [ args ] + myvalues )) )

    for i in range( maxlength ):
        thisdict = { }
        for key, value in zip( mykeys, myvalues ):
            try:
                thisdict[ key ] = value[i]
            except IndexError:
                thisdict[ key ] = value[-1]
        try:
            thisarg = args[i]
        except IndexError:
            thisarg = args[-1]

        yield thisarg, thisdict

def _totuple( x ):
    """Utility stuff to convert string, int, float, None or anything to a usable tuple."""

    if isinstance( x, str ):
        out = x,
    elif isinstance( x, ( int, float ) ):
        out = str( x ),
    elif x is None:
        out = None,
    else:
        out = tuple( x )

    return out

def escape( text, newline=False ):
    """Escape special html characters."""

    if isinstance( text, str ):
        if '&' in text:
            text = text.replace( '&', '&amp;' )
        if '>' in text:
            text = text.replace( '>', '&gt;' )
        if '<' in text:
            text = text.replace( '<', '&lt;' )
        if '\"' in text:
            text = text.replace( '\"', '&quot;' )
        if '\'' in text:
            text = text.replace( '\'', '&quot;' )
        if newline:
            if '\n' in text:
                text = text.replace( '\n', '<br>' )

    return text

_escape = escape

def unescape( text ):
    """Inverse of escape."""

    if isinstance( text, str ):
        if '&amp;' in text:
            text = text.replace( '&amp;', '&' )
        if '&gt;' in text:
            text = text.replace( '&gt;', '>' )
        if '&lt;' in text:
            text = text.replace( '&lt;', '<' )
        if '&quot;' in text:
            text = text.replace( '&quot;', '\"' )

    return text
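# Round-trip sketch (illustrative; not part of the original module):
#
#     >>> escape( '"less than" is <' )
#     '&quot;less than&quot; is &lt;'
#     >>> unescape( escape( '"less than" is <' ) )
#     '"less than" is <'
#
# note: unescape is only an approximate inverse -- escape maps both ' and "
# to &quot;, so single quotes come back as double quotes.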
class dummy:
    """A dummy class for attaching attributes."""
    pass

doctype = dummy( )
doctype.frameset = "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Frameset//EN\" \"http://www.w3.org/TR/html4/frameset.dtd\">"
doctype.strict = "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\" \"http://www.w3.org/TR/html4/strict.dtd\">"
doctype.loose = "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\" \"http://www.w3.org/TR/html4/loose.dtd\">"

class russell:
    """A dummy class that contains anything."""

    def __contains__( self, item ):
        return True


class MarkupError( Exception ):
    """All our exceptions subclass this."""
    def __str__( self ):
        return self.message

class ClosingError( MarkupError ):
    def __init__( self, tag ):
        self.message = "The element '%s' does not accept non-keyword arguments (has no closing tag)." % tag

class OpeningError( MarkupError ):
    def __init__( self, tag ):
        self.message = "The element '%s' can not be opened." % tag

class ArgumentError( MarkupError ):
    def __init__( self, tag ):
        self.message = "The element '%s' was called with more than one non-keyword argument." % tag

class InvalidElementError( MarkupError ):
    def __init__( self, tag, mode ):
        self.message = "The element '%s' is not valid for your mode '%s'." % ( tag, mode )

class DeprecationError( MarkupError ):
    def __init__( self, tag ):
        self.message = "The element '%s' is deprecated, instantiate markup.page with mode='loose_html' to allow it." % tag

class ModeError( MarkupError ):
    def __init__( self, mode ):
        self.message = "Mode '%s' is invalid, possible values: strict_html, loose_html, xml." % mode

class CustomizationError( MarkupError ):
    def __init__( self ):
        self.message = "If you customize the allowed elements, you must define both types 'onetags' and 'twotags'."

if __name__ == '__main__':
    print(__doc__)
tablib-0.13.0/tablib/packages/__init__.py0000644000175000017500000000000013440456503017510 0ustar josephjosephtablib-0.13.0/tablib/packages/statistics.py0000644000175000017500000000103413440456503020153 0ustar josephjosephfrom __future__ import division


def median(data):
    """
    Return the median (middle value) of numeric data, using the common
    "mean of middle two" method. If data is empty, ValueError is raised.
    Mimics the behaviour of Python3's statistics.median

    >>> median([1, 3, 5])
    3
    >>> median([1, 3, 5, 7])
    4.0
    """
    data = sorted(data)
    n = len(data)
    if not n:
        raise ValueError("No median for empty data")
    i = n // 2
    if n % 2:
        return data[i]
    return (data[i - 1] + data[i]) / 2
tablib-0.13.0/Makefile0000644000175000017500000000021613440456503014035 0ustar josephjosephtest:
	python test_tablib.py

publish:
	python setup.py register
	python setup.py sdist upload
	python setup.py bdist_wheel --universal upload
tablib-0.13.0/NOTICE0000644000175000017500000000017213440456503013302 0ustar josephjosephTablib includes some vendorized Python libraries: markup.

Markup License
==============

Markup is in the public domain.
tablib-0.13.0/requirements.txt0000644000175000017500000000052613440456503015665 0ustar josephjosephbackports.csv==1.0.6
certifi==2017.7.27.1
chardet==3.0.4
et-xmlfile==1.0.1
idna==2.6
jdcal==1.3
numpy==1.13.1
odfpy==1.3.5
openpyxl==2.4.8
pandas==0.20.3
pkginfo==1.4.1
python-dateutil==2.6.1
pytz==2017.2
PyYAML==3.12
requests==2.18.4
requests-toolbelt==0.8.0
six==1.10.0
tqdm==4.15.0
unicodecsv==0.14.1
urllib3==1.22
xlrd==1.1.0
xlwt==1.3.0