pax_global_header00006660000000000000000000000064140521466340014517gustar00rootroot0000000000000052 comment=8104cb27c0b222bd802b69df58204ab389fc714c vdf-3.4/000077500000000000000000000000001405214663400121445ustar00rootroot00000000000000vdf-3.4/.coveragerc000066400000000000000000000003541405214663400142670ustar00rootroot00000000000000 # Docs: https://coverage.readthedocs.org/en/latest/config.html [run] branch = False # If True, stores relative file paths in data file (needed for Github Actions). # Using this parameter requires coverage>=5.0 relative_files = True vdf-3.4/.github/000077500000000000000000000000001405214663400135045ustar00rootroot00000000000000vdf-3.4/.github/workflows/000077500000000000000000000000001405214663400155415ustar00rootroot00000000000000vdf-3.4/.github/workflows/testing.yml000066400000000000000000000037431405214663400177500ustar00rootroot00000000000000name: Tests on: push: branches: [ master ] paths-ignore: - '.gitignore' - '*.md' - '*.rst' - 'LICENSE' - 'requirements.txt' - 'vdf2json/**' pull_request: branches: [ master ] paths-ignore: - '.gitignore' - '*.md' - '*.rst' - 'LICENSE' - 'requirements.txt' - 'vdf2json/**' jobs: test: runs-on: ${{ matrix.os }} strategy: matrix: os: [ubuntu-latest, macos-latest, windows-latest] python-version: [2.7, 3.5, 3.6, 3.7, 3.8, 3.9] no-coverage: [0] include: - os: ubuntu-latest python-version: pypy-2.7 no-coverage: 1 - os: ubuntu-latest python-version: pypy-3.6 no-coverage: 1 steps: - uses: actions/checkout@v2 - name: Set up Python Env uses: actions/setup-python@v2 with: python-version: ${{ matrix.python-version }} - name: Display Python version run: python -c "import sys; print(sys.version)" - name: Install dependencies run: | make init - name: Run Tests env: NOCOV: ${{ matrix.no-coverage }} run: | make test - name: Upload to Coveralls # pypy + concurrenct=gevent not supported in coveragepy. 
See https://github.com/nedbat/coveragepy/issues/560 if: matrix.no-coverage == 0 env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} COVERALLS_PARALLEL: true COVERALLS_FLAG_NAME: "${{ matrix.os }}_${{ matrix.python-version }}" run: | coveralls --service=github coveralls: name: Finish Coveralls needs: test runs-on: ubuntu-latest container: python:3-slim steps: - name: Install coveralls run: | pip3 install --upgrade coveralls - name: Send coverage finish to coveralls.io env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} run: | coveralls --finish vdf-3.4/.gitignore000066400000000000000000000000461405214663400141340ustar00rootroot00000000000000dist *.egg-info *.pyc .coverage *.swp vdf-3.4/LICENSE000066400000000000000000000020631405214663400131520ustar00rootroot00000000000000Copyright (c) 2015 Rossen Georgiev Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
vdf-3.4/Makefile000066400000000000000000000014531405214663400136070ustar00rootroot00000000000000# Makefile for VDF module

define HELPBODY
Available commands:

    make help       - this thing.

    make init       - install python dependencies
    make test       - run tests and coverage
    make pylint     - code analysis
    make build      - pylint + test

endef

export HELPBODY
help:
	@echo "$$HELPBODY"

init:
	pip install -r dev_requirements.txt

COVOPTS = --cov-config .coveragerc --cov=vdf

ifeq ($(NOCOV), 1)
	COVOPTS =
endif

test:
	rm -f .coverage vdf/*.pyc tests/*.pyc
	PYTHONHASHSEED=0 pytest --tb=short $(COVOPTS) tests

pylint:
	pylint -r n -f colorized vdf || true

build: pylint test

clean:
	rm -rf dist vdf.egg-info vdf/*.pyc

dist: clean
	python setup.py sdist
	python setup.py bdist_wheel --universal

register:
	python setup.py register -r pypi

upload: dist register
	twine upload -r pypi dist/*
vdf-3.4/README.rst000066400000000000000000000115171405214663400136400ustar00rootroot00000000000000| |pypi| |license| |coverage| |master_build|
| |sonar_maintainability| |sonar_reliability| |sonar_security|

Pure python module for (de)serialization to and from VDF that works just like ``json``.

Tested and works on ``py2.7``, ``py3.3+``, ``pypy`` and ``pypy3``.

VDF is Valve's KeyValue text file format

https://developer.valvesoftware.com/wiki/KeyValues

| Supported versions: ``kv1``
| Unsupported: ``kv2`` and ``kv3``

Install
-------

You can grab the latest release from https://pypi.org/project/vdf/ or via ``pip``

.. code:: bash

    pip install vdf

Install the current dev version from ``github``

.. code:: bash

    pip install git+https://github.com/ValvePython/vdf

Problems & solutions
--------------------

- There are known files that contain duplicate keys. This is supported by the format and makes mapping to ``dict`` impossible. For this case the module provides ``vdf.VDFDict`` that can be used as a mapper instead of ``dict``. See the example section for details.
- By default de-serialization will return a ``dict``, which neither preserves nor guarantees key order on Python versions prior to 3.6, due to `hash randomization`_. If key order is important on old Pythons, I suggest using ``collections.OrderedDict``, or ``vdf.VDFDict``.

Example usage
-------------

For text representation

.. code:: python

    import vdf

    # parsing vdf from file or string
    d = vdf.load(open('file.txt'))
    d = vdf.loads(vdf_text)
    d = vdf.parse(open('file.txt'))
    d = vdf.parse(vdf_text)

    # dumping dict as vdf to string
    vdf_text = vdf.dumps(d)
    indented_vdf = vdf.dumps(d, pretty=True)

    # dumping dict as vdf to file
    vdf.dump(d, open('file2.txt','w'), pretty=True)


For binary representation

.. code:: python

    d = vdf.binary_loads(vdf_bytes)
    b = vdf.binary_dumps(d)

    # alternative format - VBKV
    d = vdf.binary_loads(vdf_bytes, alt_format=True)
    b = vdf.binary_dumps(d, alt_format=True)

    # VBKV with header and CRC checking
    d = vdf.vbkv_loads(vbkv_bytes)
    b = vdf.vbkv_dumps(d)

Using an alternative mapper

.. code:: python

    d = vdf.loads(vdf_string, mapper=collections.OrderedDict)
    d = vdf.loads(vdf_string, mapper=vdf.VDFDict)

``VDFDict`` works much like the regular ``dict``, except it handles duplicates and remembers insert order. Additionally, keys can only be of type ``str``. The most important difference is that assigning to a key that already exists will create a duplicate instead of reassigning the value of the existing key.

..
code:: python >>> d = vdf.VDFDict() >>> d['key'] = 111 >>> d['key'] = 222 >>> d VDFDict([('key', 111), ('key', 222)]) >>> d.items() [('key', 111), ('key', 222)] >>> d['key'] 111 >>> d[(0, 'key')] # get the first duplicate 111 >>> d[(1, 'key')] # get the second duplicate 222 >>> d.get_all_for('key') [111, 222] >>> d[(1, 'key')] = 123 # reassign specific duplicate >>> d.get_all_for('key') [111, 123] >>> d['key'] = 333 >>> d.get_all_for('key') [111, 123, 333] >>> del d[(1, 'key')] >>> d.get_all_for('key') [111, 333] >>> d[(1, 'key')] 333 >>> print vdf.dumps(d) "key" "111" "key" "333" >>> d.has_duplicates() True >>> d.remove_all_for('key') >>> len(d) 0 >>> d.has_duplicates() False .. |pypi| image:: https://img.shields.io/pypi/v/vdf.svg?style=flat&label=latest%20version :target: https://pypi.org/project/vdf/ :alt: Latest version released on PyPi .. |license| image:: https://img.shields.io/pypi/l/vdf.svg?style=flat&label=license :target: https://pypi.org/project/vdf/ :alt: MIT License .. |coverage| image:: https://img.shields.io/coveralls/ValvePython/vdf/master.svg?style=flat :target: https://coveralls.io/r/ValvePython/vdf?branch=master :alt: Test coverage .. |sonar_maintainability| image:: https://sonarcloud.io/api/project_badges/measure?project=ValvePython_vdf&metric=sqale_rating :target: https://sonarcloud.io/dashboard?id=ValvePython_vdf :alt: SonarCloud Rating .. |sonar_reliability| image:: https://sonarcloud.io/api/project_badges/measure?project=ValvePython_vdf&metric=reliability_rating :target: https://sonarcloud.io/dashboard?id=ValvePython_vdf :alt: SonarCloud Rating .. |sonar_security| image:: https://sonarcloud.io/api/project_badges/measure?project=ValvePython_vdf&metric=security_rating :target: https://sonarcloud.io/dashboard?id=ValvePython_vdf :alt: SonarCloud Rating .. 
|master_build| image:: https://github.com/ValvePython/vdf/workflows/Tests/badge.svg?branch=master :target: https://github.com/ValvePython/vdf/actions?query=workflow%3A%22Tests%22+branch%3Amaster :alt: Build status of master branch .. _DuplicateOrderedDict: https://github.com/rossengeorgiev/dota2_notebooks/blob/master/DuplicateOrderedDict_for_VDF.ipynb .. _hash randomization: https://docs.python.org/2/using/cmdline.html#envvar-PYTHONHASHSEED vdf-3.4/dev_requirements.txt000066400000000000000000000005701405214663400162700ustar00rootroot00000000000000mock; python_version < '3.3' coverage>=5.0; python_version == '2.7' or python_version >= '3.5' pytest-cov>=2.7.0; python_version == '2.7' or python_version >= '3.5' # coveralls 2.0 has removed support for Python 2.7 and 3.4 git+https://github.com/andy-maier/coveralls-python.git@andy/add-py27#egg=coveralls; python_version == '2.7' coveralls>=2.1.2; python_version >= '3.5' vdf-3.4/setup.py000066400000000000000000000025351405214663400136630ustar00rootroot00000000000000#!/usr/bin/env python from setuptools import setup from codecs import open from os import path import vdf here = path.abspath(path.dirname(__file__)) with open(path.join(here, 'README.rst'), encoding='utf-8') as f: long_description = f.read() setup( name='vdf', version=vdf.__version__, description='Library for working with Valve\'s VDF text format', long_description=long_description, url='https://github.com/ValvePython/vdf', author='Rossen Georgiev', author_email='rossen@rgp.io', license='MIT', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Topic :: Software Development :: Libraries :: Python Modules', 'Natural Language :: English', 'Operating System :: OS Independent', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 
'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'Programming Language :: Python :: Implementation :: PyPy', ], keywords='valve keyvalue vdf tf2 dota2 csgo', packages=['vdf'], zip_safe=True, ) vdf-3.4/tests/000077500000000000000000000000001405214663400133065ustar00rootroot00000000000000vdf-3.4/tests/__init__.py000066400000000000000000000000001405214663400154050ustar00rootroot00000000000000vdf-3.4/tests/test_binary_vdf.py000066400000000000000000000153341405214663400170500ustar00rootroot00000000000000import sys import unittest import vdf from io import BytesIO from collections import OrderedDict u = str if sys.version_info >= (3,) else unicode class BinaryVDF(unittest.TestCase): def test_BASE_INT(self): repr(vdf.BASE_INT()) def test_simple(self): pairs = [ ('a', 'test'), ('a2', b'\xd0\xb0\xd0\xb1\xd0\xb2\xd0\xb3'.decode('utf-8')), ('bb', 1), ('bb2', -500), ('ccc', 1.0), ('dddd', vdf.POINTER(1234)), ('fffff', vdf.COLOR(1234)), ('gggggg', vdf.UINT_64(1234)), ('hhhhhhh', vdf.INT_64(-1234)), ] data = OrderedDict(pairs) data['level1-1'] = OrderedDict(pairs) data['level1-1']['level2-1'] = OrderedDict(pairs) data['level1-1']['level2-2'] = OrderedDict(pairs) data['level1-2'] = OrderedDict(pairs) result = vdf.binary_loads(vdf.binary_dumps(data), mapper=OrderedDict) self.assertEqual(data, result) result = vdf.binary_loads(vdf.binary_dumps(data, alt_format=True), mapper=OrderedDict, alt_format=True) self.assertEqual(data, result) result = vdf.vbkv_loads(vdf.vbkv_dumps(data), mapper=OrderedDict) self.assertEqual(data, result) def test_vbkv_empty(self): with self.assertRaises(ValueError): vdf.vbkv_loads(b'') def test_loads_empty(self): self.assertEqual(vdf.binary_loads(b''), {}) self.assertEqual(vdf.binary_load(BytesIO(b'')), {}) def test_dumps_empty(self): self.assertEqual(vdf.binary_dumps({}), b'') buf = BytesIO() vdf.binary_dump({}, buf) self.assertEqual(buf.getvalue(), b'') def test_dumps_unicode(self): 
self.assertEqual(vdf.binary_dumps({u('a'): u('b')}), b'\x01a\x00b\x00\x08') def test_dumps_unicode_alternative(self): self.assertEqual(vdf.binary_dumps({u('a'): u('b')}, alt_format=True), b'\x01a\x00b\x00\x0b') def test_dump_params_invalid(self): with self.assertRaises(TypeError): vdf.binary_dump([], BytesIO()) with self.assertRaises(TypeError): vdf.binary_dump({}, b'aaaa') def test_dumps_params_invalid(self): with self.assertRaises(TypeError): vdf.binary_dumps([]) with self.assertRaises(TypeError): vdf.binary_dumps(b'aaaa') def test_dumps_key_invalid_type(self): with self.assertRaises(TypeError): vdf.binary_dumps({1:1}) with self.assertRaises(TypeError): vdf.binary_dumps({None:1}) def test_dumps_value_invalid_type(self): with self.assertRaises(TypeError): vdf.binary_dumps({'': None}) def test_alternative_format(self): with self.assertRaises(SyntaxError): vdf.binary_loads(b'\x00a\x00\x00b\x00\x0b\x0b') with self.assertRaises(SyntaxError): vdf.binary_loads(b'\x00a\x00\x00b\x00\x08\x08', alt_format=True) def test_load_params_invalid(self): with self.assertRaises(TypeError): vdf.binary_load(b'aaaa') with self.assertRaises(TypeError): vdf.binary_load(1234) with self.assertRaises(TypeError): vdf.binary_load(BytesIO(b'aaaa'), b'bbbb') def test_loads_params_invalid(self): with self.assertRaises(TypeError): vdf.binary_loads([]) with self.assertRaises(TypeError): vdf.binary_loads(11111) with self.assertRaises(TypeError): vdf.binary_loads(BytesIO()) with self.assertRaises(TypeError): vdf.binary_load(b'', b'bbbb') def test_loads_unbalanced_nesting(self): with self.assertRaises(SyntaxError): vdf.binary_loads(b'\x00a\x00\x00b\x00\x08') with self.assertRaises(SyntaxError): vdf.binary_loads(b'\x00a\x00\x00b\x00\x08\x08\x08\x08') def test_loads_unknown_type(self): with self.assertRaises(SyntaxError): vdf.binary_loads(b'\x33a\x00\x08') def test_loads_unterminated_string(self): with self.assertRaises(SyntaxError): vdf.binary_loads(b'\x01abbbb') def test_loads_type_checks(self): with 
self.assertRaises(TypeError): vdf.binary_loads(None) with self.assertRaises(TypeError): vdf.binary_loads(b'', mapper=list) def test_merge_multiple_keys_on(self): # VDFDict([('a', VDFDict([('a', '1'), ('b', '2')])), ('a', VDFDict([('a', '3'), ('c', '4')]))]) test = b'\x00a\x00\x01a\x001\x00\x01b\x002\x00\x08\x00a\x00\x01a\x003\x00\x01c\x004\x00\x08\x08' result = {'a': {'a': '3', 'b': '2', 'c': '4'}} self.assertEqual(vdf.binary_loads(test, merge_duplicate_keys=True), result) def test_merge_multiple_keys_off(self): # VDFDict([('a', VDFDict([('a', '1'), ('b', '2')])), ('a', VDFDict([('a', '3'), ('c', '4')]))]) test = b'\x00a\x00\x01a\x001\x00\x01b\x002\x00\x08\x00a\x00\x01a\x003\x00\x01c\x004\x00\x08\x08' result = {'a': {'a': '3', 'c': '4'}} self.assertEqual(vdf.binary_loads(test, merge_duplicate_keys=False), result) def test_raise_on_remaining(self): # default binary_loads is to raise with self.assertRaises(SyntaxError): vdf.binary_loads(b'\x01key\x00value\x00\x08' + b'aaaa') # do not raise self.assertEqual(vdf.binary_loads(b'\x01key\x00value\x00\x08' + b'aaaa', raise_on_remaining=False), {'key': 'value'}) def test_raise_on_remaining_with_file(self): buf = BytesIO(b'\x01key\x00value\x00\x08' + b'aaaa') # binary_load doesn't raise by default self.assertEqual(vdf.binary_load(buf), {'key': 'value'}) self.assertEqual(buf.read(), b'aaaa') # raise when extra data remains buf.seek(0) with self.assertRaises(SyntaxError): vdf.binary_load(buf, raise_on_remaining=True) self.assertEqual(buf.read(), b'aaaa') def test_vbkv_loads_empty(self): with self.assertRaises(ValueError): vdf.vbkv_loads(b'') def test_vbkv_dumps_empty(self): self.assertEqual(vdf.vbkv_dumps({}), b'VBKV\x00\x00\x00\x00') def test_vbkv_loads_invalid_header(self): with self.assertRaises(ValueError): vdf.vbkv_loads(b'DD1235764tdffhghsdf') def test_vbkv_loads_invalid_checksum(self): with self.assertRaises(ValueError): vdf.vbkv_loads(b'VBKV\x01\x02\x03\x04\x00a\x00\x0b\x0b') def test_loads_utf8_invalmid(self): 
self.assertEqual({'aaa': b'bb\xef\xbf\xbdbb'.decode('utf-8')}, vdf.binary_loads(b'\x01aaa\x00bb\xffbb\x00\x08')) def test_loads_utf16(self): self.assertEqual({'aaa': b'b\x00b\x00\xff\xffb\x00b\x00'.decode('utf-16le')}, vdf.binary_loads(b'\x05aaa\x00b\x00b\x00\xff\xffb\x00b\x00\x00\x00\x08')) vdf-3.4/tests/test_vdf.py000066400000000000000000000341441405214663400155040ustar00rootroot00000000000000import unittest import sys try: from unittest import mock except ImportError: import mock try: from StringIO import StringIO except ImportError: from io import StringIO import vdf class testcase_helpers_escapes(unittest.TestCase): # https://github.com/ValveSoftware/source-sdk-2013/blob/0d8dceea4310fde5706b3ce1c70609d72a38efdf/sp/src/tier1/utlbuffer.cpp#L57-L68 esc_chars_raw = "aa\n\t\v\b\r\f\a\\?\"'bb" esc_chars_escaped = 'aa\\n\\t\\v\\b\\r\\f\\a\\\\\\?\\"\\\'bb' def test_escape(self): self.assertEqual(vdf._escape(self.esc_chars_raw), self.esc_chars_escaped) def test_unescape(self): self.assertEqual(vdf._unescape(self.esc_chars_escaped), self.esc_chars_raw) def test_escape_unescape(self): self.assertEqual(vdf._unescape(vdf._escape(self.esc_chars_raw)), self.esc_chars_raw) class testcase_helpers_load(unittest.TestCase): def setUp(self): self.f = StringIO() def tearDown(self): self.f.close() @mock.patch("vdf.parse") def test_routine_loads(self, mock_parse): vdf.loads("") (fp,), _ = mock_parse.call_args self.assertIsInstance(fp, StringIO) def test_routine_loads_assert(self): for t in [5, 5.5, 1.0j, None, [], (), {}, lambda: 0, sys.stdin, self.f]: self.assertRaises(TypeError, vdf.loads, t) @mock.patch("vdf.parse") def test_routine_load(self, mock_parse): vdf.load(sys.stdin) mock_parse.assert_called_with(sys.stdin) vdf.load(self.f) mock_parse.assert_called_with(self.f) @mock.patch("vdf.parse") def test_routines_mapper_passing(self, mock_parse): vdf.load(sys.stdin, mapper=dict) mock_parse.assert_called_with(sys.stdin, mapper=dict) vdf.loads("", mapper=dict) (fp,), kw = 
mock_parse.call_args self.assertIsInstance(fp, StringIO) self.assertIs(kw['mapper'], dict) class CustomDict(dict): pass vdf.load(sys.stdin, mapper=CustomDict) mock_parse.assert_called_with(sys.stdin, mapper=CustomDict) vdf.loads("", mapper=CustomDict) (fp,), kw = mock_parse.call_args self.assertIsInstance(fp, StringIO) self.assertIs(kw['mapper'], CustomDict) def test_routine_load_assert(self): for t in [5, 5.5, 1.0j, None, [], (), {}, lambda: 0, '']: self.assertRaises(TypeError, vdf.load, t) class testcase_helpers_dump(unittest.TestCase): def setUp(self): self.f = StringIO() def tearDown(self): self.f.close() def test_dump_params_invalid(self): # pretty/escaped only accept bool with self.assertRaises(TypeError): vdf.dump({'a': 1}, StringIO(), pretty=1) with self.assertRaises(TypeError): vdf.dumps({'a': 1}, pretty=1) with self.assertRaises(TypeError): vdf.dump({'a': 1}, StringIO(), escaped=1) with self.assertRaises(TypeError): vdf.dumps({'a': 1}, escaped=1) def test_routine_dumps_asserts(self): for x in [5, 5.5, 1.0j, True, None, (), {}, lambda: 0, sys.stdin, self.f]: for y in [5, 5.5, 1.0j, None, [], (), {}, lambda: 0, sys.stdin, self.f]: self.assertRaises(TypeError, vdf.dumps, x, y) def test_routine_dump_asserts(self): for x in [5, 5.5, 1.0j, True, None, (), {}, lambda: 0, sys.stdin, self.f]: for y in [5, 5.5, 1.0j, True, None, [], (), {}, lambda: 0]: self.assertRaises(TypeError, vdf.dump, x, y) def test_routine_dump_writing(self): class CustomDict(dict): pass for mapper in (dict, CustomDict): src = mapper({"asd": "123"}) expected = vdf.dumps(src) vdf.dump(src, self.f) self.f.seek(0) self.assertEqual(expected, self.f.read()) class testcase_routine_parse(unittest.TestCase): def test_parse_bom_removal(self): result = vdf.loads(vdf.BOMS + '"asd" "123"') self.assertEqual(result, {'asd': '123'}) if sys.version_info[0] == 2: result = vdf.loads(vdf.BOMS_UNICODE + '"asd" "123"') self.assertEqual(result, {'asd': '123'}) def test_parse_source_asserts(self): for t in ['', 5, 
5.5, 1.0j, True, None, (), {}, lambda: 0]: self.assertRaises(TypeError, vdf.parse, t) def test_parse_mapper_assert(self): self.assertRaises(TypeError, vdf.parse, StringIO(" "), mapper=list) def test_parse_file_source(self): self.assertEqual(vdf.parse(StringIO(" ")), {}) class CustomDict(dict): pass self.assertEqual(vdf.parse(StringIO(" "), mapper=CustomDict), {}) class testcase_VDF(unittest.TestCase): def test_empty(self): self.assertEqual(vdf.loads(""), {}) def test_keyvalue_pairs(self): INPUT = ''' "key1" "value1" key2 "value2" KEY3 "value3" "key4" value4 "key5" VALUE5 ''' EXPECTED = { 'key1': 'value1', 'key2': 'value2', 'KEY3': 'value3', 'key4': 'value4', 'key5': 'VALUE5', } self.assertEqual(vdf.loads(INPUT), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_keyvalue_open_quoted(self): INPUT = ( '"key1" "a\n' 'b\n' 'c"\n' 'key2 "a\n' 'b\n' 'c"\n' ) EXPECTED = { 'key1': 'a\nb\nc', 'key2': 'a\nb\nc', } self.assertEqual(vdf.loads(INPUT), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_multi_keyvalue_pairs(self): INPUT = ''' "root1" { "key1" "value1" key2 "value2" "key3" value3 } root2 { "key1" "value1" key2 "value2" "key3" value3 } ''' EXPECTED = { 'root1': { 'key1': 'value1', 'key2': 'value2', 'key3': 'value3', }, 'root2': { 'key1': 'value1', 'key2': 'value2', 'key3': 'value3', } } self.assertEqual(vdf.loads(INPUT), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_deep_nesting(self): INPUT = ''' "root" { node1 { "node2" { NODE3 { "node4" { node5 { "node6" { NODE7 { "node8" { "key" "value" } } } } } } } } } ''' EXPECTED = { 'root': { 'node1': { 'node2': { 'NODE3': { 'node4': { 'node5': { 'node6': { 'NODE7': { 'node8': { 'key': 'value' }}}}}}}}} } self.assertEqual(vdf.loads(INPUT), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_comments_and_blank_lines(self): INPUT = ''' // this is comment "key1" "value1" // another comment key2 "value2" // 
further comments "key3" value3 // useless comment key4 // comments comments comments { // is this a comment? k v // comment } // you only comment once // comment out of nowhere "key5" // pretty much anything here { // is this a comment? K V //comment } ''' EXPECTED = { 'key1': 'value1', 'key2': 'value2', 'key3': 'value3', 'key4': { 'k': 'v' }, 'key5': { 'K': 'V' }, } self.assertEqual(vdf.loads(INPUT), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_hash_key(self): INPUT = '#include "asd.vdf"' EXPECTED = {'#include': 'asd.vdf'} self.assertEqual(vdf.loads(INPUT), EXPECTED) INPUT = '#base asd.vdf' EXPECTED = {'#base': 'asd.vdf'} self.assertEqual(vdf.loads(INPUT), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_wierd_symbols_for_unquoted(self): INPUT = 'a asd.vdf\nb language_*lol*\nc zxc_-*.sss//\nd<2?$% /cde/$fgh/' EXPECTED = { 'a': 'asd.vdf', 'b': 'language_*lol*', 'c': 'zxc_-*.sss', 'd<2?$%': '/cde/$fgh/', } self.assertEqual(vdf.loads(INPUT), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_space_for_unquoted(self): INPUT = 'a b c d \n efg h i\t // j k\n' EXPECTED= { 'a': 'b c d', 'efg': 'h i', } self.assertEqual(vdf.loads(INPUT), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_merge_multiple_keys_on(self): INPUT = ''' a { a 1 b 2 } a { a 3 c 4 } ''' EXPECTED = {'a': {'a': '3', 'b': '2', 'c': '4'}} self.assertEqual(vdf.loads(INPUT, merge_duplicate_keys=True), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False, merge_duplicate_keys=True), EXPECTED) def test_merge_multiple_keys_off(self): INPUT = ''' a { a 1 b 2 } a { a 3 c 4 } ''' EXPECTED = {'a': {'a': '3', 'c': '4'}} self.assertEqual(vdf.loads(INPUT, merge_duplicate_keys=False), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False, merge_duplicate_keys=False), EXPECTED) def test_escape_before_last(self): INPUT = r''' "aaa\\" "1" "1" "bbb\\" ''' EXPECTED = { "aaa\\": "1", "1": 
"bbb\\", } self.assertEqual(vdf.loads(INPUT), EXPECTED) def test_escape_before_last_unescaped(self): INPUT = r''' "aaa\\" "1" "1" "bbb\\" ''' EXPECTED = { "aaa\\\\": "1", "1": "bbb\\\\", } self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_single_line_empty_block(self): INPUT = ''' "root1" { "key1" {} key2 "value2" "key3" value3 } root2 { } root3 { "key1" "value1" key2 { } "key3" value3 } ''' EXPECTED = { 'root1': { 'key1': {}, 'key2': 'value2', 'key3': 'value3', }, 'root2': {}, 'root3': { 'key1': 'value1', 'key2': {}, 'key3': 'value3', } } self.assertEqual(vdf.loads(INPUT), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_inline_opening_bracker(self): INPUT = ''' "root1" { "key1" { } key2 "value2" "key3" value3 } root2 { } root3 { "key1" "value1" key2 { } "key3" value3 } ''' EXPECTED = { 'root1': { 'key1': {}, 'key2': 'value2', 'key3': 'value3', }, 'root2': {}, 'root3': { 'key1': 'value1', 'key2': {}, 'key3': 'value3', } } self.assertEqual(vdf.loads(INPUT), EXPECTED) self.assertEqual(vdf.loads(INPUT, escaped=False), EXPECTED) def test_duplicate_key_with_value_from_str_to_mapper(self): INPUT = r''' level1 { key1 text1 key2 text2 } level1 { key2 { key3 text3 } } ''' EXPECTED = { "level1": { "key1": "text1", "key2": { "key3": "text3" } } } self.assertEqual(vdf.loads(INPUT), EXPECTED) class testcase_VDF_other(unittest.TestCase): def test_dumps_pretty_output(self): tests = [ [ {'1': '1'}, '"1" "1"\n', ], [ {'1': {'2': '2'}}, '"1"\n{\n\t"2" "2"\n}\n', ], [ {'1': {'2': {'3': '3'}}}, '"1"\n{\n\t"2"\n\t{\n\t\t"3" "3"\n\t}\n}\n', ], ] for test, expected in tests: self.assertEqual(vdf.dumps(test, pretty=True), expected) def test_parse_exceptions(self): tests = [ # expect bracket - invalid syntax '"asd"\n"zxc" "333"\n"', 'asd\nzxc 333\n"', # invalid syntax '"asd" "123"\n"zxc" "333"\n"', 'asd 123\nzxc 333\n"', '"asd\n\n\n\n\nzxc', '"asd" "bbb\n\n\n\n\nzxc', # one too many closing parenthasis '"asd"\n{\n"zxc" "123"\n}\n}\n}\n}\n', 
'asd\n{\nzxc 123\n}\n}\n}\n}\n', # unclosed parenthasis '"asd"\n{\n"zxc" "333"\n' 'asd\n{\nzxc 333\n' ] for test in tests: self.assertRaises(SyntaxError, vdf.loads, test) vdf-3.4/tests/test_vdf_dict.py000066400000000000000000000224751405214663400165130ustar00rootroot00000000000000import unittest from vdf import VDFDict class VDFDictCase(unittest.TestCase): def test_init(self): with self.assertRaises(ValueError): VDFDict("asd zxc") with self.assertRaises(ValueError): VDFDict(5) with self.assertRaises(ValueError): VDFDict((('1',1), ('2', 2))) def test_repr(self): self.assertIsInstance(repr(VDFDict()), str) def test_len(self): self.assertEqual(len(VDFDict()), 0) self.assertEqual(len(VDFDict({'1':1})), 1) def test_verify_key_tuple(self): a = VDFDict() with self.assertRaises(ValueError): a._verify_key_tuple([]) with self.assertRaises(ValueError): a._verify_key_tuple((1,)) with self.assertRaises(ValueError): a._verify_key_tuple((1,1,1)) with self.assertRaises(TypeError): a._verify_key_tuple((None, 'asd')) with self.assertRaises(TypeError): a._verify_key_tuple(('1', 'asd')) with self.assertRaises(TypeError): a._verify_key_tuple((1, 1)) with self.assertRaises(TypeError): a._verify_key_tuple((1, None)) def test_normalize_key(self): a = VDFDict() self.assertEqual(a._normalize_key('AAA'), (0, 'AAA')) self.assertEqual(a._normalize_key((5, 'BBB')), (5, 'BBB')) def test_normalize_key_exception(self): a = VDFDict() with self.assertRaises(TypeError): a._normalize_key(5) with self.assertRaises(TypeError): a._normalize_key([]) with self.assertRaises(TypeError): a._normalize_key(None) def test_setitem(self): a = list(zip(map(str, range(5, 0, -1)), range(50, 0, -10))) b = VDFDict() for k,v in a: b[k] = v self.assertEqual(a, list(b.items())) def test_setitem_with_duplicates(self): a = list(zip(['5']*5, range(50, 0, -10))) b = VDFDict() for k,v in a: b[k] = v self.assertEqual(a, list(b.items())) def test_setitem_key_exceptions(self): with self.assertRaises(TypeError): VDFDict()[5] = 
None with self.assertRaises(TypeError): VDFDict()[(0, 5)] = None with self.assertRaises(ValueError): VDFDict()[(0, '5', 1)] = None def test_setitem_key_valid_types(self): VDFDict()['5'] = None VDFDict({'5': None})[(0, '5')] = None def test_setitem_keyerror_fullkey(self): with self.assertRaises(KeyError): VDFDict([("1", None)])[(1, "1")] = None def test_getitem(self): a = VDFDict([('1',2), ('1',3)]) self.assertEqual(a['1'], 2) self.assertEqual(a[(0, '1')], 2) self.assertEqual(a[(1, '1')], 3) def test_del(self): a = VDFDict([("1",1),("1",2),("5",51),("1",3),("5",52)]) b = [("1",1),("1",2),("1",3),("5",52)] del a["5"] self.assertEqual(list(a.items()), b) def test_del_by_fullkey(self): a = VDFDict([("1",1),("1",2),("5",51),("1",3),("5",52)]) b = [("1",1),("1",2),("1",3),("5",52)] del a[(0, "5")] self.assertEqual(list(a.items()), b) def test_del_first_duplicate(self): a = [("1",1),("1",2),("1",3),("1",4)] b = VDFDict(a) del b["1"] del b["1"] del b[(0, "1")] del b[(0, "1")] self.assertEqual(len(b), 0) def test_del_exception(self): with self.assertRaises(KeyError): a = VDFDict() del a["1"] with self.assertRaises(KeyError): a = VDFDict({'1':1}) del a[(1, "1")] def test_iter(self): a = VDFDict({"1": 1}) iter(a).__iter__ self.assertEqual(len(list(iter(a))), 1) def test_in(self): a = VDFDict({"1":2, "3":4, "5":6}) self.assertTrue('1' in a) self.assertTrue((0, '1') in a) self.assertFalse('6' in a) self.assertFalse((1, '1') in a) def test_eq(self): self.assertEqual(VDFDict(), VDFDict()) self.assertNotEqual(VDFDict(), VDFDict({'1':1})) self.assertNotEqual(VDFDict(), {'1':1}) a = [("a", 1), ("b", 5), ("a", 11)] self.assertEqual(VDFDict(a), VDFDict(a)) self.assertNotEqual(VDFDict(a), VDFDict(a[1:])) def test_clear(self): a = VDFDict([("1",2),("1",2),("5",3),("1",2)]) a.clear() self.assertEqual(len(a), 0) self.assertEqual(len(a.keys()), 0) self.assertEqual(len(list(a.iterkeys())), 0) self.assertEqual(len(a.values()), 0) self.assertEqual(len(list(a.itervalues())), 0) 
self.assertEqual(len(a.items()), 0) self.assertEqual(len(list(a.iteritems())), 0) def test_get(self): a = VDFDict([('1',11), ('1',22)]) self.assertEqual(a.get('1'), 11) self.assertEqual(a.get((1, '1')), 22) self.assertEqual(a.get('2', 33), 33) self.assertEqual(a.get((0, '2'), 44), 44) def test_setdefault(self): a = VDFDict([('1',11), ('1',22)]) self.assertEqual(a.setdefault('1'), 11) self.assertEqual(a.setdefault((0, '1')), 11) self.assertEqual(a.setdefault('2'), None) self.assertEqual(a.setdefault((0, '2')), None) self.assertEqual(a.setdefault('3', 33), 33) def test_pop(self): a = VDFDict([('1',11),('2',22),('1',33),('2',44),('2',55)]) self.assertEqual(a.pop('1'), 11) self.assertEqual(a.pop('1'), 33) with self.assertRaises(KeyError): a.pop('1') self.assertEqual(a.pop((1, '2')), 44) self.assertEqual(a.pop((1, '2')), 55) def test_popitem(self): a = [('1',11),('2',22),('1',33)] b = VDFDict(a) self.assertEqual(b.popitem(), a.pop()) self.assertEqual(b.popitem(), a.pop()) self.assertEqual(b.popitem(), a.pop()) with self.assertRaises(KeyError): b.popitem() def test_update(self): a = VDFDict([("1",2),("1",2),("5",3),("1",2)]) b = VDFDict() b.update([("1",2),("1",2)]) b.update([("5",3),("1",2)]) self.assertEqual(list(a.items()), list(b.items())) def test_update_exceptions(self): a = VDFDict() with self.assertRaises(TypeError): a.update(None) with self.assertRaises(TypeError): a.update(1) with self.assertRaises(TypeError): a.update("asd zxc") with self.assertRaises(ValueError): a.update([(1,1,1), (2,2,2)]) map_test = [ ("1", 2), ("4", 3),("4", 3),("4", 2), ("7", 2), ("1", 2), ] def test_keys(self): _dict = VDFDict(self.map_test) self.assertSequenceEqual( list(_dict.keys()), list(x[0] for x in self.map_test)) def test_values(self): _dict = VDFDict(self.map_test) self.assertSequenceEqual( list(_dict.values()), list(x[1] for x in self.map_test)) def test_items(self): _dict = VDFDict(self.map_test) self.assertSequenceEqual( list(_dict.items()), self.map_test) def 
    def test_direct_access_get(self):
        b = dict()
        a = VDFDict({"1": 2, "3": 4, "5": 6})
        for k, v in a.items():
            b[k] = v
        self.assertEqual(dict(a.items()), b)

    def test_duplicate_keys(self):
        items = [('key1', 1), ('key1', 2), ('key3', 3), ('key1', 1)]
        keys = [x[0] for x in items]
        values = [x[1] for x in items]
        _dict = VDFDict(items)
        self.assertEqual(list(_dict.items()), items)
        self.assertEqual(list(_dict.keys()), keys)
        self.assertEqual(list(_dict.values()), values)

    def test_same_type_init(self):
        self.assertSequenceEqual(
            tuple(VDFDict(self.map_test).items()),
            tuple(VDFDict(VDFDict(self.map_test)).items()))

    def test_get_all_for(self):
        a = VDFDict([("1", 2), ("1", 2**31), ("5", 3), ("1", 2)])
        self.assertEqual(
            list(a.get_all_for("1")),
            [2, 2**31, 2],
            )

    def test_get_all_for_invalid_key(self):
        a = VDFDict()
        with self.assertRaises(TypeError):
            a.get_all_for(None)
        with self.assertRaises(TypeError):
            a.get_all_for(5)
        with self.assertRaises(TypeError):
            a.get_all_for((0, '5'))

    def test_remove_all_for(self):
        a = VDFDict([("1", 2), ("1", 2), ("5", 3), ("1", 2)])
        a.remove_all_for("1")
        self.assertEqual(list(a.items()), [("5", 3)])
        self.assertEqual(len(a), 1)

    def test_remove_all_for_invalid_key(self):
        a = VDFDict()
        with self.assertRaises(TypeError):
            a.remove_all_for(None)
        with self.assertRaises(TypeError):
            a.remove_all_for(5)
        with self.assertRaises(TypeError):
            a.remove_all_for((0, '5'))

    def test_has_duplicates(self):
        # single level duplicate
        a = [('1', 11), ('1', 22)]
        b = VDFDict(a)
        self.assertTrue(b.has_duplicates())

        # duplicate in nested VDFDict
        c = VDFDict({'1': b})
        self.assertTrue(c.has_duplicates())

        # duplicate in nested dict
        d = VDFDict({'1': {'2': {'3': b}}})
        self.assertTrue(d.has_duplicates())

        # no duplicates in nested dict
        d = VDFDict({'1': {'2': {'3': None}}})
        self.assertFalse(d.has_duplicates())


vdf-3.4/vdf/__init__.py

"""
Module for
deserializing/serializing to and from VDF
"""
__version__ = "3.4"
__author__ = "Rossen Georgiev"

import re
import sys
import struct
from binascii import crc32
from io import BytesIO
from io import StringIO as unicodeIO

try:
    from collections.abc import Mapping
except:
    from collections import Mapping

from vdf.vdict import VDFDict

# Py2 & Py3 compatibility
if sys.version_info[0] >= 3:
    string_type = str
    int_type = int
    BOMS = '\ufffe\ufeff'

    def strip_bom(line):
        return line.lstrip(BOMS)
else:
    from StringIO import StringIO as strIO
    string_type = basestring
    int_type = long
    BOMS = '\xef\xbb\xbf\xff\xfe\xfe\xff'
    BOMS_UNICODE = '\\ufffe\\ufeff'.decode('unicode-escape')

    def strip_bom(line):
        return line.lstrip(BOMS if isinstance(line, str) else BOMS_UNICODE)

# string escaping
_unescape_char_map = {
    r"\n": "\n",
    r"\t": "\t",
    r"\v": "\v",
    r"\b": "\b",
    r"\r": "\r",
    r"\f": "\f",
    r"\a": "\a",
    r"\\": "\\",
    r"\?": "?",
    r"\"": "\"",
    r"\'": "\'",
}
_escape_char_map = {v: k for k, v in _unescape_char_map.items()}

def _re_escape_match(m):
    return _escape_char_map[m.group()]

def _re_unescape_match(m):
    return _unescape_char_map[m.group()]

def _escape(text):
    return re.sub(r"[\n\t\v\b\r\f\a\\\?\"']", _re_escape_match, text)

def _unescape(text):
    return re.sub(r"(\\n|\\t|\\v|\\b|\\r|\\f|\\a|\\\\|\\\?|\\\"|\\')", _re_unescape_match, text)

# parsing and dumping for KV1
def parse(fp, mapper=dict, merge_duplicate_keys=True, escaped=True):
    """
    Deserialize ``fp`` (a ``.readline()``-supporting file-like object containing
    a VDF document) to a Python object.

    ``mapper`` specifies the Python object used after deserialization. ``dict``
    is used by default. Alternatively, ``collections.OrderedDict`` can be used
    if you wish to preserve key order. Or any object that acts like a ``dict``.

    ``merge_duplicate_keys`` when ``True`` will merge multiple KeyValue lists
    with the same key into one instead of overwriting. You can set this to
    ``False`` if you are using ``VDFDict`` and need to preserve the duplicates.
""" if not issubclass(mapper, Mapping): raise TypeError("Expected mapper to be subclass of dict, got %s" % type(mapper)) if not hasattr(fp, 'readline'): raise TypeError("Expected fp to be a file-like object supporting line iteration") stack = [mapper()] expect_bracket = False re_keyvalue = re.compile(r'^("(?P(?:\\.|[^\\"])*)"|(?P#?[a-z0-9\-\_\\\?$%<>]+))' r'([ \t]*(' r'"(?P(?:\\.|[^\\"])*)(?P")?' r'|(?P(?:(? ])+)' r'|(?P{[ \t]*)(?P})?' r'))?', flags=re.I) for lineno, line in enumerate(fp, 1): if lineno == 1: line = strip_bom(line) line = line.lstrip() # skip empty and comment lines if line == "" or line[0] == '/': continue # one level deeper if line[0] == "{": expect_bracket = False continue if expect_bracket: raise SyntaxError("vdf.parse: expected openning bracket", (getattr(fp, 'name', '<%s>' % fp.__class__.__name__), lineno, 1, line)) # one level back if line[0] == "}": if len(stack) > 1: stack.pop() continue raise SyntaxError("vdf.parse: one too many closing parenthasis", (getattr(fp, 'name', '<%s>' % fp.__class__.__name__), lineno, 0, line)) # parse keyvalue pairs while True: match = re_keyvalue.match(line) if not match: try: line += next(fp) continue except StopIteration: raise SyntaxError("vdf.parse: unexpected EOF (open key quote?)", (getattr(fp, 'name', '<%s>' % fp.__class__.__name__), lineno, 0, line)) key = match.group('key') if match.group('qkey') is None else match.group('qkey') val = match.group('qval') if val is None: val = match.group('val') if val is not None: val = val.rstrip() if val == "": val = None if escaped: key = _unescape(key) # we have a key with value in parenthesis, so we make a new dict obj (level deeper) if val is None: if merge_duplicate_keys and key in stack[-1]: _m = stack[-1][key] # we've descended a level deeper, if value is str, we have to overwrite it to mapper if not isinstance(_m, mapper): _m = stack[-1][key] = mapper() else: _m = mapper() stack[-1][key] = _m if match.group('eblock') is None: # only expect a bracket if it's 
                    # not already closed or on the same line
                    stack.append(_m)
                    if match.group('sblock') is None:
                        expect_bracket = True

            # we've matched a simple keyvalue pair, map it to the last dict obj in the stack
            else:
                # if the value spans multiple lines, consume one more line and try to match again,
                # until we get the full KeyValue pair
                if match.group('vq_end') is None and match.group('qval') is not None:
                    try:
                        line += next(fp)
                        continue
                    except StopIteration:
                        raise SyntaxError("vdf.parse: unexpected EOF (open quote for value?)",
                                          (getattr(fp, 'name', '<%s>' % fp.__class__.__name__), lineno, 0, line))

                stack[-1][key] = _unescape(val) if escaped else val

            # exit the loop
            break

    if len(stack) != 1:
        raise SyntaxError("vdf.parse: unclosed brackets or quotes (EOF)",
                          (getattr(fp, 'name', '<%s>' % fp.__class__.__name__), lineno, 0, line))

    return stack.pop()


def loads(s, **kwargs):
    """
    Deserialize ``s`` (a ``str`` or ``unicode`` instance containing a VDF
    document) to a Python object.
    """
    if not isinstance(s, string_type):
        raise TypeError("Expected s to be a str, got %s" % type(s))

    try:
        fp = unicodeIO(s)
    except TypeError:
        fp = strIO(s)

    return parse(fp, **kwargs)


def load(fp, **kwargs):
    """
    Deserialize ``fp`` (a ``.readline()``-supporting file-like object containing
    a VDF document) to a Python object.
    """
    return parse(fp, **kwargs)


def dumps(obj, pretty=False, escaped=True):
    """
    Serialize ``obj`` to a VDF formatted ``str``.
    """
    if not isinstance(obj, Mapping):
        raise TypeError("Expected data to be an instance of ``dict``")
    if not isinstance(pretty, bool):
        raise TypeError("Expected pretty to be of type bool")
    if not isinstance(escaped, bool):
        raise TypeError("Expected escaped to be of type bool")

    return ''.join(_dump_gen(obj, pretty, escaped))


def dump(obj, fp, pretty=False, escaped=True):
    """
    Serialize ``obj`` as a VDF formatted stream to ``fp`` (a ``.write()``-supporting
    file-like object).
""" if not isinstance(obj, Mapping): raise TypeError("Expected data to be an instance of``dict``") if not hasattr(fp, 'write'): raise TypeError("Expected fp to have write() method") if not isinstance(pretty, bool): raise TypeError("Expected pretty to be of type bool") if not isinstance(escaped, bool): raise TypeError("Expected escaped to be of type bool") for chunk in _dump_gen(obj, pretty, escaped): fp.write(chunk) def _dump_gen(data, pretty=False, escaped=True, level=0): indent = "\t" line_indent = "" if pretty: line_indent = indent * level for key, value in data.items(): if escaped and isinstance(key, string_type): key = _escape(key) if isinstance(value, Mapping): yield '%s"%s"\n%s{\n' % (line_indent, key, line_indent) for chunk in _dump_gen(value, pretty, escaped, level+1): yield chunk yield "%s}\n" % line_indent else: if escaped and isinstance(value, string_type): value = _escape(value) yield '%s"%s" "%s"\n' % (line_indent, key, value) # binary VDF class BASE_INT(int_type): def __repr__(self): return "%s(%d)" % (self.__class__.__name__, self) class UINT_64(BASE_INT): pass class INT_64(BASE_INT): pass class POINTER(BASE_INT): pass class COLOR(BASE_INT): pass BIN_NONE = b'\x00' BIN_STRING = b'\x01' BIN_INT32 = b'\x02' BIN_FLOAT32 = b'\x03' BIN_POINTER = b'\x04' BIN_WIDESTRING = b'\x05' BIN_COLOR = b'\x06' BIN_UINT64 = b'\x07' BIN_END = b'\x08' BIN_INT64 = b'\x0A' BIN_END_ALT = b'\x0B' def binary_loads(b, mapper=dict, merge_duplicate_keys=True, alt_format=False, raise_on_remaining=True): """ Deserialize ``b`` (``bytes`` containing a VDF in "binary form") to a Python object. ``mapper`` specifies the Python object used after deserializetion. ``dict` is used by default. Alternatively, ``collections.OrderedDict`` can be used if you wish to preserve key order. Or any object that acts like a ``dict``. ``merge_duplicate_keys`` when ``True`` will merge multiple KeyValue lists with the same key into one instead of overwriting. 
    You can set this to ``False`` if you are using ``VDFDict`` and need to
    preserve the duplicates.
    """
    if not isinstance(b, bytes):
        raise TypeError("Expected b to be bytes, got %s" % type(b))

    return binary_load(BytesIO(b), mapper, merge_duplicate_keys, alt_format, raise_on_remaining)


def binary_load(fp, mapper=dict, merge_duplicate_keys=True, alt_format=False, raise_on_remaining=False):
    """
    Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
    binary VDF) to a Python object.

    ``mapper`` specifies the Python object used after deserialization. ``dict``
    is used by default. Alternatively, ``collections.OrderedDict`` can be used
    if you wish to preserve key order. Or any object that acts like a ``dict``.

    ``merge_duplicate_keys`` when ``True`` will merge multiple KeyValue lists
    with the same key into one instead of overwriting. You can set this to
    ``False`` if you are using ``VDFDict`` and need to preserve the duplicates.
    """
    if not hasattr(fp, 'read') or not hasattr(fp, 'tell') or not hasattr(fp, 'seek'):
        raise TypeError("Expected fp to be a file-like object with tell()/seek() and read() returning bytes")
    if not issubclass(mapper, Mapping):
        raise TypeError("Expected mapper to be subclass of dict, got %s" % type(mapper))

    # helpers
    int32 = struct.Struct('<i')
    uint64 = struct.Struct('<Q')
    int64 = struct.Struct('<q')
    float32 = struct.Struct('<f')

    def read_string(fp, wide=False):
        buf, end = b'', -1
        offset = fp.tell()

        # locate string end
        while end == -1:
            chunk = fp.read(64)

            if chunk == b'':
                raise SyntaxError("Unterminated cstring (offset: %d)" % offset)

            buf += chunk
            end = buf.find(b'\x00\x00' if wide else b'\x00')

        if wide:
            end += end % 2

        # rewind fp
        fp.seek(end - len(buf) + (2 if wide else 1), 1)

        # decode string
        result = buf[:end]

        if wide:
            result = result.decode('utf-16')
        elif bytes is not str:
            result = result.decode('utf-8', 'replace')
        else:
            try:
                result.decode('ascii')
            except:
                result = result.decode('utf-8', 'replace')

        return result

    stack = [mapper()]
    CURRENT_BIN_END = BIN_END if not alt_format else BIN_END_ALT

    for t in iter(lambda: fp.read(1), b''):
        if t == CURRENT_BIN_END:
            if len(stack) > 1:
                stack.pop()
                continue
            break

        key = read_string(fp)

        if t == BIN_NONE:
            if merge_duplicate_keys and key in stack[-1]:
                _m = stack[-1][key]
            else:
                _m = mapper()
                stack[-1][key] = _m
            stack.append(_m)
        elif t == BIN_STRING:
            stack[-1][key] = read_string(fp)
        elif t == BIN_WIDESTRING:
            stack[-1][key] = read_string(fp, wide=True)
        elif t in (BIN_INT32, BIN_POINTER, BIN_COLOR):
            val = int32.unpack(fp.read(int32.size))[0]

            if t == BIN_POINTER:
                val = POINTER(val)
            elif t == BIN_COLOR:
                val = COLOR(val)

            stack[-1][key] = val
        elif t == BIN_UINT64:
            stack[-1][key] = UINT_64(uint64.unpack(fp.read(int64.size))[0])
        elif t == BIN_INT64:
            stack[-1][key] = INT_64(int64.unpack(fp.read(int64.size))[0])
        elif t == BIN_FLOAT32:
            stack[-1][key] = \
                float32.unpack(fp.read(float32.size))[0]
        else:
            raise SyntaxError("Unknown data type at offset %d: %s" % (fp.tell() - 1, repr(t)))

    if len(stack) != 1:
        raise SyntaxError("Reached EOF, but Binary VDF is incomplete")
    if raise_on_remaining and fp.read(1) != b'':
        fp.seek(-1, 1)
        raise SyntaxError("Binary VDF ended at offset %d, but there is more data remaining" % (fp.tell() - 1))

    return stack.pop()


def binary_dumps(obj, alt_format=False):
    """
    Serialize ``obj`` to a binary VDF formatted ``bytes``.
    """
    buf = BytesIO()
    binary_dump(obj, buf, alt_format)
    return buf.getvalue()


def binary_dump(obj, fp, alt_format=False):
    """
    Serialize ``obj`` to a binary VDF formatted ``bytes`` and write it to ``fp``,
    a file-like object.
    """
    if not isinstance(obj, Mapping):
        raise TypeError("Expected obj to be type of Mapping")
    if not hasattr(fp, 'write'):
        raise TypeError("Expected fp to have write() method")

    for chunk in _binary_dump_gen(obj, alt_format=alt_format):
        fp.write(chunk)


def _binary_dump_gen(obj, level=0, alt_format=False):
    if level == 0 and len(obj) == 0:
        return

    int32 = struct.Struct('<i')
    uint64 = struct.Struct('<Q')
    int64 = struct.Struct('<q')
    float32 = struct.Struct('<f')

    for key, value in obj.items():
        if isinstance(key, string_type):
            key = key.encode('utf-8')
        else:
            raise TypeError("dict keys must be of type str, got %s" % type(key))

        if isinstance(value, Mapping):
            yield BIN_NONE + key + BIN_NONE
            for chunk in _binary_dump_gen(value, level+1, alt_format=alt_format):
                yield chunk
            yield BIN_END if not alt_format else BIN_END_ALT
        elif isinstance(value, UINT_64):
            yield BIN_UINT64 + key + BIN_NONE + uint64.pack(value)
        elif isinstance(value, INT_64):
            yield BIN_INT64 + key + BIN_NONE + int64.pack(value)
        elif isinstance(value, float):
            yield BIN_FLOAT32 + key + BIN_NONE + float32.pack(value)
        elif isinstance(value, (COLOR, POINTER, int, int_type)):
            if isinstance(value, COLOR):
                yield BIN_COLOR
            elif isinstance(value, POINTER):
                yield BIN_POINTER
            else:
                yield BIN_INT32
            yield key + BIN_NONE
            yield int32.pack(value)
        elif isinstance(value, string_type):
            try:
                value = value.encode('utf-8') + BIN_NONE
                yield BIN_STRING
            except:
                value = value.encode('utf-16') + BIN_NONE * 2
                yield BIN_WIDESTRING
            yield key + BIN_NONE + value
        else:
            raise TypeError("Unsupported type: %s" % type(value))

    if level == 0:
        yield BIN_END if not alt_format else BIN_END_ALT


vdf-3.4/vdf/vdict.py

import sys
from collections import Counter

if sys.version_info[0] >= 3:
    _iter_values = 'values'
    _range = range
    _string_type = str

    import collections.abc as _c

    class _kView(_c.KeysView):
        def __iter__(self):
            return self._mapping.iterkeys()

    class _vView(_c.ValuesView):
        def __iter__(self):
            return self._mapping.itervalues()

    class _iView(_c.ItemsView):
        def __iter__(self):
            return self._mapping.iteritems()
else:
    _iter_values = 'itervalues'
    _range = xrange
    _string_type = basestring
    _kView = lambda x: list(x.iterkeys())
    _vView = lambda x: list(x.itervalues())
    _iView = lambda x: list(x.iteritems())


class VDFDict(dict):
    def __init__(self, data=None):
        """
        This is a dictionary that supports duplicate keys and preserves insertion order.

        ``data`` can be a ``dict``, or a sequence of key-value tuples.
        (e.g. ``[('key', 'value'), ...]``)
        The only supported type for key is str.

        Getting/setting duplicates is done with tuples ``(index, key)``, where
        index is the duplicate index for the specified key. (e.g.
        ``(0, 'key')``, ``(1, 'key')``, ...)

        When the ``key`` is ``str``, instead of a tuple, set will create a
        duplicate and get will look up ``(0, key)``.
        """
        self.__omap = []
        self.__kcount = Counter()

        if data is not None:
            if not isinstance(data, (list, dict)):
                raise ValueError("Expected data to be list of pairs or dict, got %s" % type(data))
            self.update(data)

    def __repr__(self):
        out = "%s(" % self.__class__.__name__
        out += "%s)" % repr(list(self.iteritems()))
        return out

    def __len__(self):
        return len(self.__omap)

    def _verify_key_tuple(self, key):
        if len(key) != 2:
            raise ValueError("Expected key tuple length to be 2, got %d" % len(key))
        if not isinstance(key[0], int):
            raise TypeError("Key index should be an int")
        if not isinstance(key[1], _string_type):
            raise TypeError("Key value should be a str")

    def _normalize_key(self, key):
        if isinstance(key, _string_type):
            key = (0, key)
        elif isinstance(key, tuple):
            self._verify_key_tuple(key)
        else:
            raise TypeError("Expected key to be a str or tuple, got %s" % type(key))
        return key

    def __setitem__(self, key, value):
        if isinstance(key, _string_type):
            key = (self.__kcount[key], key)
            self.__omap.append(key)
        elif isinstance(key, tuple):
            self._verify_key_tuple(key)
            if key not in self:
                raise KeyError("%s doesn't exist" % repr(key))
        else:
            raise TypeError("Expected either a str or tuple for key")

        super(VDFDict, self).__setitem__(key, value)
        self.__kcount[key[1]] += 1

    def __getitem__(self, key):
        return super(VDFDict, self).__getitem__(self._normalize_key(key))

    def __delitem__(self, key):
        key = self._normalize_key(key)
        result = super(VDFDict, self).__delitem__(key)

        start_idx = self.__omap.index(key)
        del self.__omap[start_idx]

        dup_idx, skey = key
        self.__kcount[skey] -= 1
        tail_count = self.__kcount[skey] - dup_idx

        if tail_count > 0:
            for idx in _range(start_idx, len(self.__omap)):
                if self.__omap[idx][1] == skey:
                    oldkey = self.__omap[idx]
                    newkey = (dup_idx, skey)
                    super(VDFDict, self).__setitem__(newkey, self[oldkey])
                    super(VDFDict,
                          self).__delitem__(oldkey)
                    self.__omap[idx] = newkey

                    dup_idx += 1
                    tail_count -= 1
                    if tail_count == 0:
                        break

        if self.__kcount[skey] == 0:
            del self.__kcount[skey]

        return result

    def __iter__(self):
        return iter(self.iterkeys())

    def __contains__(self, key):
        return super(VDFDict, self).__contains__(self._normalize_key(key))

    def __eq__(self, other):
        if isinstance(other, VDFDict):
            return list(self.items()) == list(other.items())
        else:
            return False

    def __ne__(self, other):
        return not self.__eq__(other)

    def clear(self):
        super(VDFDict, self).clear()
        self.__kcount.clear()
        self.__omap = list()

    def get(self, key, *args):
        return super(VDFDict, self).get(self._normalize_key(key), *args)

    def setdefault(self, key, default=None):
        if key not in self:
            self.__setitem__(key, default)
        return self.__getitem__(key)

    def pop(self, key):
        key = self._normalize_key(key)
        value = self.__getitem__(key)
        self.__delitem__(key)
        return value

    def popitem(self):
        if not self.__omap:
            raise KeyError("VDFDict is empty")
        key = self.__omap[-1]
        return key[1], self.pop(key)

    def update(self, data=None, **kwargs):
        if isinstance(data, dict):
            data = data.items()
        elif not isinstance(data, list):
            raise TypeError("Expected data to be a list or dict, got %s" % type(data))

        for key, value in data:
            self.__setitem__(key, value)

    def iterkeys(self):
        return (key[1] for key in self.__omap)

    def keys(self):
        return _kView(self)

    def itervalues(self):
        return (self[key] for key in self.__omap)

    def values(self):
        return _vView(self)

    def iteritems(self):
        return ((key[1], self[key]) for key in self.__omap)

    def items(self):
        return _iView(self)

    def get_all_for(self, key):
        """ Returns all values of the given key """
        if not isinstance(key, _string_type):
            raise TypeError("Key needs to be a string.")
        return [self[(idx, key)] for idx in _range(self.__kcount[key])]

    def remove_all_for(self, key):
        """ Removes all items with the given key """
        if not isinstance(key, _string_type):
            raise TypeError("Key needs to be a string.")

        for idx in \
                _range(self.__kcount[key]):
            super(VDFDict, self).__delitem__((idx, key))

        self.__omap = list(filter(lambda x: x[1] != key, self.__omap))
        del self.__kcount[key]

    def has_duplicates(self):
        """
        Returns ``True`` if the dict contains keys with duplicates.
        Recurses into any values that are ``VDFDict`` or ``dict``.
        """
        for n in getattr(self.__kcount, _iter_values)():
            if n != 1:
                return True

        def dict_recurse(obj):
            for v in getattr(obj, _iter_values)():
                if isinstance(v, VDFDict) and v.has_duplicates():
                    return True
                elif isinstance(v, dict):
                    return dict_recurse(v)
            return False

        return dict_recurse(self)


vdf-3.4/vdf2json/Makefile

clean:
	rm -rf dist vdf2json.egg-info vdf2json/*.pyc

dist: clean
	python setup.py sdist

upload: dist
	python setup.py register -r pypi
	twine upload -r pypi dist/*


vdf-3.4/vdf2json/README.rst
.. code:: text

    usage: vdf2json [-h] [-p] [-ei encoding] [-eo encoding] [infile] [outfile]

    positional arguments:
      infile        VDF
      outfile       JSON (utf8)

    optional arguments:
      -h, --help    show this help message and exit
      -p, --pretty  pretty json output
      -ei encoding  input encoding E.g.: utf8, utf-16le, etc
      -eo encoding  output encoding E.g.: utf8, utf-16le, etc


vdf-3.4/vdf2json/setup.py

#!/usr/bin/env python
from setuptools import setup
from codecs import open
from os import path

here = path.abspath(path.dirname(__file__))

with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
    long_description = f.read()

setup(
    name='vdf2json',
    version='1.1',
    description='command line tool for converting VDF to JSON',
    long_description=long_description,
    url='https://github.com/rossengeorgiev/vdf-python',
    author='Rossen Georgiev',
    author_email='hello@rgp.io',
    license='MIT',
    classifiers=[
        'Development Status :: 5 - Production/Stable',
        'License :: OSI Approved :: MIT License',
        'Topic :: Text Processing',
        'Natural Language :: English',
        'Operating System :: OS Independent',
        'Environment :: Console',
        'Programming Language :: Python :: 2.6',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.0',
        'Programming Language :: Python :: 3.2',
        'Programming Language :: Python :: 3.3',
        'Programming Language :: Python :: 3.4',
    ],
    keywords='valve keyvalue vdf tf2 dota2 csgo cli commandline json',
    install_requires=['vdf>=1.4'],
    packages=['vdf2json'],
    entry_points={
        'console_scripts': [
            'vdf2json = vdf2json.cli:main'
        ]
    },
    zip_safe=True,
)
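The KV1 text format that `vdf2json` converts pairs quoted keys with quoted values and nests sections in braces. A minimal sketch of that shape, roughly mirroring what `vdf.dumps(obj, pretty=True)` emits (`tiny_dumps` is a hypothetical helper for illustration, not part of the package):

```python
# Minimal sketch of the KV1 text shape produced by vdf.dumps(obj, pretty=True).
# tiny_dumps is a hypothetical helper, not the vdf package's implementation.
def tiny_dumps(d, level=0):
    out = ''
    for k, v in d.items():
        if isinstance(v, dict):
            # nested section: "key" on its own line, body wrapped in braces
            out += '%s"%s"\n%s{\n' % ('\t' * level, k, '\t' * level)
            out += tiny_dumps(v, level + 1)
            out += '%s}\n' % ('\t' * level)
        else:
            # simple pair: "key" "value"
            out += '%s"%s" "%s"\n' % ('\t' * level, k, v)
    return out

print(tiny_dumps({'root': {'key': 'value'}}))
```

Note that the real `_dump_gen` only indents when `pretty=True`; this sketch always indents.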
vdf-3.4/vdf2json/vdf2json/__init__.py

vdf-3.4/vdf2json/vdf2json/cli.py

#!/usr/bin/env python
import argparse
import sys
import json
import vdf
import codecs

from collections import OrderedDict


def main():
    p = argparse.ArgumentParser(prog='vdf2json')
    p.add_argument('infile', nargs='?', type=argparse.FileType('r'), default=sys.stdin,
                   help="VDF")
    p.add_argument('outfile', nargs='?', type=argparse.FileType('w'), default=sys.stdout,
                   help="JSON (utf8)")
    p.add_argument('-p', '--pretty', help='pretty json output', action='store_true')
    p.add_argument('-ei', default='utf-8', type=str, metavar='encoding',
                   help='input encoding E.g.: utf8, utf-16le, etc')
    p.add_argument('-eo', default='utf-8', type=str, metavar='encoding',
                   help='output encoding E.g.: utf8, utf-16le, etc')

    args = p.parse_args()

    # unicode pain
    if args.infile is not sys.stdin:
        args.infile.close()
        args.infile = codecs.open(args.infile.name, 'r', encoding=args.ei)
    else:
        args.infile = codecs.getreader(args.ei)(sys.stdin)

    if args.outfile is not sys.stdout:
        args.outfile.close()
        args.outfile = codecs.open(args.outfile.name, 'w', encoding=args.eo)
    else:
        args.outfile = codecs.getwriter(args.eo)(sys.stdout)

    data = vdf.loads(args.infile.read(), mapper=OrderedDict)
    json.dump(data, args.outfile, indent=4 if args.pretty else 0, ensure_ascii=False)


if __name__ == '__main__':
    main()
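The binary KeyValues layout handled by `binary_load`/`binary_dumps` above tags each field with a one-byte type (`BIN_STRING`, `BIN_INT32`, ...), followed by a NUL-terminated key and the payload; a `BIN_NONE` tag opens a nested section and `BIN_END` (`\x08`) closes it. A rough, self-contained sketch of that framing (`tiny_binary_dumps` is a hypothetical helper covering only string and int32 values, not the package's implementation):

```python
import struct

# one-byte type tags from the binary VDF format
BIN_NONE, BIN_STRING, BIN_INT32, BIN_END = b'\x00', b'\x01', b'\x02', b'\x08'

def tiny_binary_dumps(d):
    # hypothetical sketch: handles nested dicts, str and int32 values only
    out = b''
    for k, v in d.items():
        key = k.encode('utf-8') + b'\x00'  # NUL-terminated key
        if isinstance(v, dict):
            # nested section: BIN_NONE tag, children, closed by BIN_END
            out += BIN_NONE + key + tiny_binary_dumps(v) + BIN_END
        elif isinstance(v, int):
            out += BIN_INT32 + key + struct.pack('<i', v)  # little-endian int32
        else:
            out += BIN_STRING + key + v.encode('utf-8') + b'\x00'
    return out

# the real format also terminates the top level with BIN_END,
# which this sketch leaves to the caller:
blob = tiny_binary_dumps({'app': {'id': 570, 'name': 'Dota 2'}}) + BIN_END
```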