Source snapshot: python-opentimestamps v0.4.2 (git commit eeafcd6cff8cea1cdb107b2daac47e0fa9249520)
==== .gitignore ====
*.pyc
local*.cfg
build/
dist/
opentimestamps.egg-info/
# IDEA IDE
.idea/
==== .travis.yml ====
git:
depth: 9999999
language: python
python:
- "3.4"
- "3.5"
- "3.6"
- "3.7"
- "3.8"
- "3.9"
# command to install dependencies
install:
- pip install -r requirements.txt
# command to run tests
script: python -m unittest discover -v
==== LICENSE ====
python-opentimestamps is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by the
Free Software Foundation, either version 3 of the License, or (at your option)
any later version.
python-opentimestamps is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License
below for more details.
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
==== README.md ====
# python-opentimestamps
Python3 library for creating and verifying OpenTimestamps proofs.
## Installation
From the PyPI repository:

```
pip3 install opentimestamps
```
## Structure
Similar to the author's `python-bitcoinlib`, the codebase is split between the
consensus-critical `opentimestamps.core.*` modules, and the
non-consensus-critical `opentimestamps.*` modules. The distinction is whether
changes to that code are likely to cause permanent incompatibilities, with
timestamp validation returning inconsistent results across versions.
## Unit tests

```
python3 -m unittest discover -v
```

Additionally, Travis CI is supported.
## SSL Root Certificates
On some macOS setups SSL root certificates may be missing. The following
commands can help resolve the resulting errors (the example assumes Python 3.7
with the Certifi package installed):
```
cd /Applications/Python\ 3.7
./Install\ Certificates.command
```
==== opentimestamps/__init__.py ====
# Copyright (C) 2018 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
__version__ = "0.4.2"
==== opentimestamps/bitcoin.py ====
# Copyright (C) 2016-2018 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
from opentimestamps.core.timestamp import Timestamp, cat_sha256d
from opentimestamps.core.op import OpPrepend
from opentimestamps.core.notary import BitcoinBlockHeaderAttestation
def __make_btc_block_merkle_tree(blk_txids):
assert len(blk_txids) > 0
digests = blk_txids
while len(digests) > 1:
# The famously broken Satoshi algorithm: if the # of digests at this
# level is odd, double the last one.
if len(digests) % 2:
digests.append(digests[-1].msg)
next_level = []
for i in range(0, len(digests), 2):
next_level.append(cat_sha256d(digests[i], digests[i + 1]))
digests = next_level
return digests[0]
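The odd-duplication rule above can be exercised in isolation. This is a standalone sketch using only `hashlib` (the function names are illustrative, not library API), computing a Bitcoin-style merkle root from raw 32-byte txids:

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def btc_merkle_root(txids):
    """Merkle root over raw txids, duplicating the last digest at odd levels."""
    assert txids
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            # The same "famously broken" rule: double the last digest.
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

A block containing only a coinbase transaction uses its txid as the merkle root directly, which the loop reproduces by never executing.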
def make_timestamp_from_block(digest, block, blockheight, *, max_tx_size=1000):
"""Make a timestamp for a message in a block
Every transaction within the block is serialized and checked to see if the
raw serialized bytes contain the message. If one or more transactions do,
the smallest transaction is used to create a timestamp proof for that
specific message to the block header.
To limit the maximum size of proof, transactions larger than `max_tx_size`
are ignored.
Returns a timestamp for that message on success, None on failure.
"""
# Note how strategy changes if we add SHA256 midstate support
len_smallest_tx_found = max_tx_size + 1
commitment_tx = None
prefix = None
suffix = None
for tx in block.vtx:
serialized_tx = tx.serialize(params={'include_witness':False})
if len(serialized_tx) > len_smallest_tx_found:
continue
try:
i = serialized_tx.index(digest)
except ValueError:
continue
# Found it!
commitment_tx = tx
prefix = serialized_tx[0:i]
suffix = serialized_tx[i + len(digest):]
len_smallest_tx_found = len(serialized_tx)
if len_smallest_tx_found > max_tx_size:
return None
digest_timestamp = Timestamp(digest)
# Add the commitment ops necessary to go from the digest to the txid op
prefix_stamp = digest_timestamp.ops.add(OpPrepend(prefix))
txid_stamp = cat_sha256d(prefix_stamp, suffix)
assert commitment_tx.GetTxid() == txid_stamp.msg
# Create the txid list, with our commitment txid op in the appropriate
# place
block_txid_stamps = []
for tx in block.vtx:
if tx.GetTxid() != txid_stamp.msg:
block_txid_stamps.append(Timestamp(tx.GetTxid()))
else:
block_txid_stamps.append(txid_stamp)
# Build the merkle tree
merkleroot_stamp = __make_btc_block_merkle_tree(block_txid_stamps)
assert merkleroot_stamp.msg == block.hashMerkleRoot
attestation = BitcoinBlockHeaderAttestation(blockheight)
merkleroot_stamp.attestations.add(attestation)
return digest_timestamp
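The proof built above boils down to three commitment ops: prepend `prefix`, append `suffix`, then double-SHA256 to reach the txid. A minimal sketch with mock transaction bytes (not a real serialized transaction) shows why this commits the digest to the txid:

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

digest = hashlib.sha256(b'hello world').digest()

# Stand-in for a serialized transaction whose raw bytes embed the digest,
# e.g. via an OP_RETURN output; the surrounding bytes are illustrative.
serialized_tx = b'\x01\x00\x00\x00' + digest + b'\x00\x00\x00\x00'

i = serialized_tx.index(digest)
prefix, suffix = serialized_tx[:i], serialized_tx[i + len(digest):]

# Replaying prepend/append/sha256d on the bare digest reproduces the txid.
txid = sha256d(prefix + digest + suffix)
assert txid == sha256d(serialized_tx)
```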
==== opentimestamps/calendar.py ====
# Copyright (C) 2016-2018 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import binascii
import fnmatch
import urllib.error
import urllib.parse
import urllib.request
from urllib.parse import urljoin
from opentimestamps.core.timestamp import Timestamp
from opentimestamps.core.serialize import BytesDeserializationContext
def get_sanitised_resp_msg(exp):
"""Get the sanitised response message from a calendar response
Returns the sanitised message, with any character not in the whitelist replaced by '_'
"""
# Note how new lines are _not_ allowed: this is important, as otherwise the
# message could include a second line pretending to be something else.
WHITELIST = b'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789#-.,; '
# Two lines of text
raw_msg = bytearray(exp.read(160))
for i in range(len(raw_msg)):
if raw_msg[i] not in WHITELIST:
raw_msg[i] = ord('_')
return raw_msg.decode()
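The same sanitisation can be demonstrated standalone; `sanitise` here is an illustrative re-implementation, not part of the library:

```python
WHITELIST = b'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789#-.,; '

def sanitise(raw: bytes, limit: int = 160) -> str:
    """Replace non-whitelisted bytes with '_', truncating to limit bytes."""
    msg = bytearray(raw[:limit])
    for i, byte in enumerate(msg):
        if byte not in WHITELIST:
            msg[i] = ord('_')
    return msg.decode()
```

Because newlines are replaced, a malicious calendar cannot smuggle a second line (say, a fake header) into the sanitised error message.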
class CommitmentNotFoundError(KeyError):
def __init__(self, reason):
super().__init__(reason)
self.reason = reason
class RemoteCalendar:
"""Remote calendar server interface"""
def __init__(self, url, user_agent="python-opentimestamps"):
if not isinstance(url, str):
raise TypeError("URL must be a string")
self.url = url
self.request_headers = {"Accept": "application/vnd.opentimestamps.v1",
"User-Agent": user_agent}
def submit(self, digest, timeout=None):
"""Submit a digest to the calendar
Returns a Timestamp committing to that digest
"""
req = urllib.request.Request(urljoin(self.url, 'digest'), data=digest, headers=self.request_headers)
with urllib.request.urlopen(req, timeout=timeout) as resp:
if resp.status != 200:
raise Exception("Unknown response from calendar: %d" % resp.status)
# FIXME: Not a particularly nice way of handling this, but it'll do
# the job for now.
resp_bytes = resp.read(10001)  # read one past the limit so the check below can fire
if len(resp_bytes) > 10000:
raise Exception("Calendar response exceeded size limit")
ctx = BytesDeserializationContext(resp_bytes)
return Timestamp.deserialize(ctx, digest)
def get_timestamp(self, commitment, timeout=None):
"""Get a timestamp for a given commitment
Raises KeyError if the calendar doesn't have that commitment
"""
req = urllib.request.Request(
urljoin(self.url, 'timestamp/' + binascii.hexlify(commitment).decode('utf8')),
headers=self.request_headers)
try:
with urllib.request.urlopen(req, timeout=timeout) as resp:
if resp.status == 200:
# FIXME: Not a particularly nice way of handling this, but it'll do
# the job for now.
resp_bytes = resp.read(10001)  # read one past the limit so the check below can fire
if len(resp_bytes) > 10000:
raise Exception("Calendar response exceeded size limit")
ctx = BytesDeserializationContext(resp_bytes)
return Timestamp.deserialize(ctx, commitment)
else:
raise Exception("Unknown response from calendar: %d" % resp.status)
except urllib.error.HTTPError as exp:
if exp.code == 404:
raise CommitmentNotFoundError(get_sanitised_resp_msg(exp))
else:
raise exp
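Both calls bound the response size with the same read-then-check pattern. Factored into a helper (an illustrative name, not library API), the key point is reading one byte past the limit, since `fd.read(limit)` alone can never return more than `limit` bytes and an over-size response would go undetected:

```python
import io

def read_bounded(fd, limit: int = 10000) -> bytes:
    """Read at most limit bytes, raising if the stream holds more."""
    data = fd.read(limit + 1)
    if len(data) > limit:
        raise ValueError("response exceeded %d byte limit" % limit)
    return data
```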
class UrlWhitelist(set):
"""Glob-matching whitelist for URLs"""
def __init__(self, urls=()):
for url in urls:
self.add(url)
def add(self, url):
if not isinstance(url, str):
raise TypeError("URL must be a string")
if url.startswith('http://') or url.startswith('https://'):
parsed_url = urllib.parse.urlparse(url)
# FIXME: should have a more friendly error message
assert not parsed_url.params and not parsed_url.query and not parsed_url.fragment
set.add(self, parsed_url)
else:
self.add('http://' + url)
self.add('https://' + url)
def __contains__(self, url):
parsed_url = urllib.parse.urlparse(url)
# FIXME: probably should tell user why...
if parsed_url.params or parsed_url.query or parsed_url.fragment:
return False
for pattern in self:
if (parsed_url.scheme == pattern.scheme and
parsed_url.path == pattern.path and
fnmatch.fnmatch(parsed_url.netloc, pattern.netloc)):
return True
else:
return False
def __repr__(self):
return 'UrlWhitelist([%s])' % ','.join("'%s'" % url.geturl() for url in self)
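`__contains__` above can be condensed into a standalone predicate (`url_allowed` is a hypothetical helper name) that makes the matching rules explicit: scheme and path must match exactly, the netloc is glob-matched, and any params, query, or fragment cause outright rejection:

```python
import fnmatch
import urllib.parse

def url_allowed(url, patterns):
    """Return True if url matches a whitelisted glob pattern."""
    parsed = urllib.parse.urlparse(url)
    if parsed.params or parsed.query or parsed.fragment:
        return False
    for pat in (urllib.parse.urlparse(p) for p in patterns):
        if (parsed.scheme == pat.scheme
                and parsed.path == pat.path
                and fnmatch.fnmatch(parsed.netloc, pat.netloc)):
            return True
    return False
```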
DEFAULT_CALENDAR_WHITELIST = \
UrlWhitelist(['https://*.calendar.opentimestamps.org', # Run by Peter Todd
'https://*.calendar.eternitywall.com', # Run by Riccardo Casatta of Eternity Wall
'https://*.calendar.catallaxy.com', # Run by Vincent Cloutier of Catallaxy
])
DEFAULT_AGGREGATORS = \
('https://a.pool.opentimestamps.org',
'https://b.pool.opentimestamps.org',
'https://a.pool.eternitywall.com',
'https://ots.btc.catallaxy.com')
==== opentimestamps/core/__init__.py ====
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
"""Consensus-critical code
Basically, everything under opentimestamps.core has the property that changes
to it may break timestamp validation in non-backwards-compatible ways that are
difficult to recover from. We keep such code separate as a reminder to
ourselves to pay extra attention when making changes.
"""
==== opentimestamps/core/dubious/__init__.py ====
# Copyright (C) 2017 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
"""Timestamp proofs with dubious security
By "dubious" we mean techniques whose security model is uncertain, or likely to
unexpectedly weaken, and should be evaluated on a case-by-case basis. Proofs
falling under this category should be avoided for general production usage.
"""
==== opentimestamps/core/dubious/notary.py ====
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
"""Dubious Timestamp signature verification"""
import opentimestamps.core.serialize
import opentimestamps.core.notary as notary
class EthereumBlockHeaderAttestation(notary.TimeAttestation):
"""Signed by the Ethereum blockchain
The commitment digest will be the transactionsRoot (the transaction merkle root) of the block header.
Ethereum attestations are in the "dubious" module as what exactly Ethereum
is has changed repeatedly in the past due to consensus failures and forks;
as of writing the Ethereum developers plan to radically change Ethereum's
consensus model to proof-of-stake, whose security model is at best dubious.
"""
TAG = bytes.fromhex('30fe8087b5c7ead7')
def __init__(self, height):
self.height = height
def __eq__(self, other):
if other.__class__ is EthereumBlockHeaderAttestation:
return self.height == other.height
else:
return super().__eq__(other)
def __lt__(self, other):
if other.__class__ is EthereumBlockHeaderAttestation:
return self.height < other.height
else:
return super().__lt__(other)
def __hash__(self):
return hash(self.height)
def verify_against_blockheader(self, digest, block):
"""Verify attestation against a block header
Returns the block time on success; raises VerificationError on failure.
"""
if len(digest) != 32:
raise opentimestamps.core.notary.VerificationError("Expected digest with length 32 bytes; got %d bytes" % len(digest))
elif digest != bytes.fromhex(block['transactionsRoot'][2:]):
raise opentimestamps.core.notary.VerificationError("Digest does not match merkleroot")
return block['timestamp']
def __repr__(self):
return 'EthereumBlockHeaderAttestation(%r)' % self.height
def _serialize_payload(self, ctx):
ctx.write_varuint(self.height)
@classmethod
def deserialize(cls, ctx):
height = ctx.read_varuint()
return EthereumBlockHeaderAttestation(height)
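`_serialize_payload` stores the height with `ctx.write_varuint`. A sketch of that varuint encoding, assuming the LEB128-style format from the OpenTimestamps serialization spec (7 bits per byte, least-significant first, high bit as continuation flag):

```python
def write_varuint(value: int) -> bytes:
    """Encode an unsigned integer, 7 bits per byte, high bit = continue."""
    out = bytearray()
    while True:
        byte = value & 0x7f
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def read_varuint(data: bytes) -> int:
    """Decode the matching format, raising on a truncated input."""
    value, shift = 0, 0
    for byte in data:
        value |= (byte & 0x7f) << shift
        if not byte & 0x80:
            return value
        shift += 7
    raise ValueError("truncated varuint")
```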
==== opentimestamps/core/git.py ====
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
"""Git integration"""
import dbm
import git
import io
import os
from opentimestamps.core.timestamp import Timestamp, DetachedTimestampFile, make_merkle_tree
from opentimestamps.core.op import OpAppend, OpPrepend, OpSHA256
class GitTreeTimestamper:
"""Efficient, privacy-preserving, git tree timestamping
Unfortunately, the way git calculates commit hashes is less than ideal for
timestamping. The first problem is of course the fact that they're SHA1
hashes: still good enough for timestamping, but all the same a dubious
choice of hash algorithm.
The second problem is more subtle: What if I want to extract a timestamp
proof for an individual file in the commit? Since git hashes tree objects
as linear blobs of data, the proof will contain a significant amount of
extraneous metadata about files other than the one you want - inefficient
and a privacy risk.
This class solves these problems by recursively re-hashing a git tree and all blobs
in it with SHA256, using a cache of previously calculated hashes for
efficiency. Each git tree is hashed as a merkle tree, allowing paths to
individual blobs to be extracted efficiently.
For privacy, we guarantee that given a timestamp for a single item in a
given tree, an attacker trying to guess the contents of any other item in
the tree can only do so by brute-forcing all other content in the tree
simultaneously. We achieve this by deterministically generating nonces for
each item in the tree based on the item's hash, and the contents of the
rest of the tree.
However, note that we do _not_ prevent the attacker from learning about the
directory structure of the repository, including the number of items in each
directory.
"""
def __init__(self, tree, db=None, file_hash_op=OpSHA256(), tree_hash_op=OpSHA256()):
self.tree = tree
if db is None:
os.makedirs(tree.repo.git_dir + '/ots', exist_ok=True)
# WARNING: change the version number if any of the following is
# changed; __init__() is consensus-critical!
db = dbm.open(tree.repo.git_dir + '/ots/tree-hash-cache-v3', 'c')
self.db = db
self.file_hash_op = file_hash_op
self.tree_hash_op = tree_hash_op
def do_item(item):
try:
return (item, Timestamp(db[item.hexsha]))
except KeyError:
timestamp = None
if isinstance(item, git.Blob):
timestamp = Timestamp(file_hash_op.hash_fd(item.data_stream[3]))
elif isinstance(item, git.Tree):
stamper = GitTreeTimestamper(item, db=db, file_hash_op=file_hash_op, tree_hash_op=tree_hash_op)
timestamp = stamper.timestamp
elif isinstance(item, git.Submodule):
# A submodule is just a git commit hash.
#
# Unfortunately we're not guaranteed to have the repo
# behind it, so all we can do is timestamp that SHA1 hash.
#
# We do run it through the tree_hash_op to make it
# indistinguishable from other things; consider the
# degenerate case where the only thing in a git repo was a
# submodule.
timestamp = Timestamp(tree_hash_op(item.binsha))
else:
raise NotImplementedError("Don't know what to do with %r" % item)
db[item.hexsha] = timestamp.msg
return (item, timestamp)
self.contents = tuple(do_item(item) for item in self.tree)
if len(self.contents) > 1:
# Deterministically nonce contents in an all-or-nothing transform. As
# mentioned in the class docstring, we want to ensure that the
# siblings of any leaf in the merkle tree don't give the attacker any
# information about what else is in the tree, unless the attacker
# already knows (or can brute-force) the entire contents of the tree.
#
# While not perfect - a user-provided persistent key would prevent the
# attacker from being able to brute-force the contents - this option
# has the advantage of being possible to calculate deterministically
# using only the tree itself, removing the need to keep secret keys
# that can easily be lost.
#
# First, calculate a nonce_key that depends on the entire contents of
# the tree. The 8-byte tag ensures the key calculated is unique for
# this purpose.
contents_sum = b''.join(stamp.msg for item, stamp in self.contents) + b'\x01\x89\x08\x0c\xfb\xd0\xe8\x08'
nonce_key = tree_hash_op.hash_fd(io.BytesIO(contents_sum))
# Second, calculate per-item nonces deterministically from that key,
# and add those nonces to the timestamps of every item in the tree.
#
# While we usually use 128-bit nonces, here we're using full-length
# nonces. Additionally, we pick append/prepend pseudo-randomly. This
# helps obscure the directory structure, as a commitment for a git tree
# is indistinguishable from an inner node in the per-git-tree merkle
# tree.
def deterministically_nonce_stamp(private_stamp):
nonce1 = tree_hash_op(private_stamp.msg + nonce_key)
nonce2 = tree_hash_op(nonce1)
side = OpPrepend if nonce1[0] & 0b1 else OpAppend
nonce_added = private_stamp.ops.add(side(nonce2))
return nonce_added.ops.add(tree_hash_op)
nonced_contents = (deterministically_nonce_stamp(stamp) for item, stamp in self.contents)
# Note how the current algorithm, if asked to timestamp a tree
# with a single thing in it, will return the hash of that thing
# directly. From the point of view of just committing to the data that's
# perfectly fine, and probably (slightly) better as it reveals a little
# less information about directory structure.
self.timestamp = make_merkle_tree(nonced_stamp for nonced_stamp in nonced_contents)
elif len(self.contents) == 1:
# If there's only one item in the tree, the fancy all-or-nothing
# transform above is just a waste of ops, so use the tree contents
# directly instead.
self.timestamp = tuple(self.contents)[0][1]
else:
raise AssertionError("Empty git tree")
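The nonce scheme described above, condensed to plain `hashlib` under the default SHA256 hash ops (`nonce_commitments` is an illustrative name, not library API):

```python
import hashlib

TAG = b'\x01\x89\x08\x0c\xfb\xd0\xe8\x08'  # per-purpose tag from the code above

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def nonce_commitments(msgs):
    """Derive per-item commitments as GitTreeTimestamper.__init__ does.

    nonce_key depends on every item in the tree, so a sibling hash alone
    reveals nothing unless the whole tree can be brute-forced.
    """
    nonce_key = sha256(b''.join(msgs) + TAG)
    out = []
    for msg in msgs:
        nonce1 = sha256(msg + nonce_key)
        nonce2 = sha256(nonce1)
        # prepend vs append is chosen pseudo-randomly from the nonce itself
        combined = nonce2 + msg if nonce1[0] & 1 else msg + nonce2
        out.append(sha256(combined))
    return out
```

Determinism is the point: the same tree always yields the same commitments, while changing any single item changes the key and hence every commitment.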
def __getitem__(self, path):
"""Get a DetachedTimestampFile for blob at path
The timestamp object returned will refer to self.timestamp, so
modifying self.timestamp will modify the timestamp returned.
If path does not exist, FileNotFoundError will be raised.
If path exists, but is not a blob, ValueError will be raised.
"""
for item, item_stamp in self.contents:
if item.path == path:
if isinstance(item, git.Blob):
return DetachedTimestampFile(self.file_hash_op, item_stamp)
else:
raise ValueError("Path %r is not a blob" % item.path)
elif path.startswith(item.path + '/'):
if isinstance(item, git.Tree):
# recurse
tree_stamper = GitTreeTimestamper(item, db=self.db, file_hash_op=self.file_hash_op, tree_hash_op=self.tree_hash_op)
tree_stamper.timestamp.merge(item_stamp)
return tree_stamper[path]
else:
raise FileNotFoundError("Path %r not found; prefix %r is a blob" % (path, item.path))
else:
raise FileNotFoundError("Path %r not found" % path)
==== opentimestamps/core/log.py ====
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
from opentimestamps.core.op import CryptOp
from opentimestamps.core.timestamp import Timestamp
from opentimestamps.core.serialize import (
BadMagicError, DeserializationError, StreamSerializationContext, StreamDeserializationContext
)
from opentimestamps.core.packetstream import PacketReader, PacketWriter, PacketMissingError
class TimestampLog:
"""Timestamps for append-only files
With append-only files such as log files, rather than timestamping once, you
want to timestamp them repeatedly as new data is appended to them. Logfile
timestamps support this usecase by allowing multiple timestamps on the same
file to be recorded, with each timestamp including the length of the log file
at the time that particular timestamp was created.
In addition, logfile timestamps are serialized such that they themselves can be
written to append-only storage.
"""
HEADER_MAGIC = b'\x00OpenTimestamps\x00\x00Log\x00\xd9\x19\xc5\x3a\x99\xb1\x12\xe9\xa6\xa1\x00'
class TimestampLogReader(TimestampLog):
@classmethod
def open(cls, fd):
"""Open an existing timestamp log
fd must be positioned at the start of the log; the header will be
immediately read and DeserializationError raised if incorrect.
"""
ctx = StreamDeserializationContext(fd)
actual_magic = ctx.read_bytes(len(cls.HEADER_MAGIC))
if cls.HEADER_MAGIC != actual_magic:
raise BadMagicError(cls.HEADER_MAGIC, actual_magic)
file_hash_op = CryptOp.deserialize(ctx)
return cls(fd, file_hash_op)
def __init__(self, fd, file_hash_op):
"""Create a timestamp log reader instance
You probably want to use TimestampLogReader.open() instead.
"""
self.fd = fd
self.file_hash_op = file_hash_op
def __iter__(self):
"""Iterate through all timestamps in the timestamp log"""
while True:
try:
reader = PacketReader(self.fd)
except PacketMissingError:
break
ctx = StreamDeserializationContext(reader)
try:
length = ctx.read_varuint()
file_hash = ctx.read_bytes(self.file_hash_op.DIGEST_LENGTH)
timestamp = Timestamp.deserialize(ctx, file_hash)
yield (length, timestamp)
except DeserializationError as exp:
# FIXME: should provide a way to get insight into these errors
pass
class TimestampLogWriter(TimestampLog):
@classmethod
def open(cls, fd):
"""Open an existing timestamp log for writing
fd must be both readable and writable, and must be positioned at the
beginning of the timestamp log file. The header will be immediately
read, with BadMagicError raised if it's incorrect.
"""
# Use the log reader to read the header information
reader = TimestampLogReader.open(fd)
# Parse the entries to find the last one
for stamp in reader:
pass
# FIXME: pad the end as necessary to deal with truncated writes
return cls(fd, reader.file_hash_op)
@classmethod
def create(cls, fd, file_hash_op):
"""Create a new timestamp log
Writes the header appropriately.
"""
ctx = StreamSerializationContext(fd)
ctx.write_bytes(cls.HEADER_MAGIC)
file_hash_op.serialize(ctx)
return cls(fd, file_hash_op)
def __init__(self, fd, file_hash_op):
"""Create a new timestamp log writer
You probably want to use the open() or create() methods instead.
"""
self.fd = fd
self.file_hash_op = file_hash_op
def append(self, length, timestamp):
"""Add a new timestamp to the log"""
if len(timestamp.msg) != self.file_hash_op.DIGEST_LENGTH:
raise ValueError("Timestamp msg length does not match expected digest length; %d != %d" % (len(timestamp.msg), self.file_hash_op.DIGEST_LENGTH))
with PacketWriter(self.fd) as packet_fd:
ctx = StreamSerializationContext(packet_fd)
ctx.write_varuint(length)
ctx.write_bytes(timestamp.msg)
timestamp.serialize(ctx)
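The classes above imply a simple on-disk layout: the header magic, the serialized file hash op, then one packet per entry, each entry holding a varuint length and a digest followed by the serialized timestamp. A stand-alone sketch of that layout, assuming OpSHA256 (tag `0x08`, from op.py) as the file hash op; the length and digest values are made up, and the timestamp serialization itself is omitted:

```python
import io

HEADER_MAGIC = b'\x00OpenTimestamps\x00\x00Log\x00\xd9\x19\xc5\x3a\x99\xb1\x12\xe9\xa6\xa1\x00'

def write_varuint(fd, value):
    # unsigned little-endian base-128 (LEB128), as in serialize.py
    while True:
        b = value & 0x7f
        value >>= 7
        fd.write(bytes([b | (0x80 if value else 0)]))
        if not value:
            return

fd = io.BytesIO()
fd.write(HEADER_MAGIC)
fd.write(b'\x08')                     # OpSHA256 tag: the file hash op
entry = io.BytesIO()
write_varuint(entry, 1000)            # log was 1000 bytes long when stamped
entry.write(b'\x11' * 32)             # placeholder sha256 digest
body = entry.getvalue()               # (serialized timestamp omitted here)
# one packet: a single <=255-byte sub-packet plus the 0x00 terminator
fd.write(bytes([len(body)]) + body + b'\x00')
data = fd.getvalue()
assert data.startswith(HEADER_MAGIC)
```

Because each entry lives in its own packet, a truncated write corrupts at most the most recent entry; earlier timestamps remain readable.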
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/core/notary.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
"""Timestamp signature verification"""
import opentimestamps.core.serialize
class VerificationError(Exception):
"""Attestation verification errors"""
class TimeAttestation:
"""Time-attesting signature"""
TAG = None
TAG_SIZE = 8
# FIXME: What should this be?
MAX_PAYLOAD_SIZE = 8192
"""Maximum size of a attestation payload"""
def _serialize_payload(self, ctx):
raise NotImplementedError
def serialize(self, ctx):
ctx.write_bytes(self.TAG)
payload_ctx = opentimestamps.core.serialize.BytesSerializationContext()
self._serialize_payload(payload_ctx)
ctx.write_varbytes(payload_ctx.getbytes())
def __eq__(self, other):
"""Implementation of equality operator
WARNING: The exact behavior of this isn't yet well-defined enough to be
used for consensus-critical applications.
"""
if isinstance(other, TimeAttestation):
assert self.__class__ is not other.__class__ # should be implemented by subclass
return False
else:
return NotImplemented
def __lt__(self, other):
"""Implementation of less than operator
WARNING: The exact behavior of this isn't yet well-defined enough to be
used for consensus-critical applications.
"""
if isinstance(other, TimeAttestation):
assert self.__class__ is not other.__class__ # should be implemented by subclass
return self.TAG < other.TAG
else:
return NotImplemented
@classmethod
def deserialize(cls, ctx):
tag = ctx.read_bytes(cls.TAG_SIZE)
serialized_attestation = ctx.read_varbytes(cls.MAX_PAYLOAD_SIZE)
payload_ctx = opentimestamps.core.serialize.BytesDeserializationContext(serialized_attestation)
# FIXME: probably a better way to do this...
import opentimestamps.core.dubious.notary
if tag == PendingAttestation.TAG:
r = PendingAttestation.deserialize(payload_ctx)
elif tag == BitcoinBlockHeaderAttestation.TAG:
r = BitcoinBlockHeaderAttestation.deserialize(payload_ctx)
elif tag == LitecoinBlockHeaderAttestation.TAG:
r = LitecoinBlockHeaderAttestation.deserialize(payload_ctx)
elif tag == opentimestamps.core.dubious.notary.EthereumBlockHeaderAttestation.TAG:
r = opentimestamps.core.dubious.notary.EthereumBlockHeaderAttestation.deserialize(payload_ctx)
else:
return UnknownAttestation(tag, serialized_attestation)
# If attestations want to have unspecified fields for future
# upgradability they should do so explicitly.
payload_ctx.assert_eof()
return r
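The dispatch above reads an 8-byte tag followed by a varbytes-wrapped payload; for PendingAttestation the payload is itself a varbytes-encoded URI. A stand-alone sketch of that wire format (the URI is a made-up example, and single-byte varuints are assumed since all lengths here are under 128):

```python
PENDING_TAG = bytes.fromhex('83dfe30d2ef90c8e')  # PendingAttestation.TAG

uri = b'https://alice.example.com'
payload = bytes([len(uri)]) + uri                # inner varbytes(uri)
serialized = PENDING_TAG + bytes([len(payload)]) + payload

# Parsing: read the tag, then the varbytes payload, then dispatch on the tag
tag, rest = serialized[:8], serialized[8:]
payload_len, body = rest[0], rest[1:]
assert tag == PENDING_TAG and payload_len == len(body)
parsed_uri = body[1:1 + body[0]].decode()        # unwrap the inner varbytes
```

An unrecognized tag would simply keep `body` verbatim, which is exactly what UnknownAttestation does so the bytes round-trip unchanged.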
class UnknownAttestation(TimeAttestation):
"""Placeholder for attestations that don't support"""
def __init__(self, tag, payload):
if tag.__class__ != bytes:
raise TypeError("tag must be bytes instance; got %r" % tag.__class__)
elif len(tag) != self.TAG_SIZE:
raise ValueError("tag must be exactly %d bytes long; got %d" % (self.TAG_SIZE, len(tag)))
if payload.__class__ != bytes:
raise TypeError("payload must be bytes instance; got %r" % tag.__class__)
elif len(payload) > self.MAX_PAYLOAD_SIZE:
raise ValueError("payload must be <= %d bytes long; got %d" % (self.MAX_PAYLOAD_SIZE, len(payload)))
# FIXME: we should check that tag is not one of the tags we do know
# about; if it is, the < and == operators and hash() will likely act
# strangely
self.TAG = tag
self.payload = payload
def __repr__(self):
return 'UnknownAttestation(%r, %r)' % (self.TAG, self.payload)
def __eq__(self, other):
if other.__class__ is UnknownAttestation:
return self.TAG == other.TAG and self.payload == other.payload
else:
return super().__eq__(other)
def __lt__(self, other):
if other.__class__ is UnknownAttestation:
return (self.TAG, self.payload) < (other.TAG, other.payload)
else:
return super().__lt__(other)
def __hash__(self):
return hash((self.TAG, self.payload))
def _serialize_payload(self, ctx):
# Notice how this is write_bytes, not write_varbytes - the latter would
# incorrectly add a length header to the actual payload.
ctx.write_bytes(self.payload)
# Note how neither of these signatures actually has the time...
class PendingAttestation(TimeAttestation):
"""Pending attestation
Commitment has been recorded in a remote calendar for future attestation,
and we have a URI to find a more complete timestamp in the future.
Nothing other than the URI is recorded, nor is there provision made to add
extra metadata (other than the URI) in future upgrades. The rationale here
is that remote calendars promise to keep commitments indefinitely, so from
the moment they are created it should be possible to find the commitment in
the calendar. Thus if you're not satisfied with the local verifiability of
a timestamp, the correct thing to do is just ask the remote calendar if
additional attestations are available and/or when they'll be available.
While we could add additional metadata, like what types of attestations the
remote calendar expects to be able to provide in the future, that metadata
can easily change in the future too. Given that we don't expect timestamps
to normally have more than a small number of remote calendar attestations,
it'd be better to have verifiers get the most recent status of such
information (possibly with appropriate negative response caching).
"""
TAG = bytes.fromhex('83dfe30d2ef90c8e')
MAX_URI_LENGTH = 1000
"""Maximum legal URI length, in bytes"""
ALLOWED_URI_CHARS = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._/:"
"""Characters allowed in URI's
Note how we've left out the characters necessary for parameters, queries,
or fragments, as well as IPv6 [] notation, percent-encoding special
characters, and @ login notation. Hopefully this keeps us out of trouble!
"""
@classmethod
def check_uri(cls, uri):
"""Check URI for validity
Raises ValueError appropriately
"""
if len(uri) > cls.MAX_URI_LENGTH:
raise ValueError("URI exceeds maximum length")
for char in uri:
if char not in cls.ALLOWED_URI_CHARS:
raise ValueError("URI contains invalid character %r" % bytes([char]))
def __init__(self, uri):
if not isinstance(uri, str):
raise TypeError("URI must be a string")
self.check_uri(uri.encode())
self.uri = uri
def __repr__(self):
return 'PendingAttestation(%r)' % self.uri
def __eq__(self, other):
if other.__class__ is PendingAttestation:
return self.uri == other.uri
else:
return super().__eq__(other)
def __lt__(self, other):
if other.__class__ is PendingAttestation:
return self.uri < other.uri
else:
return super().__lt__(other)
def __hash__(self):
return hash(self.uri)
def _serialize_payload(self, ctx):
ctx.write_varbytes(self.uri.encode())
@classmethod
def deserialize(cls, ctx):
utf8_uri = ctx.read_varbytes(cls.MAX_URI_LENGTH)
try:
cls.check_uri(utf8_uri)
except ValueError as exp:
raise opentimestamps.core.serialize.DeserializationError("Invalid URI: %r" % exp)
return PendingAttestation(utf8_uri.decode())
class BitcoinBlockHeaderAttestation(TimeAttestation):
"""Signed by the Bitcoin blockchain
The commitment digest will be the merkleroot of the blockheader.
The block height is recorded so that looking up the correct block header in
an external block header database doesn't require every header to be stored
locally (33MB and counting). (Remember that a memory-constrained local
client can save an MMR that commits to all blocks, and use an external service to fill
in pruned details).
Otherwise no additional redundant data about the block header is recorded.
This is very intentional: since the attestation contains (nearly) the
absolute bare minimum amount of data, we encourage implementations to do
the correct thing and get the block header from a by-height index, check
that the merkleroots match, and then calculate the time from the header
information. Providing more data would encourage implementations to cheat.
Remember that the only thing that would invalidate the block height is a
reorg, but in the event of a reorg the merkleroot will be invalid anyway,
so there's no point to recording data in the attestation like the header
itself. At best that would just give us extra confirmation that a reorg
made the attestation invalid; reorgs deep enough to invalidate timestamps are
exceptionally rare events anyway, so it's better to just tell the user the
timestamp can't be verified than to add almost-never-tested code to handle
that case more gracefully.
"""
TAG = bytes.fromhex('0588960d73d71901')
def __init__(self, height):
self.height = height
def __eq__(self, other):
if other.__class__ is BitcoinBlockHeaderAttestation:
return self.height == other.height
else:
return super().__eq__(other)
def __lt__(self, other):
if other.__class__ is BitcoinBlockHeaderAttestation:
return self.height < other.height
else:
return super().__lt__(other)
def __hash__(self):
return hash(self.height)
def verify_against_blockheader(self, digest, block_header):
"""Verify attestation against a block header
Returns the block time on success; raises VerificationError on failure.
"""
if len(digest) != 32:
raise VerificationError("Expected digest with length 32 bytes; got %d bytes" % len(digest))
elif digest != block_header.hashMerkleRoot:
raise VerificationError("Digest does not match merkleroot")
return block_header.nTime
def __repr__(self):
return 'BitcoinBlockHeaderAttestation(%r)' % self.height
def _serialize_payload(self, ctx):
ctx.write_varuint(self.height)
@classmethod
def deserialize(cls, ctx):
height = ctx.read_varuint()
return BitcoinBlockHeaderAttestation(height)
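The verification rule described above can be sketched stand-alone with a stand-in block header; the merkleroot and time values below are fabricated for illustration:

```python
from collections import namedtuple

# Minimal stand-in for a block header object; real implementations would
# fetch the header from a by-height index.
BlockHeader = namedtuple('BlockHeader', ['hashMerkleRoot', 'nTime'])

def verify(digest, header):
    if len(digest) != 32:
        raise ValueError("expected 32-byte digest")
    if digest != header.hashMerkleRoot:
        raise ValueError("digest does not match merkleroot")
    return header.nTime   # the attested time comes from the header itself

root = b'\xab' * 32
header = BlockHeader(hashMerkleRoot=root, nTime=1461788472)
assert verify(root, header) == 1461788472
```

Note how the attestation never carries the time: the verifier derives it from the matching header, which is what keeps implementations honest.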
class LitecoinBlockHeaderAttestation(TimeAttestation):
"""Signed by the Litecoin blockchain
Identical in design to the BitcoinBlockHeaderAttestation.
"""
TAG = bytes.fromhex('06869a0d73d71b45')
def __init__(self, height):
self.height = height
def __eq__(self, other):
if other.__class__ is LitecoinBlockHeaderAttestation:
return self.height == other.height
else:
return super().__eq__(other)
def __lt__(self, other):
if other.__class__ is LitecoinBlockHeaderAttestation:
return self.height < other.height
else:
return super().__lt__(other)
def __hash__(self):
return hash(self.height)
def verify_against_blockheader(self, digest, block_header):
"""Verify attestation against a block header
Not implemented here until there is a well-maintained Litecoin
python library
"""
raise NotImplementedError()
def __repr__(self):
return 'LitecoinBlockHeaderAttestation(%r)' % self.height
def _serialize_payload(self, ctx):
ctx.write_varuint(self.height)
@classmethod
def deserialize(cls, ctx):
height = ctx.read_varuint()
return LitecoinBlockHeaderAttestation(height)
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/core/op.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import binascii
import hashlib
import sha3
import opentimestamps.core.serialize
class MsgValueError(ValueError):
"""Raised when an operation can't be applied to the specified message.
For example, because OpHexlify doubles the size of its input, we restrict
the size of the message it can be applied to in order to avoid running out of
memory; OpHexlify raises this exception when that happens.
"""
class OpArgValueError(ValueError):
"""Raised when an operation argument has an invalid value
For example, if OpAppend/OpPrepend's argument is too long.
"""
class Op(tuple):
"""Timestamp proof operations
Operations are the edges in the timestamp tree, with each operation taking
a message and zero or more arguments to produce a result.
"""
SUBCLS_BY_TAG = {}
__slots__ = []
MAX_RESULT_LENGTH = 4096
"""Maximum length of an Op result
For a verifier, this limit is what limits the maximum amount of memory you
need at any one time to verify a particular timestamp path; while verifying
a particular commitment operation path previously calculated results can be
discarded.
Of course, if everything was a merkle tree you never need to append/prepend
anything near 4KiB of data; 64 bytes would be plenty even with SHA512. The
main need for this is compatibility with existing systems like Bitcoin
timestamps and Certificate Transparency servers. While the pathological
limits required by both are quite large - 1MB and 16MiB respectively - 4KiB
is perfectly adequate in both cases for more reasonable usage.
Op subclasses should set this limit even lower if doing so is appropriate
for them.
"""
MAX_MSG_LENGTH = 4096
"""Maximum length of the message an Op can be applied too
Similar to the result length limit, this limit gives implementations a sane
constraint to work with; the maximum result-length limit implicitly
constrains maximum message length anyway.
Op subclasses should set this limit even lower if doing so is appropriate
for them.
"""
def __eq__(self, other):
if isinstance(other, Op):
return self.TAG == other.TAG and tuple(self) == tuple(other)
else:
return NotImplemented
def __ne__(self, other):
if isinstance(other, Op):
return self.TAG != other.TAG or tuple(self) != tuple(other)
else:
return NotImplemented
def __lt__(self, other):
if isinstance(other, Op):
if self.TAG == other.TAG:
return tuple(self) < tuple(other)
else:
return self.TAG < other.TAG
else:
return NotImplemented
def __le__(self, other):
if isinstance(other, Op):
if self.TAG == other.TAG:
return tuple(self) <= tuple(other)
else:
return self.TAG < other.TAG
else:
return NotImplemented
def __gt__(self, other):
if isinstance(other, Op):
if self.TAG == other.TAG:
return tuple(self) > tuple(other)
else:
return self.TAG > other.TAG
else:
return NotImplemented
def __ge__(self, other):
if isinstance(other, Op):
if self.TAG == other.TAG:
return tuple(self) >= tuple(other)
else:
return self.TAG > other.TAG
else:
return NotImplemented
def __hash__(self):
return self.TAG[0] ^ tuple.__hash__(self)
def _do_op_call(self, msg):
raise NotImplementedError
def __call__(self, msg):
"""Apply the operation to a message
Raises MsgValueError if the message value is invalid, such as it being
too long, or it causing the result to be too long.
"""
if not isinstance(msg, bytes):
raise TypeError("Expected message to be bytes; got %r" % msg.__class__)
elif len(msg) > self.MAX_MSG_LENGTH:
raise MsgValueError("Message too long; %d > %d" % (len(msg), self.MAX_MSG_LENGTH))
r = self._do_op_call(msg)
# No operation should allow the result to be empty; that would
# trivially allow the commitment DAG to have a cycle in it.
assert len(r)
if len(r) > self.MAX_RESULT_LENGTH:
raise MsgValueError("Result too long; %d > %d" % (len(r), self.MAX_RESULT_LENGTH))
else:
return r
def __repr__(self):
return '%s()' % self.__class__.__name__
def __str__(self):
return '%s' % self.TAG_NAME
@classmethod
def _register_op(cls, subcls):
cls.SUBCLS_BY_TAG[subcls.TAG] = subcls
if cls != Op:
cls.__base__._register_op(subcls)
return subcls
def serialize(self, ctx):
ctx.write_bytes(self.TAG)
@classmethod
def deserialize_from_tag(cls, ctx, tag):
if tag in cls.SUBCLS_BY_TAG:
return cls.SUBCLS_BY_TAG[tag].deserialize_from_tag(ctx, tag)
else:
raise opentimestamps.core.serialize.DeserializationError("Unknown operation tag 0x%0x" % tag[0])
@classmethod
def deserialize(cls, ctx):
tag = ctx.read_bytes(1)
return cls.deserialize_from_tag(ctx, tag)
class UnaryOp(Op):
"""Operations that act on a single message"""
SUBCLS_BY_TAG = {}
def __new__(cls):
return tuple.__new__(cls)
def serialize(self, ctx):
super().serialize(ctx)
@classmethod
def deserialize_from_tag(cls, ctx, tag):
if tag in cls.SUBCLS_BY_TAG:
return cls.SUBCLS_BY_TAG[tag]()
else:
raise opentimestamps.core.serialize.DeserializationError("Unknown unary op tag 0x%0x" % tag[0])
class BinaryOp(Op):
"""Operations that act on a message and a single argument"""
SUBCLS_BY_TAG = {}
def __new__(cls, arg):
if not isinstance(arg, bytes):
raise TypeError("arg must be bytes")
elif not len(arg):
raise OpArgValueError("%s arg can't be empty" % cls.__name__)
elif len(arg) > cls.MAX_RESULT_LENGTH:
raise OpArgValueError("%s arg too long: %d > %d" % (cls.__name__, len(arg), cls.MAX_RESULT_LENGTH))
return tuple.__new__(cls, (arg,))
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, self[0])
def __str__(self):
return '%s %s' % (self.TAG_NAME, binascii.hexlify(self[0]).decode('utf8'))
def serialize(self, ctx):
super().serialize(ctx)
ctx.write_varbytes(self[0])
@classmethod
def deserialize_from_tag(cls, ctx, tag):
if tag in cls.SUBCLS_BY_TAG:
arg = ctx.read_varbytes(cls.MAX_RESULT_LENGTH, min_len=1)
return cls.SUBCLS_BY_TAG[tag](arg)
else:
raise opentimestamps.core.serialize.DeserializationError("Unknown binary op tag 0x%0x" % tag[0])
@BinaryOp._register_op
class OpAppend(BinaryOp):
"""Append a suffix to a message"""
TAG = b'\xf0'
TAG_NAME = 'append'
def _do_op_call(self, msg):
return msg + self[0]
@BinaryOp._register_op
class OpPrepend(BinaryOp):
"""Prepend a prefix to a message"""
TAG = b'\xf1'
TAG_NAME = 'prepend'
def _do_op_call(self, msg):
return self[0] + msg
@UnaryOp._register_op
class OpReverse(UnaryOp):
TAG = b'\xf2'
TAG_NAME = 'reverse'
def _do_op_call(self, msg):
if not len(msg):
raise MsgValueError("Can't reverse an empty message")
import warnings
warnings.warn("OpReverse may get removed; see https://github.com/opentimestamps/python-opentimestamps/issues/5", PendingDeprecationWarning)
return msg[::-1]
@UnaryOp._register_op
class OpHexlify(UnaryOp):
"""Convert bytes to lower-case hexadecimal representation
Note that hexlify can only be performed on messages that aren't empty;
hexlify on an empty message would create a cycle in the commitment graph.
"""
TAG = b'\xf3'
TAG_NAME = 'hexlify'
MAX_MSG_LENGTH = UnaryOp.MAX_RESULT_LENGTH // 2
"""Maximum length of message that we'll hexlify
Every invocation of hexlify doubles the size of its input, this is simply
half the maximum result length.
"""
def _do_op_call(self, msg):
if not len(msg):
raise MsgValueError("Can't hexlify an empty message")
return binascii.hexlify(msg)
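Each op above is just a function from bytes to bytes, and a timestamp path composes them edge by edge. A stand-alone sketch of an append/prepend/sha256/hexlify path using only the standard library (the prefix and suffix bytes are arbitrary examples):

```python
import binascii
import hashlib

msg = b'hello'
step1 = msg + b'-suffix'                  # OpAppend(b'-suffix')
step2 = b'prefix-' + step1                # OpPrepend(b'prefix-')
step3 = hashlib.sha256(step2).digest()    # OpSHA256()
step4 = binascii.hexlify(step3)           # OpHexlify(): doubles the length

assert step2 == b'prefix-hello-suffix'
assert len(step3) == 32 and len(step4) == 64
```

The append/prepend steps are how a message gets positioned inside a Merkle branch before hashing; the result of one op is simply the message of the next.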
class CryptOp(UnaryOp):
"""Cryptographic transformations
These transformations have the unique property that for any length message,
the size of the result they return is fixed. Additionally, they're the only
type of operation that can be applied directly to a stream.
"""
__slots__ = []
SUBCLS_BY_TAG = {}
DIGEST_LENGTH = None
def _do_op_call(self, msg):
r = hashlib.new(self.HASHLIB_NAME, bytes(msg)).digest()
assert len(r) == self.DIGEST_LENGTH
return r
def hash_fd(self, fd):
hasher = hashlib.new(self.HASHLIB_NAME)
while True:
chunk = fd.read(2**20) # 1MB chunks
if chunk:
hasher.update(chunk)
else:
break
return hasher.digest()
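hash_fd() above streams a file through the hash in 1 MiB chunks so the whole file never has to sit in memory; the equivalent logic stand-alone, with a chunk boundary deliberately misaligned to exercise the loop:

```python
import hashlib
import io

def hash_stream(fd, name='sha256', chunk_size=2**20):
    # Feed the stream to the hash one chunk at a time; stop at EOF.
    hasher = hashlib.new(name)
    while True:
        chunk = fd.read(chunk_size)
        if not chunk:
            break
        hasher.update(chunk)
    return hasher.digest()

data = b'x' * (3 * 2**20 + 17)   # deliberately not a multiple of the chunk size
assert hash_stream(io.BytesIO(data)) == hashlib.sha256(data).digest()
```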
# Cryptographic operation tag numbers taken from RFC4880, although it's not
# guaranteed that they'll continue to match that RFC in the future.
@CryptOp._register_op
class OpSHA1(CryptOp):
# Remember that for timestamping, hash algorithms with collision attacks
# *are* secure! We've still proven that both messages existed prior to some
# point in time - the fact that they both have the same hash digest doesn't
# change that.
#
# Heck, even md5 is still secure enough for timestamping... but that's
# pushing our luck...
TAG = b'\x02'
TAG_NAME = 'sha1'
HASHLIB_NAME = "sha1"
DIGEST_LENGTH = 20
@CryptOp._register_op
class OpRIPEMD160(CryptOp):
TAG = b'\x03'
TAG_NAME = 'ripemd160'
HASHLIB_NAME = "ripemd160"
DIGEST_LENGTH = 20
@CryptOp._register_op
class OpSHA256(CryptOp):
TAG = b'\x08'
TAG_NAME = 'sha256'
HASHLIB_NAME = "sha256"
DIGEST_LENGTH = 32
@CryptOp._register_op
class OpKECCAK256(UnaryOp):
__slots__ = []
TAG = b'\x67'
TAG_NAME = 'keccak256'
DIGEST_LENGTH = 32
def _do_op_call(self, msg):
r = sha3.keccak_256(bytes(msg)).digest()
assert len(r) == self.DIGEST_LENGTH
return r
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/core/packetstream.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
"""Packet-writing support for append-only streams with truncation handling

Strictly append-only streams, such as files whose append-only attribute has
been set by chattr(1), are a useful way to avoid losing data. But using them
for complex serialized data poses a problem: truncated writes.

For instance, suppose we try to serialize the string 'Hello World!' to a stream
using var_bytes(). If everything goes correctly, the following will be written
to the stream:

    b'\x0cHello World!'

However, suppose that there's an IO error after the third byte, leaving the
file truncated:

    b'\x0cHe'

Since the stream is strictly append-only, if we write anything else to the
stream, the length byte will cause the deserializer to later incorrectly read
part of the next thing we write as though it were part of the original string.
While in theory we could fix this, doing so would require a lot of invasive
changes to the (de)serialization code.

This module implements a much simpler solution with variable-length packets.
Each packet can be any length, and it's guaranteed that in the event of a
truncated write, at worst the most recently written packet will be corrupted.
Secondly, it's guaranteed that in the event of a corrupt packet, additional
packets can be written successfully even if the underlying stream is
append-only.
"""
import io
import sys
class PacketWriter(io.BufferedIOBase):
"""Write an individual packet"""
def __init__(self, fd):
"""Create a new packet stream for writing
fd must be a buffered stream; a io.BufferedIOBase instance.
FIXME: fd must be blocking; the BlockingIOError exception isn't handled
correctly yet
"""
if not isinstance(fd, io.BufferedIOBase):
raise TypeError('fd must be buffered IO')
self.raw = fd
self.pending = b''
def write(self, buf):
if self.closed:
raise ValueError("write to closed packet")
pending = self.pending + buf
# the + 1 handles the case where the length of pending is an exact
# multiple of the max sub-packet size
for i in range(0, len(pending) + 1, 255):
chunk = pending[i:i+255]
if len(chunk) < 255:
assert 0 <= len(pending) - i < 255
self.pending = chunk
break
else:
assert len(chunk) == 255
try:
l = self.raw.write(b'\xff' + chunk)
assert l == 256
except io.BlockingIOError as exp:
# To support this, we'd need to look at characters_written to
# figure out what data from pending has been written.
raise Exception("non-blocking IO not yet supported: %r" % exp)
else:
assert False
return len(buf)
def flush_pending(self):
"""Flush pending data to the underlying stream
All pending data is written to the underlying stream, creating a
partial-length sub-packet if necessary. However the underlying stream
is _not_ flushed. If there is no pending data, this function is a
no-op.
"""
if self.closed:
raise ValueError("flush of closed packet")
if not self.pending:
return
assert len(self.pending) < 255
try:
    l = self.raw.write(bytes([len(self.pending)]) + self.pending)
    assert l == 1 + len(self.pending)
    self.pending = b''
except io.BlockingIOError as exp:
    # To support this, we'd need to look at characters_written to
    # figure out what data from pending has been written.
    raise Exception("non-blocking IO not yet supported: %r" % exp)
def flush(self):
"""Flush the packet to disk
All pending data is written to the underlying stream with
flush_pending(), and flush() is called on that stream.
"""
self.flush_pending()
try:
self.raw.flush()
except io.BlockingIOError as exp:
# To support this, we'd need to look at characters_written to
# figure out what data from pending has been written.
raise Exception("non-blocking IO not yet supported: %r" % exp)
def close(self):
"""Close the packet
All pending data is written to the underlying stream, and the packet is
closed.
"""
self.flush_pending()
self.raw.write(b'\x00') # terminator to close the packet
# Note how we didn't call flush above; BufferedIOBase.close() calls
# self.flush() for us.
super().close()
class PacketMissingError(IOError):
"""Raised when a packet is completely missing"""
class PacketReader(io.BufferedIOBase):
"""Read an individual packet"""
def __init__(self, fd):
"""Create a new packet stream reader
The first byte of the packet will be read immediately; if that read()
fails PacketMissingError will be raised.
"""
self.raw = fd
# Bytes remaining until the end of the current sub-packet
l = fd.read(1)
if not l:
raise PacketMissingError("Packet completely missing")
self.len_remaining_subpacket = l[0]
# Whether the end of the entire packet has been reached
self.end_of_packet = False
# How many bytes are known to have been truncated (None if not known yet)
self.truncated = None
def read(self, size=-1):
if self.end_of_packet:
return b''
r = []
remaining = size if size >= 0 else sys.maxsize
while remaining and not self.end_of_packet:
if self.len_remaining_subpacket:
# The current subpacket hasn't been completely read.
l = min(remaining, self.len_remaining_subpacket)
b = self.raw.read(l)
r.append(b)
self.len_remaining_subpacket -= len(b)
remaining -= len(b)
if len(b) < l:
# read returned less than requested, so the sub-packet must
# be truncated; record how many bytes are missing. Note how
# we add one to that figure to account for the
# end-of-packet marker.
self.truncated = l - len(b) + 1
self.end_of_packet = True
else:
# All of the current subpacket has been read, so start reading
# the next sub-packet.
# Get length of next sub-packet
l = self.raw.read(1)
if l == b'':
# We're truncated by exactly one byte, the end-of-packet
# marker.
self.truncated = 1
self.end_of_packet = True
else:
# Successfully read the length
self.len_remaining_subpacket = l[0]
if not self.len_remaining_subpacket:
self.end_of_packet = True
return b''.join(r)
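The framing that PacketWriter and PacketReader implement can be sketched stand-alone: data is split into sub-packets of at most 255 bytes, each prefixed by a length byte, with a zero length byte closing the packet. This simplified version writes partial sub-packets eagerly rather than buffering them the way PacketWriter does, but the bytes it produces parse the same way:

```python
import io

def write_packet(fd, data):
    # Each sub-packet: one length byte (1..255) followed by that many bytes.
    for i in range(0, len(data), 255):
        chunk = data[i:i + 255]
        fd.write(bytes([len(chunk)]) + chunk)
    fd.write(b'\x00')   # zero-length sub-packet: end-of-packet marker

def read_packet(fd):
    r = []
    while True:
        l = fd.read(1)[0]
        if not l:
            return b''.join(r)
        r.append(fd.read(l))

fd = io.BytesIO()
payload = bytes(range(256)) * 2          # 512 bytes: spans three sub-packets
write_packet(fd, payload)
fd.seek(0)
assert read_packet(fd) == payload
```

A truncated write can at worst chop the tail of the packet being written; the next packet starts cleanly at the next length byte appended.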
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/core/serialize.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
"""Consensus-critical recursive descent serialization/deserialization"""
import binascii
import io
class DeserializationError(Exception):
"""Base class for all errors encountered during deserialization"""
class BadMagicError(DeserializationError):
"""A magic number is incorrect
Raise this when the file format magic number is incorrect.
"""
def __init__(self, expected_magic, actual_magic):
super().__init__('Expected magic bytes 0x%s, but got 0x%s instead' % (binascii.hexlify(expected_magic).decode(),
binascii.hexlify(actual_magic).decode()))
class UnsupportedMajorVersion(DeserializationError):
"""Unsupported major version
Raise this when a major version is unsupported
"""
class TruncationError(DeserializationError):
"""Truncated data encountered while deserializing"""
class TrailingGarbageError(DeserializationError):
"""Trailing garbage found after deserialization finished
Raised when deserialization otherwise succeeds without errors, but excess
data is present after the data we expected to get.
"""
class RecursionLimitError(DeserializationError):
"""Data is too deeply nested to be deserialized
Raised when deserializing recursively defined data structures that exceed
the recursion limit for that particular data structure.
"""
class SerializerTypeError(TypeError):
"""Wrong type for specified serializer"""
class SerializerValueError(ValueError):
"""Inappropriate value to be serialized (of correct type)"""
class SerializationContext:
"""Context for serialization
Allows multiple serialization targets to share the same codebase, for
instance bytes, memoized serialization, hashing, etc.
"""
def write_bool(self, value):
"""Write a bool"""
raise NotImplementedError
def write_varuint(self, value):
"""Write a variable-length unsigned integer"""
raise NotImplementedError
def write_bytes(self, value):
"""Write fixed-length bytes"""
raise NotImplementedError
def write_varbytes(self, value):
"""Write variable-length bytes"""
raise NotImplementedError
class DeserializationContext:
"""Context for deserialization
Allows multiple deserialization sources to share the same codebase, for
instance bytes, memoized serialization, hashing, etc.
"""
def read_bool(self):
"""Read a bool"""
raise NotImplementedError
def read_varuint(self, max_int):
"""Read a variable-length unsigned integer"""
raise NotImplementedError
def read_bytes(self, expected_length):
"""Read fixed-length bytes"""
raise NotImplementedError
def read_varbytes(self, max_len, min_len=0):
"""Read variable-length bytes
No more than max_len bytes will be read, and at least min_len bytes must be present.
"""
raise NotImplementedError
def assert_magic(self, expected_magic):
"""Assert the presence of magic bytes
Raises BadMagicError if the magic bytes don't match, or if the read was
truncated.
Note that this isn't an assertion in the Python sense: debug/production
does not change the behavior of this function.
"""
raise NotImplementedError
def assert_eof(self):
"""Assert that we have reached the end of the data
Raises TrailingGarbageError(msg) if the end of file has not been reached.
Note that this isn't an assertion in the Python sense: debug/production
does not change the behavior of this function.
"""
raise NotImplementedError
class StreamSerializationContext(SerializationContext):
def __init__(self, fd):
"""Serialize to a stream"""
self.fd = fd
def write_bool(self, value):
if value is True:
self.fd.write(b'\xff')
elif value is False:
self.fd.write(b'\x00')
else:
raise TypeError('Expected bool; got %r' % value.__class__)
def write_varuint(self, value):
# unsigned little-endian base128 format (LEB128)
if value == 0:
self.fd.write(b'\x00')
else:
while value != 0:
b = value & 0b01111111
if value > 0b01111111:
b |= 0b10000000
self.fd.write(bytes([b]))
if value <= 0b01111111:
break
value >>= 7
def write_bytes(self, value):
self.fd.write(value)
def write_varbytes(self, value):
self.write_varuint(len(value))
self.fd.write(value)
class StreamDeserializationContext(DeserializationContext):
def __init__(self, fd):
"""Deserialize from a stream"""
self.fd = fd
def fd_read(self, l):
r = self.fd.read(l)
if len(r) != l:
raise TruncationError('Tried to read %d bytes but got only %d bytes' % \
(l, len(r)))
return r
def read_bool(self):
b = self.fd_read(1)[0]
if b == 0xff:
return True
elif b == 0x00:
return False
else:
raise DeserializationError('read_bool() expected 0xff or 0x00; got %d' % b)
def read_varuint(self):
value = 0
shift = 0
while True:
b = self.fd_read(1)[0]
value |= (b & 0b01111111) << shift
if not (b & 0b10000000):
break
shift += 7
return value
def read_bytes(self, expected_length=None):
if expected_length is None:
expected_length = self.read_varuint()
return self.fd_read(expected_length)
def read_varbytes(self, max_len, min_len=0):
l = self.read_varuint()
if l > max_len:
raise DeserializationError('varbytes max length exceeded; %d > %d' % (l, max_len))
if l < min_len:
raise DeserializationError('varbytes min length not met; %d < %d' % (l, min_len))
return self.fd_read(l)
def assert_magic(self, expected_magic):
actual_magic = self.fd.read(len(expected_magic))
if expected_magic != actual_magic:
raise BadMagicError(expected_magic, actual_magic)
def assert_eof(self):
excess = self.fd.read(1)
if excess:
raise TrailingGarbageError("Trailing garbage found after end of deserialized data")
class BytesSerializationContext(StreamSerializationContext):
def __init__(self):
"""Serialize to bytes"""
super().__init__(io.BytesIO())
def getbytes(self):
"""Return the bytes serialized to date"""
return self.fd.getvalue()
class BytesDeserializationContext(StreamDeserializationContext):
def __init__(self, buf):
"""Deserialize from bytes"""
super().__init__(io.BytesIO(buf))
# FIXME: need to check that there isn't extra crap at end of object
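The write_varuint()/read_varuint() pair above implements unsigned little-endian base128 (LEB128). A self-contained sketch of the same encode/decode loops (a standalone re-implementation for illustration, not an import of this module):

```python
import io

def write_varuint(fd, value):
    # unsigned little-endian base128 (LEB128): 7 payload bits per
    # byte, high bit set on every byte except the last
    if value == 0:
        fd.write(b'\x00')
        return
    while value != 0:
        b = value & 0x7f
        if value > 0x7f:
            b |= 0x80
        fd.write(bytes([b]))
        if value <= 0x7f:
            break
        value >>= 7

def read_varuint(fd):
    # accumulate 7 bits at a time until a byte with the high bit clear
    value = 0
    shift = 0
    while True:
        b = fd.read(1)[0]
        value |= (b & 0x7f) << shift
        if not (b & 0x80):
            break
        shift += 7
    return value

buf = io.BytesIO()
for n in (0, 1, 127, 128, 300, 2**64):
    write_varuint(buf, n)
buf.seek(0)
decoded = [read_varuint(buf) for _ in range(6)]
```

Because Python integers are arbitrary-precision, the same loop handles values well beyond 64 bits, which is why StreamDeserializationContext needs no width check here.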
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/core/timestamp.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import binascii
from bitcoin.core import CTransaction, SerializationError, b2lx, b2x
from opentimestamps.core.op import Op, CryptOp, OpSHA256, OpAppend, OpPrepend, MsgValueError
from opentimestamps.core.notary import TimeAttestation, BitcoinBlockHeaderAttestation, LitecoinBlockHeaderAttestation
import opentimestamps.core.serialize
class OpSet(dict):
"""Set of operations"""
__slots__ = ['__make_timestamp']
def __init__(self, make_timestamp_func):
self.__make_timestamp = make_timestamp_func
def add(self, key):
"""Add key
Returns the value associated with that key
"""
try:
return self[key]
except KeyError:
value = self.__make_timestamp(key)
self[key] = value
return value
def __setitem__(self, op, new_timestamp):
try:
existing_timestamp = self[op]
except KeyError:
dict.__setitem__(self, op, new_timestamp)
return
if existing_timestamp.msg != new_timestamp.msg:
raise ValueError("Can't change existing result timestamp: timestamps are for different messages")
dict.__setitem__(self, op, new_timestamp)
class Timestamp:
"""Proof that one or more attestations commit to a message
The proof is in the form of a tree, with each node being a message, and the
edges being operations acting on those messages. The leaves of the tree are
attestations that attest to the time at which the messages in the tree existed.
"""
__slots__ = ['__msg', 'attestations', 'ops']
@property
def msg(self):
return self.__msg
def __init__(self, msg):
if not isinstance(msg, bytes):
raise TypeError("Expected msg to be bytes; got %r" % msg.__class__)
elif len(msg) > Op.MAX_MSG_LENGTH:
raise ValueError("Message exceeds Op length limit; %d > %d" % (len(msg), Op.MAX_MSG_LENGTH))
self.__msg = bytes(msg)
self.attestations = set()
self.ops = OpSet(lambda op: Timestamp(op(msg)))
def __eq__(self, other):
if isinstance(other, Timestamp):
return self.__msg == other.__msg and self.attestations == other.attestations and self.ops == other.ops
else:
return False
def __repr__(self):
return 'Timestamp(<%s>)' % binascii.hexlify(self.__msg).decode('utf8')
def merge(self, other):
"""Add all operations and attestations from another timestamp to this one
Raises ValueError if the other timestamp isn't for the same message
"""
if not isinstance(other, Timestamp):
raise TypeError("Can only merge Timestamps together")
if self.__msg != other.__msg:
raise ValueError("Can't merge timestamps for different messages together")
self.attestations.update(other.attestations)
for other_op, other_op_stamp in other.ops.items():
our_op_stamp = self.ops.add(other_op)
our_op_stamp.merge(other_op_stamp)
def serialize(self, ctx):
if not len(self.attestations) and not len(self.ops):
raise ValueError("An empty timestamp can't be serialized")
sorted_attestations = sorted(self.attestations)
if len(sorted_attestations) > 1:
for attestation in sorted_attestations[0:-1]:
ctx.write_bytes(b'\xff\x00')
attestation.serialize(ctx)
if len(self.ops) == 0:
ctx.write_bytes(b'\x00')
sorted_attestations[-1].serialize(ctx)
elif len(self.ops) > 0:
if len(sorted_attestations) > 0:
ctx.write_bytes(b'\xff\x00')
sorted_attestations[-1].serialize(ctx)
sorted_ops = sorted(self.ops.items(), key=lambda item: item[0])
for op, stamp in sorted_ops[0:-1]:
ctx.write_bytes(b'\xff')
op.serialize(ctx)
stamp.serialize(ctx)
last_op, last_stamp = sorted_ops[-1]
last_op.serialize(ctx)
last_stamp.serialize(ctx)
@classmethod
def deserialize(cls, ctx, initial_msg, _recursion_limit=256):
"""Deserialize
Because the serialization format doesn't include the message that the
timestamp operates on, you have to provide it so that the correct
operation results can be calculated.
The message you provide is assumed to be correct; if it causes an op to
raise MsgValueError when the results are being calculated (done
immediately, not lazily), DeserializationError is raised instead.
"""
# FIXME: The recursion limit is arguably a bit of a hack given that
# it's relatively easily avoided with a different implementation. On
# the other hand, it has the advantage of being a very simple
# solution to the problem, and the limit is unlikely to be reached by
# the vast majority of timestamps anyway.
#
# FIXME: Corresponding code to detect this condition is missing from
# the serialization/__init__() code.
if not _recursion_limit:
raise opentimestamps.core.serialize.RecursionLimitError("Reached timestamp recursion depth limit while deserializing")
# FIXME: note how a lazy implementation would have different behavior
# with respect to deserialization errors; is this a good design?
self = cls(initial_msg)
def do_tag_or_attestation(tag):
if tag == b'\x00':
attestation = TimeAttestation.deserialize(ctx)
self.attestations.add(attestation)
else:
op = Op.deserialize_from_tag(ctx, tag)
try:
result = op(initial_msg)
except MsgValueError as exp:
raise opentimestamps.core.serialize.DeserializationError("Invalid timestamp; message invalid for op %r: %r" % (op, exp))
stamp = Timestamp.deserialize(ctx, result, _recursion_limit=_recursion_limit-1)
self.ops[op] = stamp
tag = ctx.read_bytes(1)
while tag == b'\xff':
do_tag_or_attestation(ctx.read_bytes(1))
tag = ctx.read_bytes(1)
do_tag_or_attestation(tag)
return self
def all_attestations(self):
"""Iterate over all attestations recursively
Returns iterable of (msg, attestation)
"""
for attestation in self.attestations:
yield (self.msg, attestation)
for op_stamp in self.ops.values():
yield from op_stamp.all_attestations()
def str_tree(self, indent=0, verbosity=0):
"""Convert to tree (for debugging)"""
class bcolors:
HEADER = '\033[95m'
OKBLUE = '\033[94m'
OKGREEN = '\033[92m'
WARNING = '\033[93m'
FAIL = '\033[91m'
ENDC = '\033[0m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
def str_result(verb, parameter, result):
rr = ""
if verb > 0 and result is not None:
rr += " == "
result_hex = b2x(result)
if parameter is not None:
parameter_hex = b2x(parameter)
try:
index = result_hex.index(parameter_hex)
parameter_hex_highlight = bcolors.BOLD + parameter_hex + bcolors.ENDC
if index == 0:
rr += parameter_hex_highlight + result_hex[index+len(parameter_hex):]
else:
rr += result_hex[0:index] + parameter_hex_highlight
except ValueError:
rr += result_hex
else:
rr += result_hex
return rr
r = ""
if len(self.attestations) > 0:
for attestation in sorted(self.attestations):
r += " "*indent + "verify %s" % str(attestation) + str_result(verbosity, self.msg, None) + "\n"
if attestation.__class__ == BitcoinBlockHeaderAttestation:
r += " "*indent + "# Bitcoin block merkle root " + b2lx(self.msg) + "\n"
if attestation.__class__ == LitecoinBlockHeaderAttestation:
r += " "*indent + "# Litecoin block merkle root " + b2lx(self.msg) + "\n"
if len(self.ops) > 1:
for op, timestamp in sorted(self.ops.items()):
try:
CTransaction.deserialize(self.msg)
r += " " * indent + "* Transaction id " + b2lx(
OpSHA256()(OpSHA256()(self.msg))) + "\n"
except SerializationError:
pass
cur_res = op(self.msg)
cur_par = op[0] if len(op) > 0 else None
r += " " * indent + " -> " + "%s" % str(op) + str_result(verbosity, cur_par, cur_res) + "\n"
r += timestamp.str_tree(indent+4, verbosity=verbosity)
elif len(self.ops) > 0:
try:
CTransaction.deserialize(self.msg)
r += " " * indent + "# Transaction id " + \
b2lx(OpSHA256()(OpSHA256()(self.msg))) + "\n"
except SerializationError:
pass
op = tuple(self.ops.keys())[0]
cur_res = op(self.msg)
cur_par = op[0] if len(op) > 0 else None
r += " " * indent + "%s" % str(op) + str_result(verbosity, cur_par, cur_res) + "\n"
r += tuple(self.ops.values())[0].str_tree(indent, verbosity=verbosity)
return r
class DetachedTimestampFile:
"""A file containing a timestamp for another file
Contains a timestamp, along with a header and the digest of the file.
"""
HEADER_MAGIC = b'\x00OpenTimestamps\x00\x00Proof\x00\xbf\x89\xe2\xe8\x84\xe8\x92\x94'
"""Header magic bytes
Designed to give the user some information in a hexdump, while being
identified as 'data' by the file utility.
"""
MIN_FILE_DIGEST_LENGTH = 20 # 160-bit hash
MAX_FILE_DIGEST_LENGTH = 32 # 256-bit hash
MAJOR_VERSION = 1
# While the git commit timestamps have a minor version, probably better to
# leave it out here: unlike Git commits, round-tripping is an issue when
# timestamps are upgraded, and we could end up with bugs related to not
# saving/updating minor version numbers correctly.
@property
def file_digest(self):
"""The digest of the file that was timestamped"""
return self.timestamp.msg
def __init__(self, file_hash_op, timestamp):
self.file_hash_op = file_hash_op
if len(timestamp.msg) != file_hash_op.DIGEST_LENGTH:
raise ValueError("Timestamp message length and file_hash_op digest length differ")
self.timestamp = timestamp
def __repr__(self):
return 'DetachedTimestampFile(<%s:%s>)' % (str(self.file_hash_op), binascii.hexlify(self.file_digest).decode('utf8'))
def __eq__(self, other):
return (self.__class__ == other.__class__ and
self.file_hash_op == other.file_hash_op and
self.timestamp == other.timestamp)
@classmethod
def from_fd(cls, file_hash_op, fd):
fd_hash = file_hash_op.hash_fd(fd)
return cls(file_hash_op, Timestamp(fd_hash))
def serialize(self, ctx):
ctx.write_bytes(self.HEADER_MAGIC)
ctx.write_varuint(self.MAJOR_VERSION)
self.file_hash_op.serialize(ctx)
assert self.file_hash_op.DIGEST_LENGTH == len(self.timestamp.msg)
ctx.write_bytes(self.timestamp.msg)
self.timestamp.serialize(ctx)
@classmethod
def deserialize(cls, ctx):
ctx.assert_magic(cls.HEADER_MAGIC)
major = ctx.read_varuint() # FIXME: max-int limit
if major != cls.MAJOR_VERSION:
raise opentimestamps.core.serialize.UnsupportedMajorVersion("Version %d detached timestamp files are not supported" % major)
file_hash_op = CryptOp.deserialize(ctx)
file_hash = ctx.read_bytes(file_hash_op.DIGEST_LENGTH)
timestamp = Timestamp.deserialize(ctx, file_hash)
ctx.assert_eof()
return DetachedTimestampFile(file_hash_op, timestamp)
def cat_then_unary_op(unary_op_cls, left, right):
"""Concatenate left and right, then perform a unary operation on them
left and right can be either timestamps or bytes.
Appropriate intermediary append/prepend operations will be created as
needed for left and right.
"""
if not isinstance(left, Timestamp):
left = Timestamp(left)
if not isinstance(right, Timestamp):
right = Timestamp(right)
left_append_stamp = left.ops.add(OpAppend(right.msg))
right_prepend_stamp = right.ops.add(OpPrepend(left.msg))
assert left_append_stamp == right_prepend_stamp
# Left and right should produce the same thing, so we can set the timestamp
# of the left to the right.
left.ops[OpAppend(right.msg)] = right_prepend_stamp
return right_prepend_stamp.ops.add(unary_op_cls())
def cat_sha256(left, right):
return cat_then_unary_op(OpSHA256, left, right)
def cat_sha256d(left, right):
sha256_timestamp = cat_sha256(left, right)
return sha256_timestamp.ops.add(OpSHA256())
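cat_sha256d() commits to the Bitcoin-style double-SHA256 of the concatenated messages. The digest it ultimately commits to can be sketched with hashlib alone (the helper name cat_sha256d_digest is hypothetical, for illustration; the real function returns a Timestamp, not bytes):

```python
import hashlib

def cat_sha256d_digest(left_msg, right_msg):
    # SHA256(SHA256(left || right)) -- the digest the cat_sha256d
    # timestamp above commits to (hypothetical helper, not library API)
    inner = hashlib.sha256(left_msg + right_msg).digest()
    return hashlib.sha256(inner).digest()

digest = cat_sha256d_digest(b'foo', b'bar')
```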
def make_merkle_tree(timestamps, binop=cat_sha256):
"""Merkelize a set of timestamps
A merkle tree of all the timestamps is built in-place using binop() to
timestamp each pair of timestamps. The exact algorithm used is structurally
identical to a merkle-mountain-range, although leaf sums aren't committed.
As this function is under the consensus-critical core, it's guaranteed that
the algorithm will not be changed in the future.
Returns the timestamp for the tip of the tree.
"""
stamps = timestamps
while True:
stamps = iter(stamps)
try:
prev_stamp = next(stamps)
except StopIteration:
raise ValueError("Need at least one timestamp")
next_stamps = []
for stamp in stamps:
if prev_stamp is not None:
next_stamps.append(binop(prev_stamp, stamp))
prev_stamp = None
else:
prev_stamp = stamp
if not next_stamps:
return prev_stamp
if prev_stamp is not None:
next_stamps.append(prev_stamp)
stamps = next_stamps
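The pairing schedule used by make_merkle_tree() can be illustrated standalone with plain strings and a toy binop that just parenthesizes its arguments (a sketch only; real use pairs Timestamps with cat_sha256):

```python
def merkle_tip(timestamps, binop):
    # same pairing schedule as make_merkle_tree() above: pair adjacent
    # items each round, carrying an odd item forward unchanged
    stamps = timestamps
    while True:
        stamps = iter(stamps)
        try:
            prev = next(stamps)
        except StopIteration:
            raise ValueError("Need at least one timestamp")
        next_round = []
        for stamp in stamps:
            if prev is not None:
                next_round.append(binop(prev, stamp))
                prev = None
            else:
                prev = stamp
        if not next_round:
            return prev
        if prev is not None:
            next_round.append(prev)
        stamps = next_round

tip = merkle_tip(['a', 'b', 'c', 'd', 'e'],
                 lambda l, r: '(%s%s)' % (l, r))
```

With five leaves the odd leaf 'e' is carried forward through two rounds and joined last, giving the mountain-range shape '(((ab)(cd))e)'.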
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/tests/__init__.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/tests/core/__init__.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/tests/core/dubious/__init__.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/tests/core/dubious/test_notary.py
# Copyright (C) 2017 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import unittest
from opentimestamps.core.serialize import *
from opentimestamps.core.notary import *
from opentimestamps.core.dubious.notary import *
class Test_EthereumBlockHeaderAttestation(unittest.TestCase):
def test_serialize(self):
attestation = EthereumBlockHeaderAttestation(0)
expected_serialized = bytes.fromhex('30fe8087b5c7ead7' + '0100')
ctx = BytesSerializationContext()
attestation.serialize(ctx)
self.assertEqual(ctx.getbytes(), expected_serialized)
ctx = BytesDeserializationContext(expected_serialized)
attestation2 = TimeAttestation.deserialize(ctx)
self.assertEqual(attestation2.height, 0)
def test_verify(self):
eth_block_1 = {'uncles': [], 'size': 537, 'hash': '0x88e96d4537bea4d9c05d12549907b32561d3bf31f45aae734cdc119f13406cb6', 'gasLimit': 5000, 'number': 1, 'totalDifficulty': 34351349760, 'stateRoot': '0xd67e4d450343046425ae4271474353857ab860dbc0a1dde64b41b5cd3a532bf3', 'extraData': '0x476574682f76312e302e302f6c696e75782f676f312e342e32', 'sha3Uncles': '0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347', 'mixHash': '0x969b900de27b6ac6a67742365dd65f55a0526c41fd18e1b16f1a1215c2e66f59', 'transactionsRoot': '0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421', 'sealFields': ['0x969b900de27b6ac6a67742365dd65f55a0526c41fd18e1b16f1a1215c2e66f59', '0x539bd4979fef1ec4'], 'transactions': [], 'parentHash': '0xd4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3', 'logsBloom': '0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000', 'author': '0x05a56e2d52c817161883f50c441c3228cfe54d9f', 'gasUsed': 0, 'timestamp': 1438269988, 'nonce': '0x539bd4979fef1ec4', 'difficulty': 17171480576, 'receiptsRoot': '0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421', 'miner': '0x05a56e2d52c817161883f50c441c3228cfe54d9f'}
attestation = EthereumBlockHeaderAttestation(1)
timestamp = attestation.verify_against_blockheader(bytes.fromhex("56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421"), eth_block_1)
self.assertEqual(1438269988, timestamp)
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/tests/core/test_git.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import unittest
import dbm
import git
import tempfile
from bitcoin.core import b2x
from opentimestamps.core.timestamp import *
from opentimestamps.core.op import *
from opentimestamps.core.git import *
class Test_GitTreeTimestamper(unittest.TestCase):
def setUp(self):
self.db_dirs = []
def tearDown(self):
for d in self.db_dirs:
d.cleanup()
del self.db_dirs
def make_stamper(self, commit):
# Yes, we're using our own git repo as the test data!
repo = git.Repo(__file__ + '../../../../../')
db_dir = tempfile.TemporaryDirectory()
self.db_dirs.append(db_dir)
db = dbm.open(db_dir.name + '/db', 'c')
tree = repo.commit(commit).tree
return GitTreeTimestamper(tree, db=db)
def test_blobs(self):
"""Git blob hashing"""
stamper = self.make_stamper("53c68bc976c581636b84c82fe814fab178adf8a6")
for expected_hexdigest, path in (('9e34b52cfa5724a4d87e9f7f47e2699c14d918285a20bf47f5a2a7345999e543', 'LICENSE'),
('ef83ecaca007e8afbfcca834b75510a98b6c10036374bb0d9f42a63f69efcd11', 'opentimestamps/__init__.py'),
('ef83ecaca007e8afbfcca834b75510a98b6c10036374bb0d9f42a63f69efcd11', 'opentimestamps/tests/__init__.py'),
('745bd9059cf01edabe3a61198fe1147e01ff57eec69e29f2e617b8e376427082', 'opentimestamps/tests/core/test_core.py'),
('ef83ecaca007e8afbfcca834b75510a98b6c10036374bb0d9f42a63f69efcd11', 'opentimestamps/tests/core/__init__.py'),
('7cd2b5a8723814be27fe6b224cc76e52275b1ff149de157ce374d290d032e875', 'opentimestamps/core/__init__.py'),
('d41fb0337e687b26f3f5dd61d10ec5080ff0bdc32f90f2022f7e2d9eeba91442', 'README')):
stamp = stamper[path]
actual_hexdigest = b2x(stamp.file_digest)
self.assertEqual(expected_hexdigest, actual_hexdigest)
stamper = self.make_stamper("30f6c357d578e0921dc6fffd67e2af1ce1ca0ff2")
empty_stamp = stamper["empty"]
self.assertEqual(empty_stamp.file_digest, bytes.fromhex("e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"))
def test_empty_tree(self):
"""Git tree with a single empty file"""
stamper = self.make_stamper("30f6c357d578e0921dc6fffd67e2af1ce1ca0ff2")
# There's a single empty file in this directory. Thus the nonce_key is:
nonce_key = OpSHA256()(OpSHA256()(b'') + # one empty file
b'\x01\x89\x08\x0c\xfb\xd0\xe8\x08') # tag
nonce1 = OpSHA256()(OpSHA256()(b'') + nonce_key)
assert nonce1[0] & 0b1 == 1
nonce2 = OpSHA256()(nonce1)
self.assertEqual(stamper.timestamp.msg,
OpSHA256()(b''))
self.assertEqual(stamper.timestamp.msg, b"\xe3\xb0\xc4B\x98\xfc\x1c\x14\x9a\xfb\xf4\xc8\x99o\xb9$'\xaeA\xe4d\x9b\x93L\xa4\x95\x99\x1bxR\xb8U")
def test_two_file_tree(self):
"""Git tree with a two files"""
stamper = self.make_stamper("78eb5cdc1ec638be72d6fb7a38c4d24f2be5d081")
nonce_key = OpSHA256()(OpSHA256()(b'a\n') +
OpSHA256()(b'b\n') +
b'\x01\x89\x08\x0c\xfb\xd0\xe8\x08') # tag
n_a_nonce1 = OpSHA256()(OpSHA256()(b'a\n') + nonce_key)
assert n_a_nonce1[0] & 0b1 == 0
n_a_nonce2 = OpSHA256()(n_a_nonce1)
n_a = OpSHA256()(OpSHA256()(b'a\n') + n_a_nonce2)
n_b_nonce1 = OpSHA256()(OpSHA256()(b'b\n') + nonce_key)
assert n_b_nonce1[0] & 0b1 == 0
n_b_nonce2 = OpSHA256()(n_b_nonce1)
n_b = OpSHA256()(OpSHA256()(b'b\n') + n_b_nonce2)
self.assertEqual(stamper.timestamp.msg,
OpSHA256()(n_a + n_b))
self.assertEqual(stamper.timestamp.msg, b's\x0e\xc2h\xd4\xb3\xa5\xd4\xe6\x0e\xe9\xb2t\x89@\x95\xc8c_F3\x81a=\xc2\xd4qy\xaf\x8e\xa0\x87')
def test_tree_with_children(self):
"""Git tree with child trees"""
stamper = self.make_stamper("b22192fffb9aad27eb57986e7fe89f8047340346")
# These correspond to the final values from the test_empty_tree() and
# test_two_file_tree() test cases above; the git commit we're testing
# has the trees associated with those test cases in the one/ and two/
# directories respectively.
d_one = b"\xe3\xb0\xc4B\x98\xfc\x1c\x14\x9a\xfb\xf4\xc8\x99o\xb9$'\xaeA\xe4d\x9b\x93L\xa4\x95\x99\x1bxR\xb8U"
d_two = b's\x0e\xc2h\xd4\xb3\xa5\xd4\xe6\x0e\xe9\xb2t\x89@\x95\xc8c_F3\x81a=\xc2\xd4qy\xaf\x8e\xa0\x87'
nonce_key = OpSHA256()(d_one + d_two +
b'\x01\x89\x08\x0c\xfb\xd0\xe8\x08') # tag
n_one_nonce1 = OpSHA256()(d_one + nonce_key)
assert n_one_nonce1[0] & 0b1 == 0
n_one_nonce2 = OpSHA256()(n_one_nonce1)
n_one = OpSHA256()(d_one + n_one_nonce2)
n_two_nonce1 = OpSHA256()(d_two + nonce_key)
assert n_two_nonce1[0] & 0b1 == 0
n_two_nonce2 = OpSHA256()(n_two_nonce1)
n_two = OpSHA256()(d_two + n_two_nonce2)
self.assertEqual(stamper.timestamp.msg,
OpSHA256()(n_one + n_two))
def test_tree_with_prefix_matching_blob(self):
"""Git tree with prefix matching blob"""
stamper = self.make_stamper("75736a2524c624c1a08a574938686f83de5a8a86")
two_a_stamp = stamper['two/a']
def test_submodule(self):
"""Git tree with submodule"""
stamper = self.make_stamper("a3efe73f270866bc8d8f6ce01d22c02f14b21a1a")
self.assertEqual(stamper.timestamp.msg,
OpSHA256()(bytes.fromhex('48b96efa66e2958e955a31a7d9b8f2ac8384b8b9')))
def test_dangling_symlink(self):
"""Git tree with dangling symlink"""
stamper = self.make_stamper("a59620c107a67c4b6323e6e96aed9929d6a89618")
self.assertEqual(stamper.timestamp.msg,
OpSHA256()(b'does-not-exist'))
def test_huge_tree(self):
"""Really big git tree"""
# would cause the OpSHA256 length limits to be exceeded if it were used
# directly
stamper = self.make_stamper("a52fe6e3d4b15057ff41df0509dd302bc5863c29")
self.assertEqual(stamper.timestamp.msg,
b'\x1dW\x9c\xea\x94&`\xc2\xfb\xba \x19Q\x0f\xdb\xf0\x7f\x14\xe3\x14zb\t\xdb\xcf\xf93I\xe9h\xb9\x8d')
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/tests/core/test_log.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import io
import unittest
from opentimestamps.core.timestamp import *
from opentimestamps.core.log import *
from opentimestamps.core.op import *
from opentimestamps.core.notary import *
class Test_TimestampLogWriter(unittest.TestCase):
def test_create(self):
"""Create new timestamp log"""
with io.BytesIO(b'') as fd:
writer = TimestampLogWriter.create(fd, OpSHA256())
del writer
self.assertEqual(fd.getvalue(),
b'\x00OpenTimestamps\x00\x00Log\x00\xd9\x19\xc5\x3a\x99\xb1\x12\xe9\xa6\xa1\x00' + # header
b'\x08') # sha256 op
def test_open(self):
"""Open existing timestamp log for writing"""
serialized = (b'\x00OpenTimestamps\x00\x00Log\x00\xd9\x19\xc5\x3a\x99\xb1\x12\xe9\xa6\xa1\x00' + # header
b'\x08') # sha256 op
with io.BytesIO(serialized) as fd:
writer = TimestampLogWriter.open(fd)
self.assertEqual(writer.file_hash_op, OpSHA256())
def test_append(self):
"""Append timestamps to the log"""
with io.BytesIO() as fd:
writer = TimestampLogWriter.create(fd, OpSHA256())
stamp = Timestamp(OpSHA256()(b''))
stamp.attestations.add(PendingAttestation('foobar'))
writer.append(0, stamp)
del writer
self.assertEqual(fd.getvalue(),
b'\x00OpenTimestamps\x00\x00Log\x00\xd9\x19\xc5\x3a\x99\xb1\x12\xe9\xa6\xa1\x00' + # header
b'\x08' + # sha256 op
b'\x32' + # start of first packet
b'\x00' + # length
bytes.fromhex('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855') + # sha256 of b'' +
b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobar' + # attestation
b'\x00') # end of packet
python-opentimestamps-python-opentimestamps-v0.4.2/opentimestamps/tests/core/test_notary.py
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import unittest
from opentimestamps.core.serialize import *
from opentimestamps.core.notary import *
class Test_UnknownAttestation(unittest.TestCase):
def test_repr(self):
"""repr(UnknownAttestation)"""
a = UnknownAttestation(bytes.fromhex('0102030405060708'), b'Hello World!')
self.assertEqual(repr(a), "UnknownAttestation(b'\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08', b'Hello World!')")
def test_serialization(self):
""""Serialization/deserialization of unknown attestations"""
expected_serialized = bytes.fromhex('0102030405060708') + b'\x0c' + b'Hello World!'
ctx = BytesDeserializationContext(expected_serialized)
a = TimeAttestation.deserialize(ctx)
self.assertEqual(a.TAG, bytes.fromhex('0102030405060708'))
self.assertEqual(a.payload, b'Hello World!')
# Test round trip
ctx = BytesSerializationContext()
a.serialize(ctx)
self.assertEqual(expected_serialized, ctx.getbytes())
def test_deserialize_too_long(self):
"""Deserialization of attestations with oversized payloads"""
ctx = BytesDeserializationContext(bytes.fromhex('0102030405060708') + b'\x81\x40' + b'x'*8193)
with self.assertRaises(DeserializationError):
TimeAttestation.deserialize(ctx)
# pending attestation
ctx = BytesDeserializationContext(bytes.fromhex('83dfe30d2ef90c8e') + b'\x81\x40' + b'x'*8193)
with self.assertRaises(DeserializationError):
TimeAttestation.deserialize(ctx)
class Test_PendingAttestation(unittest.TestCase):
def test_serialize(self):
pending_attestation = PendingAttestation('foobar')
expected_serialized = bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobar'
ctx = BytesSerializationContext()
pending_attestation.serialize(ctx)
self.assertEqual(ctx.getbytes(), expected_serialized)
ctx = BytesDeserializationContext(expected_serialized)
pending_attestation2 = TimeAttestation.deserialize(ctx)
self.assertEqual(pending_attestation2.uri, 'foobar')
def test_deserialize(self):
pending_attestation = PendingAttestation('foobar')
ctx = BytesSerializationContext()
pending_attestation.serialize(ctx)
self.assertEqual(ctx.getbytes(), bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobar')
def test_invalid_uri_deserialization(self):
# illegal character
ctx = BytesDeserializationContext(bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'fo%bar')
with self.assertRaises(DeserializationError):
TimeAttestation.deserialize(ctx)
# Too long
# Exactly 1000 bytes is ok
ctx = BytesDeserializationContext(bytes.fromhex('83dfe30d2ef90c8e' + 'ea07' + 'e807') + b'x'*1000)
TimeAttestation.deserialize(ctx)
# But 1001 isn't
ctx = BytesDeserializationContext(bytes.fromhex('83dfe30d2ef90c8e' + 'eb07' + 'e907') + b'x'*1001)
with self.assertRaises(DeserializationError):
TimeAttestation.deserialize(ctx)
def test_deserialization_trailing_garbage(self):
ctx = BytesDeserializationContext(bytes.fromhex('83dfe30d2ef90c8e' + '08' + '06') + b'foobarx')
with self.assertRaises(TrailingGarbageError):
TimeAttestation.deserialize(ctx)
class Test_BitcoinBlockHeaderAttestation(unittest.TestCase):
def test_deserialization_trailing_garbage(self):
ctx = BytesDeserializationContext(bytes.fromhex('0588960d73d71901' +
'02' + # two bytes of payload
'00' + # genesis block!
'ff')) # one byte of trailing garbage
with self.assertRaises(TrailingGarbageError):
TimeAttestation.deserialize(ctx)
class Test_AttestationsComparison(unittest.TestCase):
def test_attestation_comparison(self):
"""Comparing attestations"""
self.assertTrue(UnknownAttestation(b'unknown1', b'') < UnknownAttestation(b'unknown2', b''))
self.assertTrue(BitcoinBlockHeaderAttestation(1) < PendingAttestation(""))
opentimestamps/tests/core/test_op.py:
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import unittest
from opentimestamps.core.op import *
class Test_Op(unittest.TestCase):
def test_append(self):
"""Append operation"""
self.assertEqual(OpAppend(b'suffix')(b'msg'), b'msgsuffix')
def test_append_invalid_arg(self):
"""Append op, invalid argument"""
with self.assertRaises(TypeError):
OpAppend('')
with self.assertRaises(OpArgValueError):
OpAppend(b'')
with self.assertRaises(OpArgValueError):
OpAppend(b'.'*4097)
def test_append_invalid_msg(self):
"""Append op, invalid message"""
with self.assertRaises(TypeError):
OpAppend(b'.')(None)
with self.assertRaises(TypeError):
OpAppend(b'.')('')
OpAppend(b'.')(b'.'*4095)
with self.assertRaises(MsgValueError):
OpAppend(b'.')(b'.'*4096)
def test_prepend(self):
"""Prepend operation"""
self.assertEqual(OpPrepend(b'prefix')(b'msg'), b'prefixmsg')
def test_prepend_invalid_arg(self):
"""Prepend op, invalid argument"""
with self.assertRaises(TypeError):
OpPrepend('')
with self.assertRaises(OpArgValueError):
OpPrepend(b'')
with self.assertRaises(OpArgValueError):
OpPrepend(b'.'*4097)
def test_prepend_invalid_msg(self):
"""Prepend op, invalid message"""
with self.assertRaises(TypeError):
OpPrepend(b'.')(None)
with self.assertRaises(TypeError):
OpPrepend(b'.')('')
OpPrepend(b'.')(b'.'*4095)
with self.assertRaises(MsgValueError):
OpPrepend(b'.')(b'.'*4096)
# def test_reverse(self):
# """Reverse operation"""
# self.assertEqual(OpReverse()(b'abcd'), b'dcba')
def test_hexlify(self):
"""Hexlify operation"""
for msg, expected in ((b'\x00', b'00'),
(b'\xde\xad\xbe\xef', b'deadbeef')):
self.assertEqual(OpHexlify()(msg), expected)
def test_hexlify_msg_length_limits(self):
"""Hexlify message length limits"""
OpHexlify()(b'.'*2048)
with self.assertRaises(MsgValueError):
OpHexlify()(b'.'*2049)
with self.assertRaises(MsgValueError):
OpHexlify()(b'')
def test_sha256(self):
"""SHA256 operation"""
self.assertEqual(OpSHA256()(b''), bytes.fromhex('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'))
def test_ripemd160(self):
"""RIPEMD160 operation"""
self.assertEqual(OpRIPEMD160()(b''), bytes.fromhex('9c1185a5c5e9fc54612808977ee8f548b2258d31'))
def test_equality(self):
"""Operation equality"""
self.assertEqual(OpReverse(), OpReverse())
self.assertNotEqual(OpReverse(), OpSHA1())
self.assertEqual(OpAppend(b'foo'), OpAppend(b'foo'))
self.assertNotEqual(OpAppend(b'foo'), OpAppend(b'bar'))
self.assertNotEqual(OpAppend(b'foo'), OpPrepend(b'foo'))
def test_ordering(self):
"""Operation ordering"""
self.assertTrue(OpSHA1() < OpRIPEMD160())
# FIXME: more tests
def test_keccak256(self):
"""KECCAK256 operation"""
self.assertEqual(OpKECCAK256()(b''), bytes.fromhex('c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470'))
self.assertEqual(OpKECCAK256()(b'\x80'), bytes.fromhex('56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421'))
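The append/prepend/hash operations exercised above are plain commitment functions over byte strings. As a stdlib-only sketch of how they compose (the `op_append`/`op_sha256` helpers are illustrative stand-ins, not the library's API):

```python
import hashlib

# Hypothetical stand-ins for OpAppend and OpSHA256: each maps a
# message to a new message that commits to the original.
def op_append(suffix):
    return lambda msg: msg + suffix

def op_sha256():
    return lambda msg: hashlib.sha256(msg).digest()

# Composing append-then-hash commits b'msg' to the final digest,
# just as a timestamp proof chains operations together.
digest = op_sha256()(op_append(b'suffix')(b'msg'))
assert digest == hashlib.sha256(b'msgsuffix').digest()
```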
opentimestamps/tests/core/test_packetstream.py:
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import contextlib
import io
import os
import tempfile
import unittest
from opentimestamps.core.packetstream import *
class Test_PacketWriter(unittest.TestCase):
def test_open_close(self):
"""Open followed by close writes a packet"""
with tempfile.NamedTemporaryFile() as tmpfile:
with open(tmpfile.name, 'wb') as fd:
writer = PacketWriter(fd)
self.assertFalse(writer.closed)
writer.close()
self.assertTrue(writer.closed)
with open(tmpfile.name, 'rb') as fd:
self.assertEqual(fd.read(), b'\x00')
def test_with(self):
"""Using PacketWriter as a context manager"""
with tempfile.NamedTemporaryFile() as tmpfile:
with open(tmpfile.name, 'wb') as fd:
with PacketWriter(fd) as writer:
pass
with open(tmpfile.name, 'rb') as fd:
self.assertEqual(fd.read(), b'\x00')
@contextlib.contextmanager
def assert_written(self, expected_contents):
with tempfile.NamedTemporaryFile() as tmpfile:
with open(tmpfile.name, 'wb') as fd:
with PacketWriter(fd) as writer:
yield writer
with open(tmpfile.name, 'rb') as fd:
actual_contents = fd.read()
self.assertEqual(expected_contents, actual_contents)
def test_empty_write(self):
"""Empty writes are no-ops"""
with self.assert_written(b'\x00') as writer:
writer.write(b'')
with self.assert_written(b'\x00') as writer:
writer.write(b'')
writer.write(b'')
def test_sub_block_write(self):
"""Writing less than one sub-block"""
with self.assert_written(b'\x01a\x00') as writer:
writer.write(b'a')
with self.assert_written(b'\x02ab\x00') as writer:
writer.write(b'a')
writer.write(b'b')
with self.assert_written(b'\xff' + b'x'*255 + b'\x00') as writer:
writer.write(b'x'*254)
writer.write(b'x'*1)
with self.assert_written(b'\xff' + b'x'*255 + b'\x00') as writer:
writer.write(b'x'*255)
def test_multi_sub_block_writes(self):
"""Writing more than one sub-block"""
with self.assert_written(b'\xff' + b'x'*255 + b'\x01x' + b'\x00') as writer:
writer.write(b'x' * 255)
writer.write(b'x' * 1)
with self.assert_written(b'\xff' + b'x'*255 + b'\x01x' + b'\x00') as writer:
writer.write(b'x' * (255 + 1))
with self.assert_written(b'\xff' + b'x'*255 + b'\xfe' + b'x'*254 + b'\x00') as writer:
writer.write(b'x' * 255)
writer.write(b'x' * 254)
with self.assert_written(b'\xff' + b'x'*255 + b'\xfe' + b'x'*254 + b'\x00') as writer:
writer.write(b'x' * (255 + 254))
with self.assert_written(b'\xff' + b'x'*255 + b'\xff' + b'x'*255 + b'\x00') as writer:
writer.write(b'x' * 255)
writer.write(b'x' * 255)
with self.assert_written(b'\xff' + b'x'*255 + b'\xff' + b'x'*255 + b'\x00') as writer:
writer.write(b'x' * (255 + 255))
with self.assert_written(b'\xff' + b'x'*255 + b'\xff' + b'x'*255 + b'\x01x' + b'\x00') as writer:
writer.write(b'x' * 255)
writer.write(b'x' * 255)
writer.write(b'x' * 1)
with self.assert_written(b'\xff' + b'x'*255 + b'\xff' + b'x'*255 + b'\x01x' + b'\x00') as writer:
writer.write(b'x' * (255 + 255 + 1))
def test_flush(self):
with self.assert_written(b'\x05Hello' + b'\x06World!' + b'\x00') as writer:
writer.write(b'Hello')
writer.flush()
writer.write(b'World!')
def test_del_does_not_close(self):
"""Deleting a PacketWriter does not close the underlying stream"""
with io.BytesIO() as fd:
writer = PacketWriter(fd)
del writer
self.assertFalse(fd.closed)
class Test_PacketReader(unittest.TestCase):
def test_close_only_packet(self):
"""Close does not close underlying stream"""
with io.BytesIO(b'\x00') as fd:
reader = PacketReader(fd)
reader.close()
self.assertTrue(reader.closed)
self.assertFalse(fd.closed)
def test_valid_empty_packet(self):
"""Empty, but valid, packets"""
with io.BytesIO(b'\x00') as fd:
reader = PacketReader(fd)
self.assertEqual(fd.tell(), 1)
self.assertFalse(reader.end_of_packet)
# reading nothing is a no-op
self.assertEqual(reader.read(0), b'')
self.assertFalse(reader.end_of_packet)
self.assertEqual(fd.tell(), 1)
self.assertEqual(reader.read(1), b'')
self.assertTrue(reader.end_of_packet)
self.assertEqual(fd.tell(), 1)
def test_single_sub_packet_read(self):
"""Reading less than a single sub-packet"""
with io.BytesIO(b'\x0cHello World!\x00') as fd:
reader = PacketReader(fd)
self.assertEqual(fd.tell(), 1)
self.assertEqual(reader.read(12), b'Hello World!')
self.assertFalse(reader.end_of_packet) # reader hasn't found out yet
self.assertEqual(fd.tell(), 13)
self.assertEqual(reader.read(), b'')
self.assertTrue(reader.end_of_packet)
self.assertEqual(fd.tell(), 14)
def test_multi_sub_packet_read(self):
"""Reads that span multiple sub-packets"""
with io.BytesIO(b'\x01H' + b'\x0bello World!' + b'\x00') as fd:
reader = PacketReader(fd)
self.assertEqual(fd.tell(), 1)
self.assertEqual(reader.read(12), b'Hello World!')
self.assertFalse(reader.end_of_packet) # reader hasn't found out yet
self.assertEqual(fd.tell(), 14)
self.assertEqual(reader.read(), b'')
self.assertTrue(reader.end_of_packet)
self.assertEqual(fd.tell(), 15)
def test_missing_packet(self):
"""Completely missing packet raises PacketMissingError"""
with io.BytesIO(b'') as fd:
with self.assertRaises(PacketMissingError):
PacketReader(fd)
def test_truncated_packet(self):
"""Packet truncated at the first sub-packet"""
with io.BytesIO(b'\x01') as fd:
reader = PacketReader(fd)
self.assertEqual(reader.read(), b'')
self.assertTrue(reader.end_of_packet)
self.assertEqual(reader.truncated, 2) # 1 byte of sub-packet, and the end of packet marker missing
self.assertEqual(fd.tell(), 1)
with io.BytesIO(b'\x02a') as fd:
reader = PacketReader(fd)
self.assertEqual(reader.read(), b'a')
self.assertTrue(reader.end_of_packet)
self.assertEqual(reader.truncated, 2) # 1 byte of sub-packet, and the end of packet marker missing
self.assertEqual(fd.tell(), 2)
with io.BytesIO(b'\x04ab') as fd:
reader = PacketReader(fd)
self.assertEqual(reader.read(1), b'a')
self.assertEqual(reader.read(), b'b')
self.assertTrue(reader.end_of_packet)
self.assertEqual(reader.truncated, 3) # 2 bytes of sub-packet, and the end of packet marker missing
self.assertEqual(fd.tell(), 3)
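The byte vectors in these tests all follow the same framing: the payload is split into length-prefixed sub-blocks of at most 255 bytes, terminated by a zero-length marker. A minimal sketch of the writer side (`packetize` is my own helper, not the library's PacketWriter):

```python
def packetize(payload: bytes) -> bytes:
    """Frame payload as <=255-byte length-prefixed sub-blocks, 0x00-terminated."""
    out = bytearray()
    for i in range(0, len(payload), 255):
        chunk = payload[i:i + 255]
        out.append(len(chunk))   # one-byte sub-block length
        out += chunk
    out.append(0)                # zero-length sub-block ends the packet
    return bytes(out)

# Matches the vectors asserted above:
assert packetize(b'') == b'\x00'
assert packetize(b'a') == b'\x01a\x00'
assert packetize(b'x' * 256) == b'\xff' + b'x' * 255 + b'\x01x' + b'\x00'
```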
opentimestamps/tests/core/test_serialize.py:
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import unittest
from opentimestamps.core.serialize import *
class Test_serialization(unittest.TestCase):
def test_assert_eof(self):
"""End-of-file assertions"""
ctx = BytesDeserializationContext(b'')
ctx.assert_eof()
with self.assertRaises(TrailingGarbageError):
ctx = BytesDeserializationContext(b'b')
ctx.assert_eof()
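The length prefixes that appear throughout these test vectors (e.g. 'e807' for 1000 and 'ea07' for 1002 in the pending-attestation tests) are little-endian base-128 varuints. A sketch of the encoding, assuming the usual continuation-bit scheme (my helper, not the library's serializer):

```python
def varuint(n: int) -> bytes:
    """Encode n as a little-endian base-128 varuint (0x80 = continuation bit)."""
    out = bytearray()
    while True:
        byte = n & 0x7f
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

# Matches the prefixes used in the attestation test vectors:
assert varuint(6) == b'\x06'
assert varuint(1000) == bytes.fromhex('e807')
assert varuint(1002) == bytes.fromhex('ea07')
```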
opentimestamps/tests/core/test_timestamp.py:
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import unittest
from opentimestamps.core.notary import *
from opentimestamps.core.serialize import *
from opentimestamps.core.timestamp import *
from opentimestamps.core.op import *
class Test_Timestamp(unittest.TestCase):
def test_add_op(self):
"""Adding operations to timestamps"""
t = Timestamp(b'abcd')
t.ops.add(OpAppend(b'efgh'))
self.assertEqual(t.ops[OpAppend(b'efgh')], Timestamp(b'abcdefgh'))
# The second add should succeed with the timestamp unchanged
t.ops.add(OpAppend(b'efgh'))
self.assertEqual(t.ops[OpAppend(b'efgh')], Timestamp(b'abcdefgh'))
def test_set_result_timestamp(self):
"""Setting an op result timestamp"""
t1 = Timestamp(b'foo')
t2 = t1.ops.add(OpAppend(b'bar'))
t3 = t2.ops.add(OpAppend(b'baz'))
self.assertEqual(t1.ops[OpAppend(b'bar')].ops[OpAppend(b'baz')].msg, b'foobarbaz')
t1.ops[OpAppend(b'bar')] = Timestamp(b'foobar')
self.assertTrue(OpAppend(b'baz') not in t1.ops[OpAppend(b'bar')].ops)
def test_set_fail_if_wrong_message(self):
"""Setting an op result timestamp fails if the messages don't match"""
t = Timestamp(b'abcd')
t.ops.add(OpSHA256())
with self.assertRaises(ValueError):
t.ops[OpSHA256()] = Timestamp(b'wrong')
def test_merge(self):
"""Merging timestamps"""
with self.assertRaises(ValueError):
Timestamp(b'a').merge(Timestamp(b'b'))
t1 = Timestamp(b'a')
t2 = Timestamp(b'a')
t2.attestations.add(PendingAttestation('foobar'))
t1.merge(t2)
self.assertEqual(t1, t2)
# FIXME: more tests
def test_serialization(self):
"""Timestamp serialization/deserialization"""
def T(expected_instance, expected_serialized):
ctx = BytesSerializationContext()
expected_instance.serialize(ctx)
actual_serialized = ctx.getbytes()
self.assertEqual(expected_serialized, actual_serialized)
actual_instance = Timestamp.deserialize(BytesDeserializationContext(expected_serialized), expected_instance.msg)
self.assertEqual(expected_instance, actual_instance)
stamp = Timestamp(b'foo')
stamp.attestations.add(PendingAttestation('foobar'))
T(stamp, b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobar')
stamp.attestations.add(PendingAttestation('barfoo'))
T(stamp, b'\xff' + (b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'barfoo') + \
(b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobar'))
stamp.attestations.add(PendingAttestation('foobaz'))
T(stamp, b'\xff' + (b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'barfoo') + \
b'\xff' + (b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobar') + \
(b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobaz'))
sha256_stamp = stamp.ops.add(OpSHA256())
# Should fail - empty timestamps can't be serialized
with self.assertRaises(ValueError):
ctx = BytesSerializationContext()
stamp.serialize(ctx)
sha256_stamp.attestations.add(PendingAttestation('deeper'))
T(stamp, b'\xff' + (b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'barfoo') + \
b'\xff' + (b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobar') + \
b'\xff' + (b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobaz') + \
b'\x08' + (b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'deeper'))
def test_deserialization_invalid_op_msg(self):
"""Timestamp deserialization when message is invalid for op"""
serialized = (b'\xf0\x01\x00' + # OpAppend(b'\x00')
b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'barfoo') # perfectly valid pending attestation
# Perfectly ok, result is 4096 bytes long
Timestamp.deserialize(BytesDeserializationContext(serialized), b'.'*4095)
with self.assertRaises(DeserializationError):
# Not ok, result would be 4097 bytes long
Timestamp.deserialize(BytesDeserializationContext(serialized), b'.'*4096)
def test_deserialization_invalid_op_msg_2(self):
"""Deserialization of a timestamp that exceeds the recursion limit"""
serialized = (b'\x08'*256 + # OpSHA256, 256 times
b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'barfoo') # perfectly valid pending attestation
with self.assertRaises(RecursionLimitError):
Timestamp.deserialize(BytesDeserializationContext(serialized), b'')
def test_str_tree(self):
"""Converting timestamp to tree"""
t = Timestamp(b'')
t.ops.add(OpAppend(b'\x01'))
t.ops.add(OpSHA256())
self.assertEqual(t.str_tree(), " -> sha256\n -> append 01\n")
def test_equality(self):
"""Checking timestamp equality"""
t1 = Timestamp(b'')
t1.attestations = {BitcoinBlockHeaderAttestation(1), PendingAttestation("")}
t2 = Timestamp(b'')
self.assertFalse(t1 == t2)
t2.attestations = {PendingAttestation(""), BitcoinBlockHeaderAttestation(1)}
self.assertTrue(t1 == t2)
class Test_DetachedTimestampFile(unittest.TestCase):
def test_create_from_file(self):
file_stamp = DetachedTimestampFile.from_fd(OpSHA256(), io.BytesIO(b''))
self.assertEqual(file_stamp.file_digest, bytes.fromhex('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'))
def test_hash_fd(self):
file_stamp = DetachedTimestampFile.from_fd(OpSHA256(), io.BytesIO(b''))
result = file_stamp.file_hash_op.hash_fd(io.BytesIO(b''))
self.assertEqual(result, bytes.fromhex('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'))
def test_serialization(self):
def T(expected_instance, expected_serialized):
ctx = BytesSerializationContext()
expected_instance.serialize(ctx)
actual_serialized = ctx.getbytes()
self.assertEqual(expected_serialized, actual_serialized)
actual_instance = DetachedTimestampFile.deserialize(BytesDeserializationContext(expected_serialized))
self.assertEqual(expected_instance, actual_instance)
file_stamp = DetachedTimestampFile.from_fd(OpSHA256(), io.BytesIO(b''))
file_stamp.timestamp.attestations.add(PendingAttestation('foobar'))
T(file_stamp, (b'\x00OpenTimestamps\x00\x00Proof\x00\xbf\x89\xe2\xe8\x84\xe8\x92\x94' +
b'\x01' + # major version
b'\x08' + bytes.fromhex('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855') +
b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobar'))
def test_deserialization_failures(self):
"""Deserialization failures"""
for serialized, expected_error in ((b'', BadMagicError),
(b'\x00Not a OpenTimestamps Proof \x00\xbf\x89\xe2\xe8\x84\xe8\x92\x94\x01', BadMagicError),
(b'\x00OpenTimestamps\x00\x00Proof\x00\xbf\x89\xe2\xe8\x84\xe8\x92\x94\x00', UnsupportedMajorVersion),
(b'\x00OpenTimestamps\x00\x00Proof\x00\xbf\x89\xe2\xe8\x84\xe8\x92\x94\x01' +
b'\x42' + # Not a valid opcode
b'\x00'*32 +
b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobar', DeserializationError),
(b'\x00OpenTimestamps\x00\x00Proof\x00\xbf\x89\xe2\xe8\x84\xe8\x92\x94\x01' +
b'\x08' + bytes.fromhex('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855') +
b'\x00' + bytes.fromhex('83dfe30d2ef90c8e' + '07' + '06') + b'foobar' +
b'trailing garbage', TrailingGarbageError)):
with self.assertRaises(expected_error):
ctx = BytesDeserializationContext(serialized)
DetachedTimestampFile.deserialize(ctx)
class Test_cat_sha256(unittest.TestCase):
def test(self):
left = Timestamp(b'foo')
right = Timestamp(b'bar')
stamp_left_right= cat_sha256(left, right)
self.assertEqual(stamp_left_right.msg, bytes.fromhex('c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2'))
righter = Timestamp(b'baz')
stamp_righter = cat_sha256(stamp_left_right, righter)
self.assertEqual(stamp_righter.msg, bytes.fromhex('23388b16c66f1fa37ef14af8eb081712d570813e2afb8c8ae86efa726f3b7276'))
class Test_make_merkle_tree(unittest.TestCase):
def test(self):
def T(n, expected_merkle_root):
roots = [Timestamp(bytes([i])) for i in range(n)]
tip = make_merkle_tree(roots)
self.assertEqual(tip.msg, expected_merkle_root)
for root in roots:
pass # FIXME: check all roots lead to same timestamp
# Returned unchanged!
T(1, bytes.fromhex('00'))
# Manually calculated w/ pen-and-paper
T(2, bytes.fromhex('b413f47d13ee2fe6c845b2ee141af81de858df4ec549a58b7970bb96645bc8d2'))
T(3, bytes.fromhex('e6aa639123d8aac95d13d365ec3779dade4b49c083a8fed97d7bfc0d89bb6a5e'))
T(4, bytes.fromhex('7699a4fdd6b8b6908a344f73b8f05c8e1400f7253f544602c442ff5c65504b24'))
T(5, bytes.fromhex('aaa9609d0c949fee22c1c941a4432f32dc1c2de939e4af25207f0dc62df0dbd8'))
T(6, bytes.fromhex('ebdb4245f648b7e77b60f4f8a99a6d0529d1d372f98f35478b3284f16da93c06'))
T(7, bytes.fromhex('ba4603a311279dea32e8958bfb660c86237157bf79e6bfee857803e811d91b8f'))
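The expected messages above follow a concatenate-then-SHA256 pairing rule; assuming that is all cat_sha256 does to the underlying messages, the vectors can be reproduced with a plain-Python sketch (`cat_sha256_msg` is my own helper):

```python
import hashlib

def cat_sha256_msg(left: bytes, right: bytes) -> bytes:
    # Pair two messages into one message that commits to both.
    return hashlib.sha256(left + right).digest()

# Matches the first expected digest in Test_cat_sha256:
assert cat_sha256_msg(b'foo', b'bar') == bytes.fromhex(
    'c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2')

# Chaining pairs this way is how make_merkle_tree reduces a list of
# leaf timestamps to a single tip message (second vector above):
tip = cat_sha256_msg(cat_sha256_msg(b'foo', b'bar'), b'baz')
assert tip == bytes.fromhex(
    '23388b16c66f1fa37ef14af8eb081712d570813e2afb8c8ae86efa726f3b7276')
```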
opentimestamps/tests/test_bitcoin.py:
# Copyright (C) 2016-2018 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import unittest
from bitcoin.core import *
from opentimestamps.core.timestamp import *
from opentimestamps.bitcoin import *
class Test_make_timestamp_from_block(unittest.TestCase):
def test(self):
# genesis block!
block = CBlock.deserialize(x('010000006fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000982051fd1e4ba744bbbe680e1fee14677ba1a3c3540bf7b1cdb606e857233e0e61bc6649ffff001d01e362990101000000010000000000000000000000000000000000000000000000000000000000000000ffffffff0704ffff001d0104ffffffff0100f2052a0100000043410496b538e853519c726a2c91e61ec11600ae1390813a627c66fb8be7947be63c52da7589379515d4e0a604f8141781e62294721166bf621e73a82cbf2342c858eeac00000000'))
# satoshi's pubkey
digest = x('0496b538e853519c726a2c91e61ec11600ae1390813a627c66fb8be7947be63c52da7589379515d4e0a604f8141781e62294721166bf621e73a82cbf2342c858ee')
root_stamp = make_timestamp_from_block(digest, block, 0)
(msg, attestation) = tuple(root_stamp.all_attestations())[0]
self.assertEqual(msg, lx('0e3e2357e806b6cdb1f70b54c3a3a17b6714ee1f0e68bebb44a74b1efd512098')) # merkleroot
self.assertEqual(attestation.height, 0)
# block #586, first block with 3 txs
block = CBlock.deserialize(x('0100000038babc9586a5fcd60713573494f4377e7c401c33aa24729a4f6cff46000000004d5969c0d10dcce60868fee4d4de80ba5ef38abaeed8a75daa63e48c963d7b1950476f49ffff001d2d9791370301000000010000000000000000000000000000000000000000000000000000000000000000ffffffff0804ffff001d025d06ffffffff0100f2052a0100000043410410daf049ef402de0b6adba8b0f7c392bcf9a6385116efc8b4143b8b7a7841e7de73b478ffe13b60c50ea01e24b4b48c24f5e0fbc5d6c8433c7ca7c3ed3ab8173ac0000000001000000050f40f5e65e115eb4bdb3007f0fb8beaa404cf7ae45de16074e8acc9b69bbf0c3000000004847304402201092da40af6dea8abcbeefb8586335b26d39d36be9b6c38d6c9cc18f20dd5886022045964de79a9008f68d53fc9bc58f9e30b224a1b98dbfda5c7b7b860f32c6aef101ffffffff1bb875b247332e558731c2c510f611d3dde991ea9fe69365bf445a0ccd513b190000000049483045022100b0a1d0a00251c56809a5ab5d7ba6cbe68b82c9bf4f806ee39c568ae537572c840220781ce69017ec3b2d6f96ffff4d19c80c224f40c73b8c26cba4b30e7f4171579b01ffffffff2099e1a92d94c35f0645683257c4c255165385f3e9129a85fed5a3f3d867c9b60000000049483045022100c8e980f43c616232e2d59dce08a5edb84aaa0915ea49780a8af367330216084a02203cc2628f16f995c7aaf6104cba64971963a4e084e4fbd0b6bcf825b47a09f8e301ffffffff5fb770c4de700aca7f74f5e6295f248edafa9423e446d76f4650df9b90f939a700000000494830450220745a8d99c51f98f5c93b8d2f5f14a1f2d8cc42ff7329645681bcafe846cbf50d022100b24e31186129f3ae6cc8a226d1eda389373652a9cf2095631fcc4345067c1ff301ffffffff968d4c096ee861307935d21d797a902b647dc970d3c8374cc13551f8397abbd80000000049483045022100ca65b3f290724d6c56fc333570fa342f2477f34b2a6c93c2e2d7216d9fe9088e022077e259a29ed1f988fab2b9f2ce17a4a56a20c188cadc72bca94e06a73826966501ffffffff0100ba1dd20500000043410497304efd3ab14d0dcbf1e901045a25f4b5dbaf576d074506fd8ded4122ba6f6bec0ed4698ce0e7928c0eaf9ddfb5387929b5d697e82e7aabebe04c10e5c87164ac0000000001000000010d26ba57ff82fefcb43826b45019043e2b6ef9aa8118b7f743167584a7f9cae70000000049483045022024fd7345df2b2bd0e6f8416529046b7d52bda5ffdb70146bc6d72b1ba73cabcd022100ff99c03006cc8f28d92e686f0ae640d20395177f329d0a9dbd560fd2a55aeee701ffffffff0100f2052a01000000434104888d890e1bd84c9e2ac363a9774414a081eb805cd2c0d52e49efc7170ebf342f1cdb284a2e2eb754fc8dd4525fe0caa3d3a525214d0b504dd75376b2f63804a8ac00000000'))
# one of the txids spent
digest = lx('c3f0bb699bcc8a4e0716de45aef74c40aabeb80f7f00b3bdb45e115ee6f5400f')
root_stamp = make_timestamp_from_block(digest, block, 586)
(msg, attestation) = tuple(root_stamp.all_attestations())[0]
self.assertEqual(msg, lx('197b3d968ce463aa5da7d8eeba8af35eba80ded4e4fe6808e6cc0dd1c069594d')) # merkleroot
self.assertEqual(attestation.height, 586)
# Check behavior when the digest is not found
root_stamp = make_timestamp_from_block(b'not in the block', block, 586)
self.assertEqual(root_stamp, None)
# Check that size limit is respected
root_stamp = make_timestamp_from_block(digest, block, 586, max_tx_size=1)
self.assertEqual(root_stamp, None)
def test_segwit(self):
# regtest block, with a segwit tx
block = CBlock.deserialize(x('0000002060f13fc5bbde8de36f8896a70071d53494f14bcda132968205db08be651f4402fba57b59d485e5446f674cb42917d85f370ee60bbe8a4149ca9d2d236bb2a5286645835affff7f200000000002020000000001010000000000000000000000000000000000000000000000000000000000000000ffffffff050281000101ffffffff02b8f2052a01000000232103c94c88a631f2286bf1404f550742cec64df1701e8374a17426d8375f6fbbc188ac0000000000000000266a24aa21a9edb6e18554334fb03b3ca9e17ef71614853731fffee928f9f63be4de7052b9b1c3012000000000000000000000000000000000000000000000000000000000000000000000000001000000000101bb7e38e3e5bf1ef124e96342cfdd6e4ac8e155a19d845bca68af1c2db420e3a5000000001716001457c8e57a3bdfd4e586fcbecabd958e5f7d5bae49fdffffff0290b1a43e1e00000017a914f55b72549c205fdf490ce331ac3f95ad4f7b2a24870000000000000000226a208e16cdc4bc8b6aae0a217d30662bf5bb7b732b0751746f587b411fecbc41574e02483045022100e6cb9807f0125e6c9bda818b280103177ad4956d8efd5d689d005d95e8d096bf0220138fce520f4cfbb255e0f7c56183002a1f489bbed7eebfa75bf7e9995533d3dc012103e7c5963292645605207996004489e09edb79edfd6b7d7e45562acb064ae9628380000000'))
# the 32-byte digest committed to in the second tx's OP_RETURN output
digest = x('8e16cdc4bc8b6aae0a217d30662bf5bb7b732b0751746f587b411fecbc41574e')
root_stamp = make_timestamp_from_block(digest, block, 129)
(msg, attestation) = tuple(root_stamp.all_attestations())[0]
self.assertEqual(msg, lx('28a5b26b232d9dca49418abe0be60e375fd81729b44c676f44e585d4597ba5fb')) # merkleroot
self.assertEqual(attestation.height, 129)
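The merkle roots asserted above are computed with Bitcoin's double-SHA256 over pairs of txids. For intuition, the primitive looks like this (`dsha256` is my own helper, not python-bitcoinlib's API):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    # Bitcoin runs SHA-256 twice over everything it hashes.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# In a single-transaction block such as the genesis block, the merkle
# root is just the lone txid, i.e. dsha256 of the serialized tx; with
# more transactions, txids are paired and dsha256'd up to the root.
assert len(dsha256(b'')) == 32
```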
opentimestamps/tests/test_calendar.py:
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
import unittest
from opentimestamps.calendar import *
class Test_UrlWhitelist(unittest.TestCase):
def test_empty(self):
"""Empty whitelist"""
wl = UrlWhitelist()
self.assertNotIn('', wl)
self.assertNotIn('http://example.com', wl)
def test_exact_match(self):
"""Exact match"""
wl = UrlWhitelist(("https://example.com",))
self.assertIn("https://example.com", wl)
self.assertNotIn("http://example.com", wl)
self.assertNotIn("http://example.org", wl)
# I'm happy for this to be strict; a trailing slash is valid in theory,
# but accepting it would introduce two ways of writing the same entry
self.assertNotIn("https://example.com/", wl)
def test_add_scheme(self):
"""URL scheme added automatically"""
wl = UrlWhitelist(("example.com",))
self.assertIn("https://example.com", wl)
self.assertIn("http://example.com", wl)
def test_glob_match(self):
"""Glob matching"""
wl = UrlWhitelist(("*.example.com",))
self.assertIn("https://foo.example.com", wl)
self.assertIn("http://bar.example.com", wl)
self.assertIn("http://foo.bar.example.com", wl)
self.assertNotIn("http://barexample.com", wl)
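The whitelist behaviour tested above is ordinary glob matching on the host name; the same assertions hold for the stdlib's fnmatch (a rough analogy only, since UrlWhitelist also normalises schemes):

```python
from fnmatch import fnmatch

pattern = "*.example.com"
assert fnmatch("foo.example.com", pattern)
assert fnmatch("foo.bar.example.com", pattern)  # globs cross dot boundaries
assert not fnmatch("barexample.com", pattern)   # the literal dot must be present
```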
opentimestamps/timestamp.py:
# Copyright (C) 2016 The OpenTimestamps developers
#
# This file is part of python-opentimestamps.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-opentimestamps including this file, may be copied,
# modified, propagated, or distributed except according to the terms contained
# in the LICENSE file.
"""Convenience functions for creating timestamps"""
import os
from opentimestamps.core.op import OpAppend, OpSHA256
def nonce_timestamp(private_timestamp, crypt_op=OpSHA256(), length=16):
"""Create a nonced version of a timestamp for privacy"""
stamp2 = private_timestamp.ops.add(OpAppend(os.urandom(length)))
return stamp2.ops.add(crypt_op)
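nonce_timestamp hides the original message by appending random bytes before hashing. Stripped of the Timestamp machinery, the commitment it produces looks like this (a plain-Python sketch; the message value is made up for illustration):

```python
import hashlib
import os

private_msg = b'digest of some private file'  # hypothetical input
nonce = os.urandom(16)                        # same default length as nonce_timestamp
public_msg = hashlib.sha256(private_msg + nonce).digest()

# public_msg can be submitted to a remote calendar server: without the
# nonce, the calendar cannot brute-force its way back to private_msg.
assert len(public_msg) == 32
```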
release-notes.md:
# python-opentimestamps release notes
## v0.4.2
* Latest python-bitcoinlib marked as compatible; no other changes.
## v0.4.1
* Latest python-bitcoinlib marked as compatible; no other changes.
## v0.4.0
* Breaking change: Timestamp equality comparison now also checks attestations,
not just operations.
* Fixed issues with timestamp less than/greater than comparisons, (e.g. `ts1 < ts2`)
* Fixed `str_tree()` crash
## v0.3.0
* New calendar server! Thanks to Vincent Cloutier from Catallaxy.
* URL handling in calendar code now handles trailing slashes.
* New attestation: `LitecoinBlockHeaderAttestation`.
## v0.2.1
Fixed `make_timestamp_from_block()` w/ blocks containing segwit transactions.
## v0.2.0.1
Actually get that right...
## v0.2.0
`python-bitcoinlib` version required bumped to 0.9.0 for segwit compatibility.
## v0.1.0
Initial release.
requirements.txt:
python-bitcoinlib>=0.9.0,<0.12.0
GitPython>=2.0.8
pysha3>=1.0.2
setup.py:
# Always prefer setuptools over distutils
from setuptools import setup, find_packages
# To use a consistent encoding
from codecs import open
from os import path
from opentimestamps import __version__
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.md'), encoding='utf-8') as f:
long_description = f.read()
setup(
name='opentimestamps',
# Versions should comply with PEP440. For a discussion on single-sourcing
# the version across setup.py and the project code, see
# https://packaging.python.org/en/latest/single_source_version.html
version=__version__,
description='Create and verify OpenTimestamps proofs',
long_description=long_description,
long_description_content_type='text/markdown',
# The project's main homepage.
url='https://github.com/opentimestamps/python-opentimestamps',
# Author details
author='Peter Todd',
author_email='pete@petertodd.org',
# Choose your license
license='LGPL3',
# See https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
'Development Status :: 4 - Beta',
# Indicate who your project is intended for
'Intended Audience :: Developers',
'Topic :: Security :: Cryptography',
# Pick your license as you wish (should match "license" above)
'License :: OSI Approved :: GNU Lesser General Public License v3 or later (LGPLv3+)',
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
'Programming Language :: Python :: 3 :: Only',
],
# What does your project relate to?
keywords='cryptography timestamping bitcoin',
# You can just specify the packages manually here if your project is
# simple. Or you can use find_packages().
packages=find_packages(exclude=['contrib', 'docs', 'tests']),
# Alternatively, if you want to distribute just a my_module.py, uncomment
# this:
# py_modules=["my_module"],
# List run-time dependencies here. These will be installed by pip when
# your project is installed. For an analysis of "install_requires" vs pip's
# requirements files see:
# https://packaging.python.org/en/latest/requirements.html
install_requires=['python-bitcoinlib>=0.9.0,<0.12.0',
'pysha3>=1.0.2'],
# List additional groups of dependencies here (e.g. development
# dependencies). You can install these using the following syntax,
# for example:
# $ pip install -e .[dev,test]
extras_require={},
# If there are data files included in your packages that need to be
# installed, specify them here. If using Python 2.6 or less, then these
# have to be included in MANIFEST.in as well.
package_data={},
# Although 'package_data' is the preferred approach, in some case you may
# need to place data files outside of your packages. See:
# http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files # noqa
# In this case, 'data_file' will be installed into '/my_data'
data_files=[],
# To provide executable scripts, use entry points in preference to the
# "scripts" keyword. Entry points provide cross-platform support and allow
# pip to create the appropriate form of executable for the target platform.
entry_points={},
)