libarchive-c-2.9/.gitattributes
version.py export-subst
libarchive-c-2.9/.gitignore
*.egg-info/
/build/
/dist/
/env/
/htmlcov/
.coverage
*.pyc
.tox/
libarchive-c-2.9/.travis.yml
language: python
matrix:
include:
- python: 3.5
env: TOXENV=py27,py35
- python: 3.6
env: TOXENV=py36
- python: 3.7
env: TOXENV=py37
- python: 3.8-dev
env: TOXENV=py38
branches:
only:
- master
cache:
directories:
- /opt/python-libarchive-c
env:
global:
- LIBARCHIVE=/opt/python-libarchive-c/lib/libarchive.so
before_install:
- sudo apt-get install -y zlib1g-dev liblzma-dev libbz2-dev libxml2-dev nettle-dev libattr1-dev libacl1-dev
- "if [ ! -e $LIBARCHIVE ]; then
wget http://libarchive.org/downloads/libarchive-3.3.2.tar.gz &&
tar -xf libarchive-3.3.2.tar.gz && cd libarchive-3.3.2 &&
./configure --prefix=/opt/python-libarchive-c --disable-bsdcpio --disable-bsdtar &&
make && sudo make install && cd .. ;
fi"
install: pip install tox
script: tox
notifications:
email: false
sudo: required
dist: xenial
libarchive-c-2.9/LICENSE.md
https://creativecommons.org/publicdomain/zero/1.0/
libarchive-c-2.9/MANIFEST.in
include version.py
libarchive-c-2.9/Makefile
clean:
rm -rf build dist *.egg-info
find . -name \*.pyc -delete
chmod:
git ls-files -z | xargs -0 chmod u=rwX,g=rX,o=rX
dist: chmod
python3 setup.py sdist bdist_wheel
python2 setup.py sdist
upload: chmod
python3 setup.py sdist bdist_wheel upload
libarchive-c-2.9/PKG-INFO
Metadata-Version: 2.1
Name: libarchive-c
Version: 2.9
Summary: Python interface to libarchive
Home-page: https://github.com/Changaco/python-libarchive-c
Author: Changaco
Author-email: changaco@changaco.oy.lc
License: CC0
Description: .. image:: https://travis-ci.org/Changaco/python-libarchive-c.svg
:target: https://travis-ci.org/Changaco/python-libarchive-c
A Python interface to libarchive. It uses the standard ctypes_ module to
dynamically load and access the C library.
.. _ctypes: https://docs.python.org/3/library/ctypes.html
Installation
============
pip install libarchive-c
Compatibility
=============
python
------
python-libarchive-c is currently tested with python 2.7, 3.4, 3.5, and 3.6.
If you find an incompatibility with older versions you can send us a small patch,
but we won't accept big changes.
libarchive
----------
python-libarchive-c may not work properly with obsolete versions of libarchive such as the ones included in MacOS. In that case you can install a recent version of libarchive (e.g. with ``brew install libarchive`` on MacOS) and use the ``LIBARCHIVE`` environment variable to point python-libarchive-c to it::
export LIBARCHIVE=/usr/local/Cellar/libarchive/3.3.3/lib/libarchive.13.dylib
Usage
=====
Import::
import libarchive
To extract an archive to the current directory::
libarchive.extract_file('test.zip')
``extract_memory`` extracts from a buffer instead, and ``extract_fd`` extracts
from a file descriptor.
To read an archive::
with libarchive.file_reader('test.7z') as archive:
for entry in archive:
for block in entry.get_blocks():
...
``memory_reader`` reads from a memory buffer instead, and ``fd_reader`` reads
from a file descriptor.
To create an archive::
with libarchive.file_writer('test.tar.gz', 'ustar', 'gzip') as archive:
archive.add_files('libarchive/', 'README.rst')
``memory_writer`` writes to a memory buffer instead, ``fd_writer`` writes to a
file descriptor, and ``custom_writer`` sends the data to a callback function.
You can also find more thorough examples in the ``tests/`` directory.
License
=======
        `CC0 Public Domain Dedication <https://creativecommons.org/publicdomain/zero/1.0/>`_
Keywords: archive libarchive 7z tar bz2 zip gz
Platform: UNKNOWN
Description-Content-Type: text/x-rst
libarchive-c-2.9/README.rst
.. image:: https://travis-ci.org/Changaco/python-libarchive-c.svg
:target: https://travis-ci.org/Changaco/python-libarchive-c
A Python interface to libarchive. It uses the standard ctypes_ module to
dynamically load and access the C library.
.. _ctypes: https://docs.python.org/3/library/ctypes.html
Installation
============
pip install libarchive-c
Compatibility
=============
python
------
python-libarchive-c is currently tested with python 2.7, 3.4, 3.5, and 3.6.
If you find an incompatibility with older versions you can send us a small patch,
but we won't accept big changes.
libarchive
----------
python-libarchive-c may not work properly with obsolete versions of libarchive such as the ones included in MacOS. In that case you can install a recent version of libarchive (e.g. with ``brew install libarchive`` on MacOS) and use the ``LIBARCHIVE`` environment variable to point python-libarchive-c to it::
export LIBARCHIVE=/usr/local/Cellar/libarchive/3.3.3/lib/libarchive.13.dylib
Usage
=====
Import::
import libarchive
To extract an archive to the current directory::
libarchive.extract_file('test.zip')
``extract_memory`` extracts from a buffer instead, and ``extract_fd`` extracts
from a file descriptor.
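For instance, a minimal sketch of extracting from a buffer held in memory (``test.zip`` is read into memory here only to keep the example self-contained)::

    import libarchive

    with open('test.zip', 'rb') as f:
        data = f.read()
    libarchive.extract_memory(data)  # same effect as extract_file('test.zip')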
To read an archive::
with libarchive.file_reader('test.7z') as archive:
for entry in archive:
for block in entry.get_blocks():
...
``memory_reader`` reads from a memory buffer instead, and ``fd_reader`` reads
from a file descriptor.
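A sketch of the file-descriptor variant, reusing ``test.7z`` from the example above::

    import os
    import libarchive

    fd = os.open('test.7z', os.O_RDONLY)
    try:
        with libarchive.fd_reader(fd) as archive:
            for entry in archive:
                print(entry.pathname, entry.size)
    finally:
        os.close(fd)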
To create an archive::
with libarchive.file_writer('test.tar.gz', 'ustar', 'gzip') as archive:
archive.add_files('libarchive/', 'README.rst')
``memory_writer`` writes to a memory buffer instead, ``fd_writer`` writes to a
file descriptor, and ``custom_writer`` sends the data to a callback function.
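As an illustrative sketch, the callback passed to ``custom_writer`` receives each block of archive data as ``bytes`` and returns the number of bytes it consumed::

    import libarchive

    chunks = []

    def write_cb(data):
        chunks.append(data)
        return len(data)

    with libarchive.custom_writer(write_cb, 'ustar', 'gzip') as archive:
        archive.add_files('README.rst')

    payload = b''.join(chunks)  # the complete gzip-compressed tar stream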
You can also find more thorough examples in the ``tests/`` directory.
License
=======
`CC0 Public Domain Dedication <https://creativecommons.org/publicdomain/zero/1.0/>`_
libarchive-c-2.9/libarchive/__init__.py
from .entry import ArchiveEntry
from .exception import ArchiveError
from .extract import extract_fd, extract_file, extract_memory
from .read import (
custom_reader, fd_reader, file_reader, memory_reader, stream_reader
)
from .write import custom_writer, fd_writer, file_writer, memory_writer
__all__ = [x.__name__ for x in (
ArchiveEntry,
ArchiveError,
extract_fd, extract_file, extract_memory,
custom_reader, fd_reader, file_reader, memory_reader, stream_reader,
custom_writer, fd_writer, file_writer, memory_writer
)]
libarchive-c-2.9/libarchive/entry.py
from __future__ import division, print_function, unicode_literals
from contextlib import contextmanager
from ctypes import c_char_p, create_string_buffer
from . import ffi
@contextmanager
def new_archive_entry():
entry_p = ffi.entry_new()
try:
yield entry_p
finally:
ffi.entry_free(entry_p)
def format_time(seconds, nanos):
""" return float of seconds.nanos when nanos set, or seconds when not """
if nanos:
return float(seconds) + float(nanos) / 1000000000.0
return int(seconds)
class ArchiveEntry(object):
__slots__ = ('_archive_p', '_entry_p')
def __init__(self, archive_p, entry_p):
self._archive_p = archive_p
self._entry_p = entry_p
def __str__(self):
return self.pathname
@property
def filetype(self):
return ffi.entry_filetype(self._entry_p)
@property
def uid(self):
return ffi.entry_uid(self._entry_p)
@property
def gid(self):
return ffi.entry_gid(self._entry_p)
def get_blocks(self, block_size=ffi.page_size):
archive_p = self._archive_p
buf = create_string_buffer(block_size)
read = ffi.read_data
while 1:
r = read(archive_p, buf, block_size)
if r == 0:
break
yield buf.raw[0:r]
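    # Most of the is* properties below test the entry's type by masking
    # `filetype` with S_IFMT (0o170000) and comparing it to the standard
    # S_IF* values, mirroring the stat(2) S_IS* macros.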
@property
def isblk(self):
return self.filetype & 0o170000 == 0o060000
@property
def ischr(self):
return self.filetype & 0o170000 == 0o020000
@property
def isdir(self):
return self.filetype & 0o170000 == 0o040000
@property
def isfifo(self):
return self.filetype & 0o170000 == 0o010000
@property
def islnk(self):
return bool(ffi.entry_hardlink_w(self._entry_p) or
ffi.entry_hardlink(self._entry_p))
@property
def issym(self):
return self.filetype & 0o170000 == 0o120000
def _linkpath(self):
return (ffi.entry_symlink_w(self._entry_p) or
ffi.entry_hardlink_w(self._entry_p) or
ffi.entry_symlink(self._entry_p) or
ffi.entry_hardlink(self._entry_p))
# aliases to get the same api as tarfile
linkpath = property(_linkpath)
linkname = property(_linkpath)
@property
def isreg(self):
return self.filetype & 0o170000 == 0o100000
@property
def isfile(self):
return self.isreg
@property
def issock(self):
return self.filetype & 0o170000 == 0o140000
@property
def isdev(self):
return self.ischr or self.isblk or self.isfifo or self.issock
@property
def atime(self):
sec_val = ffi.entry_atime(self._entry_p)
nsec_val = ffi.entry_atime_nsec(self._entry_p)
return format_time(sec_val, nsec_val)
def set_atime(self, timestamp_sec, timestamp_nsec):
return ffi.entry_set_atime(self._entry_p,
timestamp_sec, timestamp_nsec)
@property
def mtime(self):
sec_val = ffi.entry_mtime(self._entry_p)
nsec_val = ffi.entry_mtime_nsec(self._entry_p)
return format_time(sec_val, nsec_val)
def set_mtime(self, timestamp_sec, timestamp_nsec):
return ffi.entry_set_mtime(self._entry_p,
timestamp_sec, timestamp_nsec)
@property
def ctime(self):
sec_val = ffi.entry_ctime(self._entry_p)
nsec_val = ffi.entry_ctime_nsec(self._entry_p)
return format_time(sec_val, nsec_val)
def set_ctime(self, timestamp_sec, timestamp_nsec):
return ffi.entry_set_ctime(self._entry_p,
timestamp_sec, timestamp_nsec)
@property
def birthtime(self):
sec_val = ffi.entry_birthtime(self._entry_p)
nsec_val = ffi.entry_birthtime_nsec(self._entry_p)
return format_time(sec_val, nsec_val)
def set_birthtime(self, timestamp_sec, timestamp_nsec):
return ffi.entry_set_birthtime(self._entry_p,
timestamp_sec, timestamp_nsec)
def _getpathname(self):
return (ffi.entry_pathname_w(self._entry_p) or
ffi.entry_pathname(self._entry_p))
def _setpathname(self, value):
if not isinstance(value, bytes):
value = value.encode('utf8')
ffi.entry_update_pathname_utf8(self._entry_p, c_char_p(value))
pathname = property(_getpathname, _setpathname)
# aliases to get the same api as tarfile
path = property(_getpathname, _setpathname)
name = property(_getpathname, _setpathname)
@property
def size(self):
if ffi.entry_size_is_set(self._entry_p):
return ffi.entry_size(self._entry_p)
@property
def mode(self):
return ffi.entry_mode(self._entry_p)
@property
def strmode(self):
# note we strip the mode because archive_entry_strmode
# returns a trailing space: strcpy(bp, "?rwxrwxrwx ");
return ffi.entry_strmode(self._entry_p).strip()
@property
def rdevmajor(self):
return ffi.entry_rdevmajor(self._entry_p)
@property
def rdevminor(self):
return ffi.entry_rdevminor(self._entry_p)
libarchive-c-2.9/libarchive/exception.py
from __future__ import division, print_function, unicode_literals
class ArchiveError(Exception):
def __init__(self, msg, errno=None, retcode=None, archive_p=None):
self.msg = msg
self.errno = errno
self.retcode = retcode
self.archive_p = archive_p
def __str__(self):
p = '%s (errno=%s, retcode=%s, archive_p=%s)'
return p % (self.msg, self.errno, self.retcode, self.archive_p)
libarchive-c-2.9/libarchive/extract.py
from __future__ import division, print_function, unicode_literals
from contextlib import contextmanager
from ctypes import byref, c_longlong, c_size_t, c_void_p
from .ffi import (
write_disk_new, write_disk_set_options, write_free, write_header,
read_data_block, write_data_block, write_finish_entry, ARCHIVE_EOF
)
from .read import fd_reader, file_reader, memory_reader
EXTRACT_OWNER = 0x0001
EXTRACT_PERM = 0x0002
EXTRACT_TIME = 0x0004
EXTRACT_NO_OVERWRITE = 0x0008
EXTRACT_UNLINK = 0x0010
EXTRACT_ACL = 0x0020
EXTRACT_FFLAGS = 0x0040
EXTRACT_XATTR = 0x0080
EXTRACT_SECURE_SYMLINKS = 0x0100
EXTRACT_SECURE_NODOTDOT = 0x0200
EXTRACT_NO_AUTODIR = 0x0400
EXTRACT_NO_OVERWRITE_NEWER = 0x0800
EXTRACT_SPARSE = 0x1000
EXTRACT_MAC_METADATA = 0x2000
EXTRACT_NO_HFS_COMPRESSION = 0x4000
EXTRACT_HFS_COMPRESSION_FORCED = 0x8000
EXTRACT_SECURE_NOABSOLUTEPATHS = 0x10000
EXTRACT_CLEAR_NOCHANGE_FFLAGS = 0x20000
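# The EXTRACT_* values above mirror libarchive's ARCHIVE_EXTRACT_* option
# bits; they can be combined with bitwise OR and passed as the `flags`
# argument of the extract_* functions, e.g. (a sketch):
#     extract_file('test.zip', EXTRACT_TIME | EXTRACT_SECURE_NODOTDOT)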
@contextmanager
def new_archive_write_disk(flags):
archive_p = write_disk_new()
write_disk_set_options(archive_p, flags)
try:
yield archive_p
finally:
write_free(archive_p)
def extract_entries(entries, flags=0):
"""Extracts the given archive entries into the current directory.
"""
buff, size, offset = c_void_p(), c_size_t(), c_longlong()
buff_p, size_p, offset_p = byref(buff), byref(size), byref(offset)
with new_archive_write_disk(flags) as write_p:
for entry in entries:
write_header(write_p, entry._entry_p)
read_p = entry._archive_p
while 1:
r = read_data_block(read_p, buff_p, size_p, offset_p)
if r == ARCHIVE_EOF:
break
write_data_block(write_p, buff, size, offset)
write_finish_entry(write_p)
def extract_fd(fd, flags=0):
"""Extracts an archive from a file descriptor into the current directory.
"""
with fd_reader(fd) as archive:
extract_entries(archive, flags)
def extract_file(filepath, flags=0):
"""Extracts an archive from a file into the current directory."""
with file_reader(filepath) as archive:
extract_entries(archive, flags)
def extract_memory(buffer_, flags=0):
"""Extracts an archive from memory into the current directory."""
with memory_reader(buffer_) as archive:
extract_entries(archive, flags)
libarchive-c-2.9/libarchive/ffi.py
from __future__ import division, print_function, unicode_literals
from ctypes import (
c_char_p, c_int, c_uint, c_long, c_longlong, c_size_t, c_void_p,
c_wchar_p, CFUNCTYPE, POINTER,
)
try:
from ctypes import c_ssize_t
except ImportError:
from ctypes import c_longlong as c_ssize_t
import ctypes
from ctypes.util import find_library
import logging
import mmap
import os
from .exception import ArchiveError
logger = logging.getLogger('libarchive')
page_size = mmap.PAGESIZE
libarchive_path = os.environ.get('LIBARCHIVE') or find_library('archive')
libarchive = ctypes.cdll.LoadLibrary(libarchive_path)
# Constants
ARCHIVE_EOF = 1 # Found end of archive.
ARCHIVE_OK = 0 # Operation was successful.
ARCHIVE_RETRY = -10 # Retry might succeed.
ARCHIVE_WARN = -20 # Partial success.
ARCHIVE_FAILED = -25 # Current operation cannot complete.
ARCHIVE_FATAL = -30 # No more operations are possible.
REGULAR_FILE = 0o100000
DEFAULT_UNIX_PERMISSION = 0o664
# Callback types
WRITE_CALLBACK = CFUNCTYPE(
c_ssize_t, c_void_p, c_void_p, POINTER(c_void_p), c_size_t
)
READ_CALLBACK = CFUNCTYPE(
c_ssize_t, c_void_p, c_void_p, POINTER(c_void_p)
)
OPEN_CALLBACK = CFUNCTYPE(c_int, c_void_p, c_void_p)
CLOSE_CALLBACK = CFUNCTYPE(c_int, c_void_p, c_void_p)
VOID_CB = lambda *_: ARCHIVE_OK
# Type aliases, for readability
c_archive_p = c_void_p
c_archive_entry_p = c_void_p
# Helper functions
def _error_string(archive_p):
msg = error_string(archive_p)
if msg is None:
return
try:
return msg.decode('ascii')
except UnicodeDecodeError:
return msg
def archive_error(archive_p, retcode):
msg = _error_string(archive_p)
raise ArchiveError(msg, errno(archive_p), retcode, archive_p)
def check_null(ret, func, args):
if ret is None:
raise ArchiveError(func.__name__+' returned NULL')
return ret
def check_int(retcode, func, args):
if retcode >= 0:
return retcode
elif retcode == ARCHIVE_WARN:
logger.warning(_error_string(args[0]))
return retcode
else:
raise archive_error(args[0], retcode)
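# `check_null` and `check_int` above are ctypes errcheck hooks: ctypes calls
# them with (result, func, arguments) after every foreign call, so libarchive
# return values are validated in one place and converted into ArchiveError.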
def ffi(name, argtypes, restype, errcheck=None):
f = getattr(libarchive, 'archive_'+name)
f.argtypes = argtypes
f.restype = restype
if errcheck:
f.errcheck = errcheck
    # expose the wrapper as a module-level name, e.g. ffi.read_new
    globals()[name] = f
return f
# FFI declarations
# archive_util
errno = ffi('errno', [c_archive_p], c_int)
error_string = ffi('error_string', [c_archive_p], c_char_p)
# archive_entry
ffi('entry_new', [], c_archive_entry_p, check_null)
ffi('entry_filetype', [c_archive_entry_p], c_int)
ffi('entry_atime', [c_archive_entry_p], c_int)
ffi('entry_birthtime', [c_archive_entry_p], c_int)
ffi('entry_mtime', [c_archive_entry_p], c_int)
ffi('entry_ctime', [c_archive_entry_p], c_int)
ffi('entry_atime_nsec', [c_archive_entry_p], c_long)
ffi('entry_birthtime_nsec', [c_archive_entry_p], c_long)
ffi('entry_mtime_nsec', [c_archive_entry_p], c_long)
ffi('entry_ctime_nsec', [c_archive_entry_p], c_long)
ffi('entry_pathname', [c_archive_entry_p], c_char_p)
ffi('entry_pathname_w', [c_archive_entry_p], c_wchar_p)
ffi('entry_sourcepath', [c_archive_entry_p], c_char_p)
ffi('entry_size', [c_archive_entry_p], c_longlong)
ffi('entry_size_is_set', [c_archive_entry_p], c_int)
ffi('entry_mode', [c_archive_entry_p], c_int)
ffi('entry_strmode', [c_archive_entry_p], c_char_p)
ffi('entry_hardlink', [c_archive_entry_p], c_char_p)
ffi('entry_hardlink_w', [c_archive_entry_p], c_wchar_p)
ffi('entry_symlink', [c_archive_entry_p], c_char_p)
ffi('entry_symlink_w', [c_archive_entry_p], c_wchar_p)
ffi('entry_rdevmajor', [c_archive_entry_p], c_uint)
ffi('entry_rdevminor', [c_archive_entry_p], c_uint)
ffi('entry_uid', [c_archive_entry_p], c_longlong)
ffi('entry_gid', [c_archive_entry_p], c_longlong)
ffi('entry_set_size', [c_archive_entry_p, c_longlong], None)
ffi('entry_set_filetype', [c_archive_entry_p, c_uint], None)
ffi('entry_set_perm', [c_archive_entry_p, c_int], None)
ffi('entry_set_atime', [c_archive_entry_p, c_int, c_long], None)
ffi('entry_set_mtime', [c_archive_entry_p, c_int, c_long], None)
ffi('entry_set_ctime', [c_archive_entry_p, c_int, c_long], None)
ffi('entry_set_birthtime', [c_archive_entry_p, c_int, c_long], None)
ffi('entry_update_pathname_utf8', [c_archive_entry_p, c_char_p], None)
ffi('entry_clear', [c_archive_entry_p], c_archive_entry_p)
ffi('entry_free', [c_archive_entry_p], None)
# archive_read
ffi('read_new', [], c_archive_p, check_null)
READ_FORMATS = set((
'7zip', 'all', 'ar', 'cab', 'cpio', 'empty', 'iso9660', 'lha', 'mtree',
'rar', 'raw', 'tar', 'xar', 'zip', 'warc'
))
for f_name in list(READ_FORMATS):
try:
ffi('read_support_format_'+f_name, [c_archive_p], c_int, check_int)
except AttributeError: # pragma: no cover
logger.info('read format "%s" is not supported' % f_name)
READ_FORMATS.remove(f_name)
READ_FILTERS = set((
'all', 'bzip2', 'compress', 'grzip', 'gzip', 'lrzip', 'lzip', 'lzma',
'lzop', 'none', 'rpm', 'uu', 'xz', 'lz4', 'zstd'
))
for f_name in list(READ_FILTERS):
try:
ffi('read_support_filter_'+f_name, [c_archive_p], c_int, check_int)
except AttributeError: # pragma: no cover
logger.info('read filter "%s" is not supported' % f_name)
READ_FILTERS.remove(f_name)
ffi('read_open',
[c_archive_p, c_void_p, OPEN_CALLBACK, READ_CALLBACK, CLOSE_CALLBACK],
c_int, check_int)
ffi('read_open_fd', [c_archive_p, c_int, c_size_t], c_int, check_int)
ffi('read_open_filename_w', [c_archive_p, c_wchar_p, c_size_t],
c_int, check_int)
ffi('read_open_memory', [c_archive_p, c_void_p, c_size_t], c_int, check_int)
ffi('read_next_header', [c_archive_p, POINTER(c_void_p)], c_int, check_int)
ffi('read_next_header2', [c_archive_p, c_void_p], c_int, check_int)
ffi('read_close', [c_archive_p], c_int, check_int)
ffi('read_free', [c_archive_p], c_int, check_int)
# archive_read_disk
ffi('read_disk_new', [], c_archive_p, check_null)
ffi('read_disk_set_behavior', [c_archive_p, c_int], c_int, check_int)
ffi('read_disk_set_standard_lookup', [c_archive_p], c_int, check_int)
ffi('read_disk_open', [c_archive_p, c_char_p], c_int, check_int)
ffi('read_disk_open_w', [c_archive_p, c_wchar_p], c_int, check_int)
ffi('read_disk_descend', [c_archive_p], c_int, check_int)
# archive_read_data
ffi('read_data_block',
[c_archive_p, POINTER(c_void_p), POINTER(c_size_t), POINTER(c_longlong)],
c_int, check_int)
ffi('read_data', [c_archive_p, c_void_p, c_size_t], c_ssize_t, check_int)
ffi('read_data_skip', [c_archive_p], c_int, check_int)
# archive_write
ffi('write_new', [], c_archive_p, check_null)
ffi('write_set_options', [c_archive_p, c_char_p], c_int, check_int)
ffi('write_disk_new', [], c_archive_p, check_null)
ffi('write_disk_set_options', [c_archive_p, c_int], c_int, check_int)
WRITE_FORMATS = set((
'7zip', 'ar_bsd', 'ar_svr4', 'cpio', 'cpio_newc', 'gnutar', 'iso9660',
'mtree', 'mtree_classic', 'pax', 'pax_restricted', 'shar', 'shar_dump',
'ustar', 'v7tar', 'xar', 'zip', 'warc'
))
for f_name in list(WRITE_FORMATS):
try:
ffi('write_set_format_'+f_name, [c_archive_p], c_int, check_int)
except AttributeError: # pragma: no cover
logger.info('write format "%s" is not supported' % f_name)
WRITE_FORMATS.remove(f_name)
WRITE_FILTERS = set((
'b64encode', 'bzip2', 'compress', 'grzip', 'gzip', 'lrzip', 'lzip', 'lzma',
'lzop', 'uuencode', 'xz', 'lz4', 'zstd'
))
for f_name in list(WRITE_FILTERS):
try:
ffi('write_add_filter_'+f_name, [c_archive_p], c_int, check_int)
except AttributeError: # pragma: no cover
logger.info('write filter "%s" is not supported' % f_name)
WRITE_FILTERS.remove(f_name)
ffi('write_open',
[c_archive_p, c_void_p, OPEN_CALLBACK, WRITE_CALLBACK, CLOSE_CALLBACK],
c_int, check_int)
ffi('write_open_fd', [c_archive_p, c_int], c_int, check_int)
ffi('write_open_filename', [c_archive_p, c_char_p], c_int, check_int)
ffi('write_open_filename_w', [c_archive_p, c_wchar_p], c_int, check_int)
ffi('write_open_memory',
[c_archive_p, c_void_p, c_size_t, POINTER(c_size_t)],
c_int, check_int)
ffi('write_get_bytes_in_last_block', [c_archive_p], c_int, check_int)
ffi('write_get_bytes_per_block', [c_archive_p], c_int, check_int)
ffi('write_set_bytes_in_last_block', [c_archive_p, c_int], c_int, check_int)
ffi('write_set_bytes_per_block', [c_archive_p, c_int], c_int, check_int)
ffi('write_header', [c_archive_p, c_void_p], c_int, check_int)
ffi('write_data', [c_archive_p, c_void_p, c_size_t], c_ssize_t, check_int)
ffi('write_data_block', [c_archive_p, c_void_p, c_size_t, c_longlong],
c_int, check_int)
ffi('write_finish_entry', [c_archive_p], c_int, check_int)
ffi('write_fail', [c_archive_p], c_int, check_int)
ffi('write_close', [c_archive_p], c_int, check_int)
ffi('write_free', [c_archive_p], c_int, check_int)
# library version
ffi('version_number', [], c_int, check_int)
libarchive-c-2.9/libarchive/flags.py
READDISK_RESTORE_ATIME = 0x0001
READDISK_HONOR_NODUMP = 0x0002
READDISK_MAC_COPYFILE = 0x0004
READDISK_NO_TRAVERSE_MOUNTS = 0x0008
READDISK_NO_XATTR = 0x0010
READDISK_NO_ACL = 0x0020
READDISK_NO_FFLAGS = 0x0040
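# These READDISK_* values mirror libarchive's ARCHIVE_READDISK_* behavior
# flags; they can be OR-ed together and passed as the `flags` keyword of
# ArchiveWrite.add_files(), which forwards them to
# archive_read_disk_set_behavior().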
libarchive-c-2.9/libarchive/read.py
from __future__ import division, print_function, unicode_literals
from contextlib import contextmanager
from ctypes import cast, c_void_p, POINTER, create_string_buffer
from os import fstat, stat
from . import ffi
from .ffi import (ARCHIVE_EOF, OPEN_CALLBACK, READ_CALLBACK, CLOSE_CALLBACK,
VOID_CB, page_size)
from .entry import ArchiveEntry, new_archive_entry
class ArchiveRead(object):
def __init__(self, archive_p):
self._pointer = archive_p
def __iter__(self):
"""Iterates through an archive's entries.
"""
archive_p = self._pointer
read_next_header2 = ffi.read_next_header2
with new_archive_entry() as entry_p:
entry = ArchiveEntry(archive_p, entry_p)
while 1:
r = read_next_header2(archive_p, entry_p)
if r == ARCHIVE_EOF:
return
yield entry
@contextmanager
def new_archive_read(format_name='all', filter_name='all'):
"""Creates an archive struct suitable for reading from an archive.
Returns a pointer if successful. Raises ArchiveError on error.
"""
archive_p = ffi.read_new()
getattr(ffi, 'read_support_filter_'+filter_name)(archive_p)
getattr(ffi, 'read_support_format_'+format_name)(archive_p)
try:
yield archive_p
finally:
ffi.read_free(archive_p)
@contextmanager
def custom_reader(
read_func, format_name='all', filter_name='all',
open_func=VOID_CB, close_func=VOID_CB, block_size=page_size,
archive_read_class=ArchiveRead
):
"""Read an archive using a custom function.
"""
open_cb = OPEN_CALLBACK(open_func)
read_cb = READ_CALLBACK(read_func)
close_cb = CLOSE_CALLBACK(close_func)
with new_archive_read(format_name, filter_name) as archive_p:
ffi.read_open(archive_p, None, open_cb, read_cb, close_cb)
yield archive_read_class(archive_p)
@contextmanager
def fd_reader(fd, format_name='all', filter_name='all', block_size=4096):
"""Read an archive from a file descriptor.
"""
with new_archive_read(format_name, filter_name) as archive_p:
try:
block_size = fstat(fd).st_blksize
except (OSError, AttributeError): # pragma: no cover
pass
ffi.read_open_fd(archive_p, fd, block_size)
yield ArchiveRead(archive_p)
@contextmanager
def file_reader(path, format_name='all', filter_name='all', block_size=4096):
"""Read an archive from a file.
"""
with new_archive_read(format_name, filter_name) as archive_p:
try:
block_size = stat(path).st_blksize
except (OSError, AttributeError): # pragma: no cover
pass
ffi.read_open_filename_w(archive_p, path, block_size)
yield ArchiveRead(archive_p)
@contextmanager
def memory_reader(buf, format_name='all', filter_name='all'):
"""Read an archive from memory.
"""
with new_archive_read(format_name, filter_name) as archive_p:
ffi.read_open_memory(archive_p, cast(buf, c_void_p), len(buf))
yield ArchiveRead(archive_p)
@contextmanager
def stream_reader(stream, format_name='all', filter_name='all',
block_size=page_size):
"""Read an archive from a stream.
The `stream` object must support the standard `readinto` method.
"""
buf = create_string_buffer(block_size)
buf_p = cast(buf, c_void_p)
def read_func(archive_p, context, ptrptr):
# readinto the buffer, returns number of bytes read
length = stream.readinto(buf)
# write the address of the buffer into the pointer
ptrptr = cast(ptrptr, POINTER(c_void_p))
ptrptr[0] = buf_p
# tell libarchive how much data was written into the buffer
return length
open_cb = OPEN_CALLBACK(VOID_CB)
read_cb = READ_CALLBACK(read_func)
close_cb = CLOSE_CALLBACK(VOID_CB)
with new_archive_read(format_name, filter_name) as archive_p:
ffi.read_open(archive_p, None, open_cb, read_cb, close_cb)
yield ArchiveRead(archive_p)
libarchive-c-2.9/libarchive/write.py
from __future__ import division, print_function, unicode_literals
from contextlib import contextmanager
from ctypes import byref, cast, c_char, c_size_t, c_void_p, POINTER
from . import ffi
from .entry import ArchiveEntry, new_archive_entry
from .ffi import (
OPEN_CALLBACK, WRITE_CALLBACK, CLOSE_CALLBACK, VOID_CB, REGULAR_FILE,
DEFAULT_UNIX_PERMISSION, ARCHIVE_EOF,
page_size, entry_sourcepath, entry_clear, read_disk_new, read_disk_open_w,
read_next_header2, read_disk_descend, read_free, write_header, write_data,
write_finish_entry, entry_set_size, entry_set_filetype, entry_set_perm,
read_disk_set_behavior
)
@contextmanager
def new_archive_read_disk(path, flags=0, lookup=False):
archive_p = read_disk_new()
read_disk_set_behavior(archive_p, flags)
if lookup:
ffi.read_disk_set_standard_lookup(archive_p)
read_disk_open_w(archive_p, path)
try:
yield archive_p
finally:
read_free(archive_p)
class ArchiveWrite(object):
def __init__(self, archive_p):
self._pointer = archive_p
def add_entries(self, entries):
"""Add the given entries to the archive.
"""
write_p = self._pointer
for entry in entries:
write_header(write_p, entry._entry_p)
for block in entry.get_blocks():
write_data(write_p, block, len(block))
write_finish_entry(write_p)
def add_files(self, *paths, **kw):
"""Read the given paths from disk and add them to the archive.
The keyword arguments (`**kw`) are passed to `new_archive_read_disk`.
"""
write_p = self._pointer
block_size = ffi.write_get_bytes_per_block(write_p)
if block_size <= 0:
block_size = 10240 # pragma: no cover
with new_archive_entry() as entry_p:
entry = ArchiveEntry(None, entry_p)
for path in paths:
with new_archive_read_disk(path, **kw) as read_p:
while 1:
r = read_next_header2(read_p, entry_p)
if r == ARCHIVE_EOF:
break
entry.pathname = entry.pathname.lstrip('/')
read_disk_descend(read_p)
write_header(write_p, entry_p)
if entry.isreg:
with open(entry_sourcepath(entry_p), 'rb') as f:
while 1:
data = f.read(block_size)
if not data:
break
write_data(write_p, data, len(data))
write_finish_entry(write_p)
entry_clear(entry_p)
def add_file_from_memory(
self, entry_path, entry_size, entry_data,
filetype=REGULAR_FILE, permission=DEFAULT_UNIX_PERMISSION,
atime=None, mtime=None, ctime=None, birthtime=None,
):
""""Add file from memory to archive.
        :param entry_path: where the entry should be placed in the archive
:type entry_path: str
:param entry_size: entire size of entry in bytes
:type entry_size: int
:param entry_data: content of entry
:type entry_data: bytes or Iterable[bytes]
        :param filetype: which type of file (regular, symlink, etc.) the
            entry should be created as
:type filetype: octal number
        :param permission: the permission bits the entry should be created with
:type permission: octal number
:param atime: Last access time
:type atime: int seconds or tuple (int seconds, int nanoseconds)
:param mtime: Last modified time
:type mtime: int seconds or tuple (int seconds, int nanoseconds)
:param ctime: Creation time
:type ctime: int seconds or tuple (int seconds, int nanoseconds)
:param birthtime: Birth time (for archive formats that support it)
:type birthtime: int seconds or tuple (int seconds, int nanoseconds)
"""
archive_pointer = self._pointer
if isinstance(entry_data, bytes):
entry_data = (entry_data,)
elif isinstance(entry_data, str):
raise TypeError(
"entry_data: expected bytes, got %r" % type(entry_data)
)
with new_archive_entry() as archive_entry_pointer:
archive_entry = ArchiveEntry(None, archive_entry_pointer)
archive_entry.pathname = entry_path
entry_set_size(archive_entry_pointer, entry_size)
entry_set_filetype(archive_entry_pointer, filetype)
entry_set_perm(archive_entry_pointer, permission)
if atime is not None:
if not isinstance(atime, tuple):
atime = (atime, 0)
archive_entry.set_atime(*atime)
if mtime is not None:
if not isinstance(mtime, tuple):
mtime = (mtime, 0)
archive_entry.set_mtime(*mtime)
if ctime is not None:
if not isinstance(ctime, tuple):
ctime = (ctime, 0)
archive_entry.set_ctime(*ctime)
if birthtime is not None:
if not isinstance(birthtime, tuple):
birthtime = (birthtime, 0)
archive_entry.set_birthtime(*birthtime)
write_header(archive_pointer, archive_entry_pointer)
for chunk in entry_data:
if not chunk:
break
write_data(archive_pointer, chunk, len(chunk))
write_finish_entry(archive_pointer)
entry_clear(archive_entry_pointer)
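    # Example usage (a hypothetical sketch):
    #     with file_writer('out.tar', 'ustar') as archive:
    #         data = b'hello'
    #         archive.add_file_from_memory('hello.txt', len(data), data)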
@contextmanager
def new_archive_write(format_name, filter_name=None, options=''):
archive_p = ffi.write_new()
getattr(ffi, 'write_set_format_'+format_name)(archive_p)
if filter_name:
getattr(ffi, 'write_add_filter_'+filter_name)(archive_p)
if options:
if not isinstance(options, bytes):
options = options.encode('utf-8')
ffi.write_set_options(archive_p, options)
try:
yield archive_p
ffi.write_close(archive_p)
ffi.write_free(archive_p)
except Exception:
ffi.write_fail(archive_p)
ffi.write_free(archive_p)
raise
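# Note: on normal exit new_archive_write closes and frees the archive; if the
# body raises, write_fail() is called first so that freeing the handle does
# not try to finish writing the (possibly broken) archive.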
@contextmanager
def custom_writer(
write_func, format_name, filter_name=None,
open_func=VOID_CB, close_func=VOID_CB, block_size=page_size,
archive_write_class=ArchiveWrite, options=''
):
def write_cb_internal(archive_p, context, buffer_, length):
data = cast(buffer_, POINTER(c_char * length))[0]
return write_func(data)
open_cb = OPEN_CALLBACK(open_func)
write_cb = WRITE_CALLBACK(write_cb_internal)
close_cb = CLOSE_CALLBACK(close_func)
with new_archive_write(format_name, filter_name, options) as archive_p:
ffi.write_set_bytes_in_last_block(archive_p, 1)
ffi.write_set_bytes_per_block(archive_p, block_size)
ffi.write_open(archive_p, None, open_cb, write_cb, close_cb)
yield archive_write_class(archive_p)
@contextmanager
def fd_writer(
fd, format_name, filter_name=None,
archive_write_class=ArchiveWrite, options=''
):
with new_archive_write(format_name, filter_name, options) as archive_p:
ffi.write_open_fd(archive_p, fd)
yield archive_write_class(archive_p)
@contextmanager
def file_writer(
filepath, format_name, filter_name=None,
archive_write_class=ArchiveWrite, options=''
):
with new_archive_write(format_name, filter_name, options) as archive_p:
ffi.write_open_filename_w(archive_p, filepath)
yield archive_write_class(archive_p)
@contextmanager
def memory_writer(
buf, format_name, filter_name=None,
archive_write_class=ArchiveWrite, options=''
):
with new_archive_write(format_name, filter_name, options) as archive_p:
used = byref(c_size_t())
buf_p = cast(buf, c_void_p)
ffi.write_open_memory(archive_p, buf_p, len(buf), used)
yield archive_write_class(archive_p)
libarchive-c-2.9/libarchive_c.egg-info/PKG-INFO
Metadata-Version: 2.1
Name: libarchive-c
Version: 2.9
Summary: Python interface to libarchive
Home-page: https://github.com/Changaco/python-libarchive-c
Author: Changaco
Author-email: changaco@changaco.oy.lc
License: CC0
Description: .. image:: https://travis-ci.org/Changaco/python-libarchive-c.svg
:target: https://travis-ci.org/Changaco/python-libarchive-c
A Python interface to libarchive. It uses the standard ctypes_ module to
dynamically load and access the C library.
.. _ctypes: https://docs.python.org/3/library/ctypes.html
Installation
============
pip install libarchive-c
Compatibility
=============
python
------
python-libarchive-c is currently tested with python 2.7, 3.4, 3.5, and 3.6.
If you find an incompatibility with older versions you can send us a small patch,
but we won't accept big changes.
libarchive
----------
python-libarchive-c may not work properly with obsolete versions of libarchive such as the ones included in MacOS. In that case you can install a recent version of libarchive (e.g. with ``brew install libarchive`` on MacOS) and use the ``LIBARCHIVE`` environment variable to point python-libarchive-c to it::
export LIBARCHIVE=/usr/local/Cellar/libarchive/3.3.3/lib/libarchive.13.dylib
Usage
=====
Import::
import libarchive
To extract an archive to the current directory::
libarchive.extract_file('test.zip')
``extract_memory`` extracts from a buffer instead, and ``extract_fd`` extracts
from a file descriptor.
To read an archive::
with libarchive.file_reader('test.7z') as archive:
for entry in archive:
for block in entry.get_blocks():
...
``memory_reader`` reads from a memory buffer instead, and ``fd_reader`` reads
from a file descriptor.
To create an archive::
with libarchive.file_writer('test.tar.gz', 'ustar', 'gzip') as archive:
archive.add_files('libarchive/', 'README.rst')
``memory_writer`` writes to a memory buffer instead, ``fd_writer`` writes to a
file descriptor, and ``custom_writer`` sends the data to a callback function.
You can also find more thorough examples in the ``tests/`` directory.
License
=======
        `CC0 Public Domain Dedication <https://creativecommons.org/publicdomain/zero/1.0/>`_
Keywords: archive libarchive 7z tar bz2 zip gz
Platform: UNKNOWN
Description-Content-Type: text/x-rst
libarchive-c-2.9/libarchive_c.egg-info/SOURCES.txt
.gitattributes
.gitignore
.travis.yml
LICENSE.md
MANIFEST.in
Makefile
README.rst
setup.cfg
setup.py
tox.ini
version.py
libarchive/__init__.py
libarchive/entry.py
libarchive/exception.py
libarchive/extract.py
libarchive/ffi.py
libarchive/flags.py
libarchive/read.py
libarchive/write.py
libarchive_c.egg-info/PKG-INFO
libarchive_c.egg-info/SOURCES.txt
libarchive_c.egg-info/dependency_links.txt
libarchive_c.egg-info/top_level.txt
tests/__init__.py
tests/surrogateescape.py
tests/test_atime_mtime_ctime.py
tests/test_convert.py
tests/test_entry.py
tests/test_errors.py
tests/test_rwx.py
tests/test_security_flags.py
tests/data/flags.tar
tests/data/special.tar
tests/data/tar_relative.tar
tests/data/testtar.README
tests/data/testtar.tar
tests/data/testtar.tar.json
tests/data/unicode.tar
tests/data/unicode.tar.json
tests/data/unicode.zip
tests/data/unicode.zip.json
tests/data/unicode2.zip
tests/data/unicode2.zip.json
tests/data/프로그램.README
tests/data/프로그램.zip
tests/data/프로그램.zip.json
libarchive-c-2.9/libarchive_c.egg-info/dependency_links.txt
libarchive-c-2.9/libarchive_c.egg-info/top_level.txt
libarchive
libarchive-c-2.9/setup.cfg
[wheel]
universal = 1
[flake8]
exclude = .?*,env*/
ignore = E226,E731,W504
max-line-length = 80
[egg_info]
tag_build =
tag_date = 0
libarchive-c-2.9/setup.py
import os
from os.path import join, dirname
from setuptools import setup, find_packages
from version import get_version
os.umask(0o022)
setup(
name='libarchive-c',
version=get_version(),
description='Python interface to libarchive',
author='Changaco',
author_email='changaco@changaco.oy.lc',
url='https://github.com/Changaco/python-libarchive-c',
license='CC0',
packages=find_packages(exclude=['tests']),
long_description=open(join(dirname(__file__), 'README.rst')).read(),
long_description_content_type='text/x-rst',
keywords='archive libarchive 7z tar bz2 zip gz',
)
libarchive-c-2.9/tests/__init__.py
from __future__ import division, print_function, unicode_literals
from contextlib import closing, contextmanager
from copy import copy
from os import chdir, getcwd, stat, walk
from os.path import abspath, dirname, join
from stat import S_ISREG
import tarfile
try:
from stat import filemode
except ImportError: # Python 2
filemode = tarfile.filemode
from libarchive import file_reader
from . import surrogateescape
data_dir = join(dirname(__file__), 'data')
surrogateescape.register()
def check_archive(archive, tree):
tree2 = copy(tree)
for e in archive:
epath = str(e).rstrip('/')
assert epath in tree2
estat = tree2.pop(epath)
assert e.mtime == int(estat['mtime'])
if not e.isdir:
size = e.size
if size is not None:
assert size == estat['size']
with open(epath, 'rb') as f:
for block in e.get_blocks():
assert f.read(len(block)) == block
leftover = f.read()
assert not leftover
# Check that there are no missing directories or files
assert len(tree2) == 0
def get_entries(location):
"""
Using the archive file at `location`, return an iterable of name->value
    mappings for each libarchive.ArchiveEntry object's essential attributes.
Paths are base64-encoded because JSON is UTF-8 and cannot handle
arbitrary binary pathdata.
"""
with file_reader(location) as arch:
for entry in arch:
# libarchive introduces prefixes such as h prefix for
# hardlinks: tarfile does not, so we ignore the first char
mode = entry.strmode[1:].decode('ascii')
yield {
'path': surrogate_decode(entry.pathname),
'mtime': entry.mtime,
'size': entry.size,
'mode': mode,
'isreg': entry.isreg,
'isdir': entry.isdir,
'islnk': entry.islnk,
'issym': entry.issym,
'linkpath': surrogate_decode(entry.linkpath),
'isblk': entry.isblk,
'ischr': entry.ischr,
'isfifo': entry.isfifo,
'isdev': entry.isdev,
'uid': entry.uid,
'gid': entry.gid
}
def get_tarinfos(location):
"""
Using the tar archive file at `location`, return an iterable of
    name->value mappings for each tarfile.TarInfo object's essential
attributes.
Paths are base64-encoded because JSON is UTF-8 and cannot handle
arbitrary binary pathdata.
"""
with closing(tarfile.open(location)) as tar:
for entry in tar:
path = surrogate_decode(entry.path or '')
if entry.isdir() and not path.endswith('/'):
path += '/'
# libarchive introduces prefixes such as h prefix for
# hardlinks: tarfile does not, so we ignore the first char
mode = filemode(entry.mode)[1:]
yield {
'path': path,
'mtime': entry.mtime,
'size': entry.size,
'mode': mode,
'isreg': entry.isreg(),
'isdir': entry.isdir(),
'islnk': entry.islnk(),
'issym': entry.issym(),
'linkpath': surrogate_decode(entry.linkpath or None),
'isblk': entry.isblk(),
'ischr': entry.ischr(),
'isfifo': entry.isfifo(),
'isdev': entry.isdev(),
'uid': entry.uid,
'gid': entry.gid
}
@contextmanager
def in_dir(dirpath):
prev = abspath(getcwd())
chdir(dirpath)
try:
yield
finally:
chdir(prev)
def stat_dict(path):
keys = set(('uid', 'gid', 'mtime'))
mode, _, _, _, uid, gid, size, _, mtime, _ = stat(path)
if S_ISREG(mode):
keys.add('size')
return {k: v for k, v in locals().items() if k in keys}
def treestat(d, stat_dict=stat_dict):
r = {}
for dirpath, dirnames, filenames in walk(d):
r[dirpath] = stat_dict(dirpath)
for fname in filenames:
fpath = join(dirpath, fname)
r[fpath] = stat_dict(fpath)
return r
def surrogate_decode(o):
if isinstance(o, bytes):
return o.decode('utf8', errors='surrogateescape')
return o
libarchive-c-2.9/tests/data/flags.tar
[binary test archive; its entries are /tmp/python-libarchive-c-test-absolute-file and ../python-libarchive-c-test-dot-dot-file]
libarchive-c-2.9/tests/data/special.tar
[binary test archive; contains the entry 0-REGTYPE (GIF image data); binary contents omitted]